Blog Archives

“Above the Trend Line” – Your Industry Rumor Central for 6/23/2021

Above the Trend Line: your industry rumor central is a recurring feature of insideBIGDATA. In this column, we present a variety of short, time-critical news items grouped by category, such as M&A activity, people movements, funding news, financial results, industry alignments, customer wins, rumors, and the general scuttlebutt floating around the big data, data science and machine learning industries, including behind-the-scenes anecdotes and curious buzz.



Data, Data Everywhere–Are You Capturing The Data Gold From Your Facilities?

In this special guest feature, Michael C. Skurla, Chief Product Officer of Radix IoT, points out that, with previously untapped information now at their fingertips, facility owners and operators are no longer challenged with automating their digitized facilities. They tap into IoT platform solutions that transform valuable big data into actionable insight, gaining a full, in-depth view of all of their buildings’ systems across distributed facilities, beyond just physical building operations.



Infuse intelligent automation at scale with IBM Cloud Pak for Data 4.0

When’s the last time you considered whether you’re operating in a truly predictive enterprise, and whether it’s easy for your data consumers, models and apps to access the right data? More often than not, the answer is a resounding “no.” Between the proliferation of data types and sources and tightening regulations, data is often held captive, sitting in silos. Traditionally, strategies for overcoming this challenge relied on consolidating the physical data into a single location, structure and vendor. While this strategy seemed great in theory, anyone who has undertaken a migration of this magnitude can tell you it’s easier said than done.

Earlier this year at THINK we unveiled our plans for the next generation of IBM Cloud Pak for Data, our alternative to help customers connect the right people to the right data at the right time. Today, I’m excited to share more details on how the latest version of the platform, version 4.0, will bring that vision to life through an intelligent data fabric.

The journey so far

Since the launch of IBM Cloud Pak for Data in 2018, our goal has always been to help customers unlock the value of their data and infuse AI throughout their business. Understanding the needs of our clients, we doubled down on delivering a first-of-its-kind containerized platform that provided flexibility to deploy the unique mix of data and AI services a client needs, in the cloud environment of their choice.

IBM Cloud Pak for Data supports a vibrant ecosystem of proprietary, third party and open source services that we continue to expand on with each release. With version 4.0 we take our efforts to the next level. New capabilities and intelligent automation help business leaders and users tackle the overwhelming data complexity they face to more easily scale the value of their data.

Weaving the threads of an intelligent data fabric

A data fabric is an architectural pattern that dynamically orchestrates disparate data sources across a hybrid and multicloud landscape to provide business-ready data in support of analytics, AI and applications. The modular and customizable nature of IBM Cloud Pak for Data offers the ideal environment to build a data fabric from best-in-class solutions, tailored to your unique needs. The tight integration of the microservices within the platform allows the management and usage of distributed data to be further streamlined by infusing intelligent automation. With version 4.0 we’re applying this automation in three key areas (a conceptual sketch of the first follows the list):

  1. Data access and usability – AutoSQL is a universal query engine that automates how you access, update and unify data across any source or type (clouds, warehouses, lakes, etc.) without the need for data movement or replication.
  2. Data ingestion and cataloging – AutoCatalog automates the discovery and classification of data to streamline the creation of a real-time catalog of data assets and their relationships across disparate data landscapes.
  3. Data privacy and security – AutoPrivacy uses AI to intelligently automate the identification, monitoring and enforcement of sensitive data across the organization to help minimize risk and ensure compliance.
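
To make the first of these concrete: a universal query engine resolves one query against many live sources instead of against a consolidated copy. Below is a minimal Python sketch of that pattern; the catalog and connectors are hypothetical stand-ins meant to illustrate the concept, not the AutoSQL API.

```python
import pandas as pd

# Hypothetical connectors: each resolves a logical table to a live source
# (warehouse, lake, SaaS API) without copying data into a central store.
def read_warehouse_orders():
    return pd.DataFrame({"customer_id": [1, 2], "total": [250.0, 99.0]})

def read_lake_customers():
    return pd.DataFrame({"customer_id": [1, 2], "segment": ["retail", "smb"]})

CATALOG = {
    "orders": read_warehouse_orders,    # e.g. lives in a cloud warehouse
    "customers": read_lake_customers,   # e.g. lives in a data lake
}

def query(left, right, on):
    """Resolve both tables through the catalog and join them at query time."""
    return CATALOG[left]().merge(CATALOG[right](), on=on)

print(query("orders", "customers", on="customer_id"))
```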

Register for the webinar to learn more about our intelligent data fabric and how you can take advantage of these new technologies.

Additional enhancements woven into 4.0

Further augmenting the intelligent automation of our data fabric capabilities is another new service coming to IBM Cloud Pak for Data: IBM Match 360 with Watson. Match 360 provides a machine learning-based, easy-to-use experience for self-service entity resolution. Non-developers can now match and link data from across their organization, helping to improve overall data quality.
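
For intuition, entity resolution comes down to scoring how likely two records are to describe the same real-world entity and linking the best matches. Here is a toy sketch of that idea using only the Python standard library; the records and threshold are invented, and this is not how Match 360 works internally.

```python
from difflib import SequenceMatcher

# Toy customer records from two hypothetical systems.
crm = [{"id": "c1", "name": "Jon A. Smith"}, {"id": "c2", "name": "Ana Gomez"}]
billing = [{"id": "b7", "name": "John Smith"}, {"id": "b9", "name": "Anna Gomes"}]

def similarity(a, b):
    """Crude string similarity in [0, 1]; real systems use many more signals."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Link each CRM record to its best billing match above a confidence threshold.
for rec in crm:
    best = max(billing, key=lambda b: similarity(rec["name"], b["name"]))
    score = similarity(rec["name"], best["name"])
    if score > 0.7:
        print(f'{rec["id"]} <-> {best["id"]} (score {score:.2f})')
```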

IBM SPSS Modeler, IBM Decision Optimization and Hadoop Execution Engine services are also included as part of IBM Cloud Pak for Data 4.0. These capabilities complement the IBM Watson Studio services already within the base and enable users such as business analysts and citizen data scientists to participate in building AI solutions.

AutoAI is enhanced to support relational data sources and generate exportable Python code, enabling data scientists to review and update models generated through AutoAI. This is a significant differentiator from competitors’ AutoML capabilities, where the generated model is more of a black box.
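
Because the export is ordinary Python, every step of the generated pipeline can be read, rerun and edited. The real AutoAI output differs, but here is a hedged sketch of the general shape such an exported pipeline takes, using scikit-learn on synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Because the pipeline is plain code, any step can be reviewed or swapped out.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipeline.fit(X_train, y_train)
print(f"holdout accuracy: {pipeline.score(X_test, y_test):.3f}")
```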

Complementary capabilities are also releasing on IBM Cloud Pak for Data as a Service, including IBM DataStage and IBM Data Virtualization. Now available fully managed, DataStage helps teams build modern data integration pipelines, while the Data Virtualization capability helps share data across the organization in near real time, connecting governed data to your AI and ML tools.

Finally, IBM Cloud Pak for Data 4.0 includes several platform enhancements, the most notable of which is the addition of Red Hat OpenShift Operators. These help automate the provisioning, scaling, patching and upgrading of IBM Cloud Pak for Data. First-time installs are significantly simplified, decreasing the cost of implementation, while seamless upgrades reduce the upgrade process from weeks to hours. Also beginning in 4.0, IBM Cloud Pak for Data is built on a common IBM Cloud Pak platform, enabling standardized Identity and Access Management and seamless navigation across all of the IBM Cloud Paks.

Data is a huge competitive advantage for companies and, when combined with AI, has the power to drive business transformation. The latest version of IBM Cloud Pak for Data enables just that, but 10x faster.

Learn more about the latest version of IBM Cloud Pak for Data by signing up for the Data Fabric Deep Dive webinar or by registering for a free trial.

The post Infuse intelligent automation at scale with IBM Cloud Pak for Data 4.0 appeared first on Journey to AI Blog.



The 3 Reasons Enterprises Need an AI Operating System for Intelligent Process Automation

This new whitepaper, “The 3 Reasons Enterprises Need an AI Operating System for Intelligent Process Automation,” from Veritone highlights how evolving technology meets enterprise demand for agile, intelligence-based solutions in the shape of AI-based operating systems (OS) across three areas: (i) AI OS for automation of human work; (ii) AI OS for process automation across all data sources; and (iii) AI OS for democratization of AI across the enterprise.



A Laundry List for Cleaning Messy Data and Making It Business Ready

In this special guest feature, Mark Palmer, TIBCO SVP & GM of Analytics, Data Science & Data Virtualization, believes that companies that learn to leverage their data will beat out their competition. Making data-driven decisions for business strategy is essential in today’s tech-centric environment, and anyone who is not taking advantage of the information they’ve gathered will fall behind.



Solidifying Absolute and Relative Data Quality with Master Data Management

In this contributed article, editorial consultant Jelani Harper highlights that contrary to popular belief, data are not the oil, fuel, energy, or life force coursing through the enterprise to inform decision-making, engender insights, and propel timely business action rooted in concrete facts. Data quality is.



How the Shift to Remote Work is Accelerating Speech Recognition

In this contributed article, Ryan Scolnik, VP of Data Science at FortressIQ, discusses the technology’s applications and what the future of speech recognition may hold. The speech recognition market was projected to reach just over $29 billion by 2026, but that figure will likely end up much higher due to the move to remote work driven by the pandemic.



AI Under the Hood: Object Detection Model Capable of Identifying Floating Plastic Beneath the Surface of the Ocean

A group of researchers, Gautam Tata, Sarah-Jeanne Royer, Olivier Poirion, and Jay Lowe, have written a new paper, “DeepPlastic: A Novel Approach to Detecting Epipelagic Bound Plastic Using Deep Visual Models.” The workflow described in the paper includes creating and preprocessing a domain-specific data set, building an object detection model utilizing a deep neural network, and evaluating the model’s performance.
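
As a rough illustration of the workflow’s middle step, the sketch below constructs an object detection network and runs it on a placeholder frame. torchvision’s Faster R-CNN and the two-class setup (background, plastic) are stand-ins for illustration, not the architecture the authors used.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Untrained two-class detector (background + plastic) as a structural stand-in;
# the paper trains its own deep visual model on a domain-specific dataset.
model = fasterrcnn_resnet50_fpn(num_classes=2)
model.eval()

frame = torch.rand(3, 480, 640)  # placeholder for a preprocessed underwater frame
with torch.no_grad():
    detections = model([frame])[0]

# Keep confident detections only; with trained weights these would be
# bounding boxes around epipelagic plastic.
keep = detections["scores"] > 0.5
print(detections["boxes"][keep])
```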



Trustworthy AI helps Regions Bank better serve customers

Financial institutions worldwide are feeling the scrutiny from both customers and regulators alike. Perceptions of an institution’s governance practices, including its commitment to ethics, fairness, explainability and transparency of decisions, are critical to its standing. No wonder those poised to gain a competitive advantage today want to ensure their AI is fair, trustworthy, and explainable.

A member of the S&P 500 Index, Regions Financial Corporation is one of the United States’ largest full-service providers of consumer and commercial banking, wealth management and mortgage products and services. This Birmingham, Alabama-based organization has extended its culture of doing the right thing to both its customer relationships and its approach to AI.

In a recent IBM Data and AI Keynote, Trustworthy AI: Forging the future of banking, insurance and financial markets, Manav Misra, Chief Data and Analytics Officer of Regions Bank, detailed the bank’s stringent efforts to build a trustworthy AI framework – and how they’re paying off.

“Trustworthy, transparent models are critical to our success and really go back to our culture and key tenets — ‘to serve our customers,’” he said.

As banks, insurance companies and other financial institutions look to innovate with AI, the new currency is trust. Although the use of artificial intelligence continues to grow across industries, including financial services, trust is at a premium, and that’s bringing greater scrutiny to AI deployments, according to IBM and Morning Consult’s Global AI Adoption Index 2021. More importantly, the index reveals that 91 percent of businesses using AI say their ability to explain how it arrived at a decision is critical.

Trustworthy AI requires data completeness, accuracy and quality, and the data underlying the models must be representative of the data used to make the decisions. Plus, the models must be “explainable,” meaning their decision-making processes are easily understood. This is especially critical in the highly regulated world of financial services.

Regions wanted to create a trustworthy framework for AI that included ModelOps capabilities and the ability to identify data and model drift. That meant it needed tools and processes to monitor data drift and ways to adapt models if the data started to change. Misra and his team worked with IBM Data and AI Expert Labs and the IBM Data Science and AI Elite team to align tools, methodology and personnel. Part of this effort involved understanding how IBM Cloud Pak® for Data could help them assess data drift, measure model performance, and keep their personnel informed.
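
Conceptually, drift monitoring compares the distribution a model was trained on with what it sees in production. Below is a minimal sketch of one common approach, a two-sample Kolmogorov-Smirnov test on a single feature; this illustrates the idea on synthetic data and is not the mechanism IBM Cloud Pak for Data uses.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # baseline
production_feature = rng.normal(loc=0.4, scale=1.0, size=5000)  # shifted

# A small p-value suggests production data no longer matches the baseline.
stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.2e}) - consider retraining")
```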

Read here about the methodology they used to develop high quality and trusted AI. 

When Misra joined Regions, it was critical to demonstrate the value that data and AI could bring to the business. Rather than starting with a small project, he looked to make the biggest impact quickly.

“I had to make sure that we could show that we could move the needle and deliver large amounts of value to the business,” he said. The first data project Regions built delivered tens of millions of dollars to the business in additional revenue while reducing losses. “I used that as a way to demonstrate to other parts of the business: ‘look, we’ve done this, we can do this for you as well.’”

Soon, there was more demand than Misra’s team could meet. “It was something they signed on to and became big proponents of, so much so that innovating with digital and data is one of three strategic initiatives for the bank right now.”

Misra explained that, to create trust in business decisions driven by artificial intelligence, a variety of stakeholders in the second and third lines of defense provide oversight into the quality of the company’s models. The result has been trusted data products (including those that help reduce fraud for the bank, assist commercial bankers and wealth advisors, and provide insights into consumers) so Regions can better serve customers.

For more insights from Regions Bank, State Bank of India, UBS, CIBC, ING, Rabobank, Citigroup and others, register for the recent Data and AI Virtual Forum: Banking, Insurance and Financial Markets here.

Accelerate your journey to AI by exploring IBM Cloud Pak for Data.

The post Trustworthy AI helps Regions Bank better serve customers appeared first on Journey to AI Blog.



Operationalize AI: You built an AI model, now what?

The Global AI Adoption Index 2021 reports that the top drivers of AI adoption in organizations are: 1. Advances in AI that make it more accessible (46%); 2. Business needs (46%); and 3. Changing business needs due to COVID-19 (44%). To bring AI models into production, businesses are also mitigating the following AI modeling and management issues:

66% – Lack of clarity on provenance of training data

64% – Lack of collaboration across roles involved in AI model development and deployment

63% – Lack of AI policies

63% – Monitoring AI across cloud and AI environments

Given the acceleration of AI adoption and the need to solve AI implementation challenges, AI engineering is rising to the top of the agenda for technology leaders. Software engineering and DevOps leaders can empower developers to become AI experts and play a pivotal role in ModelOps. This blog will discuss the five imperatives in operationalizing AI that can help your teams boost their chances for success while addressing common challenges pre- and post-deployment.

Automate and simplify AI lifecycles

Having built out DevOps practices, many software and technology leaders are adept at optimizing the Software Development Lifecycle (SDLC). More development organizations are expanding the responsibilities of deploying data and AI services as part of the development lifecycle. Advances in automated AI lifecycles can bridge the skills gap, streamline processes across teams and help synchronize cadences between DevOps and ModelOps. By uniting tools, talent and processes, you can build your DevOps practices to be AI-ready and realize returns as you move through Day-2 operations and beyond.
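
One concrete way to synchronize those cadences is to treat model promotion like any other CI gate. The sketch below shows that pattern; the metric names, thresholds and deploy hook are all hypothetical.

```python
# Hypothetical promotion gate, run in a CI pipeline after retraining.
QUALITY_GATES = {"accuracy": 0.85, "auc": 0.80}

def promote_if_healthy(metrics: dict, deploy) -> bool:
    """Deploy only if every tracked metric clears its threshold."""
    failures = [m for m, floor in QUALITY_GATES.items() if metrics.get(m, 0.0) < floor]
    if failures:
        print(f"promotion blocked: {failures} below threshold")
        return False
    deploy()
    return True

promote_if_healthy({"accuracy": 0.91, "auc": 0.83}, deploy=lambda: print("deployed"))
```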

Implement responsible, explainable AI

The disruption caused by COVID-19 and other world events this past year may have pushed consumers past a tipping point: an organization’s stance on sustainability and social responsibility is no longer just one consideration among many but can be the deciding factor in whether to engage with a brand at all, let alone buy from it. Misbehaving models and concerns about AI bias and risk are part of the checklist for go or no-go decisions to implement AI. Further, the evolving nature of AI-related regulations and varying policy responses make responsible, explainable AI implementation one of the top concerns for businesses. IBM donates Trusted AI toolkits to the Linux Foundation AI so that developers and data scientists can access toolkits for adversarial robustness, fairness and explainability, and help build the foundations of trustworthy AI.
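
To give a flavor of what such fairness toolkits measure, disparate impact compares the rate of favorable outcomes across groups. A toy NumPy illustration of the metric, not the toolkits’ actual API:

```python
import numpy as np

# Toy model outputs (1 = favorable outcome) and a protected attribute.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = predictions[group == "a"].mean()
rate_b = predictions[group == "b"].mean()

# Ratio of favorable rates; values far below 1.0 (commonly under 0.8)
# are a signal to investigate the model before deployment.
print(f"disparate impact: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
```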

Support model scalability, resiliency, and governance

As discussed earlier, training data is the number one concern in AI development and deployment, as it can have a substantial impact on model performance. Collecting, organizing and analyzing a sufficient volume of relevant, high-quality data to train models under enterprise constraints can be challenging, especially in distributed, heterogeneous environments. Federated learning enables organizations to achieve better model accuracy by securing model training without having to transfer data to a centralized location, minimizing privacy and compliance risks. A data and AI platform with model transparency and auditability, as well as model governance with access control and security, can seamlessly integrate with DevOps toolchains and frameworks.
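
At its simplest (federated averaging), each site takes a training step on data that never leaves its premises, and only the resulting model parameters are shared and averaged. Below is a toy NumPy sketch of that loop on synthetic data; production federated learning adds secure aggregation and much more.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0, 0.5])

# Two sites, each holding private data that is never transmitted.
sites = []
for _ in range(2):
    X = rng.normal(size=(100, 3))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

def local_update(w, X, y, lr=0.1):
    """One gradient step of linear regression, computed entirely on-site."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w = np.zeros(3)
for _ in range(50):
    # Only updated weights leave each site; the coordinator averages them.
    w = np.mean([local_update(w, X, y) for X, y in sites], axis=0)

print("recovered weights:", np.round(w, 2))  # close to true_w
```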

Run any AI models – language, computer vision and other custom AI models

Successful software development teams are not only integrating off-the-shelf AI services like chatbots but also building custom AI models to drive real business value. For example, development teams can combine deep learning models for speech-to-text, custom machine learning models predicting the next best offers, and decision optimization models for workforce scheduling, all deployed with an app for a better customer experience. Beyond packaged machine learning, businesses can now more easily architect a solution that consists of a diverse set of AI models using language, computer vision and other AI techniques, aided by industry accelerators.
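
Wiring such models together is ultimately a composition problem. Here is a hedged sketch of that pattern, where each function is a hypothetical stand-in for a separately deployed model:

```python
# Hypothetical stand-ins for three deployed models serving one customer flow.
def speech_to_text(audio: bytes) -> str:
    return "i want to change my appointment"          # deep learning model

def next_best_offer(utterance: str) -> str:
    return "reschedule-with-advisor"                  # custom ML model

def schedule_staff(offer: str) -> dict:
    return {"advisor": "A-12", "slot": "Tue 10:00"}   # decision optimization model

def handle_call(audio: bytes) -> dict:
    """Chain the three models into one customer-facing response."""
    text = speech_to_text(audio)
    offer = next_best_offer(text)
    return schedule_staff(offer)

print(handle_call(b"\x00\x01"))
```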

Get more from your application, AI and cloud investments

As a development team, you are familiar with the power of innovation in an open, modern environment. By using a modern data and AI platform you can enjoy the flexibility to run your AI-powered applications across various environments—from edge to hybrid clouds—and rapidly move ideas from development to production. Watson Studio on IBM Cloud Pak for Data with Red Hat OpenShift helps you build and deploy AI-powered apps anywhere while taking advantage of one of the richest open source ecosystems with secure, enterprise-grade Kubernetes orchestration. You can start with one use case and build on your success using the same tools and processes. As you take the next steps in the journey to AI, Watson Studio can be a natural fit for building AI in your development and DevOps practices.


The post Operationalize AI: You built an AI model, now what? appeared first on Journey to AI Blog.


