Blog Archives

Articles via RSS from IBM Big Data Hub

Turning insights into actions with IBM Business Analytics

We are living in the age of the unexpected. The pandemic, regulatory changes, economic questions, and human resource and supply chain challenges are just some of the disruptions that have impacted organizations. Disruptions will continue to surface unexpectedly, leaving broad and lasting impacts on organizations and their ecosystems. The result is an increased pressure to make smart decisions faster and often against a moving target.

Most organizations now understand the value of making decisions based on data insights rather than experience or intuition alone. However, the organizations that navigate the unexpected successfully and win will do more than make data-driven decisions. These organizations will focus on how insights are framed, created, marketed, consumed and stored for reuse.

That’s where business analytics comes in.

What is IBM Business Analytics?

IBM is helping clients successfully navigate the age of the unexpected with IBM Business Analytics, an enterprise-grade, trusted, scalable and integrated analytics solution portfolio. It streamlines and extends enterprise reporting, self-service analysis and planning strategies across the organization to empower teams to better predict and shape future outcomes.

With the new IBM Business Analytics Enterprise (BAE), we are bundling together Planning Analytics with Watson, Cognos Analytics with Watson and the new Analytics Content Hub. This enables a single point of entry for planning, budgeting, forecasting, dashboarding and reporting. Now you aren’t just breaking down departmental and data silos, but analytic silos, too.

The capabilities of bundled business analytics

Planning Analytics with Watson addresses integrated business planning in extended planning and analysis (xP&A) including FP&A, HR, S&OP, Marketing, Project/IT planning and more. It’s the only planning analytics solution on the market that excels in all areas of continuous, integrated, predictive and prescriptive planning.

Next, IBM Cognos Analytics with Watson is a trusted AI co-pilot for business decision-makers who want to improve the impact of their business function by empowering every user to turn data into insights and rapidly make business decisions. IBM is the only partner that can plan at the speed of your business while maintaining the integrity of your environment, increasing accuracy and consistency with AI and prescriptive analytics capabilities you can trust.

And last but certainly not least is the new IBM Analytics Content Hub in Business Analytics Enterprise, which is designed to break down organizational analytic silos and help you deliver all your analytics capabilities to your teams.

The benefits of business analytics

Most recently, review site G2 named Planning Analytics a “Leader” in their Fall 2022 report and Cognos Analytics a “Top 50 Analytics and AI” product for 2022. TrustRadius awarded both Planning Analytics and Cognos Analytics a “Top Rated” designation. Over the last couple years, a range of companies shared their feedback, leading to many of the improvements in the user experience, AI innovations and deployment options available today.

Organizations use analytics and AI to enhance decision-making that drives competitive advantage. Consider food packaging leader Novolex, which had to adapt its planning cycles during the COVID-19 pandemic. As shared in the case study, Violeta Nedelcu, Supply Chain Director at Novolex, states, “Instead of taking weeks, the company can now process data within a few hours, taking two days for analysis, discussion and review, and provide clarity on the available capacity to proceed with new products and to support the current market.” Overall, Novolex saw an 83% reduction in forecasting processing times.

With business analytics, organizations in all industries can experience the power of faster, better planning and analysis with data-driven precision. We aim to continue helping organizations achieve successful implementations across their analytics cycle. As such, we have exciting new updates to our business analytics solution portfolio coming in the next month.

Register today for our Business Analytics launch event on October 25th to hear about the new Business Analytics Enterprise, including new deployment options and capabilities. You don’t want to miss out!




From principles to actions: building a holistic approach to AI governance

Today AI permeates nearly every business function. Whether it be financial services, employee hiring, customer service management or healthcare administration, AI is increasingly powering critical workflows across all industries.

But with greater AI adoption comes greater challenges. In the marketplace we have seen numerous missteps involving inaccurate outcomes, unfair recommendations, and other unwanted consequences. This has created concerns among both private and public organizations as to whether AI is being used responsibly. Add navigating complex compliance regulations and standards to the mix, and the need for a solid and trustworthy AI strategy becomes clear.

To scale use of AI in a responsible manner requires AI governance, the process of defining policies and establishing accountability throughout the AI lifecycle. This in turn requires an AI ethics policy, as only by embedding ethical principles into AI applications and processes can we build systems based on trust.

IBM Research has been developing trustworthy AI tools since 2012. When IBM launched its AI Ethics Board in 2018, AI ethics was not a hot topic in the press, nor was it top-of-mind among business leaders. But as AI has become essential, touching on so many aspects of daily life, the interest in AI ethics has grown exponentially.

In a 2021 study by the IBM Institute for Business Value, nearly 75% of executives ranked AI ethics as important, a jump from less than 50% in 2018. What’s more, the study suggests that organizations that implement a broad AI ethics strategy, interwoven throughout business units, may have a competitive advantage moving forward.

The principles of AI ethics

At IBM we believe building trustworthy AI requires a multidisciplinary, multidimensional approach based on the following three ethical principles:

  1. The purpose of AI is to augment human intelligence, not replace it.
    At IBM, we believe AI should be designed and built to enhance and extend human capability and potential.
  2. Data and insights belong to their creator.
    IBM clients’ data is their data, and their insights are their insights. We believe that data policies should be fair and equitable and prioritize openness.
  3. Technology must be transparent and explainable.
    Companies must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms’ recommendations.

When thinking about what it takes to really earn trust in decisions made by AI, leaders should ask themselves five human-centric questions: Is it easy to understand? Is it fair? Did anyone tamper with it? Is it accountable? Does it safeguard data? These questions translate into five fundamental principles for trustworthy AI: fairness, robustness, privacy, explainability and transparency.

AI governance: From principles to actions

When discussing AI governance, it’s important to be conscious of two distinct aspects coming together:

Organizational AI governance encompasses deciding and driving AI strategy for an organization. This includes establishing AI policies for the organization based on AI principles, regulations and laws.

AI model governance introduces technology to implement guardrails at each stage of the AI/ML lifecycle. This includes data collection, instrumenting processes and transparent reporting to make needed information available for all the stakeholders.

Often, organizations looking for trustworthy solutions in the form of AI governance require guidance on one or both of these fronts.

Scaling trustworthy AI

Recently an American multinational financial institution came to IBM with several challenges, including deploying hundreds of machine learning models built with multiple data science stacks composed of open-source and third-party tools. The chief data officer saw that it was essential for the company to have a holistic framework that would work with models built across the company using all these diverse tools.

In this case IBM Expert Labs collaborated with the financial institution to create a technology-led solution using IBM Cloud Pak for Data. The result was an AI governance hub built at enterprise scale, which allows the CDO to track and govern hundreds of AI models for compliance across the bank, irrespective of the machine learning tools used.

Sometimes an organization’s need is tied more to organizational AI governance. For instance, a multinational healthcare organization wanted to expand an AI model that had been used to infer technical skills so it could also infer soft, foundational skills. The company brought in members of IBM Consulting to train the organization’s team of data scientists on how to use frameworks for systemic empathy, well before code is written, to consider intent and establish guardrails for models.

After the success of this session, the client saw the need for broader AI governance. With help from IBM Consulting, the company established its first AI ethics board, a center of excellence and an AI literacy program.

In many instances, enterprise-level organizations need a hybrid approach to AI governance. Recently a French banking group was faced with new compliance measures. The company did not have enough organizational processes and automated AI model monitoring in place to address AI governance at scale. The team also wanted to establish a culture to responsibly curate AI. They needed both an organizational AI governance and AI model governance solution.

IBM Consulting worked with the client to establish a set of AI principles and an ethics board to address the many upcoming regulations. This effort ran alongside IBM Expert Labs services that implemented the technical solution components, such as an enterprise AI workflow; monitors for bias, performance and drift; and fact sheet generation for the AI models to promote transparency across the broader organization.

Establishing both organizational and AI model governance to operationalize AI ethics requires a holistic approach. IBM offers unique, industry-leading capabilities for your AI governance journey:

  • IBM Expert Labs for a technology solution that provides guardrails across all stages of the AI lifecycle
  • IBM Consulting for a holistic approach to socio-technological challenges




Is your conversational AI setting the right tone?

Conversational AI is too artificial

Nothing is more frustrating than calling a customer support line to be greeted by a monotone, robotic, automated voice. The voice on the other end of the phone is taking painfully long to read you the menu options. You’re two seconds away from either hanging up, screaming “representative” into the phone, or pounding on the zero button until you reach a human agent. That’s the problem with many IVR solutions today. Conversational AI is too artificial. Customers feel they’re not being heard or listened to, so they just want to speak with a human agent.

IBM Watson Expressive Voices 

Luckily, there is a way to fix that problem and make the customer experience more pleasant. With IBM Watson’s newest technology of expressive voices, you will no longer feel like you’re talking to a typical robot; you’ll feel like you’re talking to a live human agent without any of the wait time. These highly natural voices have conversational capabilities like expressive styles, emotions, word emphasis and interjections. Not only do these voices relieve the customer frustration of feeling like they’re talking to a bot, but they also contribute to the goal of call deflection from human agents. It’s a win-win for customers and businesses.

Best suited for the customer care domain, the voices will have a conversational style enabled by default; however, the voices also support a neutral style which may be optimal for other use cases (newscasting, e-learning, audio books, etc.). Have a listen to the expressive voice samples below:

Emotions, Emphasis, Interjections

As humans, we convey emotion in the words we speak, whether we realize it or not. We tend to sound empathetic when apologizing to one another. We sound uncertain when we don’t know the answer to something, and perhaps cheerful when we finally discover the answer. The ability to convey emotion is what makes us human. IBM Watson’s expressive voices can express emotion in order to better convey the meaning behind the words, ultimately reducing customer frustration when dealing with today’s phone experiences. Your voice bot will sound empathetic when telling the customer their package is delayed or cheerful when they’ve successfully helped the customer book an airline ticket.

Emphasis is another important aspect of human speech. Did you say Austin or August? Did you say you lost the card ending in 4876? IBM expressive voices support word emphasis so that your bot can better convey the desired meaning of the text. Users can mark which words to stress and choose from four levels of emphasis – none, moderate, strong and reduced.

Interjecting with words like hmm, um, oh, aha, or huh is another feature of human speech that IBM expressive voices now support to enable an interaction that feels more natural and human-like. The new expressive voices will automatically detect these interjections in text and treat them as such without any SSML (Speech Synthesis Markup Language) indication. There’s also an option to disable the interjections when they’re not appropriate (e.g., ‘oh’ can be used to spell out the number 0 or as an interjection).

How to Get Started with Expressive Voices

Expressive voices and features will be available in US English first in September 2022, followed by other languages in early 2023. The US English expressive voices are Michael, Allison, Lisa, and Emma. For customers using the V3 version of Michael, Allison, or Lisa, switching to the expressive voices shouldn’t cause disruption: it will still sound like the same speaker, but with a more natural and conversational style. It’s easy to start using the new voices – simply specify the new voice name in your API request, just as you would for any other voice (the exact identifiers are listed in the API reference).
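
For orientation, here is a minimal Python sketch of calling the Text to Speech synthesize endpoint with an expressive voice and a word-level emphasis tag. The service URL, API key and the voice identifier shown (en-US_AllisonExpressive) are placeholder assumptions; substitute the values from your own service instance and the API reference.

```python
import requests

# Placeholders: use the URL and API key from your own Text to Speech service instance
TTS_URL = "https://api.us-south.text-to-speech.watson.cloud.ibm.com/instances/YOUR_INSTANCE_ID"
API_KEY = "YOUR_API_KEY"
VOICE = "en-US_AllisonExpressive"  # assumed expressive voice name; confirm in the API reference

# SSML adds word-level emphasis; interjections like "Oh" are detected automatically
ssml_text = (
    "<speak>Oh, I'm sorry about the delay. "
    "Your card ending in <emphasis level='strong'>4876</emphasis> has been updated.</speak>"
)

response = requests.post(
    f"{TTS_URL}/v1/synthesize",
    params={"voice": VOICE},
    headers={"Content-Type": "application/json", "Accept": "audio/wav"},
    auth=("apikey", API_KEY),
    json={"text": ssml_text},
    timeout=30,
)
response.raise_for_status()

with open("greeting.wav", "wb") as f:
    f.write(response.content)  # WAV audio returned by the service
```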

In summary, IBM’s new technology of expressive voices is the next level of conversational AI. It checks the box when it comes to an engaging and natural experience that mirrors that of a human agent. The new voices relieve the customer frustration of feeling unheard and drive call deflection from human agents. To learn more about the expressive voices, see the resources below.




Creating a holistic 360-degree “citizen” view with data and AI

Achieving health equity is perhaps the greatest challenge facing US public health officials today. In a 2021 report released by the Commonwealth Fund, the nation ranked last among high-income countries in access to healthcare and equity, despite spending a far greater share of its GDP on healthcare.

Healthcare disparities are closely linked to race, ethnicity, gender and other demographic and socioeconomic issues surrounding access, cost and quality of care. Health inequities in the US came into sharp relief during the COVID-19 pandemic: Analyses of federal, state and local healthcare data show that people of color experienced a disproportionate burden of cases and deaths.

But there is promising news. The recent crisis not only highlighted the critical need to focus more on health equity but also revealed how tapping into data-driven technologies can better ensure equity for marginalized groups.

In 2020, IBM collaborated with the Rhode Island Department of Health, uncovering existing and emerging data patterns to aid the agency’s overall response to the health crisis. This work resulted in real-time, data-driven decisions that identified pandemic-fueled disparities such as lack of access to vaccines. Ultimately, the project led to more equitable emergency response services in the Rhode Island regions that needed it most.

Today state health departments around the country are taking the data-leveraging lessons learned during the pandemic and applying them to an array of public health crises affecting underserved groups. Health departments are focusing on issues such as food insecurity, unwanted pregnancies, increased suicide rates and opioid addiction. Thanks to innovations in data analytics and AI, leaders can make smarter, faster and more efficient decisions to improve public health outcomes and advance health equity.

Learn how you can take advantage of your data so users make faster, better decisions using the right architecture.

Creating a citizen 360 view

The journey begins with building a data fabric architecture to ensure quality data can be accessed by the right people at the right time, no matter where it resides. The key is making sure all this data is transparent and responsibly governed for privacy and security.

A data fabric facilitates the end-to-end integration of various data pipelines and cloud environments by using intelligent and automated systems. It also provides a strong foundation for 360-degree views of customers, or, in this case, citizens.

In B2B or B2C circles, a 360-degree view of customers or citizens offers a holistic, comprehensive picture of a person based on data collected from all touch points. This drives business value by creating more effective outcomes as well as more personalized customer experiences. For instance, this data infrastructure enables a state health workforce to better understand the overall healthcare landscape and subsequently improve individual care and address inequities.

Achieving data literacy with storytelling and visualization

Collecting massive amounts of data presents a common issue for both private and public enterprises: how to make sense of all that data.

Part of data storytelling involves data visualization, the process of analyzing large amounts of data and communicating the results in a visual context. But strong storytelling must go beyond presenting data in the form of charts, graphs and tables.

For instance, state health departments comprising many stakeholders and players need to create a compelling storyline and consistent messaging around their data, so they can communicate it effectively to their entire workforce.

Keeping the citizen front and center

Data and trustworthy AI also provide predictive analytics for insights that can solve some of the most pressing health issues, including hunger and food insecurity.

For example, the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), a federal food assistance program that operates through state health departments and local agencies, has seen decreasing enrollment over the past decade despite a sharp rebound in poverty levels. Suspected factors include slow modernization — until recently, all WIC benefits were still delivered as paper vouchers — and persistent stigma against federal assistance. Providing assistance depends on identifying and addressing these and other factors.

The WIC Enrollment Collaboration Act of 2020 calls for state health departments to count unenrolled WIC-eligible families. A data fabric with a 360-degree view can help with that count. It can also help states build and deploy referral mechanisms and conduct a comprehensive outreach campaign (also detailed in the Act). Working together, states can use data to assess and improve access to WIC and reduce food hardship.

Throughout the US, state departments of health, education and behavioral health are using data to overcome other health crises, including the opioid and suicide epidemics. A centralized data hub provides a powerful public health crisis response system that allows for collaboration across government branches and state lines. Such multi-pronged efforts are closing the gap in critical information, shedding light on how and why disparities occur and paving the way to better health equity for all.

Today IBM is working with state health departments to accelerate their digital transformations in the areas of overall governance, operations, automation, data insights and more.




Moving beyond spreadsheets with IBM Planning Analytics

My journey with IBM Planning Analytics started with an early morning phone call to tell me that a member of my team had died, suddenly and unexpectedly. Not only was his loss a personal tragedy, it was a tragedy for the whole organization. Our teams relied heavily on his decades of expertise to help us plan and forecast strategically for the future.

The company had been through tough times overall. An expensive enterprise resource planning (ERP) implementation meant there was no money left for other systems, and we’d been forced to run our budget process on a complicated network of 27 linked spreadsheets. Fred, the colleague we had just lost, was the only one who knew how they worked, and suddenly he wasn’t around.

If there was ever an example of key-person risk, this was it.

A world without spreadsheets 

We stumbled our way through the next budget process as best we could, until we came across IBM Planning Analytics with Watson. We could see, for the first time, a world that could exist without spreadsheets. We could see a world where people worked together on a common tool using a common approach to unite and agree on data-driven decisions for the good of the business. Better still, it was a world that didn’t rely on a single person.

But the story doesn’t end there.

Making sense of the data

Once we’d moved off the spreadsheets, we discovered the power that comes from managing data. We found countless problems with our master data, all of which had been masked through spreadsheet aggregation. We had been blissfully unaware of these challenges for years and now it was time to address them. By having full visibility of our data with IBM Planning Analytics, we could finally make sense of all our data together.

These problems were not trivial. In fact, we found examples where our product costs were materially misstated and discovered we’d been selling some products for less than they cost to make. Through manual updates to spreadsheets, and working at a high level, errors – even seemingly blatant ones – were hiding in plain sight.

There’s little doubt in my mind that our investment in IBM Planning Analytics paid for itself several times over. Not only did we mitigate the key-person risk, which is honestly all we wanted to do, but we gained so much more. We made the organization value its data and want to put data to work for the good of the business.

Unlocking the value of data with the promise of AI

It’s often hard for leaders to see the value in analytical tools. Spreadsheets seem fine, but they’re not. They lull you into a false sense of security. Not only is the business logic tied to the spreadsheet’s owner, a risk in its own right, but the apparent simplicity of a spreadsheet conceals everything buried within it, from hidden errors to untapped value.

The promise of AI is tantalizing. It can provide insights that humans could never find. But realistically, can we ever hope to get there if the business still thinks in rows and columns? Our expert colleague’s untimely death was a tragedy, but we thank him every day for his legacy. I encourage business leaders who want to make a true impact on their bottom line to explore a continuous integrated planning solution like IBM Planning Analytics, which eliminates the manual work and helps to:

  • Enable automated planning processes
  • Encourage cross-functional collaboration
  • Embed AI capabilities for more accurate predictive forecasting

If you want to learn more, including how to create multidimensional plans, budgets and forecasts, explore interactive dashboards and reports, and discover pre-built solutions by industry or use case, you can get started today with a 30-day free trial or request a demo of IBM Planning Analytics with Watson.

I also encourage you to join the IBM Business Analytics live stream event on October 25 to hear more case studies on how others have used Planning Analytics to accelerate decision-making.




Eli Manning and the power of AI in ESPN fantasy football

Eli Manning was the obvious choice. For the last six years, IBM has been working with ESPN to infuse AI-generated insights into their fantasy football platform. But we needed someone who could help us tell the story; someone who could grab the attention of fantasy football enthusiasts, introduce them to the artificial intelligence of Watson, and encourage them to embrace the era of data-driven decision making.

Why Eli? No, it’s not because I’ve been a New York Giants fan my whole life. And no, it’s not because the Giants and IBM are both nicknamed “Big Blue.” While neither of those things hurt, we ultimately chose Eli because he has so much in common with IBM.

Let me explain. Back in 2016, IBM formed a partnership with ESPN. In this relationship, we use IBM’s advanced analytics and AI capabilities to analyze the massive amount of data produced by fantasy football. We then serve up insights that help guide the roster decisions of ESPN’s fantasy football users. Today, those insights take the form of two features:

  • Trade Analyzer with Watson, which uses AI to analyze player statistics and media commentary to help team managers understand the value of a potential trade.
  • Player Insights with IBM Watson, which helps fantasy managers estimate the potential upside and downside of a matchup, analyze boom or bust chances, and assess injuries.

Why is IBM in the fantasy football business? Great question. Two reasons: First, we’re solving a very real business problem for a valued partner. ESPN’s Fantasy Football may look like fun and games, but it’s also serious business. More than 11 million people play on ESPN’s fantasy platform. And it’s a critical form of digital engagement for ESPN, one that also drives consumption of related football content, both digital and broadcast. Just like IBM’s other clients, ESPN is operating in a highly competitive market, and requires constant innovation to improve the customer experience. Using AI to produce insight at scale addresses a critical need for ESPN, just as it does for IBM clients in other industries.

The second reason is more self-serving. Simply put, ESPN Fantasy Football offers IBM a powerful platform to demonstrate our capabilities to millions of people. Both Trade Analyzer and Player Insights are produced by transforming vast quantities of data into insights that inform decision making. We’re analyzing the performance statistics of all 1,900 players in the league. But the numbers don’t always tell the whole story. So we’re also using the natural language processing capability of Watson Discovery to mine insights from millions of blogs, articles and podcasts produced by media experts. Last year alone, Watson served up more than 34 billion AI-powered insights to ESPN fantasy players.

Which brings me back to Eli. When Eli Manning joined the New York Giants back in 2004 as the number one pick in the draft, many Giants fans thought he would be the second coming of Joe Namath: a big star in the big city. But Eli was more subtle than that, more Ordinary Joe than Broadway Joe. There were no flashy fur coats and movie star girlfriends. Just an understated, workman-like grit that resulted in two championships. An understated assassin who let his actions on the field do all the talking.

How is this similar to IBM? Well, it’s been 17 years since IBM sold its ThinkPad business to Lenovo. That was the last time our iconic “eight-bar” logo appeared on a consumer-facing device. But despite this lack of visibility, our work has never been more consequential than it is today. It’s not flashy, but our technology and expertise support the operation of the most mission-critical systems on the planet: electrical grids, airlines, telecommunications networks, banks, government services, and many others.

Technologies like hybrid cloud and AI are powerful, complex, and often difficult for people to comprehend. They operate behind the scenes, in data centers and back offices. But they are critically important to our clients. That’s why we showcase the work of IBM Consulting through partnerships like the Masters, the US Open, and ESPN’s Fantasy Football. And that is why Eli Manning is helping us tell our story.




How to enable trustworthy AI with the right data fabric solution

Organizations are increasingly depending upon artificial intelligence (AI) and machine learning (ML) to assist humans in decision making. It’s how top organizations improve customer interactions and accelerate time-to-market for goods and services. But these organizations need to be able to trust their AI/ML models before the models can be operationalized and used in crucial business processes. Trustworthy AI has become a requirement for the successful adoption of AI in the industry.

These days, if an AI model makes a biased, unfair decision involving the health, wealth or well-being of humans, an organization can hit the news for the wrong reasons. Alongside the significant brand reputation risk, there’s also a growing set of data and AI regulations across the world and across industries — like the upcoming European Union AI Act — that companies must adhere to.

Examine the following checklist for grading the trustworthiness of any AI model; a small code sketch of the fairness check follows the list:

  • Fairness: Can you confirm that the machine learning model is not providing a systematic disadvantage to any individual group of people over another, based on factors like gender, orientation, age or ethnicity?
  • Explainability: Can you explain why the model made a certain decision? For instance, if someone applies for a loan, the bank should be able to clearly explain why that person was rejected or approved.
  • Privacy: Are the right rules and policies in place for various people to access the data at different stages of the AI lifecycle?
  • Robustness: Does the model behave consistently as conditions change? Is it scalable? How do you accommodate for drifting data patterns?
  • Transparency: Do you have all the facts relevant to the usage of the model? Are they captured throughout different stages of the lifecycle and readily available (much like a nutrition label)?
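
To make the fairness item concrete, here is a small, product-agnostic Python sketch of one widely used check, the disparate impact ratio (the favorable-outcome rate of a monitored group divided by that of a reference group). The column names, toy data and the 0.8 rule of thumb are illustrative assumptions, not an IBM-specific implementation.

```python
from typing import Dict, List

def disparate_impact(records: List[Dict], group_key: str, monitored: str,
                     reference: str, outcome_key: str = "approved") -> float:
    """Ratio of favorable-outcome rates for the monitored vs. reference group.

    Values well below 1.0 (a common rule of thumb flags anything under 0.8)
    suggest the model may be systematically disadvantaging the monitored group.
    """
    def favorable_rate(group: str) -> float:
        rows = [r for r in records if r[group_key] == group]
        if not rows:
            raise ValueError(f"No records for group '{group}'")
        return sum(1 for r in rows if r[outcome_key]) / len(rows)

    return favorable_rate(monitored) / favorable_rate(reference)

# Toy example with hypothetical loan decisions
decisions = [
    {"gender": "female", "approved": True},
    {"gender": "female", "approved": False},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
]

ratio = disparate_impact(decisions, "gender", monitored="female", reference="male")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 here, below the 0.8 rule of thumb
```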

How a data fabric enables trustworthy AI

Before you can trust an AI model and its insights, you need to be able to trust the data that’s being used. The right data fabric solution will naturally support these pillars and help you build trustworthy AI models. Consider these three crucial steps in the lifecycle of building out your next AI or machine learning model or improving a current one.

1. Comprehensive, trusted data sets

First things first: you need access and insight into all relevant data.

Research shows that up to 68% of data is not analyzed in most organizations. But successful AI implementations require connection to high quality, accurate data that’s ready for self-service consumption by the right stakeholders. Without the ability to aggregate data from disparate internal and external sources (on-premises, public or private clouds), you’ll have an inferior AI model, simply because you don’t have all the information you need.

Second, you need to make sure that the data itself can be trusted. There are two factors in a trusted data set:

  1. Do you have the right rules and policies for who can access and use data?
  2. Do you understand bias that exists in the data, and do you have the right guardrails to use that data for building and training models?

2. Guardrails during model building, deployment, management and monitoring

According to Gartner, 53% of AI and ML projects are stuck in pre-production phases. You can operationalize your AI by looking at all stages of the AI lifecycle. Automated, integrated data science tools help build, deploy and monitor AI models. This approach helps ensure transparency and accountability at each stage of the model lifecycle. But to do so, you also need guardrails for fairness, robustness, fact collection and more throughout every stage.

Often data scientists aren’t thrilled with the prospect of generating all the documentation necessary to meet ethical and regulatory standards. This is where technology such as IBM FactSheets can help by reducing the manual labor needed to capture metadata and other facts about a model across stages of the AI lifecycle. With AI governance solutions, a data scientist using standard, open Python libraries and frameworks can have facts about model building and training collected automatically.

Similarly, facts can be collected while the model is in the testing and validation stages. All this information is incorporated into end-to-end workflows to ensure the team meets ethical and regulatory standards.
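
As a simplified, hypothetical illustration of this idea (not the actual FactSheets client API), the Python sketch below records facts about a model at each lifecycle stage and writes them to a single JSON record that reviewers and validators can inspect later.

```python
import json
from datetime import datetime, timezone

class FactCollector:
    """Toy fact collector; real governance tooling captures many of these facts
    automatically from standard Python training libraries and frameworks."""

    def __init__(self, model_name: str):
        self.facts = {"model_name": model_name, "stages": []}

    def record(self, stage: str, **details) -> None:
        # Each entry notes when the fact was captured and at which lifecycle stage
        self.facts["stages"].append({
            "stage": stage,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            **details,
        })

    def export(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(self.facts, f, indent=2)

# Hypothetical usage across the lifecycle of a credit risk model
collector = FactCollector("credit_risk_model")
collector.record("training", framework="scikit-learn", training_rows=120_000,
                 features=["income", "tenure", "utilization"])
collector.record("validation", auc=0.91, disparate_impact=0.86)
collector.record("deployment", environment="production", approved_by="model_risk_team")
collector.export("credit_risk_model_factsheet.json")
```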

3. Processes that provide AI governance

In most organizations there are a number of data science tools, making it difficult to govern and manage information, let alone adhere to increasingly strict security, compliance and governance regulations. You can use automated, scalable AI governance to drive consistent, repeatable processes designed to increase model transparency and ensure both traceability and accountability. You can improve collaboration, compare model predictions, quantify model risk, optimize model performance, identify and mitigate bias, reduce risks like drift and decrease the need for model retraining.

Ultimately, data management and providing users access to the right data at the right time are at the core of successful AI and AI governance. A data fabric architecture helps you accomplish this by minimizing data integration complexities and simplifying data access across an organization to facilitate self-service data consumption. With IBM Cloud Pak® for Data, you can formalize a workflow that allows different teams to interact with your model at various stages. It’s not just about granting proper access to data science teams. Your model risk management team, IT operations team and line-of-business employees also need appropriate access.

You can also handle different data sets and sources, from training data to payload data to ground truth data, with the right levels of privacy and governance around them. Critically, you can automate the capture of metadata from each data set and model and keep it in a central catalog. Using IBM Cloud Pak for Data, you can do this at scale with consistency and apply it to models that have been built using open-source or third-party tools.

Better data-driven decision making with AI and AI governance

The potential advantage of AI is reflected in the strategy trends of industry leaders. By 2023, it’s estimated that 60% of enterprise intelligence initiatives will be business-specific, shortening the data-to-decisions time frame by 30%, driving higher agility and resiliency. But to cement this data-driven trust with clients, it’s crucial that proper controls are in place across the AI lifecycle, especially when AI is used in critical situations.




Real-time analytics on IoT data

Join SingleStore and IBM on September 21, 2022 for our webinar “Accelerating Real-Time IoT Analytics with IBM Cognos and SingleStore”.

Why real-time analytics matters for IoT systems

IoT systems access millions of devices that generate large amounts of streaming data. For some equipment, a single event may prove critical to understanding and responding to the health of the machine in real time, increasing the importance of accurate, reliable data. While real-time data remains important, storing and analyzing the historical data also creates opportunities to improve processes, decision-making and outcomes.

Smart grids, which include components like sensors and smart meters, produce a wealth of telemetry data that can be used for multiple purposes, including:

  • Identifying anomalies such as manufacturing defects or process deviations
  • Predictive maintenance on devices (such as meters and transformers)
  • Real-time operational dashboards
  • Inventory optimization (in retail)
  • Supply chain optimization (in manufacturing)

Considering solutions for real-time analytics on IoT data

One way to achieve real-time analytics is with a combination of a time-series database (InfluxDB or TimescaleDB) or a NoSQL database (MongoDB) + a data warehouse + a BI tool:

[Figure: an operational database + a data warehouse + a BI tool]

This architecture raises a question: Why would one use an operational database, and still need a data warehouse? Architects consider such a separation so they can choose a special-purpose database — such as a NoSQL database for document data — or a time-series database (key-value) for low costs and high performance.

However, this separation also creates a data bottleneck — data can’t be analyzed without moving it from an operational data store to the warehouse. Additionally, NoSQL databases are not great at analytics, especially when it comes to complex joins and real-time analytics.

Is there a better way? What if you could get all of the above with a general-purpose, high-performance SQL database? You’d need this type of database to support time-series data, streaming data ingestion, real-time analytics and perhaps even JSON documents.

[Figure: SingleStoreDB multi-model capabilities]

Achieving a real-time architecture with SingleStoreDB + IBM Cognos

SingleStoreDB supports fast ingestion with Pipelines (a native, first-class feature) and concurrent analytics on IoT data to enable real-time analytics. On top of SingleStoreDB, you can use IBM® Cognos® Business Intelligence to help you make sense of all of this data. The previously described architecture then simplifies into:

[Figure: Real-time analytics architecture with SingleStoreDB and IBM Cognos]

Pipelines in SingleStoreDB allow you to continuously load data at blazing fast speeds. Millions of events can be ingested each second in parallel from data sources such as Kafka, cloud object storage or HDFS. This means you can stream in structured as well as unstructured data for real-time analytics.

[Figure: Streamlined data pipeline]
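
As a rough sketch of what this looks like in practice, the Python snippet below creates and starts a pipeline over a Kafka topic of meter readings using the singlestoredb client. The connection string, broker, topic, table and column names are all hypothetical placeholders.

```python
import singlestoredb as s2  # SingleStore's Python client (pip install singlestoredb)

# Hypothetical connection string; replace with your own workspace credentials
conn = s2.connect("admin:password@svc-example.singlestore.com:3306/iot")
cur = conn.cursor()

# Target table for streaming meter readings
cur.execute("""
    CREATE TABLE IF NOT EXISTS meter_events (
        device_id  VARCHAR(64),
        reading    DOUBLE,
        event_time DATETIME(6),
        SORT KEY (event_time)
    )
""")

# A pipeline continuously ingests events from Kafka into the table, in parallel
cur.execute("""
    CREATE PIPELINE meter_events_pipeline AS
    LOAD DATA KAFKA 'kafka-broker:9092/meter-events'
    INTO TABLE meter_events
    FORMAT JSON (device_id <- device_id, reading <- reading, event_time <- event_time)
""")
cur.execute("START PIPELINE meter_events_pipeline")

conn.close()
```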

But wait, it gets better…

  1. Once data is in SingleStoreDB, it can also be used for real-time machine learning, or to safely run application code imported into a sandbox with SingleStoreDB’s Code Engine Powered by WebAssembly (Wasm).
  2. With SingleStoreDB, you can also leverage geospatial data — for instance to factor site locations, or to visualize material moving through your supply chains.

Armis and Infiswift are just a couple of examples of how customers use SingleStoreDB for IoT applications:

  • Armis uses SingleStoreDB to help enterprises discover and secure IoT devices. Armis originally started with PostgreSQL, migrated to ElasticSearch for better search performance and considered Google BigQuery before finally picking SingleStoreDB for its overall capabilities across relational, analytics and text search. The Armis Platform, of which SingleStoreDB now plays a significant part, collects an array of raw data (traffic, asset, user data and more) from various sources — then processes, analyzes, enriches and aggregates it.
  • Infiswift selected SingleStoreDB after evaluating several other databases. Their decision was driven in part because of SingleStore’s Universal Storage technology (a hybrid table type that works for both transactional and analytical workloads).

Want to learn more about achieving real-time analytics?

Join IBM and SingleStore on September 21, 2022 for our webinar “Accelerating Real-Time IoT Analytics with IBM Cognos and SingleStore”. You will learn how real-time data can be leveraged to identify anomalies and create alarms by reading meter data and classifying unusual spikes as warnings.

We will demonstrate:

  • Streaming data ingestion using SingleStoreDB Pipelines
  • Stored procedures in SingleStoreDB to classify data before it is persisted on disk or in memory
  • Dashboarding with Cognos

These capabilities enable companies to:

  • Provide better quality of service through quickly reacting to or predicting service interruptions due to equipment failures
  • Identify opportunities to increase production throughput as needed
  • Quickly and accurately invoice customers for their utilization




How to stay ahead of ever-evolving data privacy regulations

Enterprises are dealing with a barrage of upcoming regulations concerning data privacy and data protection, not only at the state and federal level in the US, but also in a dizzying number of jurisdictions around the world.

Kicked off several years ago by the groundbreaking introduction of the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), the regulation and compliance trend is only going to intensify. In August the Federal Trade Commission (FTC) released an Advance Notice of Proposed Rulemaking (ANPRM) titled Commercial Surveillance and Data Security that encompasses a wide range of data protection and privacy issues, including data monetization models, discrimination and algorithmic biases and data security, to name a few.

As these types of ANPRMs continue to be released and regulation swiftly catches up to innovation, a recent Gartner survey predicts that 75% of the world’s population will have its personal data covered under modern privacy regulations by the end of 2024.

At IBM’s recent Chief Data and Technology Officer Summit on data privacy, I spoke with some of the world’s top data leaders about the two-pronged challenge they’re now facing: ensuring that data policies and practices meet regulatory demands, while also continuing to innovate with new technologies.

We agreed there is a way to navigate this complicated landscape and maintain a competitive advantage that delivers business value. The journey starts with having a multimodal data governance framework that is underpinned by a robust data architecture like a data fabric. This framework can create a standard approach for meeting regulatory compliance while allowing for customization to address local regulations and enabling a proactive stance toward new ones.

Adopting a privacy-centric approach built around a data fabric

A data fabric is an architectural approach that simplifies data consumption across a diverse and distributed landscape, while adhering to data privacy requirements. Think of a data fabric as a single pane of glass that creates visibility across an enterprise. By doing so, it greatly reduces the complexity of managing disparate regulations worldwide. What’s more, a data fabric can automate data governance and security by creating a governance layer across the data lifecycle.

To understand how a data fabric helps maintain compliance to privacy regulations, it’s helpful to look at some essential elements of that single pane of glass.

Build a foundation using a common catalog and metadata

Building a data fabric starts with creating visibility using a data catalog, which is an inventory of an organization’s information assets. It lets appropriate parties, such as the company’s chief data analyst, know what the data is and where it resides. Without a data catalog, data can remain hidden or unused and become impossible to manage.

A proper data catalog has a common taxonomy that helps everyone communicate more effectively and solves a common challenge of data integration: different data sets describing the same concepts with different terms. This is important for data privacy: if the wrong term is used, data that should be limited in access might accidentally be made available to the whole business.

Similarly, active metadata — data about data — is at the heart of how a data fabric delivers on privacy for the same reason as a common data catalog. If you don’t know the details about your data, how can you truly say who is meant to see it or how you can use it? In the context of a data fabric, think of metadata as an augmented knowledge graph displaying the network of data across an entire enterprise, along with the conditions that apply to these sets of data.

Operationalize data privacy through automation

Once metadata has been created, it can be tagged, signifying which data is sensitive, limiting who has access to it and so forth. Then intelligent automation begins.

Automated metadata generation is particularly important for access and privacy. Consider, for example, an enterprise that wants to bring in a new data set containing transaction information such as item descriptions, quantity purchased, name, address and credit card number. When this data set is ingested, automated tagging labels the item descriptions and quantity as general transaction data, the name and address as personal data, and the credit card number as financial data. This tagging allows policy enforcement at the point of access. If business users access the data set, they can see the general transaction data, but the personal and financial data is automatically made anonymous.
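
A toy, product-agnostic Python sketch of that flow is shown below; the tag rules, role policy and masking behavior are illustrative assumptions rather than how any particular catalog implements enforcement.

```python
import re

# Illustrative auto-tagging rules: column-name patterns mapped to data classes
TAG_RULES = {
    r"(card|account).*num": "financial",
    r"name|address|email":  "personal",
}

# Illustrative access policy: the data classes each role may see unmasked
POLICY = {
    "business_user": {"general"},
    "fraud_analyst": {"general", "personal", "financial"},
}

def tag_column(column: str) -> str:
    """Assign a data class to a column based on its name."""
    for pattern, tag in TAG_RULES.items():
        if re.search(pattern, column, flags=re.IGNORECASE):
            return tag
    return "general"

def read_row(row: dict, role: str) -> dict:
    """Enforce the policy at the point of access: mask anything the role may not see."""
    allowed = POLICY.get(role, set())
    return {col: (val if tag_column(col) in allowed else "***MASKED***")
            for col, val in row.items()}

transaction = {"item_description": "running shoes", "quantity": 1,
               "customer_name": "Jane Doe", "card_number": "4111111111111111"}

print(read_row(transaction, "business_user"))
# {'item_description': 'running shoes', 'quantity': 1,
#  'customer_name': '***MASKED***', 'card_number': '***MASKED***'}
```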

Govern data and allow self-service consumption

While many of the regulations coming down the pike will be similar or even identical, how they are enacted will look very different across countries and regions. The challenge lies with demonstrating compliance to regulators while providing business users with a way to easily access the information. Otherwise, compliance creates a speed bump for innovation. That’s where the self-service element plays a critical role.

While self-service suggests a lot of freedom, the data fabric must include multimodal governance, allowing only certain people to access that data. Again, that single pane of glass brings together the privacy and the security aspects at a single access point, while offering users an easier way to make the data they choose accessible to others. The ability to conduct real-time monitoring and audits helps secure the systems and comply with regulations, but it also helps the business mitigate data loss through breaches and keep models accurate.

Find your holistic data privacy and security solution by getting started with a data fabric strategy.

To hear more from data leaders around privacy, watch the replay of our CDO/CTO Summit series and attend our upcoming in-person CDO Summit.




Optimizing shipping logistics in a time of change

Within logistics, shipping is a vast and delicate ecosystem. Over the last couple of years many people were directly impacted by complete production shutdowns, huge and unexpected swings in consumer demand, lack of labor at ports, a shortage of shipping containers… just to name a few!

Addressing challenges with business analytics

To help with some of these challenges, my company Spitfire Analytics has been working with a global retail organization that is responsible for one of the largest independent shipping networks in the world. This client already used IBM Planning Analytics with Watson to provide inputs to a demand plan and combine it with various input factors – allocation of volume from a business unit to a specific address, filling rates of containers, etc. However, most of the calculations were done in a traditional relational database at week-level data granularity, and running the allocations took approximately 4 hours.

When market disruptions occurred, their demand plan became so much more vital to answering critical questions, such as: “Do we have enough carrier capacity to transport our forecast volume? Do we have enough handling employee hours available at a warehouse to unload the inbound stock? Can we give our customers what they want?!”

The decision was made to move to a more continuous planning cycle as the need to align the traditional mid-term plans (monthly granularity) with the operational plans (weekly granularity) became vital. The only option was to start breaking the data down to specific days and dates. The existing relational database logic was already taking 4 hours to perform 1/7th of the calculations that would be required – so that was a no-go.

Leaning into the benefits of IBM Planning Analytics

This is when we decided to do everything in IBM Planning Analytics, which enabled:

  • A single platform for user access
  • ETL imports from multiple source systems
  • An advanced UI to easily process and calculate vast allocations and export all that data to be picked up by external applications

All these benefits resulted in getting the total processing time to under 1 hour. Now users could see the results of their plans quickly enough to make decisions and act efficiently.

Through the scalable nature of IBM Planning Analytics, we were able to handle algorithm sparsity exceptionally well. Routes with multiple stopping points had carriers allocated volumes and lead times applied to provide a receiving date, resulting in approximately 300 million records being exported to a database table every night.

Working with that same retail customer, we are now producing multiple other applications that take those outputs and assess and project where bottlenecks may occur. For the long-term tactical planning process, this customer can start making fast, data-driven decisions like whether to temporarily rent external warehouse space, build a completely new warehouse, enter into new shipping contracts, or make staffing changes.

Lessons in optimizing logistics

Ultimately, we found the business needed to change its way of working due to external factors outside its control. We’ve been hearing this consistently from many customers as a result of unexpected changes and market shifts that can impede the growth of a business. Accordingly, we’ve found a good way of working: identify the root cause of problems in the planning process, then quickly focus on developing and implementing a planning solution alongside representatives from the customer’s organization – often in as little as 5-10 days of consultancy time. This leaves the customer not only with a better planning solution but also with the skills to expand that solution and use case themselves to handle the next set of business priorities.

By working with a strong planning analytics solution, organizations no longer need someone versed in multiple coding languages and specialized skills like Excel macros. Now, teams across any department – especially financial and operational teams – can work from a single source of truth to streamline planning, reporting and analysis to manage performance and build alignment across the enterprise.

Ready to learn more?



