Monthly Archives: April 2022

Big Data Will Open Up the Benefits of Sustainability Across the Agriculture Sector

In this special guest feature, Lindsay Suddon, Chief Strategy Officer for Proagrica, believes that now is the time for the agriculture sector to harness the power of data and work together to achieve increases in productivity, profitability, sustainability, food safety and security. It won’t just be the challenges of climate change forcing the issue, but also an increasing groundswell of global consumer sentiment and demand for change.



Achieving Data and Legal Compliance in the Event Industry

In this contributed article, Devin Cleary, VP of Global Events at Bizzabo, discusses how to balance the benefits and risks of the new era of event data, along with best practices for ensuring data security, privacy and compliance. Data security and legal compliance must be top of mind for today’s event industry organizers. Data is currency in the digital age.



Using AI to reinvent the enterprise

From how businesses communicate with their customers through virtual assistants, to automating key workflows and even managing network security, there is no doubt that AI is a catalyst for accelerating top-line impact, causing disruption and unlocking new market opportunities.

At IBM’s recent Chief Data and Technology Officer Summit, I had an exciting conversation with Mark Foster, Chairman, IBM Consulting, about how enterprises are using AI to reinvent themselves, the main challenges they face, and how they plan to invest in AI over the next 24 months.

With the accelerated pace of many organizations’ digital transformation, we have seen the emergence of new platform business models. These models enable enterprises to make better use of data, and achieve their strategic business objectives, through improved service to their clients, more efficient operations, and better experiences for their employees.

Organizations have been digitally transforming in two ways simultaneously: from the inside-out and the outside-in. The ability to apply AI, automation, blockchain, the Internet of Things (IoT), 5G, cloud and quantum computing at scale drives the inside-out cognitive transformation of organizations. And organizations also experience outside-in reinvention, a new way to reach, engage and enable customers to interact with the enterprise, with responsible use of the exploding volumes of data companies now hold.

Now, we are seeing a third dimension of digital transformation: the openness of business platforms across their ecosystems, resulting in a Virtual Enterprise. By stretching intelligent workflows and virtualized processes across broader systems, the return on investment of a Virtual Enterprise compounds through the resulting ecosystems, digital workflows and networked organizations. The Virtual Enterprise is supported by a “golden thread” of value that animates the enterprise and binds ecosystem participants. A key characteristic of the Virtual Enterprise is data-led innovation – the openness of the virtual enterprise accelerates access to new sources of product and service innovation, using technologies like AI.

The challenges to successful AI adoption

  1. Strategic perception – with the advent of the Virtual Enterprise, the complexity of organizations has increased. While some enterprises have a clear vision of what they want to be, many are struggling with that big picture.
  2. Execution – Delivering transformation at scale remains the main challenge for many enterprises continuing their digital reinvention. How fast, and how much, can the business model be transformed?
  3. Skills – Lack of skills inside the organization is one of the top challenges. IBM Garage Methodology has been helping many of our clients navigate skills gaps and solve significant problems using their data, new technologies, and existing ecosystems.

Companies that can overcome adoption and deployment barriers and tap AI and automation tools to tackle these challenges will be able to deliver value from AI.

Investing in AI

Businesses plan to invest in all areas of AI, from skills and workforce development to buying AI tools and embedding those into their business processes, creating agile learning systems that will build applications more efficiently and effectively.

Over the next 24 months, most AI investments will continue to focus on key capabilities that define AI for business — automating IT and processes, building trust in AI outcomes, and understanding the language of business.

In our previous CDO/CTO Summit, “Leadership During Challenging Times,” I shared how enterprises are becoming more intelligently automated, data-driven and predictive; risk-aware and secure. Leaders are designing organizations for agility and speed by infusing AI across the foundational business functions: customer care, business operations, the employee experience, financial operations and, of course, IT operations. I believe these investments will continue to accelerate rapidly as customers look for new, innovative ways to drive their digital transformations by using hybrid cloud and AI.

To hear more from Chief Artificial Intelligence Officers from some of the world’s most prestigious organizations and learn about adopting AI and delivering more business value, watch the replay of the CDO/CTO Summit “Using AI to reinvent the enterprise.” I invite you to register for our CDO/CTO Summit series here. In June, we will discuss “Data Fabric: Delivering on your Data Strategy.”




Data Virtualization’s Ubiquity: Data Meshes, Data Products, Data Lake Houses, Data Fabrics 

In this contributed article, editorial consultant Jelani Harper discusses how data virtualization is the underlying technology for some of the most progressive architectures today, including that of the data mesh, data lake house, and data fabric. Although it’s still regarded as a desirable, dynamic means of integrating data, it’s silently reshaping itself into something that encompasses this attribute but, ultimately, is much more.



How a data fabric overcomes data sprawls to reduce time to insights

Data agility, the ability to store and access your data from wherever makes the most sense, has become a priority for enterprises in an increasingly distributed and complex environment. The time required to discover critical data assets, request access to them and finally use them to drive decision making can have a major impact on an organization’s bottom line. To reduce delays, human errors and overall costs, data and IT leaders need to look beyond traditional data best practices and shift toward modern data management agility solutions that are powered by AI. That’s where the data fabric comes in.

A data fabric can simplify data access in an organization to facilitate self-service data consumption, while remaining agnostic to data environments, processes, utility and geography. By using metadata-enriched AI and a semantic knowledge graph for automated data enrichment, a data fabric continuously identifies and connects data from disparate data stores to discover relevant relationships between the available data points. Consequently, a data fabric self-manages and automates data discovery, governance and consumption, which enables enterprises to minimize their time to value. You can enhance this by appending master data management (MDM) and MLOps capabilities to the data fabric, which creates a true end-to-end data solution accessible by every division within your enterprise.
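
To make that idea concrete, here is a minimal Python sketch of metadata-driven relationship discovery, the kind of inference a fabric’s semantic layer automates. The catalog contents and column names are invented for illustration and not drawn from any particular product:

```python
from itertools import combinations

# Hypothetical catalog metadata; a real fabric harvests this automatically.
catalog = {
    "sales_orders":   {"columns": {"sku", "order_date", "qty", "store_id"}},
    "inventory":      {"columns": {"sku", "warehouse_id", "on_hand_qty"}},
    "supplier_leads": {"columns": {"sku", "supplier_id", "lead_time_days"}},
}

def discover_relationships(catalog, min_overlap=1):
    """Propose joins between datasets that share column metadata."""
    edges = []
    for (a, meta_a), (b, meta_b) in combinations(catalog.items(), 2):
        shared = meta_a["columns"] & meta_b["columns"]
        if len(shared) >= min_overlap:
            edges.append((a, b, sorted(shared)))
    return edges

for a, b, keys in discover_relationships(catalog):
    print(f"{a} <-> {b} joinable on {keys}")
```

In a real fabric the overlap test would draw on semantics (business terms, data classes), not just column names, but the principle is the same: candidate relationships are proposed automatically rather than discovered by hand.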

Data fabric in action: Retail supply chain example

To truly understand the data fabric’s value, let’s look at a retail supply chain use case in which a data scientist wants to predict product backorders so that they can maintain optimal inventory levels and prevent customer churn.

Problem: Traditionally, developing a solid backorder forecast model that takes every factor into consideration would take anywhere from weeks to months as sales data, inventory or lead-time data and supplier data would all reside in disparate data warehouses. Obtaining access to each data warehouse and subsequently drawing relationships between the data would be a cumbersome process. Additionally, as each SKU is not represented uniformly across the data stores, it is imperative that the data scientist is able to create a golden record for each item to avoid data duplication and misrepresentation.
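
As a rough illustration of the golden-record step, the toy Python snippet below matches SKU variants from different systems under one canonical key. The records and the normalization rule are hypothetical; real MDM matching is far more sophisticated:

```python
import re

def normalize_sku(raw: str) -> str:
    """Canonicalize SKU variants (case, separators) into one match key."""
    return re.sub(r"[^A-Z0-9]", "", raw.upper())

# Invented records: the same physical item keyed three different ways.
records = [
    {"source": "sales",     "sku": "ab-1001", "desc": "Widget, blue"},
    {"source": "inventory", "sku": "AB 1001", "on_hand": 42},
    {"source": "supplier",  "sku": "AB1001",  "lead_time_days": 12},
]

golden = {}  # one consolidated "golden record" per canonical key
for rec in records:
    key = normalize_sku(rec["sku"])
    merged = golden.setdefault(key, {"sku": key})
    merged.update({k: v for k, v in rec.items() if k not in ("sku", "source")})

print(golden)  # one merged record combining sales, inventory and supplier fields
```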

Solution: A data fabric introduces significant efficiencies into the backorder forecast model development process by seamlessly connecting all data stores within the organization, whether they are on premises or in the cloud. Its self-service data catalog auto-classifies data, associates metadata with business terms and serves as the single governed data resource the data scientist needs to create the model. Not only can the data scientist use the catalog to quickly discover the necessary data assets, but the semantic knowledge graph within the data fabric also makes relationship discovery between assets easier and more efficient.

The data fabric allows for a unified and centralized way to create and enforce data policies and rules, which ensures that the data scientist only accesses assets relevant to their job. This removes the need for the data scientist to request access from a data owner. Additionally, the data privacy capabilities of a data fabric ensure that appropriate privacy and masking controls are applied to the data the data scientist uses. The data fabric’s MDM capabilities can generate golden records that keep product data consistent across the various data sources and enable a smoother experience when integrating data assets for analysis. By exporting an enriched, integrated dataset to a notebook or AutoML tool, data scientists can spend less time wrangling data and more time optimizing their machine learning model. The prediction model can then easily be added back to the catalog (along with its training and test data, to be tracked through the ML lifecycle) and monitored.
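
The modeling step itself then looks like ordinary data science work. The sketch below assumes the fabric has already exported an enriched, integrated dataset; the file name and feature columns are invented for illustration:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical export from the fabric's catalog; file and columns are invented.
df = pd.read_csv("enriched_backorder_dataset.csv")
features = ["on_hand_qty", "lead_time_days", "forecast_3_month", "sales_9_month"]
X, y = df[features], df["went_on_backorder"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The point of the fabric is everything that happens before `read_csv`: discovery, access, policy enforcement and integration are automated, so the training workflow itself stays conventional.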

How does a data fabric impact the bottom line?

With the newly implemented backorder forecast model built upon a data fabric architecture, the data scientist has a more accurate view of inventory level trends over time and of predictions for the future. Supply chain analysts can use this information to prevent out-of-stocks, which increases overall revenue and improves customer loyalty. Ultimately, the data fabric architecture can significantly reduce time to insights by unifying fragmented data on a single platform in a governed manner, in any industry, not just retail or supply chain. Learn more about the data fabric architecture and how it can benefit your organization.




insideBIGDATA Latest News – 4/28/2022

In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.



Hasura Introduces GraphQL Joins to Build a Unified Data API to Multiple GraphQL Sources

GraphQL innovation leader Hasura announced GraphQL Joins, a new set of capabilities that let developers instantly join data from across different GraphQL services to create a unified GraphQL API. Hasura does this using open source GraphQL standards, eliminating the need for custom code or upstream service changes.
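
To suggest what such unification means for a consumer of the API, here is a hypothetical Python client issuing a single GraphQL query whose fields originate in two different upstream services. The endpoint, schema and field names are invented for illustration and are not Hasura’s documented API:

```python
import requests

# One query, two upstream services; everything here is invented for illustration.
query = """
query OrderWithShipping($id: ID!) {
  order(id: $id) {        # resolved by an orders service
    id
    total
    shipment {            # joined in from a separate shipping service
      carrier
      eta
    }
  }
}
"""

resp = requests.post(
    "https://example.com/v1/graphql",  # hypothetical unified endpoint
    json={"query": query, "variables": {"id": "1234"}},
)
resp.raise_for_status()
print(resp.json()["data"]["order"])
```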



Baseten Gives Data Science and Machine Learning Teams the Superpowers They Need to Build Production-Grade Machine Learning-Powered Apps

Baseten formally launched with its product that makes going from machine learning model to production-grade applications fast and easy by giving data science and machine learning teams the ability to incorporate machine learning into business processes without backend, frontend or MLOps knowledge. The product has been in private beta since last summer with well-known brands that have used it for everything from abuse detection to fraud prevention. It is in public beta at this time.



Augmented data management: Data fabric versus data mesh

Data fabric and data mesh are emerging data management concepts that are meant to address the organizational change and complexity of understanding, governing and working with enterprise data in a hybrid multicloud ecosystem. The good news is that the two data architecture concepts are complementary. But what exactly are a data fabric and a data mesh, and how can you use these data management approaches to take advantage of your enterprise data for better decision-making?

What’s a data fabric?

Gartner defines a data fabric as “a design concept that serves as an integrated layer of data and connecting processes. A data fabric utilizes continuous analytics over existing, discoverable and inferenced metadata to support the design, deployment and utilization of integrated and reusable datasets across all environments, including hybrid and multicloud platforms.” [1]

The data fabric architectural approach can simplify data access in an organization and facilitate self-service data consumption at scale. This approach breaks down data silos, allowing for new opportunities to shape data governance, data integration, single customer views and trustworthy AI implementations among other common industry use cases.


Because it is uniquely metadata-driven, the abstraction layer of a data fabric makes it easier to model, integrate and query any data source, build data pipelines, and integrate data in real time. A data fabric also streamlines deriving insights from data through better data observability and data quality, automating manual tasks across data platforms using machine learning. This improves data engineering productivity and time to value for data consumers.
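
One concrete flavor of that automation is data observability: flagging loads whose statistics drift from history. The simplified Python sketch below, with invented numbers, checks a pipeline’s daily row counts for outliers, a task a fabric would run continuously across many platforms:

```python
import statistics

def is_anomalous_load(history: list[int], latest: int, z_threshold: float = 3.0) -> bool:
    """Flag a load whose row count is a statistical outlier versus history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

daily_row_counts = [10_120, 9_980, 10_340, 10_050, 10_210]  # invented history
print(is_anomalous_load(daily_row_counts, latest=3_200))    # True: likely a broken feed
```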

What’s a data mesh?

According to Forrester, “A data mesh is a decentralized sociotechnical approach to share, access and manage analytical data in complex and large-scale environments—within or across organizations.” [2]

The data mesh architecture is an approach that aligns data sources by business domains, or functions, with data owners. With data ownership decentralized, data owners can create data products for their respective domains, meaning data consumers, both data scientists and business users, can use a combination of these data products for data analytics and data science.


The value of the data mesh approach is that it shifts the creation of data products to the upstream subject matter experts who know their business domains best, rather than relying on data engineers to cleanse and integrate data products downstream.

Furthermore, the data mesh accelerates the re-use of data products by enabling a publish-and-subscribe model and leveraging APIs, which makes it easier for data consumers to get the data products they need, including reliable updates.
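
A toy Python registry can illustrate the publish-and-subscribe pattern. Everything here, class and product names included, is a hypothetical stand-in for the catalogs and APIs a real data mesh would rely on:

```python
from collections import defaultdict
from typing import Callable

class DataProductRegistry:
    """Toy stand-in for the catalog/API layer behind a data mesh."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)
        self._versions: dict[str, int] = defaultdict(int)

    def subscribe(self, product: str, callback: Callable) -> None:
        # Consumers register interest in a domain team's data product.
        self._subscribers[product].append(callback)

    def publish(self, product: str, dataset_uri: str) -> None:
        # A domain team publishes a new version; all subscribers are notified.
        self._versions[product] += 1
        for notify in self._subscribers[product]:
            notify(product, self._versions[product], dataset_uri)

registry = DataProductRegistry()
registry.subscribe("orders.daily", lambda p, v, uri: print(f"{p} v{v} -> {uri}"))
registry.publish("orders.daily", "s3://domain-orders/daily/2022-04-28/")
```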

Data fabric vs data mesh: How does a data fabric relate to a data mesh?

A data fabric and a data mesh can co-exist. In fact, there are three ways a data fabric enables the implementation of a data mesh:

  1. It provides data owners with data product creation capabilities, such as cataloging data assets, transforming assets into products and following federated governance policies (a sketch of the governance idea follows this list).
  2. It enables data owners and data consumers to use data products in various ways, such as publishing data products to the catalog, searching for and finding data products, and querying or visualizing data products through data virtualization or APIs.
  3. It uses insights from data fabric metadata to automate tasks by learning from patterns, whether as part of the data product creation process or as part of monitoring data products.
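
To ground the federated-governance point from item 1, here is a minimal Python sketch in which centrally agreed masking policies are enforced uniformly whenever a consumer reads a data product. The policies, columns and roles are invented for illustration:

```python
# Invented, centrally agreed masking policies; domains own the data,
# but every consumer-facing read passes through the same enforcement.
MASKING_POLICIES = {"email": "hash", "ssn": "redact"}

def apply_policies(row: dict, consumer_role: str) -> dict:
    """Enforce shared masking rules unless the consumer is privileged."""
    if consumer_role == "steward":
        return row
    masked = {}
    for col, value in row.items():
        policy = MASKING_POLICIES.get(col)
        if policy == "redact":
            masked[col] = "***"
        elif policy == "hash":
            masked[col] = hex(hash(value) & 0xFFFFFFFF)  # toy pseudonymization
        else:
            masked[col] = value
    return masked

print(apply_policies({"sku": "AB1001", "email": "a@b.com"}, consumer_role="analyst"))
```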

A data fabric gives you the flexibility to start with a single use case, allowing you to achieve quick time to value regardless of where your data is.

When it comes to data management, a data fabric provides the capabilities needed to implement and take full advantage of a data mesh by automating many of the tasks required to create data products and manage their lifecycle. By building on the flexibility of a data fabric foundation, you can implement a data mesh and continue to benefit from a use-case-centric data architecture, regardless of whether your data resides on premises or in the cloud.

Learn more about how you can use a data fabric to put your datasets to work across use cases such as data governance, customer 360 views, data integration, or even trustworthy AI.

 

[1] “Data Fabric Architecture is Key to Modernizing Data Management and Data Integration,” Gartner, 11 May 2021.

[2] “Exposing The Data Mesh Blind Side,” Forrester, 3 March 2022.




Hewlett Packard Enterprise Accelerates AI Journey from POC to Production with New Solution for AI Development and Training at Scale

Hewlett Packard Enterprise (NYSE: HPE) announced that it is removing barriers for enterprises to easily build and train machine learning models at scale, to realize value faster, with the new HPE Machine Learning Development System. The new system, which is purpose-built for AI, is an end-to-end solution that integrates a machine learning software platform, compute, accelerators, and networking to develop and train more accurate AI models faster, and at scale. 


