Monthly Archives: August 2021

insideBIGDATA Latest News – 8/31/2021

In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.



eBook: A Practical Guide to Using Third-Party Data in the Cloud

[Sponsored Post] To help you navigate a proliferating data landscape, AWS Data Exchange would like to present you with a copy of the new eBook, “A Practical Guide to Using Third-Party Data in the Cloud.” Learn how innovative teams are shifting their focus from data-driven business intelligence to accelerating insight-driven decision-making and now are turning to third-party datasets as a differentiator.



Infographic: How to Leverage Big Data Tech in Your Construction Company

The big data market is expected to reach $99.31 billion this year, and companies that take advantage of big data analytics have reported increases in revenue, productivity and efficiency. The visual below from our friends over at BigRentz shows how construction companies can leverage big data and take advantage of new analytics technologies—from 3D modeling to material tracking and on-site safety sensors. 



Data Warehouse Costs Soar, ROI Still Not Realized

Enterprises are pouring money into data management software – to the tune of $73 billion in 2020 – but are seeing very little return on their data investments. According to a new study from Dremio, the SQL Lakehouse company, produced by Wakefield Research, only 22% of the data leaders surveyed have fully realized ROI in the past two years, and most data leaders (56%) have no consistent way of measuring it.



Supply chain planning in an xP&A world 2.0

Supply chains continue to be tested and transformed by the increasing globalization of the world economy, along with the massive amounts of insightful—but often disparate—data that come with it. Demand volatility is on the rise, and given the pandemic’s ongoing uncertainty, it shows no signs of slowing down any time soon. Supply chain planning seeks to achieve and maintain an effectively lean supply equilibrium, one in which organizations keep just enough inventory on hand to meet projected demand while reducing overhead and carrying costs. Supply chain leaders know one thing is clear: inventory accuracy has never been more important, and they’ll need a truly comprehensive planning tool to get it right.

Finding the perfect balance between sufficiency and surplus can prove especially tricky; however, extended planning and analysis (xP&A) solutions are making this easier. At IBM, we take this a step further with “continuous integrated planning.” This enables planning to expand beyond the walls of finance and foster collaboration with other functional teams to find the right supply and demand balance.

Supply and demand: the risk of getting it wrong

Inventory accuracy often dictates a company’s success. Overshooting projections can lead to obsolescence, with excess inventory sitting in a warehouse, incurring costs and consuming valuable space that could be used for faster-moving inventory. To mitigate these expenses, organizations resort to deeply discounting the slow-moving inventory. In addition to the margin erosion associated with these activities, there are brand and market implications as well, such as lower consumer expectations and confidence around price and quality. This trend is extremely difficult to reverse, assuming you can even sell the extra inventory.

Excess inventory is only one part of the problem. Inventory shortages can also wreak havoc on a company’s bottom line. A large retail organization in the U.S. found this out first-hand in December 2020, when it kept in-store inventory exceptionally lean on the expectation that its route to market would shift from in-person shopping to online shopping. Ultimately, the retailer disappointed customers who browsed empty shelves and out-of-stock items, and it lost repeat business.

Inventory forecasts are sensitive not only to internal data but also to external factors and environmental shifts. To plan with greater certainty, organizations need a planning solution that embeds predictive analytics and prescriptive analytics along with what-if scenario planning. They need a solution that integrates external factors such as weather data, market data and consumer buying patterns into the process, providing them the foresight to pivot quickly.
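
To make that idea concrete, here is a minimal, purely illustrative sketch of a demand forecast that blends internal sales history with external signals and then runs a simple what-if scenario. It is not IBM Planning Analytics code; the column names and figures are invented for illustration.

```python
# Purely illustrative: a toy weekly demand model that combines internal sales
# history with two hypothetical external factors (temperature and a market
# index), then compares a baseline forecast against a what-if scenario.
import pandas as pd
from sklearn.linear_model import LinearRegression

history = pd.DataFrame({
    "avg_temp_c":   [21, 24, 27, 30, 29, 25, 22, 18],      # invented weather signal
    "market_index": [98, 99, 101, 103, 102, 100, 97, 96],  # invented market signal
    "units_sold":   [540, 575, 630, 690, 665, 600, 550, 505],
})

X = history[["avg_temp_c", "market_index"]]
y = history["units_sold"]
model = LinearRegression().fit(X, y)

# Baseline forecast for next week vs. a what-if scenario (heat wave, softer market).
baseline = pd.DataFrame({"avg_temp_c": [23], "market_index": [99]})
what_if = pd.DataFrame({"avg_temp_c": [31], "market_index": [97]})

print("baseline units:", round(model.predict(baseline)[0]))
print("what-if units: ", round(model.predict(what_if)[0]))
```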

With prescriptive analytics, the operations team can match demand with current inventory levels at distribution locations to optimize placing the right products in the right locations at the right time. An extended planning & analysis solution provides full end-to-end visibility into both inventory and demand in real time, thereby reducing the imbalance and providing a 360-degree view of the supply chain process.
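
As a toy illustration of the prescriptive side, the sketch below frames “the right products in the right locations” as a small transportation problem and solves it with SciPy. It is not drawn from any IBM product; the costs, supplies, and demands are made-up numbers.

```python
# Purely illustrative: allocate units from 2 distribution centers to 3 stores
# at minimum shipping cost, subject to supply limits and store demand.
import numpy as np
from scipy.optimize import linprog

# Shipping cost per unit from each distribution center (rows) to each store (cols).
cost = np.array([
    [4.0, 6.0, 9.0],
    [5.0, 4.0, 7.0],
])
supply = np.array([80, 120])     # units available at each distribution center
demand = np.array([60, 70, 50])  # units each store needs

n_dc, n_store = cost.shape
c = cost.flatten()  # decision variables x[i, j] laid out row-major

# Supply constraints: total shipped out of each distribution center <= its supply.
A_ub = np.zeros((n_dc, n_dc * n_store))
for i in range(n_dc):
    A_ub[i, i * n_store:(i + 1) * n_store] = 1.0

# Demand constraints: total shipped into each store == its demand.
A_eq = np.zeros((n_store, n_dc * n_store))
for j in range(n_store):
    A_eq[j, j::n_store] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None), method="highs")
print(res.x.reshape(n_dc, n_store))  # optimal shipment plan, DC x store
```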

Here’s how companies have achieved inventory accuracy with an integrated planning solution.

Allen Edmonds: Finding the perfect fit between inventory levels and customer demand 

When you buy a pair of shoes from Allen Edmonds, you expect a perfect fit. To keep customers coming back for more, it’s vital to stock the right styles and sizes in the right stores at the right time.

By transforming its planning processes with IBM Planning Analytics with Watson, Allen Edmonds gained insight into sales, regional preferences and more. Smarter decisions about which items to place in which stores helped the company boost sales, customer satisfaction and loyalty—even while reducing inventory levels.

Results: a 10% lift in forecasting accuracy, with results for one major event coming within 3% of forecast

Pebble Beach: Creating the best shopping experience with IBM Planning Analytics with Watson 

Pebble Beach needed to satisfy shoppers across 15 unique stores carrying over 30,000 products.

Pebble Beach deployed IBM Planning Analytics with Watson to help its retail division analyze inventory levels, optimize purchasing, and make better use of merchandise. As a result, Pebble Beach keeps its stores fully stocked with the most desirable items, boosting sales and helping guests find the perfect memento of their visit.

In summary, we know that achieving lean supply equilibrium is difficult. Embracing extended planning using IBM Planning Analytics with Watson empowers organizations to not only overcome the supply vs. demand imbalance, but also to mitigate operating costs and ultimately enhance customer loyalty.

Learn more about IBM Planning Analytics with Watson 


The post Supply chain planning in an xP&A world 2.0 appeared first on Journey to AI Blog.



5 Misconceptions of ML Observability

In this special guest feature, Aparna Dhinakaran, Chief Product Officer at Arize AI, explains five of the biggest misconceptions surrounding machine learning observability. As tools emerge to facilitate the three stages of the machine learning workflow–data preparation, model building, and production–it’s typical for teams to develop misconceptions as they attempt to make sense of the crowded, confusing, and complex ML Infrastructure space.



The Future Is Now: Why Data Is Key to Tech Research & Development

In this contributed article, IT and digital marketing specialist Natasha Lane, highlights the reasons why using data is so crucial for research and development. Using data can be the key to recognizing and solving humanity’s leading challenges in the years to come. We’re talking about everything from water shortage, climate change, the need to develop safe self-driving cars, and so on.



A new chapter in the IBM and Cloudera partnership

The amount of data collected by large enterprises is estimated to grow 10 times each year, and 90% of this data remains unused or underutilized. Managing these data sources across various silos is time-consuming and costly. The lack of a cohesive governance strategy can lead to challenges in visibility, governance, portability and management that prevent enterprises from unlocking the business value of their data.

To help enterprises effectively manage their data needs, IBM entered a partnership with Cloudera almost a decade ago to expand our big data capabilities. In 2019, Cloudera merged with Hortonworks to pursue a hybrid cloud vision that further brought our companies together.

Today IBM is excited to announce a new chapter of our partnership with Cloudera that puts us in an even stronger position to help enterprises with their data and AI needs. We are strengthening our joint development and go-to-market programs to bring the advanced analytical capabilities of IBM Cloud Pak for Data, a unified platform for data and AI, to Cloudera Data Platform. This new offering will enable use cases in data science, machine learning, business intelligence, and real-time analytics directly on data within Cloudera Data Platform. The integration brings Cloudera under the IBM data fabric, a hybrid, multicloud data architecture that helps businesses access the right data just in time at the optimum cost, with end-to-end governance, regardless of where the data is stored.

Introducing Cloudera Data Platform for IBM Cloud Pak for Data

As the name suggests, this offering combines Cloudera’s best-in-class data lake with the advanced analytical capabilities of IBM Cloud Pak for Data. Cloudera Data Platform (CDP) for IBM Cloud Pak for Data provides one of the most complete multi-function platforms in the market. Now, businesses can run edge, streaming, data engineering, ETL, data warehousing, data visualization, and machine learning use cases with a single offering.

CDP for IBM Cloud Pak for Data provides a fast path to modernize data platforms in place without performing a costly architectural reimplementation and migration.

CDP for IBM Cloud Pak for Data is hybrid and secure. It can run end-to-end anywhere with a full span of security and fine-grained enterprise-level governance that many other platforms can’t match. IBM’s state-of-the-art data fabric uses AI to automate complex data management tasks and universally discover, integrate, catalog, secure, and govern data across multiple environments.

Key features

  • Separation of storage and compute — CDP for IBM Cloud Pak for Data provides a data fabric with secure access to data anywhere it resides, from ingest to governance and data engineering, serving advanced analytics and high-performance BI all on one platform.
  • SQL analytics for all your data — By leveraging Big SQL as well as Hive and Impala, CDP for IBM Cloud Pak for Data provides warehouse-grade performance that exceeds that of alternatives in the market.
  • Run data science at scale — Use Watson Studio and CDP to build, run, and manage AI models at petabyte scale.
  • Automated AI lifecycle management — CDP for IBM Cloud Pak for Data leverages the automation capabilities of IBM Watson Studio to speed up the lifecycle of your critical data science projects.
  • Streamline data engineering — Take advantage of Cloudera Streaming Analytics technologies such as Flink, Apache Kafka, and SQL Stream Builder, and integrate them with IBM technologies like DataStage to achieve full-breadth data engineering.
  • Real-time reporting and BI — Data can be ingested in real time with Flink and then displayed in IBM Cloud Pak for Data analytics dashboards (a minimal sketch of this ingest pattern follows this list).
  • Automated governance and cataloging — Discovered data and associated metadata are automatically cataloged, and assets are generated, removing the need for manual metadata/DDL generation.
  • Open platform — Built on open systems and using non-proprietary data formats, the solution allows businesses to leverage data on any cloud.
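
To illustrate the real-time reporting item above, here is a rough, hypothetical sketch of that ingest pattern using PyFlink’s Table API with the standard Flink SQL Kafka connector. The topic name, schema, and broker address are assumptions, and this is not CDP- or IBM Cloud Pak for Data-specific configuration.

```python
# Hypothetical sketch: stream order events from Kafka with Flink SQL and build
# a continuous aggregation that a downstream BI dashboard could read. Requires
# PyFlink plus the Flink Kafka connector on the classpath; names are invented.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Register a Kafka-backed source table using the standard Flink SQL connector.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        store_id STRING,
        amount   DOUBLE,
        ts       TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'broker:9092',
        'format' = 'json',
        'scan.startup.mode' = 'latest-offset'
    )
""")

# A continuously updating result that reporting tools could consume.
revenue_by_store = t_env.sql_query(
    "SELECT store_id, SUM(amount) AS revenue FROM orders GROUP BY store_id"
)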

In short, CDP for IBM Cloud Pak for Data:

  1. Enables data science at scale
  2. Provides a seamless single view of data with complete security and governance, without the need for data movement or replication
  3. Merges stream and batch data sets for analytics and real-time dashboards

Together these benefits protect your existing technology investments in Hadoop while unlocking the business value of your data.

Next steps

To learn more about CDP for IBM Cloud Pak for Data, please visit our product page. You can also book a personal consultation there.

For more details, please visit IBM Cloud Pak for Data, IBM Data Fabric, and Cloudera Data Platform or join the Cloud Pak for Data Community.

The post A new chapter in the IBM and Cloudera partnership appeared first on Journey to AI Blog.



Heard on the Street – 8/26/2021

Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace.


