Big Data Industry Predictions for 2024

Welcome to insideBIGDATA’s annual technology predictions round-up! The big data industry has significant inertia moving into 2024. In order to give our valued readers a pulse on important new trends leading into next year, we here at insideBIGDATA heard from all our friends across the vendor ecosystem to get their insights, reflections and predictions for what may be coming. We were very encouraged to hear such exciting perspectives. Even if only half actually come true, Big Data in the next year is destined to be quite an exciting ride. Enjoy!

[NOTE: please check back often as we’ll be adding new content to this feature article into February 2024]

Daniel D. Gutierrez – Editor-in-Chief & Resident Data Scientist

Analytics

The landscape of advertising analytics is poised for a seismic shift with the evolution of omnichannel commerce. The traditional silos between online and offline consumer interactions are crumbling, paving the way for a truly omnichannel consumer. While physical/digital walls are falling down across the consumer’s journey, walled gardens and consumer privacy will still loom large, complicating analytics. This growth of the omnichannel consumer will demand a recalibration of marketing measurement models. Traditional digital last-click attribution will make way for a more nuanced approach that recognizes the influence of multiple touchpoints along the customer journey. This shift will bring forth a more accurate representation of the incremental value each channel contributes to creating and converting consumer demand. Privacy concerns will necessitate a delicate balance between data-driven personalization and respect for user privacy, and striking this equilibrium will be crucial to maintaining consumer trust while harnessing the full potential of omnichannel analytics. The future of advertising analytics in the era of omnichannel e-commerce will be characterized by a convergence of data, a redefinition of attribution, and a delicate dance with privacy. It’s not just a transformation; it’s a revolution in how we understand, interpret, and leverage consumer data for the art and science of advertising. – Skye Frontier – SVP of Growth at Incremental

Artificial Intelligence

AI won’t replace low-code, but enhance it for improved outcomes: For years, low code has given citizen developers the ability to create applications without coding experience. Now ChatGPT has brought to the horizon a promise of dramatic productivity gains for writing code. However, simply using ChatGPT to write code that a developer would have otherwise written does not solve the productivity problem at the right scale. The problems of reuse and maintenance remain unaddressed. Many months of developer time are consumed absorbing upgrades from upstream teams, executing tech stack upgrades, and implementing redesigns to uplevel apps to modern UI/UX patterns. Therefore, AI will not replace low-code but rather be used in tandem with low-code to improve productivity. Next year we will see enterprise software vendors using a combination of computer vision or a trained model to understand patterns and then triggering generative code within their low-code platforms. – Vikram Srivats with WaveMaker

Ownership will become the key determinant of success in whether companies’ AI initiatives actually take off in 2024: Businesses were eager to begin adopting generative AI in 2023, particularly as they saw the immediate impacts it had on internal productivity. But in the new year we’ll begin to see that, while it’s easy for companies to play around with AI, actually driving business impact takes much more than that. Companies delegating AI exploration without a clear problem or dedicated team tend to falter, leading to ineffective outcomes. Ownership will become the key determinant of success in whether companies’ AI initiatives actually take off in 2024 and beyond. When a business owner takes a vested interest in digital innovation, identifies a specific challenge, and assembles a team for experimentation and action, the likelihood of success surges. Ownership will be the key driver of who will succeed in harnessing AI’s transformative potential and who won’t. – Raj De Datta, AI visionary and CEO at Bloomreach

From Enterprise AI to Zero-Trust AI: In 2024, we will see a significant shift in how enterprises approach AI, from focusing on performance to emphasizing accountability. As AI becomes more integrated into critical decision-making processes, organizations will prioritize ensuring the accuracy and reliability of AI outputs. This shift will lead to the development of “zero-trust AI,” where the validation of data sources and the transparency of AI-induced modifications become paramount. The goal will be to create AI systems whose operations and decisions are not just effective but also understandable and reviewable by all stakeholders, thereby fostering a culture of trust and responsibility around AI usage. – David Boskovic, founder and CEO of Flatfile

AI will continue to boom, and we will see adaptations in almost every area of our lives. While it will undoubtedly make our lives easier in many ways, we will see an uptick in error rates because this technology is only as smart as the language it’s been trained on. AI will inevitably replace more people and jobs, but the good news is that it will also create more jobs. In a few years, we will see many IoT devices generating huge volumes of high-cardinality data. With AI, the possibilities are virtually endless, and we are only now starting to explore them. – Jason Haworth, CPO, Apica

AI had quite the year in 2023, dominating the headlines with major analyst firms predicting its significant impact over the years to come. But to be successful in 2024 and beyond, AI will be forced to rely on the very sources many fear the technology will replace: people and data. Retail data is highly complex and dynamic, with siloed information that is constantly in flux, whether it’s consumer buying behaviors, delayed shipments, product shortages or labor demands. Teams equipped with retail order and inventory data management systems will play a major role next year in helping produce and maintain the clean, accurate and accessible data businesses need to take full advantage of AI. – Nicola Kinsella, SVP of global marketing at Fluent Commerce

Organizations will appoint a chief AI officer to oversee the safe and responsible use of AI: In 2024, organizations will increasingly appoint senior executives to their leadership teams to ensure readiness for AI’s security, compliance, and governance implications. As employees become more accustomed to using AI in their personal lives, through exposure to tools such as ChatGPT, they will increasingly look to use AI to boost their productivity at work. Organizations have already realized that if they don’t empower their employees to use AI tools officially, they will do so without consent. Organizations will, therefore, appoint a chief AI officer (CAIO) to oversee their use of these technologies in the same way many have a security executive, or CISO, on their leadership teams. The CAIO’s role will center on developing policies and on educating and empowering the workforce to use AI safely, protecting the organization from accidental noncompliance, intellectual property leakage, or security threats. These practices will pave the way for widespread adoption of AI across organizations. As this trend progresses, AI will become a commodity, as the mobile phone has. – Bernd Greifeneder, Chief Technology Officer and Founder, Dynatrace

2024 will be the year of the AI and data C-Suite leader: If 2023 is the year that enterprise AI burst onto the scene, then 2024 will be a year of consolidation as businesses look to understand how to use it to gain a competitive advantage and comply with inevitable future regulations. To future-proof AI deployments, organizations will increasingly look to build out a role at the C-Suite level to oversee both AI innovation and compliance, but that won’t necessarily be in the form of a Chief AI Officer. Instead, AI will likely create a new generation of Chief Data Officers where existing data leaders develop new skill sets. Just as we’ve seen the rise of Chief Data and Analytics Officers, we could be about to see the start of a fresh generation of Chief Data and Artificial Intelligence Officers focused on ensuring the data foundations of AI models are compliant with new legislation and of a high enough quality to gain the business a competitive advantage. What’s certain is the rise of AI Governance committees, taking cross-functional roles in ensuring safe and efficient enterprise AI and partnering with Legal, Ethics, Security, and Privacy constituencies in the same way that Data officers have in years past. – Satyen Sangani, CEO and co-founder of Alation

AI’s Ugly Side is Further Revealed: The 2024 Presidential Election is one example of how the coming year will reveal more of AI’s nefarious capabilities. Expect to see deepfakes and other AI-generated disinformation designed to influence the election emerge at an alarming rate. If used by savvy threat actors, it’s possible these images could become compelling propaganda, creating a veritable wilderness of mirrors for voters, who will have trouble discerning reality from carefully crafted disinformation. This will be a growing focus area as the candidates’ campaigns kick into high gear. Perhaps no better example of the technology’s ugly side exists than AI-generated abuse imagery, which has been increasing in recent months. We’ll see more attention focused on preventing this in 2024, with a cluster of new solutions released to address the issue. Of course, we can also expect hackers to increasingly leverage AI for their bread-and-butter campaigns—attacking organizations and employees to exfiltrate sensitive data. Think threat actors leveraging the technology to improve their malware code or relying on generative AI to craft more legitimate-looking phishing emails. As this happens, organizations will need to adjust their training—for example, poor grammar, once a hallmark of phishing campaigns, will no longer serve as a red flag, thanks to generative AI. – Mike Wilson, founder and CTO of Enzoic

AI Regulation: We’ll start to see AI regulations in 2024. For example, there have been discussions around monitoring frontier model development that consumes lots of GPU compute. There will also need to be guardrails in place against deepfakes on the internet, given the 2024 presidential election. We think these efforts will make AI safer, similar to how the FDA regulates the drug industry. – Tim Shi, co-founder and CTO of Cresta

In 2024, AI will move beyond the hype cycle and put IT efficiency into overdrive: Like any other new technology, AI is still going through a hype cycle. People are beginning to better understand what AI looks like, and in 2024 we’ll move beyond the hype to more valid use cases. One result of this is that CIOs will need to show that they’re not using AI for AI’s sake. As IT pros embrace AI to automate workflows and boost efficiency, CIOs need to focus on arming their teams with AI tools to better their business and optimize IT workflows across teams. – Freshworks CIO, Prasad Ramakrishnan

The Future of AI Adoption & Roadblocks: AI adoption will accelerate, and it’ll spread. We will continue to see big advances in the capability of models, and our understanding of how they work will increase, which will itself unlock new advances. We’ll see more models tuned to specific use cases, from code to DNA, to CAD, to chemical structure, to image analysis. We’ll also see better integrations and user experience design within applications and workflows, far beyond a text box in which one types prose. Making models ‘natural’ to use may actually become the most impactful development, just as tuning and wrapping GPT-3 into a chat app made it usable for millions of users. Investments and funding for the companies building generative AI technologies will not slow down in the next year, even given the state of the financial system. What could slow down the development of generative AI, however, is the unavailability of enough hardware to satisfy demand. In that case, only the biggest companies, or those that already own a large amount of hardware, will be able to continue developing new approaches at scale. – Alex Chabot-Leclerc, Ph.D., VP of Digital Transformation at Enthought

Shallow AI solutions will be exposed: Overly complicated SaaS add-ons and features that claim to automate, but really just have an “AI sticker on top,” will be exposed as they detract from productive working hours. Users are getting smarter when it comes to AI, and a recent survey shows that a majority of IT pros (71%) are using AI to support their own workload. Relentless app rationalization and scrutiny are critical, especially in the new AI era. – Freshworks CIO, Prasad Ramakrishnan

The struggle for AI profitability will continue — and that’s okay: Companies building massive AI applications are not going to turn a profit any time soon, and that means the only ones that can actually run them are companies with insane cash balances, like Google and Microsoft. But these companies will continue to fight their way through this in 2024 and run losses for a very long period of time, until economies of scale bring the price of chips and processing down. Something to consider as these companies move forward is how open source fits into all of this. The risk for these larger companies is the possibility that they’ll make this sizable investment in their models — and then the models that actually end up winning are open source ones. So it will be critical for them to think about how to create differentiation in their models that goes beyond what the open source community will tackle. – Raj De Datta, AI visionary and CEO at Bloomreach

Ethical frameworks and regulation are necessary for AI and not just a distraction for organizations as they pursue their bottom line. We cannot avoid AI, as it’s the only way we can scale our operations in the asymmetrical cyber battlefield. Ethical frameworks and regulatory governance will become critically important to help AI function efficiently and equitably. Every new piece of software or service will have an AI or ML element to it. Establishing best practices for ethics in AI is a challenge because of how quickly the technology is developing, but several public- and private-sector organizations have taken it upon themselves to deploy frameworks and information hubs for ethical questions. All of this activity is likely to spark increasing amounts of regulation in the major economies and trading blocs, which could lead to an increasingly piecemeal regulatory landscape, at least for now. It’s safe to predict that the current “Wild West” era of AI and ML will fade quickly, leaving organizations with a sizable compliance burden when they want to take advantage of the technology. – Nick Savvides, Director of Strategic Accounts, Asia Pacific, Forcepoint

As boardrooms and C-suites intensify their focus on AI, the spotlight will magnify the imperative to resolve underlying data issues: In 2024, more CEOs and boardrooms will increasingly realize that data is the linchpin for AI’s success. I’m witnessing a seismic shift in the executive mindset; for the first time in years, CEOs actively seek to increase their technology spend, particularly in AI, as they see great promise. CEOs are not merely intrigued by AI’s potential; they’re captivated by the promise of GenAI to redefine the very fabric of how we conduct business—from revolutionizing customer experiences to optimizing supply chains and bolstering risk management. The allure of AI is undeniable; it holds the key to unlocking new markets, saving millions, and catapulting companies into a league of their own. However, the sobering truth that every CIO understands is that AI is not a plug-and-play miracle. The Achilles’ heel lies within our data—the most valuable yet underperforming asset due to its fragmented nature. Investments in AI are futile without unifying and managing our data to ensure it’s clean, connected, and trustworthy. The path to AI’s promise is paved with data unification. It’s about transforming data into a singular, interoperable product that can truly catalyze digital transformation and harness AI’s transformative power. – Manish Sood, Founder, CEO and Chairman of Reltio

2024 will be the year of adaptability and usability of AI tools: 2023 was the year of cautious experimentation with AI tools, but in 2024 organizations will shift their focus towards responsible deployment. While there is much that companies don’t fully understand about AI and its associated risks, there are many opportunities to take advantage of in business and in life. Falling behind in the AI adoption race can pose significant challenges for organizations. However, there is no one-size-fits-all model for organizations to follow. Technology leaders will need to assess which use cases benefit from the integration of new AI tools and which tools are better left untouched. They will also need to ensure that GenAI tools are used in a safe and responsible way, governed and controlled by organizational governance processes. This strategic approach ensures that AI adoption aligns with an organization’s unique goals and needs. – Barry Shurkey, CIO at NTT DATA

AI is Recession and Inflation Proof: Interest in AI will remain strong in 2024 regardless of which way the economy turns. AI’s potential to drive innovation and competitive advantage makes it a must-have, with its own line item in the budget. Measuring the ROI on AI will be critical, and practical use cases will be put under the microscope. For example, proving out how AI can make everyday tasks like data analysis cheaper and more broadly available to business users will be key. At the same time, investors will be more wary of AI companies. – Arina Curtis, CEO and co-founder of DataGPT

Ensuring AI Integrity in a Trustless World: With the proliferation of AI technologies like deepfakes and automated content generation, there’s an increasing need for mechanisms to verify AI. Web3 technologies offer a solution to this challenge by providing a framework for transparent, verifiable AI operations. This shift will be crucial for industries that are increasingly relying on AI, ensuring that AI remains a trustworthy tool despite the decentralized and often opaque nature of its operation. – Blane Sims, Head of Product of Truebit

60% of workers will use their own AI to perform their job and tasks. Businesses are scrambling to capitalize on the AI opportunity, but they won’t innovate fast enough to outpace widespread employee usage of consumer AI services — also known as bring-your-own-AI (BYOAI). Enterprises should focus on building a strategy to manage and secure BYOAI now, while they develop formal company-sanctioned AI resources. – Forrester

Access, scale, and trust: In 2024, the three biggest challenges that AI companies will face are access to AI tools, scalability within specific industries, and user trust in popular AI tools. We’ve seen the question of trust emerge in 2023, and that will be even bigger in 2024 when we see the impact of the AI Act. – Dan Head, CEO of Phrasee

2023 was the year of AI promises — 2024 will be the year of AI action. We will start to see the tangible outcomes of the initiatives companies have been putting in place and discover their impact on customers. Those who have chosen to invest in resources and identify opportunities for AI to work collaboratively with human intelligence (as opposed to replacing it) will be the ones who emerge ready to capture the market. – Laura Merling 

In 2024, we can expect to see a move towards automating the data collection process on the construction site. Today, teams are burdened with getting projects done on time and within budget – while still keeping safety and quality requirements in mind. With AI, both computer vision and generative AI, companies will be able to structure and standardize their data across the entire lifespan of a project. Whether it’s during the design process with building information modeling (BIM) and drawings, inputting credit cards to purchase materials, or validating insurance information to protect workers and the project, the construction industry works with a vast amount of data. We are already beginning to see general contractors leverage data in unique ways to improve their business, but a lot of the data is unstructured and isn’t used to its full potential. It’s reported that nearly 20% of time on a typical project is spent just searching for data and information. AI will be able to solve this problem through automated data collection, allowing individuals to spend more time and resources on pulling insights from their data to mitigate risk and improve the business. – Procore’s VP of Product, Data & AI Rajitha Chaparala

CX Gets a Facelift with AI: AI will help agents contribute to success by answering questions faster and better, resolving problems on first contact, communicating clearly, and leaving the customer feeling satisfied. This will lead to new CX strategies centered around AI to design, execute, and measure new or reimagined customer service experiences. According to Forrester, the key to many of 2024’s improvements will be behind-the-scenes GenAI, which augments customer service agents’ capabilities. – Sreekanth Menon, Genpact’s Global AI/ML Services Leader

Companies will have top-down mandates on the adoption of AI in 2024: Many team leaders will come back from the holidays to find mandates from their CEO and CFO with pointed targets that AI adoption should achieve. Expectations like reducing Opex by 20%, increasing CSAT/NRR by 10%, and generating 10% topline revenue through AI-based products and experiences will be at the forefront. In service of these objectives, some C-suite teams will appoint an AI leadership role to mimic the success of digital transformation winners in the previous decade. We anticipate Chief AI Officer or similarly titled roles will become common as organizations grapple with how to rapidly integrate this new technology into legacy operations. This new role will be somewhat contentious with the increasingly fractional role of the CIO. Whether CIOs can deploy enough automation to carve out a strong focus on AI or ultimately cede that territory to this newcomer in the C-suite is something to watch closely. – Sean Knapp, CEO of Ascend.io

Over the past few years, the CTO role has become the bridge between the tech-savvy and the business-savvy, charged with enabling the right solutions to create the best overall business outcomes. This comes with its communication challenges, as the CTO needs to navigate how to translate tech into an ROI for the organization’s board and C-suite. In 2024, the ability to educate their C-level colleagues will become even more important as artificial intelligence (AI) technologies become commonplace. The CTO will not only need to collaborate with the tech side of the business to understand what is realistically possible in the realm of AI, but will also need to communicate its potential on a business level – from both an employee productivity and a product standpoint. – Bernie Emsley, CTO at insightsoftware

AI will bridge the gap between managers and their direct reports. In 2024, AI will fill the missing gaps that managers have inadvertently caused. Whether it’s crafting more thoughtful performance reviews or identifying internal growth opportunities for their direct reports, AI will provide much needed support on tasks where managers are either inexperienced or too burnt out to handle. These AI capabilities will help them become stronger managers, in turn allowing them to better empower their direct reports. – David Lloyd, Chief Data Officer, Ceridian

AI Will Need to Explain Itself: Users will demand a more transparent understanding of their AI journey with “Explainable AI” and a way to show that all steps meet governance and compliance regulations. The White House’s recent executive order on artificial intelligence will put heightened pressure on organizations to demonstrate they are adhering to new standards on cybersecurity, consumer data privacy, bias and discrimination. – Mark Do Couto, SVP, Data Analytics, Altair

Impact of AI on 2024 Presidential Election: AI promises to shape both the 2024 campaign methods and debates; however, it’s interesting that even candidates with tech backgrounds have avoided AI specifics so far. We’ve seen immense interest in AI and machine learning as they transform the way the world works, does business, and uses data. As a global society we need to be aware of and carefully consider potential shortcomings of AI, such as unintended bias, erroneous baseline data, and/or ethical considerations. Even if the topic isn’t covered in debates, the challenge and opportunity of AI is something that the next administration will have to grapple with. – Ivanti’s Chief Product Officer, Sri Mukkamala 

AI Answers The Call for Help Managing Data Overabundance: Today’s data professionals have an overwhelming amount of information at their fingertips, but many may lack the actionable insights they need. And with the increase in data being generated across distributed sources—328.77 million terabytes daily—organizations are grappling with the challenges of data management. Data is one of the most valuable assets an enterprise has, yet it’s fundamentally useless unless it can be leveraged, understood, and applied effectively. As we approach 2024, data management is rapidly evolving toward a future dominated by artificial intelligence. AI is the answer for IT teams as they navigate today’s increasingly complex distributed and hybrid digital environments. Because these technologies process more information than any one human ever could, they support resource-constrained IT teams by ensuring applications and services are running properly without the need for human intervention. AI-powered observability and ITSM solutions, in particular, can provide a lift to IT teams by enabling them to automate tasks, detect security threats and performance anomalies, optimize performance, and make better decisions based on data analysis. Yet our path forward in 2024 requires deliberate planning and a keen understanding of how and in what ways AI can help us. While walking the exhibit halls of several large IT conferences this year, I was surprised how almost every vendor’s booth was emblazoned with AI captions. These frothy headlines won’t turn a poor or mediocre product into a good one. And organizations that begin their journey to AI by hurrying to implement the latest shiny new technology without analysis are the least likely to see long-term and sustainable success. Instead, carefully plan your AI strategy and you’ll reap the rewards long into the future. – Kevin Kline, Senior Staff Technical Marketing Manager from SolarWinds

Companies will upskill non-technical teams on data and analytics, in preparation for an AI-led future: AI has significant potential to transform the roles of many knowledge workers, but there’s one problem: too few employees understand data and analytics well enough to use them effectively. Generative models are literally designed to generate data. More than ever, we need people to interpret the output and layer in the business context or adjustments to the raw output to ensure it’s appropriate. – Megan Dixon – VP of Data Science at Assurance IQ

AIOps for Network Operations: Network optimization can support better performance of AI, but AI can also support better performance of networks. Although it’s still early days for AIOps (AI for IT operations), it is beginning to show potential. While all areas of IT operations are covered by AIOps, one area which is now emerging as an important component is AIOps for network operations. Network engineers are being faced with increasingly complex network landscapes, combining a distributed workforce, a multitude of devices, and cloud infrastructure, etc. AIOps simplifies the management of network operations through automation, predictive analytics, and root cause analysis on the basis of big data and machine learning. AIOps can speed up troubleshooting and resolving issues for customers, and at the same time reduce costs, as precious NOC employees can work on more critical tasks that AI can’t solve today. In late 2023, one survey found that while only 4% of respondents have already integrated some kind of AIOps organization-wide, a further 15% have implemented AIOps as a proof of concept, and 29% have identified use cases for its future implementation. The market is forecast to triple in size over the next four years, reaching nearly US$ 65 billion in 2028. – Dr. Thomas King, CTO at DE-CIX
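
To make the pattern concrete, here is a minimal sketch of the kind of rolling-baseline anomaly detection an AIOps pipeline might apply to network telemetry. The window size, threshold, and latency figures are illustrative assumptions, not any vendor's implementation.

```python
from collections import deque
from statistics import mean, stdev

def detect_latency_anomalies(samples, window=60, threshold=3.0):
    """Flag samples that deviate strongly from the recent baseline.

    A toy stand-in for the predictive-analytics layer of an AIOps
    pipeline: keep a rolling window of recent measurements and flag
    any point more than `threshold` standard deviations away.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))  # candidate for root-cause analysis
        history.append(value)
    return anomalies

# Example: steady ~20 ms latency with one spike a NOC team would want surfaced.
latencies = [20.0 + 0.5 * (i % 3) for i in range(100)] + [95.0] + [20.0] * 20
print(detect_latency_anomalies(latencies))  # [(100, 95.0)]
```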

Optimizing Use of AI Will Determine Future Supply Chain Winners: AI and predictive analytics will separate the winners and losers over the next decade across manufacturing and retail. Leaders who harness big data to optimize inventory, forecast demand, control costs, and personalize recommendations will dominate their less analytical peers. Companies that fail to adopt will see spiraling costs and plummeting efficiency. – Padhu Raman, co-founder and chief product officer of Osa Commerce

Expect AI backlash, as organizations waste more time and money trying to ‘get it right’: As organizations dive deeper into AI, experimentation is bound to be a key theme in the first half of 2024. Those responsible for AI implementation must lead with a mindset of “try fast, fail fast,” but too often, the people in these roles do not understand the variables they are targeting, lack clear expected outcomes, and struggle to ask the right questions of AI. The most successful organizations will fail fast and quickly rebound from lessons learned. Enterprises should anticipate spending extra time and money on AI experimentation, given that most of these practices are not rooted in a scientific approach. At the end of the year, clear winners of AI will emerge if the right conclusions are drawn. With failure also comes greater questioning around the data fueling AI’s potential. For example, data analysts and C-suite leaders will both raise questions such as: How clean is the data we’re using? What’s our legal right to this data, specifically if used in any new models? What about our customers’ legal rights? With any new technology comes greater questioning, and in turn, more involvement across the entire enterprise. – Florian Wenzel, Global Head of Solution Engineering, Exasol

Organizations will (finally) Manage the Hype around AI: As the deafening noise around GenAI reaches a crescendo, organizations will be forced to temper the hype and foster a realistic and responsible approach to this disruptive technology. Whether it’s an AI crisis around the shortage of GPUs, climate effects of training large language models (LLMs), or concerns around privacy, ethics, bias, and/or governance, these challenges will worsen before they get better, leading many to wonder if it’s worth applying GenAI in the first place. While corporate pressures may prompt organizations to do something with AI, being data-driven must come first and remain the top priority. After all, ensuring data is organized, shareable, and interconnected is just as critical as asking whether GenAI models are trusted, reliable, deterministic, explainable, ethical, and free from bias. Before deploying GenAI solutions to production, organizations must be sure to protect their intellectual property and plan for potential liability issues. This is because while GenAI can replace people in some cases, there is no professional liability insurance for LLMs. This means that business processes that involve GenAI will still require extensive “humans-in-the-loop” involvement, which can offset any efficiency gains. In 2024, expect to see vendors accelerate enhancements to their product offerings by adding new interfaces focused on meeting the GenAI market trend. However, organizations need to be aware that these may be nothing more than bolted-on band-aids. Addressing challenges like data quality and ensuring unified, semantically consistent access to accurate, trustworthy data will require setting a clear data strategy, as well as taking a realistic, business-driven approach. Without this, organizations will continue to pay the bad data tax as AI/ML models struggle to get past a proof of concept and ultimately fail to deliver on the hype. – Atanas Kiryakov, founder and CEO of Ontotext

Thoughts on AI: As with any hype cycle, a lot of people are going to jump on this with poor plans or inadequate knowledge or ability and they’re going to produce bad, or even dangerous, code and applications. Organizations that invest heavily in AI and then fail are likely to be in trouble. Other organizations that take on these questionable AI apps and processes may suffer data breaches, bad or misinformed decision making, and suffer from their reliance on poor code. – Grant Fritchey, Product Advocate at Redgate Software 

A Push for Greater AI Explainability: The business community has witnessed significant advances in artificial intelligence over the last two years. Yet a defining characteristic of sophisticated AI systems, including neural networks, is that they do not always behave as we might expect. Indeed, the path an AI system chooses to arrive at a destination may vary significantly from how a human expert would respond to the same challenge. Studying these choices – and building in tools for AI explainability – will become increasingly important as AI systems grow more sophisticated. Organizations must have the ability to analyze the decision-making of AI systems to put adequate safeguards in place. Additionally, the outputs that AI systems provide to explain their thinking will be critical toward making further improvements over time. – Paul Barrett, CTO at NETSCOUT

Balancing act of AI content and bans – Visibility vs. Control: Publishers’ consideration of AI bans stems from a desire to maintain control over their content. However, this approach may result in decreased visibility in search results as search engines increasingly rely on AI to curate content. Integration vs. Exclusion: While some brands may see AI bans as a way to protect their content, they risk missing out on the advantages that AI, especially LLMs, can provide in content matching and query understanding. The reasoning against AI bans is that LLMs can leverage alternative means to access content, making total exclusion challenging. Balancing Act: Brands will need to find a balance between protecting their content and leveraging AI to increase their visibility and relevance in search results. This might involve developing nuanced policies that regulate AI interaction with content without full exclusion. – A.J. Ghergich, VP of Consulting Services, Botify

AI can certainly help clean up “messy data”, but it’s also a bit circular in that AI use should be based on strong data governance, as data protection law requires companies to understand which personal data is used in AI use cases.  As such, in 2024 we will see a bigger focus on data inventory and classification as a necessary foundational piece for companies that want to lean into the power of AI. – Seth Batey, Data Protection Officer and Senior Managing Privacy Counsel at Fivetran
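
As a sketch of what that foundational inventory work can look like, below is a minimal rule-based scan that tags columns containing likely personal data. The patterns, column names, and sample records are illustrative assumptions; a real classification effort would use far richer detectors and a proper catalog.

```python
import re

# Illustrative patterns only; production classifiers add dictionaries,
# checksums, and ML-based detectors on top of simple rules like these.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_columns(rows):
    """Tag each column with the PII types detected in its sample values."""
    findings = {}
    for row in rows:
        for column, value in row.items():
            for label, pattern in PII_PATTERNS.items():
                if pattern.search(str(value)):
                    findings.setdefault(column, set()).add(label)
    return findings

sample = [
    {"name": "Ada", "contact": "ada@example.com", "notes": "call 555-867-5309"},
    {"name": "Grace", "contact": "grace@example.com", "notes": "n/a"},
]
print(classify_columns(sample))  # {'contact': {'email'}, 'notes': {'us_phone'}}
```

An inventory like this, kept current, is what lets a privacy team answer which AI use cases touch personal data before a model ever sees it.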

In my opinion, the marketing world is poised for a paradigm shift from broad marketing monologues to interactive, AI-driven customer dialogues. This change will mandate reevaluating marketing technology stacks to prioritize real-time, meaningful interactions. Simultaneously, personalization will transition from perceived intrusiveness to trust-building through responsive dialogues. I believe this will gradually phase out traditional navigation, like drop-down menus, in favor of search and chat interfaces. In this evolving landscape, companies will recognize that their AI strategy is intrinsically linked to their data strategy. Emphasizing lean data becomes essential to leverage new interfaces and tools effectively and compliantly, ensuring that data quality and relevance are at the forefront of these technological advancements. – Christian Ward, Executive Vice President & Chief Data Officer of Yext

AI is already proving to be an incredibly powerful tool for developers, though many are skeptical about the extent of its capabilities and concerned about the potential it has to disrupt traditional workplace practices, jobs, and processes. From my point of view, AI is set to enhance developers’ everyday workflow, rather than replace it. More and more developers will use AI to automate simple tasks such as scanning for performance issues, spotting patterns in workflows, and writing test cases. Instead of “AI-jacking,” it will actually free up developers to spend more time on impactful, innovative work. – Dana Lawson, Senior Vice President of Engineering at Netlify

Artificial intelligence will bring teams closer together as leaders across every industry begin to embrace the technology: Within the next year, AI will become the primary driver of the development life cycle — not just as an IT assistant, but as a collaborative tool. Developer and engineering teams have had their work largely restricted to the backend, but I anticipate IT leaders will become key advisors as AI becomes more ingrained in a business’ overarching goals. Both technical and non-technical staff will need to align on their AI strategy in tandem as organizations seek to utilize AI for automation, prototyping, testing, and quality assurance to drastically reduce the time needed to develop new projects. This will enable technical staff to innovate more frequently, and non-technical staff to have a stake in building solutions, rather than just providing requirements. – Ed Macosky, Chief Product and Technology Officer, Boomi

On adopting/investing in AI: Investing in AI tools can be a lever that helps some developers become more productive. The more training on prompting, the more likely it is that you will get increased productivity from developers. The downside is that often the AIs don’t really know the problem space and might be using code that is subpar. Lots of training code out there on the Internet isn’t suitable for your application. Some of it isn’t suitable for any application, so expecting an AI to make developers better is unlikely to work. AI is a tool or lever, not a substitute for training and skill. – Steve Jones, DevOps Advocate at Redgate Software

The digital capacity race to fuel AI advancements: AI is a data-hungry technology, and the demand for bandwidth to move and process that data will skyrocket in the coming years. AI applications are evolving much faster than infrastructure can be built, leading to the risk of a capacity shortage. Network infrastructure must rapidly develop to meet connectivity demands and avoid the crunch. This will require investment in new technologies and infrastructure and a more collaborative approach between network operators, hyperscale giants and other stakeholders. AI is nothing short of a trillion-dollar opportunity, and it will drive unprecedented demand for bandwidth, making it much different from other hype cycles like 5G and IoT, where monetization is unclear. Industries that rely heavily on data and computing — such as healthcare, finance, and manufacturing — will be the first to reap the benefits of AI. Hyperscale giants will invest heavily in digital infrastructure to prepare for this surge, and as we look ahead, smaller players must follow suit or get left behind. – Bill Long, CPO of Zayo

Companies will prioritize minding the gap between data foundations and AI innovation. There is no AI strategy without a data strategy and companies will need to prioritize closing gaps in their data strategy; specifically, the foundational elements of more efficiently accessing more accurate data securely. – Justin Borgman, Cofounder and CEO, Starburst

As a whole, the bar for understanding and harnessing the full value of AI is still low, but it won’t be for long as market pressures continue to accelerate AI adoption. The future of enterprise AI will be centered on AI being built into the products and services already in use. But as AI innovation evolves, we’ll see enterprises learn to build their own in-house AI data platform and move part of their workflows into their own infrastructure. For enterprises that want to get ahead of the curve, it’s critical that they start investing in building their in-house expertise now. A central ‘center of excellence’ for AI and data science will be more beneficial than individual AI projects scattered around the company. – Pure Storage Analytics & AI Global Practice Lead, Miroslav Klivansky

Real-Time AI Monitoring: A Data-Driven Future: 2024 will witness the rise of real-time AI monitoring systems, capable of detecting and resolving data anomalies instantaneously. This transformative technology will ensure data reliability and accessibility, especially for the ever-growing volume of unstructured data. – CEO of Acceldata, Rohit Choudhary

After the boom, there will be an extinction for many AI companies as a direct result of enhanced scrutiny around data privacy, security and safety. As such, 2024 will be the year of the secure, safe harbor AI company, and the explosion in AI investment and innovation will both consolidate and accelerate. Winners will begin to emerge in all fields. AI will go mainstream, no longer serving as a supportive tool for experimental production, but a vital, strategic business asset. It will operate at warp speed and drive major business decisions by the end of 2024. AI models and chips that offer increased compute power while simultaneously reducing energy consumption and lowering total cost of ownership will trend. In other words, ESG (environmental, social and governance) will quickly become the new North Star. – SambaNova’s CEO, Rodrigo Liang

Big Data

Investment in digital transformation will be a priority on the CIO agenda for 2024, especially with rising inflation, as this will allow for greater risk management, reduction in costs, and improved customer experience. Additionally, following the trend we’ve seen this year, there will also be continuous investment in generative AI. Equally crucial to assessing our initial business needs and objectives is our commitment to establishing guidelines that prioritize responsible use. Finally, as an industry, I believe we need to embrace data silos. We can’t eliminate silos, so we need to better enable them and give them the ability to pull the vetted data they need. – Danielle Conklin, CIO at Quility

The innate characteristics of big data – volume, velocity, value, variety, and veracity – remain the same from year to year, while the technologies that emerge each year help us use domain knowledge to contextualize data and gain more insights, accelerating business transformation. – Dr. Ahmed El Adl, Senior Advisor, Sand Technologies

Big data insights won’t be just for data scientists anymore: The ability to extract meaningful business insights from big data has largely been the domain of the highly specialized data scientist. But, as in cybersecurity, these experts are rather few and far between, and more and more teams are placing demands on this finite resource. In the coming year, we’ll see this change exponentially. Data fabric platforms, and data science and machine learning (DSML) platforms, are changing the game, unifying and simplifying access to enterprise data. The more user-friendly interfaces of these platforms give more people on more teams the ability to see and act on threats or other challenges to the business. The democratization of data comes none too soon, as advancements in AI are making it easier for bad actors to infiltrate. With more eyes watching and able to take protective action, enterprises have a real shot at staying ahead of threats. – Nicole Bucala, vice president and general manager, Comcast Technology Solutions

Chief Data Officers (or any data leaders for that matter) will need to be change management specialists first and data specialists second to be successful in 2024. Creating a data culture is the exact opposite of the “Build it and they will come” approach from Field of Dreams; CDOs have found themselves too often in a field alone with only their own dreams. You have to bring the “data dream” to all areas of the organization to make a data-driven culture a reality; generative AI is the most tangible and relatable vessel that CDOs have ever had to do just that. – Niamh, Senior Manager of Solution Architecture at Fivetran

In the upcoming year, we predict growing demand for evolved data lakes and for ways genAI can help make Big Data more accessible to organizations. Business leaders will be seeking more than just an organized storage space; they will be looking for an intelligent and interactive platform that fosters meaningful dialogues with data, translating it into actionable insights. Large Language Models (LLMs) in genAI have introduced new opportunities to bridge the gap between Big Data and decision-making. Powered by LLMs, intelligent agents will have the ability to understand and respond to natural language queries, breaking new ground for businesses by allowing their users to engage with data in a conversational manner. This shift propels organizations toward well-organized data repositories, empowering users to gain real understanding from their data. – Nirav Patel, CEO, Bristlecone

2024 is the year we stop moving data and start working with data in place: Data growth has outpaced connectivity for over two decades, leading to an exponential problem. Exponential problems can suddenly become overwhelming, like a jar filled with grains of sand that are doubled daily. One day it’s half full; the next it’s overflowing. Data transfer rates cannot meet our needs, prompting solutions like Amazon’s AWS Snowmobile, a 45-foot-long shipping container pulled by a truck designed to transport exabyte-scale data. We’ve reached a point where we can’t move all the data to where it needs to be analyzed or used – we’ve shifted from data centers to centers of data. Exabytes of data are generated daily at the edge (e.g., factories, hospitals, autonomous vehicles) to power new AI models. However, our AI ecosystem primarily resides in the cloud, and shifting this immense volume of data from the edge to the cloud is not feasible. In 2024, we foresee the rise of tools that allow us to work with data in place without moving it. These tools will enable cloud applications to access edge data as if it were local or data center apps to access cloud data as if it were local. Welcome to the era of data everywhere. – Kiran Bhageshpur, CTO at Qumulo
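
A minimal sketch of what working with data in place can look like today: the open-source DuckDB package can query Parquet files sitting in object storage directly, pulling only the column chunks a query needs rather than copying whole files. The bucket path here is a hypothetical placeholder, and S3 credentials and region configuration are assumed to be set up separately.

```python
# pip install duckdb
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")  # extension for reading S3/HTTP data in place
con.execute("LOAD httpfs")

# Hypothetical bucket: aggregate edge telemetry straight out of object
# storage, without first shipping the files to where the compute lives.
result = con.sql("""
    SELECT site, avg(temperature) AS avg_temp
    FROM read_parquet('s3://example-edge-bucket/telemetry/*.parquet')
    GROUP BY site
    ORDER BY avg_temp DESC
""")
print(result)
```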

Cloud

Cloud and OS agnostic high availability becomes an expected requirement for most applications: IT teams will look for application HA solutions that are consistent across operating systems and clouds, reducing complexity and improving cost-efficiency. As the need for HA rises, companies running applications in both on-prem and cloud environments, as well as those running applications in both Windows and Linux environments, will look to streamline their application environments with HA solutions that deliver a consistent user interface across all of their environments, along with matching cloud and OS technical support and services from the HA vendor. – Cassius Rhue, Vice President, Customer Experience, SIOS Technology

Organizations will continue looking for public cloud DBaaS alternatives: What we hear from our users, customers and the market in general is that they want public cloud DBaaS alternatives. There are multiple reasons for this – for example, they may want more independence from their vendor, they may want to optimize costs, or they may want more flexibility around their database configurations. Right now, the market provides a limited number of alternatives to those willing to make a change. Rather than looking at DBaaS from a specific provider, there is a gap in the market for an open source private database platform that gives organizations and IT teams greater control over data access, configuration flexibility, and the costs associated with cloud-based databases. The growth of Kubernetes and Kubernetes operators has made it easier to implement this kind of approach, but there are still multiple gaps that make it harder to deploy and run in production. Closing those gaps and making fully open source DBaaS options available will be something that comes to fruition in 2024. – Aleksandra Mitroshkina, Senior Manager, Product Marketing, Percona

Building starts with a prompt and hosting with the cloud: In the near future, AI-driven large language models (LLMs) will keep revolutionizing server-based (virtualized) computing, where fast deployment with automation tools will drive the change. It starts with a simple prompt to create a website, followed by additional directions specifying what kind of website you are building. Cloud hosting will be top of mind, with the ability to scale, load-balance, secure, and handle large amounts of traffic as online presence grows. For reliability, security, and flexibility, more and more users may want to switch to a multi-cloud approach, thus avoiding lock-in to a single provider. Serverless functions, which enable running code on demand without needing to manage infrastructure, provision servers, or upgrade hardware, will increasingly become the go-to architecture for developers. This simplifies the deployment process, allows for more efficient resource allocation, and will lead to substantial savings of effort and time. As quantum computing advances, even if slowly, it will disrupt traditional encryption methods. Cloud hosting providers must adapt by offering quantum-resistant security solutions to protect sensitive data. Rising energy prices will drive the adoption of more sustainable practices in cloud hosting. More providers will commit to using renewable energy, reusing wastewater, reducing carbon footprints, and promoting eco-friendly cloud services. – Mark Neufurth, Lead Strategist at IONOS

Database/Data Warehouse/Data Lake/Data Management

Data models will reach a tectonic shift away from highly structured traditional databases. As more companies integrate AI capabilities to gain a competitive edge and transform the real-time pace of business, the historical approach to data management will fall by the wayside and there will be a need for a new data model to take its place. – VC firm General Catalyst

A new class of data warehousing will emerge: Snowflake, BigQuery, and Redshift brought enterprise data to the cloud. In 2024 we’ll see a new generation of databases steal workload from these monolithic data warehouses. These real-time data warehouses will do so by offering faster and more efficient handling of real-time data-driven applications that power products in observability and analytics. – ClickHouse’s VP of Product, Tanya Bragin

SQL is here to stay: Structured Query Language or SQL is proclaimed too old-fashioned every few years and in 2024 proposals to use LLM AI tools to generate database queries will get a lot of attention. But one of the reasons SQL is the only programming language from the 1970s that still gets used so widely today is its power in querying data. You may not like the syntax. You may find its rules somewhat arbitrary. You may have gripes about learning such an old language. But for decades, SQL has proven itself again and again as the premier tool to manipulate data. It won’t be going out of fashion any time soon. – Dave Stokes, Technology Evangelist, Percona
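
A small illustration of that staying power, runnable against Python's built-in sqlite3 module (the table and figures are invented for the example): a per-region running total expressed in a few declarative lines.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('east', '2024-01', 120), ('east', '2024-02', 150),
        ('west', '2024-01', 200), ('west', '2024-02', 180);
""")

# One declarative statement computes a running total per region --
# the kind of data manipulation that keeps SQL in service five decades on.
query = """
    SELECT region, month, revenue,
           SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM sales
"""
for row in con.execute(query):
    print(row)
```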

Rise of the Data Lakes and Fall of Data Lake Vendors: While some companies may choose to collect less data, increasing regulatory requirements mean that most teams have no choice but to do more with less. As they struggle to find cost-effective means to store data of unpredictable value, companies are increasingly reconsidering data lakes. Data lakes were once considered the final resting place for unstructured data, but I see the migration to them accelerating in 2024, driven by increasing storage costs, as well as advancements in query capabilities across data lakes and object storage, and the comparative ease with which data can be routed into them. With the ability to quickly and cost-effectively search large data stores, companies will start using data lakes as a first stop, rather than a final destination, for their data. This will cause a shift of data volumes out of analytics platforms and hot storage into data lakes. In contrast to this growth, we anticipate data lake vendors who are not best-of-breed may see slowing growth and consolidation next year, as the market matures from theory and deployment to reality and utilization. For the segments of industries that experienced outsized growth leading into the looming economic downturn, this pain will be more acute, and data lake vendors are definitely on that list. – Nick Heudecker, Senior Director, Market Strategy & Competitive Intelligence, Cribl

English will replace SQL as the lingua franca of business analysts: We can anticipate a significant mainstream adoption of language-to-SQL technology, following successful efforts to address its accuracy, performance, and security concerns. Moreover, LLMs for language-to-SQL will move in-database to protect sensitive data when utilizing these LLMs, addressing one of the primary concerns surrounding data privacy and security. The maturation of language-to-SQL technology will open doors to a broader audience, democratizing access to data and database management tools, and furthering the integration of natural language processing into everyday data-related tasks. – Nima Negahban, CEO and Cofounder, Kinetica
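
The basic language-to-SQL pattern behind this prediction can be sketched in a few lines. Here, call_llm is a hypothetical stand-in for whatever model client is in use (in-database or external), and the generated statement is compiled against the schema before anything runs, one simple guard against the accuracy concerns noted above.

```python
import sqlite3

SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, placed_at TEXT);"

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a language-to-SQL model call."""
    # A real implementation would send `prompt` to an LLM and return its answer.
    return "SELECT customer, SUM(total) AS spend FROM orders GROUP BY customer ORDER BY spend DESC;"

def question_to_sql(question: str) -> str:
    prompt = (
        "Given this schema:\n" + SCHEMA +
        "\nWrite one SQLite query that answers: " + question
    )
    sql = call_llm(prompt)
    # Compile (but do not run) the generated SQL against an empty copy of
    # the schema; invalid SQL raises here instead of reaching real data.
    scratch = sqlite3.connect(":memory:")
    scratch.executescript(SCHEMA)
    scratch.execute("EXPLAIN " + sql)
    return sql

print(question_to_sql("Which customers spend the most?"))
```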

Open formats are poised to deal the final blow to the data warehouse model. While many anticipate the data lakehouse model supplanting warehouses, the true disruptors are open formats and data stacks. They free companies from vendor lock-in, a constraint that affects both lakehouse and warehouse architectures. – Justin Borgman, Cofounder and CEO, Starburst

Data-first architecture and data management strategies: We’re about to see another explosion in the data people are keeping. By 2025, global data creation is projected to grow to more than 180 zettabytes. Data is becoming more valuable to organizations, even if they don’t know how they’re going to use it or need it in the long term. The data explosion will continue to drive the need for highly available and scalable solutions. To take advantage of this burst, organizations will need to democratize data across departments for a data-first approach, so that data truly benefits every aspect of the organization. – Jeff Heller, VP of Technology and Operations, Faction, Inc.

Data Engineering

AI Technology Will Not Replace Developers: AI is moving to the forefront of software development, with IT leaders using AI to speed time to market and alleviate the developer shortage. While generative AI–based tools can speed up many common developer tasks, complex tasks remain in the domain of developers for now. AI technology will be used to augment developers rather than replace them as some tasks continue to demand skilled developer expertise. – Jason Beres, Sr. VP of Developer Tools at Infragistics

AI-generated code will create the need for digital immune systems: In 2024, more organizations will experience major digital service outages due to poor quality and insufficiently supervised software code. Developers will increasingly use generative AI-powered autonomous agents to write code for them, exposing their organizations to increased risks of unexpected problems that affect customer and user experiences. This is because the challenge of maintaining autonomous agent-generated code is similar to preserving code created by developers who have left an organization. None of the remaining team members fully understand the code. Therefore, no one can quickly resolve problems in the code when they arise. Also, those who attempt to use generative AI to review and resolve issues in the code created by autonomous agents will find themselves with a recursive problem, as they will still lack the fundamental knowledge and understanding needed to manage it effectively. These challenges will drive organizations to develop digital immune systems, combining practices and technologies for software design, development, operations, and analytics to protect their software from the inside by ensuring code resilience by default. To enable this, organizations will harness predictive AI to automatically forecast problems in code or applications before they emerge and trigger an instant, automated response to safeguard user experience. For example, development teams can design applications with self-healing capabilities. These capabilities enable automatic roll-back to the latest stable version of the codebase if a new release introduces errors or automated provisioning of additional cloud resources to support an increase in demand for compute power. – Bernd Greifeneder, Chief Technology Officer and Founder, Dynatrace   
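
A minimal sketch of the self-healing reflex described here, with invented wiring rather than any particular vendor's implementation: after a deploy, a failing health probe triggers automatic reversion to the last stable version.

```python
import time

def self_healing_deploy(deploy, probe, rollback, checks=5, interval=2.0):
    """Deploy a release, watch a health probe, and revert automatically
    if the new version fails -- the 'immune response' in miniature."""
    deploy()
    for _ in range(checks):
        if not probe():
            rollback()  # restore the last known-stable codebase
            return "rolled back"
        time.sleep(interval)
    return "healthy"

# Illustrative wiring; in practice these callables would hit a CI/CD or
# platform API, and the probe would check real user-facing signals.
live = []
status = self_healing_deploy(
    deploy=lambda: live.append("v42-candidate"),
    probe=lambda: False,          # simulate failing health checks
    rollback=lambda: live.append("v41-stable"),
    interval=0.0,
)
print(status, live)  # rolled back ['v42-candidate', 'v41-stable']
```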

Data Governance and Regulation

40% of enterprises will proactively invest in AI governance for compliance. With the EU due to pass the new EU AI Act soon, the US rushing regulators to produce AI and generative AI collaterals, and China’s recent genAI regulation, some companies will push even harder on AI compliance. Failure to do so means missing compliance deadlines and having to retrofit AI governance, which increases complexity, cost, and time. To meet current and future compliance requirements, enterprises will invest in acquiring new technology, filling the talent gap, and securing the third-party support they need. – Forrester

Data governance will evolve into data intelligence: Data loss prevention and protection strategies ruled the roost during the early days of data governance. Although still useful for meeting governmental requirements, these tools may impede the effective exploitation of data. When data is locked away tightly, stewards can’t understand how their data is used, moved or accessed, so they cannot effectively improve their data storage and implementation practices. But I foresee a change coming soon. Yes, data governance will remain vital for maintaining compliance. However, evolved data intelligence capabilities have now emerged, allowing practitioners to not only control data but also understand it — and these capabilities are a must in the modern business world. Mining metadata to comprehend its lifecycle will allow teams to more effectively support their business requirements. These enlightened governance strategies will help organizations achieve mutual goals of data compliance while also uncovering granular data insights. – Brett Hansen, Chief Growth Officer at Semarchy

AI will be dragged through a messy regulatory maze. Regulations will rain down on AI from all corners of the world, creating a complex regulatory maze that will be challenging for companies to navigate. Specifically, within the United States, AI regulation could and likely will vary on a state-by-state or even a city-by-city basis, similar to how tax laws currently vary by jurisdiction. In 2024, as organizations work to address a patchwork of regulatory AI frameworks, they must ask themselves: ‘Should AI be enabled here, and if so, how?’ – David Lloyd, Chief Data Officer, Ceridian

The U.S. is unlikely to enact laws related to AI in 2024: If history is any indication, it will take a long time for legislators to develop a working knowledge of AI, understand their options, and develop a sufficient consensus to enact a law. Predicting the outcome of any complex political process is difficult, especially with an impending presidential election. However, there is a sense of urgency given how generative AI took hold of the public’s imagination in 2023, which may have been an impetus for President Biden’s Executive Order (EO) on Safe, Secure, and Trustworthy AI. In lieu of federal law to guide the use and development of LLMs and AI, the EO will help to further AI safety and security by leveraging the power and resources of Executive branch departments such as Homeland Security, Defense, Energy, and Commerce. The government’s influence on markets via its broad purchasing power will also be leveraged to drive the development and adoption of safety and security controls. – Maurice Uenuma, VP & GM, Americas at Blancco

Trusted data will become the most critical asset in the world: The critical role of trusted data in AI systems is becoming a cornerstone for the future of technology. Ensuring the information that comes out of an AI system is trustworthy is just as critical as trusting the data that goes in. In a world that’s inching closer and closer to artificial general intelligence (AGI), knowing what to trust and who to trust will be critical to everything we learn and everything we think we know. Highlighting this shift, Forrester predicts that domain-specific, Large Language Model (LLM)-infused digital coworkers will soon assist with 1 in 10 operational tasks. When tailored to specific business needs, these LLMs promise substantial investment returns. This trend has led organizations to focus more on finding, understanding, and governing high-quality, dependable data, which is vital for training AI models tailored to specific business requirements. The result is that AI governance is going to gain importance quickly. It involves more than just managing data; it’s about understanding the entire lifecycle of information and models. The analogy of data as the new oil now seems insufficient in the era of generative AI and the challenges hallucinations bring. Merely amassing and analyzing large data sets is no longer adequate in today’s business environment. In 2024 and beyond, trusted data – and all the tools associated with building trust in data – will be the number one commodity for organizations. – Satyen Sangani, CEO and co-founder of Alation

Generative AI adoption will slow amid regulatory hurdles, shifting focus to enterprise data usability: After its 2023 limelight, generative AI will face regulatory headwinds in the new year, causing businesses to tread more cautiously into 2024. The looming regulations and mounting security concerns are prompting organizations to hit the brakes on wholesale adoption. While pilot initiatives will be numerous, many may not achieve the desired outcomes, tempering enterprise enthusiasm. As the AI evaluation intensifies, vendors will face heightened scrutiny. Yet, this scrutiny could pave the way for a more data-centric, user-friendly application landscape. – Nick Heinzmann, Zip Head of Research

Data Integration, Data Quality

Businesses Big and Small Will Prioritize Clean Data Sets: As companies realize the power of AI-driven data analysis, they’ll want to jump on the bandwagon – but won’t get far without consolidated, clean data sets, as the effectiveness of AI algorithms is heavily dependent on the quality and cleanliness of data. Clean data sets will serve as the foundation for successful AI implementation, enabling businesses to derive valuable insights and stay competitive. – Arina Curtis, CEO and co-founder of DataGPT

Data Mesh, Data Fabric

Data fabric and data mesh will continue to be hot topics as companies look to share data across distributed environments. One practical approach is to implement a data mesh architecture: let each business unit design its own data solution, then connect it only to the larger-scale components it needs. – Manish Patel, Chief Product Officer at CData

Data Observability

Data Observability: Data observability emerges as a critical trend, proactively ensuring data quality and addressing anomalies throughout data pipelines. The five key pillars of data observability are lineage, quality, freshness, volume, and schema drift. Actively monitoring these pillars in cloud setups can reduce costs significantly, potentially by 30-40%. The significance lies in the fact that high-quality data is imperative for informed decision-making. Ensuring proper observability across the landscape enables users to access trustworthy and curated data assets for valuable insights. – Arnab Sen, VP, Data Engineering, Tredence Inc.
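
For readers who want a feel for what monitoring these pillars involves, here is a minimal Python sketch of two of them, freshness and volume, checked against a pandas DataFrame. The thresholds and data are illustrative only; production systems run such checks continuously across every pipeline.

```python
# Minimal sketch of two observability pillars, freshness and volume,
# checked against a pandas DataFrame. Thresholds and data are illustrative.
from datetime import datetime, timedelta, timezone

import pandas as pd

now = datetime.now(timezone.utc)
df = pd.DataFrame({
    "event_time": [now - timedelta(minutes=30), now - timedelta(minutes=5)],
    "value": [10, 12],
})

def check_freshness(frame: pd.DataFrame, max_lag: timedelta) -> bool:
    """Fail if the newest record is older than the allowed lag."""
    newest = frame["event_time"].max().to_pydatetime()
    return datetime.now(timezone.utc) - newest <= max_lag

def check_volume(frame: pd.DataFrame, expected_rows: int,
                 tolerance: float = 0.5) -> bool:
    """Fail if the row count falls too far below the expected volume."""
    return len(frame) >= expected_rows * (1 - tolerance)

print("fresh:", check_freshness(df, max_lag=timedelta(hours=1)))
print("volume ok:", check_volume(df, expected_rows=2))
```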

Deep Learning

Deep fake danger: 2024 will bring forth a slew of deep fake dangers consumers should be wary of – especially in virtual customer service settings. Identification and verification (ID&V) is a standard practice in most industries, establishing customer identity and the right to transact. However, deep fakes have the potential to overcome biometric verification and authentication methods – making identity theft far easier. A customer could even generate a fake image implicating a company’s product in a crime. And this is just the beginning. Deep fake tech is in its infancy, and will only get better and more cunning. Fortunately, since a stolen identity can let bad actors pass ID&V in some circumstances, more predictive signals can be used to detect that fraud is potentially occurring. Technology is evolving to address these issues, and we’ll undoubtedly see major tech innovation in the year ahead on both sides of the coin. – Brett Weigl, SVP & GM – Digital, AI, and Journey Analytics, Genesys

Generative AI

Generative AI Will Move to Modern Data Management. Historically, data management has been a bit of a black box, with highly technical skills required to create a strategy and manage data efficiently. With the help of LLMs, modern data management will change its framework, allowing users to participate in the entire data stack in a fully governed and compliant manner. – Vasu Sattenapalli, CEO at RightData

AI will reach the “plateau of productivity”: In 2023, with the release of ChatGPT, we witnessed inflated expectations and billions of dollars poured into AI startups. In 2024, we’ll see Generative AI enter Act 2.0, with companies building not just a foundation model but a holistic product solution with re-imagined workflows. We’ll see the market transition from the noise of “everyone can do everything” to a few winning GenAI companies delivering real value. – Tim Shi, co-founder and CTO of Cresta

There is going to be a rapid shift from infrastructure-based Gen AI to local Gen AI, because right now running it locally isn’t really possible. The average startup doesn’t have thousands of dollars to throw at a cloud provider, and running models yourself is almost impossible today – but that is changing quickly with the innovation around local generative AI. With it going local, you will have a complete RAG stack under your control, with your access controls. That way, you won’t have to expose your proprietary data in any way. When we go from centralized, API-based LLMs to local LLMs, it will happen quickly. The ones that work will be adopted like wildfire. Just be mindful of the downside, as decentralized LLMs introduce the concept of bad actors in the loop. – Patrick McFadin, VP of Developer Relations, DataStax

Large language models will commoditize in 2024: There’s a huge race for companies today to build their own unique large language models (LLMs), like OpenAI’s GPT-4 or Meta’s LLaMA. However, I predict that these models will commoditize in 2024. The differentiation will come down to what data is being fed into the LLM and what its purpose is. This is similar to what happened in cable TV and streaming, where one monthly cable bill turned into a number of disparate streaming subscriptions. We’re seeing a similar “unbundling” of AI models, with the formation of many new companies that each have their own differentiated models. In the future, these AI models will likely aggregate back into a single technology, with data as the unique differentiator. – Spencer Thompson, Co-Founder and CEO, Prelude Security

In 2024, an important impact that generative AI will have is empowering people to discuss their financial worries or hardships without fear or embarrassment. For some, it is easier to talk to a chatbot than a live human when seeking advice about financial matters. By providing a confidential and non-judgmental way to get financial advice and support, AI will create a more financially inclusive future where everyone has access to the financial advice and support they need, regardless of their background or circumstances. – David Dowhan, Chief Product Officer of SavvyMoney

As generative AI becomes more mainstream, the potential productivity gains will significantly benefit organizations. We will see tech leaders invest more in training, the setup of innovation centers, and the adoption of new development platforms that maximize the value tech teams deliver. Tech leaders will need to take a two-pronged approach, enabling creative playgrounds for data experimentation while applying AI services to accelerate outcomes. All of this will be required to govern innovative creation and mitigate the risks associated with public AI models. – Miguel Lopes, VP Analyst Relations at OutSystems

A year into the ChatGPT-induced AI revolution, will we soon be surrounded by dramatic GenAI success stories, or will we see the fastest collapse of a technology into the trough of disillusionment to date? Both! AI-savvy enterprises are already augmenting their most valuable employees – and occasionally automating them – and the trend will gain momentum as clear, repeatable GenAI use cases mature and investments in MLOps and LLMOps bear fruit. Meanwhile, most PoCs – dazzled by the mirage of democratized, outsourced GenAI – crash headfirst into the realities of operationalizing production-grade GenAI applications, leading to widespread disillusionment. It turns out that human intelligence about AI is the most important factor for GenAI success, and generative pre-trained transformer models are more valuable when they are specialized for specific use cases and verticals. – Dr. Kjell Carlsson, head of AI strategy at Domino Data Lab

LMMs will help generative AI to reason more and hallucinate less: AI is moving beyond the Large Language Model (LLM) text world of ChatGPT and the landscapes of Midjourney to Large Multimodal Models (LMMs), systems that can reason across different media types. This is opening up new types of applications and possibilities, such as image-based inventory or virtual product support assistants for small businesses, and may help ground future AI systems in more real-world examples that mitigate the potential for hallucination. We expect many more applications over the next 12 months, and as generative AI learns with sound, vision, and other senses, the near future may bring with it AI systems that can distinguish between reality and fiction. – Ashok Srivastava, Senior Vice President & Chief Data Officer at Intuit

The commoditization of analytics: Natural Language Processing (NLP) has been instrumental in increasing the adoption of analytics among users. Now, the right mix of NLP and Large Language Models (LLMs) will help further commoditize analytics. LLMs have been helpful in assisting users with complex computations in analytics software. Analytics vendors will build such features natively into their software rather than depending on LLMs to fill the gaps, mitigating the privacy concerns LLMs introduce. – Rakesh Jayaprakash, product manager, ManageEngine

ChatGPT Will No Longer Be the Prevailing Technology for the Enterprise by 2025: Like most first movers in technology, ChatGPT will become less and less relevant as the year progresses. Local LLMs like Llama2 (and whatever comes next) will become the engine of corporate AI. There are many reasons for this, but data security and the ability to influence the results by augmenting a local LLM with industry-specific content are likely to be the two that drive this change. – Jeff Catlin, EVP of AI Products at InMoment

AI Cold Shower: 2024 will be the year that Generative AI faces a ‘cold shower’ wake-up call, according to new data from CCS Insight. Companies have been pulled in by the overhype around AI to develop hopeful long-term objectives for productivity and transformation. With these blinders on, many have overlooked the burdens of cost, risk, and complexity involved with adopting and deploying Gen AI. And it’s only getting worse – now we’re being told that by 2027, AI may need as much electricity as an entire country. The promise of AI is huge, but resources are a problem. Not every organization or government can afford it, and not everyone has the resources to embed it into their existing systems and processes. The world is still in the early stages of developing AI regulations, and the absence of set boundaries and safety nets could put many industries at risk. We have already seen a period of fragmentation when it comes to AI. The fact is AI developments are moving faster than many are prepared for, and the technology needs different resources to run. To prevent being caught in the “cold shower” next year, organizations must strategically invest in how they will power the AI of the future (investing in things like photonics and digital twins, to address the underlying problem of inequity in resources). Harnessing the power of cutting-edge technologies can help build a smarter world, where people and society are optimized using all types of accessible, connected and cohesive information. – Tanvir Khan, Chief Digital and Strategy Officer, NTT DATA

Data Poisoning: The Newest Threat to Generative AI: Perhaps nothing illustrates the rapid mainstreaming of machine learning and artificial intelligence more than ChatGPT. But as algorithms become a staple of everyday life, they also represent a new attack surface. One such attack, data poisoning – in which adversaries corrupt a model’s training data – is becoming more prolific as bad actors gain access to greater computing power and new tools. Looking ahead to 2024, considering the popularity and uptake of new machine learning and AI tools, companies can expect to see an increase in data poisoning attacks, including availability attacks, backdoor attacks, targeted attacks, and subpopulation attacks. The unfortunate reality is that data poisoning is difficult to remedy. The only solution is to retrain the model completely, but that’s hardly simple or cheap. As organizations use artificial intelligence and machine learning for a broader range of use cases, understanding and preventing such vulnerabilities is of the utmost importance. While generative AI has a long list of promising use cases, its full potential can only be realized if we keep adversaries out and models protected. – Audra Simons, Senior Director, Global Products, Forcepoint Global Governments
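
A toy experiment makes the availability-attack variant tangible: flipping a fraction of training labels measurably degrades a model. The dataset and classifier below are illustrative stand-ins, not a real attack tool.

```python
# Toy demonstration of a label-flipping (availability) poisoning attack:
# flipping a fraction of training labels degrades test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_flipped_labels(flip_fraction: float) -> float:
    """Train on poisoned labels, score on the clean test set."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.2, 0.4):
    acc = accuracy_with_flipped_labels(frac)
    print(f"{frac:.0%} labels flipped -> test accuracy {acc:.3f}")
```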

GenAI will change the nature of work for programmers and how future programmers learn. Writing source code will become easier and faster, but programming is less about grinding out lines of code than it is about solving problems. GenAI will allow programmers to spend more time understanding the problems they need to solve, managing complexity, and testing the results, resulting in better software: software that’s more reliable and easier to use. – Mike Loukides, Vice President of Emerging Tech Content at O’Reilly Media

Primary value use cases of adopting LLMs in the enterprise will finally be established. While 2023 was about dreaming up the possibilities of generative AI, 2024 will be the year that the enterprise puts it into action. After a year of speculation, businesses will finally get specific about applying LLMs to streamline their workflows. By the end of the year, there will be a handful of named scenario-based areas of value that people understand, moving us past “what-ifs” and shedding light on clear use cases. – VC firm General Catalyst 

Generative AI will continue to face organizational scrutiny: With the rapid growth of generative AI tools in 2023, organizations will intensify their scrutiny of the effects of AI tools on their employees and systems in the new year. One challenge is the persistence of misinformation and questions around the legality of AI tools, including exposed source code and the ability to determine the legitimacy of the results that employees are receiving. Leaders will need to establish methods to validate and authenticate information, while defining clear parameters determining how employees can use AI tools within their organization. – Bret Settle, Chief Strategy Officer, ThreatX

Generative AI will unlock the value and risks hidden in unstructured enterprise data: Unstructured data — primarily internal document repositories — will become an urgent focus for enterprise IT and data governance teams. These repositories of content have barely been used in operational systems and traditional predictive models to date, so they’ve been off the radar of data and governance teams. GenAI-based chat bots and fine-tuned foundation models will unlock a host of new applications of this data, but will also make governance critical. Companies who have rushed to develop GenAI use cases without having implemented the necessary processes and platforms for governing the data and GenAI models will find their projects trapped in PoC purgatory, or worse. These new requirements will give rise to specialized tools and technology for governing unstructured data sources. – Nick Elprin, co-founder and CEO, Domino Data Lab

OpenAI Drama Will Continue to Fill 2024: The ouster and rehiring of Sam Altman to OpenAI created news cycles jam-packed with gossip and hot takes, and I suspect OpenAI stories will continue to fill headlines all next year. The underlying catalysts – the unique non-profit/for-profit hybrid structure, the massive costs, the risks and promises of AI –  haven’t changed, and with the speed this field has been advancing, there’s ample opportunity for these forces to come to a head again and again next year. – Paul Barba, Chief Scientist at InMoment

As the “Generative AI Era” enters its second year, we will start seeing more purpose and order in enterprise AI usage: With the “wow” effect of Generative AI remaining prominent for a second year in a row, fed by successive innovations from the likes of OpenAI and Google, organizations everywhere will start figuring out how to harness AI capabilities for their own purposes, rather than just being astonished by the “art of the possible.” The first generation of AI capabilities in various enterprise products, focused on low-hanging, non-complex scenarios such as all kinds of co-pilots, will no longer easily astonish and dazzle every person seeing them for the first time. The result will be a requirement that AI-powered capabilities focus on use value and be harnessed to solve real issues. – Leonid Belkind, Co-founder and CTO, Torq

Increased adoption of generative AI will drive need for clean data. The foundation of generative AI is data. That is, to function as desired, data is what provides the basis for this new technology. However, that data also needs to be clean. Regardless of where you’re pulling the data from – whether you’re using something like modeling or a warehouse of your choice – quality data will be essential. Bad data can lead to bad recommendations, inaccuracies, bias, etc. Having a strong data governance strategy will become more important as more organizations seek to leverage the power of generative AI in their organization. Ensuring your data stewards can access and control this data will also be key. – Rex Ahlstrom, CTO and VP of Innovation and Growth, Syniti  

60% of enterprise employees will receive prompt engineering training. With AI at the center of future enterprise workplace productivity for all employees, teams will need to continue to invest in Data / AI literacy programs to close the skills gap in learning how to engineer successful prompts. Don’t leave this important training to L&D — IT needs to develop BYOAI guidelines and enterprise training programs for employees to help them best leverage generative AI consistently and safely. – Forrester

More organizations will jump on the AI operating system bandwagon: Generative AI operating systems will receive more attention and investment in the year ahead. AI operating systems are the interface between artificial intelligence and everything else, from the engineers and designers leveraging generative AI tools, to the robotic systems being trained by generative AI to mimic human behavior and action in the physical world. Because of the well-documented high stakes of widespread AI adoption, more emphasis will be placed on the importance of organizations building operating systems that can act as an intermediary between AI and everything else, as more companies and public sector organizations embrace advanced AI technology at scale. – Ashok Srivastava, Senior Vice President & Chief Data Officer at Intuit

From Search Engine to Intelligent Assistant: How Retrieval-Augmented Generation (RAG) is Set to Improve Large Language Model Responses in 2024: As the calendar flips to 2024, one obscure term is set to captivate the tech world’s attention. Though not widely recognized until now, Retrieval-Augmented Generation (RAG) has begun to make waves as a transformative framework for technologists. RAG augments the capabilities of a Large Language Model (LLM) by capturing information from external sources, like an external knowledge base, to enhance the quality and accuracy of responses by including data that is new to the LLM. Think of RAG as personalizing the LLM for your needs, providing the same intelligent insights but from your data. It’s like upgrading from a regular internet search to having a personal research assistant who finds exactly what you need. Financial decision makers have seen the boon that generative AI has been for other stakeholders in their organizations. Chief Investment Officers are eager to apply generative AI to reduce the “time-to-insights” gap while filtering in more information to produce more accurate results. Thanks to innovations improving RAG, sophisticated ring-fencing to ensure appropriate access to queries has become a reality. In short order, I believe RAG will continue to overcome knowledge gaps in LLMs, enhance accuracy, and serve as a solution for knowledge-intensive activities across a number of industries, including investment management. In addition, RAG can constrain which data the LLM processes, ensuring that responses draw only on the RAG data and not on the LLM’s general training data. RAG can also provide citations of where the data came from, so users have confidence in the response. Enhancing security, you can have multiple RAG data sources and lock down access to certain ones; this way, only authorized users of those data sources can use the LLM for questions about that sensitive data. Looking to 2024, highly regulated industries are expected to drive the adoption of gen AI, with RAG able to capture better information for their stakeholders. – Souvik Das, CTO, Clearwater Analytics
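
Stripped to its essentials, the RAG loop is: embed the query, retrieve the closest documents, and pack them into the prompt so the model answers only from that data. In the sketch below, toy bag-of-words vectors stand in for a learned embedding model, and the documents are invented examples.

```python
# Minimal RAG sketch: retrieve the closest documents by cosine similarity,
# then pack them into the prompt so answers are grounded in that data.
import numpy as np

DOCS = [
    "Q3 net asset value rose 4 percent quarter over quarter.",
    "The fund duration target was lowered to 5.2 years in October.",
    "Office lease renewals are due in March.",
]

def tokens(text: str) -> list[str]:
    return text.lower().replace(".", " ").replace("?", " ").split()

VOCAB = sorted({w for d in DOCS for w in tokens(d)})

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding over the document vocabulary."""
    words = tokens(text)
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    sims = []
    for doc in DOCS:
        d = embed(doc)
        denom = (np.linalg.norm(q) * np.linalg.norm(d)) or 1.0
        sims.append(float(q @ d) / denom)
    best = np.argsort(sims)[::-1][:k]
    return [DOCS[i] for i in best]

context = "\n".join(retrieve("What happened to net asset value in Q3?"))
prompt = f"Answer only from this context:\n{context}\n\nQuestion: ..."
print(prompt)
```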

Private LLMs Will Take Off: Concerns about data privacy and security will drive organizations in 2024 to invest in private LLMs tailored to their specific needs and datasets. These private LLMs will be fine-tuned to ensure greater compliance with regulatory standards and data protection requirements. This shift toward privacy-centric LLMs will empower businesses with more control over their AI applications, foster trust among users, and open the door to innovative and secure AI solutions in industries ranging from healthcare to finance. – Dr. Jans Aasman, CEO of Franz Inc.

Generative AI initiatives will be driven by Line of Business, not IT: Executives traditionally require organizations to adopt new tools to enable new (and better) business practices and save money, even if users prefer to stick with what they already know. IT supports the rollout while implementation teams debate change management procedures, conduct extensive training for potentially reluctant users, and stamp out any continued use of the older tools. However, ensuring compliance and achieving the desired benefits quickly is no easy feat. GenAI will be the opposite in 2024. The enthusiasm among users for GenAI-enabled solutions is palpable, as many have already tried these tools in various forms. The user-friendly nature of GenAI, with its natural language interfaces, facilitates seamless adoption for non-technical stakeholders. However, technical teams are left grappling with inherent challenges, including hallucinations, the lack of explainability, domain-specific knowledge limitations, and cost concerns. In some organizations, the use of GenAI is forbidden until their technical teams come up to speed. Detecting ‘shadow’ usage, where individuals become suddenly hyper-productive after a brief period of quiet, adds a further complication to the implementation challenges. Next year, organizations will work out a process to evaluate the myriad options available and allow the business to use the few tools capable of addressing all of GenAI’s challenges in an enterprise environment. – Ryan Welsh, Founder and CEO of Kyndi

Generative AI (GenAI) Maturity as Table Stakes: The broad democratization of GenAI capabilities has forever reshaped the dynamics of knowledge work and the global labor marketplace, already shaken up by the pandemic and recovery timelines. The broad consensus across the industry is that while embracing GenAI may seem optional today, very soon the choice will be to embrace it or go extinct. Expect business, technology, and security decisions to be augmented by GenAI, leading to an even greater focus on AI governance and ethics requirements. An example of this push is the recently released White House executive order calling on AI vendors to ensure the trust, safety, and security of AI platforms in the context of national security and public safety. The demand for AI skills will continue to grow as innovation in this space redefines our relationship with digital ecosystems. – Igor Volovich, Vice President of Compliance Strategy at Qmulos

The next phase of AI, from GenAI to AGI: There is an apparent shift in Generative AI and its direction. The focus is increasingly centered around artificial general intelligence (AGI) and the rise of intelligent agents. For agents, two parts will be critical in the world of AIOps and MLOps. One is learning control and infrastructure management, with agents ensuring automated configuration management and drift protection. The learning agent needs to understand how to make improvements, measure performance, give feedback, and determine how performance should be modified. This practice applies to AI infrastructure management, ensuring infrastructure is built and tested so the agent can deploy tasks. Looking at the near-future agenda, workplace trends – most notably at bigger companies – will be associated with AI, and organizations will need to control the agents. Organizations cannot let AI become autonomous without proper infrastructure. For AI to progress from Generative AI to AGI, infrastructure needs to be in place first and foremost, and embedding platform engineering will be important to accelerate the delivery of applications. Organizations need configurations to work no matter where learning systems are (hybrid or private cloud). – Kapil Tandon, Vice President of Product Management in the IT Ops Business Unit, Perforce

The Rise of Custom Enterprise Foundation Models (FMs): The debate around open-source vs closed source will only get heated as we move to 2024. The open-source LLMs like Meta’s Llama are catching up to the closed-source LLMs like GPT-4. Both these models come with their trade-offs with regard to performance and privacy. Enterprises would want to deliver on both fronts. The recent updates, such as OpenAI Enterprise, allow enterprises to build custom models to suit their solutions. Similarly, open-source models allow enterprises to build lightweight custom models with privacy in mind. This trend will continue, and we will see custom tiny language models take center stage. – Sreekanth Menon, Genpact’s Global AI/ML Services Leader

“Me Too” AI Vendors Sink as Generative AI hits a trough of disillusionment: Right now, generative AI is at the peak of its hype cycle. Next year, some orgs will begin to be disillusioned when their AI investments don’t provide the complete transformation they’re expecting. Customers will grow wary of vendors that have been late to the AI race, tacking on AI capabilities that provide little business value or compelling functionality. But orgs that weigh their expectations and use generative AI correctly – supporting proven use cases – can avoid this disillusionment and see expected value from AI. – Mike Finley, CTO of AnswerRocket

2024 will be the year of enterprise grade open-source AI adoption. To date, there are not a lot of examples of meaningful, production-based adoption of LLMs in the enterprise. For instance, not a lot has been built around enterprise grade resilience, security, uptime, or predictability. Over the next year, a handful of companies will turn the tables by taking advantage of open source language models and making them more production-ready. This will result in more serverless, open source language models for enterprise grade scenarios to be built upon, allowing enterprises to adopt this technology in a more turnkey fashion. – VC firm General Catalyst 

Generative AI will become more factual thanks to retrieval-augmented generation (RAG): This technology will allow engineers to feed clean business data into LLMs to reduce hallucinations and ground outputs in factual information. This clean business data will be generated by traditional data pipelines that handle data extraction, cleansing, normalization, and enrichment on an organization-wide scale. RAG is starting to emerge now and will see increased adoption next year as businesses seek to ensure more accurate results from generative AI. – Sean Knapp, CEO of Ascend.io

Towards AGI – Memory, Input, and Learning: The pursuit of AGI will focus on three key areas: enhancing LLMs’ long-term memory, enabling continuous input and internal state, and advancing reinforcement learning. Developments like the increased context length in Claude 2 and GPT-4 Turbo, and architectures aimed at better memory and continuous learning, exemplify this trend. Rumors of OpenAI’s Q* algorithm also indicate significant strides in this direction. These predictions for 2024 reflect not just the rapid advancements in AI and big data but also underscore the shifts in the industry landscape, where efficiency, multimodality, and deeper AI capabilities will drive innovation and competition. – Tomer Borenstein, Co-Founder & CTO of BlastPoint, Inc.

GenAI could stifle innovation: When you got your first iPhone, you quickly forgot people’s phone numbers. The same happened with your navigation abilities when you started using Google Maps or Waze. Similarly, in the coming years, we’ll see people lose their innovation skills as they become more dependent on GenAI to help generate code. We’re going to have to start thinking about how to preserve knowledge and encourage innovation in 2024. – Ori Keren, Co-founder and CEO, LinearB

Multimodal LLMs and databases will enable a new frontier of AI apps across industries: One of the most exciting trends for 2024 will be the rise of multimodal LLMs. With this emergence, the need for multimodal databases that can store, manage and allow efficient querying across diverse data types has grown. However, the size and complexity of multimodal datasets pose a challenge for traditional databases, which are typically designed to store and query a single type of data, such as text or images. Multimodal databases, on the other hand, are much more versatile and powerful. They represent a natural progression in the evolution of LLMs to incorporate the different aspects of processing and understanding information using multiple modalities such as text, images, audio and video. There will be a number of use cases and industries that will benefit directly from the multimodal approach including healthcare, robotics, e-commerce, education, retail and gaming. Multimodal databases will see significant growth and investments in 2024 and beyond — so businesses can continue to drive AI-powered applications. – Rahul Pradhan, VP of Product and Strategy at Couchbase

Generative AI will quickly move from the peak of inflated expectations to the trough of disillusionment. There’s a lot of hype right now around generative AI, to put it mildly. However, all of this hype means that for some organizations, adoption of this technology is more a matter of “keeping up with the Joneses” than because it is truly the best solution for a specific problem they are trying to solve. As a result, we’re likely to see a lot of money invested in failed generative AI projects – hence the fall into the trough of disillusionment. It’s the shiny new object, and many CIOs and other senior leaders may feel pressured to be able to say they have a generative AI program in place. The key to limiting these failed projects will lie in ensuring that your organization understands the specific reason for using generative AI, that it’s tied to a defined business outcome, and that there’s a method established for measuring the success of the investment. – Rex Ahlstrom, CTO and VP of Innovation and Growth, Syniti

Generative AI will cause a clash between executives as they vie for control over its agenda within the enterprise: Nearly half of executives report that their AI investments will increase next year to jump on the generative AI bandwagon, while 70% are already in generative AI exploration mode. Now that organizations are ramping up AI adoption in the enterprise, every executive wants to be the one to take their company on its AI journey. In 2024, the AI agenda will become more complex as more players enter the chat to gain control, from the CTO to the CIO to data analytics executives. The C-Suite will need to identify where their opportunities for AI lie and what conversations they must have with different departments to decide who should take the lead. In the meantime, CIOs are facing pressure from CEOs to expand their use of generative AI. In 2024, we will see CIOs continuing to push forward their exploratory AI experiments and projects as the battle continues. – Alon Goren, CEO, AnswerRocket

An army of smaller, specialized Large Language Models will triumph over giant general ones. As we saw during the era of “big data” — bigger is rarely better. Models will “win” based not on how many parameters they have, but based on their effectiveness on domain-specific tasks and their efficiency. Rather than having one or two mega-models to rule them all, companies will have their own portfolio of focused models, each fine-tuned for a specific task and minimally sized to reduce compute costs and boost performance. – Nick Elprin, co-founder and CEO, Domino Data Lab

Generative AI turns its focus towards structured, enterprise data: Businesses will embrace the use of generative AI for extracting insights from structured numeric data, enhancing generative AI’s conventional applications in producing original content from images, video, text and audio. Generative AI will persist in automating data analysis, streamlining the rapid identification of patterns, anomalies, and trends, particularly in sensor and machine data use cases. This automation will bolster predictive analytics, enabling businesses to proactively respond to changing conditions, optimizing operations, and improving customer experiences. – Nima Negahban, CEO and Cofounder, Kinetica

AI-powered, human-quality translation will increase productivity by 10X or more: At the beginning of 2023, everyone believed that LLMs alone would produce human-quality translations. Over the year, we identified multiple gaps in LLM translations, ranging from hallucinations to subpar performance in languages other than English. Like cloud storage or services, AI-powered, human-quality translation is increasingly moving toward a cost at which the ROI of translating nearly all content becomes attractive, creating a competitive advantage for companies that use it to access the global market. Contrary to the shared belief that the language services industry will shrink in 2024, it will grow as more content gets localized at lower cost. 2024 will be the year the cost of translation plummets, as translators powered by Language AI and AI-powered Language Quality Assurance increase their productivity by 10X or more. – Bryan Murphy, CEO of Smartling

While 2023 saw euphoric hype around the emergence of artificial intelligence (AI) with seemingly boundless potential, in healthcare, we have already begun to see the limitations of prescriptive, large language model (LLM)-based solutions in providing clinical recommendations and insights. In 2024, we anticipate that clinicians, increasingly sophisticated when it comes to AI, will seek ways to mitigate the potential risks of accepting prescriptive recommendations from LLM-based solutions and instead choose responsible AI solutions that provide evidence-based and explainable recommendations. As the focus shifts towards responsible AI, healthcare leaders seeking to incorporate innovative AI technologies into their organizations’ clinical workflows will need to be aware of how these tools work. Solutions relying on licensed LLMs cannot provide tailored recommendations for care for individual patients, as these solutions are based on millions of data points with no specific emphasis on the individual. The lack of personalized focus and ‘explainability’ in the ‘black box’ nature of these solutions will underscore the necessity of clinicians having the final word in their decision-making. As a result, we anticipate a natural split will emerge in 2024: solutions that exist to provide clinical recommendations will increasingly be based on specific data and provide evidence for AI-generated insights. In contrast, solutions that aim to support clinicians in writing documentation and visit summarization, which rely heavily on natural language generation, will benefit from using universal LLMs. – Ronen Lavi, CEO and Co-Founder, Navina

While AI and LLMs continue to increase in popularity, so will the potential danger: With the rapid rise of AI and LLMs in 2023, the business landscape has undergone a profound transformation, marked by innovation and efficiency. But this quick ascent has also given rise to concerns about the utilization and safeguarding of sensitive data. Unfortunately, early indications reveal that the data security problem will only intensify next year. When prompted effectively, LLMs are adept at extracting valuable insight from training data, but this poses a unique set of challenges that require modern technical solutions. As the use of AI and LLMs continues to grow in 2024, it will be essential to balance the potential benefits with the need to mitigate risks and ensure responsible use. Without stringent protection of the data that AI has access to, there is a heightened risk of data breaches that can result in financial losses, regulatory fines, and severe damage to the organization’s reputation. There is also a dangerous risk of insider threats within organizations, where trusted personnel can exploit AI and LLM tools for unauthorized data sharing – whether done maliciously or not – potentially resulting in intellectual property theft, corporate espionage, and damage to an organization’s reputation. In the coming year, organizations will combat these challenges by implementing comprehensive data governance frameworks, including data classification, access controls, anonymization, frequent audits and monitoring, regulatory compliance, and consistent employee training. Also, SaaS-based data governance and data security solutions will play a critical role in keeping data protected, as they enable organizations to fit these controls into their existing frameworks without roadblocks. – ALTR CEO, James Beecham

Generative AI and large language model (LLM) hype will start to fade: Without a doubt, GenAI is a major leap forward; however, many people have wildly overestimated what is actually possible. Although generated text, images and voices can seem incredibly authentic and appear as if they were created with all the thoughtfulness and the same desire for accuracy as a human, they are really just statistically relevant collections of words or images that fit together well (but in reality, may be completely inaccurate). The good news is the actual outputs of AI can be incredibly useful if all of their benefits and limitations are fully considered by the end user. – Ryan Welsh, Founder and CEO of Kyndi

As a result, 2024 will usher in reality checks for organizations on the real limitations and benefits GenAI and LLMs can bring to their business, and the outcomes of that assessment will reset the strategies and adoption of those technologies. Vendors will need to make these benefits and limitations apparent to end users, who are appropriately skeptical of anything created by AI. Key elements like accuracy, explainability, security, and total cost must be considered. In the coming year, the GenAI space will settle into a new paradigm for enterprises, one in which they deploy just a handful of GenAI-powered applications in production to solve specific use cases.

One-way Ticket to Vector-town: As new applications get built from the ground up with AI, and as LLMs become integrated into existing applications, vector databases will play an increasingly important role in the tech stack, just as application databases have in the past. Teams will need scalable, easy to use, and operationally simple vector data storage as they seek to create AI-enabled products with new LLM-powered capabilities. – Avthar Sewrathan, GM for AI and Vector at Timescale

Competition Among LLM Providers: The landscape of Large Language Models (LLMs) is heating up. OpenAI, with its GPT-4 Turbo, has been leading the race, but others like Anthropic’s Claude, Google’s Gemini, and Meta’s Llama are close on its heels. The recent management turmoil at OpenAI, notably involving Sam Altman, has opened up opportunities for these competitors to advance and potentially outpace OpenAI in certain areas. – Tomer Borenstein, Co-Founder & CTO of BlastPoint, Inc.

Generative AI will reach the trough of disillusionment as organizations realize there is no magic bullet. There is no doubt that the usage of generative AI will continue to explode in 2024. However, many organizations may be disappointed with the performance of generative AI if their expectations of how quickly its benefits come to fruition are unrealistic, or if they don’t have the expertise to implement and use it effectively. In 2024, we can expect to see a trough of disillusionment for generative AI. This is not to say that generative AI is a failure. It simply means that it will take more time for generative AI solutions to reach the desired results to match the hype. – Cody Cornell, Co-Founder and Chief Strategy Officer of Swimlane

There will be a spike in interest in vector databases, but it won’t last: Vector databases will be the hot new area of discussion for many, but will eventually be absorbed by relational databases after a few years. Every ten or so years, a ‘new’ database technology is proclaimed to be the end of relational databases, and developers jump on that bandwagon only to rediscover that the relational model is extremely flexible and relational database vendors can easily adapt new technologies into their products. Look at PostgreSQL’s pgvector as an example of how a relational database can process vector data today, and why you will be able to ignore the hype around specialized vector databases. The community around pgvector and PostgreSQL was able to support this vector data use case quickly – the project started in 2021, but it has developed rapidly this year with all the interest in Generative AI and vector data. For those thinking about this area and looking at implementing open source components in their projects, pgvector makes PostgreSQL an obvious choice. – Dave Stokes, Technology Evangelist, Percona
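
As a quick illustration of the pgvector point, here is a short Python sketch (via psycopg2) of storing and querying embeddings inside PostgreSQL. The connection string and the three-dimensional vectors are placeholders, and it assumes the pgvector extension is installed on the server; real embeddings typically have hundreds of dimensions.

```python
# Sketch of vector search inside PostgreSQL via the pgvector extension.
# The DSN and 3-dimensional embeddings are illustrative placeholders;
# the server must have pgvector installed for CREATE EXTENSION to work.
import psycopg2

conn = psycopg2.connect("dbname=demo user=postgres")  # hypothetical DSN
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS items (
        id        bigserial PRIMARY KEY,
        embedding vector(3)
    );
""")
cur.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[0,1,0]');")

# '<->' is pgvector's Euclidean-distance operator: nearest neighbors first.
cur.execute("SELECT id FROM items ORDER BY embedding <-> '[1,1,1]' LIMIT 5;")
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()
```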

Companies are accelerating their investment in safeguarding generative AI for employees, alongside their AI investments overall: Investment in technology is increasing, even more than in office spaces. AI brings perhaps the largest growth potential of any category today and also some of the largest risks. Companies will invest in seizing the AI advantage while proactively mitigating and addressing its risk factors. As generative AI finds its role in the workplace, employers are investing in guidelines, risk mitigation technologies and parameters, particularly when it comes to securing company information from ‘unknown unknown’ risk factors. A 2023 report from McKinsey stated that 60% of companies with reported AI adoption are using generative AI. WalkMe believes this number will continue to increase, along a path similar to cloud and internet adoption. The same report found that the two biggest risks with generative AI are inaccuracy and cybersecurity. We anticipate these issues will escalate, and enterprises’ ability to face the risks will improve as technology posture improves. – Uzi Dvir, CTO, WalkMe

More organizations are dipping their toes into generative AI and also increasing their investment in machine learning more broadly. There are so many operational challenges for platform teams that want to facilitate running ML jobs on cloud platforms. MLOps is a hot topic at the moment but still in the early stages of adoption – we’ll see advancements there as more organizations mature their ML infrastructure. – Malavika Balachandran Tadeusz, Senior Product Manager, Tigera

LLMs to transition to smaller models for more accessibility: Though LLMs are impressive in their generality, they require huge amounts of compute and storage to develop, tune, and use, and thus may be cost-prohibitive to the overwhelming majority of organizations. Only companies with vast resources have the means to access them. Since there needs to be a path forward for making them more economically viable, we should expect to see solutions that decentralize and democratize their use. We should anticipate more numerous, more focused, and smaller models that consume less power becoming more readily available to a wider range of users. These focused models should also be less susceptible to the hallucination effects from which LLMs often suffer. – Naren Narendran, Chief Scientist, Aerospike

The data ownership conversations will heat up: As large language models (LLMs) become more powerful and sophisticated, there will be a growing debate about data ownership. Similar to what we saw with open-source code, there is an ongoing discussion about how large companies are using data that they do not own to train their models, which could lead to a concentration of power in the hands of a few large companies. To address this issue, we will see new licensing frameworks for data. These frameworks should ensure that data owners are fairly compensated for the use of their data and that users can access and use data in a responsible and ethical manner. – Bob Friday, Chief AI Officer at Juniper Networks

To Invest in AI Chatbots, or Not: We know Gen Z typically seeks out digital forms of communication rather than having to speak with someone over the phone, which is especially true for customer service requests. The caveat is that this demographic expects their media and technology to work in a symbiotic relationship that supports connection, engagement and utility; they know good customer experience when they see it and will avoid anything that delivers a subpar experience. Organizations are investing in generative AI capabilities to entice people to stay on their applications longer and drive more activity among Gen Z users. This is the right move and can have a tremendous impact if done correctly. Organizations will not find success simply by creating better chatbots because Gen Z craves authentic connection and utility which is hard to replicate. If the chatbot could provide users with new experiences, recommendations and other helpful services, then it may increase activity on specific applications or a brand’s website. That being said, users will likely be skeptical and cautious of genAI bots, and organizations will need to show incremental wins to reinforce the chatbot’s safety and value. – Robin Gomez, Director of Customer Care Innovation at Radial

While 2023 marked a breakout year for generative AI, the supply chain industry has lagged in adoption due to data barriers – just 3% of organizations reported using generative AI for supply chain management. Manual, paper-based processes still dominate global trade, so many supply chain companies have struggled to unify the vast amount of unstructured data across disparate sources. Yet, companies who have solved this data problem will make 2024 the year of generative AI supply chain breakthroughs. As generative AI models are trained to be supply chain experts, global supply chains will become more autonomous, self-repairing and self-optimizing. For example, generative AI could tell a shipper about an exception (its shipment was delayed due to extreme weather), what to do about it (reroute to a more reliable location) and ultimately even execute the solution. By telling companies where they need to focus their efforts, these AI innovations will enable global brands to deliver a better customer experience and grow their business at the lowest cost and environmental impact. – AJ Wilhoit, Chief Product Officer, project44 

Generative AI dominated the conversation this year, and for good reason – it will significantly mature and scale in 2024. There’s a vast array of applications for generative AI that are currently in experimental stages and are poised to evolve. The real value will lie in its capacity to help people make sense of unstructured information in various internal use cases – parsing through extensive volumes of documents, generating more concise and informative summaries, and facilitating Q&A interactions with these documents, thereby ensuring consistency across multiple domains. On top of this, LLM interfaces and text-based interfaces will become integral components of nearly every software product. These interfaces will be used for everything, from controlling applications to providing answers to user inquiries about the application itself. We are starting to see this emerge in corporate websites that have consumer-facing elements. Additionally, in the next year we can expect to see a shift toward smaller, more specialized LLMs, reducing the amount of data required for their training. This transition aligns with the broader push toward open-source solutions, particularly models that can prove a pedigree of information sources. – Michael Curry, President of Data Modernization at Rocket Software

Generative AI and AI coding assistants will move from what some people call “junior developer” level, with a 25-30% code acceptance rate, to CTO status through embedded context. The ability to add more context, including runtime context, will exponentially increase the value and massively improve the acceptance rate (to 70% and better) of AI-generated code. Going one level deeper: activities like deep debugging, multi-file changes, and using large files as inputs are currently beyond the scope of most coding assistants. – Elizabeth Lawler, CEO of AppMap

GenAI will transform transformation: In 2024, GenAI will drive transformation in various areas, making it both more urgent and more far-reaching. With the help of customized GenAI agents, tasks like reading, organizing, and cleansing unstructured data can be done “AI-first,” reducing a lot of manual effort. Data can be accessed from anywhere for GenAI to use, but governance, data pipelines, and processes will still be necessary for managing quality, enabling outcomes, assessing value, determining rights, and achieving compliance. GenAI, in combination with the cloud, can accelerate data-related transformation initiatives. Additionally, GenAI can enable organizations to leapfrog competitors and accelerate transformation, handling complex tasks and processes in finance, tax, legal, IT, compliance, and other departments. Leveraging GenAI as a catalyst for transformation has the potential to create a divide between competitors, and organizations that fail to utilize GenAI may struggle to compete against those that do. – Bret Greenstein, Data and AI Leader, PwC US

Graph

Goodbye Hallucinations – Hello Amplified Content! In 2024, Generative AI, powered by rapidly advancing language models and grounded by Knowledge Graphs, will hallucinate less and produce content that is increasingly contextually relevant and insightful. This will pave the way for groundbreaking developments in natural language understanding, tailored content creation, and complex problem-solving across various domains such as healthcare, drug discovery, and engineering. – Dr. Jans Aasman, CEO of Franz Inc.

Graph databases are poised to continue revolutionizing how data science and engineering teams process large-scale and real-time datasets, enabling them to extract deeper insights and achieve faster time-to-value. As the volume and velocity of data continues to grow exponentially, particularly real-time data like points-of-interest and foot traffic, teams will need to rethink their data management tech stack to keep up. I expect more and more teams to turn to graph databases to navigate complex datasets, boost efficiency, and do it all in a way that protects consumer privacy. – Emma Cramer, Senior Manager of Engineering at Foursquare

Knowledge Graph Adoption Accelerates Due to LLMs and Technology Convergence: A key factor slowing down knowledge graph (KG) adoption is the extensive (and expensive) process of developing the necessary domain models. LLMs can optimize several tasks, from evolving taxonomies to classifying entities and extracting new properties and relationships from unstructured data. Done correctly, LLMs could lower information extraction costs, as the proper tools and methodology can manage the quality of text analysis pipelines and bootstrap/evolve KGs at a fraction of the effort currently required. LLMs will also make it easier to consume KGs by applying natural language querying and summarization. Labeled Property Graphs (LPG) and the Resource Description Framework (RDF) will also help propel KG adoption, as each is a powerful data model with strong synergies when combined. So while RDF and LPG are optimized for different things, data managers and technology vendors are realizing that together they provide a comprehensive and flexible approach to data modeling and integration. The combination of these graph technology stacks will enable enterprises to create better data management practices, where data analytics, reference data and metadata management, and data sharing and reuse are handled in an efficient and future-proof manner. Once an effective graph foundation is built, it can be reused and repurposed across organizations to deliver enterprise-level results, instead of being limited to disconnected KG implementations. As innovative and emerging technologies such as digital twins, IoT, AI, and ML gain further mind-share, managing data will become even more important. Using LPG and RDF’s capabilities together, organizations can represent complex data relationships between AI and ML models, as well as track IoT data to support these new use cases. Additionally, with both the scale and diversity of data increasing, this combination will also address the need for better performance. As a result, expect knowledge graph adoption to continue to grow as businesses look to connect, process, analyze, and query the large data volumes currently in use. – Atanas Kiryakov, founder and CEO of Ontotext
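
For a sense of how LLM-extracted facts can bootstrap an RDF knowledge graph, here is a small sketch using the open-source rdflib library. The triples are stand-ins for the output of an upstream LLM extraction step, and the namespace is hypothetical.

```python
# Load LLM-extracted triples into an RDF graph and query it with SPARQL.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

# Triples an LLM might extract from unstructured text.
g.add((EX.acme, RDF.type, EX.Company))
g.add((EX.acme, EX.headquarteredIn, Literal("Boston")))

# Natural-language questions ultimately compile down to queries like this.
results = g.query(
    "SELECT ?city WHERE { ?c a <http://example.org/Company> ; "
    "<http://example.org/headquarteredIn> ?city . }"
)
for row in results:
    print(row.city)
```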

Hardware

Limited chip availability drives common sense and tempers AI expectations. The mad dash for AI has pushed demand for GPUs and related chip production to its limits. With constrained capacity to make more of these chips, AI processing will hit a wall in 2024. This shortage will most acutely affect large buyers like cloud providers, Meta, Tesla, and OpenAI. – Forrester

Access to GPUs is becoming increasingly expensive and competitive, which will usher forth a new chapter in the cloud industry. The traditional providers – AWS, Microsoft Azure and Google Cloud – are unable to meet demand from developers, with smaller companies finding it hard to afford and reserve the compute they need to train large language models. As a result, an increasing number of organizations will turn to distributed and permissionless cloud networks to gain access to GPUs, including less sophisticated chips that in many cases sit idle. Looking ahead to 2024, this newfound attention to “lesser” GPUs will help sustain the AI boom, and mitigate concerns that Microsoft, Alphabet and Meta will dominate the tech transformation. Those seeking alternatives amid the GPU squeeze will make progress by using less intensive data set requirements, deploying more efficient techniques like Low-Rank Adaptation (LoRA) to train language models, and distributing workloads in a parallel manner. This involves deploying clusters of lower-tier chips to accomplish tasks equivalent to a smaller number of A100s and H100s. A new era of cloud computing will emerge, one in which power is decentralized and not in the hands of just a few. – Greg Osuri, founder of Akash Network and CEO of Overclock Labs
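
As a concrete sketch of the LoRA technique mentioned above, the snippet below uses the Hugging Face peft library to wrap a small causal language model so that only low-rank adapter matrices are trained. The model name and hyperparameters are illustrative assumptions, not a prescription.

```python
# LoRA fine-tuning setup: freeze the base model and train only small
# low-rank adapter matrices, which is what lets lower-tier GPUs help.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # rank of the low-rank update matrices
    lora_alpha=16,   # scaling factor applied to the update
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights
```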

Today’s technologies for compute, memory, networking will prove highly limiting for scaled deployment, restricting the economic impact of AI. New technologies will be required on all three fronts, going beyond the ill-validated technology investments driven by hype we have seen in the last few years. Fundamental technological barriers across compute, memory, and networking will drive specialized inference infrastructure for different use-case profiles and models. We will see substantial dedicated investment in inference infrastructure (which generates predictions to make decisions), to address the critical bottleneck to scaled deployment. As we move towards scaled deployment, sustainability issues will emerge as one of the key factors limiting wide-scale AI deployment. These include energy consumption and the impact on our planet. Early value applications of generative AI will focus on internal efficiency improvements for cost-reduction, rather than external/customer-facing revenue growth. Open Source models will enable broad early exploration of generative AI, but ultimately end users will need to invest in specialized internal teams or engage external partners to leverage both open source models and/or custom models for value deployments. – Naveen Verma, PhD, CEO, EnCharge AI

IoT and Edge Computing

Edge computing’s influence on tech investment in 2024: In 2024, edge computing will continue to grow in importance. Organizations will invest in edge infrastructure to support applications requiring low latency, such as autonomous vehicles, augmented reality, and industrial automation. – Srinivasa Raghavan, director of product management, ManageEngine 

The success of Edge AI will depend on advancements in lightweight AI models: The innovation surrounding AI is exciting, and edge computing is one way to enable new AI applications. However, to make edge AI a viable option, AI models need to be lightweight and capable of running on resource-constrained embedded devices and edge servers while continuing to deliver results at acceptable levels of accuracy. Models must strike the right balance: small and less computationally intensive, so they can run efficiently at the edge, while still delivering accurate results. A lot of progress has been made in model compression, and I predict continued innovation in this space, which, coupled with advancements in edge AI processors, will make edge AI ubiquitous. – Priya Rajagopal, Director of Product Management at Couchbase
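
One widely used compression step the author alludes to is post-training quantization. Below is a minimal sketch using PyTorch's dynamic quantization to convert a toy model's linear layers to 8-bit integers; the model itself is a placeholder for a real edge workload.

```python
# Post-training dynamic quantization: shrink linear-layer weights to
# int8 while keeping the same inference interface.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface, smaller int8 weights
```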

Long-awaited edge computing: As AI applications are developed, companies will look for processing power closer to where the application is being utilized. That means data centers will focus on keeping the heavy compute closer to where the data is actually being used. – Michael Crook, Market Development Manager – Data Centers, Corning Optical Communications

MLOps (Machine Learning Operations) will significantly evolve to provide not only operational capabilities such as deployment, scaling, and monitoring, but also model optimization. This will encompass everything from hyperparameter tuning to improve model performance, to model size reduction/quantization and performance optimization for specific chipsets and use cases, such as edge computing on wearable devices or cloud computing. – Yeshwant Mummaneni, Chief Engineer, Cloud, Altair
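
To illustrate the model-optimization side of MLOps, here is a brief sketch of automated hyperparameter tuning with Optuna; the objective function is a stand-in, and a real pipeline would return a validation metric from an actual training run.

```python
# Automated hyperparameter search with Optuna.
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    depth = trial.suggest_int("depth", 2, 8)
    # Placeholder score; replace with the validation accuracy of a
    # model trained with these hyperparameters.
    return -((lr - 0.01) ** 2) - ((depth - 4) ** 2)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print(study.best_params)
```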

Low-code/No-code

Low Code Abstraction Frameworks: Abstraction frameworks like dbt (from dbt Labs) facilitate SQL-based code that can seamlessly run on various underlying platforms such as Snowflake and Databricks. This abstraction simplifies technology transitions, offering enhanced flexibility and reducing the effort and costs associated with platform changes. The goal is to empower citizen data analysts to operate platforms independently, reducing reliance on experts, given the scarcity of talent in the field. – Arnab Sen, VP, Data Engineering, Tredence Inc.

LLMs won’t replace low code – AI will push existing low-code solutions to do even more: Looking ahead to next year, some low-code vendors have proposed putting AI to work generating code as a means of fixing gaps in their platforms. The results will likely be less robust applications, higher technical debt, and greater cost and risk to clients. Rather than having AI generate massive amounts of flawed custom code, creating apps that will only get worse over time, 2024 is the year we will set our sights on super-powering low-code solutions with AI. We’ll see AI making low-code platforms even more intuitive, lowering the bar for business users to create their own intelligent business processes and pushing citizen development further than ever before. – Anthony Abdulla, Senior Director, Product Marketing, Intelligent Automation at Pega

Low-Code/No-Code Tools Will Dominate Software Development in 2024: In 2024, low-code/no-code tools will dominate software development as they bring the power of app development to users across the business. The rise of “citizen developers” has proven that as we move toward a no-code future, people without coding experience are changing the working world. Tech companies that adopt low-code/no-code tools will save time and money rather than falling behind early adopters. – Jason Beres, Sr. VP of Developer Tools at Infragistics

Natural language will pave the way for the next evolution of no-code: Automation is only effective when implemented by teams on the frontline. Five years ago, the best way to place powerful automation in the hands of non-technical teams was via low- or no-code interfaces. Now, with AI chatbots that let people use natural language, every single team member, from sales to security, is technical enough to put automation to work solving their own unique problems. The breakthrough in AI was the new ability to iterate in natural language, simply asking an LLM to do something a bit differently, then slightly differently again. Generative AI and LLMs are obliterating barriers to entry, just as no-code tools once removed the need to know how to code, and no-code itself will be the next barrier to fall. We’ve already moved from programming languages like Python to Microsoft Excel or drag-and-drop interfaces. Next year, we will see more and more AI chat functions replace no-code interfaces, and we can expect non-technical teams throughout organizations to embrace automation in ways they never thought possible. Natural language is the future on the frontline. – Eoin Hinchy, co-founder and CEO at Tines

Machine Learning

Machine Learning Key to Detecting Security Anomalies in IoT Devices: As more devices are connected, the risk of a cyberattack, and its consequences, continues to escalate. Machine learning will increasingly become pivotal in helping identify threats before they become serious security risks. In 2024, you can expect a slew of new ML-driven solutions to enter the market to help address this growing problem with IoT devices. – Mike Wilson, founder and CTO of Enzoic
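
A minimal sketch of what such ML-driven detection can look like, using scikit-learn's Isolation Forest on simulated IoT traffic features (the feature choices and values are illustrative):

```python
# Flag unusual IoT device traffic with an Isolation Forest trained on
# a device's normal baseline (features: packets/min, bytes/min).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 500], scale=[10, 50], size=(500, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of traffic far outside the device's learned baseline.
suspicious = np.array([[900, 12000]])
print(model.predict(suspicious))  # -1 means flagged as anomalous
```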

The need for reusable data will drive the adoption of data management and unification tools integrated with AI/ML capabilities: We’re on the cusp of a data renaissance where sophisticated data management and unification tools, seamlessly integrated with AI and ML capabilities, will enhance and revolutionize how we automate and deliver data products. This is about crafting certified, effortlessly consumable, and eminently reusable data assets tailored to many business use cases. We’re not just talking about making data work smarter; we’re architecting a future where data becomes the lifeblood of decision-making and operations, driving unprecedented efficiency and innovation across industries. – Manish Sood, Founder, CEO and Chairman of Reltio

Quantum Computing

Quantum Neural Networks Will Make Machines Talk More Like Humans: The development of quantum neural networks is poised to reshape the AI landscape, particularly in the domains of NLP and image recognition. Quantum-enhanced capabilities will bring about more accurate, efficient, and versatile AI models, driving innovation across industries and unlocking new possibilities for AI applications. QNNs will also address the challenges of long-range dependencies and ambiguity in language, resulting in more contextually accurate and human-like responses in conversational AI.  – Dr. Jans Aasman, CEO of Franz Inc.

In 2024, the industry will risk falling behind if it neglects “early quantum adoption”: Like the rise of AI, new and powerful technologies such as quantum computing present a large unknown that looms over the security industry. The ambiguity of not knowing whether quantum will prove to be a greater threat than an asset exposes the sobering reality that even the most technical audiences have difficulty understanding how it works. In order to adequately prepare for the quantum evolution, the security industry must avoid the faulty position of waiting to see how others prepare. Instead, it must be an early adopter of defensive protocols against quantum. – Jaya Baloo, CSO at Rapid7

Quantum computing in the future: Quantum computing will leap in scale and bring our expectations for tech into reality. CIOs should lean on the patterns of the past to prepare for the future and for the scale of processing quantum computing will bring, shrinking workloads from 20 days to 20 milliseconds. Examine the underpinning systems that went into your organization’s data gathering and security, and start preparing the infrastructure to handle the increase in load this will bring. We saw this same process with remote working: most of our applications and infrastructure weren’t originally built for remote work and had to be refactored to allow for internet speeds, mobile devices, and new applications. There has been a lot of talk about remote work causing burnout in IT, but the real root cause is that our applications weren’t built to enable remote work. We’ll see the same burnout when quantum computing takes off if our environments aren’t ready for this next evolution of tech. – Ivanti’s CIO, Robert Grazioli

In 2024, the landscape of computing will continue to experience a transformative shift as quantum computing steadily moves from theoretical promise to practical implementation. While quantum computers have amazing capabilities to solve some of our world’s greatest problems, they also pose a massive risk to today’s widely used public key infrastructure (PKI) cryptography. PKI is the foundation of practically all cryptographic protection and, as quantum computers increasingly come online around 2030, these algorithms will become vulnerable to attack. As advancements accelerate, quantum computing is anticipated to become more accessible, heralding a new era of computational power. Shifting to post-quantum cryptography (PQC) will be key to defending against quantum computing attacks. As quantum computers threaten current encryption standards, there is an urgent need to fortify our digital security against potential vulnerabilities. U.S. government regulations like the Commercial National Security Algorithm Suite (CNSA) 2.0 and the Quantum Computing Cybersecurity Preparedness Act have taken effect and mandate a switchover to quantum-resilient security algorithms starting as early as 2025 for certain critical infrastructure components. The National Institute of Standards and Technology (NIST) is also expected to release the final versions of its PQC algorithms in 2024. Simultaneously, the proliferation of quantum computing demands a parallel focus on cyber resilience, as the threat landscape continues to evolve. Strengthening infrastructure to withstand and recover from increasingly sophisticated cyberattacks will become paramount, necessitating a proactive approach to safeguarding digital assets in the quantum-powered future. Flexible solutions like FPGAs will be essential in ushering in a new wave of innovation in the industry to ensure data protection and system integrity in the face of evolving threats. – Mamta Gupta, Director of Security and Comms Segment Marketing at Lattice Semiconductor
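
For readers curious what the PQC switchover looks like in code, here is an illustrative key-encapsulation handshake assuming the open-source liboqs-python binding ("oqs"). The method names follow that binding's published examples, and the algorithm identifier ("Kyber768", since standardized as ML-KEM) depends on the installed liboqs build, so treat this as a sketch rather than a reference.

```python
# Sketch of a post-quantum key-encapsulation (KEM) handshake,
# assuming the liboqs-python binding; not a production protocol.
import oqs

alg = "Kyber768"  # a NIST-selected KEM; name varies by liboqs build
with oqs.KeyEncapsulation(alg) as receiver:
    public_key = receiver.generate_keypair()
    with oqs.KeyEncapsulation(alg) as sender:
        ciphertext, secret_tx = sender.encap_secret(public_key)
    secret_rx = receiver.decap_secret(ciphertext)
    assert secret_rx == secret_tx  # both sides share the same secret
```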

RPA, Automation, Robotics

Automation, not AI, will have a bigger enterprise impact in 2024: While AI is likely to continue making headlines next year, automation will be the more impactful technology for enterprises from an implementation perspective. The truth is that most of the world is not very automated today. If you look at any technology stack right now, you’re likely to find some poorly implemented automation and a lot of manual processes under the hood. However, as businesses look for ways to improve efficiency in 2024, most will turn to automation, particularly for their engineering and infrastructure functions. This is because automation is highly efficient and requires very few people to manage it. For many use cases, businesses can set up fully automated systems that operate just as well – if not better – than humans or even AI-augmented humans. – David Hunt, Co-Founder and CTO, Prelude Security

Automation tools will make a more visible impact on developer velocity and how developers’ work is measured. This year’s explosion of AI and ML is instigating an unparalleled transformation in business productivity expectations. In 2024, the extended accessibility to AI- and ML-driven automation tools will continue to elevate the benchmarks for code quality, reliability, and security, in response to the escalating demand for expedited software delivery. – Sairaj Uddin, SVP of Technology at The Trade Desk

Automation and AI tooling will come together to make one central “enterprise autopilot.” Infusing process mining and task mining with AI and automation will finally bring digital transformation full circle in 2024. These technologies will no longer operate as separate, but will be combined to power the full potential of automation. Enterprises that bring AI and automation together under one unified system will connect work from dispersed processes and systems to enable the intelligence and agility that businesses desperately need to keep pace with digital transformation. – Anthony Abdulla, Senior Director, Product Marketing, Intelligent Automation at Pega

Security

GenAI will prove preexisting security awareness training antiquated in 2024; organizations will modernize their programs to address these new, more sophisticated threats. With the bad actor community consuming GenAI at scale, the value of traditional security awareness training will decline rapidly. Companies will modernize security awareness programs to include continuous, user-focused controls that can better identify and defend against today's social engineering attacks, alongside real-time guidance that keeps users from accidentally falling victim to such attacks in the blink of an eye. – Curtis Simpson, CISO, Armis

Prediction: API security evolves as AI enhances offense-defense strategies: In 2023, AI began transforming cybersecurity, playing pivotal roles on both the offensive and defensive security fronts. Traditionally, identifying and exploiting complex, one-off API vulnerabilities required human intervention. AI is now changing this landscape, automating the process and enabling cost-effective, large-scale attacks. In 2024, I predict a notable increase in the sophistication and scalability of attacks. We will witness a pivotal shift as AI becomes a powerful tool for both malicious actors and defenders, redefining the dynamics of digital security. – Shay Levi, CTO and co-founder, Noname Security

Deceptive AI-Driven Techniques Will Become Prominent in 2024: The level of sophistication in cybersecurity has evolved exponentially over time, but 2023 saw some of the quickest innovation as generative AI became more prominent. Because these tools are often generally available and easily accessible, we must assess the risk they pose to the current cyber landscape. Generative AI is a double-edged sword for the cybersecurity industry: it makes defenders faster and more capable, but it does the same for adversaries. Attackers have become more deceptive in their techniques and harder to detect as generative AI gets better and better at impersonating humans, making traditional signs of social engineering harder to identify from the first point of contact. These trends will continue into 2024 and become even more dangerous. It’s important that the industry’s capabilities keep pace with attackers’ use of emerging technologies like generative AI and 5G in the coming year. – Siroui Mushegian, CIO at Barracuda

AI will play a key and growing role in how organizations analyze and act on security data: We will begin to see quantifiable benefits from the use of AI as it relates to analytics and operational playbooks. These benefits will help offload some of the heavy lifting that Security Operations Center (SOC) analysts do today. AI will also improve how response and mitigation capabilities translate into operational ones. – Mike Spanbauer, Field CTO, Security at Juniper Networks

From Productivity to Peril: AI’s Impact on Identity Security: In the broader context of digital transformation, AI has supercharged productivity like never before and opened up remarkable possibilities, such as creating realistic images, videos, and text virtually indistinguishable from human-created content. But in the upcoming year, organizations must brace for AI’s double-edged sword. This capacity for hyper-realistic content generation has profound implications, and the rise of generative AI will turbocharge identity-based attacks. The development of AI is intertwined with a broader landscape of identity-based risks and vulnerabilities, including the growing threat of phishing and spear-phishing campaigns, in which attackers target a specific person or group, often using information known to be of interest to the target; these techniques have taken on a new dimension thanks to the capabilities of AI. As we head into 2024, organizations must stay vigilant, understand the technology’s risks, invest in advanced security measures, and develop a complete picture of their identity infrastructure to stand a chance against threat actors. – Silverfort’s Co-founder and CTO, Yaron Kassner

AI will drive the adoption of proactive security models. There will be a greater focus on proactive approaches and tools, including firewalls, zero trust, malware protection, and hardening. The top GenAI threat issues are growing privacy concerns, undetectable phishing attacks, and an increase in the volume/velocity of attacks. Addressing the complex security challenges AI poses requires strategic planning and proactive measures. On O’Reilly’s learning platform, we have seen a huge increase in interest in most security topics. Governance, network security, general application security, and incident response have shown the largest increases. Security is on the map in a way that it hasn’t been in many recent years. – Mike Loukides, Vice President of Emerging Tech Content at O’Reilly Media

Secure data sharing becomes the linchpin in robust and resilient Generative AI-driven cyber defenses: Generative AI is a dual-use technology with the potential to usher humanity forward or, if mismanaged, regress our advancements or even push us toward potential extinction. APIs, which drive the integrations between systems, software, and data points, are pivotal in realizing the potential of AI in a secure, protected manner. This is also true for AI’s application in cyber defenses. In 2024, organizations will recognize that secure data sharing is essential to building a strong, resilient AI-powered future. While AI is undoubtedly a testament to human ingenuity and potential, its safe and ethical application is imperative. It’s not merely about acquiring AI tools; it’s about the responsibility and accountability of secure integration, particularly when facilitated through APIs. – Ameya Talwalkar, CEO and Founder of Cequence Security

AI-Driven Attacks and Defenses: Cybercriminals will increasingly use artificial intelligence (AI) to automate and enhance their attacks. In response, cybersecurity defenses will rely more on AI and machine learning for threat detection and automated incident response, creating a continuous battle of algorithms. – Joseph Carson, Chief Security Scientist and Advisory CISO at Delinea

Threat actors will win the AI battle in 2024: The rise of generative AI has ignited a critical debate. Will organizations harness generative AI in time, or will threat actors exploit it faster to gain an advantage? Unfortunately, the scales will tip in favor of the dark side as threat actors outpace organizations in adopting generative AI. Brace for a relentless onslaught of deepfakes, sophisticated phishing campaigns, and stealthy payloads that evade endpoint security defenses. These challenges will test the mettle of cybersecurity defenders like never before. – Dr. Aleksandr Yampolskiy, Co-Founder and CEO of SecurityScorecard

AI is already providing a tremendous advantage for our cyber defenders, enabling them to improve capabilities, reduce toil and better protect against threats. We expect these capabilities and benefits to surge in 2024 as the defenders own the technology and thus direct its development with specific use cases in mind. In essence, we have the home-field advantage and intend to fully utilize it. On the other hand, while our frontline investigators saw very limited use of AI by attackers in 2023, in 2024 we expect attackers to use generative AI and LLMs to personalize and slowly scale their campaigns. They will use anything they can to blur the line between benign and malicious AI applications, so defenders must act quicker and more efficiently in response. – Phil Venables, CISO, Google Cloud

With the risk of cybersecurity attacks on the rise, in 2024 it is crucial for governments to take a proactive approach to security: making sure that their official channels of communication with residents cannot be exploited or compromised, and being very deliberate about any sensitive information they obtain from residents in the first place. The major piece everyone should focus on is setting up multifactor authentication (MFA) to make it as hard as possible for a threat actor to get into a communication system like Facebook or X (formerly Twitter). – Ben Sebree, Senior VP of R&D at CivicPlus
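
As a small illustration of the MFA mechanics behind that advice, here is the time-based one-time-password (TOTP) flow sketched with the pyotp library; in practice the secret is provisioned to the user's authenticator app, typically via a QR code.

```python
# Time-based one-time passwords (TOTP), the mechanism behind most
# authenticator-app MFA.
import pyotp

secret = pyotp.random_base32()   # provisioned to the user's app
totp = pyotp.TOTP(secret)

code = totp.now()          # what the authenticator app displays
print(totp.verify(code))   # True: server-side check of the code
```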

New Malicious Uses of Generative AI Will Emerge: AI fears are warranted, but not in the way we expect. Content creation, while a risk to cybersecurity, is one that our modern solutions can address. The real threat is generative AI developing the ability to plan and orchestrate attacks. If that were to happen, it would mean that AI could design and execute attacks on the fly, using information on the Internet. Generative AI promises to erase the greatest advantage we have over our adversaries: the time and resources required for sophisticated attacks. If generative AI can orchestrate attacks, it will shift the balance of power dramatically. Today, it takes hackers weeks to discover our vulnerabilities. Tomorrow, AI could do the same in a matter of seconds or minutes. And rather than requiring a team of hackers with diverse skill sets, it could take just one person working with AI. – Adrien Gendre, Chief Product and Technology Officer at Vade

It’s easy to view the cybersecurity implications of bad actors fine-tuning LLMs for nefarious purposes through an extremely negative lens. And while it is true that AI will enable hackers to scale their work, the same holds true for security professionals. The good news is that national governments aren’t sitting still. Building custom LLMs represents a viable path forward for security-focused government agencies and business organizations. While only the largest, well-funded big tech companies have the resources to build an LLM from scratch, many organizations have the expertise and resources to fine-tune open-source LLMs to mitigate the threats that bad actors, from tech-savvy teenagers to sophisticated nation-state operations, are in the process of building. It’s incumbent upon us to ensure that whatever is created for malicious purposes, an equal and opposite force is applied to create the equivalent toolsets for good. – Aaron Mulgrew, Solutions Architect, Forcepoint

Since 2022, we’ve witnessed a notable transformation in the data security and regulatory compliance technology landscape. This shift has marked the onset of technology consolidation that is expected to persist in the coming years. Niche and single-solution products and vendors are increasingly sought after for acquisitions and partnerships as consumers seek solutions to meet data security and regulatory requirements while minimizing required expertise, costs, and effort. What will make 2024 particularly interesting are the recent developments and acquisitions that currently suggest vendors will take some diverging paths in the new year – some will prioritize enhancing their cloud capabilities, while others will merge existing technologies for a consolidated, all-in-one offering. Others are simply looking to answer questions of data risk. There will be overlaps across these developments, but the ultimate winner will be consumers, who will see substantial growth in enterprise data asset coverage, reduced skill requirements, and improved synergy among technologies that were traditionally segmented. – Terry Ray, SVP Data Security GTM, Field CTO of Imperva

Trust in data and AI will be paramount to making smart decisions, but that trust can only come through understanding. CEOs will need to understand how their company collects and structures data, where their data infrastructure could improve, and its limitations to be able to effectively use AI in 2024. That means data infrastructure, quality, security and integrity can’t simply be delegated to the CTO, CIO or CDO. CEOs must be intimately familiar with what they are putting into AI, in order to act on what comes out of it with the appropriate context. – Allison Arzeno – CEO, Assurance IQ

The surging investments in AI will trigger a momentous shift in AI security, reshaping the landscape of technological safeguarding: In 2024, as the investment in AI continues to surge, a pivotal shift will unfold in the realm of AI security. With AI models, particularly large language models and generative AI, being integrated into every facet of the software chain across diverse industries, the demand for safeguarding these technologies against evolving threats like prompt injection and other malicious attacks will reach unprecedented levels. Despite the relative novelty of these advancements, the imperative for stringent security measures will gain traction, marking a watershed moment in the journey of AI technology. As we continue to grapple with the uncharted territory of immense data and new challenges, we will witness a concerted effort to fortify the boundaries and ensure the responsible growth of this transformative technology. – JP Perez-Etchegoyen, CTO, Onapsis 

In 2024, there will be a transition to AI-generated tailored malware and full-scale automation of cyberattacks: Cybersecurity teams face a significant threat from the rapid automation of malware creation and execution using generative AI and other advanced tools. In 2023, AI systems capable of generating highly customized malware emerged, giving threat actors a new and powerful weapon. In the coming year, the focus will shift from merely generating tailored malware to automating the entire attack process. This will make it much easier for even unskilled threat actors to launch successful attacks. – Adi Dubin, Vice President of Product Management, Skybox Security

Generative AI will become the largest proliferator of shadow IT: Traditional concerns around shadow IT revolved primarily around cost control, but this year, with the unsanctioned use of generative AI services rapidly growing within the enterprise, that risk has expanded to intellectual property and customer data being exposed outside the organization. In 2024, we can expect businesses without strong AI compliance policies and visibility into the tools their employees use to experience higher rates of PII exposure. I also expect at least a couple of incidents of proprietary source code being inadvertently used to train AI models not under IT’s control. My hope is that this will be a significant wake-up call for tech teams and business leaders at large about the urgent need for a proactive, enforced plan around responsible generative AI use. – Heath Thompson, President & GM, Quest Software

AI-Centric Surveillance Systems: Safety and Security: In the case of a security incident, traditional video surveillance systems require someone to review many hours of footage to find key incidents, a time-consuming process that can delay response. The video surveillance industry is poised to transform, with traditional systems evolving into comprehensive AI-driven security solutions. These systems will still record video footage, but they will do far more to enhance safety and security. This shift reflects the fact that customers are less interested in video itself and more concerned about preventing and addressing security issues. Leveraging machine learning, algorithms, and computer vision, AI safety and security systems will efficiently process and interpret video content, enabling real-time threat detection. These AI-driven security systems are set to become the norm, delivering intelligent, proactive solutions that minimize problems and enhance overall security across environments including homes, businesses, and government agencies. – Dean Drako, CEO of Eagle Eye Networks

The emergence of “poly-crisis” due to pervasive AI-based cyberattacks: We saw the emergence of AI in 2022, and we saw the emergence of AI misused as an attack vector, making phishing attempts sharper and more effective. In 2024, I expect cyberattacks to become pervasive as enterprises transform. It is possible today to entice AI enthusiasts to fall prey to AI prompt injection. Come 2024, perpetrators will find it easier to use AI to attack not only traditional IT but also cloud containers and, increasingly, ICS and OT environments, leading to the emergence of a “poly-crisis” that threatens not only financial impact but also human life, simultaneously and in cascading effects. Critical computing infrastructure will be under increased threat due to growing geopolitical tensions. Cyber defense will be automated, leveraging AI to adapt to newer attack models. – Agnidipta Sarkar, VP CISO Advisory, ColorTokens

Security programs for generative AI: As companies begin to move generative AI projects from experimental pilots to production, concerns about data security become paramount. LLMs that are trained on sensitive data can be manipulated to expose that data through prompt injection attacks. LLMs with access to sensitive data pose compliance, security, and governance risks. The effort around securing LLMs in production will require more organizational focus on data discovery and classification, in order to create transparency into the data that ‘feeds’ the language model. – Dan Benjamin, CEO and Co-Founder of Dig Security
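
A hedged sketch of that data discovery and classification step might look like the following: screen text for sensitive patterns before it is allowed into training data or prompt context. The regexes are deliberately simplistic stand-ins for a real classification engine.

```python
# Classify documents for sensitive data before they reach an LLM.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> list[str]:
    """Return the sensitive-data labels found in a document."""
    return [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

doc = "Contact jane@example.com, SSN 123-45-6789."
labels = classify(doc)
if labels:
    print(f"Blocked from training/prompt context: {labels}")
```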

By the end of 2024, 95% of consumers in the U.S. will have fallen victim to a deepfake: Every company and consumer is jumping on the AI bandwagon, and fraudsters are no exception. Cybercriminals have previously found ways to cheat the system. Earlier in 2023, they were found bypassing ChatGPT’s anti-abuse restrictions to generate and review malicious code. Now ChatGPT is fully connected to the internet and can generate images, a recipe for the perfect deepfake. In 2023, 52% of consumers believed they could detect a deepfake video, reflecting serious overconfidence: deepfakes have become highly sophisticated and practically impossible to detect by the naked eye, and generative AI now makes their creation easier than ever. Misinformation is already spreading like wildfire, and deepfakes will only get more complicated with the upcoming elections. By the end of 2024, the vast majority of U.S. consumers will have been exposed to a deepfake, whether they knew it to be synthetic media or not. – Stuart Wells, CTO, Jumio

Storage

AI will accelerate storage and security requirements: By nature, generative AI models produce a vast amount of data. Because of this, in the upcoming year organizations can expect a surge in their data storage and security needs, leading to investments in scalable storage solutions, whether on-premises, cloud-based, or hybrid. The dynamic, continuous production of AI-generated data will necessitate more frequent backup cycles, and enterprises will need to implement more robust data lifecycle management solutions to set data retention, archival, and deletion policies, ensuring that only valuable data is stored long-term. Ensuring the integrity of backups will also be paramount given the business-critical nature of AI-generated insights. And because AI-generated data can be sensitive and critical, heightened security measures will be the last piece of the accelerated storage puzzle: data security will need to be woven into the fabric of all generative AI projects, including prevention, detection, and data recoverability. – Tony Liau, VP of Product at Object First
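
As a toy sketch of the lifecycle policies described above, the function below routes artifacts to retain/archive/delete tiers by age; the thresholds are illustrative assumptions, not recommendations.

```python
# Route AI-generated artifacts to storage tiers by age, per a simple
# retention policy. Thresholds are illustrative.
from datetime import datetime, timedelta

def retention_tier(created: datetime, now: datetime) -> str:
    age = now - created
    if age < timedelta(days=30):
        return "retain"    # hot storage, frequent backups
    if age < timedelta(days=365):
        return "archive"   # cold, cheaper storage
    return "delete"        # aged out per policy

print(retention_tier(datetime(2024, 1, 1), datetime(2024, 3, 1)))
```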

Synthetic Data

AI-Generated Data: Data has long been viewed as a trustworthy and unbiased basis for smart decisions. As we tackle the rise of AI-generated data, organizations will need to spend time and oversight validating that data or risk hallucinated results. Another large implication of these datasets is the risk of data being modified in cyberattacks, the results of which would be catastrophic. We rely on correct data to vote, receive government services, log in to our work devices and applications, and make informed, data-driven decisions. If an organization’s or government’s data has been modified by threat actors, or if we place too much trust in AI-generated data without validation, there will be widespread consequences. – Ivanti’s Chief Product Officer, Sri Mukkamala
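
One simple, well-established validation primitive for the tampering risk described here is content fingerprinting: record a cryptographic hash when a dataset is certified, then verify it before use. A minimal sketch:

```python
# Detect dataset tampering by comparing SHA-256 fingerprints.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

dataset = b"city,votes\nSpringfield,1042\n"
expected = fingerprint(dataset)        # recorded at certification time

tampered = dataset.replace(b"1042", b"9042")
print(fingerprint(tampered) == expected)  # False: modification detected
```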

Verticals/Applications

Strong data engines will make financial data movement possible: Financial organizations are just starting to realize the potential their data holds, using it for guidance in financial planning and analysis, budgetary planning, and more. However, much of this data is still siloed, and these organizations now hold so much of it that they need to start thinking about how it can bring value to the company or risk losing their competitive advantage. In 2024, we will see finance organizations seek to classify and harmonize their data across repositories to enable new solutions. In response, data engines, data platforms, and data lakes will be just a few of the tools that become crucial to understanding and utilizing such data effectively. As a result, we can expect to see the growth of fintech applications that enable this aggregated data analysis, reporting, and visualization. – Bernie Emsley, CTO, insightsoftware

AI will be the driving force behind the cultivation of a continuous learning culture within contact centers in the coming year, enhancing agents’ critical thinking abilities. Recognizing the role of adaptability, contact center managers will allocate funds to training initiatives that empower agents to adjust to evolving challenges, and recognize these skills as essential for future productivity. More than 60% of managers feel that critical thinking is a top skill needed by the agents of the future. Recruitment strategies will pivot towards individuals exhibiting robust critical thinking skills and a proactive willingness to continuously acquire new skills. – Dave Hoekstra, Product Evangelist, Calabrio 
