Heard on the Street – 11/15/2023
Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
ChatGPT Anniversary. Commentary by Balaji Ganesan, CEO and co-founder of Privacera
“ChatGPT’s official launch last November brought generative AI to the mainstream, underscoring its significance and potential impact well beyond its previous deep tech or academic testing grounds. Over the past year, we’ve seen enterprises across all industries increasingly looking to leverage generative AI to optimize efficiencies and innovate. In fact, a recent Privacera survey found that 96% of businesses are pursuing generative AI to gain a competitive edge. Amidst all the excitement about generative AI, enterprises must also prioritize safeguarding sensitive data and implementing universal data security and governance to secure generative AI applications and their underlying data. Now, with the new Executive Order released by the White House, the significance of generative AI safety, data security, and governance cannot be overstated. Striking the right balance between innovation and safeguarding sensitive data is critical.”
GitHub Universe keynote – vendor tool consolidation risk. Commentary by Peter Guagenti, President of Tabnine
“We applaud GitHub for the product features they have demonstrated … It is bringing to life the vision for an end-to-end AI-enabled software development process that many of us have been pursuing for the last five years. The ability to leverage AI to eliminate or automate mundane tasks and free up engineering teams to do more creative and more valuable work is clearly here.
However, caveat emptor. What was unsaid but clearly demonstrated today was another way for Microsoft to create top-to-bottom vendor lock-in and gain greater access to and control over your intellectual property and user data.
By vertically integrating OpenAI, GitHub’s access to workflow, Copilot’s AI agents, and Azure infrastructure, Microsoft has been able to deliver a seamless experience, but one built on having unfettered access to your codebase, your developers’ usage, and other potentially confidential and proprietary information. That may be acceptable to some companies, but how much access and control you are giving to a single vendor at that scale should be seriously considered.
In addition, the AI agents triggering tests and deployments demonstrated another vendor risk: rapid expansion of cloud spend without the ability to predict or limit that expense. Cloud service providers are already well known for complicated and obscure billing, and many customers complain about big surprises in cloud costs. Agents that spin up compute should ideally honor a company’s choice of compute platform – especially if you operate a private cloud, hybrid cloud, or multi-cloud architecture, where costs are more easily managed and constrained.
Microsoft has shown its cards – this is an on-ramp to a monolithic Azure-based platform. Microsoft made a huge investment in OpenAI because they want to build the algorithms (on Azure) and run them embedded in every single product. Controlling a fully integrated developer workflow allows Microsoft to make sure that at every step of the software development life cycle the effort to use any other vendor is too challenging or too complicated. Monopolization and vendor lock-in are never a good thing. As we have seen in the past 20 years, the most vibrant innovation has come from open source, freedom of choice, and user control.
We believe (and Gartner and IDC also recommend) that you carefully and cautiously consider who has access to your code and data usage. No matter how exciting and potentially rewarding our new generation of tools may be, don’t just rush into the critical decision of selecting AI partners. Take the time and effort to evaluate, select, and deploy AI in ways that protect your policies and your most valuable assets – your code and your data.”
Senate AI election forum. Commentary by Dr. Srinivas Mukkamala, CPO, Ivanti
“Like any new technology, AI has the potential to be an enormous force for good, but it also presents serious challenges and threats. This is especially true during election cycles. The election forum marks one year until the 2024 presidential election, so time is of the essence in addressing concerns about the impact AI could have on elections. In terms of regulation, we cannot be too careful: the threat of deepfakes, misinformation and bias is present as AI models continue to rapidly advance and become more widespread. Tech leaders must work with government officials to ensure proper regulation and education are in place as we move into an election year.”
ChatGPT Enterprise still needs guardrails. Commentary by Mike Myer, CEO of Quiq
“Kudos to OpenAI for releasing an enterprise product that allows enterprises to use ChatGPT, which represents an important step forward. However, this doesn’t make ChatGPT ready for the enterprise.
Earlier, many businesses using the consumer version of ChatGPT in a careless manner for key business functions inadvertently created a massive security issue, with company data potentially exposed to AI models. Now, the release of ChatGPT Enterprise seeks to resolve these concerns by ensuring company data will not be used in any way for training its models, preventing sensitive information from reappearing elsewhere or being shared inappropriately. Unfortunately, other long-standing problems remain unaddressed.
It is laudable that OpenAI expanded and formalized its policy not to use any data entered via its APIs for training purposes for the enterprise level. However, the original problems that invoked the need for guardrails in the first place still need to be addressed. Most importantly, ChatGPT Enterprise is still connected to open web search; thus, the issue of returning answers that are hallucinations or simply wrong will continue. In short, all the original data problems are the same, with the important distinction that ChatGPT Enterprise will not use enterprise data to train its own models.”
Maturity Model for Adopting Generative AI. Commentary by Bryan Kirschner, VP of Strategy, DataStax
“When AI mastery becomes the essential market differentiator, understanding your organization’s progress becomes a critical way to chart your next steps. In order for organizations to evaluate the generative AI readiness of their technology stack, plan for the future, and identify investment opportunities, they must reference a Generative AI Maturity Model. This model is composed of four key “arcs”: Contextualization, Architecture, Culture & Talent, and Trust. Each arc contains milestones that guide organizations on their AI journeys.
Contextualization is key to understanding the situation a model is trying to affect. The arc starts with an important baseline assumption – that individuals’ privacy is preserved – and from privacy it moves through a series of features, technologies, and capacities that help organizations add context to models. After some research, we found that organizations will often skip key milestones on this trajectory and make investments further along the arc. For example, certain organizations employing predictive models for their business planning have yet to incorporate vector search across their data repositories. Wise leaders should consider revisiting the milestones they may have overlooked.
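To make the vector search milestone mentioned above concrete, here is a minimal, editorial sketch (not part of the commentary) of ranking documents by cosine similarity between embeddings. The document names and embedding values are invented for illustration; a production system would generate embeddings with a model and use a purpose-built vector database.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query, documents, top_k=2):
    """Rank documents (id -> embedding) by similarity to a query embedding."""
    scored = [(doc_id, cosine_similarity(query, emb))
              for doc_id, emb in documents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy 3-dimensional embeddings; real ones come from an embedding model.
docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.8, 0.2],
    "press-release": [0.0, 0.2, 0.9],
}
# A query embedding close to the "refund-policy" document.
results = vector_search([0.85, 0.15, 0.05], docs, top_k=1)
```

The point of the milestone is exactly this retrieval step: surfacing the organization’s own documents as context for a model rather than relying on the model’s parametric knowledge alone.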
A significant portion of the AI revolution hinges on the technological architectures that underpin advanced, increasingly real-time workloads. If your infrastructure lacks support for specific features, you’ll be limited in how far you can go with AI. Some leaders have raised doubts about making event streaming the initial milestone in the architecture arc, but doing so is essentially an acknowledgment that real-time AI necessitates real-time messaging solutions, like Apache Pulsar. In reality, achieving a fully real-time data pipeline is contingent on incorporating event streaming.
People and culture should be a fundamental consideration for any ambitious AI initiatives. It’s crucial to recognize that not only do workers often fear being replaced by technology, but they also play a pivotal role in facilitating technological transformations. The ongoing transitions we are witnessing are not primarily about widespread job displacement; they’re about companies that have workers with AI replacing companies that have workers without AI. That’s why a significant aspect of this journey guides leaders in the integration of AI with existing processes, promotes increased transparency, and advocates for a rights-based approach that empowers individuals while fostering group cohesion.
Lastly, if an organization loses trust as they pivot to AI, it will be difficult to recover. Trust can be established, built, maintained, and eroded. Starting with governance establishes a level playing field where trust is possible—but not a given. Ensuring security and providing transparency to internal stakeholders are necessary conditions for expanding trust.”
To Achieve Data-Enabled Missions, Technical & Mission Experts Should Join Forces. Commentary by Dan Tucker, a Senior Vice President at Booz Allen Hamilton
“Over the past year, technological transformations have rapidly shifted generative AI from a once specialized tool to one that’s now being used widely across industries and in the private and public sectors. As a result, technical barriers are being lowered, allowing more non-experts to leverage generative AI’s capabilities to apply valuable insights from data to solve complex problems. For federal government agencies, generative AI has the potential to transform the way they serve the nation and its citizens. With this potential also comes new challenges for federal agencies that are looking to take full advantage of data to enhance decision-making and advance their vital missions.
Three primary challenges for generative AI in the public sector are security (knowing where the data is going), trustworthiness (the accuracy of the AI responses and ensuring there are no hallucinations), and bias (addressing and removing its impacts). Thankfully, combining technical and mission expertise can address these challenges. For example, technology vendors are providing private large language models (LLMs) for agencies and corporations which address regulatory compliance controls to help combat security issues. Plus, many LLMs are now providing sourced, cited responses to address trustworthiness issues. To combat bias, models are being trained and tested for accuracy and bias by mission experts and customer experience (CX) professionals prior to broad release. The collaboration between technologists who are skilled in AI with mission experts and those trained in human-centered design can ensure that the right questions are being asked of AI and the right challenges are being targeted in the most technically effective ways.
Ultimately, to make the promise of generative AI a reality, federal agencies should end the practice of locking data in silos. The data that’s needed to understand the world’s most critical challenges is out there, but it must be liberated, collected, integrated, shared, understood, and used to deliver better mission outcomes for people and communities. When federal missions are underway, it is often the speed and efficiency with which information is shared that ultimately determines if a citizen has a good experience interacting with the government. Therefore, it is imperative to ensure processes are optimized and that data is being leveraged as a means of streamlining goals to ensure success.”
ChatGPT’s New Search Feature. Commentary by Daniel Malek, Head of Business Development at Intango
“If ChatGPT’s browsing capability becomes popular and scales quickly, search-focused performance advertisers will face a crunch. Campaigns built for traditional search engines may lose traction as users shift towards more interactive, conversational search experiences. It’s a wake-up call for advertisers to adapt or risk obsolescence.”
Don’t Fear AI – Responsible Use Helps Avoid Cyberattacks. Commentary by Ben Sebree, Senior Vice President of Research and Development at CivicPlus
“While AI drives innovation across various industries, it also presents security risks. However, we can actually protect against cyberattacks by leveraging AI solutions. For example, you can utilize AI to strengthen data security by implementing advanced encryption algorithms, user behavior analytics, and anomaly detection techniques – identifying potential vulnerabilities and enabling proactive measures to mitigate risks. In addition to protecting, you can also predict. By leveraging AI to analyze data and identify potential vulnerabilities and weak points in the security infrastructure, predictive analytics can help assess risks and prioritize resources for enhanced protection. In a situation where the “bad guy” does succeed with a cyberattack, AI can also help respond to the incident promptly. These tools can detect and respond to threats in real time, minimizing the damage caused by cyberattacks. While the risks of AI-driven cyberattacks are real, responsible use of AI can be a powerful tool in preventing such threats.”
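As an editorial illustration of the anomaly detection technique the commentary above refers to (not an example from CivicPlus), here is a minimal sketch that flags outliers in a security metric by z-score. The failed-login counts are hypothetical, and real deployments would use far richer behavioral models.

```python
import statistics

def detect_anomalies(values, threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)  # sample standard deviation
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Hypothetical hourly failed-login counts; the spike in the final hour
# is the kind of pattern an AI-assisted monitor would surface for review.
failed_logins = [3, 5, 4, 6, 2, 4, 5, 3, 4, 6, 5, 90]
suspicious_hours = detect_anomalies(failed_logins)
```

A flagged hour would then feed the response side the commentary describes: alerting, rate limiting, or lockout before an attack escalates.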
AI will impact the future of CX. Commentary by Cristina Fonseca, Head of AI, Zendesk
“The significance of AI in enhancing customer satisfaction is pivotal. It has redefined experiences by transforming interactions, making them more personal, swift, and efficient. As we think about the future of CX, it’s clear that AI will continue to evolve. These advancements will present more opportunities to perfect interactions, setting a higher standard for customer experience.
Many companies are starting to recognize how AI can improve customer interactions and are actively exploring its use. One reason there’s so much potential is that AI tools can simplify work for agents by automating routine, repetitive tasks and providing data insights that improve decision-making. As AI technologies like support systems, chatbots, and virtual assistants improve, they’re becoming more accepted by customers.
When we think about AI applications, it’s clear AI is making significant changes in the e-commerce world for retailers, agents, and shoppers. It’s changing how products are recommended, making shopping experiences more personal, and improving inventory management. Companies are using AI to create more empathetic customer experiences. AI-powered chatbots can analyze customer feelings and respond empathetically, leading to improved interactions. AI is also making shopping more personal, which is increasing customer engagement, sales, and loyalty – a move toward more customer-focused digital retail. A new era in CX is undoubtedly on the horizon, supported by the ongoing development of AI.”
AI + UX. Commentary by Nitzan Shaer, CEO and co-founder of WEVO
“The workforce’s future roles and responsibilities are tied to AI. While there are understandable concerns around AI’s growth and potential unintended consequences, we see transformative potential for productivity and progression that enables human creativity to vet digital experiences and embrace design that improves the human experience. A more prosperous and brighter digital future requires a new approach to building AI-driven experiences – from financial services to retail – that support people and meet their fundamental needs. Those needs start with user experience.
As companies look to AI for operational benefits and growth potential, they should consider AI’s increasing role in defining the internet and user experience. Generative AI, AI-driven customization and even AI purchase decisions are transforming the internet. To enable growth and innovation, it’s pivotal for enterprises to embrace human-augmented AI to elevate their customer and user experiences. For enterprises across the board, these improvements in user access create economic benefits and serve society overall.”
AI will drive hyper-personalized travel through ‘micro-segmentation.’ Commentary by John Lyotier, CEO of TravelAI
“There’s no doubt that AI is touching virtually every industry, and travel is no exception. A Skift + McKinsey report recently noted that travellers have increased time spent on digital devices by 70 percent since 2013. Online travel agencies have jumped on generative AI to create travel assistants and chatbots. But that’s just the surface experience. Behind the scenes, data engines are gathering every input. Every keystroke leaves behind a trail of travel data that helps providers understand the nuances of people searching for travel options, and AI translates those data points into actions.
The result will be micro-segmentation, dividing travellers into hyper-niche groups based on their individual personas. Travel providers will have a granular view of their customers. Travel has always been about forging human connections. Now with AI and data analytics, agencies can merge machine efficiency with human warmth and creativity to achieve personalization, tailoring content on a grand scale so that it is specifically relevant on a personal level. Offering the right flight, the right property, the right package to the desired destination at the right time to the very person who’s searching for that ‘just right’ trip – that’s how the industry can benefit from AI’s advances.”
EU officials responsible for AI regulation warned against being ‘paranoid.’ Victor Botev, CTO and Co-founder of Iris.ai
“As the EU continues developing its AI Act, it’s crucial we get the balance right between safeguarding citizens and fostering innovation. While safety is paramount, regulators must be aware that unnecessary restrictions could hinder Europe’s AI progress.
Any regulation must be grounded in technical realities. Simply banning certain techniques outright could backfire, stymying progress rather than improving practices. There is a middle ground between fear of AI and letting it run wild. Regulations should encourage transparency and explainability in AI, not only for the safer development of the technology, but also for building public trust in the technology.
If developers better understand how their systems operate and impact society, they can more proactively address issues. Through better communication with the public around AI development we can have more grounded and constructive conversations on regulation. The EU has a chance to lead in establishing ethical AI standards. With care and wisdom, we can achieve safety without sacrificing progress.”
How businesses can efficiently apply AI to keep customers happy. Commentary by Tyler Ashby, COO of Agents Only
“Generative AI has ushered in excitement, creativity, and heightened investment, redefining the boundaries of what’s possible with AI-driven innovations. Combining analytical AI with the functionality of generative AI has companies dreaming of streamlined processes and customer-facing applications that will reduce the need to rely on humans – and yet early adopters of an ‘All AI’ customer service strategy are facing a degradation in service quality, not an improvement. With its ability to ‘hallucinate’ data, inherent biases from training, and the opaque nature of its decision-making, we tread on uncertain grounds. In the customer service industry, a training adage is “don’t practice on customers” – referring to fully training an agent before they handle live interactions – and yet with AI we are allowing it to ‘practice’ with customers, creating negative outcomes.
There’s a genuine concern about depersonalization, as human creativity and intuition have long been the most valuable part of a customer interaction. The biggest challenge with utilizing AI doesn’t come from the AI technology itself, but from the ineffective service design of the company or product. Companies relying heavily on AI are hurt by their policies, processes, and product expectations. Historically, it’s been the human touch that has compensated for deficiencies, leveraging interpersonal skills to bridge gaps when a company’s product falls short. AI risks amplifying these shortcomings by providing rigid, unsatisfactory responses without the nuance and understanding that humans bring.
Embracing a combined approach of AI with human intervention is the strategic linchpin for forward-thinking companies. By harnessing AI, businesses can efficiently deflect high-volume, routine inquiries, enabling customers to self-serve, while human agents are augmented to handle complex tasks, thus maximizing their intrinsic value. Staffing flexibility will become the key to hitting digital cost improvements without impacting customer experience. Just as companies need technology solutions for their AI implementation, they will need just as sophisticated software to manage the human piece of the solution. The ‘Agents Only’ gig platform helps companies achieve a balanced blend of automation and authentic human touch while advancing the digital roadmap.”