This multi-part article series by our friends at Swim is intended for data architects and anyone else interested in learning how to design modern real-time data analytics solutions. It explores key principles and implications of event streaming and streaming analytics, and concludes that the biggest opportunity to derive meaningful value from data – and gain continuous intelligence about the state of things – lies in the ability to analyze, learn and predict from real-time events in concert with contextual, static and dynamic data. This article series places continuous intelligence in an architectural context, with reference to established technologies and use cases in place today.
Artificial intelligence has made astonishing technological advances in recent years, and more companies are turning to AI to improve internal functions and unlock the potential of enterprise datasets. IDC has characterized AI as “inescapable” and estimates that by 2025, at least 90% of new enterprise apps will embed AI. But getting to the right models to effectively power AI is hard – and especially hard for speech. To address these needs, Deepgram has announced Deepgram AutoML, a new training capability that streamlines AI model development, reducing manual cycles for data scientists while delivering the best possible accuracy.
In this contributed article, editorial consultant Jelani Harper discusses how machine learning models are no longer confined to the data scientist’s sandbox. When coupled with RPA, any business user can leverage machine learning and other vanguard technologies to overcome the difficulties of unstructured data, as well as additional challenges in the ever-evolving data ecosystem.
In this contributed article, Pawel Stopczynski, Researcher and R&D Director at VAIOT, believes that armed with technology, lawyers are already more efficient than before. The trend toward digitizing law is poised to continue as robot lawyers become smarter and more capable. The question is, will robot lawyers lie like humans?
In this special guest feature, Bryan Friehauf, EVP and GM of Enterprise Software Solutions at Hitachi ABB Power Grids, discusses how preventing information overload and an organizational attention deficit is crucial for every business. For the energy industry, despite having access to more data than ever, organizations struggle to make use of the right data. This becomes an issue for critical systems like asset performance management (APM).
A group of AI researchers from DarwinAI and the University of Waterloo announced an important theoretical development in deep learning: “attention condensers.” The paper describing this advancement is “TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices,” by Alexander Wong, et al. Wong is DarwinAI’s CTO.
ContrAI is a modular, web-first product that uses AI to aid contract management and analysis. The contract processing required to train the ContrAI system was due to enter its second phase in May this year, following the initial categorization process at the end of last year. However, the lockdown meant that it would no longer be possible to bring together groups of legal students to implement this in person. As a result, the online Clause Game was developed to enable the contracts to be marked up remotely, providing essential data to train the algorithm.
In this special guest feature, Emily Kruger, Vice President of Product at Kaskada, discusses the topic that is on the minds of many data scientists and data engineers these days, maximizing the impact of machine learning in production environments. Data scientists and data engineers need integrated tools that speed the development and delivery of ML-powered products. ML platforms are an emerging solution embraced by many enterprises and are purpose-built to help get ML to production efficiently and reliably.
RL is a hugely popular area of deep learning, and many data scientists are exploring this AI technology to broaden their skill set to include a number of important problem domains like chatbots, robotics, discrete optimization, web automation and much more. As a result of this widespread interest in RL, there are many available educational resources specifically tailored to this class of deep learning – boot camps, training certificates, educational specializations, etc. But if you’re a data scientist who has been programming in Python for a while, and has some experience with other forms of deep learning using a framework like TensorFlow, then this new book, “Deep Reinforcement Learning Hands-On,” by Maxim Lapan, might be a great way to kick-start yourself into becoming productive with RL.
I recently caught up with Andy Horng, Co-Founder and Head of AI at Cultivate, to get a sense for the technology underlying the company’s AI-powered leadership development platform. NLP plays an important role, and the company is using the RoBERTa language model with very good results.