History of AI: The Library of Man with Generative AI

July 15, 2024 Data & AI

Artificial intelligence (AI) is a game-changer in the world of technology. It has revolutionised many industries and areas of human life, and one of its most compelling applications is generative AI. Libraries, repositories of knowledge and innovation, are now taking advantage of generative AI to improve their offerings and provide new experiences to their patrons.

Libraries have a long and distinguished history of innovation, change and progress. They are centres of excellence, where history, science, philosophy and the arts are studied. According to legend, the Syracusan inventor Archimedes devised the Archimedes screw, a pump for raising water, while studying at the Library of Alexandria.

As places where humanity’s knowledge has resided since antiquity, libraries—until very recently—were the best places to access our collective record of change and innovation, and to inspire thought and hypotheses that build on lessons and insights from thousands of years of study.

This started to shift with the advent of the internet and the digitisation of the knowledge contained in the manuscripts and books held by these institutions. No longer did you need to venture to your library to access the collective insights of man. No longer was this knowledge held by the few, often behind social and economic barriers to entry. You can now study everything from the atom to zoology, anywhere in the world, online.

Democratisation of Knowledge

This democratisation of knowledge is one key driver pushing the pace of innovation forward. People from across the globe can now access knowledge with a much lower barrier to entry. For social, scientific and philosophical progress, this is one of the “better” aspects of the Anthropocene. We are now sharing knowledge and ideas across borders, with more people, and in turn raising the number of ideas collectively generated. Mass communication of knowledge and data has enabled innovation and collaboration on a truly international scale: the International Space Station, the Large Hadron Collider at CERN and the COVID-19 vaccines, to name a few.

With all of this though, accessing the right information, data, knowledge—whatever you want to call it—can be a cumbersome task. Take, for instance, the first image of a black hole. The data volumes from each of the telescopes tasked with imaging the small patch of sky where the black hole sits were so large that it was quicker to transport them on hard drives by plane.

So, great, we have democratised this data. But if data is the new oil, then the knowledge and insights drawn from that data are the refined fuel that runs businesses and organisations worldwide. Extracting this knowledge has traditionally been performed by humans: subject matter experts and business process experts manually pull the knowledge out of the raw data. Think of this as using the Dewey Decimal System to find the right book, and then going through that book to find the information you need. This isn’t scalable without significant cost to the business; it also adds layers of friction, introduces human error and moves the data away from those who need it.

The History of AI

Enter AI. You might not be aware, but AI is not a new technology. Its roots trace back to the 1940s: WW2, Bletchley Park, spies and the birth of data science. It took its first form as artificial neural networks—systems that took inputs and known results and formed probability-weighted associations between the two. A question would be asked of the system, and the neural network would give an answer, which would then be analysed to see how “correct” it was. Adjustments would then be made to the probability weights to improve the results. After enough of these adjustments, the training could be terminated based on certain criteria—a form of supervised learning.
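As a rough illustration of that loop (ask a question, score the answer, adjust the weights), here is a minimal sketch of a single artificial neuron trained in a supervised fashion. The toy data set (a logical OR), the learning rate and the stopping criterion are invented purely for illustration; early neural networks were, of course, not written in Python.

```python
# Minimal sketch of the "ask, score, adjust" loop described above:
# a single artificial neuron forming probability-weighted associations
# between inputs and known results (supervised learning).
# The toy data, learning rate and stopping criterion are illustrative only.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: inputs and the "correct" results (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
bias = 0.0
learning_rate = 0.5

for epoch in range(10_000):
    total_error = 0.0
    for inputs, target in examples:
        # Ask the network a question...
        prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        # ...analyse how "correct" the answer was...
        error = target - prediction
        total_error += error ** 2
        # ...and adjust the probability weights to improve the result.
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error
    # Terminate training once the results meet a chosen criterion.
    if total_error < 0.01:
        break

print("learned weights:", weights, "bias:", bias)
```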

Research and investment continued right through the latter half of the twentieth century, with mixed results, and remained the domain of academic institutions, the growing field of data science and government funding. The field then entered what we now call the AI winter, lasting from the mid-eighties to the early 2000s.

The Advent of Deep Learning

Beginning around 2010, the emergence of deep learning led to the rise of image classification, speech recognition and natural language processing. You use these all day, every day, and they are used on you: unlocking your phone, recommending videos and curating posts in your social media feed. Each day, you’re given content based on your previous views and searches. These systems were, and often still are, kept behind proprietary walls, obscured and held in secret. Access to these AI systems and algorithms was the preserve of the few.

So, for researchers, companies and curious individuals, the forefront of AI was out of reach, creating a two-tier system. This led to the rise of some of the largest and fastest-growing companies in the world. Amazon, eBay, Apple, Tesla and others have all benefitted from this in more than one way, utilising it in content/product recommendation engines, automated supply chain systems, self-driving cars—the list goes on.

GenAI Joins the Scene

Generative artificial intelligence, or more accurately, neural network-enabled advancements in generative models, led to the first generative pre-trained transformer (GPT) in 2018. OpenAI iterated quickly, releasing GPT-3 in 2020 and GPT-3.5 in November 2022, when it also launched its ChatGPT service. This offered free access to anyone and everyone who wanted to try the technology and see what results could be achieved. The impact was seismic, with traditional websites seeing traffic fall month after month and year on year. Large language models (LLMs) and the GenAI services built on them were immediately seen as competition by Google and other search engines, and Stack Overflow saw fewer questions posted. When OpenAI opened the tool further by offering API access, new tools launched quickly, and competitors raced to release their own generative AIs to stem the loss of users.

Increased Access = Growing Concerns

The walls around AI, the preserve of the few, were now lowered. Researchers, businesses and the general public were able to access the insights and advantages of tools that had often been held behind closed doors. But with this came criticism of the platforms. Questions were raised about the future of creative professions, the potential rise of misinformation and the future of work in so many fields. Many of these questions demonstrate a general lack of understanding of the technology and a confusion between generative AI and artificial general intelligence—a thinking machine, indistinguishable from our own intelligence.

Whilst these concerns have some validity, the real concern for businesses and organisations that wanted to capitalise on GenAI came down to three fundamental flaws in the technology: the accuracy of responses, the trustworthiness of the output and the cost of delivering these services. GenAI was found to hallucinate, giving responses that were inaccurate and, in some cases, plain made up. There was a story of a lawyer using ChatGPT to research case law for a trial, with the GenAI giving him five cases to quote in support of his argument. These cases were entirely imagined by the AI, landing him in hot water.

Offsetting the Risks of GenAI

It became clear that for GenAI to be successful, you needed a combination of technology and human expertise and insight. In business, this has the potential to limit the scope of these applications and services, as well as to increase, once again, the cost of serving up these insights.

The question then becomes: how can we combine the Library of Man, our collective knowledge built over the Anthropocene, with this new technology and with human intelligence to vet and check the responses that the GenAI generates?

One approach is to use a knowledge graph that brings context and meaning to your data and combines this with the GenAI to provide contextualised prompts. This improves the AI’s responses by placing relevant facts in its short-term memory: the prompt. Then, when the response is received, the knowledge graph can be used again to check the response and verify its accuracy and trustworthiness. All of this can be achieved in a unified data platform that combines a multi-model, scalable and secure database with a semantic knowledge management tool.
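As a rough sketch of that pattern, the outline below shows the shape of such a pipeline: retrieve facts from the knowledge graph, ground the prompt in them, then use the same facts to sanity-check the answer. The functions query_knowledge_graph and call_llm are hypothetical stand-ins, not the API of any particular product, and the grounding check is deliberately naive.

```python
# A rough sketch of the grounding pattern described above.
# `query_knowledge_graph` and `call_llm` are hypothetical stand-ins,
# not the API of any particular platform or product.
from typing import List


def query_knowledge_graph(question: str) -> List[str]:
    """Stand-in for a real graph / semantic search query.
    Here it simply returns canned facts for illustration."""
    return [
        "The Library of Alexandria was a centre of learning in antiquity.",
        "Archimedes is said to have studied at the Library of Alexandria.",
    ]


def call_llm(prompt: str) -> str:
    """Stand-in for a call to a generative model's API."""
    return "Archimedes is said to have studied at the Library of Alexandria."


def looks_grounded(response: str, facts: List[str]) -> bool:
    """Deliberately naive check: does the response reuse terms from the facts?
    A real system would verify each claim against the knowledge graph itself."""
    fact_terms = {word.lower().strip(".,") for fact in facts for word in fact.split()}
    response_terms = {word.lower().strip(".,") for word in response.split()}
    return bool(fact_terms & response_terms)


def answer_with_grounding(question: str) -> str:
    # 1. Pull relevant context from the knowledge graph.
    facts = query_knowledge_graph(question)

    # 2. Place those facts in the model's short-term memory: the prompt.
    prompt = (
        "Answer the question using only the facts provided.\n"
        "Facts:\n- " + "\n- ".join(facts) +
        f"\n\nQuestion: {question}"
    )
    response = call_llm(prompt)

    # 3. Check the response against the same facts before trusting it.
    if not looks_grounded(response, facts):
        return "Unable to verify the answer against the knowledge graph."
    return response


if __name__ == "__main__":
    print(answer_with_grounding("Where did Archimedes study?"))
```

In practice, both the retrieval and the verification steps would query the knowledge graph itself, which is where a unified platform combining the database with a semantic layer earns its keep.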

Solutions like this can help expand human insight from our “library” to machine scale, empowering us to address challenges not only in business but in society at large. Whether they are new drug discoveries, solutions to climate change or challenges and opportunities we haven’t even imagined, opening up the Library of Man (and proprietary libraries of data) to new AI tools—democratised, shared and accessible to all—opens up a world of innovation where the possibilities are endless and change is inevitable. Nervous? Sure, mistakes are inescapable. Excited? Absolutely, who knows what the next age will be? But funnily enough, history will repeat itself once more, as it will be libraries, virtual or otherwise, where innovation takes place.

To learn more about the Progress approach to GenAI, check out our brochure.


Philip Miller

Philip Miller serves as the Senior Product Marketing Manager for AI at Progress. He oversees the messaging and strategy for data and AI-related initiatives. A passionate writer, Philip frequently contributes to blogs and lends a hand in presenting and moderating product and community webinars. He is dedicated to advocating for customers and aims to drive innovation and improvement within the Progress AI Platform. Outside of his professional life, Philip is a devoted father of two daughters, a dog enthusiast (with a mini dachshund) and a lifelong learner, always eager to discover something new.
