With last year's public release of OpenAI's ChatGPT, generative AI went from niche to nova. Generative AI has reached an astonishing level of capability, from producing near-human text responses and photorealistic images of non-existent events, to suggesting software code and creating websites. In turn, political and regulatory focus on how to ensure responsible AI use and development has sharpened. Balancing risk and reward in the deployment of AI has entered uncharted territory as the legal landscape for AI evolves and this versatile technology disrupts the way we work and create across sectors.
As legal, technology and risk-management teams collaborate to support business-critical decisions, establish forward-looking frameworks and embed responsible AI in company strategy, being able to assess and advise on AI with a holistic understanding of the changing legal and policy landscape has never been more important.
Here our experts examine some of the big questions to address when exploring generative AI opportunities.
What is Generative AI?
Generative AI refers to a broad class of artificial intelligence systems that can generate new and seemingly original content – such as images, music or text – in response to user requests or prompts. It encompasses a wide range of models and algorithms, which can be used to create a variety of outputs depending on the application. Although research and development in this space goes back a number of years, the recent public release of generative AI systems, tools and models has catalysed its adoption at scale.
One of the best-known examples of generative AI is the GPT (Generative Pre-trained Transformer) series, which relies on a large language model (LLM) to interpret text prompts and, in a tool like ChatGPT, generate natural-language responses much as a human would. Combined with other models, such as diffusion models, GPTs can also create images from text prompts. These LLMs use an architecture that mimics the way the human brain works (a "neural network"), analysing relationships within complex input data through an "attention mechanism" that allows the AI model to focus on the most important elements. They are typically trained on massive amounts of data, which allows for greater complexity and more coherent, context-sensitive responses. In many cases these AI systems have general (not task-specific) potential.
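For readers who want a concrete, purely illustrative sense of the "attention mechanism" described above, the short Python sketch below implements a simplified scaled dot-product attention step. The function name, toy data and dimensions are our own assumptions for illustration only and do not reflect any particular vendor's model.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Minimal illustrative attention step: each query is compared against
    every key, and the output is a weighted average of the values."""
    d_k = keys.shape[-1]
    # Similarity scores between queries and keys, scaled for numerical stability.
    scores = queries @ keys.T / np.sqrt(d_k)
    # Softmax turns scores into weights summing to 1 per query, letting the
    # model "focus" on the most relevant elements of the input.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

# Toy example: 3 tokens, each represented by a 4-dimensional vector.
tokens = np.random.rand(3, 4)
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)  # (3, 4)
```

In a real LLM this operation is repeated across many layers and "heads" over learned representations of the input, which is what allows the model to weigh context when generating each element of its response.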
Using Generative AI in your business – the questions you should be asking
1. AI mapping: How is your organisation using generative AI today, and how could you use it tomorrow?
Does your senior leadership know where generative AI is already used in your business, and what use cases are in the pipeline?
What are the use cases for generative AI across your business?
Generative AI could augment or streamline many internal processes: conducting desktop research, generating meeting notes, scheduling calendars, assisting software development, creating first drafts of presentations, papers, emails and marketing materials, and much more. Generative AI can also be used in connection with customer-facing products, services and support – potentially revolutionising certain interactions, client offerings and business models.
Whether these use cases appear on the board agenda iteratively and organically or as part of a project to proactively investigate generative AI opportunities, organisations will need to ensure that proposals to use generative AI are escalated to the appropriate level for scrutiny, support and oversight by key stakeholders.
Is your company already using generative AI?
Some generative AI tools are freely available online – either as stand-alone tools or as products that can be integrated into a chain of tools provided by multiple developers. Although early adoption of, and experimentation with, generative AI are key to realising its potential, if your business does not guide or restrict the use of these tools, your personnel could use them in unanticipated and undesirable ways.
Your suppliers may also be incorporating generative AI in the products and services they provide to your organisation, which could result in your business unknowingly using or relying on such AI, or your business and customer data being shared with third-party generative AI developers. Do you have a process for identifying AI in your supply chain, as well as associated decision-making, contracting and ongoing monitoring processes for receiving AI-assisted services and products?