Artificial Intelligence (AI) and ChatGPT are subjects that have dominated conversations lately. There is no denying that AI will have a profound impact on most industries, with the World Economic Forum predicting that 85 million jobs globally will be replaced by AI by 2025 (Forbes, 31 May 2023).
I was recently invited to attend a workshop on Unlocking the Power of AI where the speakers were a Futurist and Technology Expert and an App Optimisation Leader from Canva.
The speakers discussed the many benefits of AI and machine-learning tools, such as efficiency in producing content (both imagery and copy), particularly for simple, low-risk presentations.
However, we should also not lose sight of the many risks with AI.
Six Areas of Concern When Using AI in Presentations
1. Copyright Concerns
Content generated by AI raises significant copyright concerns due to the complex nature of authorship and ownership. The question of who holds the copyright for AI-generated content is often ambiguous. While AI models may be trained on vast amounts of existing copyrighted material, the output they generate can be original and unique. This creates a grey area in determining the legal rights and obligations surrounding AI-generated content. As the Arts Law Centre of Australia points out:
“AI does not have any rights under copyright law and therefore there is no legal obligation to indicate that AI was used to generate the work. But you might still want to indicate that AI was used to help create a work to be transparent with your audience.”
Resolving these copyright concerns requires careful examination of existing intellectual property laws, and the development of new regulations that address the specific challenges posed by AI-generated content.
But regulations and standards may take some time to develop. In the meantime, is it worth putting presentations into the public domain when their ownership and copyright status are unclear? Consider the reputational impact on your organisation.
2. Accuracy and Reliability
AI models are not infallible and can make mistakes. It’s important to thoroughly test and validate the AI system before using it in a presentation to ensure its accuracy and reliability. Presenters must be cautious about presenting AI-generated information without proper verification.
Infosys is concerned that content created by ChatGPT can sound "extremely confident and authoritative and yet be completely wrong", a phenomenon known as "hallucination"; the fear is that this tendency will add further to misinformation online.
OpenGrowth has also warned that ChatGPT could result in plagiarism if it generates text that is similar to existing content.
3. Bias and Fairness
AI models can be biased, reflecting the biases present in the data they were trained on. This was a major theme of this year's United Nations International Women's Day, Cracking the Code: if you search Google for "unprofessional women's hairstyles", the results are mostly pictures of curly-haired women, hardly a signal of unprofessionalism.
When using AI-generated content in presentations, there is a risk of perpetuating or amplifying existing biases. It's crucial to evaluate and address any biases in the AI system to ensure fairness and avoid misrepresentation, concerns that are being raised by the New York State Bar Association (see link).
4. Lack of Transparency
AI models often operate as black boxes, making it challenging to understand their decision-making process. This lack of transparency can raise concerns about accountability, as presenters may find it difficult to explain or justify the AI-generated content in their presentations. Efforts should be made to increase the transparency of AI systems and provide explanations for their outputs.
“In the last two years, litigation on algorithms has developed rapidly and largely centres on the biases of datasets and/or instructions … On Feb. 21, 2023, the U.S. Supreme Court heard the oral arguments in Gonzalez v. Google. The issue presented is whether Section 230(c)(1) of the Communications Decency Act shields interactive computer services from liability arising from content posted on their platforms and created by third-party providers using providers’ algorithms…” – New York State Bar Association, June 2023
5. Data Privacy and Security
AI systems typically require access to large amounts of data to learn and make accurate predictions. Presenters must consider the privacy and security implications of using AI in presentations, especially if sensitive or personal data is involved. Ensuring compliance with data protection regulations and safeguarding data from unauthorized access is essential.
We have already seen the repercussions of large-scale cyber-attacks on organisations like Optus, Medibank and Latitude Financial, and the cost this has had to individuals and to the reputations of these organisations.
Cyber threats are not going to go away. As Forbes (April 2023) recently warned:
“Artificial Intelligence will exacerbate this challenge exponentially. Just imagine how much more powerful phishing attacks will be, for instance, when AI as sophisticated as ChatGPT sends emails to staff that appear to come from the boss, that use information only the boss would normally know, and that even use the boss’s writing style.”
6. Ethical Considerations
AI in presentations may raise ethical concerns, particularly when it comes to deepfakes or manipulated content. Who among us hasn't already seen deepfakes of Tom Cruise, Barack Obama, or Vladimir Putin?
The use of AI to create misleading or deceptive presentations can harm trust and credibility. Presenters should be mindful of the ethical implications and use AI responsibly and ethically.
“Before a tool becomes available to the public, developers need to ask themselves if its capabilities are ethical. Does the new tool have a foundational ‘programmatic core’ that truly prohibits manipulation? How do we establish standards that require this, and how do we hold developers accountable for failing to uphold those standards?” – Harvard Business Review, April 2023
While there are many benefits to using AI for presentations, we should not lose sight of the risks involved and the impact that using it could have on our personal and organisational reputations. We need to have appropriate guidelines and policies in place for using AI in presentations. Regular evaluation, testing, and validation of AI models, along with transparency about their limitations and biases, can help mitigate these concerns and ensure responsible and effective use of AI in presentations.
If you’d like to know more, please contact the presentation design experts at Slidesho at firstname.lastname@example.org