Ethics, risks and limitations – some practical examples

Generative AI is a rapidly evolving area of technology, so this isn’t a definitive guide. But it should give you pointers on some of the key risks and pitfalls and how you could mitigate them. Below, we’ve listed some common examples of how generative AI could be used in practice, ranging from low-risk to high-risk scenarios.

Low risk – routine, non-expert content generation 

For example: ‘Write a short email invitation to a community fundraising event’ or ‘Write 20 tweets promoting our membership offering’. 

These kinds of use cases, where you are using generative AI to help produce simple, routine content, are likely to present few risks. However, if you are using a visual generative AI tool such as MidJourney, you will want to caption any generated images appropriately to ensure it’s clear that they are not real images. 

Medium risk – longer, more complex content generation 

For example, generating longer-form content such as articles and written guides. 

This can be very helpful in enabling writers to get past the ‘blank page problem’ and turn brief outlines into longer passages of text. However, you need to exercise caution – generative AI tools such as ChatGPT will produce seemingly plausible text which is not reliably connected to the real world. This means there is a real possibility that generative AI could produce inaccurate, biased or false information. So, you will need to check your generative AI results to ensure that they are accurate. For specialist or expert content that includes a lot of detailed information, you should probably avoid using generative AI.

Amnesty International Americas ran into controversy when they used an AI-generated image to illustrate a report on police oppression in Colombia. Amnesty had clearly captioned their image as AI-generated, and explained that they used AI to avoid exposing journalists to risk. But many people criticised this approach, including local human rights defenders who argued that using AI undermined their efforts to bear witness to real situations on the ground. 

Medium risk – funding applications 

For example, turning a project outline into a complete funding proposal. 

This might help people who find writing in ‘funder language’ challenging. Arguably, this levels the playing field in the funding application process, ensuring that anyone with good ideas can get past the initial stages of application assessment.

The main risk is using prompts that are too minimal and not reviewing the final text carefully. You then run the risk of submitting a bid that includes false or misleading information, for example objectives that are irrelevant or unrealistic. You could end up making claims or proposing activities in a bid that you can’t deliver in the real world.

From the funder side, the possibility of bids being written with the help of generative AI throws up a few questions. It’s possible that funders might see an increase in the number of bids submitted, and they might all be superficially plausible, which could make shortlisting more challenging. 

Tools that try to detect the use of generative AI exist, but they are risky – for example, they’ve been shown to wrongly classify text from non-native English speakers as AI-generated. And using such tools leads to an obvious problem – if an application was flagged as AI-generated, what would you do next? 

In summary, funders need to be aware that people are likely to try using AI to generate applications, and funders will need to have other processes in place to assess which bids are worth supporting. But any new processes will need to avoid bias, and ensure that there is no additional burden on applicants or funder assessment teams. 

Medium/high risk – job applications 

For example, using generative AI tools to help write your CV or covering letter.

There are obvious parallels between job applications and funding applications. You’re required to share your experience in a standard format, and it can feel challenging to turn a list of experience into continuous text. 

You could argue that getting help from generative AI is similar to asking a friend to help you draft a job application. There are some risks though. Generative AI might help you write a compelling covering letter, but it might contain language or concepts that you couldn’t explain at an interview. If using generative AI on a CV, the AI result might include plausible-sounding experience that just didn’t happen. Many recruiters reserve the right to summarily dismiss people who are found to have lied on their job applications. 

Recruiters face similar ethical dilemmas to funders. If your shortlisting process involves reading lots of written applications, you may find it hard to identify the most suitable candidates. As noted above, tools which claim to detect the use of AI can end up discriminating against non-native English speakers. Some recruiters are responding by using short quizzes earlier in their recruitment process. The best approach is to be up-front about how you will treat applications which involve AI. For example, you could say ‘we allow candidates to use generative AI in written applications, but we require all candidates to guarantee that the factual information they share is true and correct’.

High risk – bespoke content such as advice 

For example, using generative AI to help you draft an email in response to a real-life query, or using generative AI to help you write a support guide. 

This approach is higher-risk for several reasons. First and foremost, if you’re responding to a direct query, the person who sent it would expect a human response. If using generative AI, how would you address their expectation in a transparent way? 

Secondly, you should not put personal or identifiable information into a prompt for ChatGPT or any other generative AI tool, because you can’t be sure how this information may be used or shared.

Thirdly, and most importantly, a one-to-one query is likely to require a high level of detail and factual accuracy. As noted above, generative AI tools aren’t currently able to check facts on a real-time basis, and they have been shown to ‘hallucinate’ false information. Since AI responses are usually plausible and sound definitive, someone engaging with an AI output won’t be able to make good judgements about how reliable the advice is. 

A mental health support app called Koko ran into massive controversy and criticism when they tried using AI tools to help volunteers draft counselling responses. The main criticism was that they were not up-front with vulnerable users about the nature of the support they were providing. 

What’s the impact of generative AI when you’re reading or reviewing online content? 

Because generative AI tools are now publicly available, and are fairly easy to use, it’s safe to assume that you may come across AI-generated content. If this happens, how might you respond? 

Social media disinformation 

Misinformation and disinformation on social media are already a big problem. People can struggle to spot the difference between reliable information and false or fake content. Generative AI is likely to make this more challenging, because it is now easier to generate large amounts of plausible content that has no connection to reality. For example, a US lawyer was caught making legal submissions that included references to entirely fictional case law.

Phishing emails 

Phishing emails are sent by hackers to trick us into giving away information or passwords. We often tell ourselves that we can spot phishing emails by looking out for spelling or grammatical mistakes. Generative AI means hackers will be able to easily churn out very polished phishing emails, which will be harder to distinguish from genuine ones. So we’ll need extra technical controls and more training to help us avoid phishing attacks.

Finding reliable information online 

Voluntary sector organisations have a strong reputation for being reliable and trustworthy sources of information. We’re close to the communities we serve, and we’re often sources of expert information on key issues. The development of generative AI means we will all need to exercise more caution when researching information online. We’ll need to make extra checks to ensure that websites we source information from are genuine. For example, if you were preparing a public-facing legal guide, you’d want to double-check the sources of any legal references, and the track record of any legal experts you cite. 

Last modified on 29 November 2023