Ethical considerations for AI in healthcare
In healthcare, it is imperative that the core values of fairness, safety and transparency are upheld when creating and using AI. For example, AI’s ability to help predict effective treatments and improve personalised medicine for patients could be revolutionary, but only if everyone involved is well informed. This raises many questions, such as:
- Where is the patient’s information stored?
- Did the patient consent to give their information to an AI model?
Used without ethical regulation, AI can quickly become a harmful tool, leading to invasions of privacy and heavy reliance on unreliable sources. Dr Tedros Adhanom Ghebreyesus, the World Health Organization (WHO) Director-General, reminds us that “[AI] holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation” (1). The WHO has since outlined six areas for the regulation of AI for health:
- The need for transparency and documentation
- Risk management
- Validating data and clarifying the intended use of collected data
- Vigilance around data quality
- Privacy and data protection
- Ensuring collaboration between all parties to remain compliant
Various bodies, such as the Information Commissioner’s Office (ICO) (2), the Medicines and Healthcare products Regulatory Agency (MHRA) (3) and the EU (4), have issued guidelines for the use of AI, with differing approaches to ethical AI use in healthcare. At Branding Science, we have our own policies for the creation and use of AI, developed with our vigilant compliance team and expanding on these guidelines. For every AI tool we create, we provide clear and transparent documentation detailing the steps undertaken to ensure the AI is secure, trustworthy and ethical.
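As a purely hypothetical illustration (not our actual template), this kind of transparent documentation can be kept machine-readable as a simple “model card”. A minimal sketch in Python, with field names that are illustrative assumptions loosely following the regulatory themes above, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, hypothetical documentation record for an AI tool.

    The fields are illustrative assumptions, loosely following the
    themes above: transparency, intended use, data quality, privacy
    and risk management.
    """
    name: str
    intended_use: str                  # what the tool is (and is not) for
    data_sources: list[str]            # provenance of the data used
    consent_obtained: bool             # were data subjects informed?
    known_limitations: list[str] = field(default_factory=list)
    risk_mitigations: list[str] = field(default_factory=list)

# Example record for a hypothetical tool
card = ModelCard(
    name="treatment-response-predictor",
    intended_use="Research support only; not a diagnostic device.",
    data_sources=["anonymised survey responses"],
    consent_obtained=True,
    known_limitations=["trained on English-language data only"],
    risk_mitigations=["human review of all outputs", "annual bias audit"],
)
print(card)
```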
The environmental impact of AI
AI also has a massive impact on our climate. A typical ChatGPT query uses about 10 times more energy than a Google search (5). Along with this high demand for electricity, a vast amount of water is required to cool the hardware used for training, deploying and fine-tuning generative AI models, which can put pressure on local water supplies and harm nearby ecosystems. It has been estimated that a data centre requires two litres of water for cooling for every kilowatt hour of energy it consumes (6).
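To put those figures in perspective, here is a back-of-the-envelope calculation in Python. The per-query energy figure is an illustrative assumption (roughly 10 times a commonly cited ~0.3 Wh Google search), not a measured value:

```python
# Back-of-the-envelope estimate of data-centre cooling water use.
WH_PER_QUERY = 3.0           # assumed Wh per ChatGPT-style query (~10x a ~0.3 Wh search)
LITRES_PER_KWH = 2.0         # estimated litres of cooling water per kWh (6)

queries_per_day = 1_000_000  # hypothetical daily workload
energy_kwh = queries_per_day * WH_PER_QUERY / 1000
water_litres = energy_kwh * LITRES_PER_KWH

print(f"~{energy_kwh:,.0f} kWh/day -> ~{water_litres:,.0f} litres of cooling water/day")
```

On these assumptions, a million queries a day would consume about 3,000 kWh and require roughly 6,000 litres of cooling water.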
The production of GPUs, which are essential for generative AI, also has significant environmental impacts. They require more energy to make than simpler processors, due to their complex manufacturing process, and their carbon footprint is increased further by emissions from transporting materials and products, as well as mining for raw materials.
Despite these negative impacts, AI can also be used to spot patterns and make predictions that help us protect the environment, supporting more planet-friendly decisions by individuals, businesses and governments. It is already being used to track emissions such as methane (7), to monitor deforestation and predict natural disasters (8), and to monitor sand dredging (9).
Our commitment to carbon neutral AI
For three years in a row, Branding Science has been officially certified as a Carbon Neutral Business. It is important to us that we understand, and are transparent about, the carbon emissions produced by the AI we create and use. To this end, our Data Science team built an internal dashboard showing year-to-date carbon emissions for the AI tools we create. This way, we can monitor the impact of our AI on the climate and implement ways to mitigate it, while taking full advantage of the opportunities that AI presents.
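Our dashboard itself is internal, but as a rough sketch of the underlying idea: the open-source codecarbon Python library can estimate the emissions of a block of code, and those estimates can then be logged and visualised. The workload below is just a placeholder:

```python
# Minimal emissions-tracking sketch using the open-source codecarbon
# library (pip install codecarbon). A real setup would wrap model
# training or inference rather than this stand-in workload.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="example-ai-tool")
tracker.start()
try:
    total = sum(i * i for i in range(10_000_000))  # placeholder workload
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```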
AI responsibility and safety are important topics in healthcare, and ones we take very seriously at Branding Science. We remain excited and optimistic about the future opportunities that AI provides, while staying vigilant and ensuring trustworthy and ethical practices.
Thanks for reading!
Written (without AI) by:
- Elizabeth Brown, Data Scientist
- Laksha Thanabalasingam, Trainee Research Executive
- Amy Elliott, Trainee Research Executive
- Neha Bhatti, Trainee Research Executive
References:
- (1) WHO press release: https://www.who.int/news/item/19-10-2023-who-outlines-considerations-for-regulation-of-artificial-intelligence-for-health
- (2) ICO AI Guidelines: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
- (3) MHRA AI Guidelines: https://www.gov.uk/government/publications/software-and-artificial-intelligence-ai-as-a-medical-device/software-and-artificial-intelligence-ai-as-a-medical-device
- (4) EU AI Act: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- (5) MSN: https://www.msn.com/en-us/news/technology/i-sat-down-with-two-cooling-experts-to-find-out-what-ais-biggest-problem-is-in-the-data-center/ar-AA1EN2Tg
- (6) MIT News: https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
- (7) UNEP International Methane Emissions Observatory: https://www.unep.org/topics/energy/methane/international-methane-emissions-observatory
- (8) Astutis: https://www.astutis.com/astutis-hub/blog/artificial-intelligence-environmental-impacts
- (9) UNEP/GRID-Geneva Marine Sand Watch: https://unepgrid.ch/en/marinesandwatch
