A.I. - What is it Good For?

Blogpost
March 5 2025 - Cameron Barker, Communications & Marketing Lead

At the moment, it seems we can hardly go anywhere without hearing the letters A and I. Love it or loathe it, artificial intelligence is already having major impacts on our lives, and investors would be foolish to ignore it.

That being said, what is AI actually good for? And do the risks associated with its use outweigh any potential benefits? In this piece, we will dig into how AI tools can be used for good, but also how they can be used in ways that are detrimental to society.


The Value of AI

One area where the benefits of AI can be most clear is the life sciences, including medicine. For example, on 20 February, the BBC reported that Professor José R Penadés of Imperial College London decided to trial Google's "co-scientist" tool, with surprising results.

Microbiologists by trade, Penadés and his team have been researching antibiotic-resistant bacteria over a number of years, but reported that Google's tool was able to reach the same conclusion as the team had on a particular project after only 48 hours. The work of Penadés and his team had not been published and, on reaching out to the company, Penadés was also able to confirm that Google had not accessed his computer. As such, assuming that all was indeed above-board, the AI was able to generate, of its own accord, the same hypothesis (along with four other viable alternatives).

Similarly, AlphaFold (an AI programme developed by Alphabet subsidiary DeepMind), has the capability to predict protein structures (“to a remarkable degree of accuracy”) in minutes, compared with the several years it can take using traditional methods. While it has been suggested by other researchers that AlphaFold does not understand the underpinning mechanisms of protein folding, its achievements are still significant, and illustrate that AI tools may have the potential to revolutionise fields such as biology in future.


The Dark Side of AI

While it may be prudent for investors to recognise the potential opportunities associated with AI, it would be foolish to not also consider the risks. Not only is AI development putting strain on our environment, infrastructure, and energy systems (as recently highlighted here by Ethical Screening team member Andrew Hicklin), the technology also has dangerous applications.

One such example is its ability to create and disseminate misinformation. In a world which is already struggling with the scourges of misleading and outright false information, AI has the potential to fan the flames, given its ability to rapidly produce such materials. Most alarming, perhaps, is its ability to produce "deepfakes": images or videos which appear life-like but are not, in fact, real at all.

A perfect case in point would be a now infamous video call in early 2024, in which an employee at a major multinational corporation was invited to attend a conference call with the company's chief financial officer, alongside a number of colleagues they recognised. However, these individuals were in fact deepfakes, and were convincing enough to dupe the employee into transferring a sum of 200 million Hong Kong dollars to the individuals behind the elaborate scam.

While AI being used for white-collar crimes is somewhat concerning, it does not take a feat of imagination to consider how deepfakes could be used for more frightening purposes. Generating images of political or civil leaders, with a view to either harm such individuals politically, or even to incite social unrest or violence, is certainly plausible.


Managing Opportunities and Risks from AI Development

In light of the above, what should investors do to reap the opportunities associated with AI, while avoiding the numerous and complex risks that it poses? First of all, they should aim to understand as much about AI tools as possible, if they seek to invest in companies that are developing or utilising them.

As is also explained by Andrew Hicklin in his blog, AI tools are like any other – they might be very well suited to some tasks, but less so to others. The example of using a hammer to cut down a tree is particularly apt, in that it's possible, but not necessarily sensible.

When considering investing in companies developing AI tools, investors may wish to understand exactly what these tools are being designed for, and if the developing company has identified a clear application and market for its tool. To go back to the hammer analogy, a company could design the best hammer imaginable, but if they’re targeting tree surgeons, they are not likely to make many sales.

Similarly, a company could be making the right tool for the right people, but if the price of this tool is far beyond what most potential customers can afford, then it's unlikely to be successful. A titanium chainsaw with diamond teeth may be ideal for tree surgeons, for example, but if most can't afford it, then the development was a waste of time and money.

Identifying (and also engaging with companies regarding) potential applications of AI tools may also be a way for investors to avoid risks associated with their eventual use. If, for example, a company is developing a tool with the potential to be used for the generation of deepfakes, this may present an unacceptable risk to an investor both in terms of how it may impact the financial value of their investment, as well as any potential harm it may cause to external stakeholders.

If an investor is still willing to accept this risk, provided certain conditions are met, then assessing (and once again engaging with their investees regarding) certain criteria could be a course of action. This could, for example, entail the evaluation of any controls and procedures a company has established to prevent their tools being used for malicious purposes.


Managing Opportunities and Risks from the Use of AI

In addition to the opportunities and risks associated with the development of AI tools, there are opportunities and risks associated with their use, and even risks associated with simply doing business in a world where AI exists. As such, even for investors not seeking to directly invest in companies developing AI tools, there are AI-related issues that should perhaps be considered.

First of all, let’s look at the opportunities. If the correct AI tool is chosen and implemented successfully by a company, it may help to drive business efficiency, reduce costs, and provide competitive advantages. For investors, assessing if a company has the ability to use, or is already using, AI tools may be something to consider in the investment decision process. Companies that are best positioned to leverage AI tools successfully may prove a better investment than those which are not.

However, investors may also wish to consider the extent to which companies are mitigating the risks posed by AI use. If a company is using (or intending to use) AI tools which are not best suited to its needs, this may prove to be a cost that does not bring any benefit, and thus represent a concern for investors. Similarly, a company may be exposed to risks associated with employee misuse of, or even dependency on, AI tools. If adequate measures have not been implemented to prevent such risks, this may also be an area of concern for investors.

There are also risks associated with simply doing business in a world where AI is becoming increasingly powerful. AI tools can be (and are being) used as part of sophisticated scams and fraud, such as the video call scam discussed earlier. Without adequate controls and staff training, companies may be failing to mitigate the risk of exposure to such scams. For investors, this may be an area for due diligence and engagement with most, if not all, potential and existing investee companies.


Final Thoughts

Yes, AI can be great, and may revolutionise the way businesses function for the better, but it does not come without significant risks. For investors, identifying opportunities in the AI space could be a way to generate returns, but failing to adequately assess the risks could result in the opposite; a cautious approach may therefore be beneficial. Investors may wish to ensure that their knowledge of AI keeps pace with developments in the field, if the opportunities are to be seized without excessive risks being taken.
