
Impact and generative AI offer great opportunities, but we also need to manage risks

In the last week of March 2023, the Future of Life Institute made headlines with its open letter, signed by some of the biggest names in technology, urging all artificial intelligence (AI) laboratories to “immediately pause” the training of AI systems more powerful than GPT-4.

It cited the need to enable security research and policy to keep pace with the “profound risks to society and humanity” created by rapid advances in AI capabilities.

In the two months since, we’ve seen commentary from all quarters about the runaway progress of the AI arms race and what needs to be done about it.

Sundar Pichai, CEO of Google and Alphabet, recently said that “building AI responsibly is the only race that really matters”, a few months after the company declared a ‘code red’ in response to the success of OpenAI’s ChatGPT.

Governments are also taking notice, with members of the European Parliament reaching agreement on the EU’s flagship AI Act, and the US government investing $140 million to pursue AI advances that are “ethical, trustworthy, responsible and serve the public interest”.

The key question remains: how should we think about balancing the threats against the opportunities arising from the mainstreaming of (generative) AI?

What is AI?

AI is a set of components – including sensors, data, algorithms and actuators – that work in many different ways and for many different purposes. AI is also a socio-technical concept: a technical tool that seeks to automate certain functions, but one that is always grounded in mathematics. Generative AI is just one form of AI.

The case for a new paradigm of AI risk analysis

I recently spoke with Dr. Kobi Leins, a global expert in AI, international law and governance, about how to conceptualize this delicate balance.

Dr. Leins emphasized the need to deepen our risk analysis lens and to actively consider the long-term, interconnected societal risks of AI-related harm alongside the potential benefits. She stressed the dangers of prioritizing speed over safety, and cautioned against starting with the technologies and searching for ways to use them, rather than starting with the business problem and choosing from the available toolbox of technologies. Some tools are cheaper and less risky, and can solve the problem without the (almost) rocket-powered solution.

So what does this look like?

Known unknowns vs unknown unknowns

It’s important to remember that the world has seen this magnitude of risk before. Echoing a quote from Mark Twain, Dr. Leins told me that “history never repeats itself, but it often rhymes.”

There are many examples of scientific advances that caused enormous damage where the benefits could have been achieved and the risks averted. One such cautionary tale is Thomas Midgley Jr.’s invention of chlorofluorocarbons and leaded gasoline – two of history’s most destructive technological innovations.

As Stephen Johnson’s account in the NY Times highlights, Midgley’s inventions revolutionized refrigeration and automotive efficiency, respectively, and were hailed as some of the greatest advances of the early 1900s.

However, the next 50 years, and the development of new measurement technology, revealed that they had disastrous consequences for the long-term future of our planet – namely, the hole in the ozone layer and widespread lead poisoning. Another famous example is Einstein, who came to lament having helped create a tool that was used to harm so many.

The lesson here is clear. Scientific advances that seem like great ideas at the time, and solve very real problems, can turn out to have even more damaging consequences in the long run. We already know that generative AI causes significant carbon emissions and uses significant amounts of water, and that broader societal issues such as misinformation and disinformation are a cause for concern.

The catch is that, as with chlorofluorocarbons, the long-term harms of AI, including generative AI, will most likely only be fully understood over time, and alongside other issues such as privacy, cybersecurity, human rights compliance, and risk management.

The case for expanding the depth of our lens

While we cannot yet predict with any accuracy the future technological advances that will reveal the damage we are causing now, Dr. Leins emphasized that we still need to significantly expand our timeframe and vision for risk analysis.

She emphasized the need for a risk-framing approach focused on ‘what can go wrong’, as she briefly discusses in this episode of the AI Australia Podcast, and suggested that the safest threshold is the ability to disprove harm.

We discussed three areas where executives and decision-makers in technology companies engaged in generative AI should think about their approach to risk management.

  1. Consider longer timelines and use cases that impact minority groups

Dr. Leins argues that we currently see very siloed analyses of risk in commercial contexts, in that decision-makers within technology companies or startups often only consider risk as it applies to their own product or its intended application, or to the impact on people who resemble them or hold the same level of knowledge and power.

Instead, companies should remember that generative AI tools do not operate in isolation, and should consider the externalities created by such tools when they are used in conjunction with other systems. What happens if the system is used for an unintended application (because this will happen), and how does the whole system fit together? How do these systems affect people who are already marginalized or vulnerable, even with ethical and representative datasets?

Important work is already being done in this area by governments and policymakers worldwide, including the development of the ISO/IEC 42001 standard for AI, designed to ensure cyclical processes of AI set-up, implementation, maintenance and continual improvement after a tool has been built.

Although top-down governance will play a major role in the future, it is also up to companies to consider and limit these risks much better themselves.

Not only is outsourcing risk to third parties or automated systems not an option, it may also introduce further risks that companies are not yet considering, on top of third-party risks, supply chain risks, and SaaS risks.

  2. Thinking about the right solutions

Companies should also ask themselves what their real goal is and what the right tools to solve that problem actually look like, and then choose the option that carries the least risk. Dr. Leins suggested that AI is not the solution to every problem, and therefore should not always be the starting point for product development. Leaders should be more critical when considering whether the risks are worth taking in the given circumstances.

Start with a problem statement, look at the toolbox of technologies and decide from there, rather than assigning technologies to a problem.

There is a lot of hype right now, but there will also be more and more risks. Companies that rushed to apply generative AI have already stopped using it – because it didn’t work, because it absorbed intellectual property, or because it produced completely fabricated content that was indistinguishable from fact.

  3. Cultural change within organizations

Companies are often run by generalists, with input from specialists. Dr. Leins told me there’s a cultural piece currently missing that needs to change: When the AI and ethics specialists sound the alarm, the generalists need to stop and listen. Diversity in teams and having different perspectives is also very important, and while many aspects of AI are already mastered today, gaps remain.

We can take a lesson here from the Japanese production principle known as ‘andon’, where every member of the assembly line is seen as an expert in their field and has the power to pull the ‘andon’ cord to stop the line if they see something they perceive as a threat to production quality.

If someone anywhere in a company identifies a problem with an AI tool or system, management needs to stop, listen and take it very seriously. A safety culture is central.

Closing thoughts

Founders and startups should look for opportunities with AI and automation, but also maintain a healthy cynicism about some of the “magic solutions” being touted. This includes boards developing a risk appetite that is reflected in internal frameworks, policies and risk management, as well as a culture of curiosity and humility to identify concerns and risks.

We’re not saying it has to be all doom and gloom, as there’s undoubtedly a lot to be excited about in the AI space.

However, we would like the conversation to continue to evolve to ensure we don’t repeat the mistakes of the past, and that new tools support environmentally sustainable and equitable outcomes.


Shreya has been with australiabusinessblog.com for 3 years, writing copy for client websites, blog posts, EDMs and other media to engage readers and encourage action. By collaborating with clients, our SEO manager and the wider australiabusinessblog.com team, Shreya seeks to understand an audience before creating memorable, persuasive copy.
