
The Black Box Problem: Why AI Needs Proof, Not Promises



About the author

Ismael Hishon-Rezaizadeh is the founder and CEO of Lagrange Labs, an infrastructure company building zero-knowledge verifiable computing for blockchain and AI systems. Before founding the venture, the former defense engineer led projects spanning cryptography, data infrastructure, and machine learning. Ismael graduated from McGill University and is based in Miami.

The opinions expressed here are his own and do not necessarily represent those of Decrypt.

When people think about artificial intelligence, they tend to think of chatbots and large language models. But it is easy to overlook how deeply AI is becoming integrated into critical sectors of society.

These systems no longer just recommend what to watch or buy; they also diagnose disease, approve loans, detect fraud, and target threats.

As AI becomes built into our daily lives, we must ensure that it works in our best interest. We need to ensure that its outputs are provable.

Most AI systems operate as black boxes: we often have no way of knowing how they reach a decision or whether they work as intended.

This lack of transparency is baked into how the models work, and it makes it nearly impossible to audit or test an AI decision after the fact.

For some applications, this is good enough. But in high-stakes sectors such as healthcare, finance, and law enforcement, this opacity poses serious risks.

AI models can unknowingly encode bias, manipulate outcomes, or behave in ways that conflict with legal or ethical norms. Without a verifiable trace, users are left guessing whether a decision was fair, valid, or even safe.

These concerns become existential when coupled with the fact that AI capabilities continue to grow exponentially.

There is broad consensus in the field that developing artificial superintelligence (ASI) is inevitable.

Sooner or later, we will have AI that surpasses human intelligence in every domain, from scientific reasoning to strategic planning, to creativity, and even emotional intelligence.

Evidence of rapid progress

LLMs have already shown rapid gains in task generalization and autonomy.

If a superintelligent system acts in ways people cannot predict or understand, how do we ensure it is aligned with our values? What happens if it interprets a command differently or pursues a goal with unintended consequences? What happens if it goes rogue?

The scenarios in which such a system could endanger humanity are apparent even to AI's advocates.

Geoffrey Hinton, a deep learning pioneer, has warned of AI systems capable of civilization-scale disruption or mass manipulation. Biosecurity experts fear that AI-augmented laboratories could develop pathogens beyond human control.

And Anduril founder Palmer Luckey has claimed that his company's AI systems can jam, hack, or spoof military targets in seconds, making autonomous warfare an imminent reality.

With so many possible scenarios, how will we ensure that ASI doesn't wipe us out?

Imperative for transparent AI

The short answer to all of these questions is verification.

Relying on the promises of opaque models is no longer acceptable as we integrate them into critical infrastructure, much less as we scale toward ASI. We need guarantees. We need proof.

Consensus is growing in policy and research communities that technical transparency measures are needed for AI.

Regulatory discussions often mention audit trails for AI decisions. NIST and the EU AI Act, for example, have emphasized the importance of AI systems being "traceable" and "understandable."

Fortunately, AI research and development does not happen in a vacuum. There have been important breakthroughs in other fields, such as advanced cryptography, that can be applied to AI to ensure that today's systems, and eventually ASI systems, are kept in check and aligned with human interests.

Currently, the most relevant of these is the zero-knowledge proof (ZKP). ZKPs offer a new way of achieving verifiability that is immediately applicable to AI systems.

In fact, ZKPs can build this verifiability into AI models from the ground up. Rather than merely a log of what an AI did, which could be tampered with, they can create an immutable proof of what actually happened.

Using ZKML libraries specifically, we can combine zero-knowledge proofs and machine learning to verify the computations these models produce.

In particular, we can use ZKML libraries to verify that the correct AI model was used, that it ran the expected computations, and that its output followed the specified logic, all without exposing the model's internal weights or sensitive data.
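To make that workflow concrete, here is a minimal Python sketch of how such a check could be structured. It is a conceptual illustration only: the commit, prove, and verify functions are hypothetical placeholders standing in for a real zero-knowledge proving backend, not the API of any specific ZKML library. The model owner commits to the weights, proves that a given output came from the committed model on a given input, and an auditor verifies the claim using only public data.

```python
import hashlib
import json

# Conceptual sketch of a ZKML-style verification flow. The function names
# below are hypothetical placeholders, and the "proof" is a stand-in for the
# succinct cryptographic proof a real zero-knowledge prover would produce.

def commit_to_model(weights):
    """Model owner publishes a binding commitment to the weights without revealing them."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def run_model(weights, x):
    """The private computation: a toy linear model standing in for a real network."""
    return sum(w * xi for w, xi in zip(weights, x))

def generate_inference_proof(weights, x, y, commitment):
    """Prover side: a real ZKML backend would prove that y = model(x) for the
    committed weights, without exposing the weights themselves."""
    return {
        "commitment": commitment,
        "input": x,
        "output": y,
        "proof": "<zk proof bytes would go here>",  # placeholder
    }

def verify_inference_proof(proof, x, y, commitment):
    """Verifier side (auditor/regulator): checks only public data, namely the
    commitment, input, and output; it never sees the model weights."""
    return (
        proof["commitment"] == commitment
        and proof["input"] == x
        and proof["output"] == y
    )

# Prover (model owner)
weights = [0.4, -1.2, 0.7]             # private model parameters
commitment = commit_to_model(weights)  # public commitment
x = [1.0, 2.0, 3.0]                    # public input
y = run_model(weights, x)              # public output
proof = generate_inference_proof(weights, x, y, commitment)

# Verifier: accepts the output only if the proof checks out
assert verify_inference_proof(proof, x, y, commitment)
print("Inference verified against the committed model:", y)
```

The key design point is the separation of roles: the prover holds the private weights, while the verifier only ever handles the commitment, the input, the output, and the proof.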

Black box

This effectively pulls AI out of the black box and lets us know exactly where it stands and how it got there. More importantly, it keeps humans in the loop.

AI development must be open, decentralized, and verifiable, and ZKML can make that possible.

This must happen today if we want to keep control over AI tomorrow. We need to be sure human interests are protected from day one by guaranteeing that AI works the way we expect before it becomes autonomous.

ZKML, however, is not just about stopping a malicious ASI.

In the near term, it is about ensuring that we can trust the automation of sensitive processes such as lending, diagnosis, and policing, because we have proof that they act transparently and fairly.

ZKML libraries can give us reason to trust AI when it is used at scale.

As useful as more powerful models are, the next step in AI development is to guarantee that they are learning and developing properly.

The widespread use of efficient and scalable ZKML will soon be a key component of the AI race and the eventual creation of ASI.

The path to artificial superintelligence cannot be paved with speculation. As AI systems become more capable and integrated into critical domains, being able to prove what they do, and how they do it, will be essential.

Verification must move from a research concept to a design principle. With tools like ZKML, we now have a viable path to building transparency, security, and accountability into the foundations of AI.

The question is no longer whether we can prove what AI does, but whether we choose to.

Edited by Sebastian Sinclair
