I’m in Buenos Aires right now at the DeFi Security Summit, where I joined a panel on AI and Web3 security along with @blocksec (EigenLabs), @ChanniGreenwall (Olympix), @jack__sanford (Sherlock), and @nicowaisman (Xbow). What not long ago felt like a “distant future” is now being discussed as a very concrete roadmap for the coming years: both defensive and offensive AI‑powered technologies are going to advance rapidly, especially in vulnerability discovery. Everything auditors do today manually and through a patchwork of tools is gradually being bundled into more powerful and accessible automated stacks.
It’s important to look at reality soberly: the hope that we can reliably “fence off” models from undesirable use is illusory. Any sufficiently capable model is, by definition, dual‑use. Providers will add restrictions, filters, policies — but that’s not a fundamental barrier. Anyone with motivation and resources will spin up a self‑hosted model, assemble their own agentic stack, and use the same technologies without worrying about ToS. You can’t design security under the assumption that attackers won’t have access to these tools.
The economics of attacks are also far from straightforward. In the short term, attacks will get cheaper: more automation, more “wide‑area bombardment,” more exhaustive exploration of states and configurations without humans in the loop. But over the long run, as defensive practices and tools catch up, successful attacks will become more expensive: coverage will improve, trivial bugs will disappear, and effective breaches will require serious infrastructure, preparation, and expertise. This will shift the balance toward fewer incidents — but those that do happen will be far more complex and costly.
My main takeaway: we’ll have to revisit the entire security lifecycle, not just “cosmetically improve” audits. How we describe and understand risk profiles, how threat models evolve with AI in the picture, how we structure development, reviews, testing, deployment, on‑chain monitoring, incident response, and post‑mortems: all of this will need to be rethought. Traditional audits will remain a key piece, but they can no longer be the sole center of gravity. The reality is that AI amplifies defenders and attackers alike, and Web3 security will have to adapt its entire operating model to this accelerating arms race.
