COIN360 has partnered with ARCH AI to bring AI Agents to its market data aggregator. Yet this development raises some serious questions for me. Even if we're closing the right gaps, are we really democratizing finance, or just creating gilded cages? While the press release paints a rosy picture of empowered users and democratized intelligence, I can't help but wonder if this is just another case of Silicon Valley's shiny object syndrome: promising utopia while potentially widening the very divides it claims to bridge.

Intelligence For All, Or Just Some?

Hilbert Group CEO Barnali Biswal describes their mission as “democratizing market intelligence.” Sounds great, right? But who actually stands to gain as generative AI tools become widespread? Will these AI agents truly democratize access to AI for everyday investors? Or will they simply give sophisticated traders yet another leg up, disadvantaging the public and other market participants with fewer resources and less know-how?

Think about it: access to premium features, data licensing, and high-volume API usage all come at a cost. That immediately creates a tiered system. The elite get the best tools and the best AI talent, while the rest of us make do with inferior ones. It’s like handing everyone a fishing rod while auctioning the best bait to the highest bidder.

Even with a no-code, drag-and-drop interface such as ARCH AI’s ChainGraph, a baseline of technical literacy is still expected. What about the people who lack those skills, or who don’t have access to consistent, reliable broadband? Are they simply to be cast aside in this AI-driven future? This is where the "forgotten voices" matter. As practitioners, we have to be very careful not to build systems that entrench existing disparities.

Job Displacement - The Elephant in the Room

It’s the big issue no one wants to touch with a ten-foot pole: AI is coming for our jobs. COIN360 is onboarding AI Agents as research assistants. What about the human analysts who do that work today? Are they being retrained? Are there safety nets in place? Or is this just another case of automation replacing people with algorithms?

This isn’t just about dollars on a spreadsheet. It’s about people’s livelihoods, their dignity, their ability to put food on their family’s table. It is our collective duty to make sure these new technologies are deployed so that all of society can reap the rewards, not just a privileged few.

We should be demanding this kind of transparency from companies like Hilbert Group. What specific steps are they taking to address AI’s potential negative impacts on workers and jobs? Are they investing in retraining programs? Are they designing toward a new economic model that improves on the old while providing a safety net for those whose jobs are automated away?

Can Web3 Actually Empower Marginalized Communities?

The claim that AI agents could empower marginalized communities to tackle social issues is appealing, but it may be easier said than done.

These are the same communities that disproportionately lack access to things many of us take for granted: clean water, healthcare, quality education. Asking them to master the complexities of Web3 and AI in order to solve their own problems seems, quite honestly, a bit tone-deaf.

I'm not saying it's impossible. If projects like COIN360's AI Agents are truly committed to inclusivity, they need to actively engage with these communities, listen to their needs, and co-create solutions that are both relevant and accessible. That requires a different kind of engagement—one that includes training, resources, and ongoing support. It means making sure these technologies aren’t being used in ways that would further marginalize or exploit them.

Consider, too, how misinformation and bias could shape the outcome. AI algorithms learn from patterns in their training data, and if that data is biased, the model will reproduce those biases in its recommendations. We could end up with AI agents that systematically disadvantage marginalized groups or generate misleading claims.
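The mechanism is worth making concrete. Here is a deliberately simplified sketch (all names and numbers are hypothetical, not taken from COIN360 or ARCH AI): a toy recommender that learns nothing but frequency counts from skewed historical data will rank items exactly as unevenly as that data, which is the basic way bias propagates from training data into AI output.

```python
from collections import Counter

# Hypothetical, skewed engagement log: one category of content is
# heavily over-represented in the historical data.
historical_clicks = (
    ["institutional_research"] * 90 +  # over-represented
    ["community_project"] * 10         # under-represented
)

def train(click_log):
    """'Learn' recommendation weights as raw frequencies of past clicks."""
    counts = Counter(click_log)
    total = sum(counts.values())
    return {item: n / total for item, n in counts.items()}

weights = train(historical_clicks)
# The learned weights mirror the skew in the data:
# institutional_research gets 0.9, community_project gets 0.1,
# so the model keeps recommending what was already dominant.
print(weights)
```

Nothing in the training step corrects for the imbalance, so the output simply amplifies whatever the past looked like; real recommendation systems are far more complex, but the feedback dynamic is the same.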

We need to demand accountability. How are COIN360 and ARCH AI ensuring their AI Agents are fair, unbiased, and transparent? What public accountability mechanisms are in place to detect and correct errors? And what recourse do users have when these technologies harm them?

Beyond the Hype - A Call to Action

On the face of it, COIN360’s use of ARCH AI’s ChainGraph is a step in the right direction, provided it’s executed properly. We can’t assume that adopting the most advanced technology will inherently do more good than harm.

We need to fight for transparency, accountability, and inclusivity. We need to ask the hard questions:

  • How can this technology be made accessible to everyone?
  • What steps are being taken to prevent bias and discrimination?
  • How can the community be involved in the development and governance of these AI Agents?
  • What are the environmental implications of running these AI Agents?

Let's not be naive. Technology can be a powerful force for good, but it can also deepen the inequities that already exist. Excitement about COIN360’s AI Agents is fine, but it’s our responsibility to make sure they serve everyday users, not just the tech elite.

Let’s cut through the buzz and demand a more equitable, sustainable future, now. The future of finance, and indeed of our society, depends on it. Let's make our voices heard.