Blogs – Capture the Flag

The Conversation AI
Sat, 29 Mar 2025

Summary

From Martin LaMonica, former technology journalist and science editor for The Conversation, currently Director of Editorial Projects and Newsletters:

Dear reader,
We at The Conversation are keen to know what questions you have about AI and types of stories you want to read.

To tell us, please fill out this very short questionnaire. I’ll share your responses (no names or emails will be attached) with the editors to help guide our coverage going forward.

The Conversation AI is different from most newsletters on artificial intelligence. We will, of course, cover how the technology is evolving and its many applications.

But our editors and expert authors do more – they look broadly at the impact this powerful technology is having on society, whether it’s new ethical and regulatory questions, or changes to the workplace. Also, our academic writers approach this subject from a variety of disciplines and from universities around the world, bringing you a global perspective on this hot issue.

OnAir Post: The Conversation AI

News

How the US threw out any concerns about AI safety within days of Donald Trump coming to office
The Conversation AI, Oreste Pollicino and Giulia Gentile, March 11, 2025

The EU has a long-established reputation as a global standard setter, and as a reliable partner for international regulatory cooperation, especially in the digital field. But the second Trump administration is disrupting these dynamics.

In the last decade, several US big tech companies were scrutinised and sanctioned by EU data protection watchdogs for abusing customers’ personal data. Meanwhile, other nations have adopted digital regulations that are modelled on the EU’s GDPR. They reason that doing so will enhance privacy protections domestically while also strengthening their economic presence in the EU. The list of these countries keeps increasing, and includes countries traditionally operating on a protectionist agenda, such as China and Brazil.

The same had been true for artificial intelligence. Regulations on the development and use of AI drawn up under the presidency of Joe Biden signalled a degree of alignment with Brussels. The EU’s approach focuses on managing the risks stemming from AI – a goal that appeared to be seriously embraced by the US, too.

But shortly after arriving in office in January, Trump signed several executive orders “removing barriers to American leadership in artificial intelligence”. The Trump administration’s stated aim is to “achieve and maintain unquestioned and unchallenged global technological dominance”. This includes a new stance on AI that concentrates exclusively on economic and competitiveness arguments. Concerns around the risks of that technology, which the EU framework puts at its core, are no longer even part of the conversation in the US.

Trump has also launched an investigation into the EU’s Digital Markets Act (DMA) and Digital Services Act (DSA) as part of a wider exercise to see if “remedial actions” (for which, read tariffs) are needed in response to the taxes and regulations levied on US tech companies. The EU acts seek to combat concentrations and abuses of digital power and the risks posed by social media platforms. The US is flexing its muscles, while the EU is exposed to a form of regulatory blackmail.

These are but a few examples of the new US government’s remarkably deregulatory approach concerning digital issues, despite the increasing global consensus around the risks and perils in this field.

The fallout
The geopolitics of digital regulation may push the EU towards an under-enforcement of its own digital rules so that it can continue to rely on US tech companies and avoid tariffs. The recent US executive orders may cause a chilling effect on the enforcement of the DMA and the DSA, or a potential lax application of the EU AI Act that requires developers of AI systems to respect a series of standards for their products to be lawfully marketed in the EU. Worryingly, some weeks ago the EU withdrew the proposed EU directive on AI liability, which introduced rules on how people could claim compensation for damages caused by AI systems.

Handing unfettered power to privately owned digital companies sits uneasily with both the European tradition of antitrust rules and consumer protection and the values of EU constitutionalism that emerged in the aftermath of the second world war. The conquests of democracy and its values could be significantly eroded in a digital world that is becoming increasingly unequal. What is more, capitulation in the face of regulatory blackmail would amount to a relinquishment of global influence for the EU: its regulatory tradition and role as an international standard-setter would be undermined were it to give in to US pressure.

Regardless of legal traditions and democratic values, any regulator should put people first when drawing up the rules that will govern the digital space – not the interests of a handful of tech companies. Jurisdictions that do not pursue policies ensuring a safe digital world for ordinary people are effectively declaring where their interests reside – not with the many but in the power and wealth of the few.

Striking a balance

The increasing use of AI in all aspects of people’s lives raises a new set of questions to which history has few answers. At the same time, the urgency to address how it should be governed is growing. Policymakers appear to be paralyzed, debating whether to let innovation flourish without controls or to regulate and risk slowing progress. However, I believe that the binary choice between regulation and innovation is a false one.

Instead, it’s possible to chart a different approach that can help guide innovation in a direction that adheres to existing laws and societal norms without stifling creativity, competition and entrepreneurship.

The U.S. has consistently demonstrated its ability to drive economic growth. The American tech innovation system is rooted in entrepreneurial spirit, public and private investment, an open market and legal protections for intellectual property and trade secrets. From the early days of the Industrial Revolution to the rise of the internet and modern digital technologies, the U.S. has maintained its leadership by balancing economic incentives with strategic policy interventions.

In January 2025, President Donald Trump issued an executive order calling for the development of an AI action plan for America. My team and I have developed an AI governance model that can underpin an action plan.

A new governance model

Previous presidential administrations have waded into AI governance, including the Biden administration’s since-rescinded executive order. There has also been an increasing number of regulations concerning AI passed at the state level. But the U.S. has mostly avoided imposing regulations on AI. This hands-off approach stems in part from a disconnect between Congress and industry, with each doubting the other’s understanding of the technologies requiring governance.

The industry is divided into distinct camps, with smaller companies allowing tech giants to lead governance discussions. Other contributing factors include ideological resistance to regulation, geopolitical concerns and insufficient coalition-building that have marked past technology policymaking efforts. Yet, our study showed that both parties in Congress favor a uniquely American approach to governance.

Congress agrees on extending American leadership, addressing AI’s infrastructure needs and focusing on specific uses of the technology – instead of trying to regulate the technology itself. How to do it? My team’s findings led us to develop the Dynamic Governance Model, a policy-agnostic and nonregulatory method that can be applied to different industries and uses of the technology. It starts with a legislative or executive body setting a policy goal and consists of three subsequent steps:

  1. Establish a public-private partnership in which public and private sector experts work together to identify standards for evaluating the policy goal. This approach combines industry leaders’ technical expertise and innovation focus with policymakers’ agenda of protecting the public interest through oversight and accountability. By integrating these complementary roles, governance can evolve together with technological developments.
  2. Create an ecosystem for audit and compliance mechanisms. This market-based approach builds on the standards from the previous step and executes technical audits and compliance reviews. Setting voluntary standards and measuring against them is good, but it can fall short without real oversight. Private sector auditing firms can provide oversight so long as those auditors meet fixed ethical and professional standards.
  3. Set up accountability and liability for AI systems. This step outlines the responsibilities that a company must bear if its products harm people or fail to meet standards. Effective enforcement requires coordinated efforts across institutions. Congress can establish legislative foundations, including liability criteria and sector-specific regulations. It can also create mechanisms for ongoing oversight or rely on existing government agencies for enforcement. Courts will interpret statutes and resolve conflicts, setting precedents. Judicial rulings will clarify ambiguous areas and contribute to a sturdier framework.

Benefits of balance

I believe that this approach offers a balanced path forward, fostering public trust while allowing innovation to thrive. In contrast to conventional regulatory methods that impose blanket restrictions on industry, like the one adopted by the European Union, our model:

  • is incremental, integrating learning at each step.
  • draws on the existing approaches used in the U.S. for driving public policy, such as competition law, existing regulations and civil litigation.
  • can contribute to the development of new laws without imposing excessive burdens on companies.
  • draws on past voluntary commitments and industry standards, and encourages trust between the public and private sectors.

The U.S. has long led the world in technological growth and innovation. Pursuing a public-private partnership approach to AI governance should enable policymakers and industry leaders to advance their goals while balancing innovation with transparency and responsibility. We believe that our governance model is aligned with the Trump administration’s goal of removing barriers for industry but also supports the public’s desire for guardrails.

Digital Future Daily
Sat, 29 Mar 2025

Summary

Digital Future Daily (DFD) is a newsletter produced by Politico. Sign up to get DFD in your inbox at the link below.

DFD’s tagline is “How the next wave of technology is upending the global economy and its power structures”.

Source: Website

OnAir Post: Digital Future Daily

News

The government embraces AI lab rats
Digital Future Daily, Ruth Reader, April 21, 2025

Enter silicon. The FDA said on Thursday that it will phase out using animals to test certain therapies, in many ways fulfilling the ambitions of the FDA Modernization Act 2.0.

To replace animal testing, the FDA will explore using computer modeling and AI to predict how a drug will behave in humans — and its roadmap cites a wide variety of technologies, from AI simulations to “organ-on-a-chip” drug-testing devices. (For the uninitiated, organ-on-a-chip refers to testing done on lab-grown mini-tissues that replicate human physiology.)

The FDA’s plan to integrate digital tools into a field that’s long been defined by wet lab work marks a substantial change.

Superintelligent AI fears: They’re baaa-ack
Digital Future Daily, Mohar Chatterjee, April 22, 2025

Looking at the collision of tech developments and policy shifts, Nate Soares, president of the Berkeley-based Machine Intelligence Research Institute (MIRI), doesn’t sound optimistic: “Right now, there’s no real path here where humanity doesn’t get destroyed. It gets really bad,” said Soares. “So I think we need to back off.”

Wait, what!? The latest wave of AI concern is triggered by a combination of developments in the tech world, starting with one big one: self-coding AIs. This refers to AI models that can improve themselves, rewriting their own code to become smarter and faster, and then doing it again, all with minimal human oversight.

AI skeptics are a lot less optimistic. “The product being sold is the lack of human supervision — and that’s the most alarming development here,” said Hamza Chaudry, AI and National Security Lead at the Future of Life Institute (FLI), which focuses on AI’s existential risks. (DFD emailed Reflection AI to ask about its approach to risk, but the company didn’t reply by deadline.)

Biden’s AI legacy: A headache for Europe and the tech industry
Digital Future Daily, Daniella Cheslow, March 27, 2025

In Trump’s Washington, Europe’s tech regulation is a regular object of scorn. But there is one piece of American tech policy that’s united European diplomats and U.S. industry: A rule issued in President Joe Biden’s final days in office that sorted the world into three tiers for AI chip export, with more than half of Europe left off the top rung.

Under the Framework for AI Diffusion, 17 EU countries were designated Tier 2, setting caps on their access to chips needed to train AI, while the rest of Europe was set for Tier 1, with no import restrictions. Countries listed in the second tier are treating it as a scarlet letter.

“We’re going around town trying to explain that we have no idea why we ended up in Tier 2,” said one European diplomat, granted anonymity to discuss sensitive talks. “If this has to do with cooperation with the U.S. on security, we are NATO allies, we are more than willing.”

About

DFD Welcome

By POLITICO STAFF, 04/04/2022

Your inbox needs a new tech newsletter.

Wait, really?

Yes. Our idea is simple: The next version of our world is already being built, and it’s growing so fast that it can be hard for Washington, or the tech industry, or anyone, to keep track.

Imagine fully alternative trillion-dollar economies. Virtual landscapes that rival the physical world for their claim on our time and attention.

These aren’t sci-fi. They’re being built now, they’re attracting billions of dollars of investment and they’re already reshaping power both inside and outside national borders.

Don’t just take our word for it. Thanks to virtual platforms like the metaverse, decentralized economic systems like Bitcoin and increasingly complex AI decisionmaking, our world promises to be meaningfully different in just a few years.

Much control of this new world will lie outside what we think of as the corridors of power — and even outside the hands of today’s tech titans.

Washington regulators aren’t famous for their cutting-edge tech savvy. The tech industry doesn’t love oversight. And blockchain-based platforms like crypto are explicitly designed to evade central scrutiny and the overweening power of today’s Internet titans.

So who’s drawing the roadmap to this future? Who’s minding the store? Who are the emerging power players? What ideas are driving them? How will their decisions affect everything from daily life to the global economy?

We’re going to track that.

By bringing POLITICO’s signature brand of pragmatic, power-savvy reporting to these questions, we’ll be offering a unique—and uniquely useful—look at questions that are addressed elsewhere as primarily business opportunities or technological challenges.

We’ll look at who benefits and who’s at risk in the explosive growth of crypto; why the blockchain could change politics as fast as it changes the economy; who’s guarding the public interest as the metaverse evolves. We’ll keep an eye on AI and other transformative technologies—and, crucially, how our existing power structures are keeping up, or not.

Tech leaders will get an honest look at how Washington sees them. Regulators and lawmakers will get a window into the edge of a world they’ll be expected to understand. And readers across the board will get insight into how this is going to change the whole idea of accountability and civic life. We also hope it will be — to use a very analog concept — fun.

Any good newsletter builds a community as it goes, so please reach out. What do we need to know? Who should we talk to? What questions need answering? Scroll to the bottom for our contact info.

Welcome to Digital Future Daily.

Source: Website

Web Links

Archive

A searchable archive of Digital Future Daily is located here.

Podcast

Politico Tech podcasts are located here. New episodes Mondays and Thursdays.

Michael Spencer
Thu, 28 Mar 2024

Summary

Michael Spencer is an emerging tech analyst who covers industries such as AI, the semiconductor AI chip industry, robotics, quantum computing and other areas of exponential tech, in newsletter articles and in news curation as a service.

His Substack, AI Supremacy, is rated #1 in machine learning and, as of early 2022, was the fastest-growing A.I. newsletter on Substack.

OnAir Post: Michael Spencer

News

TSMC’s role in the global AI and geopolitical order – a Full Report
AI Supremacy, Michael Spencer, April 22, 2025

Why I’m calling TSMC the most important tech company in the world for the future of AI. Severe trade tariffs loom, but they put TSMC’s role in the future of AI in the spotlight.

As the U.S.–China trade war escalates, the true “picks and shovels” company for AI supremacy isn’t Nvidia; it’s TSMC. Taiwan Semiconductor Manufacturing Company (TSMC) has committed a total investment of approximately $165 billion in the United States, complicating the geopolitical picture in an era of reciprocal trade tariff uncertainty.

TSMC is the most important tech company in the world in 2025.

4 Startup Funding Models in the Age of AI
AI Supremacy, Michael Spencer and Henry Shi, April 10, 2025

The future of venture capital is about to change due to AI and the flood of capital going to AI startups. MCP and A2A will enable seed-strapping to have a bright reincarnation in startup futures.

With uncertain macro conditions, AI startups, and startups in general, are shifting their strategies and building companies completely differently. But how? While I don’t often write on venture capital at the intersection of AI and startups, it’s one of my favorite things to track as an emerging tech analyst.

The idea of seed-strapping, and the dream of solopreneurs being able to scale startups in a leaner and more agile manner with fewer employees thanks to AI, is fairly fascinating. New case studies are emerging to inform the founders of today and the future.

In the era of generative AI, the way founders and solopreneurs bootstrap is very different: there are many examples of AI founders who can scale revenue faster, stay more agile and rely less on traditional equity dilution, growing quickly in a more sustainable, lower-risk manner. Is this the beginning of a fundamentally different future of entrepreneurship with AI?


Agents are here, but a world with AGI is still hard to imagine
AI Supremacy, Michael Spencer and Harry Law, March 27, 2025

We start off with a simple question: will agents lead us to AGI? OpenAI conceptualized agents as stage 3 of 5. You can ascertain that agents in 2025 are barely functional.

Since ChatGPT launched nearly 2.5 years ago, outside of DeepSeek we haven’t really seen a killer app emerge. It’s hard to know what to make of Manus AI: part Claude wrapper, but also an incredible UX with Qwen reasoning integration. Manus AI has offices in Beijing and Wuhan and is part of Beijing Butterfly Effect Technology. The startup is Tencent-backed, and with deep Qwen integration you have to imagine Alibaba might end up acquiring it.

Today, technology and AI historian Harry Law of Learning From Examples explores the awkward stage we are at, halfway between reasoning models and agents. The idea that agents will lead to AGI is also quite baffling. You might also want to read some of the community’s articles on Manus AI: will “unfathomable geniuses” really escape today’s frontier models, suddenly appearing like sentient boogeymen saluting us in their made-up languages?

About

Quotes

I’m fascinated by all things artificial intelligence, innovation, business, content, automation and futurism.

Named a LinkedIn Top Voice in 2016 and 2017 and ranked #2 in Marketing and Social, I’m an amateur futurist and indie influencer.

I have an expressed interest in futurism, A.I., quantum computing and other related topics. I think about Chinese Tech a lot as well. You can contact me at michaelkspencer 2025 at gmail dot com.

Source: LinkedIn

Web Links

AGI Miniseries

Agents are here, but a world with AGI is still hard to imagine

Source: Substack

Michael Spencer and Harry Law


Is AGI a hoax of Silicon Valley?: Introducing: The New Generation of “AGI Startups”

Source: Substack

Everyone from OpenAI to DeepSeek claims they are an AGI startup, but the way these AI startups are proliferating is starting to get out of control in 2025. I asked Futuristic Lawyer Tobias Mark Jensen to look into this trend.

On 14 April 2023, High-Flyer announced the start of an artificial general intelligence lab dedicated to researching and developing AI tools separate from High-Flyer’s financial business. Incorporated on 17 July 2023, with High-Flyer as the investor and backer, the lab became its own company: DeepSeek.

But while calling yourself an AGI research lab has become fashionable marketing in recent years, does anyone even believe AGI is a real thing, or that today’s architectures are even capable of attaining it?

The definition of AGI, and the date when it will be achieved, are both hotly debated. However, it seems actual machine learning engineers and researchers don’t think the current LLM architecture can reach this apparent goal.

OpenAI’s o3 Scores an “A” on ARC’s AGI Test: Models are going to get a lot more expensive, here’s why.

Source: Substack

We have an AGI update for you today. AGI is the representation of generalized human cognitive abilities in software, such that, faced with an unfamiliar task, an AGI system could find a solution. We note that the commercial definition of AGI has been watered down by OpenAI, Google and many others in recent years to make their systems sound more capable. Taking the above definition, however, OpenAI employees’ claims that they have AGI internally now make a bit more sense.

Apparently OpenAI’s o3 model scores 87.5% on the ARC challenge (arcprize.org). The key thing about this benchmark is that it is impossible to pre-learn, as every test has new conditions; earlier models were stuck at 30–55%. Humans are particularly good at these tasks, while LLMs have been bad at them.
