
Artificial Intelligence Act - EU AI Act


By: Quiet. Please
Listen for free

About this listen

Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.

Copyright 2024 Quiet. Please
Economics, Politics & Government
Episodes
  • Headline: Europe Leads the Charge: The EU's Groundbreaking AI Act Reshapes the Global Landscape
    Jun 28 2025
    We’re standing on the cusp of a seismic shift in how Europe—and really, the world—approaches artificial intelligence. In the past few days, as the dust settles on months of headlines and lobbying, the mood in Brussels is a mixture of relief, apprehension, and a certain tech-tinged excitement. The EU’s Artificial Intelligence Act, or AI Act, is now the law of the land, a patchwork of regulations as ambitious as the EU’s General Data Protection Regulation before it, but in many ways even more disruptive.

    For those keeping score: as of February this year, any AI system classified as carrying “unacceptable risk”—think social scoring, manipulative deepfakes, or untethered biometric surveillance—was summarily banned across the Union. The urgency is palpable; European lawmakers like Thierry Breton and Margrethe Vestager want us to know Europe is taking a “human-centric, risk-based” path that doesn’t just chase innovation but wrangles it, tames it. Over the next few weeks, eyes will turn to the European Commission’s new AI Office, already hard at work drafting a Code of Practice and prepping for the August 2025 milestone, when general-purpose AI models—like those powering art generators, chat assistants, and much more—fall squarely under the microscope.

    Let’s talk implications. For companies—especially stateside giants like OpenAI, Google, and Meta—Europe is now the compliance capital of the AI universe. The code is clear: transparency isn’t optional, and proving your AI is lawful, safe, and non-discriminatory is a ticket to play in the EU market. There’s a whole new calculus around technical documentation, reporting, and copyright policies, particularly for “systemic risk” models, which includes large language models that could plausibly disrupt fundamental rights. That means explainability, open records for training data, and above all, robust risk management frameworks—no more black boxes shrugged off as trade secrets.

    For everyday developers and startups, the challenge is balancing compliance overhead with the allure of 450 million potential users. Some argue the Act might smother European innovation by pushing smaller players out, while others—like the voices behind the BSR and the European Parliament itself—see it as a golden opportunity: trust becomes a feature, safety a selling point. In the past few days, industry leaders have scrambled to audit their supply chains, label their systems, and train up their staff—AI literacy isn’t just a buzzword now, it’s a legal necessity.

    Looking ahead, the AI Act’s phased rollout will test the resolve of regulators and the ingenuity of builders. As we approach August 2025 and 2026, high-risk sectors like healthcare, policing, and critical infrastructure will come online under the Act’s most stringent rules. The AI Office will be fielding questions, complaints, and a torrent of data like never before. Europe is betting big: if this works, it’s the blueprint for AI governance everywhere else.

    Thanks for tuning in to this deep dive. Make sure to subscribe so you don’t miss the next chapter in Europe’s AI revolution. This has been a quiet please production, for more check out quiet please dot ai.
    3 m
  • EU's AI Act: Taming the Tech Titan, Shaping the Future
    Jun 26 2025
    It’s June 26, 2025, and if you’re working anywhere near artificial intelligence in the European Union—or, frankly, if you care about how society wrangles with emergent tech—the EU AI Act is the gravitational center of your universe right now. The European Parliament passed the AI Act back in March 2024, and by August, it was officially in force. But here’s the wrinkle: this legislation rolls out in waves. We’re living through the first real ripples.

    February 2, 2025: circle that date. That’s when the first provisions of the Act with real teeth snapped shut—most notably, a ban on AI systems that pose what policymakers have labeled “unacceptable risks.” If you think that sounds severe, you’re not wrong. The European Commission drew this line in response to the potential for AI to upend fundamental rights, specifically outlawing manipulative AI that distorts behavior or exploits vulnerabilities. This isn’t abstract. Think of technologies with the power to nudge people into decisions they wouldn’t otherwise make—a marketer’s dream, perhaps, but now a European regulator’s nightmare.

    But risk isn’t just black and white here. The Act’s famed “risk-based approach” means AI is categorized: minimal risk, limited risk, high risk, and that aforementioned “unacceptable.” High-risk systems—for instance, those used in critical infrastructure, law enforcement, or education—are staring down a much tougher compliance road, but they’ve got until 2026 or even 2027 to fully align or face some eye-watering fines.
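    That four-tier taxonomy, with its staggered deadlines, is compact enough to sketch in code. Here is a minimal, purely illustrative Python sketch: the tier names come from the Act, but the milestone strings are shorthand for the dates mentioned in this episode, not legal text, and the one-to-one mapping simplifies rules that in reality depend on the specific system, not just its tier.

    ```python
    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"
        UNACCEPTABLE = "unacceptable"

    # Illustrative shorthand for the milestones this episode mentions;
    # the real Act attaches obligations per system type, not per tier alone.
    MILESTONES = {
        RiskTier.UNACCEPTABLE: "banned since 2025-02-02",
        RiskTier.HIGH: "full compliance due 2026-2027",
        RiskTier.LIMITED: "transparency duties apply",
        RiskTier.MINIMAL: "no specific obligations",
    }

    def obligation_for(tier: RiskTier) -> str:
        """Look up the (simplified) compliance milestone for a risk tier."""
        return MILESTONES[tier]

    print(obligation_for(RiskTier.HIGH))  # full compliance due 2026-2027
    ```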

    Today, we’re at an inflection point. The AI Act isn’t just about bans. It demands what Brussels calls "AI literacy"—organisations must ensure staff understand these systems, which, let’s admit, is no small feat when even the experts can’t always predict how a given model will behave. There’s also the forthcoming creation of an AI Office and the European Artificial Intelligence Board, charged with shepherding these rules and helping member states enforce them. This means that somewhere in the Berlaymont building, teams are preparing guidance, Q&As, and service desks for the coming storm of questions from industry, academia, and, inevitably, the legal profession.

    August 2, 2025, is looming. That’s when the governance rules and obligations for general-purpose AI—think the big, broad models powering everything from chatbots to medical diagnostics—kick in. Providers will need to keep up with technical documentation, maintain transparent training data summaries, and, crucially, grapple with copyright compliance. If your model runs the risk of “systemic risks” to fundamental rights, expect even more stringent oversight.

    Anyone who thought AI was just code now sees it’s a living part of society, and Europe is determined to domesticate it. Other governments are watching—some with admiration, others with apprehension. The next phase in this regulatory journey will reveal just how much AI can be tamed, and at what cost to innovation, competitiveness, and, dare I say, human agency.

    Thanks for tuning in to this techie deep dive. Don’t forget to subscribe and stay curious. This has been a quiet please production, for more check out quiet please dot ai.
    3 m
  • EU's AI Act Reshapes Europe's Tech Landscape
    Jun 24 2025
    If you’ve paid even a shred of attention to tech policy news this week, you know that the European Union’s Artificial Intelligence Act is steamrolling from theory into practice, and the sense of urgency among AI developers and businesses is palpable. Today is June 24, 2025, a date sandwiched between the first major wave of real, binding AI rules that hit the continent back in February and the next tidal surge of obligations set for August. Welcome to the new EU, where your algorithm’s legal status matters just as much as your code quality.

    Let’s get to the heart of it. The EU AI Act, the world’s first comprehensive, horizontal framework for regulating artificial intelligence, was formally adopted by the European Parliament in March 2024 and hit the official books that August. The European Commission’s AI Office, along with each member state’s newly minted national AI authorities, is shoulder-deep in building a pan-continental compliance system. This isn’t just bureaucratic window dressing. Their immediate job: sorting AI systems by risk—think biometric surveillance, predictive policing, and social scoring at the top of the “unacceptable” list.

    Since February 2 of this year, the outright ban on unacceptable-risk AI—those systems deemed too dangerous or socially corrosive—has been in force. For the first time, any company caught using AI for manipulative subliminal techniques or mass biometric scraping in public faces real legal action, not just a sternly worded letter from a digital minister. The compliance clock isn’t just ticking; it’s deafening.

    But the EU is not done flexing its regulatory muscle. Come August, all eyes turn to the requirements on general-purpose AI models—especially those like OpenAI’s GPT, Google’s Gemini, and Meta’s Llama. Providers will have to maintain up-to-date technical documentation, publish summaries of the data they use, and ensure their training sets respect European copyright law. If a model is deemed to pose “systemic risks,” expect additional scrutiny: mandatory risk mitigation plans, cybersecurity protections, incident reporting, and much tighter transparency. The AI Office, supported by the newly launched “AI Act Service Desk,” is positioning itself as the de facto referee in this rapidly evolving game.
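    Those provider duties read like a checklist, and that is roughly how a compliance team might model them internally. The sketch below is hypothetical: every field name is invented for illustration and none of it comes from the Act’s text; it simply encodes the obligations this episode lists, with the extra items gated on a systemic-risk flag.

    ```python
    from dataclasses import dataclass

    @dataclass
    class GpaiProvider:
        """Hypothetical record of a general-purpose AI provider's paperwork."""
        has_technical_docs: bool = False
        has_training_data_summary: bool = False
        has_copyright_policy: bool = False
        systemic_risk: bool = False
        has_risk_mitigation_plan: bool = False
        has_incident_reporting: bool = False

    def missing_obligations(p: GpaiProvider) -> list[str]:
        """Return the duties from this episode's list that are still unmet."""
        gaps = []
        if not p.has_technical_docs:
            gaps.append("technical documentation")
        if not p.has_training_data_summary:
            gaps.append("training data summary")
        if not p.has_copyright_policy:
            gaps.append("copyright policy")
        if p.systemic_risk:  # "systemic risk" models face extra scrutiny
            if not p.has_risk_mitigation_plan:
                gaps.append("risk mitigation plan")
            if not p.has_incident_reporting:
                gaps.append("incident reporting")
        return gaps
    ```

    A provider flagged for systemic risk with only its documentation in order would see "risk mitigation plan" and "incident reporting" among its remaining gaps.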

    For businesses integrating AI, the compliance load is non-negotiable. If your AI touches the EU, you need AI literacy training, ironclad governance, and rock-solid transparency up and down your value chain. The risk-based approach is about more than just box-ticking: it’s the EU’s gambit to build public trust, keep innovation inside sensible guardrails, and position itself as the global trendsetter in AI ethics and safety.

    With the AI landscape shifting this quickly, it’s a rare moment when policy gets to lead technology rather than chase after it. The world is watching Brussels, and it’s anyone’s guess which superpower will follow suit next. For now, the rules are real, the deadlines are near, and the future of AI feels—finally—like a shared European project.

    Thanks for tuning in. Don’t forget to subscribe. This has been a Quiet Please production, for more check out quiet please dot ai.
    3 m
These days it’s possible to dress up any text a little and put appropriate pauses and intonation into it. This is just plain text narrated by artificial intelligence.

Artificial voice, no pauses, etc.
