Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe? Highlights:

• Artificial Intelligence (AI) is emerging as one of the most transformative technological developments in human history.
• Biological civilisations may universally underestimate the speed at which AI systems progress, as this pace is radically different from traditional biological timescales.
• AI could spell the end of intelligence on Earth (including AI itself) before mitigating strategies, e.g. a multiplanetary capability, have been achieved.
• These arguments suggest that the longevity, L, of technical civilisations is < 200 years, thus explaining the great silence observed by SETI (see the Drake-equation sketch after these highlights).
• Small values for L underscore the necessity to intensify efforts to regulate AI; failure to do so could rob the universe of all conscious presence.
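
For context on L (not part of the paper's highlights): L is the final factor of the Drake equation, so the expected number N of communicating technical civilisations scales linearly with how long they survive. A minimal sketch of why a small L produces a great silence, using illustrative factor values that are assumptions of this sketch, not figures from the paper:

\[
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
\]

With, say, R* = 1 star formed per year, fp = 1, ne = 0.2, fl = 1, and fi = fc = 0.1, the product of the factors other than L is 0.002 per year. A lifetime of L = 10^6 years then yields N ≈ 2,000 coexisting civilisations, whereas L = 200 years yields N ≈ 0.4, i.e. most likely no one else broadcasting at the same time as us.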

How Far Are We From Artificial General Intelligence (AGI)? The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors. Yet escalating demands on AI have highlighted the limitations of current systems, catalyzing a movement towards Artificial General Intelligence (AGI). AGI, distinguished by its ability to execute diverse real-world tasks with efficiency and effectiveness comparable to human intelligence, marks a paramount milestone in AI's evolution. While existing works have summarized specific recent advances in AI, they lack a comprehensive discussion of AGI's definitions, goals, and developmental trajectories. Unlike existing survey papers, this paper delves into the pivotal questions of our proximity to AGI and the strategies necessary for its realization through extensive surveys, discussions, and original perspectives.

Big tech has distracted world from existential risk of AI: “In 1942, Enrico Fermi built the first ever reactor with a self-sustaining nuclear chain reaction under a Chicago football field,” Tegmark, who trained as a physicist, said. “When the top physicists at the time found out about that, they really freaked out, because they realised that the single biggest hurdle remaining to building a nuclear bomb had just been overcome. They realised that it was just a few years away – and in fact, it was three years, with the Trinity test in 1945. AI models that can pass the Turing test [where someone cannot tell in conversation that they are not speaking to another human] are the same warning for the kind of AI that you can lose control over. That’s why you get people like Geoffrey Hinton and Yoshua Bengio – and even a lot of tech CEOs, at least in private – freaking out now.”

AI deception: A survey of examples, risks, and potential solutions: AI systems are already capable of deceiving humans. Deception is the systematic inducement of false beliefs in others in pursuit of some outcome other than the truth. Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating safety tests. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems. Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into detecting and preventing AI deception. Proactively addressing the problem of AI deception is crucial to ensure that AI acts as a beneficial technology that augments rather than destabilizes human knowledge, discourse, and institutions.

Public Computing Intellectuals in the Age of AI Crisis: The belief that AI technology is on the cusp of causing a generalized social crisis became a popular one in 2023. Interestingly, some of these worries were voiced from within the tech sector itself. While there was no doubt an element of hype and exaggeration in some of these accounts, they do reflect the fact that this technology stack has troubling ramifications. This conjunction of shared concerns about social, political, and personal futures presaged by current developments in machine learning and data science presents the academic discipline of computing with a rare opportunity for self-examination and reconfiguration. This position paper takes up that opportunity in four sections. The first expands on the nature of the AI crisis for computing. The second articulates possible critical responses to this crisis and advocates for a broader analytic focus on power relations. The third presents a novel characterization of academic computing’s epistemological field, one which includes not only the discipline’s usual instrumental forms of knowledge but reflexive knowledge as well. This reflexive dimension integrates the critical and public functions of the discipline as equal intellectual partners and a necessary component of any contemporary academic field. The final section advocates for a conceptual archetype, the Public Computer Intellectual, as a way of practically imagining the expanded possibilities of academic practice in our discipline, one that provides both self-critique and an outward-facing orientation towards the public good. It argues that the computer education research community can play a vital role in this regard.

AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge? As advanced artificial intelligence (AI) technologies are developed and deployed, core zones of information and knowledge that support democratic life will be mediated more comprehensively by machines. Chatbots and AI agents may structure most internet, media, and public informational domains. What humans believe to be true and worthy of attention, what becomes public knowledge, may increasingly be influenced by the judgments of advanced AI systems. This shift will present profound challenges to democracy. A pattern of what we might consider “epistemic risk” will threaten the possibility of AI ethical alignment with human values. AI technologies are trained on data from the human past, but democratic life often depends on the surfacing of human tacit knowledge and previously unrevealed preferences. Accordingly, as AI technologies structure the creation of public knowledge, the substance may increasingly be a recursive byproduct of AI itself, built on what we might call “epistemic anachronism.” This paper argues that epistemic capture or lock-in and a corresponding loss of autonomy are pronounced risks, and it analyzes three example domains (journalism, content moderation, and polling) to explore these dynamics. The pathway forward for achieving any vision of ethical and responsible AI in the context of democracy requires an insistence on epistemic modesty within AI models, as well as norms that emphasize the incompleteness of AI’s judgments with respect to human knowledge and values.

Without journalists, there is no journalism: the social dimension of generative artificial intelligence in the media

act.: Fostering a Federated AI Commons ecosystem: the centralization of power through AI is not inevitable. For example, there are initiatives aiming to build federations of small organizations that can become part of a broader AI Commons ecosystem. This policy paper provides actionable recommendations for the G20 to foster decentralized AI development. We urge support for an alternative AI ecosystem characterized by community and public control of consensual data; decentralized, local, and federated development of small, task-specific AI models; worker cooperatives for appropriately compensated and dignified data labeling and content moderation work; and ongoing attention to minimizing the ecological footprint and the social, economic, and environmental harms of AI systems. We call on the G20 to center bienes comunes (the commons), human rights, and the public’s interest in AI development.