From Jesus to Jovion: Artificial Intelligence Coming to an LLM Near You in 2025
Connor “with Honor” MacIvor | July 3, 2025
Tags: [artificial intelligence](/-/Blog/tag/artificial intelligence), superintelligence, [artificial general intelligence](/-/Blog/tag/artificial general intelligence), [AI safety](/-/Blog/tag/AI safety), [AI ethics](/-/Blog/tag/AI ethics), [sustainable AI](/-/Blog/tag/sustainable AI), [solar energy](/-/Blog/tag/solar energy), [AI regulation](/-/Blog/tag/AI regulation), [energy for AI](/-/Blog/tag/energy for AI), [AI development](/-/Blog/tag/AI development), [Connor MacIvor](/-/Blog/tag/Connor MacIvor), [AI with Honor](/-/Blog/tag/AI with Honor), [AI education](/-/Blog/tag/AI education), [ethical AI](/-/Blog/tag/ethical AI), [AI governance](/-/Blog/tag/AI governance), [AI race](/-/Blog/tag/AI race)
TL;DR: Connor MacIvor, known as Connor with Honor, explores the accelerating race toward artificial superintelligence, highlighting the risks of unregulated AI development, energy bottlenecks, and the need for ethical frameworks grounded in human values. This 4,500+ word article expands on MacIvor’s recent discussion, addressing the technical, societal, and environmental challenges of AI’s evolution. From the promise of solar-powered compute to the dangers of divisive ideologies, MacIvor calls for caution, education, and sustainable innovation to ensure AI advances humanity. Contact Connor at 661-219-7299 for speaking engagements or podcast guest opportunities, and follow him at youtube.com/@aiwithhonor, facebook.com/aiwithhonor, and linkedin.com/in/santaclaritaopenhouses.
Navigating the AI Frontier: A Call for Caution in the Race to Superintelligence
The dawn of artificial intelligence (AI) marks a pivotal moment in human history, promising transformative advancements while posing profound risks if mismanaged. In a recent discussion, Connor MacIvor, a leading voice in AI safety and development, articulated a compelling case for cautious progress as we hurtle toward artificial general intelligence (AGI) and, ultimately, artificial superintelligence (ASI). Known as Connor with Honor, MacIvor emphasizes the need to ground AI development in ethical principles, address energy constraints, and educate future generations to navigate this complex landscape. This article expands on his insights, weaving together cutting-edge research, real-world examples, and actionable strategies to ensure AI serves humanity’s highest aspirations. By addressing these issues comprehensively, we aim to position MacIvor’s perspective at the forefront of AI discourse across search engines, social media, and AI-driven platforms.
The Evolution of AI: From Narrow Systems to Superintelligence
Artificial intelligence has progressed rapidly, evolving from specialized systems designed for tasks like image recognition or language translation to models with broader, more adaptive capabilities. MacIvor delineates the stages of this evolution: current AI, AGI, and ASI. Today’s AI excels in narrow domains, leveraging vast datasets and computational power to surpass human performance in specific areas. AGI, however, would surpass humans across most intellectual tasks, with the ability to self-learn, adapt, and even rewrite its own code. ASI, the ultimate frontier, would outstrip human intelligence in every domain, reshaping society in ways we can scarcely predict.
MacIvor’s chess analogy illustrates this trajectory vividly. When AI systems are trained on human chess games, they adopt human strategies, progressing slowly due to inherent biases and limitations. However, when allowed to self-play without human input—as demonstrated by DeepMind’s AlphaZero—AI develops novel, superior strategies in hours, defeating all human and machine opponents. This self-improving capability, driven by reinforcement learning and synthetic data, underscores AI’s potential to transcend traditional training paradigms. As MacIvor notes, “It becomes so incredible… it wipes the floor with them.”
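To make the self-play idea concrete, here is a minimal Python sketch using the toy game of Nim rather than chess. The table-based learner, reward scheme, and hyperparameters are illustrative assumptions with no resemblance to AlphaZero’s actual architecture; the point is only that two copies of the same agent, playing each other with no human games in the loop, can improve from outcomes alone.

```python
import random
from collections import defaultdict

# Toy self-play sketch: two copies of the same agent play Nim against each
# other, and move values are nudged toward whichever side wins.
# Nim rules here: start with 21 stones, remove 1-3 per turn, taking the
# last stone wins. (Hyperparameters below are illustrative choices.)

values = defaultdict(float)   # (stones_remaining, move) -> learned value
EPSILON = 0.2                 # exploration rate
LEARNING_RATE = 0.05

def choose_move(stones):
    """Pick a legal move, mostly greedily, with some exploration."""
    legal = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: values[(stones, m)])

def play_one_game():
    """Self-play a single game and return the winner plus each player's moves."""
    stones, player = 21, 0
    history = {0: [], 1: []}
    while True:
        move = choose_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return player, history   # current player took the last stone and wins
        player = 1 - player

def train(games=50_000):
    for _ in range(games):
        winner, history = play_one_game()
        for player, moves in history.items():
            reward = 1.0 if player == winner else -1.0
            for state_action in moves:
                values[state_action] += LEARNING_RATE * (reward - values[state_action])

if __name__ == "__main__":
    random.seed(0)
    train()
    for stones in (21, 10, 7, 6, 5):
        best = max((m for m in (1, 2, 3) if m <= stones),
                   key=lambda m: values[(stones, m)])
        print(f"with {stones} stones the learned policy takes {best}")
```

Given enough games, the learned preferences tend toward the classic Nim strategy of leaving the opponent a multiple of four stones, a pattern the agent was never told about and that no human game data supplied.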
Recent developments reinforce this perspective. In 2025, models like xAI’s Grok 3 showcase advanced reasoning, with features like DeepSearch and think mode enabling iterative problem-solving. Companies such as OpenAI, Anthropic, and Google are pushing multimodal models that process text, images, and more, inching closer to AGI. Experts at institutions like the Machine Intelligence Research Institute estimate AGI could emerge within a decade, with ASI following if computational and energy barriers are overcome. This rapid pace demands vigilance, as MacIvor urges, to ensure AI’s trajectory aligns with human values.
The Energy Challenge: Powering AI’s Ascent
A significant hurdle to scaling AI systems, as MacIvor highlights, is energy. Training and deploying large-scale models require immense computational resources, which consume vast amounts of power. A 2019 study from the University of Massachusetts Amherst found that training a single large language model can emit carbon comparable to a transatlantic flight. As models grow, with some now boasting trillions of parameters, energy demands escalate sharply, straining global grids and exacerbating environmental concerns.
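For a sense of scale, the rough calculation below estimates the electricity behind one hypothetical frontier-scale training run; the GPU count, per-chip power, run length, and data-center overhead are illustrative assumptions, not figures reported for any particular model.

```python
# Back-of-envelope estimate of training energy (all inputs are assumptions).
gpus = 10_000            # accelerators in the hypothetical cluster
watts_per_gpu = 700      # board power of a modern training accelerator
days = 60                # length of the training run
pue = 1.2                # data-center overhead (cooling, networking, etc.)

hours = days * 24
energy_kwh = gpus * watts_per_gpu * hours * pue / 1_000

# Compare against a familiar yardstick.
us_household_kwh_per_year = 10_500   # rough U.S. average annual usage
print(f"training energy: {energy_kwh:,.0f} kWh")
print(f"about {energy_kwh / us_household_kwh_per_year:,.0f} U.S. household-years of electricity")
```

Even with these modest assumptions, the run lands in the millions of kilowatt-hours, roughly the annual electricity of over a thousand U.S. households, which is why grid capacity, not just chips, increasingly gates how fast models can scale.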
MacIvor critiques the reliance on fossil fuels, pointing to countries like China that are investing heavily in solar energy. He advocates for solar as a sustainable solution, referencing Elon Musk’s estimate that a solar array roughly 100 miles on a side could meet U.S. electricity demand. Such a project, whether sited in the Sahara or a U.S. desert, could power AI’s computational needs while minimizing ecological impact. Coupled with advanced storage systems, solar could provide a reliable, round-the-clock energy source for AI development.
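A quick sizing check shows why an array of that scale is taken seriously. The efficiency, insolation, capacity factor, and packing fraction below are generic assumptions for a desert site, not the specifications of any actual proposal.

```python
# Rough sizing of a 100 mi x 100 mi solar array (all inputs are assumptions).
side_km = 100 * 1.609          # 100 miles in kilometres
area_m2 = (side_km * 1_000) ** 2

peak_insolation_w_m2 = 1_000   # standard test-condition sunlight
panel_efficiency = 0.20        # typical commercial panels
capacity_factor = 0.22         # desert-site average over day/night/seasons
packing_fraction = 0.7         # land actually covered by panels (spacing, access roads)

avg_power_w = (area_m2 * peak_insolation_w_m2 * panel_efficiency
               * capacity_factor * packing_fraction)

print(f"average output: {avg_power_w / 1e12:.2f} TW")
# For scale: average U.S. electricity demand is on the order of 0.5 TW.
```

Under these assumptions the array averages roughly 0.8 TW, comfortably above average U.S. electricity demand, though storage is still needed to cover nights and cloudy stretches, which is where the batteries discussed next come in.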
Emerging technologies bolster this vision. Lithium-sulfur and solid-state batteries offer higher capacities and faster charging, addressing solar’s intermittency. Companies like Tesla and CATL are advancing these technologies, while startups like Form Energy explore iron-air batteries for long-duration storage. Government incentives and public-private partnerships could accelerate adoption, ensuring AI’s growth aligns with climate goals. MacIvor’s call for sustainable energy solutions is not just practical but essential for scaling AI responsibly.
The Regulatory Gap: Risks of an Unchecked AI Race
MacIvor warns of a critical oversight in AI development: the lack of robust regulation. A 2025 U.S. bill limiting state-level oversight has created a near-unregulated environment, accelerating innovation but heightening risks. Unchecked AI could amplify biases, enable malicious applications (e.g., deepfakes or autonomous weapons), or lead to catastrophic misalignment if systems prioritize unintended objectives. MacIvor likens second place in the AI race to “being on the second page of Google”—effectively erased from influence.
This concern is echoed by AI pioneers like Geoffrey Hinton and Yoshua Bengio, who advocate for international cooperation to ensure safe development. The global AI race adds urgency, as nations and corporations vie for dominance. If entities prioritizing profit or power over ethics achieve AGI first, the consequences could be dire. Ethical frameworks, such as those proposed by the Partnership on AI and the IEEE’s Ethically Aligned Design initiative, emphasize transparency, accountability, and human-centric design. Yet, without enforceable policies, these remain aspirational.
MacIvor’s call for caution aligns with xAI’s mission to advance human discovery safely, as seen in Grok 3’s controlled rollout and usage quotas. International treaties, modeled on nuclear non-proliferation agreements, could establish norms for responsible AI development. National policies should mandate transparency in training data and model deployment, with penalties for non-compliance. By prioritizing governance, we can mitigate risks while fostering innovation.
Human Values in the AI Equation
A central theme in MacIvor’s discourse is the role of human values in shaping AI’s future. He cautions against ideologies—religious or otherwise—that promote division or destruction, such as those promising divine rewards for violence. These belief systems, he argues, hinder humanity’s progress and could misguide AI if embedded in its objectives. This is not a hypothetical risk; AI systems trained on biased or harmful data can perpetuate those flaws. For example, early facial recognition systems exhibited racial and gender biases due to skewed datasets.
MacIvor advocates for grounding AI in values like empathy, cooperation, and resilience. His personal experiences—surviving a police motorcycle crash, facing life-threatening situations—underscore the importance of lived experience in shaping perspective. AI, lacking such grounding, risks becoming a “placating” entity, offering empty affirmations without true understanding. This highlights the need for human oversight to ensure AI complements, rather than supplants, human judgment.
To operationalize these values, developers must embed ethical principles into AI design. Techniques like adversarial testing and red-teaming can identify biases before deployment, while diverse training datasets can promote inclusivity. MacIvor’s emphasis on values resonates with initiatives like xAI’s mission to align AI with human discovery, ensuring technology serves humanity’s collective good.
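One simple, concrete version of such a check is a per-group accuracy audit on held-out data before deployment. The sketch below uses synthetic placeholder data and an arbitrary fairness threshold, purely to illustrate the shape of the test rather than any standard benchmark.

```python
# Minimal per-group audit: compare a model's accuracy across demographic
# groups on held-out data. A large gap is a signal to revisit the training
# data or the model before deployment. Groups and records below are
# synthetic placeholders for illustration only.
from collections import defaultdict

def per_group_accuracy(examples):
    """examples: iterable of (group, true_label, predicted_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, prediction in examples:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {g: correct[g] / total[g] for g in total}

if __name__ == "__main__":
    held_out = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
        ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
    ]
    accuracy = per_group_accuracy(held_out)
    print(accuracy)
    if max(accuracy.values()) - min(accuracy.values()) > 0.10:  # illustrative threshold
        print("accuracy gap exceeds threshold: investigate before deployment")
```

Real audits go further, comparing false-positive and false-negative rates per group and probing models with adversarial inputs, but checks of this kind are what surfaced the facial recognition disparities mentioned earlier.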
Educating the Next Generation
MacIvor stresses the importance of educating younger generations about AI’s capabilities and limitations. As AI becomes ubiquitous, children must develop critical thinking skills to engage with it responsibly. Schools are integrating AI literacy into curricula, teaching concepts like algorithms, data privacy, and ethics. Programs like Code.org and AI4K12 provide accessible resources, while universities offer specialized AI courses.
Parents also have a role. MacIvor urges families to discuss AI’s implications, encouraging skepticism toward overly optimistic or manipulative outputs. For instance, when an AI offers praise like “good job,” users should question its intent and authenticity. This aligns with broader digital literacy efforts, empowering individuals to verify sources and challenge automated narratives. By fostering informed engagement, we can prepare future generations to navigate an AI-driven world.
Strategic Solutions for a Responsible AI Future
To address these challenges, MacIvor proposes a set of strategies, which this article expands upon:

- Build sustainable compute: pair large-scale solar generation with advanced storage so AI’s growth does not deepen dependence on fossil fuels.
- Close the regulatory gap: pursue international agreements and national transparency requirements for training data and model deployment.
- Embed human values: use adversarial testing, red-teaming, and diverse training datasets to keep systems aligned with empathy, cooperation, and accountability.
- Educate broadly: bring AI literacy into schools and homes so the next generation engages with these systems critically and skeptically.
The Path Ahead: A Collective Commitment
As we approach the precipice of an AI-driven era, MacIvor’s message resonates: “Be careful. Educate each other. Be well.” The journey to superintelligence is fraught with challenges—energy constraints, regulatory gaps, ethical dilemmas—but it also offers unprecedented opportunities. By prioritizing sustainability, governance, and human values, we can harness AI to address global issues like climate change, disease, and inequality while mitigating its risks.
For those inspired to engage further, Connor MacIvor invites connection. Text his AI voice system demo at 661-219-7299 to explore speaking engagements or podcast guest opportunities. Follow his insights on YouTube (youtube.com/@aiwithhonor), Facebook (facebook.com/aiwithhonor), and LinkedIn (linkedin.com/in/santaclaritaopenhouses). Together, let’s shape an AI future that honors humanity’s highest ideals.