Crown City Magazine: Local Author’s New Book, Algorithms of Armageddon


Some time ago, the Crown City Magazine team interviewed retired U.S. Navy Captain George Galdorisi about his book, Fire and Ice. This month, we spoke with him about artificial intelligence, the technology that is the subject of his most recent book, Algorithms of Armageddon: The Impact of Artificial Intelligence on Future Wars, published by the U.S. Naval Institute Press and released this spring.

To say that artificial intelligence (AI) is a technology that has dominated the news over the past several years is an understatement. Unfortunately, it is also one of those issues where there is more heat than light. Galdorisi explained that the subject of artificial intelligence has triggered not only intense interest, but also (often shrill) opinions from voices pro and con regarding what AI might do in our society, let alone how it might change warfare as we know it.

We wanted to follow up with him regarding how and why what is happening in the AI-realm has evolved so rapidly over the past few years, so we asked George this basic question: Could anyone have predicted some of the headline-grabbing events surrounding AI that have occurred in just the past few months?

He noted that “AI technology has dominated the media over the last year in ways that few could have envisioned. We have seen the controversy regarding generative AI such as ChatGPT, Bard, and Bing, and especially their promise and their peril. Additionally, we have seen some calls for a complete pause in AI development.”

He went on to explain, “This frenzy to ‘rein in’ AI has reached a fever pitch with many influencers calling for a brake on AI development. Geoffrey Hinton, widely recognized as the ‘Godfather of AI,’ quit his job at Google to ‘freely speak out about the risks of AI.’ Among other statements, Hinton said: ‘AI technologies pose profound risks to society and humanity.’”

Galdorisi added: “Perhaps no tech leader has been more vocal regarding the dangers of AI than Elon Musk, who opines across multiple media.”

Musk expressed worry about the state of the AI race, noting that an open letter signed by nearly 200 technology leaders and researchers urging companies to pause development of powerful AI systems for at least six months to prevent profound risks to society “was a cautionary message and deserved to be out there.” Later in the interview, Sundar Pichai declared, “We are working with a technology that has the potential to cause harm in a deep way.”

Responding to these concerns, our national leaders have weighed in. A New York Times article noted that the Biden administration “is confronting the rapidly expanding use of artificial intelligence, warning of the dangers the technology poses to public safety, privacy and democracy while having limited authority to regulate it.”

Galdorisi noted, “Unsurprisingly, the fears that AI will usher in dystopian scenarios have once again made their way into popular culture, with the 2023 streaming series Mrs. Davis, whose high concept revolves around a future society where humans outsource their brain work to machines and calamity ensues. Reviewers called the series ‘intriguing’ and noted that they were ‘hooked.’ This exemplifies the depth of the fears of AI.”

Against this backdrop of concerns regarding AI in the civilian sector, we asked George how this controversy translates into the military realm. He answered, “That’s a great question. When it comes to the issue of inserting AI into military platforms, systems, sensors and weapons, the arguments pro and con regarding AI go into overdrive. While some are of the opinion that the United States must win the AI arms race with our peer competitors, China and Russia, others make the argument that the U.S. military will lose control of its AI-enabled tools and that disaster will ensue.”

He went on to add: “Fortunately, senior leaders in the Department of Defense have been proactive in reminding the American public of the existential threat peer competitors with AI-enabled military forces pose if the U.S. military cannot counter them with similar platforms, systems, sensors and weapons. In an address at the Reagan National Defense Forum, U.S. Secretary of Defense Lloyd Austin stated: ‘DoD wants to successfully lead the AI Revolution.’ Deputy Secretary of Defense Dr. Kathleen Hicks emphasized the importance of AI technologies for the U.S. military to ‘provide operational commanders with data-driven technologies, including artificial intelligence, machine learning and automation.’”

Going further, Galdorisi noted: “A front-page article in The New York Times quoted the Pentagon’s chief information officer, John Sherman, regarding the national security imperative to continue AI development, with Sherman stating: ‘If we stop, guess who’s not going to stop: potential adversaries overseas. We’ve got to keep moving. The Chinese won’t wait, and neither will the Russians.’”

Given the wide spectrum of arguments regarding the U.S. military employing AI-enabled weapons, we asked George if he thought that there are things that all of us, as citizens, should do.

“Yes, I do,” he concluded. “America needs a national dialogue to determine the risks and rewards of AI development, and a large part of that discussion should focus on the need for the U.S. military to have access to the latest AI-enabled technology to provide for the security and prosperity of the American people. An informed and engaged public can be a powerful tool to ensure that this occurs.”

George reminded us that in addition to writing books, he likes nothing more than connecting with readers. You can follow him on Facebook and Twitter, and learn more about his books, blogs and other writing on his website: georgegaldorisi.com.

Leveraging AI Technologies to Enable Uncrewed Maritime Vessel Autonomy


The U.S. Navy stands at the precipice of a new era of technology advancement. In an address at a military-industry conference, then-U.S. Chief of Naval Operations, Admiral Michael Gilday, revealed the Navy’s goal to grow to 500 ships, to include 350 crewed ships and 150 uncrewed maritime vessels. This plan has been dubbed the “hybrid fleet.” More recently, the current CNO, Admiral Lisa Franchetti, has stressed the importance of the hybrid fleet in her Navigation Plan for America’s Warfighting Navy.

Read the Full Article Below Beginning on Page 73.

Future Fleet: Readiness, Innovation, and Naval Superiority

In September 2024, the Chief of Naval Operations, Admiral Lisa Franchetti, issued her Chief of Naval Operations Navigation Plan for America’s Warfighting Navy. This Navigation Plan embodies “Project 33,” in recognition of the fact that Admiral Franchetti is the 33rd Chief of Naval Operations. Project 33 articulates two overarching objectives: to be ready for the possibility of war with the People’s Republic of China by 2027 and to enhance the Navy’s long-term advantage. The Plan has several components:

The readiness component of the Navigation Plan has the goal of eliminating ship, submarine and aircraft maintenance delays and restoring critical infrastructure that sustains and projects the fight from shore.

The people component of the Navigation Plan describes the goal of recruiting and retaining the force needed to fill officer, chief petty officer and enlisted ranks and delivering a quality of service for Navy personnel.

The operational component of the Navigation Plan involves creating upgraded command centers for the Navy Fleet Commanders and training for combat to ensure that the Navy has a warfighting advantage over its adversaries.

Finally, the goal to scale robotic and autonomous systems to integrate more platforms at speed focuses on capitalizing on the inherent advantages of uncrewed systems. This is perhaps the most intriguing part of the CNO’s Navigation Plan.

Click Here to Read In Full

Reagan National Defense Forum Highlights Uncrewed Maritime Systems By George Galdorisi


The Reagan National Defense Forum, held every year on a Saturday in early December, is one of the most important national security dialogues of the year. “Everyone who is anyone” in the national security space is either an invited speaker or an in-person attendee.

As the informed readership of Maritime Reporter and Engineering News knows, uncrewed surface vehicles (USVs) represent one of the most cutting-edge and innovative technologies in today’s defense space. Given the scope of this event, not every speaker’s remarks were directly focused on uncrewed surface vehicles. That said, much of the national security discussion centered on gaps that the U.S. military needs to fill, and, unsurprisingly, the conversations regarding technology, innovation and related issues placed a strong emphasis on these USVs.

Listen to the full episode at Maritime Magazines

Think Different


Even as bookstore (or Amazon warehouse) shelves groan under the weight of books about Silicon Valley, these books continue to feed our fascination with the tech industry.

That is why I was drawn to the review of a new book, What Tech Calls Thinking: An Inquiry Into the Intellectual Bedrock of Silicon Valley. Here is how it begins:

In 2007, the venture capitalist Marc Andreessen argued in a brassy blog post that markets — not personnel, product or pricing — were the only thing a start-up needed to take flight. Teams, he suggested, were a dime a dozen. Products could be barely functional. He even suggested that the laws of supply and demand, the ones that generate price competition, no longer obtained.

The takeaway was something like “If they come, you will build it.” To get them to come, a founder needs a magnetic concept. Community, say. Connection. Sharing. Markets coalesced around these hazy notions in 2007 and 2008, with the debuts of Twitter, Airbnb, Waze, Tumblr and Dropbox.

In an erudite new book, “What Tech Calls Thinking,” Adrian Daub, a professor of comparative literature and German studies at Stanford, investigates the concepts in which Silicon Valley is still staked. He argues that the economic upheavals that start there are “made plausible and made to seem inevitable” by these tightly codified marketing strategies he calls “ideals.”

There are so many scintillating aperçus in Daub’s book that I gave up underlining. But I couldn’t let “Disruption is a theodicy of hypercapitalism” pass. Not only does Daub’s point ring true — ennobling destruction and sabotage makes the most brutal forms of capitalism seem like God’s will — but the words themselves sound like one of the verses of a German punk-socialist anthem.

Want more? Here is a link to the NYT article

https://www.nytimes.com/2020/10/13/books/review/what-tech-calls-thinking-adrian-daub.html

 


Dedication to a Cause


Much ink has been spilled about the future of robots and how they will either help – or hurt – humanity. Some still fear HAL from 2001: A Space Odyssey.

That is why I was drawn to a recent piece, “A Case for Cooperation Between Machines and Humans.” The subtitle is revealing: “A computer scientist argues that the quest for fully automated robots is misguided, perhaps even dangerous. His decades of warnings are gaining more attention.” Here is how it begins:

The Tesla chief Elon Musk and other big-name Silicon Valley executives have long promised a car that can do all the driving without human assistance.

But Ben Shneiderman, a University of Maryland computer scientist who has for decades warned against blindly automating tasks with computers, thinks fully automated cars and the tech industry’s vision for a robotic future are misguided. Even dangerous. Robots should collaborate with humans, he believes, rather than replace them.

Late last year, Dr. Shneiderman embarked on a crusade to convince the artificial intelligence world that it is heading in the wrong direction. In February, he confronted organizers of an industry conference on “Assured Autonomy” in Phoenix, telling them that even the title of their conference was wrong. Instead of trying to create autonomous robots, he said, designers should focus on a new mantra, designing computerized machines that are “reliable, safe and trustworthy.”

There should be the equivalent of a flight data recorder for every robot, Dr. Shneiderman argued.

It is a warning that’s likely to gain more urgency when the world’s economies eventually emerge from the devastation of the coronavirus pandemic and millions who have lost their jobs try to return to work. A growing number of them will find they are competing with or working side by side with machines.

Want more? You can read the full article here

The Innovation Bible

25christensen02-popup

Clayton M. Christensen, a Harvard professor whose groundbreaking 1997 book, “The Innovator’s Dilemma,” outlined his theories about the impact of what he called “disruptive innovation” on leading companies and catapulted him to superstar status as a management guru, died last month.

“The Innovator’s Dilemma,” which The Economist called one of the six most important business books ever written, was published during the technology boom of the late 1990s. It trumpeted Professor Christensen’s assertion that the factors that help the best companies succeed — listening responsively to customers, investing aggressively in technology products that satisfied customers’ next-generation needs — are the same reasons some of these companies fail.

These corporate giants were so focused on doing the very things that had been taught for generations at the nation’s top business schools, he wrote, that they were blindsided by small, fast-moving, innovative companies that were able to enter markets nimbly with disruptive products and services and grab large chunks of market share. By laying out a blueprint for how executives could identify and respond to these disruptive forces, Professor Christensen, himself an entrepreneur and former management consultant, struck a chord with high-tech corporate leaders.

Want more? You can read the full piece here

Changing the World

Innovation may rank as one of today’s most-used buzzwords. Most would agree that innovation is “good” – something inherently beneficial, especially for business.

But that’s where the agreement ends, as most don’t know “what” they want innovation to “do.” As a result, there is a cottage industry of books, articles, seminars, podcasts, etc., about innovation.

Without offering a point solution, I’ll say that for me, innovation is about change – often the desire to change the world.

That’s why I was drawn to an article, “The Courage to Change the World.” Here is how it takes on the subject:

Call them what you will: change makers, innovators, thought leaders, visionaries.

In ways large and small, they fight. They disrupt. They take risks. They push boundaries to change the way we see the world, or live in it. Some create new enterprises, while others develop their groundbreaking ideas within an existing one.

From Archimedes to Zeppelin, the accomplishments of great visionaries over the centuries have filled history books. More recently, from Jeff Bezos of Amazon to Mark Zuckerberg of Facebook and Elon Musk of SpaceX and Tesla Motors, they are the objects of endless media fascination — and increasingly intense public scrutiny.

Although centuries stretch between them, experts who have studied the nature of innovators across all areas of expertise largely agree that they have important attributes in common, from innovative thinking to an ability to build trust among those who follow them to utter confidence and a stubborn devotion to their dream.

Want more? You can read the full article here

The Future is Unmanned


One of the most rapidly growing areas of innovative technology adoption involves unmanned systems. The U.S. military’s use of these systems—especially armed unmanned systems—is not only changing the face of modern warfare, but is also altering the process of decision-making in combat operations. These systems are evolving rapidly to deliver enhanced capability to the warfighter and seem poised to deliver the next “revolution in military affairs.” However, there are increasing concerns regarding the degree of autonomy these systems—especially armed unmanned systems—should have.

I addressed this issue in an article in the professional journal, U.S. Naval Institute Proceedings. Here is how I began:

While unmanned systems increasingly impact all aspects of life, it is their use as military assets that has garnered the most attention, and with that attention, growing concern.

The Department of Defense’s (DoD’s) vision for unmanned systems (UxS) is to integrate them into the joint force for a number of reasons, but especially to reduce the risk to human life, to deliver persistent surveillance over areas of interest, and to provide options to warfighters that derive from the technologies’ ability to operate autonomously. The most recent DoD “Unmanned Systems Integrated Roadmap” noted, “DoD envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure.”

I’ve attached the full article here