Hurricane response helping cybersecurity prep

Louisiana is applying its approach to natural disasters, like hurricanes, to cybersecurity. This includes mobilizing the National Guard and having state agencies coordinate with local entities. All parties are used to the interactions, since the protocols are the same ones they follow during natural disasters.

Route Fifty | How hurricane response helped one state’s cyber preparedness

Anatomy of a City’s Plan to Regulate Crypto Mining

Effingham, Illinois has a plan to protect its community from the impact of a forthcoming crypto mine. The ordinance establishes regulations addressing property values, energy and water supplies, and noise. Effingham Public Works Director Greg Koester describes the regulations: “It provides definitions for cryptocurrency mine, cryptocurrency mining, data mining, data center, high-density load and establishes some regulations on where these particular uses can be and then some requirements that they need to abide by to be located in that area. It does require a site plan and a special use.”

Governing | Illinois City Approves New Regs for Cryptocurrency Mines and Data Centers

Elected officials and social media blocking

A unanimous Supreme Court determined that elected officials can block social media followers only under very specific circumstances: the account must be their private account, and even then only if it has nothing to do with their government position. The practical effect? Yes, elected officials can be sued for violating the First Amendment when they block their critics.

SCOTUSblog | Public officials can be held liable for blocking critics on social media

A.I. risk list

A Harvard Law School Forum on Corporate Governance post gives us a handy rundown of AI risks. Let’s take a peek:

  • Unwanted bias: automated systems relying on biased data or design can produce discriminatory outcomes, perpetuating inequalities in decision-making. Some companies have faced legal action after using AI systems that allegedly reinforced discriminatory outcomes.[16]
  • “Hallucinations,” referring to when AI generates false information.[17]
  • AI systems trained on inaccurate, outdated, or otherwise not-fit-for-purpose data.[18]
  • Spread of mis/disinformation or harmful content through AI-generated content.
  • Failure to evaluate risks of third-party AI. Research suggests that more than half of all AI failures come from third-party tools, which most companies rely on.[19]
  • Intellectual property (IP) infringement.[20]
  • Data security breaches, including hacking or privacy violations.
  • Technical malfunctioning, causing autonomously operated machines to endanger human life, for instance.

Harvard Law School Forum on Corporate Governance | Artificial Intelligence: An engagement guide