The state of AI legislation in different US states as of August 2023
The US law system is different from some countries. As explained by bing chat (sources linked):
“In the United States, the 50 individual states and the United States as a whole are each sovereign jurisdictions1. Under constitutional laws, states are permitted to create, implement, and enforce their own laws in addition to federal laws2. This is because every state in the United States is a sovereign entity in its own right and is granted the power to create laws and regulate those laws according to their needs”
This means that each state can pass laws that are different to others. And this is applied to the field of AI legislation too.
The Electronic Privacy Information Center (EPIC) organization has documented the current state of AI legislation in different states across the US here.
It’s a great snapshot of different bills and AI regulation in the US in the following themes:
- Laws going into effect in 2023
- Laws passed this legislative session
- Laws proposed this legislative session:
AI Regulation as part of Comprehensive Consumer Privacy Bills
AI Regulation to Prevent General Harms
Regulating AI in Employment Settings
Regulating AI in Healthcare
Regulating AI in Insurance
Regulating AI Used by the Government
Regulating Generative AI
Bills to Increase Transparency and Understanding Around AI
Other AI-related Bills
Read all the details and the full article here.
Google’s AI Red team tips
This is a great report by Google’s Red team on AI security. You can find it here.
It reinforces the value of Google’s Secure AI Framework (SAIF) for practices and principles underlying AI Security.
The paper above talks about the approach of Red teaming with AI.
In it, they summarize common Attacker TTPs against AI in these 6 forms:
Read the report for details on each of them.
Lastly, they provide a summary of their lessons learned, key for every large enterprise working with AI these days, which I will further summarize into these bullet points:
- Red teams will benefit from having an AI subject matter expert.
- Some attacks may not have simple fixes.
- Many attacks against models and systems can be effectively mitigated by well implemented traditional lock down controls.
- Many AI threats can be mitigated the same way traditional attacks are. Other techniques (Prompt attacks, Content Issues) will require layering multiple security models.
Read the article here.
Google Cybersecurity Action Team Threat Horizons
Google release a new threat horizons report with sightings from its action team.
According to observations from their Incidents response teams, the main factor leading to cloud compromises is still credential issues alloting more than 60% of compromise factors. Misconfiguration is next in line with 19% compromise factor incidence.
From the report:
Read Anton Chuvakin’s summary in this blog post.
The report also includes information on the top risky actions that lead to compromises which is by far: “Cross-Project abuse og GCP token access generation permission” associated with MITRE ATT&CK® tactic of Privilege Escalation (TA0004) and technique of Valid Accounts: Cloud Accounts (T1078.004).
Next I want to touch on Supply Chain section they reported. According to the report, the below image:
“highlights eight different ways the supply chain can be compromised between the developer producing software and the end user consuming it. Though a developer may be creating software with good intent, this doesn’t stop malicious actors from compromising the supply chain before it reaches customers.”
Supply chain threats can stem from direct or indirect package dependencies, as their report:
The report goes on to detail these threats in a cloud environment and better yet, how to mitigate them.
There are heaps more interesting bits in the report, I highly recommend you check it out, the full report can be found here.
Microsoft Cyber Signals 5
Microsoft released a new Cyber Signals report, issue 5.
In this report, the security team reports on attacks against Sports events — which the team supported including the FIFA world cup 2022. ⚽
Microsoft provided a number of recommendations including:
Augment the SOC team: Have an additional set of eyes monitoring the event around the clock to proactively detect threats and send notifications. This helps correlate more hunting data and discover early signs of intrusion. It should include threats beyond endpoint, like identity compromise or device to cloud pivot.
Conduct a focused cyber risk assessment: Identify potential threats specific to the event, venue, or nation where the event occurs. This assessment should include vendors, team and venue IT professionals, sponsors, and key event stakeholders.
Consider least privileged access a best practice: Grant access to systems and services only to those who need it, and train staff to understand access layers.
Read the report here.