ISSA Chapter Meeting 3 October 2023:

The President is present, attending online.
Charles H is leading the meeting.

WELCOME/AGENDA: Hybrid meeting, held at ECPI and on Zoom. If you want to join our meetings but cannot physically be here, you can choose the Zoom option through Eventbrite. You can raise a virtual hand to ask questions. After the meeting, give us feedback!

ISSA-HR PROFESSIONAL ASSOCIATION MEMBERSHIP BENEFITS: Meet people and create business relationships. Folks around here, including the older crowd, have years of knowledge of cybersecurity, business, and government. Join us! Ask us questions; we like giving answers. Stay current: DoD has moved to an automated renewal system, and a member of the club had inside knowledge of that before it became reality, which goes to show the reach of the club’s contacts!
Professional development: CEUs; our meetings count for credits. Learn practical, best-practice solutions. Career information and employment opportunities. The club encourages such strides.

GROW PROFESSIONALLY: Membership benefits security professionals in all aspects and at all levels of the field with strategic resources and guidance. Annual membership cost includes ISSA organizational and chapter dues: professional $125, student $60.

NEW MEMBERS: Welcome Ryan M, Thank you for joining! Steady growth through the months.

EDUCATION: Goals: Educational Resources/Mentorship Program/ Team Building and Collaboration/ Hands-on Industry Tool familiarization/Certification Tracking Pipeline.

Chapter Links: Practice Labs / Content Creators / Security Tools and Resources (https://issa-hr.org/security-resources/)

ISSA Reading List: (https://issa-hr.org/reading-list)  Continually upgrading website.

NEW SOCIAL MEDIA RESOURCES: Discord: Can use the link or search ISSA-HR (https://discord.gg/76zTmJHx)
LinkedIn: Can use link or search Information Systems Security Association – Hampton Roads Chapter (https://www.linkedin.com/company/information-systems-security-association-issa-hampton-roads-chapter/)

MEETINGS/SOCIAL EVENTS:
October 3rd: Today’s meeting: Adam Shostack, Shostack & Associates: Threat Modeling in the Age of AI.
November 7th: John Bos, Cybrex LLC Founder/CEO: Discussions on being a business owner.
Holiday party coming up, awaiting updates: TBD if speaker needed/wanted
March 5th, Barbara Cosgriff: prodsecteam.com (topic TBD).
Potential Upcoming Speaker: James Lawrence at CyberIT “Cyber Range Live Fire Attack Simulation Workshop”

We have openings for speakers in the beginning months of 2024. Meeting Program Director: Evan L; please reach out if you are interested or have a prospective speaker.

CYBER SOCIAL: October 25th at 5:30 PM in the Casual Pint side room. RSVP @ EventBrite: https://ISSA-HR.eventbrite.com or Meetup: https://www.meetup.com/issa-hampton-roads/ It is interesting to see how others are coming up in the field of cybersecurity.

AFTER THIS MEETING: Networking happy hour at Plaza Degollado. ~7:45 PM.

JOBS (new format): ISSA has a job search page! (http://iz1.me/XJU31zUSeBV)
Government Jobs: USAJOBS.gov. Help building your government resume correctly the first time.
Government job resource: Federal Resume Guidebook, Author: Kathryn Troutman
Need a job/Have a job: If you are interested in a job, let us know who you are: an introduction, your background, a summary of what you do, your security clearance, and an explanation of your value or what problem you can solve, followed by a call to action (what happens next). Maybe someone will have something for you! Optional extra information to include: clearance status, remote/on-site preference, relocation preference, and any other short details. We can post your email in the chat if you want, and we will ensure it gets to anyone interested.
Have a job section (we have a channel on our Discord for this!): if you have a job, tell us about it and talk to folks who you feel may be qualified.

NEED A JOB: Lauren P: looking for any available positions in the tech industry; 5 years sysadmin experience, a little cybersecurity, a little bit of ML (Nvidia).
Taliyh R: looking for something in the field, risk management; not dead set on certs. Just looking for a job; interested in the healthcare or banking sector.

HAVE A JOB: Johnnie’s got jobs at SAIC. You don’t need certs or a clearance to apply; certs need to be obtained within 180 days, and SAIC will sponsor for a clearance. If you’re going to do cybersecurity, you need a clearance and a cert.
Twisted Pair is hiring, field support; points of contact: Bradly T / Lauren P.

Presentation:

Adam Shostack: Author of Threat Modeling: Designing for Security and Threats: What Every Engineer Should Learn from Star Wars. A leading expert on threat modeling, a consultant, expert witness, and game designer, with decades of experience delivering security. Loves helping the community. He graciously accepted Evan’s invitation to speak. Early in his career he helped create the concept of CVE, and he is an Emeritus member of the CVE Advisory Board.

The age of AI: once represented by things like HAL 9000/Terminator; today the age of AI is represented largely by LLMs: GPTs, Midjourney, etc.

Threat Modeling Overview: Adam is well known for threat modeling. After 25 years in the appsec space, while at Microsoft, he learned he couldn’t threat model a single product, let alone all products, which forced him to consider the scaling of threat modeling: how do we do it better? Not everyone is familiar with threat modeling:
What is threat modeling? Using models to help us think about threats and security. Threat modeling can be seen as the “measure twice, cut once” of security: concepts that help us do better at cybersecurity for systems before they’re developed and deployed. It applies to all software produced and deployed. How do we threat model? The four question framework: What are we working on? / What can go wrong? / What are we going to do about it? / Did we do a good job? (threatmodelingmanifesto.org)
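To make the framework concrete, here is a minimal sketch (not from the talk; the record shape and field names are illustrative) of capturing the four questions as a per-feature artifact in Python:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """One threat-modeling pass, organized by the Four Question Framework."""
    working_on: str                                         # 1. What are we working on?
    what_can_go_wrong: list = field(default_factory=list)   # 2. What can go wrong?
    mitigations: list = field(default_factory=list)         # 3. What are we going to do about it?
    retrospective: str = ""                                 # 4. Did we do a good job?

# Hypothetical feature, for illustration only.
tm = ThreatModel(working_on="Login form posting credentials to /api/auth")
tm.what_can_go_wrong.append("Credentials sent over plain HTTP")
tm.mitigations.append("Enforce TLS and HSTS on all auth endpoints")
tm.retrospective = "Pen test confirmed no cleartext credential paths"
```

The point is not the data structure but the discipline: every feature gets explicit answers to all four questions.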

Myths of threat modeling: STRIDE is a methodology (Spoofing/Tampering/Repudiation/Information Disclosure/Denial of Service/Elevation of Privilege)
DFDs are threat modeling diagrams
DREAD is a prioritization system
Each is a way to answer one of the Four Questions (see the STRIDE-per-element sketch below).
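For readers new to STRIDE, one common way to use it as an answer to “what can go wrong?” is STRIDE-per-element: walk each element of a data-flow diagram and consider the threat categories that typically apply to it. A minimal sketch in Python; the applicability table follows commonly published STRIDE-per-element guidance, and the example diagram is made up:

```python
# Which STRIDE categories typically apply to each kind of
# data-flow-diagram (DFD) element, per common STRIDE-per-element guidance.
STRIDE_PER_ELEMENT = {
    "external entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"],
    "data store": ["Tampering", "Repudiation",
                   "Information Disclosure", "Denial of Service"],
    "data flow": ["Tampering", "Information Disclosure",
                  "Denial of Service"],
}

def enumerate_threats(diagram):
    """Yield (element, threat category) pairs for a team to walk through."""
    for name, kind in diagram:
        for threat in STRIDE_PER_ELEMENT[kind]:
            yield name, threat

# Hypothetical four-element DFD for a small web app.
dfd = [("browser", "external entity"), ("web app", "process"),
       ("orders db", "data store"), ("app-to-db traffic", "data flow")]

for element, threat in enumerate_threats(dfd):
    print(f"{element}: consider {threat}")
```

The output is not a threat model; it is a checklist of prompts that answers question two systematically.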

Threat modeling AI: What are we working on? (AI for business / software development.) Four scenarios: AI for Offense (write me a phishing email/malware/etc.)
AI for Defense (anti-spam, Microsoft Defender copilot)
AI for business (main focus today)
AI for software development (2nd focus today)

“AI for business is the main focus today; AI for software development is the second focus.
There is overlap between business and devs: devs using AI, and the business of writing software and applying AI to it.”

What are we working on with AI? Adding an LLM to our business, we work on: model building and validation, model deployment, and the operational environment.

“Things that scare me as a security professional do not scare the business enough to overcome the business case of lower costs and efficiency. These things are coming. If we ask what can go wrong, there are a lot of ways to answer that.
We should be asking:
What can go wrong as we select and deploy models? Things can go wrong; models can be tampered with or otherwise leak. We should threat model each implementation.”

Importance of Training Data: What training data are we working on? Pre-selected and curated? Live internet data? Customer interaction? How frequently do we re-train/tune/adapt? Where are those adapted results visible? Different answers allow for very different threats.

“Differences between scenarios are important for threat modeling, important to call them out.”

Sets of different ways to answer what can go wrong: “There is an OWASP Top 10 for LLMs, the Berryville Institute of Machine Learning, Microsoft’s lists, and Emily Bender’s de-jargoning, which clarifies thinking about what can go wrong.”

ADAM’S top ten or so candidates: Prompt injection / Data leakage (OWASP’s sensitive info disclosure) / Training data poisoning / Over-reliance on LLM-generated content (these match OWASPtop10llm.com). The following do not: Hallucination, Inexplicability, Bias, Insecure development and deployment.

“Training data reflects biases.”

Berryville Institute of Machine Learning: a think tank of security and ML experts studying machine learning security (https://berryvilleiml.com) who come together to think about ML. Taxonomy of threats developed in 2019: manipulation of input, data, models / extraction of input, data, models. Architectural Risk Analysis of a generic ML system (2023). The bibliography is annotated.

Microsoft has published lists: Threat Modeling AI/ML Systems and Dependencies / Failure Modes in Machine Learning / Securing the Future of Artificial Intelligence and Machine Learning at Microsoft. Links and comments can be found and made at https://shostack.org/blog/tmt-machine-learning/

Emily Bender’s de-jargoning:
“Let’s replace the word ML with automation.” What’s being automated? Who’s automating, why, and who benefits from that automation? How well does the automation work in the use case we’re considering? Who is being harmed? Who has accountability for the functioning of the automated system? What existing regulations already apply to the activities where the automation is being used?

“What is most useful about this list: if you have less tech-centric execs who think that the security of the systems they’re building is not important, these questions may be good to ask in an executive review. They may help figure things out at a less technical level. It is a useful way to ask what can go wrong, and it spreads into what are we working on as well.”

LLMs for Software Development:
Using LLMs to develop software: Is awesome, and is happening today at your company. Is deeply scary: the AI takes our data (customer data / PII / intellectual property / trade secrets), and the AI gives us bad code (insecure / vulnerable / uncopyrightable as a product of AI / someone else’s copyright applies to the derivative work).

“Companies may have policies against it and may try to block it, but devs around the world are making use of these systems. Adam has been working in Jupyter notebooks with Python, in combination with GPT, for software, and notes the heightened productivity of development.

The AI giving us bad code does not prevent folks from using the tool. The AI taking data is a real concern: customer data, PFD, PII, protected classes of people. Feeding these to an AI not rated for that purpose possibly violates privacy policies and maybe the law, and the systems are so powerful it may well be happening. It may also expose IP; for most companies proprietary algorithms are not a big deal, but TRADE secrets can be a problem when exposed to an LLM. Asked to identify vulnerabilities in code, an LLM would say something generic, or more specific if the code is vulnerable to, say, SQL injection; it has no judgement, it answers the questions you give it. In the US, code produced by a machine is not copyrightable, and code may be a derivative work of a human creator. Despite the attractions of LLMs, it may be dangerous to use them to develop software.”
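One mitigation pattern for “the AI takes our data” is to scrub identifier-shaped content before anything leaves your boundary. A minimal sketch; the regexes below are illustrative and nowhere near a complete PII filter:

```python
import re

# Illustrative patterns only; a real redaction layer needs far more coverage.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),         # card-like digit runs
]

def redact(text: str) -> str:
    """Replace identifier-shaped substrings before text is sent to an LLM."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Customer jane.doe@example.com, card 4111 1111 1111 1111, reports a login bug."
print(redact(prompt))
# -> Customer [EMAIL], card [CARD], reports a login bug.
```

Redaction reduces, but does not eliminate, the exposure; policy and contractual controls still apply.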

Using LLMs to secure code: LLMs may help us write better code (or not).
SDL/SSDF (Security Development Lifecycle @ Microsoft ~= Secure Software Development Framework @ NIST)
Traditionally includes “design, code, test, deploy, respond”; every aspect of the SDL may be affected by LLMs. Sometimes the code “hallucinates” and says things that are plain wrong. It’s not clear if GPT is better or worse than Checkmarx or Veracode at finding vulnerabilities: “May or may not be better than traditional static analysis.”

LLMs to help Threat Modeling: Opportunities: Speed/Scale/Accuracy
Limits: Hallucinations/Habituation and other cognitive biases/Discovering threats to new technologies, patterns.

“We may have opportunities to get better at speed, scale, and accuracy: if we have an LLM that threat models as well as a new hire out of school, it will allow us to threat model things a lot faster, to scale, and to train.
Habituation: if something works 99/100 times, a human will assume full functionality and lose attention to the things that are in fact not perfect.

It’s not clear how LLMs will discover new technologies or new patterns; that may remain a human activity for the foreseeable future.”

LLMs are probably best at tasks smaller than threat modeling. What are we working on? (system models): here’s my intent / here’s what’s been done in the code / here’s a simplified system model. What can go wrong? (with every story?) What are we going to do? LLMs write mitigation code / LLMs test mitigations.

“Hey LLM, here’s code: can you derive a model? Here’s code that’s been deployed: can you help me simplify it? These are things LLMs might be able to get good at. Can LLMs help us replace security questionnaires? (Yes.) They do better than humans at filling out questionnaires.” A sketch of the derive-a-model idea follows.
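As a sketch of that idea: the prompt wording and model name below are hypothetical, and the client usage assumes the OpenAI Python SDK (v1 style); any chat-completion API would work the same way:

```python
# Sketch: ask an LLM to propose a simplified system model from source code.
# Assumes the OpenAI Python SDK (v1) and OPENAI_API_KEY in the environment;
# prompt wording and model choice are illustrative, not from the talk.
from openai import OpenAI

client = OpenAI()

def propose_system_model(source_code: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are a threat-modeling assistant. From the code, "
                        "list the processes, data stores, data flows, and "
                        "trust boundaries as a simplified system model."},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content
```

Per the habituation caveat above, treat the output as a draft for a human to review, not as the model of record.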

Where might LLMs help threat modeling?
Technical: Model: Propose models and designs/Extract model from code/ Simplify model from AST(etc)/Threats: Discover threats/Evaluate writeups/Write PoC/ Mitigations: Discover/Evaluate proposals/Improve and Explain goals/ Write unit tests.
Interpersonal: Model: Notify of model deviations/ Threats: Evaluate/Improve/ Mitigations: Help explore mitigations as chatbot/
Organizational: Discover features that need TM analysis (Databricks’ BlackHat talk “AI Assisted Decision Making of Security Review Needs for New Features”) Threats/Mitigations: Checks against policies/High Risk?

“LLMs help drive down costs, which is why they are so attractive to business. LLMs might help us propose models and designs and simplify code: the sorts of tasks we are seeing LLMs SUCCEED AT. AI can also help us discover threats, create writeups, maybe even proof-of-concept code to demonstrate a threat is real. Maybe LLMs can help us determine mitigations and evaluate the quality of a mitigation.
Trusting LLMs to do validation gives Shostack the most heartache. They may help us discover work that hasn’t been done (a missed part of a form, etc.); maybe an LLM can help us observe model deviations, or help us evaluate and improve our threat writeups. AI-assisted decision making of security review needs for new features (the BlackHat talk); slides available.”

Residual/ uncontrollable dangers:
Your threat modeling won’t save you from: The AI apocalypse/Externalities/Code and data merging.

The AI “apocalypse”: AI cannot destroy the world on its own; it needs people. “The AI told me to” is no good to a judge. The human factor is necessary for AI to be destructive.
 
Externalities of AI: Groups impacted by AI include: Job seekers and resume review / International travelers and screening / DMV facial recognition databases and arrest warrants / Artists whose work is replaced by bots / Deepfakes of a pornographic nature / Workers who perform ML labeling / Power and environmental impact.

“Your resume is being screened by an AI; it’s unfair to you.”

Code/data intermingling: Separating code and data is essential to defensive programming (i.e., SQL injection, XSS, and stack smashing are all code/data confusion issues). Large Language Models are statistical models of language; some separation is impossible by design.

“Separating code and data is bedrock defensive programming; SQL injection occurs when data is pushed into a SQL statement and interpreted as code.

You cannot tell an LLM a rule and expect it to be followed. It just answers.”
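To make the code/data point concrete, here is the classic SQL injection shape in Python’s sqlite3: string concatenation mixes attacker data into the code, while a parameterized query keeps the two separate. (A minimal sketch; the table and values are made up.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

name = "' OR '1'='1"  # attacker-controlled input

# BAD: data is concatenated into the code, so the input becomes SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
print(len(rows))  # 1: the injected OR clause matched every row

# GOOD: a parameterized query keeps code and data separate.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(len(rows))  # 0: the input is treated purely as data
```

The LLM analogy: a prompt has no equivalent of the “?” placeholder, which is why some separation is impossible by design.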

SUMMARY: Threat modeling overview / The Four Question Framework applies well to ML/LLM systems / ML will probably help with software security / There will be residual/uncontrollable dangers / Fascinating time to be in the field!

“We have covered the threat modeling overview and the four question framework, which helps us find problems; the specifics of the OWASP Top 10, Microsoft’s lists, Berryville, and Emily Bender’s work help us apply the four question framework to these ML systems. There are important residual dangers we have to pay attention to as security professionals.
I haven’t been this excited about new tech in more than a decade; there is an awful lot of opportunity to use threat modeling in the age of AI, and I encourage you to go explore. I want to say thank you; I appreciate the chance to share these ideas with you. Happy to take questions, now or at my email.”

Q&A:
Are ML/LLMs becoming more fragmented into private models, to give more granular control of the data and how the model is developed?: Literally the billion-dollar question. I think what will happen is that the number of new models created may drop, and we are going to see more of what are called “adapters” and fine-tuning of models. As we get to the point where creating a new model reportedly costs $20-50M a pop, anything people do to drive down the cost of new models will be successful: a proliferation of model tuning and smaller models, which can be seen today in the number of communities at home tuning LLMs for graphical styles. I think we will get more, smaller, fragmented models.

We know good guys are using LLMs, and we are starting to see spam coming in that uses LLMs to be more readable to the end user. Have you heard any stories of threat actors using LLMs to actually do any hacking?: There have been a number of claimed deepfake voices used in business email compromise scams, and some may be credible. I think we are seeing use of AI in phishing and disinformation campaigns; a series of deepfake videos of Ron DeSantis was released. That said, a lot of people are predicting something we haven’t seen evidence of: use of LLMs to create new malware. They are not capable of the large-scale reasoning and architecture that involves.

What are your perceptions on hallucinations and inexplicability in regard to AI?: I think the reason these matter is that as we increase our reliance on the tools, hallucinations are going to impact people in all sorts of weird ways. Kevin Beaumont has talked about how Microsoft’s Bing AI decided he is involved in a bunch of lawsuits he has never filed; that is what happens when you ask Bing about him even now, and it leads to concerns about hallucinations. Explainability is going to be increasingly important as these tools are used to make decisions about people: in the EU, under GDPR, you are entitled to an explanation of how data about you is processed. It is not legitimate to say we have a magic box, we put something in, answers come out about humans, and we make decisions about those human beings. I think that will result in a fundamental conflict where questions of explainability become important, which is why he included it in his list.

That was awesome! Setting back up again for the remainder of the meeting.

CONFERENCES:
ATT&CKcon 4.0 at MITRE ATT&CK’s HQ in McLean, VA, and online: October 24-25, cost: free-$495
https://na.eventscloud.com/website/58627/attackcon4/
Adam Shostack worked at MITRE, helped develop CVE, MITRE 295 FOR BUSINESS.


CyberOps: ODU in Norfolk, Oct 28th, Free.
https://sites.google.com/view/oducyberops2023
Local event; will have a CTF; Mike Maury will speak there.

DSI 11th Big Data for Intelligence Symposium: National Harbor, MD, Nov 15-16, Free-$1290:
https://bigdatasymposium.dsigroup.org/
Focus on the intelligence community, DoD, federal, academic, and industry. This year they’re focusing on leveraging AI for strategies and initiatives to extract meaningful information from big data. Free for active duty; $1090 before October 13th, $1290 after; nonprofit/academic $690, or $790 after October 13th.

Cyberforge: Date TBD, looking for sponsors; we will be keeping an eye on it.

BUSINESS MEETING:
Old: new Discord, new LinkedIn. There is an OLD LinkedIn page we recently obtained ownership of; we will also be posting there. If you don’t know about it, that is fine; details coming.
New: Special Election, Secretary Position/Voting, Christmas Party
Secretary: Meeting Minutes
Membership Updates, Treasury Report, Social media Updates.

Old: All Board positions filled.
Social events: September 20th cyber social at Casual Pint
Christmas Party funded.

New business: Thank you to Bruce Richard for his 2½ years of service as Secretary of the chapter.
New email addresses in the works.
New webmaster, Cal A
Volunteer events: what would members like to participate in? Talk to us! We will support you in doing it the best we can, NEED LEADERSHIP available to lead these things.

Special Election: Results are in! The nomination period was open until September 20th, 2023 at 11:59 EST. Special Election Committee: Michael B, Mike D. Results: 9 votes, all 9 voted yes; Faith W elected as Secretary. Meeting minutes are almost like being there: if you missed something, it may be contained in the Minutes!

MEMBERSHIP UPDATE: Up to 47, soon to be 48? +1 from last month. Some folks are due for renewal; let them know they are up! Charles is no longer up for renewal.
Membership Discount code: 2023ISSA50L
-The discount works on the 1-year plan WITH renewal: you put in your information, then you put in the code. Questions about this can go to Charles.

Meeting Minutes slide: September 12th meeting recap available at https://issa-hr.org/issa-chapter-meeting-12-september-2023/ : Meeting called to order by chapter President/ Welcome Opening remarks/ ISSA overview/ Guest Speaker Introduction/ Guest Speaker:  Johnnie Shubert/ Topic: Digital Deception: Exposing the dark side of Artificial Intelligence/ Business Meeting: Chapter updates from the board and committees: Growth on New Discord/LinkedIn, Special Election for Secretary Position/  Treasurer Report Balance $5,845.27 recorded / Voted on Christmas party budget/  Meeting adjourned

Christmas Party Planning: Michael B in charge. Suggested location: Three Notch’d Brewing, occupying the town center space, with a lot of good food. Proposal from them to rent the space: $250 for the space, $1000 for food; they need a deposit of $125 to secure the room (December 5th from 6 to 9 PM). We would have liked to have had the deposit by now. Who will take care of the deposit? Peter likes to reimburse; Jon B? Are we waiting on Roop?
Is it an approved event? (Yes.) Do we want to have it at the location? (Yes; voted for the location, aye.) Mike B will pay the deposit, and a bank card will be used on the night of the event. Mike B’s wife will take care of catering concerns. This is a members-only event; you are allowed to bring a plus-one. It will be set up sometime on Eventbrite.

ISSA NEW EMAIL ADDRESSES:

All positions have @ISSA-HR.org addresses; the appropriate person can be reached at their position’s email.


Johnnie has inkjet cards and is working on a template for any board members interested in printing one.

Doug is here! Just in time for happy hour.

Treasurer report, October 2023: last balance $5,845.27; spent $54.13 on pizza; now at $5,791.14, recorded.

Networking happy hour!

If you have any questions, let Charles know.

Please give us feedback!: What did you like? Recommendations for Future Meetings? What could make your experience better? Send your feedback to: President@ISSA-HR.org

Thanks to everyone online! Hope to see you in person next month!