Internal Security Agency (Agencja Bezpieczeństwa Wewnętrznego, ABW), Poland

Large Language Models in jihadist terrorism and crimes

Publication date: 11.03.2024

Terroryzm – studia, analizy, prewencja, 2024, Issue 5 (5), pp. 351–379

https://doi.org/10.4467/27204383TER.24.012.19400

Authors

Julia Puczyńska
Szkoła Doktorska Technologii Informacyjnych i Biomedycznych Instytutów PAN
https://orcid.org/0009-0009-5304-7092

Marcin Podhajski
Szkoła Doktorska Technologii Informacyjnych i Biomedycznych Instytutów PAN
https://orcid.org/0009-0001-1350-879X

Karolina Wojtasik
Polskie Towarzystwo Bezpieczeństwa Narodowego, Poland
https://orcid.org/0000-0002-1215-5005

Tomasz P. Michalak
Uniwersytet Warszawski, ul. Krakowskie Przedmieście 30, 00-927 Warszawa, Poland
https://orcid.org/0000-0002-5288-0324

Titles

Large Language Models in jihadist terrorism and crimes

Abstract

The authors discuss Large Language Models in the context of the security risks arising from their capabilities and availability. Although their applications may seem similar to those of search engines and ordinary internet access, the real danger posed by Large Language Models lies in the basic analytical and programming skills they place in the hands of any criminal or terrorist. The authors assert that accessible Large Language Models not only lower the financial barriers to various criminal activities but also reduce the expertise and commitment required of individuals or small groups to commit crimes, and acts of terror in particular. At the same time, law enforcement agencies can harness the capabilities of these models to stay ahead of emerging threats.

Bibliography

Breakstone J. et al., Students’ Civic Online Reasoning: A National Portrait, “Educational Researcher” 2021, no. 50, pp. 505–515. https://doi.org/10.3102/0013189X211017495.

Europol, ChatGPT. The impact of Large Language Models on Law Enforcement, Luxembourg 2023.

Faesen L. et al., Red Lines & Baselines Towards a European Multistakeholder Approach to Counter Disinformation, The Hague Centre for Strategic Studies 2021.

Felson M., Cohen L., Human ecology and crime: A routine activity approach, “Human Ecology” 1980, no. 8, pp. 389–406. https://doi.org/10.1007/BF01561001.

Fuocco M.A., Trial and error: They had larceny in their hearts but little in their heads, “Pittsburgh Post-Gazette” 1996.

GIFCT Red Team Working Group, Considerations of the Impacts of Generative AI on Online Terrorism and Extremism, [n.p.] 2023.

Ji Z. et al., Survey of hallucination in natural language generation, “ACM Computing Surveys” 2023, no. 12, pp. 1–38. https://doi.org/10.1145/3571730.

McGrew S. et al., Can Students Evaluate Online Sources? Learning From Assessments of Civic Online Reasoning, “Theory & Research in Social Education” 2018, no. 46, pp. 165–193. https://doi.org/10.1080/00933104.2017.1416320.

Raman G. et al., How weaponizing disinformation can bring down a city’s power grid, “PloS One” 2020, no. 15. https://doi.org/10.1371/journal.pone.0236517.

Vaidhyanathan S., Antisocial media: How Facebook disconnects us and undermines democracy, New York 2018.

Vaswani A. et al., Attention is All you Need, in: Advances in Neural Information Processing Systems 30 (NIPS 2017), I. Guyon et al. (eds.), Long Beach 2017, pp. 5998–6008.

Waniek M. et al., Traffic networks are vulnerable to disinformation attacks, “Scientific Reports” 2021, no. 11. https://doi.org/10.1038/s41598-021-84291-w.

 

Internet sources

[heythereitsbeth], Just came across this sub and thought I’d share mine from the start of the year, Reddit, https://www.reddit.com/r/scambait/comments/17w6vx4/just_came_across_this_sub_and_thought_id_share/?rdt=40738 [accessed: 8 XI 2023].

AFP Kenya, Fake subtitles added to old clip of Putin talking about Ukraine war, not Israel-Gaza conflict, AFP Fact Check, 17 X 2023, https://factcheck.afp.com/doc.afp.com.33YG8TE [accessed: 8 XI 2023].

Bochyńska N., #CyberMagazyn: Politycy narzędziem w rękach Kremla? „Świadomość jest bardzo niska” (Eng. Are politicians a tool in the hands of the Kremlin? “Awareness is very low”), CyberDefence24, 21 X 2023, https://cyberdefence24.pl/cyberbezpieczenstwo/cybermagazyn-politycy-narzedziem-w-rekach-kremla-swiadomosc-jest-bardzo-niska [accessed: 8 XI 2023].

Borji A., Stochastic Parrots or Intelligent Systems? A Perspective on True Depth of Understanding in LLMs, preprint, SSRN, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4507038 [accessed: 8 IX 2023]. http://dx.doi.org/10.2139/ssrn.4507038.

Bowman S.R., Eight Things to Know about Large Language Models, preprint, arXiv, 2 IV 2023, https://arxiv.org/abs/2304.00612 [accessed: 8 IX 2023]. https://doi.org/10.48550/arXiv.2304.00612.

Brewster T., Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots, Forbes, 6 VI 2023, https://www.forbes.com/sites/thomasbrewster/2023/01/06/chatgpt-cybercriminal-malware-female-chatbots/ [accessed: 9 XI 2023].

Currie R., California man’s business is frustrating telemarketing scammers with chatbots, The Register, 3 VII 2023, https://www.theregister.com/2023/07/03/jolly_roger_telephone_company/ [accessed: 8 XI 2023].

Derner E., Batistič K., Beyond the Safeguards: Exploring the Security Risks of ChatGPT, arXiv, preprint, 13 V 2023, https://arxiv.org/abs/2305.08005 [accessed: 8 IX 2023]. https://doi.org/10.48550/arXiv.2305.08005.

Goldfarb J., Applying AI to API Security, SecurityWeek, 11 X 2023, https://www.securityweek.com/applying-ai-to-api-security/ [accessed: 8 XI 2023].

Gwozdowska A. et al., Wojna informacyjna 2022–2023. Przebieg i wnioski (Eng. Information warfare 2022-2023. Course and conclusions), NASK, 25 V 2023, https://www.nask.pl/pl/raporty/raporty/5204,Raport-quotWojna-informacyjna-20222023-Przebieg-i-wnioskiquot.html [accessed: 8 XI 2023].

Heiding F. et al., Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models, preprint, arXiv, 23 VII 2023, https://arxiv.org/abs/2308.12287 [accessed: 8 XI 2023]. https://doi.org/10.48550/arXiv.2308.12287.

INFO OPS Poland Foundation, Model dystrybucji informacji w wirtualnym środowisku informacyjnym na bazie rozpoznanego rosyjskiego podstawowego modelu dystrybucji wiadomości manipulacyjnych (Eng. A model of information distribution in a virtual information environment based on a recognised Russian basic manipulative news distribution model), Disinfo Digest, 9 VI 2023, https://disinfodigest.pl/model-dystrybucji-informacji-w-wirtualnym-srodowisku-informacyjnym-na-bazie-rozpoznanego-rosyjskiego-podstawowego-modelu-dystrybucji-wiadomosci-manipulacyjnych/ [accessed: 8 XI 2023].

Kelley D., WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks, SlashNext, 13 VII 2023, https://slashnext.com/blog/wormgpt-the-generative-ai-tool-cybercriminals-are-using-to-launch-business-email-compromise-attacks/ [accessed: 22 XI 2023].

Lai T. et al., Psy-LLM: Scaling up Global Mental Health Psychological Services with AI-based Large Language Models, preprint, arXiv, 22 VII 2023, https://arxiv.org/abs/2307.11991 [accessed: 8 XI 2023]. https://doi.org/10.48550/arXiv.2307.11991.

McGuffie K., Newhouse A., The radicalization risks of GPT-3 and advanced neural language models, preprint, arXiv, 15 IX 2020, https://arxiv.org/abs/2009.06807 [accessed: 8 XI 2023]. https://doi.org/10.48550/arXiv.2009.06807.

NASK (@WeryfikacjaNASK), Wraz z postępem technologicznym, rozwój AI staje się coraz bardziej widoczny w rożnych dziedzinach naszego życia (Eng. As technology advances, the development of AI is becoming more and more visible in various areas of our lives), X, 26 X 2023, https://twitter.com/WeryfikacjaNASK/status/1717487918556594437 [accessed: 8 XI 2023].

Nowe oszustwo na WhatsAppie (Eng. New WhatsApp scam), „Kurier Szczeciński”, 5 VIII 2023, https://24kurier.pl/aktualnosci/wiadomosci/nowe-oszustwo-na-whatsappie/ [accessed: 8 XI 2023].

Podolak J. et al., Analyzing the Influence of Language Model-Generated Responses in Mitigating Hate Speech on Social Media Directed at Ukrainian Refugees in Poland, preprint, arXiv, 28 XI 2023, https://arxiv.org/abs/2311.16905 [accessed: 30 XI 2023]. https://doi.org/10.48550/arXiv.2311.16905.

Saunders W. et al., Self-critiquing models for assisting human evaluators, preprint, arXiv, 12 VI 2022, https://arxiv.org/abs/2206.05802 [accessed: 9 XI 2023]. https://doi.org/10.48550/arXiv.2206.05802.

Simmons A., Vasa R., Garbage in, garbage out: Zero-shot detection of crime using Large Language Models, preprint, arXiv, 4 VII 2023, https://arxiv.org/abs/2307.06844 [accessed: 9 XI 2023]. https://doi.org/10.48550/arXiv.2307.06844.

Skąd Polacy czerpią informacje? Badanie IBRIS i IBIMS kwiecień 2021 (Eng. Where do Poles get their information from? IBRIS and IBIMS survey April 2021), IBiMS, http://www.ibims.pl/wp-content/uploads/2021/04/IBIMS_media_2021.pdf [accessed: 8 XI 2023].

Toulas B., Cybercriminals train AI chatbots for phishing, malware attacks, Bleeping Computer, 1 VIII 2023, https://www.bleepingcomputer.com/news/security/cybercriminals-train-ai-chatbots-for-phishing-malware-attacks/ [accessed: 9 XI 2023].

Vallance C., Rahman-Jones I., Urgent need for terrorism AI laws, warns think tank, BBC News, 4 I 2024, https://www.bbc.com/news/technology-67872767 [accessed: 10 I 2024].

Yin S. et al., A Survey on Multimodal Large Language Models, preprint, arXiv, 23 VI 2023, https://arxiv.org/abs/2306.13549 [accessed: 8 IX 2023]. https://doi.org/10.48550/arXiv.2306.13549.

Zou A. et al., Universal and Transferable Adversarial Attacks on Aligned Language Models, preprint, arXiv, 27 VII 2023, https://arxiv.org/abs/2307.15043 [accessed: 8 IX 2023]. https://doi.org/10.48550/arXiv.2307.15043.

Information

Information: Terroryzm – studia, analizy, prewencja, 2024, Issue 5 (5), pp. 351–379

Article type: Original research article

Titles:

Polish:

Large Language Models in jihadist terrorism and crimes

English:

Large Language Models in jihadist terrorism and crimes


Published: 11.03.2024

Article status: Open access

License: CC-BY-NC-SA

Authors' percentage contribution:

Julia Puczyńska (Author) - 25%
Marcin Podhajski (Author) - 25%
Karolina Wojtasik (Author) - 25%
Tomasz P. Michalak (Author) - 25%

Article corrections:

-

Publication languages:

English

Number of views: 240

Number of downloads: 243
