{"id":217,"date":"2023-03-03T15:03:57","date_gmt":"2023-03-03T14:03:57","guid":{"rendered":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/?p=217"},"modified":"2025-03-12T10:03:36","modified_gmt":"2025-03-12T09:03:36","slug":"chatgpt-in-disaster-research","status":"publish","type":"post","link":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/archive\/217","title":{"rendered":"ChatGPT (in disaster research)"},"content":{"rendered":"\n<p><em>By Vicente Sandoval<\/em><\/p>\n\n\n\n<p>March 3, 2023<\/p>\n\n\n\n<p><a href=\"http:\/\/dx.doi.org\/10.17169\/refubium-45258\">http:\/\/dx.doi.org\/10.17169\/refubium-45258<\/a><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"435\" src=\"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/files\/2023\/03\/2023-03-03-Sandoval_ChatGPT_header-1024x435.jpg\" alt=\"\" class=\"wp-image-218\" srcset=\"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/files\/2023\/03\/2023-03-03-Sandoval_ChatGPT_header-1024x435.jpg 1024w, https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/files\/2023\/03\/2023-03-03-Sandoval_ChatGPT_header-300x128.jpg 300w, https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/files\/2023\/03\/2023-03-03-Sandoval_ChatGPT_header-768x326.jpg 768w, https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/files\/2023\/03\/2023-03-03-Sandoval_ChatGPT_header-1536x653.jpg 1536w, https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/files\/2023\/03\/2023-03-03-Sandoval_ChatGPT_header-1200x510.jpg 1200w, https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/files\/2023\/03\/2023-03-03-Sandoval_ChatGPT_header.jpg 1600w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><figcaption>Screenshot from ChatGPT Start page. 
3rd March 2023.<\/figcaption><\/figure>\n\n\n\n<p><em>Many researchers are concerned these days about the potentials and risks of using Artificial Intelligence (AI), especially language models, in research. So I asked ChatGPT: &#8220;Create an opinion column about the potentials and risks of using ChatGPT in disaster research as if you were Vicente Sandoval&#8221;. Here is its opinion (my own reflections are at the end):<\/em><\/p>\n\n\n\n<p>As a language model, I believe that the use of ChatGPT in disaster research could have both significant potentials and risks. On the one hand, ChatGPT&#8217;s ability to understand and process language could help researchers to quickly analyze and make sense of large amounts of data related to disaster events. This could be especially useful in situations where time is of the essence, such as in the aftermath of a <em>disaster*<\/em>, when decision-makers need to act quickly to save lives and minimize damage.<\/p>\n\n\n\n<!--more \u21d2 Continue Reading-->\n<!--noteaser-->\n\n\n\n<p>Furthermore, ChatGPT&#8217;s ability to generate language could also be helpful in disaster research. For example, it could be used to generate simulations of disaster events, which could help researchers to understand how different factors (such as building design, evacuation plans, and emergency response protocols) might affect the outcome of a disaster.<\/p>\n\n\n\n<p>However, there are also several potential risks associated with the use of ChatGPT in disaster research. One concern is that ChatGPT could introduce bias into the analysis of disaster data. 
As with any machine learning model, ChatGPT&#8217;s output is only as good as the data it is trained on. If the data used to train ChatGPT is biased (for example, if it contains stereotypes or discriminatory language), then ChatGPT&#8217;s output may also be biased.<\/p>\n\n\n\n<p>Another risk is that the use of ChatGPT in disaster research could contribute to the spread of misinformation. ChatGPT is capable of generating convincing text, which could be used to spread false information about disaster events. This could have serious consequences, as it could lead people to take actions that put themselves or others at risk.<\/p>\n\n\n\n<p>In conclusion, while the use of ChatGPT in disaster research has the potential to be incredibly valuable, it is important to proceed with caution. Researchers should be aware of the potential risks associated with using ChatGPT, and take steps to mitigate those risks (such as ensuring that the data used to train ChatGPT is diverse and unbiased, and carefully reviewing all output generated by ChatGPT). By doing so, researchers can harness the power of ChatGPT to advance our understanding of disasters and improve our ability to respond to them.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">My reflections<\/h2>\n\n\n\n<p>Impressed? Depressed? What do you think? ChatGPT represents a milestone in the advancement of artificial intelligence (AI) in terms of natural language generation and processing. It can develop natural language responses and adapt to the tone and style of the queries it receives, creating convincing human-like answers. Also, ChatGPT has an impressive ability to process large amounts of data, which makes it an invaluable tool for those looking to extract insights from large data sets.<\/p>\n\n\n\n<p>Nonetheless, this also presents some challenges and risks. First, convincing natural language makes AI a potentially dangerous tool for generating fake news and spreading misleading information. 
Secondly, by automating tasks, it could displace large numbers of human workers in many industries and sectors.<\/p>\n\n\n\n<p>I recently watched \u201cThe Matrix Resurrections\u201d (Wachowski, 2021), in which the use of AI is also discussed. In the script, Niobe (Jada Pinkett Smith) introduces Quillion to Neo (Keanu Reeves): a lead digitologist who is also a machine, an AI, capable of transforming digital code from the Matrix into DNA sequences to grow (real) strawberries. <strong>A machine helping human rebels in the dystopian Matrix world? Yes<\/strong>.<\/p>\n\n\n\n<p>In the film, Niobe shows Neo the new city, Io, built after the former city of Zion was destroyed by the machines. She remarks, \u201cZion was stuck in the past. Stuck in war. Stuck in a Matrix of its own. They believed that it had to be&nbsp;<em>us<\/em>&nbsp;or&nbsp;<em>them<\/em> [the machines]. This city was built by us&nbsp;<em>and<\/em>&nbsp;them.\u201d (My own italics.)<\/p>\n\n\n\n<p>Like Niobe, scholars such as Manuel Castells (2023) propose that the debate should not be about \u201cwith or without AI\u201d but about \u201cwith and without AI\u201d. It is true that, as a tool, AI can automate many tasks, carrying the risk of destroying jobs in many areas (management, law, art, science, etc.), which could have negative social and economic consequences. But it is also true that AI is here to stay.<\/p>\n\n\n\n<p>Following this line of thinking, I have only one final reflection: artificial intelligence will certainly disrupt science and research in many ways, some positive and some negative. But researchers and scientists will not disappear; the difference will be between researchers who work with AI and those who work without it.<\/p>\n\n\n\n<p><em>*ChatGPT used the concept of \u201cNatural disasters\u201d in this example, which is clearly a bias, in my opinion. But this is probably because the data used to train ChatGPT have these flaws. 
As ChatGPT itself put it: \u201cFor example, if it contains stereotypes or discriminatory language, then ChatGPT&#8217;s output may also be biased.\u201d<\/em><\/p>\n\n\n\n<p><strong>Vicente Sandoval<\/strong>&nbsp;is a Research Associate at the Katastrophenforschungsstelle (Disaster Research Unit), Freie Universit\u00e4t Berlin. He is a consultant and researcher on urban disaster risk governance with interests in evidence-based research, radical interpretations of disasters, disaster vulnerability, and climate-resilient urban development. More: <a href=\"https:\/\/fu-berlin.de\/x6m8f13\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/fu-berlin.de\/x6m8f13<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">References<\/h2>\n\n\n\n<p>Castells, Manuel. (2023): ChatGPT. In:&nbsp;<em>La Vanguardia<\/em>, 25 Feb. 2023. Available online at&nbsp;<a href=\"https:\/\/www.lavanguardia.com\/opinion\/20230225\/8782438\/chatgpt.html\">https:\/\/www.lavanguardia.com\/opinion\/20230225\/8782438\/chatgpt.html<\/a>, checked on 3 March 2023.<\/p>\n\n\n\n<p>Wachowski, Lana. (Director). (2021):&nbsp;<em>The Matrix Resurrections<\/em>. Warner Bros. Pictures.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Vicente Sandoval March 3, 2023 http:\/\/dx.doi.org\/10.17169\/refubium-45258 Many researchers are concerned these days regarding the potentials and risks of using Artificial Intelligence (AI), especially language models, in research. Then, I asked ChatGPT &#8220;Create an opinion column about the potentials and risks of using ChatGPT in disaster research as if you were Vicente Sandoval&#8221;. 
Here is &hellip; <a href=\"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/archive\/217\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;ChatGPT (in disaster research)&#8221;<\/span><\/a><\/p>\n","protected":false},"author":6237,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[454266,454265,454267,16993,454268],"class_list":["post-217","post","type-post","status-publish","format-standard","hentry","category-allgemein","tag-artificial-intelligence","tag-chatgpt","tag-disaster-research","tag-english","tag-language-models"],"_links":{"self":[{"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/posts\/217","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/users\/6237"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/comments?post=217"}],"version-history":[{"count":9,"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/posts\/217\/revisions"}],"predecessor-version":[{"id":575,"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/posts\/217\/revisions\/575"}],"wp:attachment":[{"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/media?parent=217"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/categories?post=217"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.fu-berlin.de\/disasterresearchblog\/wp-json\/wp\/v2\/tags?post=217"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{re
l}","templated":true}]}}