Agyepong, V; Cattoen, C; Adusei-Fosu, K; Buelow, FA; Wilson, TK; Qasim, M; Grimshaw, GM; Godsoe, W

Title: AI feedback loops in Aotearoa New Zealand: A human-centered creative problem-solving approach
Type: Preprint Server Paper
Date issued: 2024-05-10
Date available: 2024-05-22
URI: https://hdl.handle.net/10182/17229
DOI: 10.31235/osf.io/m5ehj
Extent: 84 pages
Rights: © The Authors
Licence: Attribution-NonCommercial-NoDerivatives 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Keyword: Artificial Intelligence
Subjects: ANZSRC::460299 Artificial intelligence not elsewhere classified; ANZSRC::460910 Information systems user experience design and development; ANZSRC::460806 Human-computer interaction

Abstract: Artificial Intelligence (AI) can have unintended consequences in the systems where it is deployed. Researchers have found that by improving contextual understanding of AI feedback loops, and of cause and effect in those systems, especially in high-risk applications such as health, biosecurity, conservation, justice, and transport, AI tools can learn to improve over time and leverage wider neural networks. This paper fills a knowledge gap on how to account for the varying competencies of human-AI teams when identifying feedback in AI systems, drawing on eight disciplines outside commerce and computer science. The study found that academic actors from more than one discipline tend to identify more relevant sources of feedback in AI systems, especially in high-risk applications. The paper recommends integrating human lived experience, knowledge generated from partial exposure of academic ideas to non-academic actors, and knowledge of decision making in the natural environment to reduce the incidence of misinformed human decision-making, especially in high-risk applications.