UMSN AI in Health Initiative: FAQs

Below are some frequently asked questions about the proper use (and potential misuse) of AI systems, and the management of AI-generated results.


  • What AI tools do UMSN faculty have access to? 


  • What AI utility should I consider using for scenarios X, Y, Z, and when is the
    extra monthly fee worth the price?

    • UMSN faculty are generally discouraged from paying out-of-pocket for external
      AI services. If there are gaps in functionality or specific needs, please
      identify these details and bring them to an AI in Health forum meeting for
      advice and recommendations. Also, please explore the resources in the NAIT
      AI Safety & Risk Mitigation in Healthcare training module.


  • Are there tools out there that can help me determine if a student has

    copied/pasted/used AI? 

    • The easiest approach is to use Canvas assignment submission with Turnitin
      prescreening (free). Alternative (paid) services include Grammarly,
      PaperRater, and Copyleaks.


  • What training is available to support learning the various AI tools? 

    • Level 1: Self-learning using NAIT is the quickest and most effective first-level

      approach. 




  • Is there a 'workshop' for those interested in trying different AI tools?

    • This summer's AI in Health training sessions have served as informal workshops
      for AI training (the last one is on 8/12/25). The series may be extended into
      the fall. Specific UMSN faculty groups needing to coordinate a special training
      session should email ai-in-health@umich.edu.

  • Are there recommendations for determining whether a student generated
    an assignment using AI (e.g., cut and pasted an AI response into the assignment)?

    • In principle, copy-pasted AI-generated content has specific patterns that can
      be detected either by eyeballing the text for telltale AI formatting and style
      traits or, alternatively, by using another (free) AI service to check whether a
      paragraph, image, graph, table, reference, or other content is likely to be
      copy-pasted material that was generated by AI.
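As a toy illustration only (not a reliable detector, and not any specific tool's method), a crude first-pass screen might flag well-known AI boilerplate phrases before any manual review; the phrase list below is a made-up example:

```python
# Toy illustration only: a crude first-pass screen for well-known AI
# boilerplate phrases. This is NOT a reliable detector -- a real
# determination requires human judgment and/or a dedicated service.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot browse the internet",
    "i hope this helps!",
    "certainly! here is",
]

def flag_ai_boilerplate(text: str) -> list[str]:
    """Return any telltale phrases found in the submission text."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample = "Certainly! Here is a summary of the assigned reading."
hits = flag_ai_boilerplate(sample)
```

An empty result proves nothing; a non-empty result simply marks a submission for closer human review.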

  • What are some ways in which Generative AI has been used to support scholarly

    endeavors? 

    • With all AI-services, careful human proofing is essential prior to sharing, 

      disseminating, communicating, or using any AI-generated content. 

    • You can use AI to help you draft a visual abstract (graphics) of a paper, 

      course syllabus, workshop, event, report or other activity. 

    • Be careful! AI-generated reference citations are highly unreliable! Use
      Google Scholar and/or PubMed for finding references, reviewing articles,
      and exporting citations.

    • You can use multiple AI systems to proof, improve, and refactor your initial
      paper abstract draft (e.g., to clarify the text or reduce its length).

    • You can also use AI services to discover information
      (similar to Google searching, but much more elaborate).

    • You can use AI services (e.g., Google Gemini) to explore a new topic you are 

      unfamiliar with. The validity of any AI-generated content always needs to be

      carefully cross-checked, whether against primary sources, expert knowledge, or 

      encyclopedic references.

    • Also, review (and complete) the NAIT training module on

      Writing with AI: originality, composition & plagiarism.


  • When I use AI, what happens to the data/information that I upload? Is there a
    difference in data security between UM-GPT and ChatGPT?

    • In general, using external AI services implies that the user is providing explicit 

      consent for their information to be ingested by (or shared with) the external 

      service provider.

    • When using domain-specific AI services, such as UMSN CLNQ, UM Maizey, and
      UM GPT, your content is not expected to be shared outside the local (e.g., UM)
      domain. For instance, in CLNQ up to generation 2, all data is local and handled
      within the browser-tab instance; however, more recent CLNQ generations rely on
      external LLMs, which use application programming interfaces (APIs) that lie
      outside the UM domain.

    • PHI, EHR data, or any personal information should never be shared with an
      external AI service provider. You can use LM Studio to run powerful AI models
      locally on your computer without ever sharing any of your data outside the
      local application. However, this requires significant local computer resources
      and preloading of the LLMs that actually drive the AI responses.
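As a hedged sketch (the port, endpoint path, and model name below are assumptions that may differ by LM Studio version and configuration), a locally hosted model is typically queried through an OpenAI-compatible HTTP API on your own machine, so the prompt never leaves your computer:

```python
import json

# Assumed default: LM Studio's local server exposes an OpenAI-compatible
# API on localhost. We only build the request payload here; the actual
# POST (commented out) requires a running local server.
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="local-model", temperature=0.2):
    """Assemble an OpenAI-style chat-completion payload for a local LLM."""
    return {
        "model": model,  # placeholder name; use whatever model you loaded
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Summarize key risks of sharing PHI with AI tools.")
body = json.dumps(payload)

# To send (only works with LM Studio's local server running):
# import urllib.request
# req = urllib.request.Request(LOCAL_ENDPOINT, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

Because both the prompt and the response stay on localhost, no data is transmitted to an external provider.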

  • How do I correctly cite/indicate that content is AI generated?

    • This is highly specific to the situation. In principle, it’s best to succinctly
      disclose the use of AI rather than be perceived as trying to obscure that fact.
      Appropriate disclosure depends on the context of the material, the level of AI
      use, and the specific situation/activity.

  • How can I best write prompts and refine them to obtain the desired information?

