The Experts (and ChatGPT) on Technology Trends, Algorithms, and AI: Part 3

By Sarah Birnie, Vice President of Analytics at Avalon Consulting

As we wrapped up our AI interview, I asked the experts to share any “AI fails” they’ve experienced. (Before you read on, get caught up with Part 1 in the DMFA Newsletter and Part 2 here!)

The experts:

Roger Hiyama | Executive Vice President, Solutions & Innovation | Wiland

Susan Paine | Vice President, Data Analytics & Strategy | Human Rights Campaign

Derek Drockelman | Vice President, Sales & Marketing | ROI Solutions

Emily Courville | Senior Director, Analytics | The Humane Society of the United States 

ChatGPT

What sort of pitfalls or “AI fails” have you experienced?

Roger: In the hands of an inexperienced data person, if the design of your modeling project or your model data set is flawed, AI will just have you “running to the wrong place faster.”

Susan: No “fails” to report so far, but organizationally we are developing an AI policy, one that will focus as much on the risks of AI as on its benefits. Overall, our team takes a fairly cautious approach when considering new projects and approaches, while embracing best practices and innovative (but proven) technology.

Derek: We’re pushing the envelope in many ways with AI; not every test succeeds the first time, so by keeping the upfront cost and effort for clients to a minimum, we can ensure that we’re delivering good value.

It’s critical that all data going into a model is accurate and that the organization is committed to collecting data across a significant portion of its database. In the early days, we had to reject some highly predictive data that was extremely limited in scope or would not be collected for new constituents.
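Derek’s point lends itself to a concrete check. Below is a minimal sketch, assuming a pandas DataFrame of constituent records, of how a team might screen candidate model features for coverage before letting them into a model. The column names and the 60% threshold are illustrative assumptions, not ROI Solutions’ actual rules.

```python
# A minimal sketch (illustrative, not ROI Solutions' actual pipeline) of the
# coverage screen Derek describes: before a field goes into a model, verify
# it is populated for a large enough share of the database. The column names
# and the 60% threshold below are assumptions for demonstration.
import pandas as pd

MIN_COVERAGE = 0.60  # assumed rule: keep a feature only if 60%+ of records have it

def screen_features(df: pd.DataFrame, candidates: list[str]) -> dict[str, bool]:
    """Return True/False for each candidate feature based on non-null coverage."""
    return {col: df[col].notna().mean() >= MIN_COVERAGE for col in candidates}

# Toy data: "wealth_score" is highly predictive but sparsely collected and
# would not be captured for new constituents, so the rule rejects it.
df = pd.DataFrame({
    "gift_count":   [3, 1, 7, 2, 5, 4],
    "wealth_score": [880, None, None, None, 910, None],
})
print(screen_features(df, ["gift_count", "wealth_score"]))
# -> {'gift_count': True, 'wealth_score': False}
```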

ChatGPT: As an AI, I don’t have personal experiences, but I can share common pitfalls or “AI fails” that have occurred in various applications:

  • Bias and Discrimination: Biased datasets can result in AI algorithms making unfair decisions in areas like hiring, lending, and criminal justice.
  • Misinterpretation of Input: AI models may misinterpret input data or context, leading to inaccurate or nonsensical outputs.
  • Adversarial Attacks: Adversarial attacks involve deliberately manipulating input data to deceive AI systems, causing them to make incorrect predictions or classifications.
  • Overfitting: Overfitting occurs when an AI model learns to perform well on training data but fails to generalize to new, unseen data (see the sketch after this list).
  • Data Privacy Concerns: AI systems may inadvertently expose sensitive or personally identifiable information present in training data.
  • Unintended Consequences: AI systems can behave in ways their designers never intended; for example, automated trading algorithms may contribute to market instability or flash crashes.
  • Lack of Transparency: Some AI models, particularly deep learning models, can be difficult to interpret or explain due to their complex, opaque nature.
  • Ethical Dilemmas: AI applications may raise ethical dilemmas regarding issues such as autonomy, accountability, and the impact on society.
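To make the overfitting pitfall concrete, here is a minimal sketch of the general phenomenon (an illustration, not anything from the interview): a high-degree polynomial memorizes a dozen noisy training points, while a simple line generalizes better. The data and model degrees are assumptions chosen for demonstration.

```python
# A minimal sketch of the "Overfitting" pitfall above: a degree-9 polynomial
# memorizes 12 noisy training points, while a simple line generalizes better.
# The data and model degrees are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(scale=0.1, size=12)  # truly linear + noise
x_test = np.linspace(0.02, 0.98, 50)                    # fresh, unseen inputs
y_test = 2 * x_test                                     # noise-free ground truth

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)       # fit polynomial of given degree
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f} | test MSE {test_mse:.4f}")

# Typical outcome: the degree-9 fit drives training error toward zero but
# chases the noise, so its error on unseen data tends to be worse than the
# simple line's: good performance on seen data, poor generalization.
```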

Thanks for reading! If you’d like to keep the conversation going, reach out to us at avalonconsulting.net.