Procurement and Evaluation

Tools and materials to support procurement, selection, and evaluation of automation technologies, including due diligence questions, demonstration requirements, and strategies to ensure transparency and traceability.

Anyone who has attended a software conference in any field in the last five years, or who regularly receives promotional e-mails from software providers, has undoubtedly noticed the increased focus on algorithmic decision making. Whether it’s “backed by AI”, “powered by machine learning”, or at “the bleeding edge of computer vision”, there is no shortage of innovation finding its way into sales pitches. As the technology changes, so should the practices surrounding its procurement and evaluation. Below are some questions and considerations for buyers, regulators, and privacy and risk practitioners. Vendors providing automation technologies can likewise use these questions to better prepare their business cases and sales collateral.

For buyers:

Questions to ask:

  • Are the decision algorithms employed in your solution proprietary (“black box”) or transparent and traceable (“white box”)?
  • How do you account for implicit biases in the training sets used in your algorithms?
  • How do you account for race, ethnicity, and gender in your models?
  • How do your models account for geographic, regional, and cultural differences?
  • Are algorithmic decision making models re-trained or updated?
  • How frequently are the models re-trained or updated?
  • How do you validate re-trained models before they are deployed?
  • Have the underlying methods in suggestion models been validated independently or through research?
  • How do you ensure that the model continues to be valid as it continues to evolve?
  • Can you share the results from validation and reliability testing?
  • Will our data be used as input into the training algorithms?
  • How is our data or other clients’ data being used in the decision model?
  • What do you do for anonymization and de-identification of data in your models?
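One of the questions above concerns anonymization and de-identification. As a minimal sketch of what de-identification can involve (the function and field names here are illustrative assumptions, not any vendor’s actual method), direct identifiers can be replaced with salted one-way hashes before records leave the organization:

```python
import hashlib

def deidentify(record, salt, identifier_fields=("name", "email")):
    """Return a copy of the record with direct identifiers replaced by salted hashes."""
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            # Salted SHA-256 digest, truncated to a short pseudonym
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "decision": "approved"}
print(deidentify(record, salt="org-secret"))
```

Note that hashing direct identifiers alone does not guarantee anonymity: combinations of quasi-identifiers (e.g. postal code plus birth date) can still re-identify individuals, which is why it is worth asking vendors how they guard against de-anonymization.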

Why you should care:

  • Decisions are often subject to challenge, audit, and scrutiny; bad decisions even more so. Asking probing questions, conducting appropriate due diligence, and understanding the mechanisms contributing to those decisions establishes credibility, builds accountability, and ensures traceability.
  • As a buyer, you should know “how the sausage gets made” and be comfortable with the ingredients.
  • Procurement efforts can lock your organization into multi-year agreements, and with “learning” models, what you purchase in the first term may not be what you end up with in the second or third. While the questions above are not a crystal ball, you’ll be glad you asked them if things go sideways.

For regulators:

Questions to ask:

  • What kind of decision making does the automation technology replace? (e.g. managing employee hours, setting performance targets, providing discipline or feedback)
  • What is the relationship between staff and management and the algorithm?
  • How does the automation technology ensure compliance with local labour and employment laws where it replaces management?
  • How does your technology ensure equity between employees working alongside automation technology conducting similar work?
  • How does the cost of acquiring, implementing, running, and maintaining automation technologies compare to the same work being performed by humans?
  • What kind of roles will be required to support automation technologies?
  • What would be the nature of those roles (e.g. full-time/part-time, regular/temporary/casual, on-premise/remote)?
  • In organizations where similar technology has been deployed, what has been the impact on the existing workforce?

Why you should care:

  • Depending on the type of work the algorithm replaces, or the way it impacts the employer-employee relationship, there may be implications for the applicable legislation.
  • As automation technology begins to perform work similar (or identical) to that performed by human employees, understanding equity considerations (e.g. hours of work, breaks, compensation) will enable proactive contemplation of meaningful offsets or levelling.
  • Support, maintenance, and operations for software-based automation technologies can be conducted remotely. Consider how this may impact local communities, taxation, and governance.

For risk and privacy practitioners:

Questions to ask:

  • What decision data from our activities is being collected and how is it used by your organization?
  • If our decision-outcome data is used in training sets, where does that data reside?
  • How do you safeguard our data from being de-anonymized?
  • What user disclosures about collection, use, storage, retrieval, and archival does your technology offer?
  • If your decision algorithms are based on global user data, how do you ensure compliance with privacy and data legislation (e.g. General Data Protection Regulation (GDPR), Personal Information Protection and Electronic Documents Act (PIPEDA), California Consumer Privacy Act (CCPA), etc.)?
  • What traceability and audit tools are available for customers looking to review decisions made with your automation technology?
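The traceability question above can be made concrete. Here is a minimal sketch (the field names are illustrative assumptions, not a standard schema) of an audit record capturing the inputs, model version, and outcome of each automated decision:

```python
import json
import datetime

def audit_record(inputs, model_version, outcome):
    """Build one append-only audit entry for an automated decision."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the decision
        "inputs": inputs,                # the data the decision was based on
        "outcome": outcome,              # the decision that was made
    }

entry = audit_record({"hours_worked": 42}, "v2.3.1", "flag_for_review")
print(json.dumps(entry))
```

Append-only records like this make it possible to reconstruct why a given decision was made, even after the underlying model has been re-trained or replaced.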

Why you should care:

  • Decision-making models are relatively new and many operate in a “black box”, where traceability, auditability, and review are complicated for the vendors providing them and more so for the clients using them. From the perspective of audit and risk, opaque methods and a lack of transparency should be factored into your organization’s risk appetite.
  • More and more privacy and data protection legislation is being introduced across the globe, with ever-increasing requirements on organizations to disclose not only how data is used, but also what safeguards protect it from malicious actors, how it is stored and transferred, and how it is retained and archived.
  • Since algorithms and machine-learning models are heavily reliant on their training sets, often boasting the advantage of getting better with time, it is useful to understand how client data is being incorporated into the models, and to ensure that this use not only complies with legislation but also protects company advantages and trade secrets.

There are many questions and considerations when acquiring and evaluating automation technologies. Be prepared to ask these questions, and think about your organization’s posture toward the potential responses. Are you willing to absorb the risk? Do the benefits outweigh the drawbacks relative to existing tools and processes?

If you need a vendor-agnostic, independent third party to be a voice on your evaluation panel, please reach out.