IA Summit 2018 Highlights

Carol Smith
May 17, 2018

The Information Architecture Summit 2018 this past March in Chicago was wonderful. This week I had the opportunity to share my experience with my colleagues at Uber, and so I’m sharing it here as well. Search with the hashtag #ias18 on Twitter to find even more great content and discussion.

Stuart Maxwell (Twitter: @stumax) welcomed us to the conference with a challenging talk about the problems we face as information architects.

Anne Petersen (Twitter: @petersen) quoted him on Twitter:

“The field is deep and wide, and the problems are fascinating and wicked…” Stuart Maxwell #ias18

Digital + Physical: Designing Integrated Product Experiences

Bill Horan (Twitter: @billhoran) presented his thoughts on making integrated digital and physical experiences with a series of principles. My favorite was Don’t complicate simple. Bill talked about how light switches and similar items we use daily are often made much more complex by organizations “reinventing” the product. In many cases, they should have focused on the user’s mental model and used that to inform how the device works. Bill used the example of a broken escalator — it is still stairs — and can still be used to move between floors even when it is no longer moving.

Alternatively, by designing a device that works the same way in any situation, you can help the user build a single mental model for how it works. For example, Bill showed ideas for a hearing device controlled via a phone application. Whether the user was listening to one person or to an experience that surrounded them, the mental model was the same: raise or lower the volume, rather than a different type of control for each situation.

Controls for sound, showing that the same model is used regardless of the area of hearing focus.

Information Arrangement: It’s the Metadata

“Information Arrangement: It’s the Metadata” was presented by Dalia Levine (Twitter: @daliawithnoh).

Duane Degler (Twitter: @ddegler) tweeted his takeaway from Dalia’s talk:

Supposedly simple decisions like language, country and region are in fact important political and social decisions in your metadata. Be thoughtful about these decisions. @daliawithnoh #ias18
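To make that concrete, here is a minimal sketch of how those “simple” choices show up in a metadata record. This is my own illustration, not from Dalia’s talk; the field names, values, and vocabulary are hypothetical.

```python
# A hypothetical metadata record -- field names and values are my own
# illustration. Even "simple" choices carry weight: which standard you
# adopt, and which values you allow, decide who is represented and how.

document_metadata = {
    "title": "Annual report",
    # ISO 639-1 alone ("zh") flattens Mandarin, Cantonese, and others into
    # one value; a BCP 47 tag ("zh-Hant-HK") preserves script and region.
    "language": "zh-Hant-HK",
    # A country list is a political statement: does it include Taiwan,
    # Kosovo, or Palestine? Whose name is used for a disputed territory?
    "country": "TW",
    # "Region" groupings (e.g., "Middle East" vs. "Western Asia") reflect
    # a particular point of view about where the center of the world sits.
    "region": "East Asia",
}

# A controlled vocabulary makes these decisions explicit and reviewable,
# rather than leaving them buried in free-text fields.
ALLOWED_REGIONS = {
    "East Asia", "South Asia", "Western Asia", "Europe", "Africa",
    "North America", "South America", "Oceania",
}

assert document_metadata["region"] in ALLOWED_REGIONS
```

Writing the vocabulary down turns these choices into something the team can review and debate, which (as I understood it) was the larger point.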

Personal Ontology Maps: A Way to Get to Good

Kat King’s (Twitter: @KatalogofChaos) powerful talk on personal ontology mapping was extremely insightful.

Andrew Hinton (Twitter: @inkblurt) tweeted:

The talk right now from @Katalogofchaos is one of the most important talks I’ve ever heard at this conference. A gently convicting charge to make ontological clarity for our own values — because otherwise we don’t understand the lenses we are using for everything else. #ias18

Kat challenged us by asking “What is Good?” and encouraged us to select our battles. We should measure for ourselves whether we are doing good work. Are we doing our best work? Jeff Eaton (Twitter: @Eaton) also attended the talk and captured this quote on Twitter.

“In order to understand what another person is saying, you must assume that it is true, and try to imagine what it could be true of.” — Miller’s Law #IAS18 https://en.wikipedia.org/wiki/Miller%27s_law

Kat is a strong proponent of diverse, inclusive teams (as am I), and reminded us that research has shown that cognitive, non-routine problems are best solved by a team with diverse heuristics.

We need to embrace the discomfort that diverse teams bring. We must strive to encourage new ideas, and different points of view. Rather than coalescing around agreement, we should come together with our differences.

Kat asked us to “consider that someone else has access to experiences and understandings that I do not.” That diversity of experiences and understandings will help the team to develop great solutions.

Finally, I loved this quote from Kat:

“IA is the way that you sort, and the people you support.”

Is a Hot Dog a Sandwich? and Other Taxonomy Questions

This was another wonderfully nerdy talk at the Summit. Bob Kasenchak (Twitter: @taxobob) spoke about categorizing information and of course referenced The Sandwich Alignment Chart shared by @MattoMic on Twitter last year (below).

The Sandwich Alignment Chart by @MattoMic

Bob emphasized that we are encoding points of view in the decisions we make with regard to what is, and is not, included in categories (lists on forms, etc.). This may sound familiar, as Dalia Levine spoke on a similar topic, but they covered it in different ways.

Bob talked about the fluidity of naming categories and that they change over time. He used the example from an ancient Chinese Encyclopedia (via Foucault via Borges) where the categories of animals included:

  • Tame
  • Fabulous
  • Innumerable
  • Et cetera

Clearly, the way humans categorize animals has changed and continues to change as we learn more about the animal kingdom and evolution. Categories of things, such as the sandwich chart, are constantly changing as we change our understanding of the world, as new things are created, and as we become aware of new ideas and information.

Fit & Finish: The Importance of Presentation Value to UX Deliverables

Adam Polansky (Twitter: @AdamtheIA) shared his lessons on #FitandFinish to ensure we are as effective as possible in sharing our work with our stakeholders.

He encouraged us to:

  • work in public and share what we are doing.
  • make room for other perspectives to avoid cognitive bias.
  • consider: what’s the least we can do to get our message across?
  • communicate understanding with artifacts.

No Static: IA for Dynamic Information Environments

I really enjoyed this talk by Duane Degler (Twitter: @ddegler) in which he brought conversations about security and privacy together with the creation of dynamic environments. He compared our search history to photographs of the past following us around — all the places we’d been.

Duane reminded us that it is “not a question of if sites get hacked, but when” and that taking precautions to protect individuals’ data is paramount.

He suggests a solution that enables people to own their personal, portable digital profile. The profile would be shared as much or as little as they prefer, and when shared, the web sites would provide relevant pieces of data to them.
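As a thought experiment, here is a minimal sketch of that idea, assuming a simple per-site disclosure model. The class, field names, and example site are mine, not from Duane’s talk.

```python
# A rough sketch of a person-owned, portable profile: the owner decides
# which fields each site may see, and a site only ever receives that subset.
# All names here are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class PortableProfile:
    """A personal profile whose owner controls disclosure per site."""
    data: dict
    grants: dict = field(default_factory=dict)  # site -> set of allowed fields

    def grant(self, site: str, fields: set) -> None:
        """Allow a site to see only the named fields."""
        self.grants[site] = fields

    def view_for(self, site: str) -> dict:
        """Return just the subset this site has been granted."""
        allowed = self.grants.get(site, set())
        return {k: v for k, v in self.data.items() if k in allowed}


profile = PortableProfile(data={
    "name": "Ada",
    "city": "Chicago",
    "interests": ["information architecture", "cycling"],
    "search_history": ["..."],  # stays private unless explicitly granted
})
profile.grant("events.example.com", {"city", "interests"})

# The site only receives what the owner chose to share.
print(profile.view_for("events.example.com"))
# -> {'city': 'Chicago', 'interests': ['information architecture', 'cycling']}
```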

My favorite quote from Duane (context was lost in my notes):

“Translate intent into expression, and expression is more than language.”

On Designing a Safe Environment

Ramya Mahalingam (Twitter: @rams_mahalingam) gave an engaging talk about safety. She presented a continuum of safety (see below) and talked about how safety is both psychological and contextual. She covered the concepts of accountability (“I see you” — even more confidence in well-lit situations for sighted people) and vulnerability (“I’m alone” — less confidence depending on context and individual).

Tweet by Carol Smith with an image of Ramya’s continuum of safety.

Why do we all suck at collaboration?

Karen VanHouten (Twitter: @designinginward) brought all of her enthusiasm, anger and a nice bottle of Scotch (?) to the stage for her talk about collaboration.

My favorite quotes from her:

“Bias is equally distributed, power and privilege are not.”

“Give up the pursuit of perfection, enjoy the pursuit of progress!!”

“It’s courage that we need to build, not confidence. Try things, make mistakes. Redefine success.”

“Don’t be a wallflower, don’t be an asshole. Be a badass, and together we can change the world.”

Collaboration Code of Conduct

I hope many organizations adopt the Collaboration Code of Conduct that Karen has developed. It provides a framework for dealing with difficult individuals and treating everyone respectfully by giving them guidance that everyone sometimes needs. The Collaboration Code of Conduct requires asking difficult questions of the team and then building a code around the team’s responses.

How will we…

  1. Treat each other?
  2. Approach work?
  3. Communicate?
  4. Make decisions?
  5. Define success of working relationships?
  6. Enforce this code?

Prototyping Information Architecture

I missed this talk by Andy Fitzgerald (Twitter: @AndyByWire), but I enjoyed the Twitter feed about it and wanted to share this quote tweeted by IA Summit (Twitter: @IAsummit).

People are heuristic, associative, approximate. Computers are exhaustive, enumerative, exact. IA is the connection between, matching one system to another. -@andybywire #IAS18

There was so much more!

I did not transcribe the entire conference, but there are many other people who have posted notes. Here are two more great tweets:

“But what is a screen but a promise of a space you cannot enter?” — Marius Watz via @joasqueeniebee

“Interrogating is a strong word, but I believe it’s what we need to do with our designs.” @brownorama

Ethics Roundtable (Pre-Conference)

Roundtable participants discussing and sorting post-its about ethical issues in IA.

Before the conference, I attended half of the IA and Ethics: Academics’ and Practitioners’ Roundtable, which was organized by Andrea Resmini, Stacy Surla, Ren Pope, Sarah Rice, Bern Irizarry, and Keith Instone and attended by ~30 folks over 2 days.

Ethics is not mentioned in many IA/UX books, and the roundtable attendees were all passionate about raising awareness of our responsibilities to do better with regard to ethics. We identified ethics as one of the biggest problems we face in IA/UX, and yet I was still surprised when it became clear how little awareness of its importance we have as a community.

Inclusive Digital Spaces

Andrea Resmini’s (Twitter: @resmini) presentation got to the heart of the discussion with regard to ensuring awareness and consent for our users. This is core to an ethical experience. He led us to consider the need for open public digital spaces for conversations — spaces that are made to feel as wide as streets — so that they are inclusive and comfortable for all members of a community to take part in the conversation.

Accessibility — Who Uses Our Tools?

During the roundtable, anne gibson (Twitter: @Kirabug) presented a short talk on accessibility and how the choices we make with regard to accessibility determine who will be able to use our tools.

“You all have the potential to push the boundaries of what is accepted or expected, and to think big.” — Stephen Hawking, at Web Summit 2017

With Stephen Hawking’s passing still fresh at the conference, it was fitting to invoke him in a talk about accessibility.

Tweet by Carol Smith showing Anne’s quote by Stephen Hawking and his photo from Wikimedia Commons.

Anne called us out, saying that when we do not design for people with disabilities we are ableists. When we design for accessibility we are doing our job. Anne stated that we should:

“Decide to give a damn” about people with disabilities of all types.

Anne also presented this topic later in the conference in her “What letter are you? An Alphabet of Accessibility Issues” session. I did not get to attend her talk, but the tweets about it were very complimentary, and I found her slides and her 2014 blog post on the topic very informative. She has modeled an exemplary way to integrate people with disabilities into our everyday work.

An additional resource on this topic is the W3C Web Accessibility Initiative (WAI) “How People with Disabilities Use the Web.” When looking for a link I noticed that the WAI recently updated their web site — even easier to navigate, attractive and accessible!

Moral Maps and Models and VR

Dan Klyn (Twitter: @DanKlyn) focused on virtual reality (VR) and ethics in this space. I have minimal experience with VR myself, and Dan’s talk was very thoughtful. My takeaways:

  • The focal point inevitably occludes other focal points (what are we occluding?).
  • We should protect difference — too often we take out all that is special and unique to make it fit.
  • Consider what models the VR decomposes to.
  • Always consider consent and control — enable someone with a “get me out of here!” feeling to leave the experience easily.

IA in the Age of AI: Embracing Abstraction and Change

Finally, I had the honor of presenting a follow-up to my 2017 talk on AI, with more specific guidance with regard to designing for these systems. The slides are on SlideShare/Carologic, and what follows are some highlights of this talk.

Information Architects must push to…

  • Keep people at the center of our work.
  • Lead with our users’ goals.
  • Ease of use, usability, findability, effectiveness, efficiency…
  • Work to mature organizations’ approach.
  • Push back on “technology first” ideas.
  • Lead on ethics — for our users, humanity.

Creating Ethical AI

  • Less-biased content.
  • Transparency of data sources and training.
  • Intentional design: Build in safety.
  • Build practices around PAPA (Privacy, Accuracy, Property, Accessibility).

Create a code of conduct/ethics

  • What do you value?
  • What lines won’t your AI cross?
  • What is too far?
  • What are you including?
  • How will you track your progress?

Take Responsibility

  • Keep humans in control.
  • Hire people affected by bias (non-WEIRD, women, POC, LGBTQ, etc.).
  • Conduct auditing (algorithmic, data, UI, etc.).

Reference: How to Keep Your AI from Turning into a Racist Monster by Megan Garcia

Learn about making ethical, transparent and fair AI

“Toward ethical, transparent and fair AI/ML: a critical reading list” by Eirini Malliaraki, Feb 19, via a tweet from @robmccargow: https://medium.com/@eirinimalliaraki/toward-ethical-transparent-and-fair-ai-ml-a-critical-reading-list-d950e70a70ea

Teach others about AI

  • Demystify AI by using plain language. Always.
  • Teach people how to utilize and benefit from the system.
  • Provide an easy way to raise concerns (anonymously if appropriate).
