Reference by Danielle

1. Inside “The Laughing Room”

“The Laughing Room,” an interactive art installation by author, illustrator, and MIT graduate student Jonathan “Jonny” Sun, looks like a typical living room: couches, armchairs, coffee table, soft lighting. This cozy scene, however, sits in a glass-enclosed space, flanked by bright lights and a microphone, with a bank of laptops and a video camera positioned across the room. People wander in, take a seat, begin chatting. After a pause in the conversation, a riot of canned laughter rings out, prompting genuine giggles from the group.

Presented at the Cambridge Public Library in Cambridge, Massachusetts, Nov. 16-18, “The Laughing Room” was an artificially intelligent room programmed to play an audio laugh track whenever participants said something that its algorithm deemed funny. Sun, who is currently on leave from his PhD program in the MIT Department of Urban Studies and Planning, is also an affiliate at the Berkman Klein Center for Internet and Society at Harvard University and a creative researcher at the metaLAB at Harvard. He created the project to explore the increasingly social and cultural roles of technology in public and private spaces, users’ agency within and dependence on such technology, and the issues of privacy raised by these systems. The installations were presented as part of ARTificial Intelligence, an ongoing program led by MIT associate professor of literature Stephanie Frampton that fosters public dialogue about the emerging ethical and social implications of artificial intelligence (AI) through art and design.

Setting the scene

“Cambridge is the birthplace of artificial intelligence, and this installation gives us an opportunity to think about the new roles that AI is playing in our lives every day,” said Frampton. “It was important to us to set the installations in the Cambridge Public Library and MIT Libraries, where they could spark an open conversation at the intersections of art and science.”

“I wanted the installation to resemble a sitcom set from the 1980s: a private, familial space,” said Sun. “I wanted to explore how AI is changing our conception of private space, with things like the Amazon Echo or Google Home, where you’re aware of this third party listening.”

“The Control Room,” a companion installation located in Hayden Library at MIT, displayed a live stream of the action in “The Laughing Room,” while another monitor showed the algorithm evaluating people’s speech in real time. Live streams were also shared online via YouTube and Periscope. “It’s an extension of the sitcom metaphor, the idea that people are watching,” said Sun. The artist was interested to see how people would act, knowing they had an audience. Would they perform for the algorithm? Sun likened it to Twitter users trying to craft the perfect tweet so it will go viral.

Programming funny

“Almost all machine learning starts from a dataset,” said Hannah Davis, an artist, musician, and programmer who collaborated with Sun to create the installation’s algorithm. She described the process at an “Artists Talk Back” event held Saturday, Nov. 17, at Hayden Library. The panel discussion included Davis; Sun; Frampton; collaborator Christopher Sun; research assistant Nikhil Dharmaraj; Reinhard Engels, manager of technology and innovation at Cambridge Public Library; Mark Szarko, librarian at MIT Libraries; and Sarah Newman, creative researcher at the metaLAB. The panel was moderated by metaLAB founder and director Jeffrey Schnapp.

Davis explained how, to train the algorithm, she scraped stand-up comedy routines from YouTube, selecting performances by women and people of color to avoid programming misogyny and racism into how the AI identified humor. “It determines what is the setup to the joke and what shouldn’t be laughed at, and what is the punchline and what should be laughed at,” said Davis. Depending on how likely something is to be a punchline, the laugh track plays at different intensities.
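The setup/punchline logic Davis describes can be sketched in a few lines. This is a hypothetical illustration, not the installation’s actual code: `punchline_probability` stands in for the trained classifier (here a crude stub), and the laugh-track volume scales with how confident the model is past a threshold.

```python
def punchline_probability(utterance: str) -> float:
    """Stub for the trained classifier: how punchline-like the text is (0-1)."""
    # A real system would score the text with a model trained on
    # setup/punchline pairs; exclamation marks stand in for that signal here.
    return min(1.0, 0.25 * utterance.count("!"))

def laugh_intensity(utterance: str, threshold: float = 0.5) -> float:
    """Map punchline probability to a laugh-track volume in [0, 1]."""
    p = punchline_probability(utterance)
    if p < threshold:
        return 0.0  # treated as setup: stay silent
    # Louder laughter the more confident the classifier is past the threshold.
    return (p - threshold) / (1.0 - threshold)
```

The design choice mirrors the installation: nothing below the threshold is “laughed at,” and everything above it triggers the track at a graded intensity rather than a single canned volume.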

Fake laughs, real connections

Sun acknowledged that the reactions from “The Laughing Room” participants have been mixed: “Half of the people came out saying ‘that was really fun,’” he said. “The other half said ‘that was really creepy.’”

That was the impression shared by Colin Murphy, a student at Tufts University who heard about the project from following Sun on Twitter: “This idea that you are the spectacle of an art piece, that was really weird.”

“It didn’t seem like it was following any kind of structure,” added Henry Scott, who was visiting from Georgia. “I felt like it wasn’t laughing at jokes, but that it was laughing at us. The AI seems mean.”

While many found the experience of “The Laughing Room” uncanny, for others it was intimate, joyous, even magical.

“There’s a laughter that comes naturally after the laugh track that was interesting to me, how it can bring out the humanness,” said Newman at the panel discussion. “The work does that more than I expected it to.”

Frampton noted how the installation’s setup also prompted unexpected connections: “It enabled strangers to have conversations with each other that wouldn’t have happened without someone listening.”

Continuing his sitcom metaphor, Sun described these first installations as a “pilot,” and is looking forward to presenting future versions of “The Laughing Room.” He and his collaborators will keep tweaking the algorithm, using different data sources, and building on what they’ve learned through these installations. “The Laughing Room” will be on display in the MIT Wiesner Student Art Gallery in May 2019, and the team is planning further events at MIT, Harvard, and Cambridge Public Library throughout the coming year.

“This has been an extraordinary collaboration and shown us how much interest there is in this kind of programming and how much energy can come from using the libraries in new ways,” said Frampton.

“The Laughing Room” and “The Control Room” were funded by the metaLAB (at) Harvard, the MIT De Florez Fund for Humor, the Council of the Arts at MIT, and the MIT Center For Art, Science and Technology and presented in partnership with the Cambridge Public Library and the MIT Libraries.

2. Resistentialism: where objects conspire against humans

Resistentialism is a jocular theory describing “seemingly spiteful behavior manifested by inanimate objects”,[1] in which objects that cause problems (like lost keys or a runaway bouncy ball) are said to exhibit a high degree of malice toward humans. The theory posits a war being fought between humans and inanimate objects, with all the little annoyances that objects cause throughout the day being battles between the two. The concept was not new in 1948 when humorist Paul Jennings coined this name for it in a piece titled “Report on Resistentialism”, published in The Spectator that year[2] and reprinted in The New York Times;[3] the word is a blend of the Latin res (“thing”), the French résister (“to resist”), and the existentialism school of philosophy.[4] The movement is a spoof of existentialism in general, and of Jean-Paul Sartre in particular, with Jennings naming Pierre-Marie Ventre as the fictional inventor of Resistentialism. The slogan of Resistentialism is “Les choses sont contre nous” (“Things are against us”).

Post Final Reflection

– What are two comments you received that resonate with you most? This could be a suggestion, a critique, or an observation.

  1. Hard time becoming immersed in your story – need one more thing in order to fully participate.
  2. The problematic parts could be pushed further.

– How will you consider these in moving your work forward?

The reason I want to include interaction is that I want to increase participation, and also because of the suggestions I got from the user test in class. I like the idea of making it look like a surveillance camera or monitor. However, there is a debate about this part. So, I’ll postpone it until I finish all the videos, and maybe make one even after the thesis show. For now, I am focusing on making content.

To make the video more engaging, I’ll probably add more sound cues and make the plot more straightforward and exaggerated.

– Did any feedback not make sense, or was confusing/unhelpful?

I think it’s okay to talk about weight loss. Maybe I need to twist the conversation a little to make it more understandable that the focus is on meeting Sandy’s own expectations and health, not on body-size criticism. Weight gain is just one aspect of a health problem.

– Identify 3-5 MUST-DOs that need to happen in the next week and the week after (a total of 10 things)

Next Week:

  1. Finish the fourth video
  2. Add more sound cues to the video, since it has no humans
  3. Keep polishing the conversation; I personally want to make the 1st video’s content better (after all videos are done)

The Week after:

  1. Edit everything together.
  2. Polish the conversation in the videos
  3. Documentation
  4. Check for errors


I will send out a meeting schedule soon, after sending everyone their notes!

MJ – There are some interesting points in your project; it has four points in one. Appreciate the highly imaginative elements. At this point, the task is to edit down and make it sensible. Nice graphics and design elements. I like that the objects are not in agreement. The gender narrative may not fit with the current “gender fluid” moment we are in. Agree that there is some difficulty discerning the most important elements. Think that the focus on weight is problematic in that narrative. Reconsider that chapter. Gender is also tricky. Subtlety may be lost in the final form. Precedent: Lauren McCarthy: simulating Alexa

AF – A little confused by the project; is it a video or a program? The video is always the same, not sure about the control mechanism. I like that you are bringing in humor and irony, especially to help people understand the inter-connected elements of the tech. Trying to imagine myself in the role of the viewer. Will I be able to absorb this in only watching the video? Thought the script was something we could read. Like the printed algorithms; some of it is obscure, some is clear. Having a hard time becoming immersed in your story; need one more thing in order to fully participate.

DJ – The question, what is smart? Are they aggressive? We need to understand how these machines are wrong on multiple levels. Technologies can do the opposite of what we want them to do. Problematic parts can be pushed further. 

JI – This is the first time I’ve heard you describe the tone of your approach as “black humor”, as well as the fact that Sandy is modeled on you. The presentation can be adapted for a linear showing of this, on the video reel. Is “it” the right pronoun? Non-binary? This has the element of conflict that was missing in the previous work. It is captivating to see them argue over Sandy. Perhaps limit to 3 stories? May want to reverse the order: start with the object, then end with the argument of who the user is. Not sure cardboard is the ideal material – it will get dirty, and is also hard to read. Please scrap the interactive menu and put all of your attention into making this a compelling video piece. This can be added to the screening reel, if the questions above can be clarified and the approach simplified.

Feedback for User Testing Tuesday


What sort of feelings do you want people to feel?

-> they like this smart system?
-> they shouldn’t like it?
-> This is creepy?
-> We should have these things to help us have a healthy lifestyle?
-> are you critiquing how we live our lives nowadays?
-> Is losing weight a good choice for the story or something else?


  1. Reference movie: Smart House by Disney (1999)
  2. Adding sound, having the conversation read out loud?
  3. It is good not to tell the audience who Sandy is


  1. Who set up the algorithm? Sandy or someone else? Think of a way to give people a heads-up, even one sentence
  2. Sandy could eventually die or go crazy if you want to go to an extreme


Sandy could be wearing the health-tracking device herself
-> heart rate
-> Vitamin
-> etc.,


  1. Think of an algorithmic way to express the thoughts of the furniture
  2. Maybe use pseudocode
  3. Chair -> Weight measure
  4. Vacuum -> Trash Can
  5. Reference: iPhone X ad
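The pseudocode idea above can be sketched as tiny “thought” functions, one per object, each mapping a single sensor reading to a sentence. This is a hypothetical illustration following the chair → weight and vacuum → trash pairings in the list, not the thesis code:

```python
def chair_thought(weight_kg: float) -> str:
    # The chair's only sensor is the weight pressed into it.
    if weight_kg == 0:
        return "Nobody is sitting on me."
    return f"Someone weighing about {weight_kg:.0f} kg is sitting on me."

def vacuum_thought(dust_collected_g: float) -> str:
    # The vacuum judges the household by how much it picked up today.
    if dust_collected_g > 100:
        return "This household is messy today."
    return "The floor is fairly clean."
```

Each function makes explicit how narrow a single sensor's "interpretation" of a person really is, which is the point the furniture dialog dramatizes.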

1×1 meeting with Danielle


Creating a longer story. Planning six stories about what is smart. Wants to finish 3 more.

Not sure there is enough power 

Control dilemma. Smart devices fight back! Wants it to be absurd.
Projection mapping w/ models. Objects will be too small. Animation? VR? One model to manipulate.

God view. But why would someone be interacting? 
What is the interaction you are imagining??? Is VR easier…?

Current idea for interaction:

1) Manipulate the model. Chair w/ pressure sensors

2) interaction station.
Each story comes w/ its own trigger???
Could the projection map run itself? Don’t want the dialog to run itself. Not talking daily.

Three points of view

1) You do something related to the story that triggers the map to start (you are Sandy)

2) The map plays on its own bc they have a life of their own (Possible God-view, You are not Sandy)

3) Provide an interface with the options (You are not God, and you are not Sandy) 
Prefer the third one

Use scenario will be even more important!!!
Are the stories good? Figure out how to judge. What matters to you most? That it is entertaining? That it invites reflection? That it reveals the algorithmic process? Etc.?

Feedback from Jess

Queer Research

Queer is an umbrella term for sexual and gender minorities.

In academia, the term queer and the related verb queering broadly indicate the study of literature, discourse, academic fields, and other social and cultural areas from a non-heteronormative perspective. 

Presentation Feedback

From simple control systems we will design products that need to have a point of view of their own. From silent automation they will have to have feedbacks for an agreed discussion. From pushing buttons we might have to build the tools to have an actual conversation.

What kind of interfaces will we have to design? What are the buttons needed? What kind of messages will we receive?


Q1. What clarity did you accomplish for yourself as you worked on the midterm review presentation? What clarity around your core problem/idea, the thesis form, or your critical perspective happened before the review?

  1. I identified key questions that I want to explore for my thesis topic, which helped me clarify the goal of my project.
  2. I identified the form, executed a corresponding prototype, and tested whether this form is doable and effective.
  3. I think the core problem and question that I want to ask through my thesis were clearly stated in the presentation before the review, but the detailed content of what I need to project at the end still lacks clarity.

Q2. What feedback did you receive overall? How is this informing what you are planning to build out? Be specific.

Overall, people responded with strong interest to my questions, especially “What is being smart?”. The general feedback I got was that the piece will only be as interesting and important as the content turns out to be. Also, people want me to simplify the form.

So, my next important step will be focusing on storytelling and figuring out the conversation between the “smart” furniture. Continuing to build furniture models and test the projection settings is also part of the next step.

Q3. Did any one comment stay with you as something to consider moving forward?

Critics suggested I simplify the form by de-emphasizing or even dropping the modeling of the furniture, but I think the 3D models of the furniture are still important to the installation. The models help construct a scenario for the audience and work as embodiments.

Q4. What lingering questions do you have, related to your research, testing, development or presentation?

What stories and conversation can trigger the thinking of “what is being smart?”


I was pleased to see the updated information in your presentation, and it seems the clear and compelling question people responded to was “what is smart?” I think their recommendations are clear and helpful. Danielle can expand on her comments, as we were short on time. I think it makes sense to develop a plan for the narrative, or story: what is the conversation these objects are having? Let’s get it figured out clearly on paper for next week.


Interesting set of questions. What is intelligence? Dimensions of power in domestic space? It needs to work, yet the piece will only be as interesting as the content is. The audience will understand this through the story. Think of this as a story: what are they talking about? Iterate many stories. It is a dialog between furniture – how do we reconsider what smart is, through these conversations? As you are building, work on the writing of the stories.


Premise of furniture thinking about people is interesting. What are you considering smart – focus on the conversation, not just the data. Critically, this is where the project can expand. If it is someone’s home, will it be more personalized? What about superveillance? Aesthetic reference: “mechanic disclosures”, projections on paper. Simplify the form, focus on the conversations.


Is this about furniture talking to each other, or is this about them talking to each other so they can do a job? OK, this is in service to the person, yes? Will send you a scholar at NYU about how intelligence is formed… will follow up.

My Presentation

1. Exhibition plans

2. Concept Statement

A “Smart” Discussion is an interactive museum installation that creates a scenario in which pieces of furniture talk and communicate with each other in order to reach a shared interpretation of a person, based on basic human interactions and behaviors.

3. Domains

Internet of Things
Home Automation
Human-Computer Interaction
System Design

4. Methodology | I am not_

  1. Making a smart home device
  2. Revealing the algorithms of real products on the market

5. Prototypes

Code version

6. Purpose | I am building this to_

  1. To use a fun and playful way to demystify the “learning” process of home automation/smart spaces.
  2. To provoke thinking about “What is being smart?”.
  3. To explore the possibility of a more engaging and tangible learning process for home automation, in front of users’ eyes.
  4. To picture a scenario where people have more sense of control over their data (visually) at home in the future.
  5. To discuss the future relationship between humans and their smart spaces.

7. Impetus | I am interested in discovering _

  1. How could I use technology to reveal unseen processes and stories that are worth thinking about and learning from?
  2. How does the learning algorithm behind home automation work?
  3. What happens if the algorithm is displayed in a readable way in front of people’s eyes: scarier, or a relief?
  4. What is being smart? What makes a piece of furniture smart? (Thinking like a human, or doing the job for us?) What’s our expectation?

Because I believe that, along with the fast development of A.I. and IoT, the realization of home automation is an inevitable trend. People will eventually cohabit with these smart home devices and adapt them into their lives.

8. Research

9. Final Exhibition Mockup

Thesis Questionnaire

  1. Group: The New School Us List (Student/Professors)
  2. Total Data Collection, 26 groups

According to the questionnaire, among 52 votes from 26 respondents, 23 were positive, 10 were neutral (curious, mysterious, no feeling, surveillance), and about 19 votes expressed dislike. The result implies that, although some people have positive feelings about smart home devices, they still feel unsure and insecure about them. About half of the people think smart home devices are very convenient.

After counting, 12 (46.2%) respondents do not have or do not want to have smart home devices in their home; 14 (53.8%) want to have or already have smart devices at home. Interestingly, some of the people who have smart home devices have never used them and do not trust the machines. Most respondents who answered no expressed concerns about privacy and data security.
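The percentages above can be reproduced with a quick tally. A minimal sketch, using the respondent counts from this questionnaire summary:

```python
def pct(count: int, total: int) -> float:
    """Share of respondents as a percentage, rounded to one decimal place."""
    return round(100 * count / total, 1)

respondents = 26
no_device = 12      # do not have / do not want smart home devices
want_or_have = 14   # want to have or already have smart devices at home
```

Checking that `pct(12, 26)` gives 46.2 and `pct(14, 26)` gives 53.8 also confirms the two groups cover all 26 respondents.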

50% of the respondents showed they would prefer a more transparent and tangible learning process for smart home devices. The other answers varied; some of the respondents required clarification. I guess this might need further research and a physical prototype to test out whether revealing the “learning process” is effective or not.

The age group for this research is mostly students in their 20s. The only respondent of a more senior age showed a negative feeling about smart home technology, with concerns about privacy and surveillance issues. The younger respondents gave positive answers to all the questions.


  1. People find smart home devices scary because of a loss of control over their data and their interaction with the machine.
  2. People prefer smart home devices with softer smart functions, and functions that do not harm their privacy, e.g., curtains, switches.
  3. The younger respondents have a more positive feeling about smart home devices.
  4. Half of the people showed a preference for a more tangible and transparent learning process for smart home devices. More prototypes and user tests are required.

Questions to be Answered

  1. What is the case scenario for my installation?
  2. Who am I designing for? Define the personality, the scenario so I could have a better idea upon execution.

From the article: Like a Family Who Care Me

Kiesler et al. [19] examined how humans create a mental model of a (humanoid) robot. The authors report that when people show anthropomorphic characteristics in their mental models of a system, they tended to perceive the system as more […]

In the field of smart medical technologies, e.g., Pak et al. [20] reported trust-building effects of anthropomorphic characteristics on a diabetes decision-making support aid. Therefore, we expect that users might perceive a system as more trustworthy if their mental model contains anthropomorphic characteristics.

The relation of anthropomorphic perception to trustworthiness remains an unanswered question. Backed up by earlier research (see, e.g., [18, 19, 20]), it seems worth investigating the relation of anthropomorphism and trust in the field of smart home environments. We suggest an anthropomorphic threshold, which should be investigated by using a more precise methodology and scale. Still, metaphors are an effective and inspiring way to overcome the abstract and difficult character of computer systems.

Hidden Energy Flow in Smart Home Device

Why Smart Home is Creeping Out Customers?

“What’s the difference between a smart home and a stalking home?” 

Having the home be the focus of everything is not necessarily what the consumer is looking for

“Consumers are completely disconnected from some of the data about their utilities, and that needs to end, or we can’t get smart.”


From Article “Anthropomorphism on the Rise”

The embodiment of system logic in the smart home device

Just as smart objects and services are interpreting our behavior, we sometimes need to understand what this interpretation results in, and how it might come to its conclusions. This is equivalent to a math teacher asking you to show your work. As painful as it may have been for us in grade 9, teachers around the world are establishing trust in our abilities and accuracy.

Unfortunately, most smart systems don’t do much to embody their processes. 

Question: How should designers communicate how these systems work, and enable people to rely on, trust, and understand them?

Anthropomorphic system
We can use our experience of people and their behavior to relate to anthropomorphic systems.

As humans, we establish trust by comparing promises with actual responses. But with emerging services and experiences, sometimes we don’t know what the promise should be. What does it mean for the thermostat to “learn”? To comprehend what’s going on, we look for ways to move this from abstract to tangible.

Rico & Mother (smart home devices). Their use of metaphor is anthropomorphic rather than skeuomorphic: they exhibit human form, qualities, and behavior without being living entities.

It’s easy to dismiss the actual card purely as a necessity of past technological shortcomings, but it’s worth considering it as more tangible representation (or avatar) of the service. There’s a system in the background enabling the transportation, but in most cases having the card is what it takes for people to depend on and understand how to use the service – card is the key and you swipe it to get access.

For one thing, the home is a stage where multiple users share the same system.


Unfortunately, most smart systems don’t do much to embody their processes. Nest hardly does it at all except for showing its current state. Netflix explains its recommendations as “because you watched” (also problematic when not getting it right). We’re left with few ways to establish trust, let alone engage with the processes directly.

Of course, users don’t want to evaluate these intelligent algorithms specifically. But in the foreseeable future, it will become more important to design touch points where complex processes can be shown, become relatable, and may even become tangible to us.

We’re not suggesting that anyone rejects leveraging anthropomorphic design patterns today. We are still in early times in the design of our connected things and services, and these patterns will play an important role in establishing user expectations and building trust in intelligent services.

However, designers must continue to explore and create new approaches. As far as designing for our connected life goes, the essence is still out there.

Design Mind: When Objects Talk Back


We expect a lot from something that is labeled as smart. But as smart as a product could be, data analysis and sensing are not enough to design a fully trustworthy experience. By relying on a product’s smartness, we tend to hide complexity. And by focusing on connectedness we outsource all controls to remote applications.

Soon we might find ourselves with objects around our homes that prevent us from making choices, that might awkwardly deny any manual control and behave in a way that is not really understandable to us. At this point, who is actually in control?

In a recent study about the use of Nest (besides a clear delight brought on by the slick UI design and remote control), what was surprisingly most undervalued was its “smartness.” People couldn’t fully rely on the self-setting of certain functionality as its sensing was not perfectly accurate (Nest relies on sensing presence to set specific routines as Away mode). The interviewees didn’t fully understand what learning meant, as the Nest seemed to be repeating what it was set to do. Ultimately, what was particularly interesting is that people didn’t trust it because “The Nest is doing its own [thing] and doesn’t tell you what it is doing.”


Will a coffee machine give me a coffee if it knows that my blood pressure is too high? Will it give me the same work to boost up my productivity if it knew from my Fitbit that I run a lot?

From simple control systems, we will design products that need to have a point of view of their own. From silent automation, they will have to have feedbacks for an agreed discussion. From pushing buttons we might have to build the tools to have an actual conversation.

User-centered design is useful in dealing with silent and inert objects, but new intelligent systems and objects surface a new set of issues and constraints.

calm technology

The object, in this view, becomes the lens to look at an interaction, looking at the center, the user, but from the perspective of the object.

So what if we would actually design from a product perspective?

As it happens with pets, we need ways to encourage good behaviors and correct bad ones within the objects we own. Through the process, pets understand the limit of what that can do and learn how to communicate with us. Objects will need to be designed just right to learn and adapt for various scenarios, to understand the stirring, to be sensible if they need be, even be loud or smelly or annoying if they have to.

Usability will no longer be the only goal of objects: they will absorb, adapt and become “engagable.”

However, in this near future, we might start to think about interfaces for objects to better communicate with humans, mediating their goals with our lives and routines.

Feedback from the Science Fair

1. Reflection from myself:

I found that people’s opinions are super diverse. Some people feel super…

Youchun suggested I do the project that lies in my heart. Because my topic relates to smart home devices, lots of people gave me suggestions based on how to make a physical smart home device or what the future physical smart home might look like. However, these are not what I want for my project. In my heart, the point of my project is to unveil the hidden algorithms of smart home devices.

2. Precedents from Peers:

From YouChun

  1. Objectifier 2016
  2. Narciss – AI whose only purpose is to investigate itself
  3. Deltu by Alexia Léchot – iPad as a ‘mirror interface’ between humans and robots

From Echo:
Gravity Sketch:
This is a tool for quickly making 3D models in VR

From Jackie:
Interestingly, Jackie thinks the cup is a result of data visualization from the smart home device. She thinks the idea is still not clear enough