
Jonas Gebhardt

Interaction Designer & Software Engineer

Hi, I'm Jonas.

Somewhere on the developer-designer spectrum, I generally find joy in making all things digital. I studied Human-Computer Interaction and Software Engineering at Carnegie Mellon University and Hasso Plattner Institute. Currently, I'm at Facebook creating user interfaces for Instagram. Here are some of my past & ongoing projects.

Résumé (PDF)

Send email

  • All
  • Code
  • Design
  • Research

Portfolio

  • Boeing 787 Factory Communication

    Jan – July 2013

    A central part of the MHCI program is the rigorous, 8-month capstone project, where interdisciplinary teams of students work as consultants for large industry clients. My team is working with Boeing to conduct research and user experience design work to improve factory communication on the company's 787 line in Everett, WA.

    Due to the confidential nature of this project, I cannot share specific work details; however, we are employing a variety of research methods such as Affinity Diagramming, Contextual Design, Shadowing, Heuristic Evaluation, and Ethnography. After consolidating and synthesizing our research data, we will iteratively design and develop several prototypes, refining them continuously through user testing.

    Team Zephyr

    Team

    • Arthur Hong
    • Jason Block
    • John Rogers
    • Rebecca Jablonsky

    Role

    • Design Lead

  • Voyager: Ubicomp Game Concept for Kids

    Spring 2013

    Team

    • Ben Margines
    • Kiran Lokhande
  • Methodology of Visualization

    Spring 2013

    Rapid Visualization

    As part of the Methodology of Visualization course taught by Matt Zywica of CMU's School of Design, I explored techniques for rapidly visualizing concepts, ideas and interactions. The course covered material ranging from basic drawing behaviors and visual mechanisms to developing and communicating complex form and meaning.

    Rapid Visualization


  • Health Design Challenge

    December 2012

    Hosted by the Department of Veterans Affairs, the Health Design Challenge sought to re-design the patient medical record. The main design objective was to make it easier for patients to manage their health, while enabling health care professionals to more effectively understand patients' medical histories.

    Our submission was featured in the Showcase section as one of the competition's top entries that "inspired the judges and challenged the status quo."

    Our team interviewed health care professionals, caregivers, patients and their family members, employing the Contextual Design methodology to synthesize our vision from consolidated user data.

    Recognizing the need for a multi-purpose and multi-media solution, our design focuses first and foremost on being easily accessible to patients, while making sure that medical professionals can quickly retrieve all necessary information. While our submission encompasses a printout version of the medical record, it was designed with future interactive applications in mind.

    I was responsible for designing the layouts for a comprehensive medical history of the patient, as well as a summary overview of past events. Furthermore, I created a detailed problem description layout, which allows medical professionals to quickly gain a detailed understanding of a specific course of disease.

  • Collablocks

    Nov 2012

    Uniting anonymous crowds through collective collaboration

    Can an anonymous crowd of people collaborate to collectively solve a problem? Public events (such as sports games) bring together many people who do not know one another. We wondered whether it is possible to bring a large number of people closer together by tasking them with a collaborative challenge. Each individual contributes part of the solution using a personal mobile device. Players need to take into account the overall progress of the crowd, as shown on a large, shared, public display.
    We built and tested Collablocks, a crowd-based, local, collaborative game, during Facebook's CMU hackathon in 2012.

    Collablocks is a real-time, browser based 3D building game.

    On a large display such as a jumbotron or projector screen, players see a 3D grid of cube-like blocks. A see-through "skeleton" structure represents the goal for the current level. Individuals can join the game by accessing a publicly visible URL from their computers or mobile phones. Players are assigned a unique color and use a simple 2D grid to interact with the game. Each cell of the grid represents a block at the corresponding X,Y position on the shared 3D grid. Blocks can be toggled on or off by the player, creating or removing a block of their color at the top of the corresponding location. Changes to any player grid are instantly reflected on the public display.

    The goal of the game is to recreate the skeleton structure as closely as possible using the players' blocks. Each player is responsible for at most one layer of blocks on the public display, but needs to take into account blocks created by other players. A shared progress bar visualizes the overall progress of the current level. Having to react quickly to other players' actions becomes the central challenge of the game.

    Implementation

    We used three.js/WebGL to visualize the public 3D display. The virtual camera continuously rotates and oscillates to reduce occlusion/visibility issues. However, this rotation is constrained to less than 180° in order to facilitate users' association of block locations between the public and private displays. Also, blocks are semi-transparent in order to reduce occlusion.
    The player's input interface is a responsive HTML5 page that maximizes cross-device compatibility as well as the amount of input space available on mobile devices.
    The input and display pages are served by a node.js backend and communicate via WebSockets to minimize latency, an essential feature for fluid gameplay that allows players to visually associate "their" cubes with actions taken. We built on the SocketStream framework, using an event-based architecture to keep server and clients in sync. Our backend uses Redis as a fast in-memory store containing session data as well as each player's color and input state, allowing players to reconnect without losing access to previously created blocks. When a level is completed, the server generates the next level and broadcasts it to all current players and public display(s), which in turn update their UI.

    In the fast-paced, error-prone environments common at public events, it is important to maintain an accurate count of current players, since this count determines the vertical scale factor for the blocks and lets the game adapt to the size of the crowd. We obtain it by monitoring the WebSocket connections on the server side, while employing a periodic "heartbeat" to check on non-WebSocket-based connections.
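
    The player-count bookkeeping could be sketched as follows (a minimal illustration, not the actual Collablocks source; all names are invented). WebSocket clients are counted directly, while polling clients register periodic heartbeats and are expired when they go silent:

```javascript
// Sketch of heartbeat-based player counting (illustrative names only).
class PlayerRegistry {
  constructor(heartbeatTimeoutMs = 5000) {
    this.timeoutMs = heartbeatTimeoutMs;
    this.sockets = new Set();     // live WebSocket connection ids
    this.heartbeats = new Map();  // playerId -> last heartbeat timestamp
  }

  socketConnected(id)    { this.sockets.add(id); }
  socketDisconnected(id) { this.sockets.delete(id); }

  // Non-WebSocket clients call this periodically ("heartbeat").
  heartbeat(playerId, now = Date.now()) {
    this.heartbeats.set(playerId, now);
  }

  // Total current players; stale heartbeat entries are evicted on read.
  count(now = Date.now()) {
    for (const [id, t] of this.heartbeats) {
      if (now - t > this.timeoutMs) this.heartbeats.delete(id);
    }
    return this.sockets.size + this.heartbeats.size;
  }
}

// The count drives the vertical scale factor for blocks, keeping the
// tower reachable regardless of crowd size.
function blockScale(levelHeight, playerCount) {
  return levelHeight / Math.max(1, playerCount);
}
```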

    Results & Outlook

    We built Collablocks in less than 24 hours and successfully demoed it with a crowd of 30–50 users, winning second place at the 2012 CMU Facebook hackathon. One idea for future expansion of the game is to randomly assign players to different teams in order to create a sense of competition, displaying current team results side by side on the public display. Furthermore, we see potential to improve discovery and onboarding by advertising ongoing games in prospective players' vicinity.

    Team

    • Jonathan Chan
    • Noah Bornstein
    • Raunaq Gupta
  • Swrm.io

    Swrm is a platform for honest feedback on the web.

    Aug 2012 – ongoing

    Swrm is a platform for honest design feedback. We started Swrm after realizing how hard it is to get honest feedback on the web. Currently, online feedback comes either in the form of a 5-star scale with little context, or as part of discussion threads that typically do not promote a culture of honest critique, making these platforms prone to unconstructive comments and trolling.

    Swrm allows designers and creative thinkers to give and get feedback and insights on their work from a large panel of reviewers. It helps creatives get feedback from designers of all experience levels, improve their work, and learn new skills.

    Implementation

    We created Swrm as part of Luis von Ahn's Startup Lab class at CMU. I architected the system and together we built a first prototype within a few weeks. Swrm is a single-page, "soft realtime" web app using AngularJS for the frontend, which is bound to our Node.js backend via WebSocket pub/sub. This is facilitated by the SocketStream framework (a WebSocket/socket.io wrapper) built atop Redis' event infrastructure. We use MongoDB for persistent storage. We used HAProxy for reverse-proxying to Node since Nginx does not currently support WebSockets.

    Team

    • Ari Zilnik
    • Luis Gonzalez
    • Raunaq Gupta

    Technologies

    • Node.js
    • SocketStream
    • Redis
    • MongoDB
    • AngularJS
  • Flox: iOS Game

    Nov 2012

    Flox [Flock + Flux] is a particle-based iPad game. On a large, pannable game map, many small particles ("travelers") are generated, each with randomized start and destination points ("hubs"). Travelers move slowly towards their destination hubs. Travelers can also move along "routes", in which case they move much faster, much like cars travel faster on highways than on small roads. The point of the game is to create effective routes between common start/destination combinations, building a network of routes that helps travelers reach their destinations faster. Players receive feedback on the efficiency of the routes they build and, in a playful way, create a complex, dynamic, beautiful system.

    I envision future multiplayer versions to feature both discovery-based and constructive-collaborative gameplay. From an Experience Design standpoint, Flox aims to provide an Optimal Experience: to be intrinsically rewarding and to put the player in a state of Flow. Conceptually, the game is both absorptive and immersive, both entertaining and educational.

    The game map is generated ad-hoc from a random seed, allowing future multiplayer (network-based) implementations to quickly reconstruct the map from a single set of coordinates. The game is architected in a way that allows lean network synchronization of the game state across different devices: Hubs are implicitly "encoded" in the random seed; individual particles need not be synced due to their stochastic nature (their overall spawning and targeting behavior is encoded in the hubs), leaving routes and user profiles as the only elements that would need to be synced dynamically.

    I built the prototype shown in the video using iOS and Cocos2D in about 5 days, without prior knowledge of iOS. The Cocos2D engine abstracts away peculiarities of OpenGL, which allowed me to quickly reach minimum viable product stage. Some prior experience with C++ and Smalltalk helped smooth the transition to Objective-C.
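
    The seed-based map reconstruction could be sketched as follows (JavaScript rather than the original Objective-C, and all names and constants are invented for illustration). Because hub placement is a pure function of the seed, any device can rebuild an identical map from the seed alone; nothing else needs to be transmitted:

```javascript
// Deterministic map generation from a single seed (illustrative sketch).
function mulberry32(seed) {
  // Small deterministic PRNG; the same seed yields the same sequence
  // on every device.
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function generateHubs(seed, hubCount, mapWidth, mapHeight) {
  const rand = mulberry32(seed);
  const hubs = [];
  for (let i = 0; i < hubCount; i++) {
    hubs.push({
      x: Math.floor(rand() * mapWidth),
      y: Math.floor(rand() * mapHeight),
      spawnRate: 0.5 + rand() * 0.5, // travelers spawned per tick (made up)
    });
  }
  return hubs;
}
```

    Since individual travelers are stochastic and their behavior is encoded in the hubs, only routes and user profiles would remain to be synced dynamically, as described above.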

    Technologies

    • iOS
    • Cocos2D
  • Beta Tasting

    Beta Tasting: A Dessert Recipe E-Book

    Nov — Dec 2012

    Beta Tasting is a dessert recipe e-book geared towards a young, urban, edgy and technologically inclined audience. We created three personas and several scenarios of use for this type of e-book. Our initial competitive analysis suggested that most dessert recipe publications on the market follow a "cutesy cupcake shop" aesthetic that may not appeal to our target audience.

    Beta Tasting: moodboard

    Beta Tasting: Design Language

    Based on a large moodboard, we created our design language: clean-cut, simple geometric shapes; gritty, full-page process photos and a monospace font with a "computer terminal" feel. Infographics are added for additional context, representing different concepts such as the recipe's ingredient composition or sub-sequences in the preparation process.

    Beta Tasting: Visioning sketch

    As an additional feature, we envisioned the sequence of background photos forming a continuous panorama, enticing the reader to explore the entire e-book. The background photos show the process of preparing the last recipe in the book ("Pomegranate-Cider Baked Apples with Sugared Pie-Crust Strips"), with the final result shown on page 22. Furthermore, the recipes are sequenced to create a continuous "flow" of common ingredients across the book. These ingredients are included in the pictures in order to relate the background images to the recipe on the current page. For example, mint and several spices were included in the background picture on page 10 to relate to the recipe on that page ("Gingered Ambrosia").

    Beta Tasting: recipe relation chart

    The order of recipes was determined with the help of the above chart, which indicates which recipes share similar or identical ingredients. For example, recipes on pages 11-14 are linked by "Vanilla" as a common ingredient, which is included in the background picture.

    Beta Tasting: Photography storyboard sketch

    After roughly laying out the order of steps and ingredients to be included in the panorama, we shot the entire sequence over several hours, while preparing the actual recipe. The pictures were then manually stitched together to create the illusion of a continuous flow, which works especially well with the "sideways swipe" gesture commonly found on tablets.

    Beta Tasting: panorama backdrop

    Team

    • Aishwarya Suresh
    • Judith Tucker
    • Truc Nguyen

    Tools

    • Photoshop
    • Illustrator
    • InDesign
  • Kinetic Typography

    1) Kinetic Typography demo, in perspective

    Dec 2012

    Kinetic Typography is a technique to visualize auditory and emotional characteristics of speech with the goal of conveying additional information on top of the textual content.
    The above video draft is an example of Kinetic Typography, visualizing the speaker's vocal properties and emphasizing the continuous stream of background vocals.

    Tools

    • AfterEffects




    2) Generating Kinetic Typography from speech

    Feb 2013

    Creating Kinetic Typography is a fairly complex process. Basic characteristics such as loudness and frequency can easily be retrieved from speech, and speech-to-text tools have become quite reliable. We wondered whether it would be possible to generate Kinetic Typography automatically from speech input alone. We built an HTML5 prototype during the TartanHacks hackathon at CMU. The prototype uses both Google's speech-to-text API and the HTML5 audio API (for sound processing). We aimed to extract per-syllable loudness from the audio stream before attaching it to the words returned by the speech processor. Loudness is then converted to text size, and animations/transitions are added to each word.
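
    The loudness-to-size mapping could be sketched like this (an illustrative sketch; the actual prototype reads sample windows from the HTML5 audio API, and the constants here are invented):

```javascript
// Sketch of mapping per-syllable loudness to font size (illustrative).
function rmsLoudness(samples) {
  // Root-mean-square amplitude of a window of PCM samples in [-1, 1].
  const sumSquares = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSquares / samples.length);
}

function fontSizeFor(loudness, minPx = 16, maxPx = 72) {
  // Clamp to [0, 1], then interpolate linearly across the size range.
  const l = Math.min(1, Math.max(0, loudness));
  return minPx + l * (maxPx - minPx);
}
```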

    Links

    • Online Demo (Currently requires Google Chrome Canary / Chrome Version ≥27)

    Tools

    • HTML5 Speech, Audio APIs

    Team

    • Nancy Chen
    • Nishita Muhnot
  • Storyboards: Hospital Visioning

    Various rough & polished storyboard sketches for innovative concepts in health

    Oct 2012

  • Architecture Poster

    Poster: The Origins of the Avant-Garde in America

    Oct 2012


    Objective

    As part of CMU's Interaction Design Fundamentals class, our task was to re-design the poster for the Philip Johnson Colloquium. The original can be found here.

    Design process

    I first created some preliminary sketches based on the rough structure of the provided text and the “modern architecture” theme evident from the sample pictures.

    Preliminary Sketches / Ideation

    After experimenting with a range of layouts, I realized that I needed to familiarize myself more with the theme and the multiple interpretations of avant-garde.
    I found this site especially helpful for getting a better idea of avant-garde as an art-historical movement, and for learning to differentiate between the various styles arising from related movements and sub-genres such as Constructivism, Bauhaus, De Stijl, and Dada.

    The crisp and calculated use of relatively few, powerful colors and geometric shapes found in the works of El Lissitzky, as well as the iconic, minimal design language of the Bauhaus masters, were particularly inspiring to me. At this point, I decided to pursue a very modern, minimalist (and thus somewhat inherently avant-garde) style for my poster design.

    As a native of Dessau, Germany, I grew up with a particularly close connection to the Bauhaus movement. Ludwig Mies van der Rohe, head of the Bauhaus School from 1930 to 1933, offered what is possibly the best-known description of minimalism: "less is more".
    Incidentally, a discussion of Mies' work turned out to be part of the schedule for Session III of the colloquium, which motivated my choice of this picture of the Seagram Building (New York City) as the background. The photo works particularly well due to its photographic composition, its depiction of an actual architectural artifact of the avant-garde, and of course the fact that the building was designed by Mies himself. Furthermore, the building is situated only a few blocks from the MoMA, which is the venue for Session IV of the colloquium.

    Tools

    • Illustrator
  • DepthSelect

    DepthSelect: Leveraging touch pressure to mitigate Pick Ambiguity

    Nov 2012

    DepthSelect allows users to select occluded objects in layer-based UIs. Layer-based tools such as PowerPoint, Photoshop, or Illustrator make it hard to select objects that are occluded by others, a problem called Pick Ambiguity. Common workarounds include selecting all objects and then deselecting the unwanted ones, or alternatively locking all but the layer one is trying to select.

    As our submission to the UIST2012 Student Innovation Contest, we used a Synaptics ForcePad (a pressure-sensitive multi-touch input device) to convert touch pressure to "selection depth". By varying touch pressure, objects under the current touch position can be targeted. The selection is finalized by quickly tapping with a second finger. A continuous pressure indicator next to the object list visualizes the current pressure level for the user, allowing the user to more quickly form an accurate mental model of this interaction paradigm. As deeper layers are selected, top layers become increasingly translucent ("onion peeling"), allowing the user to select totally occluded objects.

    Implementation-wise, we first wrote a WebSocket wrapper for the C++-based ForcePad SDK, allowing us to accept input events in the browser with low latency. The web frontend is based on paper.js and CSS3. I wrote a basic vector-based layout program, a pressure-based multi-touch vector drawing tool for quickly creating demo shapes, and the 3D selector.
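
    The core pressure-to-depth mapping could be sketched as follows (an illustrative sketch; the thresholds and names are invented, not the ForcePad SDK API). Harder presses target deeper layers, and layers above the target are faded for the "onion peeling" effect:

```javascript
// Sketch of pressure -> selection depth (illustrative constants).
function depthForPressure(pressure, maxPressure, layerCount) {
  // Normalize pressure to [0, 1), then quantize into a layer index,
  // so maximum pressure selects the deepest layer.
  const p = Math.min(pressure / maxPressure, 0.999);
  return Math.floor(p * layerCount);
}

function layerOpacity(layerIndex, selectedDepth) {
  // Layers above the selection become translucent ("onion peeling");
  // the selected layer and everything below stay opaque.
  return layerIndex < selectedDepth ? 0.3 : 1.0;
}
```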

    Team

    • Asim Mittal
    • Kevin Scott
    • Raunaq Gupta

    Technologies

    • Synaptics ForcePad
    • C++
    • Python
    • JavaScript
    • Paper.js
  • UIST 2012 T-Shirt Design

    Aug 2012

    My design won the UIST2012 Student Volunteer Shirt Design Contest (landing me a spot as an SV at the conference, which was great).
    The conference was held on MIT's campus, so a local reference to the MIT Stata center seemed appropriate. UIST is the ACM Symposium on User Interface Software and Technology. Papers in the field frequently feature a vector outline-based "Figure 1" on the first page as a type of "visual abstract". Incorporating this style with the themes of tangible and bi-manual interaction (both commonly found at UIST) and the Stata center yielded a design that also works well when printed on cloth.

    above: preliminary sketches

    Tools

    • Illustrator
  • SlingShot

    Nov 2009 — Aug 2011

    SlingShot is a workflow management system I built for exozet effects.

    SlingShot is used to manage the entire production pipeline and its artifacts, including Shots, Plates, Set Data, Tasks, Milestones and Client Feedback. The tool has enabled Exozet's VFX pipeline to scale up to several high-profile productions, for example "Cloud Atlas" and "Abduction".


    SlingShot's project statistics provide a compact overview of the entire show, summarizing shot statuses and visualizing task throughput by department.


    Personal Dashboards: Artists, Leads and Producers can easily track their work items throughout the show


    Shot Overview: one-stop summary of most relevant details


    Condensed interfaces for effective management of repetitive, nested tasks



    Selected Features

    • Personalized Dashboards for Artists, Leads, Producers and Management
    • Manage Shots, Assets, Storyboards, Plates, Notes, Set Data, Contacts, Tasks, Artists, Milestones, Client Feedback and more
    • Detailed, real-time project statistics
    • OmniPlan Milestone and calendar import
    • Messaging & Note sharing

    Responsibilities

    • Design
    • Implementation
    • Training
    • Support
  • GravitySpace

    Whole-body interaction on a touch-sensitive floor

    Oct 2010 — Jul 2011

    GravitySpace is a new, camera-free approach to tracking objects and activities in smart rooms, using only touch-sensitive horizontal surfaces.

    We built several large, high-precision multi-touch surfaces as prototypes. As a yearlong capstone project with Patrick Baudisch at Hasso Plattner Institute's chair for Human-Computer Interaction, we equipped all horizontal surfaces in a room with touch sensitivity, including chairs, tables, and the floor. As gravity pushes people and objects against these surfaces, it causes them to leave imprints, i.e., distinct pressure distributions across the surface. GravitySpace infers from these imprints which objects or people occupy the room, how they are positioned, and which actions they are performing.

    We implemented GravitySpace by building an 8m² FTIR glass floor (input and display resolution: 12 megapixels each), as well as several other pieces of active and passive touch-sensitive furniture. We then developed a framework on top of OpenCV in order to process imprint images and recognize objects and their spatial relationships above the surface.
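
    The imprint-recognition idea could be sketched like this (JavaScript instead of the original C++/OpenCV, and the features, thresholds, and classes are invented purely for illustration). Each connected pressure blob is reduced to simple shape features and matched against known imprint templates:

```javascript
// Illustrative sketch of imprint classification (all values made up).
const templates = [
  { label: 'shoe sole', minArea: 80,  maxArea: 300,  maxAspect: 3.5 },
  { label: 'chair leg', minArea: 1,   maxArea: 20,   maxAspect: 1.5 },
  { label: 'cube seat', minArea: 400, maxArea: 2000, maxAspect: 1.2 },
];

function classifyImprint(blob) {
  // blob: { area: contact area in cm^2, aspect: long side / short side }
  const match = templates.find(
    (t) => blob.area >= t.minArea && blob.area <= t.maxArea &&
           blob.aspect <= t.maxAspect
  );
  return match ? match.label : 'unknown';
}
```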


    We instrument all horizontal surfaces with touch sensors. From the resulting imprints (blue), we can determine the objects that caused them.


    Our second-largest floor prototype. Its IR Cam and projector are mounted in the foreground. Imprint images are projected back onto the floor with a slight offset to make them visible from above.


    We can identify people, objects, and even pressure distribution from imprints on the surfaces, creating a '3D model' of the scene above the surfaces.


    FTIR lets us visualize pressure: light is sent lengthwise through a glass pane and is deflected out of the pane only where an object touches the surface.


    Pressure-propagating furniture, such as this cube seat, allows the floor to register pressure distribution indirectly. This seat can be used as a generic 'weight shift' input device, e.g. to control a racing game.

    Team

    • Prof. Patrick Baudisch
    • Paul Meinhardt
    • Jossekin Beilharz
    • Nicholas Wittstruck
    • Marcel Kinzel
    • Franziska Boob
    • Jan Burhenne

    Technologies

    • C++
    • OpenCV
    • Qt

    Press

    • "Wenn die Fußmatte die Tür öffnet" ("When the doormat opens the door"), Berliner Zeitung, February 19, 2011
    • "Multitoe", RBB Zibb (nationwide public TV), 4 min, January 5, 2011

  • BePart

    Facilitating Citizens' Participation in Urban Development

    Jan 2011 - June 2011

    bePart is a concept for mobile e-democracy. A smartphone app informs users about urban development projects close to their homes and facilitates citizens' participation in public debates early in the planning process. In 2011, bePart was awarded first place in the idea category of the Europe-wide Open Data Challenge. We also won first prize at the Apps4Berlin contest, hosted by the Berlin Senate.

    Team

    • Martin Büttner
    • Lukas Niemeier
    • Jannik Streek
    • Nicholas Wittstruck

    Press

    • ZDF Heute
    • Spiegel Online
    • Mitteldeutsche Zeitung
    • UniSpiegel
    • Neue Osnabrücker Zeitung
    • Potsdamer Neueste Nachrichten
  • eye.t vision

    Feb 2009 — ongoing

    I co-founded eye.t vision GbR in 2009 together with my long-term collaborator Martin Büttner. We offer software development and web design services to small businesses. Feel free to browse our company website for more information.

    Team

    • Martin Büttner

    Technologies

    • PHP
    • JavaScript
    • Flash

    Tools

    • Photoshop
    • Illustrator