HI, I'M STANLEY SAKAI.

Full-Stack Engineer, Real-time Stenographer, Linguist, Polyglot
New York, New York


The intersection of language, communication, and technology is what fuels my fire.

ABOUT.

With a passion for accessibility and an eye for design, I strive to develop web apps that both perform and look great. I also type as fast as people speak.

WHAT I DO.

Web Development

I create aesthetic, responsive, and performant web apps, bridging the gap between design and development.

Front-end Technologies

HTML5, CSS3, SASS, ECMAScript 6, Bootstrap, React, React Native, Redux, Pug

Back-end Technologies

Node.js, Express, MongoDB, Firebase, WebSockets, RESTful API design with access-level management

Others

Webpack, Mocha, Electron, Adobe Creative Suite

Real-time Stenography

Using a stenography machine, I transcribe your events live at speeds of up to 250 words per minute.

Input devices

Stenograph Luminex, Stenomod, Stenovations Lightspeed

Software

Plover, Aloft, Vim

Multilingualism

English

Spanish

Korean

Dutch

ASL

PORTFOLIO.

Section under construction.

sharedb-react-textbinding

This component adapts ShareDB's DOM bindings for use with React. It listens to a subscribed ShareDB document and funnels updates over WebSockets to and from component state rather than relying on DOM bindings and listeners.
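
A minimal sketch of that idea, assuming a local ShareDB server and hypothetical collection and document names (an illustration, not the repo's actual code):

  // Subscribe to a ShareDB document over WebSockets and mirror its snapshot
  // into React state instead of binding to the DOM. The endpoint URL and the
  // collection/document names are placeholders.
  import React, { useEffect, useState } from 'react';
  import ShareDB from 'sharedb/lib/client';

  const socket = new WebSocket('ws://localhost:8080'); // assumed ShareDB endpoint
  const connection = new ShareDB.Connection(socket);

  function useShareDBDoc(collection, id) {
    const [data, setData] = useState(null);

    useEffect(() => {
      const doc = connection.get(collection, id);
      doc.subscribe(err => {
        if (err) return console.error(err);
        setData({ ...doc.data }); // initial snapshot into component state
      });
      // Every incoming op refreshes state; copy so React sees a new reference.
      const onOp = () => setData({ ...doc.data });
      doc.on('op', onOp);
      return () => {
        doc.removeListener('op', onOp);
        doc.unsubscribe();
      };
    }, [collection, id]);

    return data;
  }

  // Usage: render the live document body; React re-renders on each incoming op.
  function LiveText() {
    const data = useShareDBDoc('transcripts', 'demo'); // hypothetical names
    return <p>{data ? data.content : 'Loading…'}</p>;
  }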

GITHUB REPO

Talk on Music Accessibility for Captioning Users

I was invited to speak at the Monthly Music Hackathon hosted by Spotify about my experiments in improving acoustic accessibility for the deaf and hard of hearing, particularly those who rely on realtime captioning. I presented several prototypes that overlay realtime-captioned text on a sound spectrogram and add a frequency response graph built with the p5.js library. I also live-captioned my own talk!
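
As a rough illustration of the frequency-graph piece (a sketch with an assumed canvas size and bin count, not the talk's exact code), p5.js with its p5.sound FFT analyzer is enough to draw live frequency bars from microphone input:

  // Draw a live frequency response graph from the microphone with p5.js.
  let mic, fft;

  function setup() {
    createCanvas(640, 200);
    mic = new p5.AudioIn();    // microphone input
    mic.start();
    fft = new p5.FFT(0.8, 64); // smoothing, number of frequency bins
    fft.setInput(mic);
  }

  function draw() {
    background(0);
    const spectrum = fft.analyze(); // amplitude (0-255) per frequency bin
    noStroke();
    fill(120, 200, 255);
    const barWidth = width / spectrum.length;
    spectrum.forEach((amp, i) => {
      const barHeight = map(amp, 0, 255, 0, height);
      rect(i * barWidth, height - barHeight, barWidth - 1, barHeight);
    });
    // Realtime-captioned text would be overlaid on top of this graph.
  }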

TALK RECAP

Frequency Graph Add-on for Aloft

Captions for the deaf and hard of hearing convey the spoken word but don't necessarily capture the full acoustic experience. Good captioners typically use inline parentheticals to describe ambient sounds, but for me this wasn't enough. Here I propose pairing realtime captioning with a simultaneous frequency response graph, which the realtime stenographer can show and hide as appropriate via macros on their steno machine, as one way to maximize accessibility in these situations.
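
The show/hide piece can be as simple as a keystroke toggle; a steno macro mapped to emit that keystroke flips it remotely. A tiny sketch in the same p5.js style, with an arbitrary key choice:

  // Toggle the frequency graph's visibility from the keyboard; the 'g' key
  // is a placeholder for whatever keystroke a steno macro would emit.
  let graphVisible = true;

  function keyPressed() {
    if (key === 'g') graphVisible = !graphVisible;
  }

  // ...and in draw(), render the frequency bars only when graphVisible is true.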

PROJECT DESCRIPTION

Champion Steno Lightboard

Lightboard is a teaching tool I developed for Champion Steno, an online stenography course. Stenographers in training must take speed tests to advance to progressively faster classes until they reach the minimum certifiable speed of 225 WPM. A budding stenographer must not only take down dictation but also track and mark speakers as they go. At brick-and-mortar schools, these tests are usually conducted by having several teachers sit at the front of the room with name placards indicating whether they're the judge, the plaintiff, the defendant, or any other speaker who might be present at trial or during a deposition.

Online classes make this difficult because the speed tests are usually conducted over video conference, so students can't easily tell who is speaking. Lightboard lets remote teachers distinguish speakers by assigning each one a keyboard shortcut; pressing the appropriate key causes the current speaker's picture to "light up." Speaker-change events are sent over a WebSocket connection, update the Redux global state, and propagate almost instantly to the client view.
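
A minimal sketch of that flow, assuming a hypothetical server URL, event shape, and key-to-speaker mapping (not Lightboard's actual code):

  // Speaker-change events arrive over a WebSocket and are dispatched into a
  // Redux store; the React view reads activeSpeaker to "light up" a picture.
  import { createStore } from 'redux';

  const initialState = { activeSpeaker: null };

  function speakersReducer(state = initialState, action) {
    switch (action.type) {
      case 'SPEAKER_CHANGED':
        return { ...state, activeSpeaker: action.speakerId };
      default:
        return state;
    }
  }

  const store = createStore(speakersReducer);

  // Listen for speaker-change events pushed by the server.
  const socket = new WebSocket('wss://example.com/lightboard'); // hypothetical URL
  socket.onmessage = event => {
    const { speakerId } = JSON.parse(event.data);
    store.dispatch({ type: 'SPEAKER_CHANGED', speakerId });
  };

  // Teacher side: a keyboard shortcut maps to a speaker and notifies the server.
  const keyToSpeaker = { j: 'judge', p: 'plaintiff', d: 'defendant' }; // assumed mapping
  document.addEventListener('keydown', e => {
    const speakerId = keyToSpeaker[e.key];
    if (speakerId) socket.send(JSON.stringify({ speakerId }));
  });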

Lomoji

Lomoji is a tool for polyglots who are learning more than one language at the same time. As a polyglot myself, I often found myself going back to Google Translate to learn how to say a given word or phrase in more than one of my target languages, thinking: "Now I know what 'cumbersome' is in Spanish, but wait, what's 'cumbersome' in Dutch? Or in Japanese? Or in Korean?"

Lomoji mitigates this problem by batching translations and presenting the user with a list of translations, one for each language they are studying (as long as the language is supported by the Google Translate API).
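
A sketch of that batching step using Google's official Node.js client; the language list and helper name are illustrative, not necessarily how the repo implements it:

  // Translate one phrase into every language the user is studying at once.
  const { Translate } = require('@google-cloud/translate').v2;

  const translate = new Translate(); // reads GOOGLE_APPLICATION_CREDENTIALS

  async function translateIntoAll(phrase, targetLanguages) {
    // Fire all requests in parallel and collect one translation per language.
    return Promise.all(
      targetLanguages.map(async lang => {
        const [text] = await translate.translate(phrase, lang);
        return { lang, text };
      })
    );
  }

  // Example: one lookup, every target language at once.
  translateIntoAll('cumbersome', ['es', 'nl', 'ja', 'ko'])
    .then(list => list.forEach(({ lang, text }) => console.log(lang, text)));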

GITHUB REPO

SRCCON Session Transcripts

SRCCON is a multi-part annual journalism and technology conference that uses realtime captioning both for accessibility and for documentation. One of the organizers' objectives was to make links to live transcriptions available to all attendees as sessions happen. Through a custom version of Aloft, conference organizers can point website links and Slackbots to custom-generated URLs and pull text from API endpoints. They can then blast Etherpad and transcript links over Twitter or Slack to conference-goers as sessions begin and end.
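
A sketch of that distribution step; the Aloft transcript endpoint below is a hypothetical placeholder, while the Slack incoming-webhook format is standard:

  // Pull the live transcript text and blast the session link to Slack.
  // Requires Node 18+ for the built-in fetch.
  const SESSION_URL = 'https://aloft.example.com/srccon/session-1'; // hypothetical
  const SLACK_WEBHOOK = process.env.SLACK_WEBHOOK_URL; // standard incoming webhook

  async function announceSession() {
    // Assumed plain-text endpoint exposing the transcript so far.
    const transcript = await fetch(`${SESSION_URL}/text`).then(r => r.text());
    console.log(`Transcript so far: ${transcript.length} characters`);

    // Post the live-transcript link to attendees over Slack.
    await fetch(SLACK_WEBHOOK, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: `Live transcript for Session 1: ${SESSION_URL}` }),
    });
  }

  announceSession().catch(console.error);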

SRCCON'S SESSION TRANSCRIPTS

CONTACT.

Instagram

@stanographer

GitHub

@stanographer

Twitter

@stanographer

LinkedIn

/in/stanographer