@rsimmons
Last active August 29, 2015 14:17
Ligaya feedback on salami 2015-03-25

She was already aware of the general type of thing we were working on going into this.

  • On the prior page, she thought at first that the explanatory text was part of the first difficulty group and only applied to that. It should be more clearly separated.
  • She chose the easiest prior-knowledge setting. She knows a few Japanese words. She left the default Normal listening mode.
  • Liked the sounds
  • Something she did countless times: click to replay, then start typing her answer. But of course the input box had lost focus, so she would type, notice nothing appeared, and have to type again. She probably did this 100 times; painful to watch.
  • Some of the segments were shockingly long/difficult as context (e.g. salt, ramen, etc.). From my perspective, this was by far the biggest problem for her. As she used it more, she generally figured stuff out, but this was a constant pain/issue.
  • Need to tune context length limits, or something. Better to re-use short segments than have ultra-hard long ones. Or could limit them by number of unknown words allowed in them?
  • Would it make sense to show furigana over non-clozed words? Would have helped her identify clozed part quite a lot, I think.
  • This also made me think: should it just continue on to asking meaning if you get the phonetic wrong? What would the implications be?
  • Slo-mo might help. Could even make it automatic, e.g. based on the characters-per-second of the segment.
  • She asked why, when she failed phonetic, it didn't tell her what the word was.
  • She entered "not" when it wanted "not being", and she was annoyed. She didn't notice the oops buttons at first; I'm not sure whether she eventually found them on her own or I pointed them out.
  • While long video was playing, she wanted to click to stop or restart it.
  • She had incorrect spell-corrects in her favor that she didn't notice: where->here and what->that. We could even have just a hard-coded special-case list for these.
  • First time seeing new words, she would think really hard trying to figure them out or guess from context. It made me feel that for her at least, "lessons" would have been much better, so that she wouldn't have to be graded wrong the first time. Then again, some of the words she already knew first try.
  • She discovered she could cheat and see clozed text with hover. She used this to cheat when the context was way too hard. Without this, she would not really have been able to keep using it, since some words only had super-hard contexts.
  • She didn't mention listening-mode setting at top. I asked her about it afterwards, and it seemed like she had noticed it but assumed it wouldn't be useful. She definitely didn't guess what it would do. She seemed to think it might tune SRS pace or something.
  • She used it for probably 10-15 minutes and then said "oh it was cool that that one sentence had a translation, I wish the other ones did". She hadn't noticed the translations, and even after noticing that one, she didn't see right away that every sentence had one.
  • She suggested that if people had a failure streak, could encourage them by telling them about their progress so far.
  • Would like to be able to see summary/list of words she's learned
  • Coming back to site later, would like to have a "Here's what we're going to do today" type welcome, talking about stuff learned till now, plan for this session, etc.
  • She felt it was the most compelling thing we've built. She really felt compelled to get things correct, which made the unfair-feeling parts a bit painful to watch.
  • Her kana-reading is good, but not perfect. She would not have been able to type a couple of difficult words without my help (こんにちは, ちょっと). We could maybe have a kana-assist mode checkbox somewhere that would explain the characters you need to enter into the IME, shown below incorrect phonetic answers.
  • This is unjustified and potentially wishful thinking on my part, but I felt like I could see her listening ability rapidly improve as I watched, like her ability to parse contexts into phonemes and such.
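The "limit contexts by number of unknown words" idea above could look something like this. A minimal sketch, assuming segments are lists of word tokens and we track the learner's known words; `pick_context`, `max_unknown`, and `max_len` are all made-up names and values, not salami's actual API.

```python
# Hypothetical context picker: prefer short segments with few unknown words
# over ultra-hard long ones. All names and thresholds are illustrative.

def pick_context(candidates, known_words, target_word, max_unknown=1, max_len=12):
    """Return the easiest candidate segment for clozing target_word."""
    def difficulty(seg):
        # count words the learner doesn't know, excluding the clozed word itself
        unknown = sum(1 for w in seg if w != target_word and w not in known_words)
        return (unknown, len(seg))

    usable = [s for s in candidates
              if difficulty(s)[0] <= max_unknown and len(s) <= max_len]
    pool = usable or candidates  # fall back to the least-bad candidate
    return min(pool, key=difficulty)

# Example: two candidate contexts for the cloze word "ramen"
known = {"I", "ate", "the", "salt", "was", "delicious"}
short = ["I", "ate", "ramen"]
long_hard = ["the", "tonkotsu", "ramen", "broth", "simmered", "overnight"]
print(pick_context([long_hard, short], known, "ramen"))  # -> ['I', 'ate', 'ramen']
```

Re-using the same short segment repeatedly is the explicit trade-off here, per the note above: better repetition than an impossible context.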
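The automatic slo-mo idea could be as simple as a speech-rate threshold. A sketch only: the 7.0 chars/sec cutoff and 0.7 playback rate are invented numbers that would need tuning against real segments.

```python
# Hypothetical auto slow-mo: slow down segments spoken faster than a
# characters-per-second threshold. Threshold and rate are made-up values.

def playback_rate(text, duration_secs, threshold_cps=7.0, slow_rate=0.7):
    """Return a playback rate for a segment based on its speech density."""
    cps = len(text) / duration_secs
    return slow_rate if cps > threshold_cps else 1.0

print(playback_rate("こんにちは", 0.5))  # 10 chars/sec -> 0.7 (slow-mo)
print(playback_rate("こんにちは", 1.0))  # 5 chars/sec -> 1.0 (normal)
```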
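The hard-coded special-case list for spell-corrects in the user's favor might work as a blocklist checked before the fuzzy match. A sketch under assumptions: the pair list is just the two cases observed above, and the Levenshtein fuzzy match stands in for whatever spell-correct salami actually uses.

```python
# Hypothetical answer checker: never silently spell-correct across a known
# confusable pair (where->here, what->that), even though they are only one
# edit apart. Pair list and tolerance are illustrative.

CONFUSABLE_PAIRS = {("where", "here"), ("what", "that")}

def edit_distance(a, b):
    """Plain Levenshtein distance."""
    if not a: return len(b)
    if not b: return len(a)
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def accept_answer(typed, expected):
    typed, expected = typed.strip().lower(), expected.strip().lower()
    if typed == expected:
        return True
    if (typed, expected) in CONFUSABLE_PAIRS or (expected, typed) in CONFUSABLE_PAIRS:
        return False  # a real wrong answer, not a typo
    return edit_distance(typed, expected) <= 1  # ordinary typo tolerance

print(accept_answer("where", "here"))  # False, despite edit distance 1
print(accept_answer("hare", "here"))   # True, ordinary one-character typo
```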