IBM's Watson and Apple's Siri have been making big news over the past few months. This fantastic infographic from medicaltranscription.net (it's so big we've broken it into 8 parts!) helps explain how automatic speech recognition (ASR) works.

While the technology behind ASR is incredibly complex and powerful, the end result from a user experience point of view for Siri has been hit or miss. As much fun as it is to play around with Siri and ask her silly questions, we ultimately just want to get stuff done with her as a valuable personal assistant - that's what she was pegged as in the first place. As excited as we were to really get to know her, in our first few hours we found she's all over the place in what she can understand and what she can do. Our fellow contributor, Chris Perez, seems to agree.

So here's to hoping that as the technology improves and Siri matures out of her beta phase, she gets much better at understanding what it is we're asking for. Either that or -we-are-going-to-have-to-talk-like-this-from-now-on and pray she understood the third time before we give up altogether. How have your own experiences been with Siri?