Attending an AI Conference in Taiwan

I attended a local AI conference in Taipei. Conferences are always a great opportunity for academic exchange and for catching up with friends. As an outsider, my goal was to make new connections, and thankfully a colleague graciously introduced me to some of their acquaintances. I’m noticing some interesting differences in engineering social norms here, with a unique Taiwanese twist, which I’ll elaborate on once I’ve had more time to observe.

The conference itself was packed with discussions of popular AI topics, including generative AI and the anticipated rise of robotics. It’s impressive to see the range of AI applications, many powered by large language models that emerged relatively recently (since the “Attention Is All You Need” paper in 2017). While it’s easy to focus on generative AI’s ability to create content, we often overlook the extensive data and training it requires.

We’re actively advancing AI by pushing the boundaries of techniques like chain of thought and ReAct. Data, however, remains a fundamental factor, as evidenced by the significant gap between general and domain-specific AI applications. Building a chatbot for general use cases is relatively straightforward: abundant data and computational resources (i.e., funding) allow for massive scale and significant returns. In contrast, domain-specific applications often face limits on scale, even at the enterprise level, and data can be scarce. I had wondered whether this was just my own observation, but the conference demos from leading consulting and AI companies seemed to confirm the trend.
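For readers unfamiliar with the chain-of-thought technique mentioned above, here is a minimal sketch of the zero-shot variant in Python. The function names and prompt strings are my own illustrative choices, not from any particular library; the core idea is simply that appending a reasoning cue to the prompt encourages the model to produce intermediate steps before its answer.

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# The prompt templates below are illustrative, not from a specific library.

def direct_prompt(question: str) -> str:
    """Ask for the answer with no reasoning scaffold."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Append the well-known 'Let's think step by step' cue,
    nudging the model to emit intermediate reasoning first."""
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    q = "A train leaves at 3pm and travels for 2 hours. When does it arrive?"
    print(direct_prompt(q))
    print(chain_of_thought_prompt(q))
```

In practice the chain-of-thought string would be sent to an LLM API; the point of the sketch is only the difference between the two prompt shapes.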

On that note, creating AI that generates content isn’t overly difficult; ensuring its safe and ethical use is the real challenge. Safety has been a primary concern since OpenAI’s earliest research, and it becomes even more critical as models gain “reasoning” capabilities, which carry a higher risk of hallucination. This remains a hot topic, and even the solutions developed could themselves be biased, a cycle that continues to demand extensive research. Coincidentally, a talk I attended just a few days after the conference confirmed this observation.

I spend most of my time leading my team and striving to stay current with developments. Sometimes I worry that things are moving so quickly that I’m losing touch. A recent small quiz offered a bit of reassurance in that regard.
