Cities in distress
Seoullo 7017, Seoul / High Line, New York / Rooftop garden, New York
In recent years, around the world, along with the rise of ecological awareness, we've seen an increase in subways, trams, bicycles, and electric kick scooters. In the Asian countries I visited long ago, bicycles were a vital means of transporting goods, and I often saw cycle rickshaws too. After that came the era of motorized scooters and motorcycles, which are still common throughout Asia today. The problem is the sheer number of scooters and motorcycles overflowing onto the roads, making it difficult to even walk on the sidewalks. Exhaust emissions are also an issue.
Back then, I found it somewhat endearing to see people on scooters speeding along with coats draped over their handlebars for protection from the cold wind. I also felt a certain pride that Japanese products dominated the market. However, parking and exhaust emissions remain difficult problems to solve.
Electric cars, bikes, and kick scooters seem eco-friendly at first glance, but I wonder how their power supply and generation facilities are maintained. In the future, individual households and facilities will probably need their own natural power generation systems.
Also, with the recent spread of AI and similar technologies, construction of massive data centers is advancing. But I do feel a contradiction in the enormous amounts of electricity and cooling water they consume. As for me, I just occasionally cool my overheated, unproductive brain with tap water.
By the way, bicycles and kick scooters darting around town have become more common, causing occasional close calls. Especially in Japan, bicycles and kick scooters zooming through narrow alleys and tourist spots as if they own the place are a bit scary. Today's urban structure, in which cars, bicycles, motorcycles, and pedestrians must coexist, seems to have already reached its limits. The corridor-style sidewalks I've seen in some cities, which separate pedestrians from car and bicycle lanes, are one approach. It's a good topic for discussion, but I wonder whether it truly offers a fundamental solution. Speaking of which, on my evening walks I also encounter large groups of joggers. It seems it might be time for a high line like New York's, just for jogging.
Is AI a substitute for our brains?
I can barely recall when I first encountered the term “artificial intelligence,” so I promptly asked an AI about it.
Heron's automata are nostalgic tales; I even experimented with trying to build something based on their structures. In Japan, these devices are also called “karakuri dolls” or “puppets” (kugutsu). The concept of humanoid mechanical dolls as human surrogates, like Čapek's robots, continues to this day. The mechanistic theories explored by Descartes and others, questioning what it means to be human, later merged with digital technology—computers—giving rise to new thinking (?) robots.
When I assembled my first personal computer and began teaching myself programming, I stumbled upon a book that profoundly impacted me. It was the Japanese translation of Joseph Weizenbaum's “Computer Power and Human Reason.” The ELIZA program featured in this book completely transformed my understanding of programming.
Like today’s AI, ELIZA was a natural language processing program. Based on the framework of psychotherapy, it played the role of a doctor—not to offer advice to the patient, but essentially to echo the patient’s own words. Patients would interpret ELIZA’s responses as meaningful and diagnose themselves, mistakenly believing they had received an accurate assessment. Because this process relied on the users’ own self-analysis, ELIZA was also dubbed an “artificial incompetence” (the ELIZA effect).
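To make the mechanism concrete, here is a minimal sketch of the idea in Python. It assumes nothing about Weizenbaum's original script; the keyword patterns and phrasings are invented purely for illustration of the echo-and-reflect technique.

```python
# A minimal ELIZA-style responder: find a keyword pattern, swap first- and
# second-person pronouns, and hand the statement back as a question.
# All rules and wordings here are illustrative assumptions, not the original script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echo reads from the 'doctor's' point of view."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first matching rule with the captured text reflected back."""
    text = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # fallback when no keyword matches

if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))
    # -> "Why do you feel nobody listens to you?"
```

The point of the sketch is how little machinery is needed: the "insight" the patient experiences is supplied entirely by the patient.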
By contrast, Weizenbaum himself seemed deeply troubled by how easily people were influenced by the program, worrying that such systems might inhibit human thinking and raise serious ethical concerns. Norbert Wiener’s work on communication, including the development of the Wiener filter, is also fascinating; yet it seems that outstanding programmers and researchers often become ensnared in the conceptual worlds—and pitfalls—they themselves create.
Today’s AI, which enables users and research subjects to access diverse and highly detailed answers and advice from vast data centers, appears at first glance to expand human knowledge and insight. Yet concerns have been raised that it may instead inhibit human thinking and discernment. In other words, there is a fear that people may gradually stop thinking on their own. It is, so to speak, a condition of “human incompetence.” Consequently, there is also the risk of becoming trapped in self-imposed constraints, in which even the information accumulated in the past through books, media, and the internet may slowly dry up.
Joseph Weizenbaum, "Computer Power and Human Reason," 1976
Between Expression and Creativity
Studying art and design
Many art students, upon entering art schools or universities, must begin with drawing and sketching, despite having already repeated these exercises for years as examinees or during gap years. They start by observing subjects known as motifs or objets, learning to grasp the space these subjects occupy, the light falling upon them, and the resulting shadows and highlights. After mastering these foundational techniques, they are encouraged to attempt their own concepts and forms of expression.
However, even painters and designers who have thoroughly mastered these fundamentals and acquired excellent drawing skills often become stuck at the next stage—the process of developing and testing their own ideas and expressions. I myself was a thoroughly unmotivated student, yet for how many years was I compelled to silently sketch mute plaster casts, vegetables, and fruit? How did an atmosphere arise in which this was presented as a necessary and painful path to becoming a good painter or designer?
I find myself wanting to turn my gaze toward those struggling students who are still silently sketching before cold plaster casts and offer them a single word of encouragement.
Things I wonder about every day
Design for Whom
For example, why are the seams and tags on most clothing—even undergarments—placed on the inside? Considering the fundamental purpose of clothing—to protect the body and promote comfort—shouldn’t seams and tags be positioned on the outside, where they would not irritate the skin? Baby clothes, after all, often have their seams on the outside.
I did a bit of research, but the origins of this convention remain unclear. It seems to be related, at least in part, to branding in fashion. In recent years, however, a positive trend has emerged: tagless garments, which replace irritating neckline tags with printed labels, have become more common. Seamless shirts and undergarments have also appeared, though I have yet to find any that are truly satisfying. I have been wearing my undergarments inside out for quite some time now. I sometimes worry that if I were to have an accident and end up in the hospital, it might be embarrassing if someone noticed.
While there may be reasons related to sewing techniques or the pursuit of aesthetic beauty, is it really so outrageous to suggest that unattractive seams could simply be left visible on the outside? After all, some fashion designers have deliberately championed the idea of exposing construction details and embracing imperfect bodies.
Many modern tools boast a wide range of functions, making our lives more convenient and efficient while opening up new possibilities. Yet at the same time, they impose new obligations on their users. Brooms and vacuum cleaners silently instill in us the sense that we must clean. Washing machines create the feeling that we must diligently do the laundry. In this way, as we pursue efficiency and convenience, we may also be continuously adding new tasks to our lives. What, then, truly constitutes comfortable tools—and good design?
Commercials are a bit weird.
Commercials and Devices
Lately, when watching commercials on television or on computer screens, aside from the familiar annoyance of their constant presence, it has struck me that the text and characters appearing on screen have become unnecessarily large. Indeed, even when compared with television commercials from just a few years ago, the change is striking. What could be the reason for this?
When I first landed a job as a junior designer at an advertising agency, I was instructed to make the text in magazine advertisements as small as possible, to use a medium-weight Mincho (Japanese) font for the body copy, and to keep the catchphrase as restrained as feasible. The reasoning was that this would look smarter and more modern. As a rookie, however, I secretly imagined consumers first noticing the catchphrase, then reading the body text, and finally heading to the store. In other words, it was an era dominated by image strategy.
In recent years, many internet users have shifted to browsing primarily on smartphones and tablets. It is clear that advertisers are now striving to make large captions and icons more prominent on small smartphone displays in order to capture attention quickly.
Conversely, since their inception, televisions and computers have focused intensely on improving display resolution alongside processing speed. This has resulted in faster display performance and the emergence of technologies such as 4G/5G and 4K/8K. Increasing resolution was intended not only to enhance realism but also to increase the amount of information that could be displayed. If even small text can be easily recognized, the volume of information on screen naturally increases.
However, when we consider the recent struggles of 4K and 8K televisions in the consumer market, it becomes clear that users’ concerns lie in a realm quite different from sheer resolution or realism. Meanwhile, although smartphones continue to advance technologically with 4G/5G and high-resolution displays, the current state of advertising feels like a step backward—or perhaps a return to an earlier era of television, when slogans were endlessly repeated, echoing viewing habits from long ago.
During the period when personal computers were becoming widespread, terms such as “presence” (telepresence, real presence) and “awareness” were frequently used. I believe these concepts referred to the accurate recognition of information and the environment in which that recognition takes place. More recently, the term “context awareness” has emerged. Does this imply not only aiding recognition, but also providing context and foresight?
This may refer to the ability to generate a form of collective intelligence from our everyday tasks and operations, thereby enhancing users’ own knowledge and awareness. If so, could such an approach truly eliminate the annoyances and stresses illustrated by the earlier example of advertising, and create an environment that is genuinely cost-effective? (I fully understand, of course, that the world of gaming operates according to a different set of principles.)
Right and left
Things you pick up
Socks I purchased from a manufacturer in Nara Prefecture, Japan—a major sock-producing region—had an R and an L stitched onto each one. Most socks, by contrast, are ambidextrous (made to fit either foot), and I always struggle to tell which is the right and which is the left. I usually end up deciding by feeling for the slight bulge where the big toe has begun to push out after some wear. Five-toe socks and tabi-style socks are convenient in this respect, as there is no confusion. Still, it would be nice if there were some kind of right–left indicator. Simple R and L markings would work, or perhaps different colors for each side. It would also be wonderful if there were a service that allowed you to replace just one sock when it wears out or develops a hole.
Using a smartphone with my left hand can also feel slightly inconvenient—for example, when the beginning of horizontally written text slips out of view. I sometimes wish it were possible to switch the writing direction from left-to-right to right-to-left, though that would probably feel like reading a very old Japanese book.
In any case, there are so many everyday products in which left–right distinctions cause confusion that I often find myself at a loss. Tabi socks were originally split to accommodate geta clogs or waraji (straw sandals), but long ago, when I was responsible for producing a catalog for jikatabi (traditional Japanese split-toe work footwear), I was struck by their remarkable variety—not only in forms designed for mountain hiking or river walking, but also in the distinct tread patterns molded into their rubber soles.
But why do jikatabi have a split toe in the first place? I can understand the idea that the big toe serves as a pivot point, contributing to stability while walking. It also seems likely that they were designed to be worn with snowshoes or similar equipment. Recently, colorful and fashionable jikatabi have begun to appear, and some fashion designers have even been captivated by them. They really deserve to be more widely adopted. I only wish the name were a bit more fashionable as well. Names such as “Jika-tabi” or “Jika boots” already seem to exist, after all.
Gestaltzerfall
Visual Psychology and Design
While rereading a book the other day, I came across the term “Gestaltzerfall” (Gestalt breakdown). Curious about its meaning, I looked it up briefly and found it described as “a perceptual phenomenon in which, by staring continuously at letters or shapes, the overall image originally perceived as a coherent whole is lost, and the individual elements begin to appear fragmented.” Essentially, it refers to the tendency to overlook the bigger picture or what truly matters when one becomes too focused on details.
Recently, I stayed at a small, somewhat run-down local inn, where notices listing various prohibitions were posted everywhere—along the corridors, at the open-air bath, and even in the corners of the guest rooms. This rather dampened what little sense of a resort atmosphere there might have been. Japan seems to have an especially large number of such cautionary notices, not only in places like this but also throughout its cities and public facilities. Perhaps this reflects an emphasis on compliance and rule-following, a national tendency to avoid trouble, or even an excess of consideration for others. When walking around cities in Europe or North America, one can sometimes feel at a loss because there are no instructions at all. And yet, in a quiet town where there are no signs telling you what to do, it can feel as though your own survival instincts have come back to life—and that, too, can be comfortable in its own way.
The other day, I also read a book titled Noisy Japan…, which made me feel that excessive instructions of all kinds can gradually sap one’s sense of personal agency and independent thought. Come to think of it, I put some effort into choosing a highly functional toilet for my home. It opens and closes automatically and provides various prompts, which is certainly convenient—but it can feel as though a well-meaning butler is standing just behind you while you are using it, a thought that is slightly unsettling. I would rather enjoy the moment in quiet satisfaction. But I digress.
Materials and Selection
Materials have a lifespan
Since this is a piece I wrote down quite a long time ago, it may contain many passages that feel borrowed from elsewhere.
Over the years, devices and furniture that I have relied on for a long time have begun to break down one after another. Rubber parts crumble and fall apart, and once-beautiful plastic or resin casings have become noticeably yellowed. While such deterioration—or perhaps aging—can foster a certain affection, it is often the case that replacement parts are no longer available: warranties have expired, and production of spare parts has been discontinued. In an age that frequently extols ecology and recycling, more manufacturers are at least beginning to prepare replacement parts alongside the release of new products. Still, it would be even more helpful if products also specified the expected lifespan of individual components and the recommended timing for replacement.
After World War II, Japan struggled with severe shortages of resources and materials across many fields. Old newspapers and magazines, bamboo, wood, and straw continued to be widely used, and purchased food or household goods were often wrapped in old newspaper to be taken home. Wood shavings and straw were also used as packing and cushioning materials when transporting pottery or equipment. Gradually, with postwar reconstruction, many new materials and manufacturing methods—chemical and resin-based materials such as plastics, vinyl, and rubber—were developed and came to replace traditional materials like waste paper, bamboo, and wood.
As part of the reconstruction effort and the drive to overcome shortages of paper and daily necessities, industries such as papermaking and textiles adopted large-scale machinery, while local crafts using wood and bamboo, once produced in small quantities in various regions, were replaced by plastic, resin, and rubber products designed for mass production. For example, tableware and baskets traditionally made from local wood or bamboo were supplanted by machine-produced plastic alternatives. In this process, regionally rooted skills and crafts—papermaking, weaving, and traditional handicrafts, along with the people who practiced them—were gradually lost. It cannot be denied that these new materials and manufacturing technologies simplified production processes and enabled mass production, thereby supporting rapid postwar recovery and large-scale industrial transformation. At the same time, however, it is also undeniable that they contributed to the decline of local artisans and to the loss of household wisdom related to reusing materials, managing resources, and maintaining a degree of self-sufficiency.
Moreover, the fact that plastics and similar materials—once celebrated as symbols of modernity—would later leave serious problems for future generations was largely overlooked. When adopting new materials and technologies, it is therefore essential not only for developers but also for manufacturers, engineers, and designers who use them to possess foresight and to consider what kinds of changes and consequences their choices may bring about in the future.
During the 1970s, a period of rapid economic growth, a wide variety of technologies and materials came into widespread use. Rubber, plastics, nylon, vinylon, ABS resin, polyvinyl chloride, and concrete were employed in great quantities. Cheap toys and everyday goods were sometimes derisively labeled “Made in Japan.” There were even galleries in Tokyo that celebrated the future of plastic. In the process, traditional and simple packaging materials such as old newspaper, bamboo bark, cedar bark, waxed paper, hemp, and straw gradually faded from memory.
Our daily lives have come to be filled with chemically processed products. Needless to say, these materials have contributed greatly to making modern manufacturing and design easier and more affordable. In Amsterdam, a city overflowing with bicycles, I saw many bicycles with wooden frames. I have also heard that bicycles with bamboo frames exist in Japan. Without chemically processed materials, how might automobiles, trains, home appliances, and everyday goods have developed? Perhaps new processing methods for materials such as thinned timber, bamboo, paper, stone, or clay would have emerged instead.
Many technologies and manufacturing methods were created as tools for human use and survival, and they have been endlessly refined and improved alongside humanity itself. Such technological innovations have nurtured the prosperity we enjoy today. In this way, diverse forms of design have been born for the sake of humankind. At the same time, however, it may be worth reflecting on where humans stand within ecosystems and the environment, and what kinds of relationships we maintain with them.
Labor Shortage and Design
Robotics and Digitalization
Taiwan
Every field seems to be suffering from a labor shortage, and even fast-food restaurants and small, independent eateries are gradually switching to ordering systems that use tablets or smartphones. At first, I was confused and found myself looking around for a waiter, but I have slowly grown accustomed to it. With no conversation with a waiter and no back-and-forth over the order, it has even become somewhat comfortable.
That said, when visiting an unfamiliar restaurant, having to choose and decide solely based on a tablet menu and photographs can be limiting. I sometimes find myself wishing for a bit of advice—wondering what ingredients are used, or how other customers, especially regulars, tend to combine their orders. For example, with a set like A + B + C, I would like to know how hungry one should be to order it, how long the meal is likely to take from ordering to finishing, and what kind of satisfaction it offers. Simple “good” or “not good” ratings from other customers do not seem sufficient to answer these questions. Since these systems were introduced in response to labor shortages, I would probably be scolded for expecting them to be so fully equipped. Still, I find myself missing the days when I could ask a waiter what the restaurant’s recommendations were, or which wine or sake would pair well with a dish.
Come to think of it, the other day I was wandering around looking for a place where I could eat at an odd hour and stopped by a small, slightly old-fashioned restaurant. The moment I stepped inside, I was practically scolded: “Hurry up and decide—we’re just two elderly people running this place, we’re busy and can’t wait. This is what we recommend.” And yet, for some reason, I felt as though I had returned to a familiar and nostalgic place. Perhaps in the future, a slightly more personable waiter robot will greet us instead—though it might even deliver a sarcastic remark or two.
We who have become estranged from nature
Divergent Design
The other day, I read an online news article about people who suffer from allergies to air fresheners and similar products. Come to think of it, everything from detergents to bath salts, and from food products to cooking itself, seems to be overflowing with excessive scents and flavors.
For a time, combining hobby with practicality, I maintained a rooftop garden in a neighborhood from which Tokyo Skytree could be seen in the distance. We harvested asparagus, tomatoes, okra, myōga ginger, eggplant, cucumbers, and fruits such as grapes and blueberries according to the seasons. Even the vegetables and fruits we could not finish eating—all grown from seeds or seedlings that had, in one way or another, been artificially manipulated—allowed us to experience the genuine flavors inherent in nature’s bounty. There was also a simple joy and quiet thrill in harvesting hydroponically grown lettuce and other leafy greens and bringing them directly to the dinner table. At one point, we even calculated how much food could be produced if every rooftop in Tokyo were turned into farmland.
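That back-of-the-envelope exercise can be reproduced with nothing more than a few multiplications. The sketch below shows only the arithmetic; every figure in it is a placeholder assumption for illustration, not a number from our original estimate or from any survey.

```python
# A rough rooftop-farming estimate. All inputs are assumed placeholder values;
# replace them with whatever measured figures you trust.
usable_rooftop_area_m2 = 50_000_000       # assumed usable flat-roof area in the city
planted_fraction = 0.5                    # assumed share of that area actually planted
yield_kg_per_m2_per_year = 3.0            # assumed mixed-vegetable yield
consumption_kg_per_person_per_year = 100  # assumed vegetable consumption per person

harvest_kg = usable_rooftop_area_m2 * planted_fraction * yield_kg_per_m2_per_year
people_fed = harvest_kg / consumption_kg_per_person_per_year
print(f"Roughly {harvest_kg / 1000:,.0f} tonnes a year, "
      f"enough vegetables for about {people_fed:,.0f} people.")
```

With these invented inputs the answer comes to tens of thousands of tonnes a year; the interesting part is not the number itself but how quickly the assumptions dominate it.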
In the city where I now live, an “Urban Forest” project has been launched. Yet on a daily basis, we witness a relentless cycle in which tall buildings are torn down, only to be replaced by even taller ones. While rooftop gardens do exist, I rarely hear of actual farmland being created. My interest in urban agriculture stemmed from a desire to help city dwellers reconnect with authentic flavors and with nature itself; to encourage leisure farming as a means of resilience; to establish lifestyles grounded in practical survival skills, side occupations, and double cropping; to make use of rainwater and compost as part of urban waste management; and to cultivate young and middle-aged urban residents as active participants in agriculture.
I cannot help but feel some concern that overly processed foods and artificially added flavors may eventually lead to the loss of these original tastes and scents. As a child, I was a typically mischievous boy, and I still fondly remember the sweetness that filled my mouth when I picked and ate wild strawberries and red bayberries while hiking in the hills.