Forgive me for choosing such a controversial title; the original title is “What a Browser Isn’t”. Personally I feel the author wanders a bit off his own topic, but that does not affect the point he wants to make.

Usability, and accessibility in particular, has always been one of the focal points of front-end debate. But think about it carefully: is it worth a great deal of extra development cost to “satisfy” blind readers who will never see the page, or users who have deliberately disabled JavaScript?

Recall that in the very beginning computers were silent. If you wanted sound, you had to install a separate sound card. After a while, some computers started shipping with a sound card by default, while others kept up the “mute tradition.”

Many years later, motherboard manufacturers integrated the sound card directly onto the motherboard, and by now almost every computer comes with one. The question, then, is: what did the multimedia industry do during that period to change all this?

In the beginning, if an application wanted to produce sound, it could only do so through the built-in PC speaker. After a while, applications appeared that could use both the PC speaker and the sound card.

That said, does anyone still care whether their machine has a sound card? I don’t think so. I suspect people have even forgotten about the speaker inside the case.

For example, I have never seen a game automatically turn off its sound because the machine has no sound card; of course, if my ears can’t hear it, that’s another matter (the original author’s joke here is rather darker).

That said, the story above closely parallels the story of the browser and JavaScript. The difference is that developers today, when building applications, are still catering for the case where there is no script support.

In fact, much like the sound card in its early days, JavaScript was invented in 1995 (15 years ago already). At the time its share among browsers was under 1%, and users (and even developers) back then considered it dispensable.

My point is that every web application should run in as many different environments as possible, but that does not mean unconditionally accommodating every situation and behaving identically under all circumstances.

For example, in a browser without JavaScript support, a news site should still display its main content (the news), but there is no obligation to guarantee that a photo-album widget that relies on JavaScript keeps working.
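To make that concrete, here is a minimal sketch of the idea; it is not code from the article, and the element id “album” and the class name “slideshow” are made-up hooks. The news markup is plain HTML that every client can render, while the album behaviour is layered on only when a script actually runs.

```javascript
// Sketch: enhance the photo album only when JavaScript is available.
// Without scripts the album stays a plain list of links, and the news
// text (ordinary HTML) renders either way.
window.onload = function () {
  var album = document.getElementById('album'); // hypothetical album element
  if (!album) return;                           // no album on this page

  // Mark the list so album-specific CSS/JS can turn it into a slideshow.
  album.className += ' slideshow';
  // ...slideshow controls would be attached here.
};
```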

What we now call a “browser” must be an application that can understand HTML, render pages with CSS, and run JavaScript, all at once. An application that can only do one or two of these cannot really be called a “browser” at all.

For example, a search engine understands HTML (and some CSS, to detect cheating); we only need to provide content for it to index, and it does not need to understand much about the GUI side of the design.

In terms of content, I really only care about two things: search engines and browsers. So the first thing I do is write semantic HTML (which is not easy to do with HTML), then use CSS for layout targeting modern browsers, and then use JavaScript to add the CSS rules needed for IE (obviously the original author really hates IE).
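As a rough illustration of that last step, here is a minimal sketch, assuming the kind of setup the author describes; the user-agent check and the file name “ie-fixes.css” are my own assumptions, not details from the article.

```javascript
// Sketch: use JavaScript to pull in extra CSS rules only for old IE.
(function () {
  // Crude detection is enough for a sketch; IE 6-8 identify themselves
  // as "MSIE 6.0", "MSIE 7.0", "MSIE 8.0" in the user-agent string.
  if (!/MSIE [678]\./.test(navigator.userAgent)) return;

  // Inject a stylesheet containing IE-specific workarounds.
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = 'ie-fixes.css'; // hypothetical file of IE-only rules
  document.getElementsByTagName('head')[0].appendChild(link);
})();
```

In practice many people used IE conditional comments for the same purpose; either way, the point of the workflow is that modern browsers never pay for the IE fixes.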

That workflow of mine is sometimes criticized, because it requires JavaScript support in older browsers just for them to pull in their own CSS rules, and that can make things murky. But I really don’t believe that anything we call a “browser” lacks JavaScript support, even the ones that qualify as antiques (is this a dig at IE?).

All in all, our mindset should be to develop for the future, not for the past (“We should develop for the future not for the past.”).

We should serve the majority of users rather than the minority. If 0.1% of our users have JavaScript disabled, then in my opinion it is probably not worth spending a great deal of development time to win over that 0.1%.

At the same time, another fact is that if we lead users to feel our application works fine without JavaScript, they will not hesitate to disable it (with something like the NoScript plugin). That way we can hardly push the Web forward, and both we and our users will come to treat JavaScript as an optional accessory.

Finally, what I want to say is this: before embarking on actual development, we should first plan our limited resources (time, manpower, and so on) and ask whether the planned input and the actual output will live up to our expectations.