Unicode and HTML

From Wikipedia, the free encyclopedia

Web pages authored using HyperText Markup Language (HTML) may contain multilingual text represented with the Unicode universal character set. Key to the relationship between Unicode and HTML is the relationship between the "document character set", which defines the set of characters that may be present in an HTML document and assigns numbers to them, and the "external character encoding", or "charset", used to encode a given document as a sequence of bytes.

In RFC 1866, the initial HTML 2.0 standard, the document character set was defined as ISO 8859-1 (later HTML standards default to the Windows-1252 encoding). It was extended to ISO 10646 (which is essentially equivalent to Unicode) by RFC 2070. The document character set does not vary between documents, regardless of their language or the platform on which they were created. The external character encoding, by contrast, is chosen by the author of the document (or the software the author uses to create the document) and determines how the bytes used to store and/or transmit the document map to characters from the document character set. Characters not present in the chosen external character encoding may be represented by character entity references.

The relationship between Unicode and HTML tends to be a difficult topic for many computer professionals, document authors, and web users alike. The accurate representation of text in web pages from different natural languages and writing systems is complicated by the details of character encoding, markup language syntax, fonts, and varying levels of support by web browsers.

HTML document characters

Web pages are typically HTML or XHTML documents. Both types of documents consist, at a fundamental level, of characters, which are graphemes and grapheme-like units, independent of how they manifest in computer storage systems and networks.

An HTML document is a sequence of Unicode characters. More specifically, HTML 4.0 documents are required to consist of characters in the HTML document character set: a character repertoire wherein each character is assigned a unique, non-negative integer code point. This set is defined in the HTML 4.0 DTD, which also establishes the syntax (allowable sequences of characters) that can produce a valid HTML document. The HTML document character set for HTML 4.0 consists of most, but not all, of the characters jointly defined by Unicode and ISO/IEC 10646: the Universal Character Set (UCS).

Like an HTML document, an XHTML document is a sequence of Unicode characters. However, an XHTML document is an XML document, which, while not having an explicit "document character set" layer of abstraction, nevertheless relies upon a similar definition of permissible characters that covers most, but not all, of the Unicode/UCS character definitions. The sets used by HTML and XHTML/XML are slightly different, but these differences have little effect on the average document author.

Regardless of whether the document is HTML or XHTML, when stored on a file system or transmitted over a network, the document's characters are encoded as a sequence of bit octets (bytes) according to a particular character encoding. This encoding may either be a Unicode Transformation Format, like UTF-8, that can directly encode any Unicode character, or a legacy encoding, like Windows-1252, that cannot. However, even when using encodings that do not support all Unicode characters, the encoded document may make use of numeric character references. For example, the reference &#x263A; (☺) indicates a smiling face character from the Unicode character set.
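
For illustration, the following Python sketch (not drawn from any specification; the choice of Windows-1252 is an arbitrary example) shows that such a reference consists only of ASCII characters and therefore survives a legacy encoding that cannot represent the character itself:

```python
# Illustrative: a character absent from a legacy encoding can still be
# expressed as a numeric character reference built from ASCII characters.
smiley = "\u263A"                    # ☺, U+263A WHITE SMILING FACE
reference = f"&#x{ord(smiley):X};"   # "&#x263A;"

print(reference.encode("windows-1252"))   # b'&#x263A;' – encodes without loss
try:
    smiley.encode("windows-1252")
except UnicodeEncodeError as error:
    print("raw character cannot be encoded:", error)
```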

Character encoding

In order to support all Unicode characters without resorting to numeric character references, a web page must have an encoding covering all of Unicode. The most popular is UTF-8, in which the ASCII characters, such as English letters, digits, and some other common characters, are encoded with the same bytes as in ASCII. This leaves HTML markup (such as <br> and </div>) unchanged compared to ASCII. Characters outside the ASCII range are stored in 2–4 bytes. It is also possible to use UTF-16, where most characters are stored as two bytes with varying endianness; it is supported by modern browsers but less commonly used.
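
For illustration, the Python snippet below (the sample strings are arbitrary) shows how UTF-8 leaves ASCII characters, and hence HTML markup, byte-for-byte identical to ASCII, while other characters occupy two to four bytes:

```python
# Illustrative: UTF-8 byte lengths for ASCII markup and non-ASCII characters.
samples = ["<br>", "é", "合", "😂"]   # arbitrary examples

for text in samples:
    encoded = text.encode("utf-8")
    print(f"{text!r}: {len(encoded)} byte(s) -> {encoded.hex(' ')}")

# "<br>" encodes to 3c 62 72 3e, exactly its ASCII bytes;
# "é" needs 2 bytes, "合" needs 3, and "😂" needs 4.
```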

Numeric character references

In order to work around the limitations of legacy encodings, HTML is designed such that it is possible to represent characters from the whole of Unicode inside an HTML document by using a numeric character reference: a sequence of characters that explicitly spell out the Unicode code point of the character being represented. A character reference takes the form &#N;, where N is either a decimal number for the Unicode code point, or a hexadecimal number, in which case it must be prefixed by x. The characters that compose the numeric character reference are universally representable in every encoding approved for use on the Internet.[citation needed]

The support for hexadecimal in this context is more recent, so older browsers might have problems displaying characters referenced with hexadecimal numbers – but they will probably have a problem displaying Unicode characters above code point 255 anyway. To ensure better compatibility with older browsers, it is still a common practice to convert the hexadecimal code point into a decimal value (for example &#21512; instead of &#x5408;).[citation needed]
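
For illustration, the following Python sketch derives both forms of the reference for the character used in the example above (合, U+5408); it is not part of any HTML tooling:

```python
# Illustrative: build the decimal and hexadecimal numeric character
# references for a given character.
def numeric_refs(char: str) -> tuple[str, str]:
    code_point = ord(char)
    return f"&#{code_point};", f"&#x{code_point:X};"

decimal_ref, hex_ref = numeric_refs("合")   # U+5408
print(decimal_ref)   # &#21512;
print(hex_ref)       # &#x5408;
```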

Named character entities

In HTML 4, there is a standard set of 252 named character entities for characters, some common and some obscure, that are either not found in certain character encodings or are markup-sensitive in some contexts (for example angle brackets and quotation marks). Although any Unicode character can be referenced by its numeric code point, some HTML document authors prefer to use these named entities instead, where possible, as they are less cryptic and were better supported by early browsers.

Character entities can be included in an HTML document via the use of entity references, which take the form &EntityName;, where EntityName is the name of the entity. For example, &mdash;, much like &#8212; or &#x2014;, represents U+2014: the em dash character "—" even if the character encoding used doesn't contain that character.
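
For illustration, named and numeric references can be resolved with Python's standard html module (used here merely as an example tool, not something HTML itself prescribes):

```python
# Illustrative: resolve named and numeric character references.
import html
from html.entities import name2codepoint

print(html.unescape("&mdash; &#8212; &#x2014;"))   # all three yield the em dash
print(hex(name2codepoint["mdash"]))                # 0x2014
```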

For the full list, see: List of XML and HTML character entity references.

Character encoding determination

In order to correctly process HTML, a web browser must ascertain which Unicode characters are represented by the encoded form of an HTML document. In order to do this, the web browser must know what encoding was used.

Encoding information

When a document is transmitted via a MIME message or a transport that uses MIME content types, such as an HTTP response, the message may signal the encoding via a Content-Type header, such as Content-Type: text/html; charset=UTF-8. Other external means of declaring encoding are permitted but rarely used. If the document uses a Unicode encoding, the encoding information may also be present in the form of a byte order mark (BOM). Finally, the encoding can be declared within the HTML syntax itself. For the text/html serialization, as long as the page is encoded in an extension of ASCII (such as UTF-8, but not UTF-16), a meta element such as <meta http-equiv="content-type" content="text/html; charset=UTF-8"> or (starting with HTML5) <meta charset="UTF-8"> can be used. For HTML pages serialized as XML, the options are either to rely on the encoding default (which for XML documents is UTF-8) or to use an XML encoding declaration. The meta element plays no role in HTML served as XML.
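
For illustration, the declaration channels described above can be summarized as follows; the Python snippet merely prints example declarations for a UTF-8 document and is not a recommendation of any particular form:

```python
# Illustrative: the same UTF-8 declaration expressed through the channels
# described above (HTTP header, meta element, XML declaration).
http_header = "Content-Type: text/html; charset=UTF-8"        # external (transport) declaration
meta_html5 = '<meta charset="UTF-8">'                          # HTML5 meta form
meta_legacy = ('<meta http-equiv="content-type" '
               'content="text/html; charset=UTF-8">')          # pre-HTML5 meta form
xml_decl = '<?xml version="1.0" encoding="UTF-8"?>'            # for HTML served as XML

for declaration in (http_header, meta_html5, meta_legacy, xml_decl):
    print(declaration)
```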

Encoding defaults

An encoding default applies when there is no external or internal encoding declaration and also no byte order mark. While the encoding default for HTML pages served as XML is required to be UTF-8, the encoding default for a regular Web page (that is: for HTML pages serialized as text/html) varies depending on the localization of the browser. For a system set up mainly for Western European languages, it will generally be Windows-1252. For Cyrillic alphabet locales, the default is typically Windows-1251. For a browser from a location where legacy multi-byte character encodings are prevalent, some form of auto-detection is likely to be applied.
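
The fallback behaviour described above can be sketched roughly as follows; the locale codes and the auto-detect placeholder are illustrative assumptions and do not describe any particular browser:

```python
# Rough, illustrative sketch of the encoding fallback described above;
# not the algorithm of any actual browser.
def default_encoding(serialized_as_xml: bool, locale: str) -> str:
    if serialized_as_xml:
        return "UTF-8"                    # required default for the XML serialization
    if locale in {"en", "de", "fr"}:      # illustrative Western European locales
        return "windows-1252"
    if locale in {"ru", "bg", "uk"}:      # illustrative Cyrillic locales
        return "windows-1251"
    return "auto-detect"                  # legacy multi-byte locales typically sniff

print(default_encoding(False, "ru"))      # windows-1251
```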

Encoding trends

Because of the legacy of 8-bit text representations in programming languages and operating systems, and the desire to avoid burdening users with the nuances of encoding, many text editors used by HTML authors are unable or unwilling to offer a choice of encodings when saving files to disk, and often do not even allow input of characters beyond a very limited range. Consequently, many HTML authors are unaware of encoding issues and may not know what encoding their documents actually use. Misunderstandings, such as the belief that the encoding declaration effects a change in the actual encoding (whereas it is actually just a label that could be inaccurate), also contribute to this editor attitude. Another factor pushing in the same direction is the arrival of UTF-8, which greatly diminishes the need for other encodings; modern editors therefore tend to default to UTF-8, as recommended by the HTML5 specification.[1]

Byte order mark/Unicode sniffing

For both serializations of HTML (content type text/html and content type application/xhtml+xml), the byte order mark (BOM) is an effective way to transmit encoding information within an HTML document. For UTF-8, the BOM is optional, while it is required for the UTF-16 and UTF-32 encodings. (Note: UTF-16 and UTF-32 without a BOM are formally known under different names; they are different encodings and thus need some form of encoding declaration – see UTF-16BE, UTF-16LE, UTF-32LE and UTF-32BE.) The use of the BOM character (U+FEFF) means that the encoding automatically declares itself to any processing application. Processing applications need only look for an initial 0x0000FEFF, 0xFEFF or 0xEFBBBF in the byte stream to identify the document as UTF-32, UTF-16 or UTF-8 encoded, respectively. No additional metadata mechanisms are required for these encodings, since the byte order mark includes all of the information necessary for processing applications. In most circumstances, the byte order mark character is handled by editing applications separately from the other characters, so there is little risk of an author removing or otherwise changing it to indicate the wrong encoding (as can happen when the encoding is declared in English/Latin script). If the document lacks a byte order mark, the fact that the first non-blank printable character in an HTML document is supposed to be "<" (U+003C) can be used to determine a UTF-8/UTF-16/UTF-32 encoding.
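
For illustration, BOM sniffing can be sketched in Python as below; the little-endian byte patterns are included as an assumption for completeness, since the paragraph above quotes only the big-endian sequences:

```python
# Illustrative: identify an encoding from a leading byte order mark.
def sniff_bom(data: bytes):
    if data.startswith(b"\x00\x00\xfe\xff"):
        return "UTF-32BE"
    if data.startswith(b"\xff\xfe\x00\x00"):
        return "UTF-32LE"    # must be tested before the UTF-16LE prefix
    if data.startswith(b"\xef\xbb\xbf"):
        return "UTF-8"
    if data.startswith(b"\xfe\xff"):
        return "UTF-16BE"
    if data.startswith(b"\xff\xfe"):
        return "UTF-16LE"
    return None              # no BOM: fall back to declarations or defaults

print(sniff_bom("<!DOCTYPE html>".encode("utf-8-sig")))   # UTF-8
```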

Encoding overriding

Many HTML documents are served with inaccurate encoding information, or no encoding information at all. In order to determine the encoding in such cases, many browsers allow the user to manually select an encoding name from a list. They may also employ an encoding auto-detection algorithm that works in concert with the manual override, or – in the case of the BOM and of HTML served as XML – against it.

For HTML documents serialized as text/html, manual override may apply to all documents, or only to those for which the encoding cannot be ascertained by looking at declarations and/or byte patterns. The fact that the manual override is present and widely used hinders the adoption of accurate encoding declarations on the Web, so the problem is likely to persist. Note, however, that Internet Explorer, Chrome and Safari – for both the XML and text/html serializations – do not permit the encoding to be overridden when the page includes the BOM.[2]

For HTML documents serialized with the preferred XML label, application/xhtml+xml, manual encoding override is not permitted. Overriding the encoding of such an XML document would mean that the document stopped being XML, as it is a fatal error for XML documents to have an encoding declaration with detectable errors. Currently, Gecko browsers such as Firefox abide by this rule, whereas the bulk of the other common browsers that support HTML as XML, such as the WebKit browsers (Chrome and Safari),[3] do allow the encoding of XHTML documents to be manually overridden.

Web browser support

Many browsers are only capable of displaying a small subset of the full Unicode repertoire. Here is how your browser displays various Unicode code points:

Example web browser support for Unicode characters
Character HTML char ref Unicode name What your browser displays
U+0041 &#65; or &#x41; Latin capital letter A A
U+00DF &#223; or &#xDF; Latin small letter Sharp S ß
U+00FE &#254; or &#xFE; Latin small letter Thorn þ
U+0394 &#916; or &#x394; Greek capital letter Delta Δ
U+017D &#381; or &#x17D; Latin capital letter Z with háček Ž
U+0419 &#1049; or &#x419; Cyrillic capital letter Short I Й
U+05E7 &#1511; or &#x5E7; Hebrew letter Qof ק
U+0645 &#1605; or &#x645; Arabic letter Meem م
U+0E57 &#3671; or &#xE57; Thai digit 7 ๗
U+1250 &#4688; or &#x1250; Ge'ez syllable Qha ቐ
U+3042 &#12354; or &#x3042; Hiragana letter A (Japanese) あ
U+53F6 &#21494; or &#x53F6; CJK Unified Ideograph-53F6 (Simplified Chinese "Leaf") 叶
U+8449 &#33865; or &#x8449; CJK Unified Ideograph-8449 (Traditional Chinese "Leaf") 葉
U+B5AB &#46507; or &#xB5AB; Hangul syllable Tteolp (Korean "Ssangtikeut Eo Rieulbieup") 떫
U+16A0 &#5792; or &#x16A0; Runic letter Fehu ᚠ
U+0D37 &#3383; or &#x0D37; Malayalam letter Ssa (ṣha) ഷ
U+1F602 &#128514; or &#x1F602; Face with Tears of Joy emoji 😂
To display all of the characters above, you may need to install one or more large multilingual fonts, like Code2000.

Some web browsers, such as Mozilla Firefox, Opera, Safari and Internet Explorer (from version 7 on), are able to display multilingual web pages by intelligently choosing a font to display each individual character on the page. They will correctly display any mix of Unicode blocks, as long as appropriate fonts are present in the operating system.

Older browsers, such as Netscape Navigator 4.77 and Internet Explorer 6, can only display text supported by the current font associated with the character encoding of the page, and may misinterpret numeric character references as being references to code values within the current character encoding, rather than references to Unicode code points. When you are using such a browser, it is unlikely that your computer has all of those fonts, or that the browser can use all available fonts on the same page. As a result, the browser will not display the text in the examples above correctly, though it may display a subset of them. Because they are encoded according to the standard, though, they will display correctly on any system that is compliant and does have the characters available. Further, those characters given names for use in named entity references are likely to be more commonly available than others.

For displaying characters outside the Basic Multilingual Plane, such as the Gothic letter faihu, which is a variant of the runic letter fehu in the table above, some systems (like Windows 2000) need manual adjustments of their settings.

Frequency of usage

According to internal data from Google's web index, in December 2007 the UTF-8 Unicode encoding became the most frequently used encoding on web pages, overtaking both ASCII (US) and 8859-1/1252 (Western European).[4]

References

  1. ^ Ian Hickson (2011). "HTML5". Retrieved 17 September 2011. Authors are encouraged to use UTF-8. Conformance checkers may advise authors against using legacy encodings. [RFC3629] Authoring tools should default to using UTF-8 for newly created documents. [RFC3629]
  2. ^ "12897 – In some parsers, UTF-8 BOM trumps the HTTP charset attribute (Encoding sniffing algorithm)". www.w3.org. Retrieved 2023-03-09.
  3. ^ "66189 – XML parser doesn't emit FATAL ERROR for all, detectable encoding errors". bugs.webkit.org. Retrieved 2023-03-09.
  4. ^ "Moving to Unicode 5.1". Official Google Blog. Retrieved 2024-10-10.