64-Bit Android OS on Intel Architecture

At a general level, there are few significant differences between 64-bit and 32-bit processors. But compute-intensive applications (the chapter later discusses workloads that run faster on 64-bit processors) can see substantial improvements when moved from 32-bit to 64-bit. In almost all cases, a 64-bit application runs faster in a 64-bit environment than a 32-bit application does in that same environment, which is reason enough for developers to care. Exploiting the platform's capabilities can noticeably speed up applications that perform large numbers of computations.

64-Bit vs. 32-Bit Android

A 64-bit architecture means the integer registers and pointers are 64 bits wide. The three main advantages of a 64-bit operating system are as follows (the short sketch after this list shows the wider types directly):

• Increased number of registers

• Extended address space

• Support for larger amounts of RAM
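
To make the first two items concrete, here is a minimal C sketch (not from the book) that prints the pointer and integer widths. On a typical 32-bit Android build both values are 4 bytes; on a 64-bit x86 build both are 8.

```c
#include <stdio.h>

int main(void) {
    /* Typical 32-bit (ILP32) build: both lines print 4.
       Typical 64-bit (LP64) build: both lines print 8,
       reflecting the wider registers and pointers listed above. */
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    return 0;
}
```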

It's not hard to imagine Android phones with 64-bit chips in the not-too-distant future. Because the Android kernel is based on the Linux kernel, and Linux has supported 64-bit technology for years, the main thing Android needs for full 64-bit support is a 64-bit-compatible Dalvik VM. A Dalvik application (written purely in Java) will then work without any changes on a 64-bit device, because the bytecode is platform independent.

Native application developers can take full advantage of the capabilities offered by the underlying processor. For example, Intel Advanced Vector Extensions (Intel AVX) has been extended to support 256-bit-wide vector registers and instructions on 64-bit processors.
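
As an illustration of what native code gains, the following C sketch uses AVX intrinsics to add two float arrays eight lanes at a time. It is a minimal example, assuming an AVX-capable x86 target and a compiler flag such as -mavx; the intrinsic names come from Intel's immintrin.h header.

```c
#include <immintrin.h>  /* Intel AVX intrinsics */
#include <stdio.h>

/* Add two float arrays eight elements at a time using 256-bit
   YMM registers. For brevity, n is assumed to be a multiple of 8. */
static void add_avx(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
}

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];
    add_avx(a, b, c, 8);
    printf("c[0] = %f\n", c[0]);  /* 9.000000 */
    return 0;
}
```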

Memory and CPU Register Size

Memory is extremely slow compared to the CPU, and reading from and writing to memory can take a long time compared to how long it takes the CPU to process an instruction. CPUs try to hide this with layers of caches, but even the fastest layer of cache is slow compared to internal CPU registers. More registers means more data can be kept purely CPU-internal, reducing memory accesses and increasing performance.

Just how much difference this makes depends on the specific code in question, as well as on how good the compiler is at optimizing the code to make the best use of the available registers. When the Intel architecture moved from 32-bit to 64-bit, the number of general-purpose registers doubled from 8 to 16, and this made for a substantial performance improvement.
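
The effect can be sketched in C. In the hypothetical loop below, eight independent accumulators plus the pointer and loop counters are live at the same time. With the 16 general-purpose registers of x86-64 they can all stay in registers; a 32-bit x86 build, with only 8, has to spill some accumulators to the stack inside the hot loop.

```c
/* Eight independent accumulators expose instruction-level parallelism,
   but they also need eight registers at once, plus the pointer, the
   index, and the limit. That fits in x86-64's 16 general-purpose
   registers; on 32-bit x86 (8 registers) some values spill to memory. */
unsigned long sum8(const unsigned int *a, long n) {
    unsigned long s0 = 0, s1 = 0, s2 = 0, s3 = 0,
                  s4 = 0, s5 = 0, s6 = 0, s7 = 0;
    long i;
    for (i = 0; i + 8 <= n; i += 8) {
        s0 += a[i];     s1 += a[i + 1];
        s2 += a[i + 2]; s3 += a[i + 3];
        s4 += a[i + 4]; s5 += a[i + 5];
        s6 += a[i + 6]; s7 += a[i + 7];
    }
    unsigned long s = s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
    for (; i < n; i++) s += a[i];  /* handle the leftover elements */
    return s;
}
```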

Sixty-four-bit pointers allow applications to address far larger memory spaces. On a 32-bit processor only 4 GB is addressable in total, so the address space available to a single program is typically between 1 and 3 GB, with the OS reserving the rest. A program that needs more than that must resort to techniques such as splitting itself into multiple processes, which takes a lot of programming effort. On a 64-bit operating system this is of no concern, because the addressable space is vastly larger.
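
A minimal C sketch of the difference: the program below asks for a 6 GB allocation. On a 32-bit build the request cannot even be expressed as a size_t; on a 64-bit build it is an ordinary malloc call, limited only by available RAM and swap.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    unsigned long long want = 6ULL * 1024 * 1024 * 1024;  /* 6 GB */

    if (want > SIZE_MAX) {
        /* On a 32-bit build SIZE_MAX is about 4 GB, so the request
           does not even fit in the allocator's size type. */
        printf("6 GB exceeds this build's address space\n");
        return 1;
    }
    void *buf = malloc((size_t)want);
    printf("6 GB allocation %s\n", buf ? "succeeded" : "failed");
    free(buf);
    return 0;
}
```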

Memory-mapped files are becoming harder to use on 32-bit architectures because files over 4 GB are increasingly common. Such large files cannot be mapped in one piece on a 32-bit architecture; only part of the file can be mapped into the address space at a time, and the mapped windows must be swapped in and out of the address space as the program accesses different regions. This is a real loss, because memory mapping, when properly implemented by the OS, is one of the most efficient ways to move data between disk and memory.
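
The following POSIX C sketch maps an entire file read-only in one call. On a 64-bit build this works even for files larger than 4 GB; a 32-bit build would instead have to map smaller windows through mmap's offset argument, which is exactly the swapping described above. Error handling is kept minimal for brevity.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* On a 64-bit build the whole file fits in the address space,
       even past 4 GB. A 32-bit build would map a window at a time,
       moving it with the last (offset) argument of mmap. */
    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... read the file contents through p here ... */

    munmap(p, (size_t)st.st_size);
    close(fd);
    return 0;
}
```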

Sixty-four-bit pointers also come with a substantial downside: most programs use more memory, because every pointer that is stored now consumes twice as many bytes. An identical program running on a 64-bit CPU therefore takes more memory than on a 32-bit CPU. Because pointers are very common in programs, this puts extra pressure on the CPU caches and can have an impact on performance.
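
The overhead is easy to see with a pointer-heavy data structure. In the hypothetical C sketch below, the node's size roughly doubles when the same source is recompiled for 64-bit, purely because its three pointers double in width.

```c
#include <stdio.h>

/* A doubly linked list node dominated by pointers:
   about 16 bytes on a typical 32-bit ABI,
   about 32 bytes on a 64-bit ABI (3 x 8-byte pointers,
   a 4-byte key, and 4 bytes of alignment padding). */
struct node {
    struct node *prev;
    struct node *next;
    void *payload;
    int key;
};

int main(void) {
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}
```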

Register count can strongly influence the performance of an application. RAM is slow compared to on-CPU registers, and although CPU caches help close the gap, even a cache access carries a performance hit relative to a register access.

The size of the performance increase depends on how well the compiler can optimize for a 64-bit environment. Compute-intensive applications that do the majority of their processing on a small working set see significant performance increases, because a large share of that data can be kept in CPU registers.

Contrast this with an unoptimized, pointer-heavy application, which might see performance decrease because 64-bit pointers require twice the memory bandwidth and cache space. In a mobile environment, however, the OS and installed applications should be engineered to avoid this. A well-known example of a large program that runs slower in a 64-bit environment is the Oracle JVM.

Both ARM and Intel 64-bit CPUs have a 32-bit compatibility mode, so 32-bit applications will run unchanged on 64-bit processors. Recompiling with a 64-bit optimizing compiler, however, lets an application take advantage of the architectural benefits of the 64-bit environment.
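
In an Android NDK project, targeting both modes is a one-line build setting. The Application.mk sketch below assumes an ndk-build-based project; it produces both 32-bit and 64-bit x86 builds of the same native library, and the installer picks the best one for the device.

```makefile
# Application.mk: build the native code for both Intel ABIs.
# The resulting APK contains lib/x86/ and lib/x86_64/ copies of
# the library; a 64-bit device prefers the x86_64 build.
APP_ABI   := x86 x86_64
APP_OPTIM := release
```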

 