Computer performance by orders of magnitude
This list compares various amounts of computing power, in instructions per second (IPS) or floating-point operations per second (FLOPS), organized by order of magnitude.
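Each section heading below corresponds to an SI-style prefix for a power of ten. As a minimal illustration of how a raw figure maps onto those prefixes, here is a small sketch (the helper name and table are illustrative only, not from any source):

```python
# Map each prefix used in this list to its power of ten.
# Hypothetical helper, for illustration only.
PREFIXES = [(1e24, "yotta"), (1e21, "zetta"), (1e18, "exa"),
            (1e15, "peta"), (1e12, "tera"), (1e9, "giga"),
            (1e6, "mega"), (1e3, "kilo"), (1e2, "hecto")]

def scale(ops_per_second):
    """Return (value, prefix) using the largest prefix that fits."""
    for factor, name in PREFIXES:
        if ops_per_second >= factor:
            return round(ops_per_second / factor, 3), name
    return ops_per_second, ""

print(scale(33.86e15))  # Tianhe-2's Linpack figure → (33.86, 'peta')
```

The same helper places any entry in this list into its section, e.g. `scale(1.344e12)` gives `(1.344, 'tera')` for the GeForce GTX 480.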
Contents
- 1 Hecto-scale computing (10²)
- 2 Kilo-scale computing (10³)
- 3 Mega-scale computing (10⁶)
- 4 Giga-scale computing (10⁹)
- 5 Tera-scale computing (10¹²)
- 6 Petascale computing (10¹⁵)
- 7 Exascale computing (10¹⁸)
- 8 Zetta-scale computing (10²¹)
- 9 Yotta-scale computing (10²⁴)
- 10 See also
- 11 References
Hecto-scale computing (10²)
- 2.2×10² Upper end of serialized human throughput, roughly the lower limit of accurate event placement on small timescales (the swing of a conductor's arm, the reaction time to lights on a drag strip, etc.)[1]
- 2×10² IBM 602 calculating punch, 1946
Kilo-scale computing (10³)
- 9.2×10⁴ Intel 4004, the first commercially available full-function CPU on a chip, 1971
- 5×10⁵ Colossus vacuum-tube codebreaking computer, 1943
Mega-scale computing (10⁶)
- 1×10⁶ Motorola 68000 commercial computing, 1979
- 1.2×10⁶ IBM 7030 "Stretch" transistorized supercomputer, 1961
Giga-scale computing (10⁹)
- 1×10⁹ ILLIAC IV supercomputer, 1972; ran the first computational fluid dynamics problems
- 1.354×10⁹ Intel Pentium III commercial computing, 1999
- 147.6×10⁹ Intel Core i7-980X Extreme Edition commercial computing, 2010[2]
Tera-scale computing (10¹²)
- 1.34×10¹² Intel ASCI Red supercomputer, 1997
- 1.344×10¹² NVIDIA GeForce GTX 480 at peak performance
- 4.64×10¹² ATI Radeon HD 5970 at peak performance
- 5.152×10¹² NVIDIA S2050/S2070 1U GPU computing system
- 80×10¹² IBM Watson[3]
Petascale computing (10¹⁵)
- 1.026×10¹⁵ IBM Roadrunner supercomputer, 2009
- 8.1×10¹⁵ Folding@home distributed computing system, the fastest computing system as of 2012
- 20×10¹⁵ IBM Sequoia, circa 2011
- 33.86×10¹⁵ Tianhe-2's Linpack performance, June 2013[4]
- 36.8×10¹⁵ Estimated computational power required to simulate a human brain in real time[5]
Exascale computing (10¹⁸)
- 1×10¹⁸ Estimated scale at which the need for exascale computing will become pressing, around 2018[6]
- 1×10¹⁸ The Bitcoin network's hash rate was expected to reach one exahash per second in 2016[7]
Zetta-scale computing (10²¹)
- 1×10²¹ Estimated processing power needed for accurate global weather prediction about two weeks ahead.[8] Assuming Moore's law holds, such systems may be feasible around 2030.
A zettascale computer system could generate more single-precision floating-point data in one second than was stored by any digital means on Earth in the first quarter of 2011.
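Both zetta-scale claims can be checked on the back of an envelope. The baseline figure (Tianhe-2's Linpack result) comes from this list; the doubling period of roughly 14 months (the historical trend for top-ranked supercomputers, faster than Moore's law's two years for transistor counts) and the ~1 zettabyte global-storage figure for early 2011 are assumptions for illustration:

```python
import math

# Back-of-envelope sketch; doubling period and 2011 storage
# figure are assumptions, not sourced values.
baseline_flops = 33.86e15            # Tianhe-2 Linpack, June 2013
target_flops = 1e21                  # zettascale
doublings = math.log2(target_flops / baseline_flops)
year_feasible = 2013 + doublings * (14 / 12)   # ~14-month doubling

bytes_per_second = target_flops * 4  # 4 bytes per single-precision float
stored_2011 = 1e21                   # ~1 zettabyte, assumed
ratio = bytes_per_second / stored_2011

print(round(year_feasible))          # roughly 2030
print(ratio)                         # seconds' output vs. all 2011 storage
```

Under these assumptions the projection lands near 2030, matching the estimate above, and one second of single-precision output (4×10²¹ bytes) exceeds the assumed 2011 storage total several times over.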
Yotta-scale computing (10²⁴)
- 257.6×10²⁴ Estimated computational power required to simulate 7 billion human brains in real time.
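This yotta-scale figure is simply the per-brain petascale estimate given earlier in this list, multiplied out over the world population:

```python
# Cross-check: per-brain estimate (petascale section) times 7 billion.
per_brain_flops = 36.8e15    # estimated FLOPS to simulate one brain
brains = 7e9                 # 7 billion
total_flops = per_brain_flops * brains
print(total_flops)           # ≈ 2.576e26, i.e. 257.6×10²⁴
```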
See also
- Futures studies – study of possible, probable, and preferable futures, including making projections of future technological advances
- History of computing hardware (1960s–present)
- List of emerging technologies – new fields of technology, typically on the cutting edge. Examples include genetics, robotics, and nanotechnology (GNR).
- Artificial intelligence – computer mental abilities, especially those that previously belonged only to humans, such as speech recognition, natural language generation, etc.
- History of artificial intelligence (AI)
- Strong AI – hypothetical AI as smart as a human. Such an entity would likely be capable of recursive self-improvement, that is, improving its own design, which could lead to the rapid development of a superintelligence.
- Quantum computing
- Moore's law – observation (not actually a law) that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper.[9]
- Supercomputer
- Superintelligence
- Timeline of computing
- Technological singularity – hypothetical point in the future when computer capacity rivals that of a human brain, enabling the development of strong AI — artificial intelligence at least as smart as a human.
- The Singularity Is Near – book by Ray Kurzweil dealing with the progression and projections of development of computer capabilities, including beyond human levels of performance.
- TOP500 – list of the 500 most powerful (non-distributed) computer systems in the world
References
- ↑ http://www.100fps.com/how_many_frames_can_humans_see.htm
- ↑ Overclock3D - Sandra CPU
- ↑ Tony Pearson, IBM Watson - How to build your own "Watson Jr." in your basement, Inside System Storage
- ↑ http://top500.org/list/2013/06/
- ↑ http://hplusmagazine.com/2009/04/07/brain-chip/
- ↑ [1]
- ↑ Bitcoin hash rate chart
- ↑ Lua error in package.lua at line 80: module 'strict' not found.
- ↑ Moore, Gordon E. (1965). "Cramming More Components onto Integrated Circuits". Electronics.