AskDefine | Define flops

User Contributed Dictionary



Noun

  1. Plural of flop


Verb

  1. Third-person singular of flop

Extensive Definition

In computing, FLOPS (also flops or flop/s) is an acronym for FLoating point Operations Per Second. FLOPS is a measure of a computer's performance, especially in fields such as scientific computing that make heavy use of floating-point calculations, and is analogous to instructions per second. Since the final S stands for "second", conservative speakers treat "FLOPS" as both the singular and plural of the term, although the singular "FLOP" is frequently encountered. Alternatively, the singular FLOP (or flop) is used as an abbreviation for "FLoating-point OPeration", and a flop count is a count of these operations (e.g., the number required by a given algorithm or computer program). In this context, "flops" is simply a plural rather than a rate.
Computing devices exhibit an enormous range of performance levels in floating-point applications, so it makes sense to introduce larger units than FLOPS. The standard SI prefixes can be used for this purpose, resulting in such units as gigaFLOPS (one billion, or 1×10^9, FLOPS), teraFLOPS (one trillion, or 1×10^12, FLOPS) and petaFLOPS (one quadrillion, or 1×10^15, FLOPS). IBM's top supercomputer, dubbed Blue Gene/P, is designed to operate continuously at speeds exceeding one petaFLOPS and, when configured to do so, should be able to reach speeds in excess of three petaFLOPS. NEC's SX-9 supercomputer has a peak processing performance of 839 teraFLOPS and features the world's first vector processor to exceed 100 gigaFLOPS per single core.
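The prefix arithmetic above is simple scaling by powers of ten. As a minimal sketch, a helper like the following (the function name and table are illustrative, not from any standard library) converts a raw FLOPS figure into the prefixed units described:

```python
# Format a raw FLOPS rate using the SI-prefixed units described above.
# The prefix table and function name are illustrative only.

PREFIXES = [
    (1e15, "petaFLOPS"),
    (1e12, "teraFLOPS"),
    (1e9, "gigaFLOPS"),
    (1e6, "megaFLOPS"),
    (1e3, "kiloFLOPS"),
    (1.0, "FLOPS"),
]

def format_flops(rate):
    """Return a human-readable string for a rate given in FLOPS."""
    for factor, name in PREFIXES:
        if rate >= factor:
            return f"{rate / factor:g} {name}"
    return f"{rate:g} FLOPS"

print(format_flops(839e12))   # NEC SX-9 peak -> "839 teraFLOPS"
print(format_flops(3e15))     # Blue Gene/P target -> "3 petaFLOPS"
```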
A basic calculator performs relatively few FLOPS. Each calculation request to a typical calculator requires only a single operation, so its response rarely needs to arrive faster than the operator can perceive. Any response time below 0.1 second feels instantaneous to a human operator, so a simple calculator needs only about 10 FLOPS. Allowing for human reaction time, the actual requirement can be far lower.

Measuring performance

In order for FLOPS to be useful as a measure of floating-point performance, a standard benchmark must be available on all computers of interest. One example is the LINPACK benchmark.
There are many factors in computer performance other than raw floating-point computation speed, such as I/O performance, interprocessor communication, cache coherence, and the memory hierarchy. This means that supercomputers are in general only capable of a small fraction of their "theoretical peak" FLOPS throughput (obtained by adding together the theoretical peak FLOPS performance of every element of the system). Even when operating on large highly parallel problems, their performance will be bursty, mostly due to the residual effects of Amdahl's law. Real benchmarks therefore measure both peak actual FLOPS performance as well as sustained FLOPS performance.
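The "theoretical peak" described above is obtained by summing the peak rate of every element of the system, typically cores × clock rate × floating-point operations issued per cycle. A minimal sketch, using entirely made-up figures for a hypothetical cluster (real systems' numbers come from their vendors):

```python
# Theoretical peak FLOPS of a hypothetical cluster, computed as described
# above: sum the peak of every element. All figures are invented for
# illustration.

def node_peak_flops(cores, clock_hz, flops_per_cycle):
    """Peak FLOPS of one node: cores x clock x FLOPs issued per cycle."""
    return cores * clock_hz * flops_per_cycle

nodes = 1024
peak = nodes * node_peak_flops(cores=4, clock_hz=2.5e9, flops_per_cycle=4)

# Benchmarks report sustained performance, always some fraction of peak;
# 75% is an arbitrary illustrative figure.
sustained = 0.75 * peak

print(f"theoretical peak: {peak / 1e12:.2f} TFLOPS")
print(f"sustained (assumed 75%): {sustained / 1e12:.2f} TFLOPS")
```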
For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common. Measuring floating-point operation speed therefore does not accurately predict how a processor will perform on an arbitrary problem. However, for many scientific jobs, such as data analysis, a FLOPS rating is effective.
Historically, the earliest reliably documented serious use of the floating-point operation as a metric appears to be the AEC's justification to Congress for purchasing a Control Data CDC 6600 in the mid-1960s.
The terminology is currently so confusing that until April 24, 2006, U.S. export control was based on measurement of "Composite Theoretical Performance" (CTP) in millions of "Theoretical Operations Per Second", or MTOPS. On that date, however, the U.S. Department of Commerce's Bureau of Industry and Security amended the Export Administration Regulations to base controls on Adjusted Peak Performance (APP) in Weighted teraFLOPS (WT).


On February 4, 2008, the NSF and the University of Texas opened full-scale research runs on Ranger, an AMD/Sun supercomputer and the most powerful supercomputing system in the world for open science research, which operates at sustained speeds of half a petaFLOPS.
On October 25, 2007, NEC Corporation of Japan issued a press release announcing its SX series model SX-9, claiming it to be the world's fastest vector supercomputer with a peak processing performance of 839 teraFLOPS. The SX-9 features the first CPU capable of a peak vector performance of 102.4 gigaFLOPS per single core.
On June 26, 2007, IBM announced the second generation of its top supercomputer, dubbed Blue Gene/P and designed to continuously operate at speeds exceeding one petaFLOPS. When configured to do so, it can reach speeds in excess of three petaFLOPS.
In June 2007, the TOP500 list reported the fastest computer in the world to be the IBM Blue Gene/L supercomputer, measuring a peak of 596 TFLOPS. The Cray XT4 took second place with 101.7 TFLOPS.
In June 2006, a new computer was announced by the Japanese research institute RIKEN, the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, almost two times faster than Blue Gene/L, but MDGRAPE-3 is not a general-purpose computer, which is why it does not appear in the TOP500 list. It has special-purpose pipelines for simulating molecular dynamics.
Distributed computing uses the Internet to link personal computers to achieve a similar effect:
  • The entire BOINC network averages over 1000 TFLOPS (1 PFLOPS) as of March 16, 2008.
  • SETI@home averages more than 265 TFLOPS.
  • Folding@home has sustained over 1 PFLOPS since September 15, 2007. PlayStation 3 owners have been able to participate in the project since March 22, 2007; because of this and high-performance GPU clients, Folding@home is now sustaining over 2000 TFLOPS (2053 TFLOPS as of May 8, 2008).
  • Einstein@Home is crunching more than 150 TFLOPS.
  • As of June 2007, GIMPS is sustaining 23 TFLOPS.
  • Intel Corporation has recently unveiled the experimental multi-core POLARIS chip, which achieves 1 TFLOPS at 3.2 GHz. The 80-core chip can increase this to 1.8 TFLOPS at 5.6 GHz, although the thermal dissipation at this frequency exceeds 260 watts.
As of 2007, the fastest quad-core PC processors perform over 30 GFLOPS. GPUs in PCs are considerably more powerful in raw FLOPS. For example, in the GeForce 8 series the nVidia 8800 Ultra performs around 576 GFLOPS on 128 processing elements, or around 4.5 GFLOPS per element, compared with 2.75 GFLOPS per core for Blue Gene/L. Note that the 8800 series performs only single-precision calculations, and that while GPUs are highly efficient at such calculations, they are not as flexible as a general-purpose CPU.
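The per-element comparison above is plain division of aggregate throughput by element count, which can be checked directly (figures taken from the text above):

```python
# Per-element throughput: total GFLOPS divided by the number of
# processing elements, using the figures quoted above.

def gflops_per_element(total_gflops, elements):
    return total_gflops / elements

ultra_8800 = gflops_per_element(576, 128)   # nVidia 8800 Ultra
print(f"8800 Ultra: {ultra_8800:.2f} GFLOPS per element")   # 4.50
print("Blue Gene/L: 2.75 GFLOPS per core (quoted figure)")
```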
As of November 2007, the TOP500 list of the most powerful supercomputers (excluding grid computers) is headed by IBM's BlueGene/L System, with just under half a petaflop of processing power.
In May 2008 a collaboration was announced between NASA, SGI and Intel to build a one-petaFLOPS computer in 2009, scaling up to 10 petaFLOPS by 2012.

Cost of computing

Hardware costs:

  • 1961: about US$1,100,000,000,000 ($1.1 trillion) per GFLOPS (about US$1,100 per FLOPS); based on the IBM 1620 at $64,000 per unit, with a multiplication operation taking 17.7 ms (so roughly 17 million units would be needed to reach one GFLOPS)
  • 1997: about US$30,000 per GFLOPS; with two 16-Pentium-Pro–processor Beowulf cluster computers
  • 2000, April: $1,000 per GFLOPS, Bunyip, Australian National University. First sub-US$1/MFLOPS system; Gordon Bell Prize 2000.
  • 2000, May: $640 per GFLOPS, KLAT2, University of Kentucky
  • 2003, August: $82 per GFLOPS, KASY0, University of Kentucky
  • 2006, February: about $1 per GFLOPS in ATI PC add-in graphics card (X1900 architecture) — these figures are disputed as they refer to highly parallelized GPU power.
  • 2007, March: about $0.42 per GFLOPS in Ambric AM2045.
  • 2007, October: about $0.20 per GFLOPS with the cheapest retail Sony PS3 console, at US$400, which runs at a claimed 2 teraFLOPS; these figures represent the processing power of the GPU. The seven Cell processor cores run collectively at a lower 218 GFLOPS.
This trend toward lower and lower cost for the same computing power follows Moore's law.
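The 1961 entry in the list above can be reproduced from the IBM 1620's unit price and multiply time. A minimal sketch, assuming one multiplication counts as one floating-point operation and that machines are simply replicated to add throughput:

```python
# Reproduce the 1961 cost-per-GFLOPS estimate from the IBM 1620's unit
# price and multiply time. Assumes one multiplication = one floating-point
# operation, and that throughput scales by replicating machines.

unit_price = 64_000          # US$ per IBM 1620
multiply_time = 17.7e-3      # seconds per multiplication

flops_per_unit = 1 / multiply_time            # ~56.5 FLOPS per machine
units_for_1_gflops = 1e9 / flops_per_unit     # ~17.7 million machines
cost_per_gflops = units_for_1_gflops * unit_price

print(f"{flops_per_unit:.1f} FLOPS per machine")
print(f"${cost_per_gflops:,.0f} per GFLOPS")  # about $1.1 trillion
```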

Operation costs:

In terms of energy cost, according to the Green500 list, as of 2007 the most efficient CPU runs at 357.23 MFLOPS per watt. This translates to an energy requirement of about 2.8 watts per GFLOPS; the requirement is much greater for less efficient CPUs.
Hardware costs for low-cost supercomputers may be less significant than energy costs when running continuously for several years. A PlayStation 3 (PS3) 40 GB (65 nm Cell) costs $399 and consumes 135 watts, or $118 of electricity each year, conservatively assuming the U.S. national average residential electric rate of $0.10/kWh (135 W / 1000 W per kW × 24 hours × 365 days × $0.10 per kWh = $118.26). The electricity for 3.5 years of operation ($413) costs more than the PS3 itself. Additional operating costs include air conditioning, space and lighting.
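The electricity arithmetic worked through above can be checked directly:

```python
# Verify the PS3 electricity estimate worked through above.

power_w = 135            # PS3 power draw in watts
rate_per_kwh = 0.10      # assumed US average residential rate, $/kWh
hours_per_year = 24 * 365

annual_cost = power_w / 1000 * hours_per_year * rate_per_kwh
print(f"annual electricity: ${annual_cost:.2f}")   # $118.26

# Years of electricity needed to match the $399 hardware price.
years_to_match_hardware = 399 / annual_cost
print(f"electricity matches hardware cost after "
      f"{years_to_match_hardware:.1f} years")
```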
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2
Material from Wikipedia, Wiktionary, Dict