The 10 Fastest Supercomputers in the World

#1: Tianhe-2, National University of Defense Technology in China

The follow-up to Tianhe-1A, the new world #1 broke the speed record with a performance of 33.86 petaflops. Tianhe-2 uses Ivy Bridge-based Intel Xeons and Intel Xeon Phi coprocessors for a total of 3.12 million cores. The computer uses 17,808 kilowatts of power and can theoretically hit speeds of up to 54.9 petaflops.


#2: Titan, Oak Ridge National Laboratory

Titan was #1 last November with a speed of 17.6 petaflops. The Cray system pairs AMD CPUs with Nvidia GPUs across its 560,640 cores. Rated the third most energy-efficient supercomputer last November, Titan uses 8,209 kilowatts of power.


#3: Sequoia, Lawrence Livermore National Laboratory

The world’s #1 supercomputer in June 2012, Sequoia is used by the National Nuclear Security Administration to conduct simulations aimed at extending the lifespan of nuclear weapons. The Blue Gene/Q system has nearly 1.6 million cores and hits speeds of 17.2 petaflops.


#4: K computer, RIKEN Advanced Institute for Computational Science in Japan

Ranked #1 in the world in 2011, the K computer was built by Fujitsu. Delivering 10.5 petaflops with 705,024 SPARC cores, the K computer uses a six-dimensional torus interconnect called Tofu.


#5: Mira, Department of Energy’s Argonne National Laboratory

This Blue Gene/Q system uses 786,432 cores to hit 8.6 petaflops. When it reaches full production in 2014, Mira will offer more than 5 billion computing hours per year to scientists (counting time on each core separately).


#6: Stampede, Texas Advanced Computing Center at University of Texas

With Dell PowerEdge servers powered by Xeon processors and an InfiniBand interconnect, Stampede scored 5.2 petaflops. It is one of the largest systems in the world devoted to open science research—any researcher at a US institution can submit a request to use some of its computing power.


#7: Juqueen, Jülich Supercomputing Centre in Germany

Another Blue Gene/Q system, Juqueen’s 458,752 cores hit speeds of 5 petaflops. Using 2,301 kilowatts of power, Juqueen was rated the fifth most energy efficient Top 500 supercomputer last November.
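The energy-efficiency rankings mentioned for Titan and Juqueen come down to a simple ratio: sustained floating-point operations per second divided by power drawn. Using only the figures quoted in this article, a quick sketch of that arithmetic:

```python
def gflops_per_watt(petaflops, kilowatts):
    """Sustained gigaflops delivered per watt of power drawn."""
    return (petaflops * 1e15) / (kilowatts * 1e3) / 1e9

# (petaflops, kilowatts) as quoted in this article
systems = {"Tianhe-2": (33.86, 17_808), "Titan": (17.6, 8_209), "Juqueen": (5.0, 2_301)}
for name, (pf, kw) in systems.items():
    print(f"{name}: {gflops_per_watt(pf, kw):.2f} gigaflops/watt")
# Tianhe-2: 1.90 gigaflops/watt
# Titan: 2.14 gigaflops/watt
# Juqueen: 2.17 gigaflops/watt
```

Note that the faster machine is not automatically the more efficient one: Juqueen, at a fraction of Tianhe-2's speed, squeezes more work out of each watt.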


#8: Vulcan, Lawrence Livermore National Laboratory

This 4.3 petaflop system is based on IBM’s Blue Gene/Q supercomputing technology and has 393,216 cores. This supercomputer isn’t devoted solely to government use; Vulcan was recently opened to industry and research universities for collaborative projects.


#9: SuperMUC, Leibniz Supercomputing Centre in Germany

Using IBM iDataPlex servers, 300TB of RAM, and an InfiniBand interconnect, SuperMUC’s 147,456 cores achieved a speed of 2.9 petaflops. Energy costs are cut by cooling chips and memory directly with water at the unusually high temperature of 104 degrees Fahrenheit (40 degrees Celsius).


#10: Tianhe-1A, National Supercomputing Center in China

Ranked #1 in November 2010, Tianhe-1A uses Intel Xeon CPUs and Nvidia GPUs across its 183,368 processing cores for a rating of 2.6 petaflops.


For study

OSI Model


OSI (Open Systems Interconnection) is a model used to describe, at a logical level, how computers communicate. The OSI model has 7 layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. Each layer has its own functions and associated devices within a network.


  • Layer 1 – Physical Layer
    Defines the network transmission medium, signaling method, bit synchronization, network architecture (such as Ethernet or Token Ring), network topology, and cabling. This level also defines how a Network Interface Card (NIC) interacts with wired or radio media.
  • Layer 2 – Data Link Layer
    Determines how data bits are grouped into a format called a frame. Error correction, flow control, and hardware addressing (such as the Media Access Control (MAC) address) also happen at this level, and it defines how network devices such as hubs, bridges, repeaters, and layer-2 switches operate. The Data Link layer contains 2 sublayers: the Logical Link Control (LLC) layer and the Media Access Control (MAC) layer.
  • Layer 3 – Network Layer
    Defines IP addresses, builds headers for packets, and then performs routing through internetworking using routers and layer-3 switches.
  • Layer 4 – Transport Layer
    Breaks data into packets and assigns sequence numbers to those packets so they can be reassembled at the destination once received. This level also signals that a packet has been received successfully (acknowledgement) and retransmits packets lost along the way.
  • Layer 5 – Session Layer
    Defines how connections are established, maintained, and torn down. Name resolution also takes place at this level.
  • Layer 6 – Presentation Layer
    Translates the data an application wants to transmit into a format that can be sent over the network. Protocols at this level include redirector software, such as the Workstation service (in Windows NT) and network shells (such as Virtual Network Computing (VNC) or Remote Desktop Protocol (RDP)).
  • Layer 7 – Application Layer
    Serves as the interface between applications and network functionality, governs how applications access the network, and then generates error messages. Protocols in this layer include HTTP, FTP, SMTP, and NFS.

First Hard Drive in the world

Did you know?

The first hard drive in the world had a capacity of just 5 MB. It was called the IBM 350 Disk Storage Unit, and it was revealed to the world on September 4, 1956.

The machine was enormous compared to present-day hard drives. The IBM 350 Disk Storage Unit weighed about one ton and required special shipping and handling. It used 24-inch-diameter disks and could store only about five megabytes of data, roughly the size of an average MP3 file.


IBM 305 RAMAC (Random Access Method of Accounting and Control) 1956

  1. Capacity: 4.8 MB
  2. Rotational speed: 1,200 rpm
  3. Rent: $3,200 per month (1957)
  4. First business user: Chrysler’s Mopar Division (1957)
  5. Units produced: over 1,000 by 1961

Cloud Computing, what is it?

Maybe we have all heard about cloud computing, but do you know what cloud computing actually is?

Cloud computing is a computing model in which resources such as processing power, storage, networking, and software are abstracted and delivered as services over a network (typically the Internet) using remote-access patterns. The billing model for these services generally resembles that of a public utility. On-demand availability, ease of control, and dynamic, almost limitless scalability are some of the important attributes of cloud computing. The infrastructure set up on this model is usually referred to as the ‘Cloud’.

Here are some categories of services available from the ‘Cloud’:

• Infrastructure as a Service (IaaS)

• Platform as a Service (PaaS)

• Software as a Service (SaaS)

The ‘Cloud’ is usually provided as a service to anyone on the Internet. However, a variant called the ‘Private Cloud’ is increasingly popular for private infrastructure that has the ‘Cloud’ attributes described above. Cloud computing differs from grid computing and parallel computing; grid and parallel computing are better understood as parts of the physical infrastructure used to provide cloud computing.

  • Infrastructure as a Service (IaaS)

In this most basic cloud service model, cloud providers offer computers – physical or, more often, virtual machines – along with raw (block) storage, firewalls, load balancers, and networks. IaaS providers supply these resources on demand from large pools installed in data centers. Local area networks, including IP addresses, are part of the offer. For wide-area connectivity, the Internet can be used, or – in carrier clouds – dedicated virtual private networks can be configured.

To deploy their applications, cloud users install operating system images on the machines along with their application software. In this model, the cloud user is responsible for patching and maintaining the operating systems and application software. Cloud providers typically bill IaaS services on a utility computing basis; that is, cost reflects the amount of resources allocated and consumed.
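Utility-style billing amounts to metering each resource and multiplying by a rate. A minimal sketch of that calculation – the rates and resource names here are made up purely for illustration; real providers publish their own price lists:

```python
# Hypothetical IaaS utility billing: every rate and resource name below
# is invented for illustration, not taken from any real provider.
RATES = {
    "vcpu_hours": 0.04,         # $ per vCPU-hour
    "ram_gb_hours": 0.005,      # $ per GB of RAM per hour
    "storage_gb_months": 0.10,  # $ per GB of block storage per month
}

def monthly_bill(usage):
    """Sum the cost of each metered resource actually consumed."""
    return sum(qty * RATES[resource] for resource, qty in usage.items())

# e.g. 2 vCPUs and 4 GB of RAM running a full 720-hour month, plus 50 GB of disk
usage = {"vcpu_hours": 2 * 720, "ram_gb_hours": 4 * 720, "storage_gb_months": 50}
print(round(monthly_bill(usage), 2))  # 57.6 + 14.4 + 5.0 = 77.0
```

The key property is that an idle deployment costs only its allocated storage; compute charges stop when the virtual machines do.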

Infrastructure as a Service, or IaaS, is a cloud platform through which businesses can obtain equipment in the form of hardware, servers, storage space, and so on, paid for on a per-use basis.

Moreover, IaaS is a branch of cloud computing that has gathered attention among entrepreneurs, largely with the prime motive of making their business environments more organized and in sync with organizations’ ongoing operational activities.

When we talk about how IaaS functions, we are not talking about a single machine that does all the work; it is simply a facility offered to business enterprises that gives users the leverage of extra storage space in servers and data centers.

  • Platform as a Service (PaaS)

In the PaaS model, cloud providers deliver a computing platform and/or solution stack, typically including an operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying compute and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually.

  • Software as a Service (SaaS)

In this model, cloud providers install and operate application software in the cloud, and cloud users access the software from cloud clients. The cloud users do not manage the cloud infrastructure and platform on which the application runs. This eliminates the need to install and run the application on the cloud user’s own computers, simplifying maintenance and support. What makes a cloud application different from other applications is its elasticity. This can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines.

This process is invisible to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant; that is, any machine serves more than one cloud-user organization. It is common to refer to special types of cloud-based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service.

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user.
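The elasticity-plus-load-balancing idea described above can be sketched as a toy round-robin dispatcher over a pool of cloned workers. Everything here – the worker names, the `scale_out` method, the request strings – is hypothetical and only illustrates the pattern:

```python
from itertools import cycle

# Toy model of SaaS elasticity: requests are spread round-robin over a
# pool of cloned "virtual machines"; scaling out just adds more clones.
class LoadBalancer:
    def __init__(self, workers):
        self.workers = list(workers)
        self._next = cycle(self.workers)

    def scale_out(self, worker):
        """Clone a new worker into the pool to meet rising demand."""
        self.workers.append(worker)
        self._next = cycle(self.workers)  # rebuild the rotation over the new pool

    def dispatch(self, request):
        worker = next(self._next)  # the user never sees which clone answered
        return f"{worker} handled {request}"

lb = LoadBalancer(["vm-1", "vm-2"])
results = [lb.dispatch(f"req-{i}") for i in range(4)]
print(results)
# ['vm-1 handled req-0', 'vm-2 handled req-1', 'vm-1 handled req-2', 'vm-2 handled req-3']
```

A real balancer would also weigh load and health-check its workers, but the single-access-point property is the same: callers address the balancer, never an individual clone.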

  • Cloud Clients

Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets, and smartphones. Some of these devices – cloud clients – rely on cloud computing for all or a majority of their applications, so they are essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5, these web user interfaces can achieve a similar or even better look and feel than native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin-client Windows computing) are delivered via a screen-sharing technology.