
Thread: Fermi: First supercomputing GPU?

  1. #1
    News Hound

    Status
    Lil' ½ Dead is offline

    Last Online
    17-10-2013 @ 09:38
    Join Date
    Mar 2009
    Location
    Carolina
    Posts
    5,234
    CPU: Intel 2500k 3.8Ghz
    M/B: MSI Z68A-GD65 (3)
    RAM: G.SKILL Sniper 16GB
    GPU: Gigabyte GTX 670 OC
    • Lil' ½ Dead's Full Spec's
      • Case:
      • NZXT: White Phantom Case
      • PSU:
      • In Win Commander 750W
      • Cooling:
      • Thermalright HR-02
      • Sound:
      • Realtek High Definition
      • Monitor:
      • LG 47LK520
      • OS:
      • Windows Seven x64
    Thanks
    135
    Thanked 1,253 Times in 852 Posts
    Points: 58,573, Level: 75
    Level completed: 2%, Points required for next Level: 1,477
    Overall activity: 66.0%

    Fermi: First supercomputing GPU?

    http://www.techradar.com/news/comput...plained-657489

    Hardware comes and goes. Sometimes we get a little damp in anticipation, but usually it's nothing you want to run out into the street and proclaim.
    There is an exception to this: high-power graphics cards. We love these. They make games sexy, and that makes us sexy. At the heart of these is the GPU, and when Nvidia announces it has a new and wonderful one, it is time to take notice. It's codenamed Fermi, after the renowned nuclear physicist Enrico Fermi.
    From being a humble bit-player (geddit?), the GPU has grown to be a crucial component; next to the processor, this is where you want the power concentrated. There are all sorts of applications you could use a GPU for, but on the home PC it is essentially games that drive everything.
    Offloading the reams of processor-intensive floating point calculations that 3D demands to a chip dedicated to the task is the most cost-effective way to get things moving. Rather cheekily, Nvidia starts its whitepaper on Fermi by claiming to have invented the GPU in 1999.
    The GeForce 256 was indeed the first to have transform and lighting in hardware, but come on guys: dedicated graphics chips date back to the 70s with blitters, then 2D, and finally 3D accelerators (remember the buzz the Voodoo made?). Even if you define the GPU as covering only fully programmable 2D/3D acceleration chips, that's pushing it.
    There are some big claims being made for Fermi. It is, apparently, the most advanced GPU ever made and the first GPU designed specifically for 'supercomputing': running big and complicated jobs such as simulating the gravitational interactions of an entire galaxy.
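    To make that galaxy example concrete, here's a minimal sketch of the sort of job Fermi is pitched at: a naive all-pairs gravitational N-body step written as a CUDA kernel, one thread per body. This is purely illustrative (the kernel, the Body struct and the softening constant are our own assumptions, not anything from Nvidia's whitepaper).

    Code:
    // Naive O(n^2) gravitational acceleration: one thread per body.
    #include <cuda_runtime.h>

    struct Body { float x, y, z, mass; };

    __global__ void accel(const Body* bodies, float3* acc, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        const float soft2 = 1e-4f;            // softening avoids divide-by-zero
        float3 a = make_float3(0.0f, 0.0f, 0.0f);
        for (int j = 0; j < n; ++j) {         // each thread loops over all bodies
            float dx = bodies[j].x - bodies[i].x;
            float dy = bodies[j].y - bodies[i].y;
            float dz = bodies[j].z - bodies[i].z;
            float r2 = dx*dx + dy*dy + dz*dz + soft2;
            float inv_r = rsqrtf(r2);         // fast reciprocal square root
            float s = bodies[j].mass * inv_r * inv_r * inv_r;
            a.x += dx * s; a.y += dy * s; a.z += dz * s;
        }
        acc[i] = a;
    }

    // Launch with one thread per body, e.g.:
    // accel<<<(n + 255) / 256, 256>>>(d_bodies, d_acc, n);

    Millions of independent, identical floating point calculations like this are exactly the streaming workload the article is describing.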


    PRETTY CAR!: Why bother spending all that time modelling the gravitational forces of a galaxy when you can have graphics this smooth?

    What Nvidia has created is essentially a storming maths co-processor which just happens to sit on a graphics card and run your graphics for you too. As with Nvidia's current range, it can run in two modes, compute and graphics, and it's the compute mode that has had Nvidia hanging out the flags.
    The versions aimed at proper serious HPC applications don't even have a graphics output; the chip is used purely as a parallel computing engine. Fermi can switch between the two modes, from horribly complicated maths to rasterising, in a few clock cycles.
    Apparently it is "the next engine of science". Tub-thumping aside, it does appear to be something rather special.

    Three billion transistors

    The silicon has been designed from the ground up to match the latest concepts in parallel computing. The basic features list reads thus: 512 CUDA cores, Parallel DataCache, Nvidia GigaThread and ECC support.
    Clear? There are three billion transistors for starters, compared to 1.4 billion in a GT200 and a mere 681 million in a G80. There's shared, configurable L1 and L2 cache and support for up to 6GB of GDDR5 memory.
    The block diagram of Fermi looks like the floor plan of a dystopian holiday camp: sixteen rectangles, each with 32 smaller ones inside, all nice and regimented in neat rows. That's your 16 SM (Streaming Multiprocessor) blocks, with 512 little execution units, called CUDA cores, inside.
    Each SM has local memory, register files, load/store units and a thread scheduler to run its 32 associated cores. Each of those cores can run a floating point or an integer instruction every clock, and double precision floating point operations at half that rate, which will please the maths department.
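    Here's a rough sketch of how that layout maps onto code: CUDA threads are grouped into blocks, blocks are farmed out across the SMs, and each SM executes its threads 32 at a time (a "warp", matching the 32 cores per SM). The kernel below is a hypothetical example, not Nvidia's code.

    Code:
    #include <cuda_runtime.h>

    // y = a*x + y, one element per thread. The multiply-add on each element
    // is the kind of single-clock floating point instruction a CUDA core runs.
    __global__ void saxpy(int n, float a, const float* x, float* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        int n = 1 << 20;
        float *x, *y;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));

        // 256 threads per block = 8 warps of 32; the global scheduler
        // spreads the 4,096 blocks across however many SMs the chip has.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        cudaFree(x);
        cudaFree(y);
        return 0;
    }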


    PETROL HEADS REJOICE: The inside of your car in a future racing game? Nvidia thinks so

    Initial trials have it pegged at four to five times faster than a GT200 running double precision apps. Not entirely fair perhaps, as this is Fermi's party trick, but still, gosh.
    Nvidia's GigaThread engine, the global scheduler, intelligently ties all these threads together and pipes data around to use this wealth of processing power. We are in a world of out-of-order thread block execution and application context switching here.
    Parallel DataCache provides configurable, unified L1 and L2 caches. Traditional load and read paths, which have to be flushed and managed carefully, have been replaced with shared memory for all tasks. Fermi is also the first GPU with ECC (error checking and correction).
    The transistors are so teeny and carry such a small charge that they can easily be flipped by alpha particles from space (seriously) or, more likely, by electromagnetic interference, creating a soft error. The error correction covers the register files, shared memory, caches and main memory.
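    For a sense of what that configurable shared memory/L1 arrangement looks like from the programmer's side, here's a hypothetical kernel that stages data in per-block shared memory, plus the Fermi-era runtime call for biasing the on-chip split towards L1. The kernel and sizes are our own illustration; the 48KB/16KB figures come from Nvidia's Fermi material.

    Code:
    #include <cuda_runtime.h>

    // Reverse each 256-element chunk of an array via shared memory.
    // The __shared__ buffer lives in the SM's configurable shared/L1 storage.
    __global__ void reverse_chunks(float* data)
    {
        __shared__ float tile[256];               // per-block on-chip buffer
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        tile[threadIdx.x] = data[i];              // stage in from global memory
        __syncthreads();                          // wait for the whole block
        data[i] = tile[blockDim.x - 1 - threadIdx.x];
    }

    int main()
    {
        // Ask for the 48KB L1 / 16KB shared split rather than the default
        // 16KB L1 / 48KB shared: the "configurable" part of the cache story.
        cudaFuncSetCacheConfig(reverse_chunks, cudaFuncCachePreferL1);

        float* d;
        cudaMalloc(&d, 1024 * sizeof(float));
        reverse_chunks<<<4, 256>>>(d);
        cudaDeviceSynchronize();
        cudaFree(d);
        return 0;
    }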
    It's easy to get lost in all these technical terms. Essentially what we have is a chip containing lots of little processors, with a smart control system that enables them to work as one on a mass of data. It's flexible, scalable and perfect for streaming data, where parallel operations work well.
    It's a fundamentally different approach from a CPU, which has to cope with serial tasks. Pound for pound, the GPU offers (Nvidia claims, anyway) ten times the performance for one twentieth of the price.

    CUDA

    Hardware is half the whole, of course. Nvidia CUDA (Compute Unified Device Architecture) is the C-based programming environment that lets you tap into all this multicore parallel processing goodness.
    Nvidia has expanded the term to cover its whole GPU-based approach, hence naming the execution units CUDA cores. Language support includes C++, FORTRAN, Java, Matlab and Python. Yes, people still use FORTRAN; it supports double precision floating point, you see.
    Support also includes OpenGL, DirectX, 32/64-bit Windows and Linux, and there are standard library calls for such intensive tasks as Fast Fourier Transforms (such good fun).
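    Those FFT calls come via Nvidia's cuFFT library, which ships with the CUDA toolkit. A minimal sketch of a forward 1D transform looks something like this (names such as d_signal are placeholders, error checking is omitted, and you link against -lcufft):

    Code:
    #include <cuda_runtime.h>
    #include <cufft.h>

    int main()
    {
        const int N = 1024;
        cufftComplex* d_signal;
        cudaMalloc(&d_signal, N * sizeof(cufftComplex));
        // ... copy your signal into d_signal here ...

        cufftHandle plan;
        cufftPlan1d(&plan, N, CUFFT_C2C, 1);                   // one 1D complex transform
        cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD); // in-place forward FFT

        cufftDestroy(plan);
        cudaFree(d_signal);
        return 0;
    }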
    Never mind all the physics stuff and programming: does this mean you can whack a Fermi card in a PC and expect it to run Direct3D games quickly? Yes it does. Despite all the high-end apps jabber, this is still an Nvidia GPU, and making graphics cards is its business.
    You might want to know exactly how fast a Fermi-based card is going to be. Nvidia wouldn't be drawn into anything other than fairly vague ideas. That's good enough for us: apparently, it'll be a blast.
    In theory it is eight times faster than the best current GeForce; in practice, what with other limiting factors, you'll see less than that, but it'll still destroy them.

    Get me one now

    Hold your horses. The first cards are due next year, although exactly how you define "availability" is something of an issue; "next year" is far more certain. It'll be sold under the three Nvidia brands: GeForce for consumers, Tesla for the lab boys and Quadro for workstations.
    Details on the first consumer card are currently very sparse. It'll be a high-end GeForce version to grab the headlines, at a price comparable with the current range. There's no news on the final spec, though, or even on whether it'll sport the full 512 cores as demonstrated by Nvidia when it announced Fermi.
    Quite possibly it'll be a reduced-core version; the full 512 really is aimed at GPU computing, after all. Fermi has been four years in the making and represents a horrific amount of work. It's destined to be at the heart of Nvidia's range and, so far, looks fantastic.
    Get this: we asked if it could ray-trace a 3D world fast enough for gaming in real time on a consumer machine. Nvidia said yes it could…
    Those ray-traced images look sweet. I wonder what kind of fps Fermi gets trying to compute that.

    This was an interesting quote:

    Get this: we asked if it could ray-trace a 3D world fast enough for gaming in real time on a consumer machine. Nvidia said yes it could…
    Last edited by Lil' ½ Dead; 24-12-2009 at 19:06.

  2. #2
    OC Jedi Master

    Status
    Skyguy is offline

    Last Online
    03-08-2015 @ 17:03
    Join Date
    Jun 2008
    Location
    Canuckland
    Posts
    4,819
    CPU: Core i7 3820 @ 4.5GHz
    M/B: ASUS Rampage IV
    RAM: 16GB G.Skill RipjawsZ
    GPU: It varies
    • Skyguy's Full Spec's
      • Case:
      • SilverStone Raven RV03
      • PSU:
      • SilverStone Strider Gold 1200W
      • Cooling:
      • Noctua D14 air, XSPC 750 water
      • Sound:
      • ASUS Xonar w/ Beyer Dynamics DT770/80
      • Monitor:
      • HP 2465; Samsung 226; Acer 22", Dell E22WFP & 1907FP
      • OS:
      • Win 7 64-bit
      • Misc:
      • Mionix Naos 5000, MaxKeyboard Nighthawk, Mionix Propus 380
    Thanks
    122
    Thanked 898 Times in 682 Posts
    Points: 69,025, Level: 81
    Level completed: 58%, Points required for next Level: 725
    Overall activity: 20.0%


    Ray tracing in real time... at 640 resolution, though, and with the Quake 3 engine. That's already been proven. Show us something we don't know already.

  3. #3
    Regular Member

    Status
    Gareth is offline

    Last Online
    07-06-2010 @ 08:02
    Join Date
    Jul 2006
    Location
    Cambridgeshire, England
    Posts
    1,703
    CPU: Intel Core 2 Quad Q9450 / 3.30GHz
    M/B: ASUS P5Q-E
    RAM: 8.00GB DDR2 1066 PC8500
    GPU: Powercolor Radeon HD4870 1GB GDDR5 800MHz/3,700MHz + GeForce 8600GT
    • Gareth's Full Spec's
      • Case:
      • NZXT Whisper Series Classic
      • PSU:
      • 730W Hiper
      • Cooling:
      • Thermaltake V-14 Pro 3x 120mm intake 1x 120mm exhaust 2x 80mm exhaust
      • Sound:
      • Soundblaster Audigy SE
      • Monitor:
      • Packard Bell Viseo 230Ws 1920x1080 + AOC 22" 1680x1050 + Mag Innovisions 19" 1440x900
      • OS:
      • Windows 7 Professional x64
    Thanks
    34
    Thanked 20 Times in 20 Posts
    Points: 13,481, Level: 35
    Level completed: 19%, Points required for next Level: 569
    Overall activity: 99.0%


    Wow, those graphics are stunning! Amazed.

  4. #4
    Banned

    Status
    Doctor_Death is offline

    Last Online
    25-02-2016 @ 12:04
    Join Date
    Apr 2008
    Location
    Punxsutawney, Pa. - USA
    Posts
    11,794
    CPU: Core i7 3930K
    M/B: ASRock X79 Extreme9
    RAM: 64GBs Kingston Beast 2133MHz
    GPU: Two EVGA GTX 690s in Quad SLI
    • Doctor_Death's Full Spec's
      • Case:
      • Corsair 900D
      • PSU:
      • Corsair AX1200i
      • Cooling:
      • Complete system cooled by EK Water Blocks
      • Sound:
      • Creative Sound Core 3D Audio 7.1
      • Monitor:
      • Dell U3011 30" 2560 x 1600 Res
      • OS:
      • Win 7 Ultimate 64Bit
      • Misc:
      • Asus Blu-Ray, Asus 24X DVD Burner, Max Mechanicalkeyboard / Razer Abyssus Mouse / Razer eXact Mat with wrist rest.
    Thanks
    339
    Thanked 2,582 Times in 1,551 Posts
    Points: 490,204, Level: 100
    Level completed: 0%, Points required for next Level: 0
    Overall activity: 0%


    Well, it looks like we will have to wait for the second week of January; that's when CES starts. There's one thing you can count on: nVidia will not show up empty-handed.

  5. #5
    OC Jedi Knight

    Status
    logan is offline

    Last Online
    13-10-2011 @ 19:42
    Join Date
    Mar 2009
    Posts
    1,217
    CPU: Phenom II X4 925
    M/B: Asus M5A97
    RAM: 4 GB Mushkin DDR3 1600
    GPU: GTX 460
    Thanks
    46
    Thanked 118 Times in 111 Posts
    Points: 5,268, Level: 21
    Level completed: 44%, Points required for next Level: 282
    Overall activity: 24.0%


    Damn, just as I ordered an HD5850!

    At least AMD beat nVidia to market; it would've been a trainwreck for AMD had they shown up late and had to fight Fermi.



