GPU vs CPU rendering, 2012

8 Nov, 2011 Victor Skea
8 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Good afternoon. I have a question about GPU rendering. I've recently been quietly putting together a home mini-studio. So far it consists of two machines, both based on the Core i7 2600 with 16 GB of RAM. One is positioned as a graphics workstation, the other as a gaming PC, so they differ only in the video card: the workstation has a Quadro 4000, the gaming PC a Radeon 6970. The gaming machine was meant to help with rendering. But recently I learned about NVIDIA's CUDA technology and their Tesla GPUs. Judging by their beautiful charts, the performance gain of GPU rendering over the CPU exceeds 200x. Also in their favor is the fact that CUDA is already supported on the Quadro 4000, so I was able to test it personally. And it turned out that one average video card renders faster than two modern processors, albeit less accurately; but that is just the crude ActiveShade drawback of V-Ray RT. I got fired up about buying a Tesla. What do you think of rendering on the GPU, and why hasn't CPU rendering gone away yet? Do you think switching to GPU rendering would pay off? In short, is it justified to pay for a single graphics card as much as a whole, very good computer costs?


8 Nov, 2011 Anton (Staff Author)
8 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Hello! This is indeed a very interesting and highly relevant topic today. I got very interested in it myself and dug into what is going on; below I will share my findings on the subject. Here is what I found out about GPU rendering. Before drawing any conclusions about GPU rendering and whether it makes sense, we need to understand the most important thing: what GPU rendering actually is and what type of render engines can use it. I use V-Ray myself, so I took it as the starting point. The main advantage of the V-Ray renderer is its adaptability, that is, its ability to adapt to a particular scene and, based on information about it, perform calculations only where they are really needed. In less important areas the calculation precision can be lowered, which saves substantial computational resources. A very good example of how this works is the adaptive GI engine Irradiance Map, described in our series of articles on V-Ray settings. Engines that can selectively change the precision of calculation in different zones of the scene are called biased (pronounced ['baɪəst]), because they really do take a biased approach to computing the scene, evaluating more precisely only those areas they consider important. In addition, the result of their approximations does not change from render to render. Such engines are the prerogative of CPU rendering only; a biased engine on the GPU is not possible because of the different approach to computation. Another key feature of biased engines is interpolation. It not only speeds up the computation by interpolating the missing samples, but also removes the noise inherent in computer-generated images; in practice, interpolation amounts to blurring. However, the same V-Ray also has algorithms in its arsenal with the properties of those used for GPU rendering. In particular, they have no adaptability, and all areas of the scene are calculated equally, regardless of their importance. A prime example is the GI engine Brute Force. It does not evaluate any areas or determine any importance; it simply hammers the whole scene with the same precision, driven by a single Subdivs setting, without varying the render quality for any area of the scene.
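To make the contrast concrete, here is a toy sketch of the two sampling strategies (my own illustration, not V-Ray code; the samples = subdivs² rule follows V-Ray's usual convention, but treat all names and numbers as assumptions):

```cuda
#include <cmath>
#include <cstdio>

// Stand-ins so the sketch compiles; a real engine would trace rays here.
static float shadeOneSample(int x, int y) { return 0.5f + 0.001f * ((x ^ y) & 7); }
static float estimateImportance(int x, int y) { return (x < 320) ? 1.0f : 0.25f; }

// Brute force: one global Subdivs knob, samples = subdivs^2,
// applied identically to every pixel of the scene.
static float bruteForcePixel(int x, int y, int subdivs) {
    int n = subdivs * subdivs;
    float sum = 0.0f;
    for (int s = 0; s < n; ++s) sum += shadeOneSample(x, y);
    return sum / n;
}

// Biased: the same loop, but the sample budget is cut in regions
// the engine judges unimportant; undersampled spots would then be
// interpolated (i.e., slightly blurred) from their neighbors.
static float biasedPixel(int x, int y, int maxSubdivs) {
    int n = (int)ceilf(maxSubdivs * maxSubdivs * estimateImportance(x, y));
    if (n < 1) n = 1;
    float sum = 0.0f;
    for (int s = 0; s < n; ++s) sum += shadeOneSample(x, y);
    return sum / n;
}

int main() {
    std::printf("brute force: %f\n", bruteForcePixel(10, 10, 8)); // always 64 samples
    std::printf("biased:      %f\n", biasedPixel(400, 10, 8));    // only 16 here
    return 0;
}
```

The biased version does strictly less work wherever its importance estimate allows it to, which is exactly the adaptability that GPU-friendly engines give up.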


8 Nov, 2011 Anton (Staff Author)
8 Nov, 2011 # Re: GPU vs CPU rendering, 2012

GPU rendering is akin to Brute Force and is called unbiased rendering. In fairness, it should be noted that Brute Force is not actually an unbiased engine; it is merely similar to such engines, and the similarity makes the practical difference easy to understand by example. So, unbiased engines can work on the GPU; that is, they can run their computations both on the computer's CPU and on the graphics card. It is these unbiased engines that are meant when people start talking about GPU rendering. As I said, unbiased rendering has no adaptability and, above all, no interpolation. As a result, it is almost impossible to get rid of the noise in unbiased rendering. Of course, you can make the noise nearly invisible (especially if you don't go looking for it 😁), but the payback will be an abnormal increase in render time. Unbiased engines are ideal for GPU rendering, simply because video cards are by their nature designed for exactly this kind of calculation. The point is that an unbiased renderer computes the whole picture at once, in many threads simultaneously. This is a perfect fit for the massively parallel pipeline architecture of GPUs. A characteristic feature of an unbiased engine is that it has no notion of the render being finished. Unbiased rendering lasts forever, every second simply adding new color data to the image. At its core, an unbiased renderer just keeps adding new color information into the frame buffer until the user forcibly interrupts the process or an automatic stop condition, specified manually before rendering, terminates the calculation. Just watch the famous video presentation of the unbiased engine V-Ray RT: if you look closely at the renderer in the video, you will see that the picture is very noisy while the camera is moving. But as soon as it stops at a certain angle, the noise begins to disappear and the picture becomes brighter and clearer. It may seem that the quality is deliberately reduced during movement so that the RT viewport lags less, and then improved only when the movement stops. But that is not the case. The unbiased renderer keeps rendering the current view with the same intensity the whole time. It is just that, while the camera rests at a given angle, the visible image keeps being supplemented with new color information, which shows up as a visible improvement. Leave the calculation alone, and the visible image will keep refining with every moment; the unresolved points that we see as noise will become fewer and fewer. For comparison, a biased engine renders in successive, rather large portions. It does not refine the image endlessly; it performs the render to a clearly specified quality level. Unlike massively parallel unbiased rendering, biased rendering is more sequential and is best suited to the CPU.
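A minimal sketch of that never-finished accumulation (my own illustration of the general principle, not V-Ray RT's actual code; all names and sizes are assumptions): on every pass, each pixel folds one more random sample into a running average, so the frame buffer simply keeps getting cleaner until someone stops the loop.

```cuda
#include <cuda_runtime.h>

constexpr int W = 320, H = 240;

__device__ float randomSample(int x, int y, int pass) {
    // Stand-in for tracing one random light path through the pixel.
    unsigned h = x * 73856093u ^ y * 19349663u ^ pass * 83492791u;
    return (h % 1000u) / 999.0f;
}

// One progressive pass: every pixel folds one new sample into
// its running average. Noise shrinks; the image never "finishes".
__global__ void accumulatePass(float* framebuffer, int pass) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;
    int i = y * W + x;
    float s = randomSample(x, y, pass);
    framebuffer[i] += (s - framebuffer[i]) / (pass + 1); // running mean
}

int main() {
    float* fb;
    cudaMalloc(&fb, W * H * sizeof(float));
    cudaMemset(fb, 0, W * H * sizeof(float));
    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    // "Forever" in principle; here a fixed pass count stands in
    // for the user pressing stop or a preset termination condition.
    for (int pass = 0; pass < 256; ++pass)
        accumulatePass<<<grid, block>>>(fb, pass);
    cudaDeviceSynchronize();
    cudaFree(fb);
    return 0;
}
```

Note that the math itself has no "done" condition; the 256-pass cut-off is an arbitrary substitute for the user interrupting the render.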



8 Nov, 2011 Anton (Staff Author)
8 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Now we have come close to comparing CPU and GPU rendering. But this comparison is the central paradox of the topic. Comparing CPU and GPU rendering numerically, in terms of faster or slower, is impossible. More precisely, it is possible to compare the speed of the CPU and the GPU on an unbiased renderer. However, that is as silly as comparing a tractor and a sports car at plowing a field just because both have 300 horsepower and both have wheels 😁 By the way, the CPU-vs-GPU "competitors" stubbornly ignore this, as do the marketing slogans of the video card manufacturers. The fact is that no sane person, in a real situation, would run an unbiased renderer designed for the GPU on a CPU. That would be, to put it mildly, silly. The only even remotely relevant comparison is unbiased GPU rendering versus biased CPU rendering. However, such a comparison can only be subjective, since the render time of unbiased rendering is, strictly speaking, infinite. What would a subjective comparison mean here? It means you can only compare the time needed to produce visually indistinguishable images with the unbiased and biased engines. That is, you take a particular scene and render it with a biased engine on the CPU. Then you run the same scene through an unbiased renderer on the GPU and wait until the picture quality becomes visually comparable to the biased result. Only then do you compare the time spent on each calculation. And this is where the "features" of the unbiased engine begin to show. The first is that getting an equally clean (noise-free) image without the interpolation actively used in biased rendering will not come cheap on an unbiased engine. You will have to wait a long time while the card supplements the frame buffer with color information until the visible black and gray grain disappears completely. (As a rule of thumb for such Monte Carlo methods, noise falls roughly as 1/√N with the number of samples N, so halving the noise costs about four times the samples.) The second "surprise" is that unbiased engines have technological limitations, and their capabilities are much more modest than those of a biased renderer. For example, the same V-Ray RT has a number of unsupported features; it simply cannot render everything, including some seemingly basic effects that have long been available in the biased renderer. Still, these are, by and large, trivia. Suppose the first quirk can be compensated by buying additional cards, and the second by building the scene with other methods. The third problem, however, has no reasonable solution. It is the insufficient amount of on-board video memory. For serious work, the one, or at best two, gigabytes of graphics memory installed on a video card are barely enough to render a commercial scene such as a classic interior. When you try to render an everyday picture of the scale most visualizers work at, the unbiased renderer simply bails out due to a banal lack of video memory.
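That last limitation is easy to hit in practice: a GPU renderer has to fit the whole scene into on-board memory before it can trace a single ray. A minimal sketch of such a check (cudaMemGetInfo is a real CUDA runtime call; the 3 GB scene figure is just an assumed example):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    // Ask the CUDA runtime how much on-board memory is free on the GPU.
    if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess) {
        std::fprintf(stderr, "No CUDA-capable device available\n");
        return 1;
    }
    std::printf("GPU memory: %zu MB free of %zu MB total\n",
                freeBytes >> 20, totalBytes >> 20);

    // Hypothetical footprint of geometry + textures for a production scene.
    // If it does not fit, a GPU renderer without out-of-core support
    // has no choice but to abort.
    size_t sceneBytes = 3ull << 30; // assume a 3 GB scene
    if (sceneBytes > freeBytes)
        std::printf("Scene does not fit in VRAM: the GPU render would fail\n");
    return 0;
}
```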


8 Nov, 2011 Anton (Staff Author)
8 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Now we are gradually getting to the hardware question. Is a Tesla essentially different from a Quadro? No, not at all 😁 The only difference is the large amount of video memory installed in the Tesla series cards. Some models already carry 6 GB of video memory, which is not bad in general, but still not enough for heavy renders. The funny part is that the price of those few extra gigabytes of graphics memory is simply terrifying: for the price of one such card you can buy five separate machines for distributed rendering. And that is not even the most "ridiculous" part. The fact is that in GPU rendering the difference between this card and a gaming card built on the same chip also comes down only to the amount of video memory, and not at all to the rendering speed itself. Tested! Now do you understand why CPU rendering still hasn't gone anywhere? 🙂


8 Nov, 2011 Maks (Staff Author)
8 Nov, 2011 # Re: GPU vs CPU rendering, 2012
I just want to note that all the arguments above are valid only at the time of writing, i.e., the end of 2011. Rendering photorealistic images on the GPU is a very promising direction. You don't have to think long to realize that video cards were made for drawing images in the first place 🙂 The current situation, which is not the most favorable for GPU rendering, is most likely only temporary and will change within the next few years. Rendering high-quality images on the video card is in its infancy. In the future a real battle between CPU and GPU will become possible. But that will take some time 😉


8 Nov, 2011 Anton (Staff Author)
8 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Yes, I think that in the future it will be a hybrid technology, rendering some effects on the CPU and others on the GPU. At least, that's where everything is heading.


8 Nov, 2011 Victor Skea
9 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Thank you so much for such a detailed analysis of the issue. You have saved me (and not only me) a huge pile of money 🙂 It really is better to wait for the future. And at our current pace of progress it will arrive in a couple of years. By then, for the same money, cards of a completely different generation will be on sale, and using them will make far more sense. You can't imagine how grateful I am. After your lessons and answers even the most difficult things become elementary. Thank you for existing!


9 Nov, 2011 Vladimir
9 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Such detailed and complete answers to a question can be found very rarely. Thank you so much; I read it with great interest and learned a lot. 👍


10 Nov, 2011 LexxD
10 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Anton, thank you very much 🙂🙂🙂 Very useful!!! 🙂🙂


22 Nov, 2011 Roman
22 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Incidentally, Arion Render is not limited by video memory, unlike, for example, iray; at least that's what I thought :| I would love to put in a Xeon 7500 and forget about GPU rendering 😁


23 Nov, 2011 Natalya
23 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Hi all, I really liked the article, Anton, thank you! I work in Maya, animating not very complex models. I'm about to buy a new machine, and the problem of choosing a video card for it has come up. I wanted to get a Quadro, but now I think I'll take a gaming card. The only question is how powerful a card I need for ZBrush. I'll also be using it for tracing my projects.


8 Nov, 2011 Maks (Staff Author)
23 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Good morning, Natalia! Come over to this topic, it is being discussed right now: What graphics card to get for working in 3d?


25 Nov, 2011 Sergej102
25 Nov, 2011 # Re: GPU vs CPU rendering, 2012
I have a question about the GPU: is CUDA of any extra use in Max? Maybe some plugins or features only recognize CUDA? I'm asking because if Max doesn't really need it for anything, I'll lean more towards an AMD graphics card. When I got an AMD HD6950 for work, I was afraid V-Ray RT wouldn't like it, but the fears were unfounded: everything worked (via OpenCL). Some super-fast ActiveShade (+ V-Ray RT), however, I never got.


8 Nov, 2011 Anton (Staff Author)
25 Nov, 2011 # Re: GPU vs CPU rendering, 2012

1. What is CUDA and what is the user's benefit from it?
2. I like AMD video cards, but I choose NVIDIA because CUDA is there. Am I doing the right thing?
3. Where is CUDA used in 3ds Max?
4. Is my video card required to support CUDA? Will I lose something if it does not support CUDA?

To answer all these questions, you should understand what CUDA is.

CUDA is a hardware and software architecture that lets developers create applications that perform calculations on video cards supporting GPGPU technology. In simple terms, it is one of the programming languages that allows you to write programs which run not on the CPU, as programs usually do, but on NVIDIA graphics cards. CUDA is not some separate chip on the video card, it is not a function that speeds up the GPU, and it is not an effect that improves graphics. No. In a video card, it is simply the ability to work with this particular programming language.
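To make that less abstract, here is a minimal sketch of a complete CUDA program (my illustration, unrelated to any renderer): an ordinary C++ host program whose specially marked function, the kernel, is executed on the video card by many threads at once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// __global__ marks a function compiled for and executed on the GPU.
// Thousands of threads run it in parallel, one array element each.
__global__ void addArrays(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *ha = new float[n], *hb = new float[n], *hout = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dout;                    // buffers in video memory
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dout, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    addArrays<<<(n + 255) / 256, 256>>>(da, db, dout, n);
    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);

    std::printf("out[0] = %f\n", hout[0]);    // prints 3.000000
    cudaFree(da); cudaFree(db); cudaFree(dout);
    delete[] ha; delete[] hb; delete[] hout;
    return 0;
}
```

The same idea written against the vendor-neutral OpenCL API runs on AMD cards as well, which is exactly the point made below.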

Now that we know what the abbreviation CUDA stands for, and have thereby answered the first question, we can proceed to the second and, for clarity, rephrase it:
"I like video cards that support one programming language, but I choose others because they support another programming language. Am I doing the right thing?"
Are you going to write programs for the GPU, and is CUDA more convenient for you? No? Then why do you care about support for one particular way of programming it?

It's because you have been caught up in the skillful marketing around NVIDIA, which presents CUDA as the one and only technology for performing calculations on the GPU 🙂

Let's now answer the remaining questions.

Where is CUDA used in 3ds Max? - Nowhere, CUDA has nothing to do with 3ds Max.

Is my video card required to support CUDA? Will I lose something if it does not work with CUDA? - No. Your video card does not have to be tied to limited software monopolized by one of the hardware manufacturers.

As you yourself said, the same V-Ray RT works just fine with OpenCL.

It should be understood that the developers of popular software, be it V-Ray or any other program, will not limit sales of their product to owners of one particular vendor's hardware and completely ignore their other potential customers, especially considering that there are no fewer AMD adherents than NVIDIA ones. That would be stupid, to put it mildly; above all it is a banal loss of profit. Rather, it is NVIDIA that dreams of all developers limiting themselves by writing software locked to its video cards. But that is about as realistic as the existence of a president of the universe 😁
OpenCL is (simplified) a programming language that lets you write programs performing calculations on a video card regardless of its manufacturer. Its future is much more promising than that of CUDA, which is restricted in its choice of hardware.

The ability of a program to work only on some video cards while blindly ignoring others is a shortcoming, not in any way an advantage, and developers understand this well. The only exception is special software and hardware systems focused on a single highly specialized task, for example, medical or financial calculations. But those have nothing to do with desktop software users and cannot serve as a benefit to them.

If you are interested in reading more on this topic, follow the links in this post; there is more specific information there.

All that remains is to sum up.

Don't look back at misleading marketing slogans. Buy the video card you like best 😉



25 Nov, 2011 Sergej102
25 Nov, 2011 # Re: GPU vs CPU rendering, 2012
Now everything is finally clear 🙂 Thank you!!! 🙂


5 Dec, 2011 Vasiliy
5 Dec, 2011 # Re: GPU vs CPU rendering, 2012
Anton, thanks! Very useful information.


21 Dec, 2011 alios
21 Dec, 2011 # Re: GPU vs CPU rendering, 2012
Ay-ay-ay, look what you've done! To chew it over and explain it like that: you probably have no equal. Thank you; you've saved me both time and money.


27 Dec, 2011 Roman
27 Dec, 2011 # Re: GPU vs CPU rendering, 2012
Thanks, Anton! You're doing a good deed!


8 Nov, 2011 Anton (Staff Author)
15 Jan, 2012 # Re: GPU vs CPU rendering, 2012

By the way, Nvidia recently opened the source code of its CUDA compiler, so it can be targeted at other architectures and processors, including AMD GPUs and x86 CPUs. Which, in fact, was to be expected. So now even this compelling argument for buying an Nvidia graphics card is gone. You can safely choose a video card by performance and price, without fixating on a single video hardware vendor 😉



19 Jan, 2012 Aleksandr
19 Jan, 2012 # Re: GPU vs CPU rendering, 2012
Hello. Max 2012 got the new Nitrous viewport driver in place of Direct3D. Subjectively, it felt quicker and more responsive on an nvidia 540m than on a Radeon 5870.


8 Nov, 2011 Anton (Staff Author)
19 Jan, 2012 # Re: GPU vs CPU rendering, 2012
Hello! Viewport speed with a gaming graphics card also depends on the processor and on the viewport resolution. The 540m is a mobile graphics card, while the Radeon 5870 is a desktop one; so the first sits in a laptop, the second in a desktop computer. So: are the processors in the laptop and the desktop different? Is the screen resolution attached to the desktop graphics card higher than the laptop's display resolution? If so, then the comparison, unfortunately, is not objective, even in the context of a subjective impression 🙂 Comparing different video cards on different platforms and under different conditions is simply impossible. And anyway, what does NVIDIA have to do with Nitrous? Nitrous works with a Radeon too 😁 Turn Nitrous on with the Radeon 5870 and work in Nitrous on the Radeon. Who is forcing you to sit in Direct3D instead of Nitrous on the Radeon?


19 Jan, 2012 Aleksandr
19 Jan, 2012 # Re: GPU vs CPU rendering, 2012
The desktop is an i7 970 Gulftown, the laptop an i7 2630QM, and the resolution is FullHD on both. The Radeon constantly produces artifacts in Nitrous, so I work in D3D; but the laptop is faster in Nitrous than in D3D. And besides, I did write that it was subjective 😁


8 Nov, 2011 Anton (Staff Author)
19 Jan, 2012 # Re: GPU vs CPU rendering, 2012
Hello! Yes, I understood that it was subjective 🙂 I just don't know what confused you. I worked in Max 2012 on a Radeon myself 😁 and didn't notice any artifacts. Although, as far as I'm concerned, artifacts are only terrible in the render; in the viewport, for the sake of its performance, one can put up with almost anything.


19 Jan, 2012 Aleksandr
19 Jan, 2012 # Re: GPU vs CPU rendering, 2012
Well, here's one example: when working with splines, they are sometimes invisible and vertices go missing. In general, hellish hell and pain. Still, it's good that Autodesk kept the option of working in D3D (though they might not have 🙂. Well, something like that 🙄 P.S. I kindly ask you to reply in the topic about rendering and posting HDRi.


13 Feb, 2012 Vadim
13 Feb, 2012 # Re: GPU vs CPU rendering, 2012
There is one thing I can't understand here. In theory, a video card can perform the same rendering calculations as the CPU. In multithreaded rendering, each bucket is rendered by its own core. So why can't the usual V-Ray algorithms be executed on the video card by splitting the image into a number of pieces corresponding to the number of GPU threads? Then the CPU and the video card could render simultaneously. Why can't the graphics card do the same calculations as the processor?


8 Nov, 2011 Anton (Staff Author)
13 Feb, 2012 # Re: GPU vs CPU rendering, 2012
I believe the answer to this question lies deep in programming, but most likely it is trivial; otherwise such a possibility would have been implemented long ago. Most likely the point is that a video card thread is not equal to a processor thread, and what one core of a modern CPU finishes in 10 seconds would take a single streaming processor of the GPU 10 hours. It would be naive to believe that a modern graphics card, which can carry from one and a half to two thousand of these processors, is the same thing as a CPU with 2,000 cores 🙂

That is why in GPU rendering the video card computes not buckets but tiny "grains"; it simply would not cope with buckets. Partly because the render would have to be split into 2,000 buckets, partly because blending between them would take more resources than computing them, and partly because the very principle of a computational thread on a graphics card differs from that on the central processor, such a calculation is impossible. After all, the render engines were originally designed for the CPU.
Besides, calling the stream processors "cores" is highly misleading. They are not the kind of cores a CPU is equipped with; they are just named that way because of vaguely similar functions. The name is more marketing than technical, so that buyers' eyes light up: "Ooh! A 2000-core graphics card is cooler than my 4-core CPU. I'll take it!" 😁
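The difference in how the work is carved up can be shown in a few lines (a rough sketch with arbitrary sizes, not anyone's actual renderer): a CPU engine walks a handful of large buckets, one per powerful core, while a GPU-style engine launches one featherweight thread per pixel "grain".

```cuda
#include <cuda_runtime.h>

constexpr int W = 640, H = 480;

// CPU style: a few powerful threads, each chewing through
// a whole 64x64 bucket of pixels sequentially.
void renderBucketCPU(float* image, int bx, int by) {
    for (int y = by; y < by + 64 && y < H; ++y)
        for (int x = bx; x < bx + 64 && x < W; ++x)
            image[y * W + x] = 0.5f;     // stand-in for real shading work
}

// GPU style: thousands of lightweight threads, each responsible
// for exactly one pixel "grain" of the whole frame at once.
__global__ void renderGrainGPU(float* image) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < W && y < H)
        image[y * W + x] = 0.5f;         // stand-in for real shading work
}

int main() {
    // CPU path: buckets, a few at a time.
    float* hImage = new float[W * H]();
    renderBucketCPU(hImage, 0, 0);       // one bucket; a thread pool would run ~8 at once

    // GPU path: every pixel at once.
    float* dImage;
    cudaMalloc(&dImage, W * H * sizeof(float));
    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    renderGrainGPU<<<grid, block>>>(dImage);
    cudaDeviceSynchronize();

    cudaFree(dImage);
    delete[] hImage;
    return 0;
}
```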

So, once again, everything comes down to the difference between the tractor and the sports car.



13 Feb, 2012 Valeriy
13 Feb, 2012 # Re: GPU vs CPU rendering, 2012
Good afternoon! I have read all the messages in this forum and still have questions. From the above it follows that in the near future GPU rendering will gradually replace the CPU. Programs like Lumion 3D, which can fully use the video card's resources, keep improving, and V-Ray improves its GPU support with every new version. So a number of questions came up (I would like to clarify the technical side and leave price out of it; I want to understand what is better technically):
1. If I understand correctly, a professional graphics card is needed only for very large scenes, since in GPU rendering the scene information is stored in video memory. Take the GeForce GTX 580, the Radeon HD 7950, and the Tesla C2050 with its 3072 MB of memory. Judging by the article presented by Anton (RenderStuff) #2128, it turns out that for a GPU renderer there is no difference between a professional card and a gaming one? Even taking into account the super-mega drivers for the Tesla.
2. Take, for example, the gaming cards GeForce GTX 580 and Radeon HD 7950, which are very similar. Anton (RenderStuff) #2310 says: "Don't look back at misleading marketing slogans. Buy the video card you like best." Please tell me, what should you look at when choosing a video card for GPU rendering? For example, I don't understand the purpose of unified pipelines.


8 Nov, 2011 Anton (Staff Author)
13 Feb, 2012 # Re: GPU vs CPU rendering, 2012

Hello!

Actually, no, you haven't understood it quite correctly. In the near future GPU rendering is unlikely to gradually replace the CPU. And not in the distant future either 😁

Everyone is looking for a magic pill, hoping for miracles of technology. But the paradox is that it is not only GPUs that are developing, but CPUs too. So by the time video cards are ready to handle the volume of visualization work that modern processors can, at the same quality and the same speed, the processors themselves will be quite different, and it is not a given that graphics processors will be able to keep up with them. At least, there is officially no such trend yet.

More likely, there will be hybrid software that uses the CPU for some tasks and the GPU for others.

1. Don't count on it. Tesla drivers have no direct relation to the rendering algorithm built into the render engine.

2. At the moment there are no adequate unbiased renderers capable of producing visualization of the same high quality and completeness as modern biased renderers. Consequently, there are no adequate comparative GPU tests from which one could pick the optimal video card in terms of price/performance. The only thing you can rely on when choosing a video card is simply its absolute performance in other applications, games, and so on. That is, the faster the video card, the better, but also the more expensive.



5 Apr, 2012 Mishanes
5 Apr, 2012 # Re: GPU vs CPU rendering, 2012
As I understand it, Maxwell is an unbiased renderer, so in its case the gain is almost guaranteed. Of course, hardly anyone present here uses it in commercial projects, but in any case it gives every restless enthusiast a chance to experiment. So it would be very interesting if someone wrote up how much a video card speeds up rendering in Maxwell, preferably not in Maxwell Studio but from Max.


24 Sep, 2012 Andrey
24 Sep, 2012 # Re: GPU vs CPU rendering, 2012
Anton, everything is beautifully and precisely laid out, and it all seems right, but still: 1. What are the developers of professional video cards thinking, if it is so uneconomical (1 card instead of 5 machines that it cannot replace)? For some reason we always consider ourselves smarter than the creators :) Is it really all down to the software??? 2. What is the situation with GPU rendering today? What can you say about the external renderers: Octane Render3D - http://habrahabr.ru/post/142213/ , Cycles - http://designlenta.com/post/3d-works-octane-render/ , Arion - http://www.randomcontrol.com/arion ? It would be very nice to get a lesson from you on Octane Render3D compared to the CPU.


8 Nov, 2011 Anton (Staff Author)
25 Sep, 2012 # Re: GPU vs CPU rendering, 2012

1 - As I already wrote, the point is marketing: everything positioned as "pro" is expensive.
Why does a plant employ 2,000 workers who are paid 10% of the value of the product the plant makes, while a dozen top managers, working no harder than any of those workers, get 90% of the revenue from its sale? Do they really produce 90% of the product, and the 2,000 plant employees only 10%?

The developers think that you are a pro and can afford it. They successfully exploit this principle because it works. That's all.

2 - Everything described above in this topic is still relevant today. Nothing has changed in a few months, and no trend towards change has appeared 😁

I understand that one really wants to cheat the system: buy a single video card for 300 bucks, download some mythical magic software and, deceiving fate, get the performance of a $30,000 render farm. But, unfortunately, these are dreams on the verge of insanity. There will be no miracles; don't wait for them. By the time "progress" raises the GPU to such heights, the CPU will be different too. It is not only GPU manufacturers who are moving; CPU manufacturers do not stand still either. So everything will return to square one. Don't buy the marketing about a "supercomputer" made of one video card. When record teraflops are claimed, read carefully which task they are claimed for, and you will see there is no miracle: just a narrowly specialized piece of hardware confined to one highly specialized streaming task, unsuitable for universal tasks such as modern photorealistic rendering 🙄

As for the lesson: a lesson on such a renderer in comparison with the processor would be a bit of a pun 😁

In any case, we have no lessons for Octane Render3D, none are planned, and there hardly ever will be. I won't recommend anyone else's lesson either, because in my work I use only the photorealistic and flexible V-Ray, which is constantly evolving and will most likely always be more interesting and promising than competing solutions. At least until something new and revolutionary appears. But there is nothing of the kind yet. As soon as it appears, we will definitely reopen this topic and consider all the promising directions.

In the meantime, I am closing this topic, in view of its complete exhaustion as of today 😉



The discussion is closed.
