GPU-based rendering is closely associated with brute-force, unbiased engines. To be fair, not every brute-force renderer is an unbiased engine; brute force is merely similar to those engines and makes an easy-to-grasp example of the practical difference. Unbiased engines can work with the GPU, that is, they can run their calculations on the graphics card as well as on the computer's CPU. It is these unbiased engines that are meant when people start talking about GPU rendering.

As I said, unbiased rendering has no adaptivity and, above all, no interpolation. As a result, it is almost impossible to get rid of noise entirely with unbiased rendering. Of course, we can get to the point where the noise is barely visible at all (especially if you don't go looking for it 😁). However, the payback will be an abnormal increase in render time.

Unbiased engines are ideal for GPU rendering, simply because video cards are by nature designed for exactly this kind of calculation. An unbiased renderer computes the whole picture at once, in many threads simultaneously, and that is a perfect fit for the parallel pipeline architecture of GPU graphics cards.

A characteristic feature of an unbiased engine is that the render has no natural end of its own. Unbiased rendering can last forever, adding newly computed color data to the image every second. At its core, an unbiased renderer simply keeps adding new color information to the frame buffer until the user interrupts the process, or until it reaches an automatic termination condition specified manually before the render.

Just look at this famous presentation video of the unbiased engine V-Ray RT: if you watch the renderer in the video closely, you will see that while the camera is moving, the picture in the viewport is very noisy. But as soon as it stops at a certain angle, the noise starts to disappear and the picture becomes cleaner and clearer. It may seem that quality is deliberately reduced during movement so the RT viewport lags less, and only improves when the movement stops. But that is not the case. The unbiased renderer is constantly rendering the current view with the same intensity. It is just that while the camera rests at one angle, the visible image keeps being supplemented with new color information, which shows up as a visible improvement. Leave the calculation alone and the visible image will keep updating every moment: the unresolved points that we see as noise will become fewer and fewer.

For comparison, a biased engine renders in successive, fairly large portions. It does not refine the image endlessly; it performs the render once, at a specified quality level. Unlike massively parallel unbiased rendering, biased rendering is more sequential and best suited to the CPU.
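To make the "render forever, keep averaging" idea concrete, here is a minimal CUDA sketch of progressive accumulation. It is not V-Ray RT's actual code: samplePixel() is a hypothetical placeholder for tracing one light path, and the fixed 64-pass loop stands in for the manual termination condition mentioned above. The value a viewer would see for each pixel is accum / passes, a running average whose noise fades as passes pile up.

```cuda
// progressive_accum.cu - sketch of unbiased-style progressive accumulation.
#include <cstdio>
#include <curand_kernel.h>

// Hypothetical stand-in for tracing one light path through a pixel;
// a real renderer would shoot a ray into the scene here.
__device__ float samplePixel(int x, int y, curandState *rng) {
    float truth = 0.5f + 0.4f * __sinf(0.05f * x) * __cosf(0.05f * y);
    float noise = curand_uniform(rng) - 0.5f;  // one-sample variance
    return truth + noise;
}

// One "pass": every pixel gets its own thread, and each thread adds one
// more sample to the accumulation buffer. The displayed image is the
// running average accum[i] / passes, so noise fades as passes grow.
__global__ void accumulate(float *accum, int w, int h, int pass) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    curandState rng;
    curand_init(1234ULL, y * w + x, pass, &rng);  // fresh samples each pass
    accum[y * w + x] += samplePixel(x, y, &rng);
}

int main() {
    const int w = 256, h = 256;
    float *accum;
    cudaMalloc(&accum, w * h * sizeof(float));
    cudaMemset(accum, 0, w * h * sizeof(float));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    // "Forever" in principle; here we stop after a fixed pass count,
    // like the manual termination condition described above.
    for (int pass = 1; pass <= 64; ++pass)
        accumulate<<<grid, block>>>(accum, w, h, pass);
    cudaDeviceSynchronize();
    printf("done: 64 passes accumulated\n");
    cudaFree(accum);
    return 0;
}
```

Note how nothing here ever "finishes" by itself: the loop bound is the only stopping rule, which is exactly why an unbiased render left alone just keeps getting cleaner.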
1. What is CUDA and what benefit does the user get from it?
2. I like AMD video cards, but I choose NVIDIA because it has CUDA. Am I doing the right thing?
3. Where is CUDA used in 3ds Max?
4. Does my video card have to support CUDA? Will I lose something if it doesn't?

To answer all these questions, you should understand what CUDA is.

CUDA is a hardware-software architecture that lets developers create applications that perform calculations on video cards supporting GPGPU technology. In simple words, it is one of the programming languages that allows you to write programs that run not on the CPU (as programs usually do), but on NVIDIA graphics cards. CUDA is not some separate chip on the video card, it is not a function that speeds up the GPU, and it is not an effect that improves graphics. No. In the video card it is simply the ability to work with this particular programming language.

Now that we have dealt with the meaning of the abbreviation CUDA, and thereby answered the first question, we can move on to the second and, for clarity, rephrase it: "I like video cards that support one programming language, but I choose others because they support another programming language. Am I doing the right thing?"

Are you going to write programs for the GPU, and is CUDA more convenient for you? No? Then why do you care about support for one particular way of programming it? Because you have been caught up in the skillful marketing slogans around NVIDIA, which present CUDA as the one and only technology for using the GPU for computation 🙂

Now let's answer the remaining questions.

Where is CUDA used in 3ds Max? Nowhere; CUDA has nothing to do with 3ds Max.

Does my video card have to support CUDA? Will I lose something if it doesn't work with CUDA? No; your video card should not be tied to software locked down by one of the hardware manufacturers. As you said yourself, the same V-Ray RT works quietly with OpenCL.

It should be understood that developers of popular software, be it V-Ray or any other program, will not limit sales of their product to owners of one manufacturer's hardware and completely ignore all their other potential customers, especially considering that there are no fewer AMD fans than NVIDIA ones. That would be, at the very least, stupid; above all it would be a plain loss of profit. Rather, it is NVIDIA that dreams of making all developers write software locked to its video cards. But that is about as realistic as the existence of a president of the universe 😁

OpenCL is (simplifying) a programming language that also lets you write programs that perform calculations on a video card, and it does so regardless of the card's manufacturer. Its future looks far more promising than the future of CUDA, which is limited in its choice of hardware.

The ability of a program to work only on some video cards while blindly ignoring others is a shortcoming, not an advantage, and developers understand this well. The only exception is special hardware-software systems built for a single, highly specialized task, for example medical or financial calculations. But they have nothing to do with desktop software users and cannot count as a benefit for them.

If you want to read more on this topic, follow the links in the post; there is more detailed information there.

It remains for us to sum up. Don't look back at misleading marketing slogans. Buy the video card you like best 😉
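For anyone who has never seen what "writing a program in CUDA" actually looks like, here is the classic SAXPY example: a few lines of C-like code whose saxpy kernel runs on the video card instead of the CPU. This is purely an illustration of the programming model; an equivalent OpenCL version would look very similar and would run on any vendor's card.

```cuda
// saxpy.cu - the "hello world" of CUDA: y = a*x + y computed on the GPU.
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory: visible
    cudaMallocManaged(&y, n * sizeof(float));  // to both CPU and GPU
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // launch on the GPU
    cudaDeviceSynchronize();
    printf("y[0] = %f (expected 5.0)\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}
```

That is all "CUDA support" means from the user's side: the card can run kernels written this way. It is a tool for programmers, not a feature that makes your renders or games faster by itself.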
By the way, NVIDIA recently opened the source code of its CUDA compiler, so it can now be targeted at other architectures and processors, including AMD GPUs and x86 CPUs. Which, frankly, was to be expected. So now even this seemingly compelling argument for buying an NVIDIA graphics card is gone. You can safely choose a video card by performance and price, without locking yourself into a single video card vendor 😉
That is why, in GPU rendering, the graphics card does not work in buckets but in tiny "grains": with buckets it simply would not cope. The render would have to be broken into some 2,000 buckets, blending the seams between them would take more resources than calculating them, and the very nature of a graphics card's computational flow differs from a CPU's, so the calculation would not be feasible. After all, bucket rendering was originally designed for the CPU.

In addition, the name "cores" for stream processors is highly misleading. These are not the kind of cores a CPU is equipped with; they are just called that because of superficially similar functions. The name is more marketing than technical, chosen so that buyers' eyes light up: "Oooh! A 2,000-core graphics card is cooler than my 4-core CPU. I'll take it!" 😁
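The bucket-versus-grain contrast is easy to see in code. Below is a rough CUDA sketch, not any real engine's scheduler: shade() is a hypothetical placeholder for the actual per-pixel work. The CPU path walks large rectangles one after another, while the GPU path gives every single pixel its own thread.

```cuda
// buckets_vs_grains.cu - CPU bucket scheduling vs GPU per-pixel "grains".
#include <cstdio>

// Hypothetical placeholder for the real work done on one pixel.
__host__ __device__ float shade(int x, int y) {
    return (x ^ y) / 255.0f;
}

// CPU approach: walk the image bucket by bucket; a handful of threads
// would each own one fairly large rectangle at a time.
void renderBucketsCPU(float *img, int w, int h, int bucket) {
    for (int by = 0; by < h; by += bucket)
        for (int bx = 0; bx < w; bx += bucket)
            for (int y = by; y < by + bucket && y < h; ++y)
                for (int x = bx; x < bx + bucket && x < w; ++x)
                    img[y * w + x] = shade(x, y);
}

// GPU approach: no buckets and no seams to blend; every pixel is its
// own "grain", one thread apiece, all in flight at once.
__global__ void renderGrainsGPU(float *img, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h) img[y * w + x] = shade(x, y);
}

int main() {
    const int w = 640, h = 480;
    float *img;
    cudaMallocManaged(&img, w * h * sizeof(float));
    renderBucketsCPU(img, w, h, 64);              // sequential rectangles
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    renderGrainsGPU<<<grid, block>>>(img, w, h);  // ~300,000 parallel grains
    cudaDeviceSynchronize();
    printf("both paths produce the same image, scheduled differently\n");
    cudaFree(img);
    return 0;
}
```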
So once again it all comes down to the difference between a tractor and a sports car.
Actually, no, you haven't understood it quite correctly. GPU rendering is unlikely to gradually replace CPU rendering in the near future. And not in the distant future either 😁
Everyone is looking for a miracle cure, hoping for wonders of technology. But the paradox is that it is not only GPUs that develop; CPUs do too. So by the time video cards are ready to handle the volume of rendering that modern processors can perform, at the same quality and the same speed, processors will already be quite different, and it is not at all certain that GPUs will be able to keep up with them. At least, there is no such trend officially yet.
More likely, there will be hybrid software that uses the CPU for some tasks and the GPU for others.
1. Don't worry about them. The Tesla drivers (the card's driver software) have no direct bearing on the rendering algorithm built into the render engine.
2. At the moment, there are no adequate unbiased renderers capable of producing visualizations of the same high quality, and as completely, as modern biased renderers. Because of this, there are also no adequate comparative GPU tests that could be used to pick the optimal video card in terms of price/performance. The only thing you can rely on when choosing a video card is its raw performance in other applications, games and so on. That is, the faster the video card, the better, but it is also more expensive.
1 - As already written, the point is marketing. Everything positioned as "pro" is expensive.

Why does a plant employ 2,000 workers who are paid 10% of the value of the product it produces, while a dozen top managers, who work no harder than any of those workers, get 90% of the sales revenue? Do they really produce 90% of the product, and the 2,000 plant workers only 10%...?

The developers assume that you are a pro and can afford it. They successfully exploit this principle because it works. That's all.

2 - Everything described above in this topic is still relevant today. Nothing has changed in a few months, and no trend toward change has appeared 😁

I understand how tempting it is to cheat: buy one video card for 300 bucks, download some mythical magic software and, having outwitted fate, get the performance of a $30,000 render farm. But, unfortunately, those are dreams on the verge of insanity. There will be no miracles; don't wait for them. By the time "progress" raises GPUs to such heights, CPUs will be different too. GPU makers aren't the only ones moving; CPU manufacturers don't stand still either. So everything will return to where it started.

Don't fall for the marketing about a "supercomputer" in a single video card. When record teraflops are declared for the latest hardware, read carefully which task they are declared for, and you will realize there is no miracle: it is a narrowly specialized piece of hardware tuned to one highly specialized streaming task, unsuited to universal tasks such as modern photorealistic rendering 🙄

As for a lesson: a lesson on such a renderer, compared with a processor-based one, is a bit of a pun 😁 In any case, lessons on Octane Render are definitely not planned here, and hardly ever will be. I won't recommend a lesson elsewhere either, because in my work I use only the photorealistic and flexible V-Ray, which is constantly evolving and will most likely always be more interesting and promising than competing solutions. At least until something new and revolutionary appears. But there is nothing of the kind yet. As soon as it appears, we will definitely reopen this topic and consider all the promising directions.

In the meantime, I am leaving this topic closed, in view of its complete exhaustion as of today 😉
The discussion is closed.