Hello, Stimp3D 🙂 In general, take the maximum configuration that the budget allocated for this purchase allows. More specifically:

Smooth rotation of the 3D scene in the viewport is the job of the graphics card (GPU). The more powerful its graphics accelerator, the more smoothly you will be able to work in the scene. In effect, GPU performance determines how comfortable the 3D artist is while building the scene.

Rendering speed is the job of the central processor (CPU). The higher its clock speed and the more cores it has, the faster it will compute images.

How large a scene can be loaded and calculated is determined by the system memory (RAM). Indirectly, it also affects rendering speed: many modern renderers have special algorithms for coping with a lack of memory. In particular, instead of loading the whole scene at once, during rendering they load only the portions over which calculations are being performed at that moment. The less RAM there is, the smaller those portions have to be and the more load/unload cycles are needed to render the entire scene. Unloading one portion and loading the next takes time and consumes the resources of the whole system. So a sufficient amount of RAM will not speed rendering up, but it will keep it from slowing down 🙂

The remaining components, such as the motherboard, hard drives and so on, matter less than these three. For comfortable work with most typical scenes, a video card of the NVIDIA GeForce 8800 class is enough. Higher is better, but more expensive 🙂 If you are choosing a new processor, the best price/performance choice today is the AMD Phenom II X6 1100T. At about 250 USD, this six-core processor is much more efficient than the similarly priced dual-core Intel Core i5-660 🙂 8 gigabytes of RAM is enough for almost any purpose.
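To put very rough numbers on the "RAM does not speed rendering up, but keeps it from slowing down" point, here is a toy sketch. The render time, per-reload cost and chunking model are invented assumptions for illustration only, not how any particular renderer actually manages memory.

```python
# Toy model of why too little RAM slows rendering: if the renderer can only
# hold part of the scene, it must repeatedly swap scene portions in and out,
# and each swap costs time. All numbers are made-up assumptions.

def estimated_render_time(scene_gb, ram_gb,
                          pure_render_hours=2.0,
                          hours_per_reload=0.1):
    """Pure computation time plus a fixed cost for every portion of the
    scene that has to be swapped in from disk (assumed model)."""
    if ram_gb >= scene_gb:
        return pure_render_hours              # whole scene fits in RAM
    portions = -(-scene_gb // ram_gb)         # ceiling division
    reloads = portions * 4                    # assume each portion is revisited a few times
    return pure_render_hours + reloads * hours_per_reload

for ram in (4, 8, 16, 32):
    print(f"{ram:>2} GB RAM -> ~{estimated_render_time(24, ram):.1f} h")
```

With these assumed numbers, once the 24 GB scene fits entirely in memory the time stops improving; below that, every halving of RAM adds swap overhead.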
Hello! Well done for thinking about the right things 😁 Additional systems for rendering are a great solution. Let's state right away that we will not discuss building a cluster as such, since that technology goes far beyond the "household" needs of rendering scenes in 3ds Max and V-Ray. As you know, V-Ray has its own network (distributed) rendering technology. To make it clearer (for anyone who does not know), let's define what each computer does in V-Ray network rendering.

The main computer (Render Client) is the computer on which the main rendering process is started and managed. In your case, that is your current PC with the Core i7 930. A Render Server is an auxiliary computer connected to the main one over the local network. It takes data from the main computer, processes it and returns the results, thereby speeding up the rendering of the image. In your case, that is the additional computer you want to buy.

In theory, it is desirable for all computers involved in network rendering to be identical in performance. In practice, however, if the main computer is faster than the render node, the result will be no worse than with an identical server and client. Moreover, as we have already mentioned, when rendering scenes that use Light Cache, which is not split up in distributed rendering but is calculated on all the networked computers in parallel, it makes sense for the main machine to be the most powerful one on the network. It will then calculate the light cache faster than the nodes and thereby reduce the total rendering time. Therefore, for the server you can choose exactly the same processor you have now, or a less powerful one.

The next thing to pay attention to is the amount of RAM. Understand that in network rendering the scene is loaded not only on the client but also on the servers, and it is rendered on them in the same way. So if, for example, you have 12 GB of RAM on the main computer, 4 GB on the node, and the scene needs 9 GB, then after starting the network render the scene will begin rendering on the main computer, while on the node it will simply crash with a V-Ray memory allocation failure due to a plain shortage of memory. Keep in mind that on the server you will only be able to render scenes that fit into the RAM installed on it.

The last thing I want to say about the theory of choosing hardware for network rendering is that on the server the scene is rendered in the background, so it does not matter which video card is installed there; it is simply not used at all. The other components matter even less.

Here is the recommended configuration for rendering: the best computer for 3ds Max in terms of price/quality. At prices from the end of 2011, for 400 USD you can put together an excellent render node based on a six-core Phenom with 8 GB of RAM, not counting the price of the hard drive. I leave the hard drive out of the calculation because of the current hard drive price situation. For example, for our nodes we use exactly such systems. They have motherboards with integrated graphics. This is convenient, cost-effective and does not affect network rendering speed in any way 🙂
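To illustrate the RAM-mismatch point (12 GB client, 4 GB node, 9 GB scene) in code: in distributed rendering the full scene is loaded on the client and on every render server, so any machine with less RAM than the scene needs will fail. The machine names and numbers below are hypothetical examples, not read from any real V-Ray setup.

```python
# Simple check of which machines in a render farm can hold a given scene.
# Names and RAM amounts are hypothetical example values.

machines = {
    "render client (Core i7 930)": 12,  # GB of RAM
    "render server (new node)":     4,
}

scene_ram_gb = 9  # approximate memory footprint of the scene

for name, ram in machines.items():
    status = "OK" if ram >= scene_ram_gb else "will fail (memory allocation error)"
    print(f"{name}: {ram} GB -> {status}")
```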
Hello! Ilya_sp has correctly stated the basic "postulates" of choosing a render node: it is exactly the same kind of computer, with exactly the same requirements as any computer for working in 3ds Max, with the only difference that scenes will not be created on it directly. That means the video card plays no role at all, since it does not participate in the rendering process. Given that, a video card integrated into the motherboard is more than enough; its only "role" is that without it the operating system simply will not start 😁, so it is formally required as a basic component of the computer.

As for RAM, in principle the nodes can have less of it than the main workstation, because on the workstation, alongside 3ds Max, you usually also run Photoshop or After Effects, which are no less memory-hungry than 3ds Max itself. However, it should not be less than 8 GB, since that is the minimum that is usually enough for most scenes. For example, I have 32 GB on my workstation, and only so as to hit the swap file less, while all the nodes work with 12 GB. Not once has there been a problem with any scene, so in my experience 12 GB is more than enough.

As for the hard drive, on the nodes it is needed solely for booting the OS and nothing more. So as boot disks you can use the most "worn out", second-hand drives of any size and speed. That is exactly what we do, saving on node components as much as possible 😉
Rostislav, I would not be so committed to Dell-brand monitors. It is a great company, but it has its shortcomings too. Be sure to pay attention to the fact that, under similar marketing labels, Dell monitors can have quite different specifications. The U2515H has too small a diagonal and a very high pixel density, while the S2715H, P2714H and SE2716H all have too low a resolution of 1920x1080.
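To make the pixel-density argument concrete, here is a small sketch that computes pixels per inch (PPI) from a panel's resolution and diagonal. The panel sizes and resolutions used are the commonly quoted ones for these models and should be treated as assumptions to verify against the actual spec sheets.

```python
# Quick pixel-density comparison for the monitors mentioned above.
# Panel sizes/resolutions are assumed from commonly quoted specs.
from math import hypot

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch = diagonal resolution in pixels / diagonal size in inches."""
    return hypot(width_px, height_px) / diagonal_in

monitors = {
    "Dell U2515H":  (2560, 1440, 25.0),
    "Dell P2714H":  (1920, 1080, 27.0),
    "Dell S2715H":  (1920, 1080, 27.0),
    "Dell SE2716H": (1920, 1080, 27.0),
}

for name, (w, h, diag) in monitors.items():
    print(f"{name}: {ppi(w, h, diag):.0f} PPI")
```

With these assumed specs the U2515H lands around 117 PPI, while the 27-inch 1920x1080 models sit around 82 PPI, which is the density gap the post is pointing at.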
In your case, I suggest looking for these key models, and possibly their "neighbors" in the same series of each brand. Below is a list of monitors arranged in ascending order of price; the main characteristics of all of them are very high and, most importantly, similar. All of these monitors are excellent value for money.
Hi gus_ann, I will give my opinion only on the software side.

Embree is a library of ready-made software "kernels" specially tuned for ray-tracing calculations using the instruction sets (SSE, AVX, AVX2, AVX-512) of Intel processors. In essence, Embree consists of modules written at a lower level than the programming languages the renderers themselves are written in; V-Ray, for example, is written in C++. Commands that target the specific instruction sets of specific hardware more precisely than the facilities of C++ itself allow cannot be written in C++; they are written in low-level languages and linked into the high-level modules. These modules address specific Intel processor instruction sets directly, are designed to execute the operations typical of ray tracing, and are provided by Intel programmers working closely with the engineers of that same company who design the processors themselves. These modules have nothing to do with V-Ray in particular; developers of any software, any renderer, can use them.

Therefore, to claim that a large amount of memory will allow Embree to work at 100% is, let us say, not correct. There is no direct connection between memory and the use of Embree. The only thing one could mention here is that, nominally, Embree, being a different algorithm, may use a different amount of memory than the standard V-Ray ray tracer, possibly more. But I have no specific data on whether Embree "eats" more memory than the standard ray tracer. The only "summary" of all the above is that Embree, which gives a significant rendering speed-up, works ONLY on Intel processors. With the "Use Embree" checkbox ticked, the renderer even crashes on a modern AMD processor. Verified!

However, there is a link between a large amount of memory and rendering speed. It lies in V-Ray's ability to work exclusively in Static mode: the whole scene is loaded into memory at once, without having to load it in "chunks" from disk as in Dynamic mode. The difference between rendering a scene in Static and rendering the same scene in Dynamic depends on the scene itself, but at times it is quite significant. And here the question arises: what will be faster, rendering the scene on ONE computer with 128 GB of RAM in Static mode, or on TWO computers at once (Distributed Rendering) with 16 GB of RAM each, but in Dynamic mode? I have not run specific experiments, but something tells me that two computers with less memory will cope with the task faster than the one machine, even in Dynamic mode.

Why do I say two against one? Because for the price of 128 GB of RAM alone you can buy two excellent render computers based on a good processor such as the 4790K, with any motherboard for it, with integrated graphics (the video core is in fact built into the CPU, so all motherboards for it have a video output by default), a 64-128 GB SSD and at least 16 GB of RAM. The other side of it is that you need somewhere to put two or more computers, and you need the infrastructure: power supply, stabilizers, network equipment. But the latter is not at all expensive and not that difficult to set up.

As for forcing Distributed Rendering to calculate on the two computers in different modes, that is, one in Static and the other in Dynamic, that is impossible, as Max has already said.
Either both in Static or both in Dynamic; accordingly, having different amounts of RAM on the computers involved in rendering is almost pointless. Almost, because on a render computer that just sits somewhere and only renders, nothing is running besides the render itself, so all of its memory is available. On the working computer from which you launch the render, on the other hand, Photoshop is surely open, plus the browser, Skype, Marvelous and so on, and they in turn "eat" memory, leaving less for the renderer. For that reason it makes sense to have a bit more RAM on the main working computer than on the render computers.
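Purely as a back-of-the-envelope illustration of the "one big machine in Static vs two smaller machines in Dynamic" question above: the base render time, the Dynamic-mode slowdown factor and the distributed-rendering scaling efficiency in this sketch are invented assumptions, not measured V-Ray figures; the real answer depends entirely on the scene.

```python
# Toy comparison of the two setups discussed above. All factors are
# invented assumptions for illustration; real numbers depend on the scene.

base_render_hours    = 3.0   # one machine, Static mode (assumed)
dynamic_penalty      = 1.4   # Dynamic mode assumed ~40% slower on this scene
dr_efficiency        = 0.9   # two machines rarely scale perfectly

one_machine_static   = base_render_hours
two_machines_dynamic = (base_render_hours * dynamic_penalty) / (2 * dr_efficiency)

print(f"1 x 128 GB machine, Static  : {one_machine_static:.2f} h")
print(f"2 x 16 GB machines, Dynamic : {two_machines_dynamic:.2f} h")
```

With these assumed factors the two cheaper machines still finish sooner, which is the hunch expressed above; a heavily swap-bound scene could shift the balance the other way.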
Gus_ann, overclocking, in simple words, is forcing the processor to work in a more intensive, and therefore more productive, mode than the one set by the manufacturer. The manufacturer sets the processor's operating mode with an eye to maximum stability, reliability and keeping heat output and power consumption at the declared level. At the same time, in many processors the manufacturer builds in a basic overclocking capability, knowing that in theory the processor can work in a more intensive mode. But the practice of overclocking itself, namely experimenting with how far to push it, is left to the user, the enthusiast.

The only, but very substantial, advantage of overclocking is an increase in processor performance of 5%, 10%, 15%, 20% or even more, depending on luck and the persistence of the specialist. At the same time, the processor does not cost any more in the store for that added power 👍 You pay for the usual one, but you get a faster one.

The minuses of overclocking include:
- The time spent and the need to know all the nuances of overclocking.
- Possible instability, if the overclocked mode is insufficiently tested by the specialist, which shows up as spontaneous PC shutdowns. And, along with them, the loss of everything that was rendering 😁
- The cost of a more reliable motherboard, which is forced to run the processor in an intensive mode.
- The cost of faster RAM, which has to "keep pace" with the faster processor.
- Disproportionately increased power consumption (a rough numerical sketch follows after this post). The point is that as absolute computing power grows, power consumption grows many times faster than in a less intensive mode; you could call it inadequate. This can make itself felt on the electricity bill while giving a relatively small performance gain in the same renderer.
- The cost of a more powerful power supply.
- The efficiency of modern electronics, and especially of powerful processors, tends towards zero: for simplicity, the amount of electricity they consume can be taken as equal to the amount of their by-product, heat. Combined with the previous minus, this produces the next one: the release of excess heat, disproportionate to the gain in performance. The computer starts heating up the space around it, which in turn can make working next to it less comfortable. Unless you live somewhere cold 😉
- The cost of a highly efficient CPU cooling system, since, as we have just seen, the processor starts to consume, and therefore give off, disproportionately more heat, which has to be removed from it faster.
- Increased noise as a result of the previous minus. Faster cooling is almost always achieved, among other things, by pushing air through the radiator faster, and the faster the air, the more noise it makes. That worsens the comfort of the person sitting next to the PC, or at least does not improve it the way a very good cooler on a processor running at stock settings would.
- Probably a larger and more expensive case that can accommodate a large, efficient cooling system and give some freedom in physically organizing the airflow.

But the main disadvantage of counting on successful overclocking is that the success may not come at all. The manufacturer does not guarantee a significant, or indeed any, performance gain when overclocking the particular sample of the processor that you happened to get.
A situation can arise where you choose and buy a processor counting on its overclocked performance, but it does not deliver that performance; it may then turn out to be a worse buy than some other processor.
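To put rough numbers on the "disproportionate power consumption" minus from the list above: dynamic CPU power scales roughly with frequency times voltage squared (P ≈ C·V²·f), and overclocking usually requires raising the voltage along with the clock. The specific frequencies and voltages below are invented for illustration only.

```python
# Rough illustration of why overclocking raises power (and heat) faster
# than it raises performance. Dynamic CPU power scales roughly as
# P ~ C * V^2 * f; the example clocks and voltages are hypothetical.

def relative_power(freq_ratio, volt_ratio):
    """Power relative to stock, from the P ~ V^2 * f approximation."""
    return freq_ratio * volt_ratio ** 2

stock_ghz, oc_ghz = 3.5, 4.2     # +20% clock (hypothetical)
stock_v,   oc_v   = 1.10, 1.25   # voltage bump often needed (hypothetical)

perf_gain  = oc_ghz / stock_ghz
power_gain = relative_power(oc_ghz / stock_ghz, oc_v / stock_v)

print(f"performance: ~{(perf_gain - 1) * 100:.0f}% faster at best")
print(f"power/heat : ~{(power_gain - 1) * 100:.0f}% higher")
```

With these assumed figures, a roughly 20% performance gain costs on the order of 50% more power, and all of that extra power leaves the processor as heat that the cooling system has to remove.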
Gus_ann, I, for example, am not inclined towards overclocking, purely for reasons of rationality, that is, the price/quality ratio. Overclocking greatly increases the price for the sake of growth in absolute performance, and that runs plainly against rationality. That applies to the cost of upgrading the surrounding components, to the cost of disproportionate power consumption, and to the headache (= cost) of organizing an infrastructure to supply a large amount of electricity and remove the resulting heat.

As far as I understand, your task is somewhat different: to get a powerful PC that will let you not worry about the next upgrade for as long as possible. With only two PCs, the problems of power outlets and excess heat do not arise. For that purpose, overclocking can be considered justified. Just make sure the computer is configured carefully, to the highest standard, with 100% stability; then everything will be fine and nothing will shut down spontaneously 👍

The very "physics" of the overclocking process has a great many nuances, with a mass of problems, their consequences and their solutions. The subject is very complex, and even many specialists do not bother to understand what is actually going on inside the processor; an ordinary computer user needs to know all this even less. Overclockers, on the other hand, understand first of all the practice of overclocking; they "feel" how the processor behaves when it is pushed into a more intensive mode. Again, knowing an overclocker's particular sequence of actions is of little use to the user, and in principle is poorly understood anyway.

In the most common case, when the PC has already been bought in the specified configuration, overclocking by a specialist looks like endless (or rather, many hours of) typing numbers into the BIOS, watching temperatures on the screen, rebooting, running programs that load the processor, and so on, until an ideal combination of internal parameters is found that gives a performance boost with stable PC operation. Something like a typical hacker scene from a movie, plus swearing 😁
The discussion is closed.