Hello! If we talk about "beams and cameras," then in a well-designed CG algorithm everything is calculated relative to the camera (photon mapping being the exception), because why compute rays that never reach the camera? As a user, you see no difference in which direction the check runs: either a ray from the light flies into the scene and "checks" whether it hit the camera, or the camera "probes" the scene with its own rays and determines whether each visible point is lit. Of course, the second approach is far more efficient, even though it requires casting extra rays from the camera, because only "useful" rays are traced, that is, rays whose effect will actually be visible in the render, and no computation time is wasted on rays that fly off into empty space or end up hidden behind some object.

If we modeled the "real world," the idea would look like this:
- The light emits a ray
- The ray hits a surface and reflects
- The reflected ray hits the camera
- The camera records its color
- We get a pixel in the final render

That is, it is the lights that would have to check whether their rays reach the camera. But then the question arises: what about a ray that never hits the camera and will never be seen in the render? It turns out it would still have to be traced. That, in turn, wastes enormous resources computing rays regardless of whether their result ever appears in the image.
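To get a feel for how wasteful the "real world" order is, here is a toy Monte Carlo sketch (purely illustrative, not any renderer's actual code; the light position, aperture size, and distance are made-up numbers): we emit rays from a point light in uniformly random directions and count how many happen to hit a small camera "aperture" disk.

```python
# Toy sketch: a point light at the origin fires rays in random directions;
# the camera aperture is a small disk on the +z axis. Count what fraction
# of rays actually reach it. (All numbers here are arbitrary assumptions.)
import math
import random

def random_unit_vector(rng):
    # Uniform direction on the unit sphere.
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def hit_fraction(n_rays=100_000, aperture_radius=0.05, camera_dist=10.0, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        dx, dy, dz = random_unit_vector(rng)
        if dz <= 0.0:
            continue  # ray flies away from the camera entirely
        # Extend the ray to the camera plane z = camera_dist.
        t = camera_dist / dz
        x, y = dx * t, dy * t
        if x * x + y * y <= aperture_radius ** 2:
            hits += 1
    return hits / n_rays

print(hit_fraction())  # a vanishingly small fraction; almost every ray is wasted
```

With these numbers, the aperture covers roughly one part in a hundred thousand of the light's directions, so essentially all of the emitted rays are computed for nothing.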
That is, introducing a light source into the scene automatically multiplies the cost of the calculation by the number of rays those sources produce, even though 99.99% of those rays never enter the camera and will not be visible in the render. Not very reasonable 😁

That is why "reasonable" rendering algorithms work in reverse:
- The camera shoots a ray into the visible part of the scene in front of it (this ray is not light and carries no illumination information; don't confuse it with a ray from a light source, it is just a "feeler")
- When it hits a visible point in the scene, a straight line is traced from that point to every light source to "see" whether there is an unobstructed path to the light, i.e. whether it is blocked by an obstacle (geometry in the scene)
- If that straight line (which is really just another ray) reaches the light, illumination is added to the point being calculated (it becomes brighter depending on the brightness of the light); if not, the point's brightness does not change
- The camera records the color of the point
- We get a pixel in the final render

As a result, rays that have no effect on what the camera sees never need to be traced from the lights. Moreover, only a limited number of "feeler" rays is cast from the camera per pixel, and that number does not depend on the number of lights. So we get an optimized calculation of the scene's lighting by "feeling" it out from the camera, and not the other way around.

How each individual GI algorithm works is already covered in the lessons. Just keep in mind that these are independent Render Elements for compositing the final render. If you have more specific questions, ask them more concretely and we will try to help 😉
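The steps above can be sketched in a few lines of code. This is a deliberately minimal example with made-up scene data (one point light, one blocking sphere; not V-Ray's implementation): a visible point fires a shadow ray toward the light and only gets brightness if the path is unobstructed.

```python
# Minimal shadow-ray sketch: a point is lit only if the segment from it
# to the light is not blocked by the sphere. (Toy scene, made-up values.)
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(v):
    l = math.sqrt(dot(v, v))
    return (v[0] / l, v[1] / l, v[2] / l)

def sphere_blocks(origin, target, center, radius):
    # Does the segment origin -> target intersect the sphere? (shadow test)
    d = norm(sub(target, origin))
    oc = sub(origin, center)
    b = dot(oc, d)
    c = dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False                     # ray line misses the sphere
    t = -b - math.sqrt(disc)             # nearest intersection distance
    dist = math.sqrt(dot(sub(target, origin), sub(target, origin)))
    return 0.0 < t < dist                # hit lies between point and light

def shade_point(p, lights, blocker):
    # Shadow ray from visible point p to each light; only unobstructed
    # lights add brightness (simple inverse-square falloff).
    brightness = 0.0
    for light_pos, intensity in lights:
        if not sphere_blocks(p, light_pos, *blocker):
            d2 = dot(sub(light_pos, p), sub(light_pos, p))
            brightness += intensity / d2
    return brightness

light = ((0.0, 4.0, 0.0), 16.0)          # position, intensity
blocker = ((0.0, 2.0, 0.0), 0.5)         # sphere hanging between light and floor
print(shade_point((0.0, 0.0, 0.0), [light], blocker))  # directly below sphere: 0.0
print(shade_point((3.0, 0.0, 0.0), [light], blocker))  # off to the side: 0.64
```

Note that the cost per pixel here is one feeler ray plus one shadow ray per light, no matter how many rays the light "would have" emitted in the real world.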
Hello! A very important point that should be clearly understood: each effect in the final 3D render is a separate render, sometimes completely independent of the others. What you see in the final image is not the result of computing each pixel's color in one direct logical chain. There is no "first direct light, then its first bounce, then from that first bounce a second bounce, and so on, and only then is the pixel color written."

Primary and Secondary Bounces are separate render elements. Each is calculated into its own image, and then, like layers in Photoshop, they are stacked on top of one another to form the final render. That is, direct light is calculated separately and written to its render element, each GI component is calculated separately and written to its own render element, and then they are all merged at the end.

Now, about the order in which they are calculated. It would be logical to assume that, having calculated the direct light, the renderer passes that information on to the Primary Bounces engine, which forms the first bounce from the computed directions and colors of the direct rays. Then that information would be passed to the Secondary Bounces engine, which in turn would form the subsequent bounces from the already-calculated primary bounce. It would then follow that the quality of Primary Bounces depends directly on the quality of the direct-light calculation, and the quality of Secondary Bounces depends on the Primary Bounces and the direct-light calculation combined. In other words, if the PB map, for example the Irradiance map, is blurry and inaccurate, then SB, for example Light Cache, should come out even worse.
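The "layers in Photoshop" idea can be shown with a tiny sketch (the element names and pixel values here are illustrative assumptions, not V-Ray's actual element names): each lighting component is its own image, and the final "beauty" image is simply their per-pixel sum.

```python
# Toy 2x2-pixel "render elements" (made-up values): direct light,
# primary-bounce GI, and secondary-bounce GI, each stored as its own image.
direct  = [[0.50, 0.20], [0.00, 0.10]]
gi_prim = [[0.10, 0.05], [0.08, 0.04]]
gi_sec  = [[0.03, 0.02], [0.05, 0.02]]

def composite(*layers):
    # Additive compositing: the final pixel is the sum of every layer's
    # pixel at the same coordinates (rounded to hide float noise).
    h, w = len(layers[0]), len(layers[0][0])
    return [[round(sum(layer[y][x] for layer in layers), 4) for x in range(w)]
            for y in range(h)]

beauty = composite(direct, gi_prim, gi_sec)
print(beauty)  # [[0.63, 0.27], [0.13, 0.16]]
```

Because the layers only meet at this final summation step, each one can in principle be computed independently, which is exactly what makes the calculation order discussed below possible.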
Right? And yet it isn't 😁 The quality of the "secondary" does not depend on the "primary," because each render element is calculated separately!

You ask why LC, sitting in the Secondary slot, is calculated before the IM sitting in the Primary slot. But why don't you ask why IM does not calculate direct light first, and instead calmly accepts its calculation during the final rendering, after the secondary lighting has already been rendered?

To be honest, concrete information straight from the V-Ray developers explaining the "inner kitchen" of their renderer is nowhere to be found; at least I don't know where to get it. Nevertheless, judging by its functions and behavior, we can assume that Light Cache, in order to calculate the secondary bounces, INDEPENDENTLY calculates both the direct light and the primary bounce. That is why, in the LC preview (with Show calc. phase ticked), we see a noisy but quite complete image, obviously containing both direct light and a primary bounce. For the same reason, LC is able to deliver Store Direct Light information to the final render 😁 Apparently, this is also why it is calculated first: so that, if needed, the subsequent calculations can use its information for their own purposes. If they don't use that information, it could in principle be calculated after the Irradiance map; the developers simply didn't bother changing the order depending on which checkboxes are enabled in the Light Cache settings, and just made it calculate first, that's all.

As for the Irradiance map, it cannot independently calculate everything the way Light Cache can, which is why it only works in Primary Bounces. Where does it take the direct-ray information from? Most likely it also calculates it itself. There are some explanatory pictures in the official help, but no specific textual explanation.