Topics on Computers and Information Technology
Wednesday, May 2, 2012
Compositing
To use Compositing Nodes, you can use two different Output Nodes. The "Composite Node" defines the output for the rendering pipeline, while the "Viewer Node" allows results to be displayed in the UV/Image Editor. The latter is facilitated by a built-in (generated) Image named "Compositor". To view this image, select it with the menu in the header.
Only one Viewer and one Composite Node are active at a time, indicated by a red sphere icon in the Node header. Clicking on a Viewer Node makes it active. The active Composite Node is always the first one; typically you only use one anyway.
The UV/Image Editor also has three additional options in its header to view Images with or without Alpha, or to view the Alpha or Z itself. Holding LMB in the Image display allows you to sample the values.
Internally, the Compositor uses float buffers only (4 x 32 bits); regular 24- or 32-bit images used as input are converted to float before compositing. Images are saved in the format defined in the "Format" panel in the Scene context buttons. Only OpenEXR and Radiance images can store the full floating-point range. DPX and Cineon formats map the colors to 16 or 10 bits per component, clamped to a 0.0-1.0 range. All other formats save as regular 24- or 32-bit images.
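As an illustration, the 8-bit-to-float promotion and the save-time clamping described above can be sketched in Python. The function names here are invented for this example and are not Blender internals:

```python
# Hypothetical sketch of promoting an 8-bit image to float before
# compositing, and of clamping over-range floats on save.

def to_float_rgba(pixel_8bit):
    """Convert an (R, G, B, A) tuple of 0-255 ints to 0.0-1.0 floats."""
    return tuple(c / 255.0 for c in pixel_8bit)

def clamp_unit(value):
    """Clamp a float channel to the 0.0-1.0 range, as DPX/Cineon output does."""
    return max(0.0, min(1.0, value))

opaque_red = to_float_rgba((255, 0, 0, 255))
print(opaque_red)         # (1.0, 0.0, 0.0, 1.0)
print(clamp_unit(1.7))    # 1.0 -- over-range values are clipped on save
```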
Sockets and links
Input and output sockets are available in three types:
- RGBA (4 channels), yellow. When images or operations only have or use RGB, the Alpha channel is set to 1.0 by default.
- Vector (3 channels), blue.
- Value (1 channel), grey.
A socket can hold either an Image buffer or a fixed value (for example a single value or an RGBA color). Blender's compositor allows the use of values or buffers as inputs transparently, and internally sorts out what happens in a Node execution as follows:
- when all inputs are values, the operation occurs on the values only
- when all inputs are buffers, the operation occurs on the buffers
- when one input is an image, and another a value, the operation tries to define an image size (typically the first/top Image socket) and uses this image to operate on, with the value.
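The three dispatch rules above can be sketched as follows; `add_node` is a hypothetical stand-in for a single node operation, with Python lists standing in for image buffers:

```python
# Illustrative sketch of the value-vs-buffer dispatch described above;
# the function name and representation are invented for this example.

def add_node(inputs):
    """Add two inputs, each either a plain float or a list (buffer)."""
    a, b = inputs
    a_is_buf = isinstance(a, list)
    b_is_buf = isinstance(b, list)
    if not a_is_buf and not b_is_buf:
        return a + b                              # values only: operate on values
    if a_is_buf and b_is_buf:
        return [x + y for x, y in zip(a, b)]      # buffers only: operate per pixel
    buf, val = (a, b) if a_is_buf else (b, a)     # mixed: the buffer defines the size
    return [x + val for x in buf]

print(add_node((0.25, 0.5)))          # 0.75
print(add_node(([0.25, 0.5], 0.25)))  # [0.5, 0.75]
```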
Most Color operations or Filter nodes have a "Fac" input, defining how much of an effect the operation has. By default the first/top Image (or color) input is always passed on, and the "Fac" defines how much the operation with other input(s) contributes to the end result, with 0.0 defined as "no operation".
Any "Fac" input can be used in three ways:
- As a constant, with a button for manual input
- Linked from another Node, passing on a value (like from a Time Node).
- Or as a buffer, when linked to an output having a buffer, to define a per-pixel influence of the operation.
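A rough sketch of how a "Fac" input might blend the pass-through input with the operation's result, for both a constant factor and a per-pixel buffer (the helper names are invented for this illustration):

```python
# Minimal sketch of "Fac" blending: 0.0 passes the top input through
# unchanged ("no operation"), 1.0 applies the operation fully.

def mix_with_fac(top, op_result, fac):
    """Blend the pass-through input with the operation's result by fac."""
    return top * (1.0 - fac) + op_result * fac

def mix_buffer(top_buf, op_buf, fac_buf):
    """Per-pixel Fac: a buffer of factors controls the blend at each pixel."""
    return [mix_with_fac(t, o, f) for t, o, f in zip(top_buf, op_buf, fac_buf)]

print(mix_with_fac(1.0, 0.0, 0.0))                     # 1.0 -- "no operation"
print(mix_with_fac(1.0, 0.0, 0.5))                     # 0.5
print(mix_buffer([1.0, 1.0], [0.0, 0.0], [0.0, 1.0]))  # [1.0, 0.0]
```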
The compositor also allows you to link different socket types, in which case a default conversion will occur as follows:
- value from vector: the average (X+Y+Z)/3.0
- value from color: the BW average (0.35*R + 0.45*G + 0.2*B)
- vector or color from value: copies value to each channel
- vector from color: copies RGB to XYZ
- color from vector: copies XYZ to RGB and sets A to 1.0
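The default conversions listed above can be written out directly, using the stated BW weights; these small helpers are illustrative only:

```python
# Sketch of the default socket conversions; names are invented for
# this example, the formulas come from the list above.

def value_from_vector(v):
    """Average of the three vector components."""
    x, y, z = v
    return (x + y + z) / 3.0

def value_from_color(c):
    """BW average with the stated per-channel weights."""
    r, g, b = c[:3]
    return 0.35 * r + 0.45 * g + 0.2 * b

def vector_from_value(v):
    """Copy the value to each channel."""
    return (v, v, v)

def color_from_vector(v):
    """Copy XYZ to RGB and set Alpha to 1.0."""
    x, y, z = v
    return (x, y, z, 1.0)

print(value_from_vector((1.0, 2.0, 3.0)))  # 2.0
print(vector_from_value(0.5))              # (0.5, 0.5, 0.5)
print(color_from_vector((0.1, 0.2, 0.3)))  # (0.1, 0.2, 0.3, 1.0)
```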
Almost all Node operations support this conversion, with some exceptions, such as the Vector-Blur node, which really requires the proper inputs.
Image size
The compositor can mix images with any size, and will only perform operations on pixels where images have an overlap. When Nodes receive inputs with differently sized Images, these rules apply:
- the first/top Image input socket defines the output size.
- the composite is centered by default, unless a translation has been assigned to a buffer with the "Translate Node".
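The centering rule and the overlap-only behavior can be sketched as follows, assuming simple (width, height) sizes; both helpers are hypothetical:

```python
# Hypothetical sketch of the centering rule: the first input defines the
# output size, and a differently sized second input is centered within it.

def centered_offset(out_size, in_size):
    """Top-left offset that centers an in_size image inside out_size."""
    return ((out_size[0] - in_size[0]) // 2, (out_size[1] - in_size[1]) // 2)

def overlap_region(out_size, in_size):
    """Pixel rectangle (x0, y0, x1, y1) where the two images overlap;
    operations would only be performed on pixels inside this region."""
    ox, oy = centered_offset(out_size, in_size)
    x0, y0 = max(0, ox), max(0, oy)
    x1 = min(out_size[0], ox + in_size[0])
    y1 = min(out_size[1], oy + in_size[1])
    return (x0, y0, x1, y1)

print(centered_offset((640, 480), (320, 240)))  # (160, 120)
print(overlap_region((640, 480), (320, 240)))   # (160, 120, 480, 360)
```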
So each Node in a composite can operate on different sized images, as defined by its inputs. Only the Composite Output node has a fixed size, as defined by the Scene buttons (Format Panel). The Viewer node always shows the size from its input, but when not linked (or linked to a value) it shows a small 320x256 pixel image.
Note: when using the Preview option (see below), it is important that a fixed size is known in advance, to be able to define the preview cut-out. In this case the render output size is used by default.
Introduction to Compositing and Layering
Compositing involves stacking two or more video or graphics clips in a sequence on multiple video tracks. You can also scale, rotate, and reposition each clip using the controls in the Motion tab in the Viewer. The order in which clips are stacked in the Timeline determines which images appear in front of others in the Canvas. You can have up to 99 layers, or tracks, of clips in Final Cut Pro.
Friday, April 27, 2012
Blender Composite Nodes
Without a doubt, compositing is currently one of the hot topics in 3D computer graphics creation, particularly because it enables efficient management and creative control of complex scenes and images. A typical 3D graphic can consist of many individual layers, each having a special filter or effect applied, combined into the final result. By pre-rendering such layers, an artist can work much faster on fine-tuning the final result of an image.
In Blender, the Compositor is tightly integrated and aligned with the rendering pipeline. For this reason it is part of the Blender Scene, meaning there's only one "Composite" possible per Scene (but each file can have unlimited Scenes).
Compositing can also be used 'stand-alone', with only images read from disk as input, allowing you to render the Composite without invoking a 3D rendering.
Wednesday, April 25, 2012
Node-based and layer-based compositing
There are two radically different digital compositing workflows: node-based compositing and layer-based compositing.
Node-based compositing represents an entire composite as a tree graph, linking media objects and effects in a procedural map, intuitively laying out the progression from source input to final output; this is in fact the way all compositing applications internally handle composites. This type of compositing interface allows great flexibility, including the ability to modify the parameters of an earlier image-processing step "in context" (while viewing the final composite). Node-based compositing packages often handle keyframing and time effects poorly, as their workflow does not stem directly from a timeline the way it does in layer-based compositing packages. Software that incorporates a node-based interface includes Apple Shake, Blender, eyeon Fusion, and The Foundry's Nuke.
Layer-based compositing represents each media object in a composite as a separate layer within a timeline, each with its own time bounds, effects, and keyframes. All the layers are stacked, one above the next, in any desired order; the bottom layer is usually rendered as a base in the resultant image, with each higher layer being progressively rendered on top of the previously composited layers, moving upward until all layers have been rendered into the final composite. Layer-based compositing is very well suited for rapid 2D and limited 3D effects such as in motion graphics, but becomes awkward for more complex composites entailing a large number of layers. A partial solution to this is some programs' ability to view the composite order of elements (such as images, effects, or other attributes) with a visual diagram called a flowchart, or to nest compositions, or "comps," directly into other compositions, thereby adding complexity to the render order by first compositing the layers in the nested composition, then combining that resultant image with the layered images from the enclosing composition, and so on. An example of this exists in the Adobe program After Effects.
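The bottom-up stacking described here can be sketched with the standard "over" operator on premultiplied RGBA pixels; this is a minimal illustration, not any particular program's implementation:

```python
# Minimal sketch of layer-based stacking: the bottom layer is the base,
# and each higher layer is composited over the accumulated result.

def over(fg, bg):
    """Composite a premultiplied RGBA foreground pixel over a background pixel."""
    fr, fgreen, fb, fa = fg
    br, bgreen, bb, ba = bg
    k = 1.0 - fa
    return (fr + br * k, fgreen + bgreen * k, fb + bb * k, fa + ba * k)

def flatten(layers):
    """Render the bottom layer first, then each higher layer on top, in order."""
    result = layers[0]
    for layer in layers[1:]:
        result = over(layer, result)
    return result

base = (0.0, 0.0, 1.0, 1.0)   # opaque blue bottom layer
half = (0.5, 0.0, 0.0, 0.5)   # 50%-opacity red (premultiplied) above it
print(flatten([base, half]))  # (0.5, 0.0, 0.5, 1.0)
```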
Wednesday, April 18, 2012
Characteristics and Strategies of Artificial Intelligence
· Symbolic representation of the real world.
· Inference through facts, rules, and steps.
· Reaching the final solution through trial and experimentation.
· Since it is non-computational, in other words the answer is not fixed by fixed rules, its procedures are more complex.
A question: can a machine think? Frankly, this was not originally my question; it is Alan Turing's question, posed in 1950.
In my view, God knows best.
Artificial Intelligence Techniques
· expert system
· knowledge-based system
· neural network
· data mining
· fuzzy logic
· intelligent agent
· genetic algorithm
· natural language processing
· machine learning