title
stringlengths 0
19.7k
| authors
sequence | sha1
stringlengths 31
31
| timestamp
stringlengths 20
20
| parent_id
int32 16M
1.25B
⌀ | namespace
int32 0
0
| sections
sequence |
---|---|---|---|---|---|---|
Rendering (computer graphics) | {
"id": [
47183495
],
"name": [
"KaiaVintr"
]
} | qrz56rb0bairsq19hzezpk21tz0rbc6 | 2024-10-14T03:30:15Z | 1,251,050,950 | 0 | {
"title": [
"Introduction",
"Features",
"Inputs",
"2D vector graphics",
"3D geometry",
"Volumetric data",
"Photogrammetry and scanning",
"Neural approximations and light fields",
"Outputs",
"Techniques",
"Rasterization<span class=\"anchor\" id=\"Rasterization\"></span><span class=\"anchor\" id=\"Scanline rendering and rasterization\"></span>",
"Ray casting",
"Ray tracing",
"Radiosity",
"Path tracing",
"Neural rendering",
"Scientific and mathematical basis",
"The rendering equation",
"The bidirectional reflectance distribution function",
"Geometric optics",
"Visual perception",
"Sampling and filtering",
"Hardware",
"History",
"GPUs",
"Chronology of algorithms and techniques<span class=\"anchor\" id=\"Chronology of concepts\"></span>",
"See also",
"References",
"Further reading",
"External links"
],
"level": [
1,
2,
2,
3,
3,
3,
3,
3,
2,
2,
3,
3,
3,
3,
3,
3,
2,
3,
3,
3,
3,
3,
2,
3,
3,
2,
2,
2,
2,
2
],
"content": [
"\n\n[thumb\\|A variety of rendering techniques applied to a single 3D scene](/wiki/File:Render_Types.png \"Render Types.png\")\n[thumb\\|An image created by using [POV\\-Ray](/wiki/POV-Ray \"POV-Ray\") 3\\.6](/wiki/Image:Glasses_800_edit.png \"Glasses 800 edit.png\")\n**Rendering** or **image synthesis** is the process of generating a [photorealistic](/wiki/Physically-based_rendering \"Physically-based rendering\") or [non\\-photorealistic](/wiki/Non-photorealistic_rendering \"Non-photorealistic rendering\") image from a [2D](/wiki/2D_model \"2D model\") or [3D model](/wiki/3D_model \"3D model\") by means of a [computer program](/wiki/Computer_program \"Computer program\"). The resulting image is referred to as a **rendering**. Multiple models can be defined in a *scene file* containing objects in a strictly defined language or [data structure](/wiki/Data_structure \"Data structure\"). The scene file contains geometry, viewpoint, [textures](/wiki/Texture_mapping \"Texture mapping\"), [lighting](/wiki/Computer_graphics_lighting \"Computer graphics lighting\"), and [shading](/wiki/Shading \"Shading\") information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a [digital image](/wiki/Digital_image \"Digital image\") or [raster graphics](/wiki/Raster_graphics \"Raster graphics\") image file. The term \"rendering\" is analogous to the concept of an [artist's impression](/wiki/Artist%27s_impression \"Artist's impression\") of a scene. The term \"rendering\" is also used to describe the process of calculating effects in a video editing program to produce the final video output.\n\nA [software application](/wiki/Application_software \"Application software\") or [component](/wiki/Component-based_software_engineering \"Component-based software engineering\") that performs rendering is called a **rendering [engine](/wiki/Software_engine \"Software engine\")**, **render engine**, **[rendering system](/wiki/Rendering_systems \"Rendering systems\")**, **graphics engine**, or simply a **renderer**.\n\nRendering is one of the major sub\\-topics of [3D computer graphics](/wiki/3D_computer_graphics \"3D computer graphics\"), and in practice it is always connected to the others. It is the last major step in the [graphics pipeline](/wiki/Graphics_pipeline \"Graphics pipeline\"), giving models and animation their final appearance. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject.\n\nRendering has uses in [architecture](/wiki/Architectural_rendering \"Architectural rendering\"), [video games](/wiki/Video_game \"Video game\"), [simulators](/wiki/Simulation \"Simulation\"), movie and TV [visual effects](/wiki/Visual_effects \"Visual effects\"), and design visualization, each employing a different balance of features and techniques. A wide variety of renderers are available for use. Some are integrated into larger modeling and animation packages, some are stand\\-alone, and some are free open\\-source projects. On the inside, a renderer is a carefully engineered program based on multiple disciplines, including [light physics](/wiki/Optics \"Optics\"), [visual perception](/wiki/Visual_system \"Visual system\"), [mathematics](/wiki/Mathematics \"Mathematics\"), and [software development](/wiki/Software_engineering \"Software engineering\").\n\nThough the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image on a screen from a 3D representation stored in a scene file are handled by the [graphics pipeline](/wiki/Graphics_pipeline \"Graphics pipeline\") in a rendering device such as a [GPU](/wiki/Graphics_processing_unit \"Graphics processing unit\"). A GPU is a purpose\\-built device that assists a [CPU](/wiki/Central_processing_unit \"Central processing unit\") in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software must solve the [rendering equation](/wiki/Rendering_equation \"Rendering equation\"). The rendering equation does not account for all lighting phenomena, but instead acts as a general lighting model for computer\\-generated imagery.\n\nIn the case of 3D graphics, scenes can be [pre\\-rendered](/wiki/Pre-rendered \"Pre-rendered\") or generated in realtime. Pre\\-rendering is a slow, computationally intensive process that is typically used for movie creation, where scenes can be generated ahead of time, while [real\\-time](/wiki/Real-time_computer_graphics \"Real-time computer graphics\") rendering is often done for 3D video games and other applications that must dynamically create scenes. 3D [hardware accelerators](/wiki/Hardware_accelerators \"Hardware accelerators\") can improve realtime rendering performance.\n\n",
"Features\n--------\n\nA rendered image can be understood in terms of a number of visible features. Rendering [research and development](/wiki/Research_and_development \"Research and development\") has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.\n\n* [Shading](/wiki/Shading \"Shading\") how the color and brightness of a surface varies with lighting\n* [Texture\\-mapping](/wiki/Texture_mapping \"Texture mapping\") a method of applying detail to surfaces\n* [Bump\\-mapping](/wiki/Bump_mapping \"Bump mapping\") a method of simulating small\\-scale bumpiness on surfaces\n* [Fogging/participating medium](/wiki/Distance_fog \"Distance fog\") how light dims when passing through non\\-clear atmosphere or air\n* [Shadows](/wiki/Shadow \"Shadow\") the effect of obstructing light\n* [Soft shadows](/wiki/Soft_shadows \"Soft shadows\") varying darkness caused by partially obscured light sources\n* [Reflection](/wiki/Reflection_%28computer_graphics%29 \"Reflection (computer graphics)\") mirror\\-like or highly glossy reflection\n* [Transparency (optics)](/wiki/Transparency_%28optics%29 \"Transparency (optics)\"), [transparency (graphic)](/wiki/Transparency_%28graphic%29 \"Transparency (graphic)\") or [opacity](/wiki/Opacity_%28optics%29 \"Opacity (optics)\") sharp transmission of light through solid objects\n* [Translucency](/wiki/Translucency \"Translucency\") highly scattered transmission of light through solid objects\n* [Refraction](/wiki/Refraction \"Refraction\") bending of light associated with transparency\n* [Diffraction](/wiki/Diffraction \"Diffraction\") bending, spreading, and interference of light passing by an object or aperture that disrupts the ray\n* [Indirect illumination](/wiki/Global_illumination \"Global illumination\") surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)\n* [Caustics](/wiki/Caustic_%28optics%29 \"Caustic (optics)\") (a form of indirect illumination) reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object\n* [Depth of field](/wiki/Depth_of_field \"Depth of field\") objects appear blurry or out of focus when too far in front of or behind the object in focus\n* [Motion blur](/wiki/Motion_blur \"Motion blur\") objects appear blurry due to high\\-speed motion, or the motion of the camera\n* [Non\\-photorealistic rendering](/wiki/Non-photorealistic_rendering \"Non-photorealistic rendering\") rendering of scenes in an artistic style, intended to look like a painting or drawing\n",
"Inputs\n------\n\nBefore a 3D scene or 2D image can be rendered, it must be described in a way that the rendering software can understand. Historically, inputs for both 2D and 3D rendering were usually [text files](/wiki/Text_file \"Text file\"), which are easier than binary files for humans to edit and understand. For 3D graphics, text formats have largely been supplanted by more efficient [binary formats](/wiki/Binary_file \"Binary file\"), and by [APIs](/wiki/API \"API\") which allow interactive applications to communicate directly with a rendering component without generating a file on disk (although a scene description is usually still created in memory prior to rendering).\n\nTraditional rendering algorithms use geometric descriptions of 3D scenes or 2D images. Applications and algorithms that render [visualizations](/wiki/Visualization_%28graphics%29 \"Visualization (graphics)\") of data scanned from the real world, or scientific [simulations](/wiki/Computer_simulation \"Computer simulation\"), may require different types of input data.\n\nThe [PostScript](/wiki/PostScript \"PostScript\") format (which is often credited with the rise of [desktop publishing](/wiki/Desktop_publishing \"Desktop publishing\")) provides a standardized, interoperable way to describe 2D graphics and [page layout](/wiki/Page_layout \"Page layout\"). The [Scalable Vector Graphics (SVG)](/wiki/SVG \"SVG\") format is also text\\-based, and the [PDF](/wiki/PDF \"PDF\") format uses the PostScript language internally. In contrast, although many 3D graphics file formats have been standardized (including text\\-based formats such as [VRML](/wiki/VRML \"VRML\") and [X3D](/wiki/X3D \"X3D\")), different rendering applications typically use formats tailored to their needs, and this has led to a proliferation of proprietary and open formats, with binary files being more common.\n\n### 2D vector graphics\n\nA [vector graphics](/wiki/Vector_graphics \"Vector graphics\") image description may include:\n* [Coordinates](/wiki/Cartesian_coordinate_system \"Cartesian coordinate system\") and [curvature](/wiki/Curvature \"Curvature\") information for [line segments](/wiki/Line_segments \"Line segments\"), [arcs](/wiki/Circular_arc \"Circular arc\"), and [Bézier curves](/wiki/B%C3%A9zier_curve \"Bézier curve\") (which may be used as boundaries of filled shapes)\n* Center coordinates, width, and height (or [bounding rectangle](/wiki/Minimum_bounding_rectangle \"Minimum bounding rectangle\") coordinates) of [basic](/wiki/Geometric_primitive \"Geometric primitive\") shapes such as [rectangles](/wiki/Rectangle \"Rectangle\"), [circles](/wiki/Circle \"Circle\") and [ellipses](/wiki/Ellipse \"Ellipse\")\n* Color, width and pattern (such as dashed or dotted) for rendering lines\n* Colors, patterns, and [gradients](/wiki/Color_gradient \"Color gradient\") for filling shapes\n* [Bitmap](/wiki/Bitmap \"Bitmap\") image data (either embedded or in an external file) along with scale and position information\n* [Text to be rendered](/wiki/Font_rasterization \"Font rasterization\") (along with size, position, orientation, color, and font)\n* [Clipping](/wiki/Clipping_%28computer_graphics%29 \"Clipping (computer graphics)\") information, if only part of a shape or bitmap image should be rendered\n* Transparency and [compositing](/wiki/Compositing \"Compositing\") information for rendering overlapping shapes\n* [Color space](/wiki/Color_space \"Color space\") information, allowing the image to be rendered consistently on different displays and printers\n\n### 3D geometry\n\nA geometric scene description may include:\n* Size, position, and orientation of [geometric primitives](/wiki/Geometric_primitive \"Geometric primitive\") such as spheres and cones (which may be [combined in various ways](/wiki/Constructive_solid_geometry \"Constructive solid geometry\") to create more complex objects)\n* [Vertex](/wiki/Vertex_%28geometry%29 \"Vertex (geometry)\") [coordinates](/wiki/Cartesian_coordinate_system \"Cartesian coordinate system\") and [surface normal](/wiki/Normal_%28geometry%29 \"Normal (geometry)\") [vectors](/wiki/Euclidean_vector \"Euclidean vector\") for [meshes](/wiki/Polygon_mesh \"Polygon mesh\") of triangles or polygons (often rendered as smooth surfaces by [subdividing](/wiki/Subdivision_surface \"Subdivision surface\") the mesh)\n* [Transformations](/wiki/Geometric_transformation \"Geometric transformation\") for positioning, rotating, and scaling objects within a scene (allowing parts of the scene to use different local coordinate systems).\n* \"Camera\" information describing how the scene is being viewed (position, direction, [focal length](/wiki/Focal_length \"Focal length\"), and [field of view](/wiki/Field_of_view \"Field of view\"))\n* Light information (location, type, brightness, and color)\n* Optical properties of surfaces, such as [albedo](/wiki/Albedo \"Albedo\"), [reflectance](/wiki/Reflectance \"Reflectance\"), and [refractive index](/wiki/Refractive_index \"Refractive index\"),\n* Optical properties of media through which light passes (transparent solids, liquids, clouds, smoke), e.g. [absorption](/wiki/Absorption_cross_section \"Absorption cross section\") and [scattering](/wiki/Cross_section_%28physics%29%23Scattering_of_light \"Cross section (physics)#Scattering of light\") cross sections\n* [Bitmap](/wiki/Bitmap \"Bitmap\") image data used as [texture maps](/wiki/Texture_mapping \"Texture mapping\") for surfaces\n* Small scripts or programs for generating complex 3D shapes or scenes [procedurally](/wiki/Procedural_generation \"Procedural generation\")\n* Description of how object and camera locations and other information change over time, for rendering an animation\n\nMany file formats exist for storing individual 3D objects or \"[models](/wiki/3D_modeling \"3D modeling\")\". These can be imported into a larger scene, or loaded on\\-demand by rendering software or games. A realistic scene may require hundreds of items like household objects, vehicles, and trees, and [3D artists](/wiki/Environment_artist \"Environment artist\") often utilize large libraries of models. In game production, these models (along with other data such as textures, audio files, and animations) are referred to as \"[assets](/wiki/Digital_asset \"Digital asset\")\".\n\n### Volumetric data\n\nScientific and engineering [visualization](/wiki/Visualization_%28graphics%29 \"Visualization (graphics)\") often requires rendering [volumetric data](/wiki/Voxel \"Voxel\") generated by 3D scans or [simulations](/wiki/Computer_simulation \"Computer simulation\"). Perhaps the most common source of such data is medical [CT](/wiki/CT_scan \"CT scan\") and [MRI](/wiki/Magnetic_resonance_imaging \"Magnetic resonance imaging\") scans, which need to be rendered for diagnosis. Volumetric data can be extremely large, and requires [specialized data formats](/wiki/OpenVDB \"OpenVDB\") to store it efficiently, particularly if the volume is *[sparse](/wiki/Sparse_matrix \"Sparse matrix\")* (with empty regions that do not contain data).\n\nBefore rendering, [level sets](/wiki/Level_set \"Level set\") for volumetric data can be extracted and converted into a mesh of triangles, e.g. by using the [marching cubes](/wiki/Marching_cubes \"Marching cubes\") algorithm. Algorithms have also been developed that work directly with volumetric data, for example to render realistic depictions of the way light is scattered and absorbed by clouds and smoke, and this type of volumetric rendering is used extensively in visual effects for movies. When rendering lower\\-resolution volumetric data without interpolation, the individual cubes or \"[voxels](/wiki/Voxel \"Voxel\")\" may be visible, an effect sometimes used deliberately for game graphics.\n\n### Photogrammetry and scanning\n\nPhotographs of real world objects can be incorporated into a rendered scene by using them as [textures](/wiki/Texture_mapping \"Texture mapping\") for 3D objects. Photos of a scene can also be stitched together to create [panoramic images](/wiki/Panorama \"Panorama\") or [environment maps](/wiki/Reflection_mapping \"Reflection mapping\"), which allow the scene to be rendered very efficiently but only from a single viewpoint. Scanning of real objects and scenes using [structured light](/wiki/Structured-light_3D_scanner \"Structured-light 3D scanner\") or [lidar](/wiki/Lidar \"Lidar\") produces [point clouds](/wiki/Point_cloud \"Point cloud\") consisting of the coordinates of millions of individual points in space, sometimes along with color information. These point clouds may either be rendered directly or [converted into meshes](/wiki/Point_cloud%23Conversion_to_3D_surfaces \"Point cloud#Conversion to 3D surfaces\") before rendering. (Note: \"point cloud\" sometimes also refers to a minimalist rendering style that can be used for any 3D geometry, similar to wireframe rendering.)\n\n### Neural approximations and light fields\n\nA more recent, experimental approach is description of scenes using [radiance fields](/wiki/Neural_radiance_field \"Neural radiance field\") which define the color, intensity, and direction of incoming light at each point in space. (This is conceptually similar to, but not identical to, the [light field](/wiki/Light_field \"Light field\") recorded by a [hologram](/wiki/Holography \"Holography\").) For any useful resolution, the amount of data in a radiance field is so large that it is impractical to represent it directly as volumetric data, and an [approximation](/wiki/Approximation \"Approximation\") function must be found. [Neural networks](/wiki/Deep_learning \"Deep learning\") are typically used to generate and evaluate these approximations, sometimes using video frames, or a collection of photographs of a scene taken at different angles, as \"[training data](/wiki/Training%2C_validation%2C_and_test_data_sets%23Training_data_set \"Training, validation, and test data sets#Training data set\")\".\n\nAlgorithms related to neural networks have recently been used to find approximations of a scene as [3D Gaussians](/wiki/Gaussian_splatting \"Gaussian splatting\"). The resulting representation is similar to a [point cloud](/wiki/Point_cloud \"Point cloud\"), except that it uses fuzzy, partially\\-transparent blobs of varying dimensions and orientations instead of points. As with [neural radiance fields](/wiki/Neural_radiance_field \"Neural radiance field\"), these approximations are often generated from photographs or video frames.\n\n",
"### 2D vector graphics\n\nA [vector graphics](/wiki/Vector_graphics \"Vector graphics\") image description may include:\n* [Coordinates](/wiki/Cartesian_coordinate_system \"Cartesian coordinate system\") and [curvature](/wiki/Curvature \"Curvature\") information for [line segments](/wiki/Line_segments \"Line segments\"), [arcs](/wiki/Circular_arc \"Circular arc\"), and [Bézier curves](/wiki/B%C3%A9zier_curve \"Bézier curve\") (which may be used as boundaries of filled shapes)\n* Center coordinates, width, and height (or [bounding rectangle](/wiki/Minimum_bounding_rectangle \"Minimum bounding rectangle\") coordinates) of [basic](/wiki/Geometric_primitive \"Geometric primitive\") shapes such as [rectangles](/wiki/Rectangle \"Rectangle\"), [circles](/wiki/Circle \"Circle\") and [ellipses](/wiki/Ellipse \"Ellipse\")\n* Color, width and pattern (such as dashed or dotted) for rendering lines\n* Colors, patterns, and [gradients](/wiki/Color_gradient \"Color gradient\") for filling shapes\n* [Bitmap](/wiki/Bitmap \"Bitmap\") image data (either embedded or in an external file) along with scale and position information\n* [Text to be rendered](/wiki/Font_rasterization \"Font rasterization\") (along with size, position, orientation, color, and font)\n* [Clipping](/wiki/Clipping_%28computer_graphics%29 \"Clipping (computer graphics)\") information, if only part of a shape or bitmap image should be rendered\n* Transparency and [compositing](/wiki/Compositing \"Compositing\") information for rendering overlapping shapes\n* [Color space](/wiki/Color_space \"Color space\") information, allowing the image to be rendered consistently on different displays and printers\n\n",
"### 3D geometry\n\nA geometric scene description may include:\n* Size, position, and orientation of [geometric primitives](/wiki/Geometric_primitive \"Geometric primitive\") such as spheres and cones (which may be [combined in various ways](/wiki/Constructive_solid_geometry \"Constructive solid geometry\") to create more complex objects)\n* [Vertex](/wiki/Vertex_%28geometry%29 \"Vertex (geometry)\") [coordinates](/wiki/Cartesian_coordinate_system \"Cartesian coordinate system\") and [surface normal](/wiki/Normal_%28geometry%29 \"Normal (geometry)\") [vectors](/wiki/Euclidean_vector \"Euclidean vector\") for [meshes](/wiki/Polygon_mesh \"Polygon mesh\") of triangles or polygons (often rendered as smooth surfaces by [subdividing](/wiki/Subdivision_surface \"Subdivision surface\") the mesh)\n* [Transformations](/wiki/Geometric_transformation \"Geometric transformation\") for positioning, rotating, and scaling objects within a scene (allowing parts of the scene to use different local coordinate systems).\n* \"Camera\" information describing how the scene is being viewed (position, direction, [focal length](/wiki/Focal_length \"Focal length\"), and [field of view](/wiki/Field_of_view \"Field of view\"))\n* Light information (location, type, brightness, and color)\n* Optical properties of surfaces, such as [albedo](/wiki/Albedo \"Albedo\"), [reflectance](/wiki/Reflectance \"Reflectance\"), and [refractive index](/wiki/Refractive_index \"Refractive index\"),\n* Optical properties of media through which light passes (transparent solids, liquids, clouds, smoke), e.g. [absorption](/wiki/Absorption_cross_section \"Absorption cross section\") and [scattering](/wiki/Cross_section_%28physics%29%23Scattering_of_light \"Cross section (physics)#Scattering of light\") cross sections\n* [Bitmap](/wiki/Bitmap \"Bitmap\") image data used as [texture maps](/wiki/Texture_mapping \"Texture mapping\") for surfaces\n* Small scripts or programs for generating complex 3D shapes or scenes [procedurally](/wiki/Procedural_generation \"Procedural generation\")\n* Description of how object and camera locations and other information change over time, for rendering an animation\n\nMany file formats exist for storing individual 3D objects or \"[models](/wiki/3D_modeling \"3D modeling\")\". These can be imported into a larger scene, or loaded on\\-demand by rendering software or games. A realistic scene may require hundreds of items like household objects, vehicles, and trees, and [3D artists](/wiki/Environment_artist \"Environment artist\") often utilize large libraries of models. In game production, these models (along with other data such as textures, audio files, and animations) are referred to as \"[assets](/wiki/Digital_asset \"Digital asset\")\".\n\n",
"### Volumetric data\n\nScientific and engineering [visualization](/wiki/Visualization_%28graphics%29 \"Visualization (graphics)\") often requires rendering [volumetric data](/wiki/Voxel \"Voxel\") generated by 3D scans or [simulations](/wiki/Computer_simulation \"Computer simulation\"). Perhaps the most common source of such data is medical [CT](/wiki/CT_scan \"CT scan\") and [MRI](/wiki/Magnetic_resonance_imaging \"Magnetic resonance imaging\") scans, which need to be rendered for diagnosis. Volumetric data can be extremely large, and requires [specialized data formats](/wiki/OpenVDB \"OpenVDB\") to store it efficiently, particularly if the volume is *[sparse](/wiki/Sparse_matrix \"Sparse matrix\")* (with empty regions that do not contain data).\n\nBefore rendering, [level sets](/wiki/Level_set \"Level set\") for volumetric data can be extracted and converted into a mesh of triangles, e.g. by using the [marching cubes](/wiki/Marching_cubes \"Marching cubes\") algorithm. Algorithms have also been developed that work directly with volumetric data, for example to render realistic depictions of the way light is scattered and absorbed by clouds and smoke, and this type of volumetric rendering is used extensively in visual effects for movies. When rendering lower\\-resolution volumetric data without interpolation, the individual cubes or \"[voxels](/wiki/Voxel \"Voxel\")\" may be visible, an effect sometimes used deliberately for game graphics.\n\n",
"### Photogrammetry and scanning\n\nPhotographs of real world objects can be incorporated into a rendered scene by using them as [textures](/wiki/Texture_mapping \"Texture mapping\") for 3D objects. Photos of a scene can also be stitched together to create [panoramic images](/wiki/Panorama \"Panorama\") or [environment maps](/wiki/Reflection_mapping \"Reflection mapping\"), which allow the scene to be rendered very efficiently but only from a single viewpoint. Scanning of real objects and scenes using [structured light](/wiki/Structured-light_3D_scanner \"Structured-light 3D scanner\") or [lidar](/wiki/Lidar \"Lidar\") produces [point clouds](/wiki/Point_cloud \"Point cloud\") consisting of the coordinates of millions of individual points in space, sometimes along with color information. These point clouds may either be rendered directly or [converted into meshes](/wiki/Point_cloud%23Conversion_to_3D_surfaces \"Point cloud#Conversion to 3D surfaces\") before rendering. (Note: \"point cloud\" sometimes also refers to a minimalist rendering style that can be used for any 3D geometry, similar to wireframe rendering.)\n\n",
"### Neural approximations and light fields\n\nA more recent, experimental approach is description of scenes using [radiance fields](/wiki/Neural_radiance_field \"Neural radiance field\") which define the color, intensity, and direction of incoming light at each point in space. (This is conceptually similar to, but not identical to, the [light field](/wiki/Light_field \"Light field\") recorded by a [hologram](/wiki/Holography \"Holography\").) For any useful resolution, the amount of data in a radiance field is so large that it is impractical to represent it directly as volumetric data, and an [approximation](/wiki/Approximation \"Approximation\") function must be found. [Neural networks](/wiki/Deep_learning \"Deep learning\") are typically used to generate and evaluate these approximations, sometimes using video frames, or a collection of photographs of a scene taken at different angles, as \"[training data](/wiki/Training%2C_validation%2C_and_test_data_sets%23Training_data_set \"Training, validation, and test data sets#Training data set\")\".\n\nAlgorithms related to neural networks have recently been used to find approximations of a scene as [3D Gaussians](/wiki/Gaussian_splatting \"Gaussian splatting\"). The resulting representation is similar to a [point cloud](/wiki/Point_cloud \"Point cloud\"), except that it uses fuzzy, partially\\-transparent blobs of varying dimensions and orientations instead of points. As with [neural radiance fields](/wiki/Neural_radiance_field \"Neural radiance field\"), these approximations are often generated from photographs or video frames.\n\n",
"Outputs\n-------\n\nThe output of rendering may be displayed immediately on the screen (many times a second, in the case of real\\-time rendering such as games) or saved in a [raster graphics](/wiki/Raster_graphics \"Raster graphics\") file format such as [JPEG](/wiki/JPEG \"JPEG\") or [PNG](/wiki/PNG \"PNG\"). High\\-end rendering applications commonly use the [OpenEXR](/wiki/OpenEXR \"OpenEXR\") file format, which can represent finer gradations of colors and [high dynamic range](/wiki/High_dynamic_range \"High dynamic range\") lighting, allowing [tone mapping](/wiki/Tone_mapping \"Tone mapping\") or other adjustments to be applied afterwards without loss of quality.\n\nQuickly rendered animations can be saved directly as video files, but for high\\-quality rendering, individual frames (which may be rendered by different computers in a [cluster](/wiki/Computer_cluster \"Computer cluster\") or *[render farm](/wiki/Render_farm \"Render farm\")* and may take hours or even days to render) are output as separate files and combined later into a video clip.\n\nThe output of a renderer sometimes includes more than just [RGB color values](/wiki/RGB_color_model%23Numeric_representations \"RGB color model#Numeric representations\"). For example, the spectrum can be sampled using multiple wavelengths of light, or additional information such as depth (distance from camera) or the material of each point in the image can be included (this data can be used during compositing or when generating [texture maps](/wiki/Texture_mapping \"Texture mapping\") for real\\-time rendering, or used to assist in [removing noise](/wiki/Noise_reduction%23In_images \"Noise reduction#In images\") from a path\\-traced image). Transparency information can be included, allowing rendered foreground objects to be composited with photographs or video. It is also sometimes useful to store the contributions of different lights, or of specular and diffuse lighting, as separate channels, so lighting can be adjusted after rendering. The [OpenEXR](/wiki/OpenEXR \"OpenEXR\") format allows storing many channels of data in a single file.\n\n",
"Techniques\n----------\n\nChoosing how to render a 3D scene usually involves trade\\-offs between speed, memory usage, and realism (although realism is not always desired). The **** developed over the years follow a loose progression, with more advanced methods becoming practical as computing power and memory capacity increased. Multiple techniques may be used for a single final image.\n\nAn important distinction is between [image order](/wiki/Image_and_object_order_rendering \"Image and object order rendering\") algorithms, which iterate over pixels of the image plane, and [object order](/wiki/Image_and_object_order_rendering \"Image and object order rendering\") algorithms, which iterate over objects in the scene. For simple scenes, object order is usually more efficient, as there are fewer objects than pixels.\n\n [2D vector graphics](/wiki/Vector_graphics \"Vector graphics\")\n The [vector displays](/wiki/Vector_monitor \"Vector monitor\") of the 1960s\\-1970s used deflection of an [electron beam](/wiki/Cathode_ray \"Cathode ray\") to draw [line segments](/wiki/Line_segment \"Line segment\") directly on the screen. Nowadays, [vector graphics](/wiki/Vector_graphics \"Vector graphics\") are rendered by [rasterization](/wiki/Rasterization \"Rasterization\") algorithms that also support filled shapes. In principle, any 2D vector graphics renderer can be used to render 3D objects by first projecting them onto a 2D image plane. \n [3D rasterization](/wiki/Rasterisation%233D_images \"Rasterisation#3D images\")\n Adapts 2D rasterization algorithms so they can be used more efficiently for 3D rendering, handling [hidden surface removal](/wiki/Hidden-surface_determination \"Hidden-surface determination\") via [scanline](/wiki/Scanline_rendering \"Scanline rendering\") or [z\\-buffer](/wiki/Z-buffering \"Z-buffering\") techniques. Different realistic or stylized effects can be obtained by coloring the pixels covered by the objects in different ways. [Surfaces](/wiki/Computer_representation_of_surfaces \"Computer representation of surfaces\") are typically divided into [meshes](/wiki/Polygon_mesh \"Polygon mesh\") of triangles before being rasterized. Rasterization is usually synonymous with \"object order\" rendering (as described above).\n [Ray casting](/wiki/Ray_casting \"Ray casting\")\n Uses geometric formulas to compute the first object that a [ray](/wiki/Line_%28geometry%29%23Ray \"Line (geometry)#Ray\") intersects. It can be used to implement \"image order\" rendering by casting a ray for each pixel, and finding a corresponding point in the scene. Ray casting is a fundamental operation used for both graphical and non\\-graphical purposes, e.g. determining whether a point is in shadow, or checking what an enemy can see in a [game](/wiki/Artificial_intelligence_in_video_games \"Artificial intelligence in video games\").\n [Ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\")\n Simulates the bouncing paths of light caused by [specular reflection](/wiki/Specular_reflection \"Specular reflection\") and [refraction](/wiki/Refraction \"Refraction\"), requiring a varying number of ray casting operations for each path. Advanced forms use [Monte Carlo techniques](/wiki/Monte_Carlo_method \"Monte Carlo method\") to render effects such as area lights, [depth of field](/wiki/Depth_of_field \"Depth of field\"), blurry reflections, and [soft shadows](/wiki/Umbra%2C_penumbra_and_antumbra \"Umbra, penumbra and antumbra\"), but computing [global illumination](/wiki/Global_illumination \"Global illumination\") is usually in the domain of path tracing.\n [Radiosity](/wiki/Radiosity_%28computer_graphics%29 \"Radiosity (computer graphics)\")\n A [finite element analysis](/wiki/Finite_element_method \"Finite element method\") approach that breaks surfaces in the scene into pieces, and estimates the amount of light that each piece receives from light sources, or indirectly from other surfaces. Once the [irradiance](/wiki/Irradiance \"Irradiance\") of each surface is known, the scene can be rendered using rasterization or ray tracing.\n [Path tracing](/wiki/Path_tracing \"Path tracing\")\n Uses [Monte Carlo integration](/wiki/Monte_Carlo_method \"Monte Carlo method\") with a simplified form of ray tracing, computing the average brightness of a [sample](/wiki/Sampling_%28statistics%29 \"Sampling (statistics)\") of the possible paths that a photon could take when traveling from a light source to the camera (for some images, thousands of paths need to be sampled per pixel). It was introduced as a [statistically unbiased](/wiki/Unbiased_rendering \"Unbiased rendering\") way to solve the [rendering equation](/wiki/Rendering_equation \"Rendering equation\"), giving ray tracing a rigorous mathematical foundation.\nEach of the above approaches has many variations, and there is some overlap. Path tracing may be considered either a distinct technique or a particular type of ray tracing. Note that the [usage](/wiki/Usage_%28language%29 \"Usage (language)\") of terminology related to ray tracing and path tracing has changed significantly over time.\n\n[thumb\\|Rendering of a fractal terrain by [ray marching](/wiki/Ray_marching \"Ray marching\")](/wiki/File:Real-time_Raymarched_Terrain.png \"Real-time Raymarched Terrain.png\")\n[Ray marching](/wiki/Ray_marching \"Ray marching\") is a family of algorithms, used by ray casting, for finding intersections between a ray and a complex object, such as a [volumetric dataset](/wiki/Volume_ray_casting \"Volume ray casting\") or a surface defined by a [signed distance function](/wiki/Signed_distance_function \"Signed distance function\"). It is not, by itself, a rendering method, but it can be incorporated into ray tracing and path tracing, and is used by rasterization to implement screen\\-space reflection and other effects.\n\nA technique called [photon mapping](/wiki/Photon_mapping \"Photon mapping\") traces paths of photons from a light source to an object, accumulating data about [irradiance](/wiki/Irradiance \"Irradiance\") which is then used during conventional ray tracing or path tracing. Rendering a scene using only rays traced from the light source to the camera is impractical, even though it corresponds more closely to reality, because a huge number of photons would need to be simulated, only a tiny fraction of which actually hit the camera.\n\nSome authors call conventional ray tracing \"backward\" ray tracing because it traces the paths of photons backwards from the camera to the light source, and call following paths from the light source (as in photon mapping) \"forward\" ray tracing. However sometimes the meaning of these terms is reversed. Tracing rays starting at the light source can also be called *particle tracing* or *light tracing*, which avoids this ambiguity.\n\nReal\\-time rendering, including video game graphics, typically uses rasterization, but increasingly combines it with ray tracing and path tracing. To enable realistic [global illumination](/wiki/Global_illumination \"Global illumination\"), real\\-time rendering often relies on pre\\-rendered (\"baked\") lighting for stationary objects. For moving objects, it may use a technique called *light probes*, in which lighting is recorded by rendering omnidirectional views of the scene at chosen points in space (often points on a grid to allow easier [interpolation](/wiki/Interpolation \"Interpolation\")). These are similar to [environment maps](/wiki/Reflection_mapping \"Reflection mapping\"), but typically use a very low resolution or an approximation such as [spherical harmonics](/wiki/Spherical_harmonics \"Spherical harmonics\"). (Note: [Blender](/wiki/Blender_%28software%29 \"Blender (software)\") uses the term 'light probes' for a more general class of pre\\-recorded lighting data, including reflection maps.)\n\n### Rasterization\n\n[thumb\\|Rendering of the [Extremely Large Telescope](/wiki/Extremely_Large_Telescope \"Extremely Large Telescope\")](/wiki/File:Latest_Rendering_of_the_E-ELT.jpg \"Latest Rendering of the E-ELT.jpg\")\n\nThe term *rasterization* (in a broad sense) encompasses many techniques used for 2D rendering and [real\\-time](/wiki/Real-time_computer_graphics \"Real-time computer graphics\") 3D rendering. 3D [animated films](/wiki/Computer_animation \"Computer animation\") were rendered by rasterization before [ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\") and [path tracing](/wiki/Path_tracing \"Path tracing\") became practical.\n\nA renderer combines rasterization with *geometry processing* (which is not specific to rasterization) and *pixel processing* which computes the [RGB color values](/wiki/RGB_color_model \"RGB color model\") to be placed in the *[framebuffer](/wiki/Framebuffer \"Framebuffer\")* for display.\n\nThe main tasks of rasterization (including pixel processing) are:\n* Determining which pixels are covered by each geometric shape in the 3D scene or 2D image (this is the actual rasterization step, in the strictest sense)\n* Blending between colors and depths defined at the [vertices](/wiki/Vertex_%28computer_graphics%29 \"Vertex (computer graphics)\") of shapes, e.g. using [barycentric coordinates](/wiki/Barycentric_coordinate_system \"Barycentric coordinate system\") (*interpolation*)\n* Determining if parts of shapes are hidden by other shapes, due to 2D layering or 3D depth (*[hidden surface removal](/wiki/Hidden-surface_determination \"Hidden-surface determination\")*)\n* Evaluating a function for each pixel covered by a shape (*[shading](/wiki/Shading \"Shading\")*)\n* Smoothing edges of shapes so pixels are less visible (*[anti\\-aliasing](/wiki/Spatial_anti-aliasing \"Spatial anti-aliasing\")*)\n* Blending overlapping transparent shapes (*[compositing](/wiki/Compositing \"Compositing\")*)\n\n3D rasterization is typically part of a *[graphics pipeline](/wiki/Graphics_pipeline \"Graphics pipeline\")* in which an application provides [lists of triangles](/wiki/Triangle_mesh \"Triangle mesh\") to be rendered, and the rendering system transforms and [projects](/wiki/3D_projection \"3D projection\") their coordinates, determines which triangles are potentially visible in the *[viewport](/wiki/Viewport \"Viewport\")*, and performs the above rasterization and pixel processing tasks before displaying the final result on the screen.\n\nHistorically, 3D rasterization used algorithms like the *[Warnock algorithm](/wiki/Warnock_algorithm \"Warnock algorithm\")* and *[scanline rendering](/wiki/Scanline_rendering \"Scanline rendering\")* (also called \"scan\\-conversion\"), which can handle arbitrary polygons and can rasterize many shapes simultaneously. Although such algorithms are still important for 2D rendering, 3D rendering now usually divides shapes into triangles and rasterizes them individually using simpler methods.\n\n[High\\-performance algorithms](/wiki/Digital_differential_analyzer_%28graphics_algorithm%29 \"Digital differential analyzer (graphics algorithm)\") exist for rasterizing [2D lines](/wiki/Bresenham%27s_line_algorithm \"Bresenham's line algorithm\"), including [anti\\-aliased lines](/wiki/Xiaolin_Wu%27s_line_algorithm \"Xiaolin Wu's line algorithm\"), as well as [ellipses](/wiki/Midpoint_circle_algorithm \"Midpoint circle algorithm\") and filled triangles. An important special case of 2D rasterization is [text rendering](/wiki/Font_rasterization \"Font rasterization\"), which requires careful anti\\-aliasing and rounding of coordinates to avoid distorting the [letterforms](/wiki/Letterform \"Letterform\") and preserve spacing, density, and sharpness.\n\nAfter 3D coordinates have been [projected](/wiki/3D_projection \"3D projection\") onto the [image plane](/wiki/Image_plane \"Image plane\"), rasterization is primarily a 2D problem, but the 3rd dimension necessitates *[hidden surface removal](/wiki/Hidden-surface_determination \"Hidden-surface determination\")*. Early computer graphics used [geometric algorithms](/wiki/Computational_geometry \"Computational geometry\") or ray casting to remove the hidden portions of shapes, or used the *[painter's algorithm](/wiki/Painter%27s_algorithm \"Painter's algorithm\")*, which sorts shapes by depth (distance from camera) and renders them from back to front. Depth sorting was later avoided by incorporating depth comparison into the [scanline rendering](/wiki/Scanline_rendering \"Scanline rendering\") algorithm. The *[z\\-buffer](/wiki/Z-buffering \"Z-buffering\")* algorithm performs the comparisons indirectly by including a depth or \"z\" value in the [framebuffer](/wiki/Framebuffer \"Framebuffer\"). A pixel is only covered by a shape if that shape's z value is lower (indicating closer to the camera) than the z value currently in the buffer. The z\\-buffer requires additional memory (an expensive resource at the time it was invented) but simplifies the rasterization code and permits multiple passes. Memory is now faster and more plentiful, and a z\\-buffer is almost always used for real\\-time rendering.\n\nA drawback of the basic [z\\-buffer algorithm](/wiki/Z-buffering \"Z-buffering\") is that each pixel ends up either entirely covered by a single object or filled with the background color, causing jagged edges in the final image. Early *[anti\\-aliasing](/wiki/Spatial_anti-aliasing \"Spatial anti-aliasing\")* approaches addressed this by detecting when a pixel is partially covered by a shape, and calculating the covered area. The [A\\-buffer](/wiki/A-buffer \"A-buffer\") (and other [sub\\-pixel](/wiki/Subpixel_rendering \"Subpixel rendering\") and [multi\\-sampling](/wiki/Multisample_anti-aliasing \"Multisample anti-aliasing\") techniques) solve the problem less precisely but with higher performance. For real\\-time 3D graphics, it has become common to use [complicated heuristics](/wiki/Fast_approximate_anti-aliasing \"Fast approximate anti-aliasing\") (and even [neural\\-networks](/wiki/Deep_learning_anti-aliasing \"Deep learning anti-aliasing\")) to perform anti\\-aliasing.\n\nIn 3D rasterization, color is usually determined by a *[pixel shader](/wiki/Shader%23Pixel_shaders \"Shader#Pixel shaders\")* or *fragment shader*, a small program that is run for each pixel. The shader does not (or cannot) directly access 3D data for the entire scene (this would be very slow, and would result in an algorithm similar to ray tracing) and a variety of techniques have been developed to render effects like [shadows](/wiki/Shadow_mapping \"Shadow mapping\") and [reflections](/wiki/Reflection_%28computer_graphics%29 \"Reflection (computer graphics)\") using only [texture mapping](/wiki/Texture_mapping \"Texture mapping\") and multiple passes.\n\nOlder and more basic 3D rasterization implementations did not support shaders, and used simple shading techniques such as *[flat shading](/wiki/Shading%23Flat_shading \"Shading#Flat shading\")* (lighting is computed once for each triangle, which is then rendered entirely in one color), *[Gouraud shading](/wiki/Gouraud_shading \"Gouraud shading\")* (lighting is computed using [normal vectors](/wiki/Normal_%28geometry%29 \"Normal (geometry)\") defined at vertices and then colors are interpolated across each triangle), or *[Phong shading](/wiki/Phong_shading \"Phong shading\")* (normal vectors are interpolated across each triangle and lighting is computed for each pixel).\n\nUntil relatively recently, [Pixar](/wiki/Pixar \"Pixar\") used rasterization for rendering its [animated films](/wiki/Computer_animation \"Computer animation\"). Unlike the renderers commonly used for real\\-time graphics, the [Reyes rendering system](/wiki/Reyes_rendering \"Reyes rendering\") in Pixar's [RenderMan](/wiki/Pixar_RenderMan \"Pixar RenderMan\") software was optimized for rendering very small (pixel\\-sized) polygons, and incorporated [stochastic](/wiki/Stochastic \"Stochastic\") sampling techniques more typically associated with [ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\").\n\n### Ray casting\n\nOne of the simplest ways to render a 3D scene is to test if a [ray](/wiki/Line_%28geometry%29%23Ray \"Line (geometry)#Ray\") starting at the viewpoint (the \"eye\" or \"camera\") intersects any of the geometric shapes in the scene, repeating this test using a different ray direction for each pixel. This method, called *ray casting*, was important in early computer graphics, and is a fundamental building block for more advanced algorithms. Ray casting can be used to render shapes defined by *[constructive solid geometry](/wiki/Constructive_solid_geometry \"Constructive solid geometry\")* (CSG) operations.\n\nEarly ray casting experiments include the work of Arthur Appel in the 1960s. Appel rendered shadows by casting an additional ray from each visible surface point towards a light source. He also tried rendering the density of illumination by casting random rays from the light source towards the object and [plotting](/wiki/Plotter \"Plotter\") the intersection points (similar to the later technique called *[photon mapping](/wiki/Photon_mapping \"Photon mapping\")*).\n\n[thumb\\|[Ray marching](/wiki/Ray_marching \"Ray marching\") can be used to find the first intersection of a ray with an intricate shape such as this [Mandelbulb](/wiki/Mandelbulb \"Mandelbulb\") fractal.](/wiki/File:Mandelbulb_p8a.jpg \"Mandelbulb p8a.jpg\")\nWhen rendering scenes containing many objects, testing the intersection of a ray with every object becomes very expensive. Special [data structures](/wiki/Data_structure \"Data structure\") are used to speed up this process by allowing large numbers of objects to be excluded quickly (such as objects behind the camera). These structures are analogous to [database indexes](/wiki/Database_index \"Database index\") for finding the relevant objects. The most common are the *[bounding volume hierarchy](/wiki/Bounding_volume_hierarchy \"Bounding volume hierarchy\")* (BVH), which stores a pre\\-computed [bounding box or sphere](/wiki/Bounding_volume \"Bounding volume\") for each branch of a [tree](/wiki/Tree_%28data_structure%29 \"Tree (data structure)\") of objects, and the *[k\\-d tree](/wiki/K-d_tree \"K-d tree\")* which recursively divides space into two parts. Recent [GPUs](/wiki/GPU \"GPU\") include hardware acceleration for BVH intersection tests. K\\-d trees are a special case of *[binary space partitioning](/wiki/Binary_space_partitioning \"Binary space partitioning\")*, which was frequently used in early computer graphics (it can also generate a rasterization order for the [painter's algorithm](/wiki/Painter%27s_algorithm \"Painter's algorithm\")). *[Octrees](/wiki/Octree \"Octree\")*, another historically popular technique, are still often used for volumetric data.\n\nGeometric formulas are sufficient for finding the intersection of a ray with shapes like [spheres](/wiki/Sphere \"Sphere\"), [polygons](/wiki/Polygon \"Polygon\"), and [polyhedra](/wiki/Polyhedron \"Polyhedron\"), but for most curved surfaces there is no [analytic solution](/wiki/Closed-form_expression%23Analytic_expression \"Closed-form expression#Analytic expression\"), or the intersection is difficult to compute accurately using limited precision [floating point numbers](/wiki/Floating-point_arithmetic \"Floating-point arithmetic\"). [Root\\-finding algorithms](/wiki/Root-finding_algorithm \"Root-finding algorithm\") such as [Newton's method](/wiki/Newton%27s_method \"Newton's method\") can sometimes be used. To avoid these complications, curved surfaces are often approximated as [meshes of triangles](/wiki/Triangle_mesh \"Triangle mesh\"). [Volume rendering](/wiki/Volume_rendering \"Volume rendering\") (e.g. rendering clouds and smoke), and some surfaces such as [fractals](/wiki/Fractal \"Fractal\"), may require [ray marching](/wiki/Ray_marching \"Ray marching\") instead of basic ray casting.\n\n### Ray tracing\n\n[thumb\\|250px\\|*Spiral Sphere and Julia, Detail*, a computer\\-generated image created by visual artist Robert W. McGregor using only [POV\\-Ray](/wiki/POV-Ray \"POV-Ray\") 3\\.6 and its built\\-in scene description language](/wiki/Image:SpiralSphereAndJuliaDetail1.jpg \"SpiralSphereAndJuliaDetail1.jpg\")\n\n**Ray tracing** aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the [rendering equation](/wiki/Rendering_equation \"Rendering equation\") by applying [Monte Carlo methods](/wiki/Monte_Carlo_methods \"Monte Carlo methods\") to it. Some of the most used methods are [path tracing](/wiki/Path_tracing \"Path tracing\"), [bidirectional path tracing](/wiki/Path_tracing%23Bidirectional_path_tracing \"Path tracing#Bidirectional path tracing\"), or [Metropolis light transport](/wiki/Metropolis_light_transport \"Metropolis light transport\"), but also semi realistic methods are in use, like [Whitted Style Ray Tracing](/wiki/Whitted_Style_Ray_Tracing \"Whitted Style Ray Tracing\"), or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects.\n\nIn a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as \"angle of incidence equals angle of reflection\" and more advanced laws that deal with refraction and surface roughness.\n\nOnce the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.\n\nIn [distribution ray tracing](/wiki/Distribution_ray_tracing \"Distribution ray tracing\"), at each point of intersection, multiple rays may be spawned. In [path tracing](/wiki/Path_tracing \"Path tracing\"), however, only a single ray or none is fired at each intersection, utilizing the statistical nature of [Monte Carlo](/wiki/Monte_Carlo_methods \"Monte Carlo methods\") experiments.\n\nAdvances in GPU technology have made real\\-time ray tracing possible in games, although it is currently almost always used in combination with rasterization. This enables visual effects that are difficult with only rasterization, including reflection from curved surfaces and interreflective objects, and shadows that are accurate over a wide range of distances and surface orientations. Ray tracing support is included in recent versions of the graphics APIs used by games, such as [DirectX](/wiki/DirectX_Raytracing \"DirectX Raytracing\"), [Metal](/wiki/Metal_%28API%29 \"Metal (API)\"), and [Vulkan](/wiki/Vulkan \"Vulkan\").\n\n### Radiosity\n\n[thumb\\|Classical radiosity demonstration. Surfaces are divided into 16x16 or 16x32 meshes. Top: direct light only. Bottom: radiosity solution (for [albedo](/wiki/Albedo \"Albedo\") 0\\.85\\).](/wiki/File:Classical_radiosity_example%2C_simple_scene%2C_no_interpolation%2C_direct_only_and_full.png \"Classical radiosity example, simple scene, no interpolation, direct only and full.png\")\n[thumb\\|Top: the same scene with a finer radiosity mesh, smoothing the patches during final rendering using [bilinear interpolation](/wiki/Bilinear_interpolation \"Bilinear interpolation\"). Bottom: the scene rendered with path tracing (using the PBRT renderer).](/wiki/File:Classical_radiosity_comparison_with_path_tracing%2C_simple_scene%2C_interpolated.png \"Classical radiosity comparison with path tracing, simple scene, interpolated.png\")\nRadiosity (named after the [radiometric quantity of the same name](/wiki/Radiosity_%28radiometry%29 \"Radiosity (radiometry)\")) is a method for rendering objects illuminated by light [bouncing off rough or matte surfaces](/wiki/Diffuse_reflection \"Diffuse reflection\"). This type of illumination is called *indirect light*, *environment lighting*, or *diffuse lighting*, and the problem of rendering it realistically is called *global illumination*. Rasterization and basic forms of ray tracing (other than distribution ray tracing and path tracing) can only roughly approximate indirect light, e.g. by adding a uniform \"ambient\" lighting amount chosen by the artist. Radiosity techniques are also suited to rendering scenes with *area lights* such as rectangular fluorescent lighting panels, which are difficult for rasterization and traditional ray tracing. Radiosity is considered a [physically\\-based method](/wiki/Physically_based_rendering \"Physically based rendering\"), meaning that it aims to simulate the flow of light in an environment using equations and experimental data from physics, however it often assumes that all surfaces are opaque and perfectly [Lambertian](/wiki/Lambertian_reflectance \"Lambertian reflectance\"), which reduces realism and limits its applicability.\n\nIn the original radiosity method (first proposed in 1984\\) now called *classical radiosity*, surfaces and lights in the scene are split into pieces called *patches*, a process called *[meshing](/wiki/Mesh_generation \"Mesh generation\")* (this step makes it a [finite element method](/wiki/Finite_element_method \"Finite element method\")). The rendering code must then determine what fraction of the light being emitted or [diffusely reflected](/wiki/Diffuse_reflection \"Diffuse reflection\") (scattered) by each patch is received by each other patch. These fractions are called *form factors* or *[view factors](/wiki/View_factor \"View factor\")* (first used in engineering to model [radiative heat transfer](/wiki/Thermal_radiation \"Thermal radiation\")). The form factors are multiplied by the [albedo](/wiki/Albedo \"Albedo\") of the receiving surface and put in a [matrix](/wiki/Matrix_%28mathematics%29 \"Matrix (mathematics)\"). The lighting in the scene can then be expressed as a matrix equation (or equivalently a [system of linear equations](/wiki/System_of_linear_equations \"System of linear equations\")) that can be solved by methods from [linear algebra](/wiki/Linear_algebra \"Linear algebra\").\n\nSolving the radiosity equation gives the total amount of light emitted and reflected by each patch, which is divided by area to get a value called *[radiosity](/wiki/Radiosity_%28radiometry%29 \"Radiosity (radiometry)\")* that can be used when rasterizing or ray tracing to determine the color of pixels corresponding to visible parts of the patch. For real\\-time rendering, this value (or more commonly the [irradiance](/wiki/Irradiance \"Irradiance\"), which does not depend on local surface albedo) can be pre\\-computed and stored in a texture (called an *irradiance map*) or stored as vertex data for 3D models. This feature was used in architectural visualization software to allow real\\-time walk\\-throughs of a building interior after computing the lighting.\n\nThe large size of the matrices used in classical radiosity (the square of the number of patches) causes problems for realistic scenes. Practical implementations may use [Jacobi](/wiki/Jacobi_method \"Jacobi method\") or [Gauss\\-Seidel](/wiki/Gauss%E2%80%93Seidel_method \"Gauss–Seidel method\") iterations, which is equivalent (at least in the Jacobi case) to simulating the propagation of light one bounce at a time until the amount of light remaining (not yet absorbed by surfaces) is insignificant. The number of iterations (bounces) required is dependent on the scene, not the number of patches, so the total work is proportional to the square of the number of patches (compared to the cube for [Gaussian elimination](/wiki/Gaussian_elimination \"Gaussian elimination\")). Form factors may be recomputed when they are needed, to avoid storing a complete matrix in memory.\n\nThe quality of rendering is often determined by the size of the patches, e.g. very fine meshes are needed to depict the edges of shadows accurately. An important improvement is *hierarchical radiosity*, which uses a coarser mesh (larger patches) for simulating the transfer of light between surfaces that are far away from one another, and adaptively sub\\-divides the patches as needed. This allows radiosity to be used for much larger and more complex scenes.\n\nAlternative and extended versions of the radiosity method support non\\-Lambertian surfaces, such as glossy surfaces and mirrors, and sometimes use volumes or \"clusters\" of objects as well as surface patches. Stochastic or [Monte Carlo](/wiki/Monte_Carlo_method \"Monte Carlo method\") radiosity uses [random sampling](/wiki/Sampling_%28statistics%29 \"Sampling (statistics)\") in various ways, e.g. taking samples of incident light instead of integrating over all patches, which can improve performance but adds noise (this noise can be reduced by using deterministic iterations as a final step, unlike path tracing noise). Simplified and partially precomputed versions of radiosity are widely used for real\\-time rendering, combined with techniques such as *[octree](/wiki/Octree \"Octree\") radiosity* that store approximations of the [light field](/wiki/Light_field \"Light field\").\n\n### Path tracing\n\nAs part of the approach known as *[physically based rendering](/wiki/Physically_based_rendering \"Physically based rendering\")*, **[path tracing](/wiki/Path_tracing \"Path tracing\")** has become the dominant technique for rendering realistic scenes, including effects for movies. For example, the popular open source 3D software [Blender](/wiki/Blender_%28software%29 \"Blender (software)\") uses path tracing in its Cycles renderer. Images produced using path tracing for [global illumination](/wiki/Global_illumination \"Global illumination\") are generally noisier than when using [radiosity](/wiki/Radiosity_%28computer_graphics%29 \"Radiosity (computer graphics)\") (the main competing algorithm for realistic lighting), but radiosity can be difficult to apply to complex scenes and is prone to artifacts that arise from using a [tessellated](/wiki/Tessellation_%28computer_graphics%29 \"Tessellation (computer graphics)\") representation of [irradiance](/wiki/Irradiance \"Irradiance\").\n\nLike *[distributed ray tracing](/wiki/Distributed_ray_tracing \"Distributed ray tracing\")*, path tracing is a kind of *[stochastic](/wiki/Stochastic \"Stochastic\")* or *[randomized](/wiki/Randomized_algorithm \"Randomized algorithm\")* [ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\") that uses [Monte Carlo](/wiki/Monte_Carlo_integration \"Monte Carlo integration\") or [Quasi\\-Monte Carlo](/wiki/Quasi-Monte_Carlo_method \"Quasi-Monte Carlo method\") integration. It was proposed and named in 1986 by [Jim Kajiya](/wiki/Jim_Kajiya \"Jim Kajiya\") in the same paper as the [rendering equation](/wiki/Rendering_equation \"Rendering equation\"). Kajiya observed that much of the complexity of [distributed ray tracing](/wiki/Distributed_ray_tracing \"Distributed ray tracing\") could be avoided by only tracing a single path from the camera at a time (in Kajiya's implementation, this \"no branching\" rule was broken by tracing additional rays from each surface intersection point to randomly chosen points on each light source). Kajiya suggested reducing the noise present in the output images by using *[stratified sampling](/wiki/Stratified_sampling \"Stratified sampling\")* and *[importance sampling](/wiki/Importance_sampling \"Importance sampling\")* for making random decisions such as choosing which ray to follow at each step of a path. Even with these techniques, path tracing would not have been practical for film rendering, using computers available at the time, because the computational cost of generating enough samples to reduce [variance](/wiki/Variance \"Variance\") to an acceptable level was too high. [Monster House](/wiki/Monster_House_%28film%29 \"Monster House (film)\"), the first feature film rendered entirely using path tracing, was not released until 20 years later.\n\nIn its basic form, path tracing is inefficient (requiring too many samples) for rendering [caustics](/wiki/Caustic_%28optics%29 \"Caustic (optics)\") and scenes where light enters indirectly through narrow spaces. Attempts were made to address these weaknesses in the 1990s. *[Bidirectional path tracing](/wiki/Path_tracing%23Bidirectional_path_tracing \"Path tracing#Bidirectional path tracing\")* has similarities to [photon mapping](/wiki/Photon_mapping \"Photon mapping\"), tracing rays from the light source and the camera separately, and then finding ways to connect these paths (but unlike photon mapping it usually samples new light paths for each pixel rather than using the same cached data for all pixels). *[Metropolis light transport](/wiki/Metropolis_light_transport \"Metropolis light transport\")* samples paths by modifying paths that were previously traced, spending more time exploring paths that are similar to other \"bright\" paths, which increases the chance of discovering even brighter paths. *Multiple importance sampling* provides a way to reduce [variance](/wiki/Variance \"Variance\") when combining samples from more than one sampling method, particularly when some samples are much noisier than the others.\n\nThis later work was summarized and expanded upon in [Eric Veach](/wiki/Eric_Veach \"Eric Veach\")'s 1997 PhD thesis, which helped raise interest in path tracing in the computer graphics community. The [Arnold renderer](/wiki/Autodesk_Arnold \"Autodesk Arnold\"), first released in 1998, proved that path tracing was practical for rendering frames for films, and that there was a demand for [unbiased](/wiki/Unbiased_rendering \"Unbiased rendering\") and [physically based](/wiki/Physically_based_rendering \"Physically based rendering\") rendering in the film industry; other commercial and open source path tracing renderers began appearing. Computational cost was addressed by rapid advances in [CPU](/wiki/CPU \"CPU\") and [cluster](/wiki/Computer_cluster \"Computer cluster\") performance.\n\nPath tracing's relative simplicity and its nature as a [Monte Carlo method](/wiki/Monte_Carlo_method \"Monte Carlo method\") (sampling hundreds or thousands of paths per pixel) have made it attractive to implement on a [GPU](/wiki/Graphics_processing_unit \"Graphics processing unit\"), especially on recent GPUs that support ray tracing acceleration technology such as Nvidia's [RTX](/wiki/Nvidia_RTX \"Nvidia RTX\") and [OptiX](/wiki/OptiX \"OptiX\"). However bidirectional path tracing and Metropolis light transport are more difficult to implement efficiently on a GPU.\n\nResearch into improving path tracing continues. Recent *path guiding* approaches construct approximations of the [light field](/wiki/Light_field \"Light field\") probability distribution in each volume of space, so paths can be sampled more effectively. Many techniques have been developed to [denoise](/wiki/Noise_reduction \"Noise reduction\") the output of path tracing, reducing the number of paths required to achieve acceptable quality, at the risk of losing some detail or introducing small\\-scale artifacts that are more objectionable than noise; [neural networks](/wiki/Artificial_neural_network \"Artificial neural network\") are now widely used for this purpose.\n\n### Neural rendering\n\n**Neural rendering** is a rendering method using [artificial neural networks](/wiki/Artificial_neural_network \"Artificial neural network\"). Neural rendering includes [image\\-based rendering](/wiki/Image-based_rendering \"Image-based rendering\") methods that are used to [reconstruct 3D models](/wiki/3D_reconstruction \"3D reconstruction\") from 2\\-dimensional images.One of these methods are [photogrammetry](/wiki/Photogrammetry \"Photogrammetry\"), which is a method in which a collection of images from multiple angles of an object are turned into a 3D model. There have also been recent developments in generating and rendering 3D models from text and coarse paintings by notably [Nvidia](/wiki/Nvidia \"Nvidia\"), [Google](/wiki/Google \"Google\") and various other companies.\n\n",
"### Rasterization\n\n[thumb\\|Rendering of the [Extremely Large Telescope](/wiki/Extremely_Large_Telescope \"Extremely Large Telescope\")](/wiki/File:Latest_Rendering_of_the_E-ELT.jpg \"Latest Rendering of the E-ELT.jpg\")\n\nThe term *rasterization* (in a broad sense) encompasses many techniques used for 2D rendering and [real\\-time](/wiki/Real-time_computer_graphics \"Real-time computer graphics\") 3D rendering. 3D [animated films](/wiki/Computer_animation \"Computer animation\") were rendered by rasterization before [ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\") and [path tracing](/wiki/Path_tracing \"Path tracing\") became practical.\n\nA renderer combines rasterization with *geometry processing* (which is not specific to rasterization) and *pixel processing* which computes the [RGB color values](/wiki/RGB_color_model \"RGB color model\") to be placed in the *[framebuffer](/wiki/Framebuffer \"Framebuffer\")* for display.\n\nThe main tasks of rasterization (including pixel processing) are:\n* Determining which pixels are covered by each geometric shape in the 3D scene or 2D image (this is the actual rasterization step, in the strictest sense)\n* Blending between colors and depths defined at the [vertices](/wiki/Vertex_%28computer_graphics%29 \"Vertex (computer graphics)\") of shapes, e.g. using [barycentric coordinates](/wiki/Barycentric_coordinate_system \"Barycentric coordinate system\") (*interpolation*)\n* Determining if parts of shapes are hidden by other shapes, due to 2D layering or 3D depth (*[hidden surface removal](/wiki/Hidden-surface_determination \"Hidden-surface determination\")*)\n* Evaluating a function for each pixel covered by a shape (*[shading](/wiki/Shading \"Shading\")*)\n* Smoothing edges of shapes so pixels are less visible (*[anti\\-aliasing](/wiki/Spatial_anti-aliasing \"Spatial anti-aliasing\")*)\n* Blending overlapping transparent shapes (*[compositing](/wiki/Compositing \"Compositing\")*)\n\n3D rasterization is typically part of a *[graphics pipeline](/wiki/Graphics_pipeline \"Graphics pipeline\")* in which an application provides [lists of triangles](/wiki/Triangle_mesh \"Triangle mesh\") to be rendered, and the rendering system transforms and [projects](/wiki/3D_projection \"3D projection\") their coordinates, determines which triangles are potentially visible in the *[viewport](/wiki/Viewport \"Viewport\")*, and performs the above rasterization and pixel processing tasks before displaying the final result on the screen.\n\nHistorically, 3D rasterization used algorithms like the *[Warnock algorithm](/wiki/Warnock_algorithm \"Warnock algorithm\")* and *[scanline rendering](/wiki/Scanline_rendering \"Scanline rendering\")* (also called \"scan\\-conversion\"), which can handle arbitrary polygons and can rasterize many shapes simultaneously. Although such algorithms are still important for 2D rendering, 3D rendering now usually divides shapes into triangles and rasterizes them individually using simpler methods.\n\n[High\\-performance algorithms](/wiki/Digital_differential_analyzer_%28graphics_algorithm%29 \"Digital differential analyzer (graphics algorithm)\") exist for rasterizing [2D lines](/wiki/Bresenham%27s_line_algorithm \"Bresenham's line algorithm\"), including [anti\\-aliased lines](/wiki/Xiaolin_Wu%27s_line_algorithm \"Xiaolin Wu's line algorithm\"), as well as [ellipses](/wiki/Midpoint_circle_algorithm \"Midpoint circle algorithm\") and filled triangles. An important special case of 2D rasterization is [text rendering](/wiki/Font_rasterization \"Font rasterization\"), which requires careful anti\\-aliasing and rounding of coordinates to avoid distorting the [letterforms](/wiki/Letterform \"Letterform\") and preserve spacing, density, and sharpness.\n\nAfter 3D coordinates have been [projected](/wiki/3D_projection \"3D projection\") onto the [image plane](/wiki/Image_plane \"Image plane\"), rasterization is primarily a 2D problem, but the 3rd dimension necessitates *[hidden surface removal](/wiki/Hidden-surface_determination \"Hidden-surface determination\")*. Early computer graphics used [geometric algorithms](/wiki/Computational_geometry \"Computational geometry\") or ray casting to remove the hidden portions of shapes, or used the *[painter's algorithm](/wiki/Painter%27s_algorithm \"Painter's algorithm\")*, which sorts shapes by depth (distance from camera) and renders them from back to front. Depth sorting was later avoided by incorporating depth comparison into the [scanline rendering](/wiki/Scanline_rendering \"Scanline rendering\") algorithm. The *[z\\-buffer](/wiki/Z-buffering \"Z-buffering\")* algorithm performs the comparisons indirectly by including a depth or \"z\" value in the [framebuffer](/wiki/Framebuffer \"Framebuffer\"). A pixel is only covered by a shape if that shape's z value is lower (indicating closer to the camera) than the z value currently in the buffer. The z\\-buffer requires additional memory (an expensive resource at the time it was invented) but simplifies the rasterization code and permits multiple passes. Memory is now faster and more plentiful, and a z\\-buffer is almost always used for real\\-time rendering.\n\nA drawback of the basic [z\\-buffer algorithm](/wiki/Z-buffering \"Z-buffering\") is that each pixel ends up either entirely covered by a single object or filled with the background color, causing jagged edges in the final image. Early *[anti\\-aliasing](/wiki/Spatial_anti-aliasing \"Spatial anti-aliasing\")* approaches addressed this by detecting when a pixel is partially covered by a shape, and calculating the covered area. The [A\\-buffer](/wiki/A-buffer \"A-buffer\") (and other [sub\\-pixel](/wiki/Subpixel_rendering \"Subpixel rendering\") and [multi\\-sampling](/wiki/Multisample_anti-aliasing \"Multisample anti-aliasing\") techniques) solve the problem less precisely but with higher performance. For real\\-time 3D graphics, it has become common to use [complicated heuristics](/wiki/Fast_approximate_anti-aliasing \"Fast approximate anti-aliasing\") (and even [neural\\-networks](/wiki/Deep_learning_anti-aliasing \"Deep learning anti-aliasing\")) to perform anti\\-aliasing.\n\nIn 3D rasterization, color is usually determined by a *[pixel shader](/wiki/Shader%23Pixel_shaders \"Shader#Pixel shaders\")* or *fragment shader*, a small program that is run for each pixel. The shader does not (or cannot) directly access 3D data for the entire scene (this would be very slow, and would result in an algorithm similar to ray tracing) and a variety of techniques have been developed to render effects like [shadows](/wiki/Shadow_mapping \"Shadow mapping\") and [reflections](/wiki/Reflection_%28computer_graphics%29 \"Reflection (computer graphics)\") using only [texture mapping](/wiki/Texture_mapping \"Texture mapping\") and multiple passes.\n\nOlder and more basic 3D rasterization implementations did not support shaders, and used simple shading techniques such as *[flat shading](/wiki/Shading%23Flat_shading \"Shading#Flat shading\")* (lighting is computed once for each triangle, which is then rendered entirely in one color), *[Gouraud shading](/wiki/Gouraud_shading \"Gouraud shading\")* (lighting is computed using [normal vectors](/wiki/Normal_%28geometry%29 \"Normal (geometry)\") defined at vertices and then colors are interpolated across each triangle), or *[Phong shading](/wiki/Phong_shading \"Phong shading\")* (normal vectors are interpolated across each triangle and lighting is computed for each pixel).\n\nUntil relatively recently, [Pixar](/wiki/Pixar \"Pixar\") used rasterization for rendering its [animated films](/wiki/Computer_animation \"Computer animation\"). Unlike the renderers commonly used for real\\-time graphics, the [Reyes rendering system](/wiki/Reyes_rendering \"Reyes rendering\") in Pixar's [RenderMan](/wiki/Pixar_RenderMan \"Pixar RenderMan\") software was optimized for rendering very small (pixel\\-sized) polygons, and incorporated [stochastic](/wiki/Stochastic \"Stochastic\") sampling techniques more typically associated with [ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\").\n\n",
"### Ray casting\n\nOne of the simplest ways to render a 3D scene is to test if a [ray](/wiki/Line_%28geometry%29%23Ray \"Line (geometry)#Ray\") starting at the viewpoint (the \"eye\" or \"camera\") intersects any of the geometric shapes in the scene, repeating this test using a different ray direction for each pixel. This method, called *ray casting*, was important in early computer graphics, and is a fundamental building block for more advanced algorithms. Ray casting can be used to render shapes defined by *[constructive solid geometry](/wiki/Constructive_solid_geometry \"Constructive solid geometry\")* (CSG) operations.\n\nEarly ray casting experiments include the work of Arthur Appel in the 1960s. Appel rendered shadows by casting an additional ray from each visible surface point towards a light source. He also tried rendering the density of illumination by casting random rays from the light source towards the object and [plotting](/wiki/Plotter \"Plotter\") the intersection points (similar to the later technique called *[photon mapping](/wiki/Photon_mapping \"Photon mapping\")*).\n\n[thumb\\|[Ray marching](/wiki/Ray_marching \"Ray marching\") can be used to find the first intersection of a ray with an intricate shape such as this [Mandelbulb](/wiki/Mandelbulb \"Mandelbulb\") fractal.](/wiki/File:Mandelbulb_p8a.jpg \"Mandelbulb p8a.jpg\")\nWhen rendering scenes containing many objects, testing the intersection of a ray with every object becomes very expensive. Special [data structures](/wiki/Data_structure \"Data structure\") are used to speed up this process by allowing large numbers of objects to be excluded quickly (such as objects behind the camera). These structures are analogous to [database indexes](/wiki/Database_index \"Database index\") for finding the relevant objects. The most common are the *[bounding volume hierarchy](/wiki/Bounding_volume_hierarchy \"Bounding volume hierarchy\")* (BVH), which stores a pre\\-computed [bounding box or sphere](/wiki/Bounding_volume \"Bounding volume\") for each branch of a [tree](/wiki/Tree_%28data_structure%29 \"Tree (data structure)\") of objects, and the *[k\\-d tree](/wiki/K-d_tree \"K-d tree\")* which recursively divides space into two parts. Recent [GPUs](/wiki/GPU \"GPU\") include hardware acceleration for BVH intersection tests. K\\-d trees are a special case of *[binary space partitioning](/wiki/Binary_space_partitioning \"Binary space partitioning\")*, which was frequently used in early computer graphics (it can also generate a rasterization order for the [painter's algorithm](/wiki/Painter%27s_algorithm \"Painter's algorithm\")). *[Octrees](/wiki/Octree \"Octree\")*, another historically popular technique, are still often used for volumetric data.\n\nGeometric formulas are sufficient for finding the intersection of a ray with shapes like [spheres](/wiki/Sphere \"Sphere\"), [polygons](/wiki/Polygon \"Polygon\"), and [polyhedra](/wiki/Polyhedron \"Polyhedron\"), but for most curved surfaces there is no [analytic solution](/wiki/Closed-form_expression%23Analytic_expression \"Closed-form expression#Analytic expression\"), or the intersection is difficult to compute accurately using limited precision [floating point numbers](/wiki/Floating-point_arithmetic \"Floating-point arithmetic\"). [Root\\-finding algorithms](/wiki/Root-finding_algorithm \"Root-finding algorithm\") such as [Newton's method](/wiki/Newton%27s_method \"Newton's method\") can sometimes be used. To avoid these complications, curved surfaces are often approximated as [meshes of triangles](/wiki/Triangle_mesh \"Triangle mesh\"). [Volume rendering](/wiki/Volume_rendering \"Volume rendering\") (e.g. rendering clouds and smoke), and some surfaces such as [fractals](/wiki/Fractal \"Fractal\"), may require [ray marching](/wiki/Ray_marching \"Ray marching\") instead of basic ray casting.\n\n",
"### Ray tracing\n\n[thumb\\|250px\\|*Spiral Sphere and Julia, Detail*, a computer\\-generated image created by visual artist Robert W. McGregor using only [POV\\-Ray](/wiki/POV-Ray \"POV-Ray\") 3\\.6 and its built\\-in scene description language](/wiki/Image:SpiralSphereAndJuliaDetail1.jpg \"SpiralSphereAndJuliaDetail1.jpg\")\n\n**Ray tracing** aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the [rendering equation](/wiki/Rendering_equation \"Rendering equation\") by applying [Monte Carlo methods](/wiki/Monte_Carlo_methods \"Monte Carlo methods\") to it. Some of the most used methods are [path tracing](/wiki/Path_tracing \"Path tracing\"), [bidirectional path tracing](/wiki/Path_tracing%23Bidirectional_path_tracing \"Path tracing#Bidirectional path tracing\"), or [Metropolis light transport](/wiki/Metropolis_light_transport \"Metropolis light transport\"), but also semi realistic methods are in use, like [Whitted Style Ray Tracing](/wiki/Whitted_Style_Ray_Tracing \"Whitted Style Ray Tracing\"), or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects.\n\nIn a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as \"angle of incidence equals angle of reflection\" and more advanced laws that deal with refraction and surface roughness.\n\nOnce the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.\n\nIn [distribution ray tracing](/wiki/Distribution_ray_tracing \"Distribution ray tracing\"), at each point of intersection, multiple rays may be spawned. In [path tracing](/wiki/Path_tracing \"Path tracing\"), however, only a single ray or none is fired at each intersection, utilizing the statistical nature of [Monte Carlo](/wiki/Monte_Carlo_methods \"Monte Carlo methods\") experiments.\n\nAdvances in GPU technology have made real\\-time ray tracing possible in games, although it is currently almost always used in combination with rasterization. This enables visual effects that are difficult with only rasterization, including reflection from curved surfaces and interreflective objects, and shadows that are accurate over a wide range of distances and surface orientations. Ray tracing support is included in recent versions of the graphics APIs used by games, such as [DirectX](/wiki/DirectX_Raytracing \"DirectX Raytracing\"), [Metal](/wiki/Metal_%28API%29 \"Metal (API)\"), and [Vulkan](/wiki/Vulkan \"Vulkan\").\n\n",
"### Radiosity\n\n[thumb\\|Classical radiosity demonstration. Surfaces are divided into 16x16 or 16x32 meshes. Top: direct light only. Bottom: radiosity solution (for [albedo](/wiki/Albedo \"Albedo\") 0\\.85\\).](/wiki/File:Classical_radiosity_example%2C_simple_scene%2C_no_interpolation%2C_direct_only_and_full.png \"Classical radiosity example, simple scene, no interpolation, direct only and full.png\")\n[thumb\\|Top: the same scene with a finer radiosity mesh, smoothing the patches during final rendering using [bilinear interpolation](/wiki/Bilinear_interpolation \"Bilinear interpolation\"). Bottom: the scene rendered with path tracing (using the PBRT renderer).](/wiki/File:Classical_radiosity_comparison_with_path_tracing%2C_simple_scene%2C_interpolated.png \"Classical radiosity comparison with path tracing, simple scene, interpolated.png\")\nRadiosity (named after the [radiometric quantity of the same name](/wiki/Radiosity_%28radiometry%29 \"Radiosity (radiometry)\")) is a method for rendering objects illuminated by light [bouncing off rough or matte surfaces](/wiki/Diffuse_reflection \"Diffuse reflection\"). This type of illumination is called *indirect light*, *environment lighting*, or *diffuse lighting*, and the problem of rendering it realistically is called *global illumination*. Rasterization and basic forms of ray tracing (other than distribution ray tracing and path tracing) can only roughly approximate indirect light, e.g. by adding a uniform \"ambient\" lighting amount chosen by the artist. Radiosity techniques are also suited to rendering scenes with *area lights* such as rectangular fluorescent lighting panels, which are difficult for rasterization and traditional ray tracing. Radiosity is considered a [physically\\-based method](/wiki/Physically_based_rendering \"Physically based rendering\"), meaning that it aims to simulate the flow of light in an environment using equations and experimental data from physics, however it often assumes that all surfaces are opaque and perfectly [Lambertian](/wiki/Lambertian_reflectance \"Lambertian reflectance\"), which reduces realism and limits its applicability.\n\nIn the original radiosity method (first proposed in 1984\\) now called *classical radiosity*, surfaces and lights in the scene are split into pieces called *patches*, a process called *[meshing](/wiki/Mesh_generation \"Mesh generation\")* (this step makes it a [finite element method](/wiki/Finite_element_method \"Finite element method\")). The rendering code must then determine what fraction of the light being emitted or [diffusely reflected](/wiki/Diffuse_reflection \"Diffuse reflection\") (scattered) by each patch is received by each other patch. These fractions are called *form factors* or *[view factors](/wiki/View_factor \"View factor\")* (first used in engineering to model [radiative heat transfer](/wiki/Thermal_radiation \"Thermal radiation\")). The form factors are multiplied by the [albedo](/wiki/Albedo \"Albedo\") of the receiving surface and put in a [matrix](/wiki/Matrix_%28mathematics%29 \"Matrix (mathematics)\"). The lighting in the scene can then be expressed as a matrix equation (or equivalently a [system of linear equations](/wiki/System_of_linear_equations \"System of linear equations\")) that can be solved by methods from [linear algebra](/wiki/Linear_algebra \"Linear algebra\").\n\nSolving the radiosity equation gives the total amount of light emitted and reflected by each patch, which is divided by area to get a value called *[radiosity](/wiki/Radiosity_%28radiometry%29 \"Radiosity (radiometry)\")* that can be used when rasterizing or ray tracing to determine the color of pixels corresponding to visible parts of the patch. For real\\-time rendering, this value (or more commonly the [irradiance](/wiki/Irradiance \"Irradiance\"), which does not depend on local surface albedo) can be pre\\-computed and stored in a texture (called an *irradiance map*) or stored as vertex data for 3D models. This feature was used in architectural visualization software to allow real\\-time walk\\-throughs of a building interior after computing the lighting.\n\nThe large size of the matrices used in classical radiosity (the square of the number of patches) causes problems for realistic scenes. Practical implementations may use [Jacobi](/wiki/Jacobi_method \"Jacobi method\") or [Gauss\\-Seidel](/wiki/Gauss%E2%80%93Seidel_method \"Gauss–Seidel method\") iterations, which is equivalent (at least in the Jacobi case) to simulating the propagation of light one bounce at a time until the amount of light remaining (not yet absorbed by surfaces) is insignificant. The number of iterations (bounces) required is dependent on the scene, not the number of patches, so the total work is proportional to the square of the number of patches (compared to the cube for [Gaussian elimination](/wiki/Gaussian_elimination \"Gaussian elimination\")). Form factors may be recomputed when they are needed, to avoid storing a complete matrix in memory.\n\nThe quality of rendering is often determined by the size of the patches, e.g. very fine meshes are needed to depict the edges of shadows accurately. An important improvement is *hierarchical radiosity*, which uses a coarser mesh (larger patches) for simulating the transfer of light between surfaces that are far away from one another, and adaptively sub\\-divides the patches as needed. This allows radiosity to be used for much larger and more complex scenes.\n\nAlternative and extended versions of the radiosity method support non\\-Lambertian surfaces, such as glossy surfaces and mirrors, and sometimes use volumes or \"clusters\" of objects as well as surface patches. Stochastic or [Monte Carlo](/wiki/Monte_Carlo_method \"Monte Carlo method\") radiosity uses [random sampling](/wiki/Sampling_%28statistics%29 \"Sampling (statistics)\") in various ways, e.g. taking samples of incident light instead of integrating over all patches, which can improve performance but adds noise (this noise can be reduced by using deterministic iterations as a final step, unlike path tracing noise). Simplified and partially precomputed versions of radiosity are widely used for real\\-time rendering, combined with techniques such as *[octree](/wiki/Octree \"Octree\") radiosity* that store approximations of the [light field](/wiki/Light_field \"Light field\").\n\n",
"### Path tracing\n\nAs part of the approach known as *[physically based rendering](/wiki/Physically_based_rendering \"Physically based rendering\")*, **[path tracing](/wiki/Path_tracing \"Path tracing\")** has become the dominant technique for rendering realistic scenes, including effects for movies. For example, the popular open source 3D software [Blender](/wiki/Blender_%28software%29 \"Blender (software)\") uses path tracing in its Cycles renderer. Images produced using path tracing for [global illumination](/wiki/Global_illumination \"Global illumination\") are generally noisier than when using [radiosity](/wiki/Radiosity_%28computer_graphics%29 \"Radiosity (computer graphics)\") (the main competing algorithm for realistic lighting), but radiosity can be difficult to apply to complex scenes and is prone to artifacts that arise from using a [tessellated](/wiki/Tessellation_%28computer_graphics%29 \"Tessellation (computer graphics)\") representation of [irradiance](/wiki/Irradiance \"Irradiance\").\n\nLike *[distributed ray tracing](/wiki/Distributed_ray_tracing \"Distributed ray tracing\")*, path tracing is a kind of *[stochastic](/wiki/Stochastic \"Stochastic\")* or *[randomized](/wiki/Randomized_algorithm \"Randomized algorithm\")* [ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\") that uses [Monte Carlo](/wiki/Monte_Carlo_integration \"Monte Carlo integration\") or [Quasi\\-Monte Carlo](/wiki/Quasi-Monte_Carlo_method \"Quasi-Monte Carlo method\") integration. It was proposed and named in 1986 by [Jim Kajiya](/wiki/Jim_Kajiya \"Jim Kajiya\") in the same paper as the [rendering equation](/wiki/Rendering_equation \"Rendering equation\"). Kajiya observed that much of the complexity of [distributed ray tracing](/wiki/Distributed_ray_tracing \"Distributed ray tracing\") could be avoided by only tracing a single path from the camera at a time (in Kajiya's implementation, this \"no branching\" rule was broken by tracing additional rays from each surface intersection point to randomly chosen points on each light source). Kajiya suggested reducing the noise present in the output images by using *[stratified sampling](/wiki/Stratified_sampling \"Stratified sampling\")* and *[importance sampling](/wiki/Importance_sampling \"Importance sampling\")* for making random decisions such as choosing which ray to follow at each step of a path. Even with these techniques, path tracing would not have been practical for film rendering, using computers available at the time, because the computational cost of generating enough samples to reduce [variance](/wiki/Variance \"Variance\") to an acceptable level was too high. [Monster House](/wiki/Monster_House_%28film%29 \"Monster House (film)\"), the first feature film rendered entirely using path tracing, was not released until 20 years later.\n\nIn its basic form, path tracing is inefficient (requiring too many samples) for rendering [caustics](/wiki/Caustic_%28optics%29 \"Caustic (optics)\") and scenes where light enters indirectly through narrow spaces. Attempts were made to address these weaknesses in the 1990s. *[Bidirectional path tracing](/wiki/Path_tracing%23Bidirectional_path_tracing \"Path tracing#Bidirectional path tracing\")* has similarities to [photon mapping](/wiki/Photon_mapping \"Photon mapping\"), tracing rays from the light source and the camera separately, and then finding ways to connect these paths (but unlike photon mapping it usually samples new light paths for each pixel rather than using the same cached data for all pixels). *[Metropolis light transport](/wiki/Metropolis_light_transport \"Metropolis light transport\")* samples paths by modifying paths that were previously traced, spending more time exploring paths that are similar to other \"bright\" paths, which increases the chance of discovering even brighter paths. *Multiple importance sampling* provides a way to reduce [variance](/wiki/Variance \"Variance\") when combining samples from more than one sampling method, particularly when some samples are much noisier than the others.\n\nThis later work was summarized and expanded upon in [Eric Veach](/wiki/Eric_Veach \"Eric Veach\")'s 1997 PhD thesis, which helped raise interest in path tracing in the computer graphics community. The [Arnold renderer](/wiki/Autodesk_Arnold \"Autodesk Arnold\"), first released in 1998, proved that path tracing was practical for rendering frames for films, and that there was a demand for [unbiased](/wiki/Unbiased_rendering \"Unbiased rendering\") and [physically based](/wiki/Physically_based_rendering \"Physically based rendering\") rendering in the film industry; other commercial and open source path tracing renderers began appearing. Computational cost was addressed by rapid advances in [CPU](/wiki/CPU \"CPU\") and [cluster](/wiki/Computer_cluster \"Computer cluster\") performance.\n\nPath tracing's relative simplicity and its nature as a [Monte Carlo method](/wiki/Monte_Carlo_method \"Monte Carlo method\") (sampling hundreds or thousands of paths per pixel) have made it attractive to implement on a [GPU](/wiki/Graphics_processing_unit \"Graphics processing unit\"), especially on recent GPUs that support ray tracing acceleration technology such as Nvidia's [RTX](/wiki/Nvidia_RTX \"Nvidia RTX\") and [OptiX](/wiki/OptiX \"OptiX\"). However bidirectional path tracing and Metropolis light transport are more difficult to implement efficiently on a GPU.\n\nResearch into improving path tracing continues. Recent *path guiding* approaches construct approximations of the [light field](/wiki/Light_field \"Light field\") probability distribution in each volume of space, so paths can be sampled more effectively. Many techniques have been developed to [denoise](/wiki/Noise_reduction \"Noise reduction\") the output of path tracing, reducing the number of paths required to achieve acceptable quality, at the risk of losing some detail or introducing small\\-scale artifacts that are more objectionable than noise; [neural networks](/wiki/Artificial_neural_network \"Artificial neural network\") are now widely used for this purpose.\n\n",
"### Neural rendering\n\n**Neural rendering** is a rendering method using [artificial neural networks](/wiki/Artificial_neural_network \"Artificial neural network\"). Neural rendering includes [image\\-based rendering](/wiki/Image-based_rendering \"Image-based rendering\") methods that are used to [reconstruct 3D models](/wiki/3D_reconstruction \"3D reconstruction\") from 2\\-dimensional images.One of these methods are [photogrammetry](/wiki/Photogrammetry \"Photogrammetry\"), which is a method in which a collection of images from multiple angles of an object are turned into a 3D model. There have also been recent developments in generating and rendering 3D models from text and coarse paintings by notably [Nvidia](/wiki/Nvidia \"Nvidia\"), [Google](/wiki/Google \"Google\") and various other companies.\n\n",
"Scientific and mathematical basis\n---------------------------------\n\nThe implementation of a realistic renderer always has some basic element of physical simulation or emulation some computation which resembles or abstracts a real physical process.\n\nThe term \"*[physically based](/wiki/Physically_based_rendering \"Physically based rendering\")*\" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community.\n\nThe basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques.\n\nRendering research is concerned with both the adaptation of scientific models and their efficient application.\n\nMathematics used in rendering includes: [linear algebra](/wiki/Linear_algebra \"Linear algebra\"), [calculus](/wiki/Calculus \"Calculus\"), [numerical mathematics](/wiki/Numerical_analysis \"Numerical analysis\"), [signal processing](/wiki/Digital_signal_processing \"Digital signal processing\"), and [Monte Carlo methods](/wiki/Monte_Carlo_methods \"Monte Carlo methods\").\n\n### The rendering equation\n\nThis is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non\\-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation.\n\n L\\_o(x, \\\\omega) \\= L\\_e(x, \\\\omega) \\+ \\\\int\\_\\\\Omega L\\_i(x, \\\\omega') f\\_r(x, \\\\omega', \\\\omega) (\\\\omega' \\\\cdot n) \\\\, \\\\mathrm d \\\\omega'\nMeaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light being the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and incoming angle. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport' all the movement of light in a scene.\n### The bidirectional reflectance distribution function\n\nThe **[bidirectional reflectance distribution function](/wiki/Bidirectional_reflectance_distribution_function \"Bidirectional reflectance distribution function\")** (BRDF) expresses a simple model of light interaction with a surface as follows:\n\n f\\_r(x, \\\\omega', \\\\omega) \\= \\\\frac{\\\\mathrm d L\\_r(x, \\\\omega)}{L\\_i(x, \\\\omega')(\\\\omega' \\\\cdot \\\\vec n) \\\\mathrm d \\\\omega'}\nLight interaction is often approximated by the even simpler models: diffuse reflection and specular reflection, although both can ALSO be BRDFs.\n\n### Geometric optics\n\nRendering is practically exclusively concerned with the particle aspect of light physics known as [geometrical optics](/wiki/Geometrical_optics \"Geometrical optics\"). Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of [CDs](/wiki/Compact_disc \"Compact disc\") and [DVDs](/wiki/DVD \"DVD\")) and polarisation (as seen in [LCDs](/wiki/Liquid-crystal_display \"Liquid-crystal display\")). Both types of effect, if needed, are made by appearance\\-oriented adjustment of the reflection model.\n\n### Visual perception\n\nThough it receives less attention, an understanding of [human visual perception](/wiki/Human_visual_perception \"Human visual perception\") is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate a wide range of light brightness and color, but current displays movie screen, computer monitor, etc. cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large\\-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short\\-cuts could be used in the rendering simulation, since certain subtleties will not be noticeable. This related subject is [tone mapping](/wiki/Tone_mapping \"Tone mapping\").\n\n### Sampling and filtering\n\nOne problem that any rendering system must deal with, no matter which approach it takes, is the **sampling problem**. Essentially, the rendering process tries to depict a [continuous function](/wiki/Continuous_function \"Continuous function\") from image space to colors by using a finite number of pixels. As a consequence of the [Nyquist–Shannon sampling theorem](/wiki/Nyquist%E2%80%93Shannon_sampling_theorem \"Nyquist–Shannon sampling theorem\") (or Kotelnikov theorem), any spatial waveform that can be displayed must consist of at least two pixels, which is proportional to [image resolution](/wiki/Image_resolution \"Image resolution\"). In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.\n\nIf a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly [aliasing](/wiki/Aliasing \"Aliasing\") to be present in the final image. Aliasing typically manifests itself as [jaggies](/wiki/Jaggies \"Jaggies\"), or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good\\-looking images) must use some kind of [low\\-pass filter](/wiki/Low-pass_filter \"Low-pass filter\") on the image function to remove high frequencies, a process called [antialiasing](/wiki/Spatial_anti-aliasing \"Spatial anti-aliasing\").\n\n",
"### The rendering equation\n\nThis is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non\\-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation.\n\n L\\_o(x, \\\\omega) \\= L\\_e(x, \\\\omega) \\+ \\\\int\\_\\\\Omega L\\_i(x, \\\\omega') f\\_r(x, \\\\omega', \\\\omega) (\\\\omega' \\\\cdot n) \\\\, \\\\mathrm d \\\\omega'\nMeaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light being the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and incoming angle. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport' all the movement of light in a scene.\n",
"### The bidirectional reflectance distribution function\n\nThe **[bidirectional reflectance distribution function](/wiki/Bidirectional_reflectance_distribution_function \"Bidirectional reflectance distribution function\")** (BRDF) expresses a simple model of light interaction with a surface as follows:\n\n f\\_r(x, \\\\omega', \\\\omega) \\= \\\\frac{\\\\mathrm d L\\_r(x, \\\\omega)}{L\\_i(x, \\\\omega')(\\\\omega' \\\\cdot \\\\vec n) \\\\mathrm d \\\\omega'}\nLight interaction is often approximated by the even simpler models: diffuse reflection and specular reflection, although both can ALSO be BRDFs.\n\n",
"### Geometric optics\n\nRendering is practically exclusively concerned with the particle aspect of light physics known as [geometrical optics](/wiki/Geometrical_optics \"Geometrical optics\"). Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of [CDs](/wiki/Compact_disc \"Compact disc\") and [DVDs](/wiki/DVD \"DVD\")) and polarisation (as seen in [LCDs](/wiki/Liquid-crystal_display \"Liquid-crystal display\")). Both types of effect, if needed, are made by appearance\\-oriented adjustment of the reflection model.\n\n",
"### Visual perception\n\nThough it receives less attention, an understanding of [human visual perception](/wiki/Human_visual_perception \"Human visual perception\") is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate a wide range of light brightness and color, but current displays movie screen, computer monitor, etc. cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large\\-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short\\-cuts could be used in the rendering simulation, since certain subtleties will not be noticeable. This related subject is [tone mapping](/wiki/Tone_mapping \"Tone mapping\").\n\n",
"### Sampling and filtering\n\nOne problem that any rendering system must deal with, no matter which approach it takes, is the **sampling problem**. Essentially, the rendering process tries to depict a [continuous function](/wiki/Continuous_function \"Continuous function\") from image space to colors by using a finite number of pixels. As a consequence of the [Nyquist–Shannon sampling theorem](/wiki/Nyquist%E2%80%93Shannon_sampling_theorem \"Nyquist–Shannon sampling theorem\") (or Kotelnikov theorem), any spatial waveform that can be displayed must consist of at least two pixels, which is proportional to [image resolution](/wiki/Image_resolution \"Image resolution\"). In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.\n\nIf a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly [aliasing](/wiki/Aliasing \"Aliasing\") to be present in the final image. Aliasing typically manifests itself as [jaggies](/wiki/Jaggies \"Jaggies\"), or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good\\-looking images) must use some kind of [low\\-pass filter](/wiki/Low-pass_filter \"Low-pass filter\") on the image function to remove high frequencies, a process called [antialiasing](/wiki/Spatial_anti-aliasing \"Spatial anti-aliasing\").\n\n",
"Hardware\n--------\n\nRendering is usually limited by available computing power and memory [bandwidth](/wiki/Bandwidth_%28computing%29 \"Bandwidth (computing)\"), and so specialized [hardware](/wiki/Computer_hardware \"Computer hardware\") has been developed to speed it up (\"accelerate\" it), particularly for [real\\-time rendering](/wiki/Real-time_computer_graphics \"Real-time computer graphics\"). Hardware features such as a [framebuffer](/wiki/Framebuffer \"Framebuffer\") for raster graphics are required to display the output of rendering smoothly in real time.\n\n### History\n\nIn the era of [vector monitors](/wiki/Vector_monitor \"Vector monitor\") (also called *calligraphic displays*), a display processing unit (DPU) was a dedicated [CPU](/wiki/Central_processing_unit \"Central processing unit\") or [coprocessor](/wiki/Coprocessor \"Coprocessor\") that maintained a list of visual elements and redrew them continuously on the screen by controlling an [electron beam](/wiki/Cathode_ray \"Cathode ray\"). Advanced DPUs such as [Evans \\& Sutherland](/wiki/Evans_%26_Sutherland \"Evans & Sutherland\")'s [Line Drawing System\\-1](/wiki/Line_Drawing_System-1 \"Line Drawing System-1\") (and later models produced into the 1980s) incorporated 3D coordinate transformation features to accelerate rendering of [wire\\-frame images](/wiki/Wire-frame_model \"Wire-frame model\"). Evans \\& Sutherland also made the [Digistar](/wiki/Digistar \"Digistar\") [planetarium](/wiki/Planetarium \"Planetarium\") projection system, which was a vector display that could render both stars and wire\\-frame graphics (the vector\\-based Digistar and Digistar II were used in many planetariums, and a few may still be in operation). A Digistar prototype was used for rendering 3D star fields for the film [Star Trek II: The Wrath of Khan](/wiki/Star_Trek_II:The_Wrath_of_Khan \"The Wrath of Khan\") – some of the first 3D computer graphics sequences ever seen in a feature film.\n\nShaded 3D graphics rendering in the 1970s and early 1980s was usually implemented on general\\-purpose computers, such as the [PDP\\-10](/wiki/PDP-10 \"PDP-10\") used by researchers at the University of Utah. It was difficult to speed up using specialized hardware because it involves a [pipeline](/wiki/Graphics_pipeline \"Graphics pipeline\") of complex steps, requiring data addressing, decision\\-making, and computation capabilities typically only provided by CPUs (although dedicated circuits for speeding up particular operations were proposed ). [Supercomputers](/wiki/Supercomputer \"Supercomputer\") or specially designed multi\\-CPU computers or [clusters](/wiki/Computer_cluster \"Computer cluster\") were sometimes used for ray tracing. In 1981, [James H. Clark](/wiki/James_H._Clark \"James H. Clark\") and [Marc Hannah](/wiki/Marc_Hannah \"Marc Hannah\") designed the Geometry Engine, a [VLSI](/wiki/Very-large-scale_integration \"Very-large-scale integration\") chip for performing some of the steps of the 3D rasterization pipeline, and started the company [Silicon Graphics](/wiki/Silicon_Graphics \"Silicon Graphics\") (SGI) to commercialize this technology.\n\n[Home computers](/wiki/Home_computer \"Home computer\") and [game consoles](/wiki/Video_game_console \"Video game console\") in the 1980s contained graphics [coprocessors](/wiki/Coprocessor \"Coprocessor\") that were capable of scrolling and filling areas of the display, and drawing [sprites](/wiki/Sprite_%28computer_graphics%29 \"Sprite (computer graphics)\") and lines, though they were not useful for rendering realistic images. Towards the end of the 1980s [PC graphics cards](/wiki/Graphics_card \"Graphics card\") and [arcade games](/wiki/Arcade_video_game \"Arcade video game\") with 3D rendering acceleration began to appear, and in the 1990s such technology became commonplace. Today, even low\\-power [mobile processors](/wiki/Mobile_processor \"Mobile processor\") typically incorporate 3D graphics acceleration features.\n\n### GPUs\n\nThe [3D graphics accelerators](/wiki/Graphics_card \"Graphics card\") of the 1990s evolved into modern GPUs. GPUs are general\\-purpose processors, like [CPUs](/wiki/Central_processing_unit \"Central processing unit\"), but they are designed for tasks that can be broken into many small, similar, mostly independent sub\\-tasks (such as rendering individual pixels) and performed in [parallel](/wiki/Parallel_computing \"Parallel computing\"). This means that a GPU can speed up any rendering algorithm that can be split into subtasks in this way, in contrast to 1990s 3D accelerators which were only designed to speed up specific rasterization algorithms and simple shading and lighting effects (although [tricks](/wiki/Kludge%23Computer_science \"Kludge#Computer science\") could be used to perform more general computations).\n\nDue to their origins, GPUs typically still provide specialized hardware acceleration for some steps of a traditional 3D rasterization [pipeline](/wiki/Graphics_pipeline \"Graphics pipeline\"), including hidden surface removal using a [z\\-buffer](/wiki/Z-buffering \"Z-buffering\"), and [texture mapping](/wiki/Texture_mapping \"Texture mapping\") with [mipmaps](/wiki/Mipmap \"Mipmap\"), but these features are no longer always used. Recent GPUs have features to accelerate finding the intersections of rays with a [bounding volume hierarchy](/wiki/Bounding_volume_hierarchy \"Bounding volume hierarchy\"), to help speed up all variants of [ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\") and [path tracing](/wiki/Path_tracing \"Path tracing\"), as well as [neural network](/wiki/Neural_network_%28machine_learning%29 \"Neural network (machine learning)\") acceleration features sometimes useful for rendering.\n\nGPUs are usually integrated with [high\\-bandwidth memory systems](/wiki/GDDR_SDRAM \"GDDR SDRAM\") to support the read and write [bandwidth](/wiki/Memory_bandwidth \"Memory bandwidth\") requirements of high\\-resolution, real\\-time rendering, particularly when multiple passes are required to render a frame, however memory [latency](/wiki/Memory_latency \"Memory latency\") may be higher than on a CPU, which can be a problem if the [critical path](/wiki/Analysis_of_parallel_algorithms%23Critical_path \"Analysis of parallel algorithms#Critical path\") in an algorithm involves many memory accesses. GPU design accepts high latency as inevitable (in part because a large number of [threads](/wiki/Thread_%28computing%29 \"Thread (computing)\") are sharing the [memory bus](/wiki/Bus_%28computing%29%23Memory_bus \"Bus (computing)#Memory bus\")) and attempts to \"hide\" it by efficiently switching between threads, so a different thread can be performing computations while the first thread is waiting for a read or write to complete.\n\nRendering algorithms will run efficiently on a GPU only if they can be implemented using small groups of threads that perform mostly the same operations. As an example of code that meets this requirement: when rendering a small square of pixels in a simple [ray\\-traced](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\") image, all threads will likely be intersecting rays with the same object and performing the same lighting computations. For performance and architectural reasons, GPUs run groups of around 16\\-64 threads called *warps* or *wavefronts* in [lock\\-step](/wiki/Single_instruction%2C_multiple_threads \"Single instruction, multiple threads\") (all threads in the group are executing the same instructions at the same time). If not all threads in the group need to run particular blocks of code (due to conditions) then some threads will be idle, or the results of their computations will be discarded, causing degraded performance.\n\n",
"### History\n\nIn the era of [vector monitors](/wiki/Vector_monitor \"Vector monitor\") (also called *calligraphic displays*), a display processing unit (DPU) was a dedicated [CPU](/wiki/Central_processing_unit \"Central processing unit\") or [coprocessor](/wiki/Coprocessor \"Coprocessor\") that maintained a list of visual elements and redrew them continuously on the screen by controlling an [electron beam](/wiki/Cathode_ray \"Cathode ray\"). Advanced DPUs such as [Evans \\& Sutherland](/wiki/Evans_%26_Sutherland \"Evans & Sutherland\")'s [Line Drawing System\\-1](/wiki/Line_Drawing_System-1 \"Line Drawing System-1\") (and later models produced into the 1980s) incorporated 3D coordinate transformation features to accelerate rendering of [wire\\-frame images](/wiki/Wire-frame_model \"Wire-frame model\"). Evans \\& Sutherland also made the [Digistar](/wiki/Digistar \"Digistar\") [planetarium](/wiki/Planetarium \"Planetarium\") projection system, which was a vector display that could render both stars and wire\\-frame graphics (the vector\\-based Digistar and Digistar II were used in many planetariums, and a few may still be in operation). A Digistar prototype was used for rendering 3D star fields for the film [Star Trek II: The Wrath of Khan](/wiki/Star_Trek_II:The_Wrath_of_Khan \"The Wrath of Khan\") – some of the first 3D computer graphics sequences ever seen in a feature film.\n\nShaded 3D graphics rendering in the 1970s and early 1980s was usually implemented on general\\-purpose computers, such as the [PDP\\-10](/wiki/PDP-10 \"PDP-10\") used by researchers at the University of Utah. It was difficult to speed up using specialized hardware because it involves a [pipeline](/wiki/Graphics_pipeline \"Graphics pipeline\") of complex steps, requiring data addressing, decision\\-making, and computation capabilities typically only provided by CPUs (although dedicated circuits for speeding up particular operations were proposed ). [Supercomputers](/wiki/Supercomputer \"Supercomputer\") or specially designed multi\\-CPU computers or [clusters](/wiki/Computer_cluster \"Computer cluster\") were sometimes used for ray tracing. In 1981, [James H. Clark](/wiki/James_H._Clark \"James H. Clark\") and [Marc Hannah](/wiki/Marc_Hannah \"Marc Hannah\") designed the Geometry Engine, a [VLSI](/wiki/Very-large-scale_integration \"Very-large-scale integration\") chip for performing some of the steps of the 3D rasterization pipeline, and started the company [Silicon Graphics](/wiki/Silicon_Graphics \"Silicon Graphics\") (SGI) to commercialize this technology.\n\n[Home computers](/wiki/Home_computer \"Home computer\") and [game consoles](/wiki/Video_game_console \"Video game console\") in the 1980s contained graphics [coprocessors](/wiki/Coprocessor \"Coprocessor\") that were capable of scrolling and filling areas of the display, and drawing [sprites](/wiki/Sprite_%28computer_graphics%29 \"Sprite (computer graphics)\") and lines, though they were not useful for rendering realistic images. Towards the end of the 1980s [PC graphics cards](/wiki/Graphics_card \"Graphics card\") and [arcade games](/wiki/Arcade_video_game \"Arcade video game\") with 3D rendering acceleration began to appear, and in the 1990s such technology became commonplace. Today, even low\\-power [mobile processors](/wiki/Mobile_processor \"Mobile processor\") typically incorporate 3D graphics acceleration features.\n\n",
"### GPUs\n\nThe [3D graphics accelerators](/wiki/Graphics_card \"Graphics card\") of the 1990s evolved into modern GPUs. GPUs are general\\-purpose processors, like [CPUs](/wiki/Central_processing_unit \"Central processing unit\"), but they are designed for tasks that can be broken into many small, similar, mostly independent sub\\-tasks (such as rendering individual pixels) and performed in [parallel](/wiki/Parallel_computing \"Parallel computing\"). This means that a GPU can speed up any rendering algorithm that can be split into subtasks in this way, in contrast to 1990s 3D accelerators which were only designed to speed up specific rasterization algorithms and simple shading and lighting effects (although [tricks](/wiki/Kludge%23Computer_science \"Kludge#Computer science\") could be used to perform more general computations).\n\nDue to their origins, GPUs typically still provide specialized hardware acceleration for some steps of a traditional 3D rasterization [pipeline](/wiki/Graphics_pipeline \"Graphics pipeline\"), including hidden surface removal using a [z\\-buffer](/wiki/Z-buffering \"Z-buffering\"), and [texture mapping](/wiki/Texture_mapping \"Texture mapping\") with [mipmaps](/wiki/Mipmap \"Mipmap\"), but these features are no longer always used. Recent GPUs have features to accelerate finding the intersections of rays with a [bounding volume hierarchy](/wiki/Bounding_volume_hierarchy \"Bounding volume hierarchy\"), to help speed up all variants of [ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\") and [path tracing](/wiki/Path_tracing \"Path tracing\"), as well as [neural network](/wiki/Neural_network_%28machine_learning%29 \"Neural network (machine learning)\") acceleration features sometimes useful for rendering.\n\nGPUs are usually integrated with [high\\-bandwidth memory systems](/wiki/GDDR_SDRAM \"GDDR SDRAM\") to support the read and write [bandwidth](/wiki/Memory_bandwidth \"Memory bandwidth\") requirements of high\\-resolution, real\\-time rendering, particularly when multiple passes are required to render a frame, however memory [latency](/wiki/Memory_latency \"Memory latency\") may be higher than on a CPU, which can be a problem if the [critical path](/wiki/Analysis_of_parallel_algorithms%23Critical_path \"Analysis of parallel algorithms#Critical path\") in an algorithm involves many memory accesses. GPU design accepts high latency as inevitable (in part because a large number of [threads](/wiki/Thread_%28computing%29 \"Thread (computing)\") are sharing the [memory bus](/wiki/Bus_%28computing%29%23Memory_bus \"Bus (computing)#Memory bus\")) and attempts to \"hide\" it by efficiently switching between threads, so a different thread can be performing computations while the first thread is waiting for a read or write to complete.\n\nRendering algorithms will run efficiently on a GPU only if they can be implemented using small groups of threads that perform mostly the same operations. As an example of code that meets this requirement: when rendering a small square of pixels in a simple [ray\\-traced](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\") image, all threads will likely be intersecting rays with the same object and performing the same lighting computations. For performance and architectural reasons, GPUs run groups of around 16\\-64 threads called *warps* or *wavefronts* in [lock\\-step](/wiki/Single_instruction%2C_multiple_threads \"Single instruction, multiple threads\") (all threads in the group are executing the same instructions at the same time). If not all threads in the group need to run particular blocks of code (due to conditions) then some threads will be idle, or the results of their computations will be discarded, causing degraded performance.\n\n",
"Chronology of algorithms and techniques\n---------------------------------------\n\nThe following is a rough timeline of frequently mentioned rendering techniques, including areas of current research. Note that even in cases where an idea was named in a specific paper, there were almost always multiple researchers working in the same area (including earlier related work). When a method is first proposed it is often very inefficient, and it takes additional research and practical efforts to turn it into a useful technique.\n\nThe list focuses on academic research and does not include hardware. (For more history see [\\#External links](/wiki/%23External_links \"#External links\"), as well as [Computer graphics\\#History](/wiki/Computer_graphics%23History \"Computer graphics#History\") and [Golden\\_age\\_of\\_arcade\\_video\\_games\\#Technology](/wiki/Golden_age_of_arcade_video_games%23Technology \"Golden age of arcade video games#Technology\").)\n\n* 1760 \\- [Lambertian reflectance model](/wiki/Lambertian_reflectance \"Lambertian reflectance\")\n* 1968 \\- [Ray casting](/wiki/Ray_casting \"Ray casting\")\n* 1968 \\- [Warnock hidden surface removal](/wiki/Warnock_algorithm \"Warnock algorithm\")\n* 1970 \\- [Scanline rendering](/wiki/Scanline_rendering \"Scanline rendering\")\n* 1971 \\- [Gouraud shading](/wiki/Gouraud_shading \"Gouraud shading\")\n* 1973 \\- [Phong shading](/wiki/Phong_shading \"Phong shading\")\n* 1973 \\- [Phong reflectance model](/wiki/Phong_reflection_model \"Phong reflection model\")\n* 1974 \\- [Texture mapping](/wiki/Texture_mapping \"Texture mapping\")\n* 1974 \\- [Z\\-buffering](/wiki/Z-buffering \"Z-buffering\")\n* 1976 \\- [Environment mapping](/wiki/Environment_mapping \"Environment mapping\")\n* 1977 \\- [Blinn\\-Phong reflectance model](/wiki/Blinn-Phong_shading_model \"Blinn-Phong shading model\")\n* 1977 \\- [Shadow volumes](/wiki/Shadow_volume \"Shadow volume\")\n* 1978 \\- [Shadow mapping](/wiki/Shadow_mapping \"Shadow mapping\")\n* 1978 \\- [Bump mapping](/wiki/Bump_mapping \"Bump mapping\")\n* 1980 \\- [BSP trees](/wiki/BSP_trees \"BSP trees\")\n* 1980 \\- [Ray tracing](/wiki/Ray_tracing_%28graphics%29 \"Ray tracing (graphics)\")\n* 1981 \\- [Cook\\-Torrance reflectance model](/wiki/Specular_highlight%23Cook%E2%80%93Torrance_model \"Specular highlight#Cook–Torrance model\")\n* 1983 \\- [MIP maps](/wiki/Mipmap \"Mipmap\")\n* 1984 \\- [Octree](/wiki/Octree \"Octree\") ray tracing\n* 1984 \\- [Alpha compositing](/wiki/Alpha_compositing \"Alpha compositing\")\n* 1984 \\- [Distributed ray tracing](/wiki/Distributed_ray_tracing \"Distributed ray tracing\")\n* 1984 \\- [Radiosity](/wiki/Radiosity_%28computer_graphics%29 \"Radiosity (computer graphics)\")\n* 1984 \\- [A\\-buffer](/wiki/A-buffer \"A-buffer\")\n* 1985 \\- [Hemicube](/wiki/Hemicube_%28computer_graphics%29 \"Hemicube (computer graphics)\") radiosity\n* 1986 \\- Light source tracing\n* 1986 \\- [Rendering equation](/wiki/Rendering_equation \"Rendering equation\")\n* 1986 \\- [Path tracing](/wiki/Path_tracing \"Path tracing\")\n* 1987 \\- [Reyes rendering](/wiki/Reyes_rendering \"Reyes rendering\")\n* 1991 \\- [Xiaolin Wu line anti\\-aliasing](/wiki/Xiaolin_Wu%27s_line_algorithm \"Xiaolin Wu's line algorithm\")\n* 1991 \\- Hierarchical radiosity\n* 1993 \\- [Oren–Nayar reflectance model](/wiki/Oren%E2%80%93Nayar_reflectance_model \"Oren–Nayar reflectance model\")M. Oren and S.K. Nayar, \"[Generalization of Lambert's Reflectance Model](http://www1.cs.columbia.edu/CAVE/publications/pdfs/Oren_SIGGRAPH94.pdf) \". SIGGRAPH. pp.239\\-246, Jul, 1994\n* 1993 \\- [Tone mapping](/wiki/Tone_mapping \"Tone mapping\")\n* 1993 \\- [Subsurface scattering](/wiki/Subsurface_scattering \"Subsurface scattering\")\n* 1993 \\- [Bidirectional path tracing](/wiki/Path_tracing%23Bidirectional_path_tracing \"Path tracing#Bidirectional path tracing\") (Lafortune \\& Willems formulation)\n* 1994 \\- [Ambient occlusion](/wiki/Ambient_occlusion \"Ambient occlusion\")\n* 1995 \\- [Photon mapping](/wiki/Photon_mapping \"Photon mapping\")\n* 1995 \\- Multiple importance sampling\n* 1997 \\- [Bidirectional path tracing](/wiki/Path_tracing%23Bidirectional_path_tracing \"Path tracing#Bidirectional path tracing\") (Veach \\& Guibas formulation)\n* 1997 \\- [Metropolis light transport](/wiki/Metropolis_light_transport \"Metropolis light transport\")\n* 1997 \\- Instant Radiosity\n* 2002 \\- [Precomputed Radiance Transfer](/wiki/Precomputed_Radiance_Transfer \"Precomputed Radiance Transfer\")\n* 2014 \\- [Differentiable rendering](/wiki/Michael_J._Black%23Differentiable_rendering \"Michael J. Black#Differentiable rendering\")\n* 2017 \\- Path guiding (using adaptive SD\\-tree)\n* 2020 \\- Spatiotemporal [reservoir](/wiki/Reservoir_sampling \"Reservoir sampling\") resampling (ReSTIR)\n* 2020 \\- [Neural radiance fields](/wiki/Neural_radiance_field \"Neural radiance field\")\n* 2023 \\- 3D [Gaussian splatting](/wiki/Gaussian_splatting \"Gaussian splatting\")\n\n",
"See also\n--------\n\n* [Per\\-pixel lighting](/wiki/Per-pixel_lighting \"Per-pixel lighting\")\n",
"References\n----------\n\n",
"Further reading\n---------------\n\n",
"External links\n--------------\n\n* [SIGGRAPH](https://web.archive.org/web/19961221040900/http://siggraph.org/) the ACMs special interest group in graphics the largest academic and professional association and conference\n* [vintage3d.org \"The way to home 3d\"](https://vintage3d.org/history.php) Extensive history of computer graphics hardware, including research, commercialization, and video games and consoles\n\n[Category:3D rendering](/wiki/Category:3D_rendering \"3D rendering\")\n\n"
]
} |
Gallipoli | {
"id": [
5229428
],
"name": [
"Rsjaffe"
]
} | 273dphn47pbt6cht3oazqfdkkbjrgr0 | 2024-10-18T17:08:17Z | 1,243,446,556 | 0 | {
"title": [
"Introduction",
"History",
"Antiquity and Middle Ages",
"Ottoman era",
"Ottoman conquest",
"Crimean War (1853–1856)",
"First Balkan War (1912–1913)",
"World War I: Gallipoli Campaign (1914–1918)",
"Greco-Turkish War (1919–1922)",
"Turkish Republic",
"Notable people",
"References",
"External links"
],
"level": [
1,
2,
3,
3,
4,
4,
4,
4,
4,
3,
2,
2,
2
],
"content": [
"\n\n[thumb\\|upright\\=1\\.4\\|Satellite image of the Gallipoli peninsula and surrounding area](/wiki/File:Gallipoli_peninsula_from_space.png \"Gallipoli peninsula from space.png\")\n[upright\\=1\\.4\\|thumb\\|[ANZAC Cove](/wiki/ANZAC_Cove \"ANZAC Cove\") in Gallipoli](/wiki/File:View_of_Anzac_Cove_-_Gallipoli_Peninsula_-_Dardanelles_-_Turkey_-_01_%285734713946%29.jpg \"View of Anzac Cove - Gallipoli Peninsula - Dardanelles - Turkey - 01 (5734713946).jpg\")\nThe **Gallipoli** [peninsula](/wiki/Peninsula \"Peninsula\") (; ; ) is located in the southern part of [East Thrace](/wiki/East_Thrace \"East Thrace\"), the [European](/wiki/Europe \"Europe\") part of [Turkey](/wiki/Turkey \"Turkey\"), with the [Aegean Sea](/wiki/Aegean_Sea \"Aegean Sea\") to the west and the [Dardanelles](/wiki/Dardanelles \"Dardanelles\") strait to the east.\n\nGallipoli is the Italian form of the [Greek](/wiki/Greek_language \"Greek language\") name (), meaning 'beautiful city', the original name of the modern town of [Gelibolu](/wiki/Gelibolu \"Gelibolu\"). In [antiquity](/wiki/Classical_antiquity \"Classical antiquity\"), the peninsula was known as the **Thracian Chersonese** (; ).\n\nThe peninsula runs in a south\\-westerly direction into the Aegean Sea, between the [Dardanelles](/wiki/Dardanelles \"Dardanelles\") (formerly known as the Hellespont), and the [Gulf of Saros](/wiki/Gulf_of_Saros \"Gulf of Saros\") (formerly the bay of Melas). In [antiquity](/wiki/Ancient_Greece \"Ancient Greece\"), it was protected by the [Long Wall](/wiki/Long_Wall_%28Thracian_Chersonese%29 \"Long Wall (Thracian Chersonese)\"), a defensive structure built across the narrowest part of the peninsula near the ancient city of [Agora](/wiki/Agora_%28Thrace%29 \"Agora (Thrace)\"). The [isthmus](/wiki/Isthmus \"Isthmus\") traversed by the wall was only 36 [stadia](/wiki/Stadia_%28length%29 \"Stadia (length)\") in breadth[Herodotus](/wiki/Herodotus \"Herodotus\"), *The Histories*, [vi. 36](https://www.perseus.tufts.edu/cgi-bin/ptext?doc=Perseus%3Atext%3A1999.01.0126&layout=&loc=6.36); Xenophon, ibid.; Pseudo\\-Scylax, *[Periplus of Pseudo\\-Scylax](/wiki/Periplus_of_Pseudo-Scylax \"Periplus of Pseudo-Scylax\")*, 67 ([PDF](http://www.le.ac.uk/ar/gjs/skylax_for_www_02214.pdf) ) or about , but the length of the peninsula from this wall to its southern extremity, Cape Mastusia, was 420 stadia or about .\n\n",
"History\n-------\n\n### Antiquity and Middle Ages\n\n[thumb\\|left\\|Map of the Thracian Chersonese](/wiki/File:Thracian_chersonese.png \"Thracian chersonese.png\")\nIn ancient times, the Gallipoli Peninsula was known as the [Thracian Chersonese](/wiki/Thracian_Chersonese \"Thracian Chersonese\") (from [Greek](/wiki/Ancient_Greek_language \"Ancient Greek language\") , 'peninsula') to the Greeks and later the Romans. It was the location of several prominent towns, including [Cardia](/wiki/Cardia_%28Thrace%29 \"Cardia (Thrace)\"), [Pactya](/wiki/Pactya \"Pactya\"), Callipolis (Gallipoli), Alopeconnesus (), [Sestos](/wiki/Sestos \"Sestos\"), [Madytos](/wiki/Madytos_%28Thrace%29 \"Madytos (Thrace)\"), and [Elaeus](/wiki/Elaeus \"Elaeus\"). The peninsula was renowned for its [wheat](/wiki/Wheat \"Wheat\"). It also benefited from its strategic importance on the main route between [Europe](/wiki/Europe \"Europe\") and [Asia](/wiki/Asia \"Asia\"), as well as from its control of the shipping route from [Crimea](/wiki/Crimea \"Crimea\"). The city of Sestos was the main crossing\\-point on the [Hellespont](/wiki/Hellespont \"Hellespont\").\n\nAccording to [Herodotus](/wiki/Herodotus \"Herodotus\"), the Thracian tribe of [Dolonci](/wiki/Dolonci \"Dolonci\") () (or 'barbarians' according to [Cornelius Nepos](/wiki/Cornelius_Nepos \"Cornelius Nepos\")) held possession of the peninsula before Greek colonizers arrived. Then, settlers from [Ancient Greece](/wiki/Ancient_Greece \"Ancient Greece\"), mainly of [Ionian](/wiki/Ionia \"Ionia\") and [Aeolian](/wiki/Aeolians \"Aeolians\") stock, founded about 12 cities on the peninsula in the 7th century BC.Herodotus, [vi. 34](https://www.perseus.tufts.edu/cgi-bin/ptext?doc=Perseus%3Atext%3A1999.01.0126&layout=&loc=6.34.1) ; [Nepos, Cornelius](/wiki/Cornelius_Nepos \"Cornelius Nepos\"), *Lives of Eminent Commanders*, \"Miltiades\", [1](http://www.tertullian.org/fathers/nepos.htm#Miltiades) The [Athenian](/wiki/Classical_Athens \"Classical Athens\") statesman [Miltiades the Elder](/wiki/Miltiades_the_Elder \"Miltiades the Elder\") founded a major Athenian colony there around 560 BC. He took authority over the entire peninsula, augmenting its defences against incursions from the mainland. It eventually passed to his nephew, the more famous [Miltiades the Younger](/wiki/Miltiades_the_Younger \"Miltiades the Younger\"), about 524 BC. The peninsula was abandoned to the [Persians](/wiki/Achaemenid_Empire \"Achaemenid Empire\") in 493 BC after the beginning of the [Greco\\-Persian Wars](/wiki/Greco-Persian_Wars \"Greco-Persian Wars\") (499–478 BC).\n\nThe Persians were eventually expelled, after which the peninsula was for a time ruled by Athens, which enrolled it into the [Delian League](/wiki/Delian_League \"Delian League\") in 478 BC. The Athenians established a number of [cleruchies](/wiki/Cleruchy \"Cleruchy\") on the Thracian Chersonese and sent an additional 1,000 settlers around 448 BC. [Sparta](/wiki/Sparta \"Sparta\") gained control after the decisive [Battle of Aegospotami](/wiki/Battle_of_Aegospotami \"Battle of Aegospotami\") in 404 BC, but the peninsula subsequently reverted to the Athenians. During the 4th century BC, the Thracian Chersonese became the focus of a bitter territorial dispute between Athens and [Macedon](/wiki/Macedon \"Macedon\"), whose king [Philip II](/wiki/Philip_II_of_Macedon \"Philip II of Macedon\") sought its possession. It was eventually ceded to Philip in 338 BC.\n\nAfter the death of Philip's son [Alexander the Great](/wiki/Alexander_the_Great \"Alexander the Great\") in 323 BC, the Thracian Chersonese became the object of contention among [Alexander's successors](/wiki/Diadochi \"Diadochi\"). [Lysimachus](/wiki/Lysimachus \"Lysimachus\") established his capital [Lysimachia](/wiki/Lysimachia_%28Thrace%29 \"Lysimachia (Thrace)\") here. In 278 BC, [Celtic tribes](/wiki/List_of_ancient_Celtic_peoples_and_tribes \"List of ancient Celtic peoples and tribes\") from [Galatia](/wiki/Galatia \"Galatia\") in Asia Minor settled in the area. In 196 BC, the [Seleucid](/wiki/Seleucid_Empire \"Seleucid Empire\") king [Antiochus III](/wiki/Antiochus_III_the_Great \"Antiochus III the Great\") seized the peninsula. This alarmed the Greeks and prompted them to seek the aid of the [Romans](/wiki/Roman_Republic \"Roman Republic\"), who conquered the Thracian Chersonese, which they gave to their ally [Eumenes II](/wiki/Eumenes_II \"Eumenes II\") of [Pergamon](/wiki/Pergamon \"Pergamon\") in 188 BC. At the extinction of the [Attalid dynasty](/wiki/Attalid_dynasty \"Attalid dynasty\") in 133 BC it passed again to the Romans, who from 129 BC administered it in the [Roman province](/wiki/Roman_province \"Roman province\") of [Asia](/wiki/Asia_%28Roman_province%29 \"Asia (Roman province)\"). It was subsequently made a state\\-owned territory () and during the reign of the emperor [Augustus](/wiki/Augustus \"Augustus\") it was imperial property.\n[thumb\\|Map of the peninsula and its surroundings](/wiki/File:Gallipolimap2.png \"Gallipolimap2.png\")\n\nThe Thracian Chersonese was part of the [Eastern Roman Empire](/wiki/Eastern_Roman_Empire \"Eastern Roman Empire\") from its foundation in 395 AD. In 443 AD, [Attila the Hun](/wiki/Attila_the_Hun \"Attila the Hun\") invaded the Gallipoli Peninsula during one of the last stages of his grand campaign that year. He captured both Callipolis and Sestus. Aside from a brief period from 1204 to 1235, when it was controlled by the [Republic of Venice](/wiki/Republic_of_Venice \"Republic of Venice\"), the [Byzantine Empire](/wiki/Byzantine_Empire \"Byzantine Empire\") ruled the territory until 1356\\. During the night between 1 and 2 March 1354, a strong earthquake destroyed the city of Gallipoli and its city walls, weakening its defenses.\n\n### Ottoman era\n\n#### Ottoman conquest\n\nWithin a month after the devastating 1354 earthquake the [Ottomans](/wiki/Ottoman_Empire \"Ottoman Empire\") [besieged and captured](/wiki/Fall_of_Gallipoli \"Fall of Gallipoli\") the town of Gallipoli, making it the first Ottoman stronghold in Europe and the staging area for Ottoman expansion across the [Balkans](/wiki/Balkans \"Balkans\").Crowley, Roger. 1453: *The Holy War for Constantinople and the Clash of Islam and the West*. New York: Hyperion, 2005\\. p 31 . The [Savoyard Crusade](/wiki/Savoyard_Crusade \"Savoyard Crusade\") recaptured Gallipoli for Byzantium in 1366, but the beleaguered Byzantines were forced to hand it back in September 1376\\. The [Greeks](/wiki/Greeks \"Greeks\") living there were allowed to continue their everyday activities. In the 19th century, Gallipoli (, ) was a district () in the [Vilayet of Adrianople](/wiki/Vilayet_of_Adrianople \"Vilayet of Adrianople\"), with about thirty thousand inhabitants: comprising Greeks, Turks, Armenians and Jews.\n\n#### Crimean War (1853–1856\\)\n\n[thumb\\|The port of Gallipoli, ](/wiki/File:Port_de_Gallipoli.JPG \"Port de Gallipoli.JPG\")\nGallipoli became a major [encampment](/wiki/Military_camp \"Military camp\") for British and French forces in 1854 during the [Crimean War](/wiki/Crimean_War \"Crimean War\"), and the harbour was also a stopping\\-off point between the western Mediterranean and [Istanbul](/wiki/Istanbul \"Istanbul\") (formerly [Constantinople](/wiki/Constantinople \"Constantinople\")).\n\nIn March 1854 British and French engineers constructed an line of defence to protect the peninsula from a possible Russian attack and secure control of the route to the [Mediterranean Sea](/wiki/Mediterranean_Sea \"Mediterranean Sea\").\n\n#### First Balkan War (1912–1913\\)\n\nDuring the [First Balkan War](/wiki/First_Balkan_War \"First Balkan War\"), the 1913 [Battle of Bulair](/wiki/Battle_of_Bulair \"Battle of Bulair\") and several minor skirmishes took place where the Ottoman army fought in the Greek villages near Gallipoli\". The [Report of the International Commission on the Balkan Wars](/wiki/Report_of_the_International_Commission_on_the_Balkan_Wars \"Report of the International Commission on the Balkan Wars\") mention destruction and massacres in the area by the Ottoman army against Greek and Bulgarian population.\n\nThe Ottoman Government, under the pretext that a village was within the firing line, ordered its evacuation within three hours. The residents abandoned everything they possessed, left their village and went to [Gallipoli](/wiki/Gelibolu \"Gelibolu\"). Seven of the Greek villagers who stayed two minutes later than the three\\-hour limit allowed for the evacuation were shot by the soldiers. After the end of the Balkan War the exiles were allowed to return. But as the Government allowed only the Turks to rebuild their houses and furnish them, the exiled Greeks were compelled to remain in Gallipoli.\n\n#### World War I: Gallipoli Campaign (1914–1918\\)\n\n[thumb\\|Landing at Gallipoli in April 1915](/wiki/File:Landing_at_Gallipoli_%2813901951593%29.jpg \"Landing at Gallipoli (13901951593).jpg\")\n[thumb\\|The Sphinx overlooking Anzac Cove](/wiki/File:Gallipoli_ANZAC_Cove_Sphinx_2.JPG \"Gallipoli ANZAC Cove Sphinx 2.JPG\")\n\nDuring World War I (1914–1918\\), French, British, and allied forces (Australian, New Zealand, Newfoundland, Irish and Indian) fought the [Gallipoli campaign](/wiki/Gallipoli_campaign \"Gallipoli campaign\") (1915–1916\\) in and near the peninsula, seeking to secure a sea route to relieve their eastern ally, [Russia](/wiki/Imperial_Russia \"Imperial Russia\"). The Ottomans set up defensive fortifications along the peninsula and contained the invading forces.\n\nIn early 1915, attempting to seize a strategic advantage in World War I by capturing the [Bosporus Strait](/wiki/Bosporus_Strait \"Bosporus Strait\") at [Istanbul](/wiki/Istanbul \"Istanbul\") (formerly [Constantinople](/wiki/Constantinople \"Constantinople\")), the British authorised an attack on the peninsula by French, British, and British Empire forces. The first Australian troops landed at [ANZAC Cove](/wiki/ANZAC_Cove \"ANZAC Cove\") early in the morning of 25 April 1915\\. After eight months of heavy fighting the last Allied soldiers withdrew by 9 January 1916\\.\n\nThe campaign, one of the greatest [Ottoman](/wiki/Ottoman_Empire \"Ottoman Empire\") victories during the war, is considered by historians as a humiliating [Allied](/wiki/Allies_of_World_War_I \"Allies of World War I\") failure. [Turks](/wiki/Turkey \"Turkey\") regard it as a defining moment in their nation's history and national identity, contributing to the establishment of the Republic of Turkey eight years later under President [Mustafa Kemal Atatürk](/wiki/Mustafa_Kemal_Atat%C3%BCrk \"Mustafa Kemal Atatürk\"), who first rose to prominence as a commander at Gallipoli.\n\nThe Ottoman Empire instituted the [Gallipoli Star](/wiki/Gallipoli_Star_%28Ottoman_Empire%29 \"Gallipoli Star (Ottoman Empire)\") as a military decoration in 1915 and awarded it throughout the rest of World War I.\n\nThe campaign was the first major military action of [Australia](/wiki/Australia \"Australia\") and [New Zealand](/wiki/New_Zealand \"New Zealand\") (or [ANZACs](/wiki/Australian_and_New_Zealand_Army_Corps \"Australian and New Zealand Army Corps\")) as independent [dominions](/wiki/Dominion \"Dominion\"), setting a foundation for Australian and New Zealand military history, and contributing to their developing national identities. The date of the landing, 25 April, is known as \"[ANZAC Day](/wiki/Anzac_Day \"Anzac Day\")\". It remains the most significant commemoration of military casualties and [\"returned soldiers\"](/wiki/Veteran \"Veteran\") in Australia and New Zealand.\n\nOn the Allied side, one of the promoters of the expedition was Britain's [First Lord of the Admiralty](/wiki/First_Lord_of_the_Admiralty \"First Lord of the Admiralty\"), [Winston Churchill](/wiki/Winston_Churchill \"Winston Churchill\"), whose bullish optimism caused damage to his reputation that took years to repair.\n\nPrior to the Allied landings in April 1915, the Ottoman Empire deported [Greek residents](/wiki/Ottoman_Greeks \"Ottoman Greeks\") from Gallipoli and the surrounding region and from the islands in the [sea of Marmara](/wiki/Sea_of_Marmara \"Sea of Marmara\"), to the interior where they were at the mercy of hostile Turks. The Greeks had little time to pack and the Ottoman authorities permitted them to take only some bedding and the rest was handed over to the Government. The Turks then plundered the houses and properties. A testimony of a deportee described how the deportees were forced onto crowded steamers, standing\\-room only, then on disembarking, men of military age were removed (for forced labour in the [labour battalions](/wiki/Labour_Battalions_%28Ottoman_Empire%29 \"Labour Battalions (Ottoman Empire)\") of the Ottoman army). The rest were \"scattered… among the farms like ownerless cattle.\"\n\nThe [Metropolitan bishop](/wiki/Metropolitan_bishop \"Metropolitan bishop\") of Gallipoli wrote on 17 July 1915 that the extermination of the Christian refugees was methodical. He also mentions that \"The Turks, like beasts of prey, immediately plundered all the Christians' property and carried it off. The inhabitants and refugees of my district are entirely without shelter, awaiting to be sent no one knows where ...\". Many Greeks died from hunger and there were frequent cases of rape of women and young girls, as well as their forced conversion to [Islam](/wiki/Islam \"Islam\"). In some cases, [Muhacirs](/wiki/Muhacirs \"Muhacirs\") appeared in the villages even before the Greek inhabitants were deported and stoned the houses and threatened the inhabitants that they would kill them if they did not leave.\n\n#### Greco\\-Turkish War (1919–1922\\)\n\nGreek troops occupied Gallipoli on 4 August 1920 during the [Greco\\-Turkish War of 1919–22](/wiki/Greco-Turkish_War_%281919%E2%80%9322%29 \"Greco-Turkish War (1919–22)\"), considered part of the [Turkish War of Independence](/wiki/Turkish_War_of_Independence \"Turkish War of Independence\"). After the [Armistice of Mudros](/wiki/Armistice_of_Mudros \"Armistice of Mudros\") of 30 October 1918 it became a Greek prefecture centre as *Kallipolis*. However, Greece was forced to cede Eastern Thrace after the [Armistice of Mudanya](/wiki/Armistice_of_Mudanya \"Armistice of Mudanya\") of October 1922\\. Gallipoli was briefly handed over to British troops on 20 October 1922, but finally returned to Turkish rule on 26 November 1922\\.\n\nIn 1920, after the defeat of the [Russian White army](/wiki/White_movement \"White movement\") of General [Pyotr Wrangel](/wiki/Pyotr_Wrangel \"Pyotr Wrangel\"), a significant number of [émigré soldiers](/wiki/White_%C3%A9migr%C3%A9 \"White émigré\") and their families evacuated to Gallipoli from the [Crimean Peninsula](/wiki/Crimean_Peninsula \"Crimean Peninsula\"). From there, many went to European countries, such as [Yugoslavia](/wiki/Kingdom_of_Yugoslavia \"Kingdom of Yugoslavia\"), where they found refuge.\n\nThere are now many [cemeteries and war memorials](/wiki/List_of_war_cemeteries_and_memorials_on_the_Gallipoli_Peninsula \"List of war cemeteries and memorials on the Gallipoli Peninsula\") on the Gallipoli peninsula.\n\n### Turkish Republic\n\nBetween 1923 and 1926 Gallipoli became the centre of Gelibolu Province, comprising the districts of Gelibolu, [Eceabat](/wiki/Eceabat \"Eceabat\"), [Keşan](/wiki/Ke%C5%9Fan \"Keşan\") and [Şarköy](/wiki/%C5%9Eark%C3%B6y \"Şarköy\"). After the dissolution of the province, it became a district centre in [Çanakkale Province](/wiki/%C3%87anakkale_Province \"Çanakkale Province\").\n\n",
"### Antiquity and Middle Ages\n\n[thumb\\|left\\|Map of the Thracian Chersonese](/wiki/File:Thracian_chersonese.png \"Thracian chersonese.png\")\nIn ancient times, the Gallipoli Peninsula was known as the [Thracian Chersonese](/wiki/Thracian_Chersonese \"Thracian Chersonese\") (from [Greek](/wiki/Ancient_Greek_language \"Ancient Greek language\") , 'peninsula') to the Greeks and later the Romans. It was the location of several prominent towns, including [Cardia](/wiki/Cardia_%28Thrace%29 \"Cardia (Thrace)\"), [Pactya](/wiki/Pactya \"Pactya\"), Callipolis (Gallipoli), Alopeconnesus (), [Sestos](/wiki/Sestos \"Sestos\"), [Madytos](/wiki/Madytos_%28Thrace%29 \"Madytos (Thrace)\"), and [Elaeus](/wiki/Elaeus \"Elaeus\"). The peninsula was renowned for its [wheat](/wiki/Wheat \"Wheat\"). It also benefited from its strategic importance on the main route between [Europe](/wiki/Europe \"Europe\") and [Asia](/wiki/Asia \"Asia\"), as well as from its control of the shipping route from [Crimea](/wiki/Crimea \"Crimea\"). The city of Sestos was the main crossing\\-point on the [Hellespont](/wiki/Hellespont \"Hellespont\").\n\nAccording to [Herodotus](/wiki/Herodotus \"Herodotus\"), the Thracian tribe of [Dolonci](/wiki/Dolonci \"Dolonci\") () (or 'barbarians' according to [Cornelius Nepos](/wiki/Cornelius_Nepos \"Cornelius Nepos\")) held possession of the peninsula before Greek colonizers arrived. Then, settlers from [Ancient Greece](/wiki/Ancient_Greece \"Ancient Greece\"), mainly of [Ionian](/wiki/Ionia \"Ionia\") and [Aeolian](/wiki/Aeolians \"Aeolians\") stock, founded about 12 cities on the peninsula in the 7th century BC.Herodotus, [vi. 34](https://www.perseus.tufts.edu/cgi-bin/ptext?doc=Perseus%3Atext%3A1999.01.0126&layout=&loc=6.34.1) ; [Nepos, Cornelius](/wiki/Cornelius_Nepos \"Cornelius Nepos\"), *Lives of Eminent Commanders*, \"Miltiades\", [1](http://www.tertullian.org/fathers/nepos.htm#Miltiades) The [Athenian](/wiki/Classical_Athens \"Classical Athens\") statesman [Miltiades the Elder](/wiki/Miltiades_the_Elder \"Miltiades the Elder\") founded a major Athenian colony there around 560 BC. He took authority over the entire peninsula, augmenting its defences against incursions from the mainland. It eventually passed to his nephew, the more famous [Miltiades the Younger](/wiki/Miltiades_the_Younger \"Miltiades the Younger\"), about 524 BC. The peninsula was abandoned to the [Persians](/wiki/Achaemenid_Empire \"Achaemenid Empire\") in 493 BC after the beginning of the [Greco\\-Persian Wars](/wiki/Greco-Persian_Wars \"Greco-Persian Wars\") (499–478 BC).\n\nThe Persians were eventually expelled, after which the peninsula was for a time ruled by Athens, which enrolled it into the [Delian League](/wiki/Delian_League \"Delian League\") in 478 BC. The Athenians established a number of [cleruchies](/wiki/Cleruchy \"Cleruchy\") on the Thracian Chersonese and sent an additional 1,000 settlers around 448 BC. [Sparta](/wiki/Sparta \"Sparta\") gained control after the decisive [Battle of Aegospotami](/wiki/Battle_of_Aegospotami \"Battle of Aegospotami\") in 404 BC, but the peninsula subsequently reverted to the Athenians. During the 4th century BC, the Thracian Chersonese became the focus of a bitter territorial dispute between Athens and [Macedon](/wiki/Macedon \"Macedon\"), whose king [Philip II](/wiki/Philip_II_of_Macedon \"Philip II of Macedon\") sought its possession. It was eventually ceded to Philip in 338 BC.\n\nAfter the death of Philip's son [Alexander the Great](/wiki/Alexander_the_Great \"Alexander the Great\") in 323 BC, the Thracian Chersonese became the object of contention among [Alexander's successors](/wiki/Diadochi \"Diadochi\"). [Lysimachus](/wiki/Lysimachus \"Lysimachus\") established his capital [Lysimachia](/wiki/Lysimachia_%28Thrace%29 \"Lysimachia (Thrace)\") here. In 278 BC, [Celtic tribes](/wiki/List_of_ancient_Celtic_peoples_and_tribes \"List of ancient Celtic peoples and tribes\") from [Galatia](/wiki/Galatia \"Galatia\") in Asia Minor settled in the area. In 196 BC, the [Seleucid](/wiki/Seleucid_Empire \"Seleucid Empire\") king [Antiochus III](/wiki/Antiochus_III_the_Great \"Antiochus III the Great\") seized the peninsula. This alarmed the Greeks and prompted them to seek the aid of the [Romans](/wiki/Roman_Republic \"Roman Republic\"), who conquered the Thracian Chersonese, which they gave to their ally [Eumenes II](/wiki/Eumenes_II \"Eumenes II\") of [Pergamon](/wiki/Pergamon \"Pergamon\") in 188 BC. At the extinction of the [Attalid dynasty](/wiki/Attalid_dynasty \"Attalid dynasty\") in 133 BC it passed again to the Romans, who from 129 BC administered it in the [Roman province](/wiki/Roman_province \"Roman province\") of [Asia](/wiki/Asia_%28Roman_province%29 \"Asia (Roman province)\"). It was subsequently made a state\\-owned territory () and during the reign of the emperor [Augustus](/wiki/Augustus \"Augustus\") it was imperial property.\n[thumb\\|Map of the peninsula and its surroundings](/wiki/File:Gallipolimap2.png \"Gallipolimap2.png\")\n\nThe Thracian Chersonese was part of the [Eastern Roman Empire](/wiki/Eastern_Roman_Empire \"Eastern Roman Empire\") from its foundation in 395 AD. In 443 AD, [Attila the Hun](/wiki/Attila_the_Hun \"Attila the Hun\") invaded the Gallipoli Peninsula during one of the last stages of his grand campaign that year. He captured both Callipolis and Sestus. Aside from a brief period from 1204 to 1235, when it was controlled by the [Republic of Venice](/wiki/Republic_of_Venice \"Republic of Venice\"), the [Byzantine Empire](/wiki/Byzantine_Empire \"Byzantine Empire\") ruled the territory until 1356\\. During the night between 1 and 2 March 1354, a strong earthquake destroyed the city of Gallipoli and its city walls, weakening its defenses.\n\n",
"### Ottoman era\n\n#### Ottoman conquest\n\nWithin a month after the devastating 1354 earthquake the [Ottomans](/wiki/Ottoman_Empire \"Ottoman Empire\") [besieged and captured](/wiki/Fall_of_Gallipoli \"Fall of Gallipoli\") the town of Gallipoli, making it the first Ottoman stronghold in Europe and the staging area for Ottoman expansion across the [Balkans](/wiki/Balkans \"Balkans\").Crowley, Roger. 1453: *The Holy War for Constantinople and the Clash of Islam and the West*. New York: Hyperion, 2005\\. p 31 . The [Savoyard Crusade](/wiki/Savoyard_Crusade \"Savoyard Crusade\") recaptured Gallipoli for Byzantium in 1366, but the beleaguered Byzantines were forced to hand it back in September 1376\\. The [Greeks](/wiki/Greeks \"Greeks\") living there were allowed to continue their everyday activities. In the 19th century, Gallipoli (, ) was a district () in the [Vilayet of Adrianople](/wiki/Vilayet_of_Adrianople \"Vilayet of Adrianople\"), with about thirty thousand inhabitants: comprising Greeks, Turks, Armenians and Jews.\n\n#### Crimean War (1853–1856\\)\n\n[thumb\\|The port of Gallipoli, ](/wiki/File:Port_de_Gallipoli.JPG \"Port de Gallipoli.JPG\")\nGallipoli became a major [encampment](/wiki/Military_camp \"Military camp\") for British and French forces in 1854 during the [Crimean War](/wiki/Crimean_War \"Crimean War\"), and the harbour was also a stopping\\-off point between the western Mediterranean and [Istanbul](/wiki/Istanbul \"Istanbul\") (formerly [Constantinople](/wiki/Constantinople \"Constantinople\")).\n\nIn March 1854 British and French engineers constructed an line of defence to protect the peninsula from a possible Russian attack and secure control of the route to the [Mediterranean Sea](/wiki/Mediterranean_Sea \"Mediterranean Sea\").\n\n#### First Balkan War (1912–1913\\)\n\nDuring the [First Balkan War](/wiki/First_Balkan_War \"First Balkan War\"), the 1913 [Battle of Bulair](/wiki/Battle_of_Bulair \"Battle of Bulair\") and several minor skirmishes took place where the Ottoman army fought in the Greek villages near Gallipoli\". The [Report of the International Commission on the Balkan Wars](/wiki/Report_of_the_International_Commission_on_the_Balkan_Wars \"Report of the International Commission on the Balkan Wars\") mention destruction and massacres in the area by the Ottoman army against Greek and Bulgarian population.\n\nThe Ottoman Government, under the pretext that a village was within the firing line, ordered its evacuation within three hours. The residents abandoned everything they possessed, left their village and went to [Gallipoli](/wiki/Gelibolu \"Gelibolu\"). Seven of the Greek villagers who stayed two minutes later than the three\\-hour limit allowed for the evacuation were shot by the soldiers. After the end of the Balkan War the exiles were allowed to return. But as the Government allowed only the Turks to rebuild their houses and furnish them, the exiled Greeks were compelled to remain in Gallipoli.\n\n#### World War I: Gallipoli Campaign (1914–1918\\)\n\n[thumb\\|Landing at Gallipoli in April 1915](/wiki/File:Landing_at_Gallipoli_%2813901951593%29.jpg \"Landing at Gallipoli (13901951593).jpg\")\n[thumb\\|The Sphinx overlooking Anzac Cove](/wiki/File:Gallipoli_ANZAC_Cove_Sphinx_2.JPG \"Gallipoli ANZAC Cove Sphinx 2.JPG\")\n\nDuring World War I (1914–1918\\), French, British, and allied forces (Australian, New Zealand, Newfoundland, Irish and Indian) fought the [Gallipoli campaign](/wiki/Gallipoli_campaign \"Gallipoli campaign\") (1915–1916\\) in and near the peninsula, seeking to secure a sea route to relieve their eastern ally, [Russia](/wiki/Imperial_Russia \"Imperial Russia\"). The Ottomans set up defensive fortifications along the peninsula and contained the invading forces.\n\nIn early 1915, attempting to seize a strategic advantage in World War I by capturing the [Bosporus Strait](/wiki/Bosporus_Strait \"Bosporus Strait\") at [Istanbul](/wiki/Istanbul \"Istanbul\") (formerly [Constantinople](/wiki/Constantinople \"Constantinople\")), the British authorised an attack on the peninsula by French, British, and British Empire forces. The first Australian troops landed at [ANZAC Cove](/wiki/ANZAC_Cove \"ANZAC Cove\") early in the morning of 25 April 1915\\. After eight months of heavy fighting the last Allied soldiers withdrew by 9 January 1916\\.\n\nThe campaign, one of the greatest [Ottoman](/wiki/Ottoman_Empire \"Ottoman Empire\") victories during the war, is considered by historians as a humiliating [Allied](/wiki/Allies_of_World_War_I \"Allies of World War I\") failure. [Turks](/wiki/Turkey \"Turkey\") regard it as a defining moment in their nation's history and national identity, contributing to the establishment of the Republic of Turkey eight years later under President [Mustafa Kemal Atatürk](/wiki/Mustafa_Kemal_Atat%C3%BCrk \"Mustafa Kemal Atatürk\"), who first rose to prominence as a commander at Gallipoli.\n\nThe Ottoman Empire instituted the [Gallipoli Star](/wiki/Gallipoli_Star_%28Ottoman_Empire%29 \"Gallipoli Star (Ottoman Empire)\") as a military decoration in 1915 and awarded it throughout the rest of World War I.\n\nThe campaign was the first major military action of [Australia](/wiki/Australia \"Australia\") and [New Zealand](/wiki/New_Zealand \"New Zealand\") (or [ANZACs](/wiki/Australian_and_New_Zealand_Army_Corps \"Australian and New Zealand Army Corps\")) as independent [dominions](/wiki/Dominion \"Dominion\"), setting a foundation for Australian and New Zealand military history, and contributing to their developing national identities. The date of the landing, 25 April, is known as \"[ANZAC Day](/wiki/Anzac_Day \"Anzac Day\")\". It remains the most significant commemoration of military casualties and [\"returned soldiers\"](/wiki/Veteran \"Veteran\") in Australia and New Zealand.\n\nOn the Allied side, one of the promoters of the expedition was Britain's [First Lord of the Admiralty](/wiki/First_Lord_of_the_Admiralty \"First Lord of the Admiralty\"), [Winston Churchill](/wiki/Winston_Churchill \"Winston Churchill\"), whose bullish optimism caused damage to his reputation that took years to repair.\n\nPrior to the Allied landings in April 1915, the Ottoman Empire deported [Greek residents](/wiki/Ottoman_Greeks \"Ottoman Greeks\") from Gallipoli and the surrounding region and from the islands in the [sea of Marmara](/wiki/Sea_of_Marmara \"Sea of Marmara\"), to the interior where they were at the mercy of hostile Turks. The Greeks had little time to pack and the Ottoman authorities permitted them to take only some bedding and the rest was handed over to the Government. The Turks then plundered the houses and properties. A testimony of a deportee described how the deportees were forced onto crowded steamers, standing\\-room only, then on disembarking, men of military age were removed (for forced labour in the [labour battalions](/wiki/Labour_Battalions_%28Ottoman_Empire%29 \"Labour Battalions (Ottoman Empire)\") of the Ottoman army). The rest were \"scattered… among the farms like ownerless cattle.\"\n\nThe [Metropolitan bishop](/wiki/Metropolitan_bishop \"Metropolitan bishop\") of Gallipoli wrote on 17 July 1915 that the extermination of the Christian refugees was methodical. He also mentions that \"The Turks, like beasts of prey, immediately plundered all the Christians' property and carried it off. The inhabitants and refugees of my district are entirely without shelter, awaiting to be sent no one knows where ...\". Many Greeks died from hunger and there were frequent cases of rape of women and young girls, as well as their forced conversion to [Islam](/wiki/Islam \"Islam\"). In some cases, [Muhacirs](/wiki/Muhacirs \"Muhacirs\") appeared in the villages even before the Greek inhabitants were deported and stoned the houses and threatened the inhabitants that they would kill them if they did not leave.\n\n#### Greco\\-Turkish War (1919–1922\\)\n\nGreek troops occupied Gallipoli on 4 August 1920 during the [Greco\\-Turkish War of 1919–22](/wiki/Greco-Turkish_War_%281919%E2%80%9322%29 \"Greco-Turkish War (1919–22)\"), considered part of the [Turkish War of Independence](/wiki/Turkish_War_of_Independence \"Turkish War of Independence\"). After the [Armistice of Mudros](/wiki/Armistice_of_Mudros \"Armistice of Mudros\") of 30 October 1918 it became a Greek prefecture centre as *Kallipolis*. However, Greece was forced to cede Eastern Thrace after the [Armistice of Mudanya](/wiki/Armistice_of_Mudanya \"Armistice of Mudanya\") of October 1922\\. Gallipoli was briefly handed over to British troops on 20 October 1922, but finally returned to Turkish rule on 26 November 1922\\.\n\nIn 1920, after the defeat of the [Russian White army](/wiki/White_movement \"White movement\") of General [Pyotr Wrangel](/wiki/Pyotr_Wrangel \"Pyotr Wrangel\"), a significant number of [émigré soldiers](/wiki/White_%C3%A9migr%C3%A9 \"White émigré\") and their families evacuated to Gallipoli from the [Crimean Peninsula](/wiki/Crimean_Peninsula \"Crimean Peninsula\"). From there, many went to European countries, such as [Yugoslavia](/wiki/Kingdom_of_Yugoslavia \"Kingdom of Yugoslavia\"), where they found refuge.\n\nThere are now many [cemeteries and war memorials](/wiki/List_of_war_cemeteries_and_memorials_on_the_Gallipoli_Peninsula \"List of war cemeteries and memorials on the Gallipoli Peninsula\") on the Gallipoli peninsula.\n\n",
"#### Ottoman conquest\n\nWithin a month after the devastating 1354 earthquake the [Ottomans](/wiki/Ottoman_Empire \"Ottoman Empire\") [besieged and captured](/wiki/Fall_of_Gallipoli \"Fall of Gallipoli\") the town of Gallipoli, making it the first Ottoman stronghold in Europe and the staging area for Ottoman expansion across the [Balkans](/wiki/Balkans \"Balkans\").Crowley, Roger. 1453: *The Holy War for Constantinople and the Clash of Islam and the West*. New York: Hyperion, 2005\\. p 31 . The [Savoyard Crusade](/wiki/Savoyard_Crusade \"Savoyard Crusade\") recaptured Gallipoli for Byzantium in 1366, but the beleaguered Byzantines were forced to hand it back in September 1376\\. The [Greeks](/wiki/Greeks \"Greeks\") living there were allowed to continue their everyday activities. In the 19th century, Gallipoli (, ) was a district () in the [Vilayet of Adrianople](/wiki/Vilayet_of_Adrianople \"Vilayet of Adrianople\"), with about thirty thousand inhabitants: comprising Greeks, Turks, Armenians and Jews.\n\n",
"#### Crimean War (1853–1856\\)\n\n[thumb\\|The port of Gallipoli, ](/wiki/File:Port_de_Gallipoli.JPG \"Port de Gallipoli.JPG\")\nGallipoli became a major [encampment](/wiki/Military_camp \"Military camp\") for British and French forces in 1854 during the [Crimean War](/wiki/Crimean_War \"Crimean War\"), and the harbour was also a stopping\\-off point between the western Mediterranean and [Istanbul](/wiki/Istanbul \"Istanbul\") (formerly [Constantinople](/wiki/Constantinople \"Constantinople\")).\n\nIn March 1854 British and French engineers constructed an line of defence to protect the peninsula from a possible Russian attack and secure control of the route to the [Mediterranean Sea](/wiki/Mediterranean_Sea \"Mediterranean Sea\").\n\n",
"#### First Balkan War (1912–1913\\)\n\nDuring the [First Balkan War](/wiki/First_Balkan_War \"First Balkan War\"), the 1913 [Battle of Bulair](/wiki/Battle_of_Bulair \"Battle of Bulair\") and several minor skirmishes took place where the Ottoman army fought in the Greek villages near Gallipoli\". The [Report of the International Commission on the Balkan Wars](/wiki/Report_of_the_International_Commission_on_the_Balkan_Wars \"Report of the International Commission on the Balkan Wars\") mention destruction and massacres in the area by the Ottoman army against Greek and Bulgarian population.\n\nThe Ottoman Government, under the pretext that a village was within the firing line, ordered its evacuation within three hours. The residents abandoned everything they possessed, left their village and went to [Gallipoli](/wiki/Gelibolu \"Gelibolu\"). Seven of the Greek villagers who stayed two minutes later than the three\\-hour limit allowed for the evacuation were shot by the soldiers. After the end of the Balkan War the exiles were allowed to return. But as the Government allowed only the Turks to rebuild their houses and furnish them, the exiled Greeks were compelled to remain in Gallipoli.\n\n",
"#### World War I: Gallipoli Campaign (1914–1918\\)\n\n[thumb\\|Landing at Gallipoli in April 1915](/wiki/File:Landing_at_Gallipoli_%2813901951593%29.jpg \"Landing at Gallipoli (13901951593).jpg\")\n[thumb\\|The Sphinx overlooking Anzac Cove](/wiki/File:Gallipoli_ANZAC_Cove_Sphinx_2.JPG \"Gallipoli ANZAC Cove Sphinx 2.JPG\")\n\nDuring World War I (1914–1918\\), French, British, and allied forces (Australian, New Zealand, Newfoundland, Irish and Indian) fought the [Gallipoli campaign](/wiki/Gallipoli_campaign \"Gallipoli campaign\") (1915–1916\\) in and near the peninsula, seeking to secure a sea route to relieve their eastern ally, [Russia](/wiki/Imperial_Russia \"Imperial Russia\"). The Ottomans set up defensive fortifications along the peninsula and contained the invading forces.\n\nIn early 1915, attempting to seize a strategic advantage in World War I by capturing the [Bosporus Strait](/wiki/Bosporus_Strait \"Bosporus Strait\") at [Istanbul](/wiki/Istanbul \"Istanbul\") (formerly [Constantinople](/wiki/Constantinople \"Constantinople\")), the British authorised an attack on the peninsula by French, British, and British Empire forces. The first Australian troops landed at [ANZAC Cove](/wiki/ANZAC_Cove \"ANZAC Cove\") early in the morning of 25 April 1915\\. After eight months of heavy fighting the last Allied soldiers withdrew by 9 January 1916\\.\n\nThe campaign, one of the greatest [Ottoman](/wiki/Ottoman_Empire \"Ottoman Empire\") victories during the war, is considered by historians as a humiliating [Allied](/wiki/Allies_of_World_War_I \"Allies of World War I\") failure. [Turks](/wiki/Turkey \"Turkey\") regard it as a defining moment in their nation's history and national identity, contributing to the establishment of the Republic of Turkey eight years later under President [Mustafa Kemal Atatürk](/wiki/Mustafa_Kemal_Atat%C3%BCrk \"Mustafa Kemal Atatürk\"), who first rose to prominence as a commander at Gallipoli.\n\nThe Ottoman Empire instituted the [Gallipoli Star](/wiki/Gallipoli_Star_%28Ottoman_Empire%29 \"Gallipoli Star (Ottoman Empire)\") as a military decoration in 1915 and awarded it throughout the rest of World War I.\n\nThe campaign was the first major military action of [Australia](/wiki/Australia \"Australia\") and [New Zealand](/wiki/New_Zealand \"New Zealand\") (or [ANZACs](/wiki/Australian_and_New_Zealand_Army_Corps \"Australian and New Zealand Army Corps\")) as independent [dominions](/wiki/Dominion \"Dominion\"), setting a foundation for Australian and New Zealand military history, and contributing to their developing national identities. The date of the landing, 25 April, is known as \"[ANZAC Day](/wiki/Anzac_Day \"Anzac Day\")\". It remains the most significant commemoration of military casualties and [\"returned soldiers\"](/wiki/Veteran \"Veteran\") in Australia and New Zealand.\n\nOn the Allied side, one of the promoters of the expedition was Britain's [First Lord of the Admiralty](/wiki/First_Lord_of_the_Admiralty \"First Lord of the Admiralty\"), [Winston Churchill](/wiki/Winston_Churchill \"Winston Churchill\"), whose bullish optimism caused damage to his reputation that took years to repair.\n\nPrior to the Allied landings in April 1915, the Ottoman Empire deported [Greek residents](/wiki/Ottoman_Greeks \"Ottoman Greeks\") from Gallipoli and the surrounding region and from the islands in the [sea of Marmara](/wiki/Sea_of_Marmara \"Sea of Marmara\"), to the interior where they were at the mercy of hostile Turks. The Greeks had little time to pack and the Ottoman authorities permitted them to take only some bedding and the rest was handed over to the Government. The Turks then plundered the houses and properties. A testimony of a deportee described how the deportees were forced onto crowded steamers, standing\\-room only, then on disembarking, men of military age were removed (for forced labour in the [labour battalions](/wiki/Labour_Battalions_%28Ottoman_Empire%29 \"Labour Battalions (Ottoman Empire)\") of the Ottoman army). The rest were \"scattered… among the farms like ownerless cattle.\"\n\nThe [Metropolitan bishop](/wiki/Metropolitan_bishop \"Metropolitan bishop\") of Gallipoli wrote on 17 July 1915 that the extermination of the Christian refugees was methodical. He also mentions that \"The Turks, like beasts of prey, immediately plundered all the Christians' property and carried it off. The inhabitants and refugees of my district are entirely without shelter, awaiting to be sent no one knows where ...\". Many Greeks died from hunger and there were frequent cases of rape of women and young girls, as well as their forced conversion to [Islam](/wiki/Islam \"Islam\"). In some cases, [Muhacirs](/wiki/Muhacirs \"Muhacirs\") appeared in the villages even before the Greek inhabitants were deported and stoned the houses and threatened the inhabitants that they would kill them if they did not leave.\n\n",
"#### Greco\\-Turkish War (1919–1922\\)\n\nGreek troops occupied Gallipoli on 4 August 1920 during the [Greco\\-Turkish War of 1919–22](/wiki/Greco-Turkish_War_%281919%E2%80%9322%29 \"Greco-Turkish War (1919–22)\"), considered part of the [Turkish War of Independence](/wiki/Turkish_War_of_Independence \"Turkish War of Independence\"). After the [Armistice of Mudros](/wiki/Armistice_of_Mudros \"Armistice of Mudros\") of 30 October 1918 it became a Greek prefecture centre as *Kallipolis*. However, Greece was forced to cede Eastern Thrace after the [Armistice of Mudanya](/wiki/Armistice_of_Mudanya \"Armistice of Mudanya\") of October 1922\\. Gallipoli was briefly handed over to British troops on 20 October 1922, but finally returned to Turkish rule on 26 November 1922\\.\n\nIn 1920, after the defeat of the [Russian White army](/wiki/White_movement \"White movement\") of General [Pyotr Wrangel](/wiki/Pyotr_Wrangel \"Pyotr Wrangel\"), a significant number of [émigré soldiers](/wiki/White_%C3%A9migr%C3%A9 \"White émigré\") and their families evacuated to Gallipoli from the [Crimean Peninsula](/wiki/Crimean_Peninsula \"Crimean Peninsula\"). From there, many went to European countries, such as [Yugoslavia](/wiki/Kingdom_of_Yugoslavia \"Kingdom of Yugoslavia\"), where they found refuge.\n\nThere are now many [cemeteries and war memorials](/wiki/List_of_war_cemeteries_and_memorials_on_the_Gallipoli_Peninsula \"List of war cemeteries and memorials on the Gallipoli Peninsula\") on the Gallipoli peninsula.\n\n",
"### Turkish Republic\n\nBetween 1923 and 1926 Gallipoli became the centre of Gelibolu Province, comprising the districts of Gelibolu, [Eceabat](/wiki/Eceabat \"Eceabat\"), [Keşan](/wiki/Ke%C5%9Fan \"Keşan\") and [Şarköy](/wiki/%C5%9Eark%C3%B6y \"Şarköy\"). After the dissolution of the province, it became a district centre in [Çanakkale Province](/wiki/%C3%87anakkale_Province \"Çanakkale Province\").\n\n",
"Notable people\n--------------\n\n* [Ahmed Bican](/wiki/Ahmed_Bican \"Ahmed Bican\") (1398 – ), author\n* [Piri Reis](/wiki/Piri_Reis \"Piri Reis\") (1465/70 – 1553), admiral, geographer and cartographer\n* [Mustafa Âlî](/wiki/Mustafa_%C3%82l%C3%AE \"Mustafa Âlî\") (1541–1600\\), Ottoman historian, politician and writer\n* [Sofia Vembo](/wiki/Sofia_Vembo \"Sofia Vembo\") (1910–1978\\), Greek singer and actress\n",
"References\n----------\n\n",
"External links\n--------------\n\n* [Gallipoli Peninsula Historical National Park photos with info](https://web.archive.org/web/20110524001248/http://e-turkey.net/v/canakkale_gallipoli/)\n* [Tours of Gallipoli](https://rsltours.com/) \n* [Australia's role in the Gallipoli Campaign – Website (ABC and Dept of Veteran's Affairs)](http://www.abc.net.au/ww1-anzac/gallipoli/)\n\n[Category:Dardanelles](/wiki/Category:Dardanelles \"Dardanelles\")\n[Category:Geography of Thrace](/wiki/Category:Geography_of_Thrace \"Geography of Thrace\")\n[Category:Ancient Greek archaeological sites in Turkey](/wiki/Category:Ancient_Greek_archaeological_sites_in_Turkey \"Ancient Greek archaeological sites in Turkey\")\n[Category:Landforms of Çanakkale Province](/wiki/Category:Landforms_of_%C3%87anakkale_Province \"Landforms of Çanakkale Province\")\n[Gelibolu](/wiki/Category:Headlands_of_Turkey \"Headlands of Turkey\")\n[Category:Peninsulas of Turkey](/wiki/Category:Peninsulas_of_Turkey \"Peninsulas of Turkey\")\n[Category:Tourist attractions in Çanakkale Province](/wiki/Category:Tourist_attractions_in_%C3%87anakkale_Province \"Tourist attractions in Çanakkale Province\")\n[Category:Territories of the Republic of Venice](/wiki/Category:Territories_of_the_Republic_of_Venice \"Territories of the Republic of Venice\")\n[Category:World Heritage Tentative List for Turkey](/wiki/Category:World_Heritage_Tentative_List_for_Turkey \"World Heritage Tentative List for Turkey\")\n[Category:Places of the Greek genocide](/wiki/Category:Places_of_the_Greek_genocide \"Places of the Greek genocide\")\n\n"
]
} |
Mayo | {
"id": [
30292728
],
"name": [
"Frost"
]
} | 05pvlwo5zvumjz7tsts3lgkasjlsu58 | 2024-08-08T13:43:53Z | 1,239,295,055 | 0 | {
"title": [
"Mayo",
"Places",
"Antarctica",
"Australia",
"Canada",
"Cape Verde",
"Republic of Ireland",
"Ivory Coast",
"Sudan",
"Thailand",
"United Kingdom",
"United States",
"Multiple places",
"Schools",
"People",
"Other uses"
],
"level": [
1,
2,
3,
3,
3,
3,
3,
3,
3,
3,
3,
3,
3,
2,
2,
2
],
"content": [
"**Mayo** often refers to:\n\n* [Mayonnaise](/wiki/Mayonnaise \"Mayonnaise\"), a sauce\n* [Mayo Clinic](/wiki/Mayo_Clinic \"Mayo Clinic\"), a medical center in Rochester, Minnesota, United States\n\n**Mayo** may also refer to:\n\n",
"Places\n------\n\n### Antarctica\n\n* [Mayo Peak](/wiki/Mayo_Peak \"Mayo Peak\"), Marie Byrd Land\n\n### Australia\n\n* [Division of Mayo](/wiki/Division_of_Mayo \"Division of Mayo\"), an Australian Electoral Division in South Australia\n\n### Canada\n\n* [Mayo, Quebec](/wiki/Mayo%2C_Quebec \"Mayo, Quebec\"), a municipality\n* [Mayo, Yukon](/wiki/Mayo%2C_Yukon \"Mayo, Yukon\"), a village\n\t+ [Mayo (electoral district)](/wiki/Mayo_%28electoral_district%29 \"Mayo (electoral district)\"), Yukon, a former electoral district\n\n### Cape Verde\n\n* [Maio, Cape Verde](/wiki/Maio%2C_Cape_Verde \"Maio, Cape Verde\") (also formerly known as Mayo Island)\n\n### Republic of Ireland\n\n* [County Mayo](/wiki/County_Mayo \"County Mayo\")\n* [Mayo (Dáil constituency)](/wiki/Mayo_%28D%C3%A1il_constituency%29 \"Mayo (Dáil constituency)\")\n* [County Mayo (Parliament of Ireland constituency)](/wiki/County_Mayo_%28Parliament_of_Ireland_constituency%29 \"County Mayo (Parliament of Ireland constituency)\")\n* [County Mayo (UK Parliament constituency)](/wiki/County_Mayo_%28UK_Parliament_constituency%29 \"County Mayo (UK Parliament constituency)\")\n* [Mayo, County Mayo](/wiki/Mayo%2C_County_Mayo \"Mayo, County Mayo\"), a village\n\n### Ivory Coast\n\n* [Mayo, Ivory Coast](/wiki/Mayo%2C_Ivory_Coast \"Mayo, Ivory Coast\"), a town and commune\n\n### Sudan\n\n* [Mayo, Khartoum](/wiki/Mayo%2C_Khartoum \"Mayo, Khartoum\"), a neighborhood\n\n### Thailand\n\n* [Mayo district, Pattani](/wiki/Mayo_district%2C_Pattani \"Mayo district, Pattani\")\n\n### United Kingdom\n\n* Mayo, a [townland in County Down](/wiki/List_of_townlands_in_County_Down%23M \"List of townlands in County Down#M\"), Northern Ireland\n* [Mayo (UK Parliament constituency)](/wiki/Mayo_%28UK_Parliament_constituency%29 \"Mayo (UK Parliament constituency)\"), a former constituency encompassing the whole of County Mayo\n\n### United States\n\n* [Mayo, Florida](/wiki/Mayo%2C_Florida \"Mayo, Florida\"), a town\n* [Mayo, Kentucky](/wiki/Mayo%2C_Kentucky \"Mayo, Kentucky\"), an unincorporated community\n* [Mayo, Maryland](/wiki/Mayo%2C_Maryland \"Mayo, Maryland\"), a census\\-designated place\n* [Mayo, South Carolina](/wiki/Mayo%2C_South_Carolina \"Mayo, South Carolina\"), a census\\-designated place\n* [Mayo Lake](/wiki/Mayo_Lake \"Mayo Lake\"), North Carolina, a reservoir\n\n### Multiple places\n\n* [Mayo River (disambiguation)](/wiki/Mayo_River_%28disambiguation%29 \"Mayo River (disambiguation)\"), various rivers\n",
"### Antarctica\n\n* [Mayo Peak](/wiki/Mayo_Peak \"Mayo Peak\"), Marie Byrd Land\n",
"### Australia\n\n* [Division of Mayo](/wiki/Division_of_Mayo \"Division of Mayo\"), an Australian Electoral Division in South Australia\n",
"### Canada\n\n* [Mayo, Quebec](/wiki/Mayo%2C_Quebec \"Mayo, Quebec\"), a municipality\n* [Mayo, Yukon](/wiki/Mayo%2C_Yukon \"Mayo, Yukon\"), a village\n\t+ [Mayo (electoral district)](/wiki/Mayo_%28electoral_district%29 \"Mayo (electoral district)\"), Yukon, a former electoral district\n",
"### Cape Verde\n\n* [Maio, Cape Verde](/wiki/Maio%2C_Cape_Verde \"Maio, Cape Verde\") (also formerly known as Mayo Island)\n",
"### Republic of Ireland\n\n* [County Mayo](/wiki/County_Mayo \"County Mayo\")\n* [Mayo (Dáil constituency)](/wiki/Mayo_%28D%C3%A1il_constituency%29 \"Mayo (Dáil constituency)\")\n* [County Mayo (Parliament of Ireland constituency)](/wiki/County_Mayo_%28Parliament_of_Ireland_constituency%29 \"County Mayo (Parliament of Ireland constituency)\")\n* [County Mayo (UK Parliament constituency)](/wiki/County_Mayo_%28UK_Parliament_constituency%29 \"County Mayo (UK Parliament constituency)\")\n* [Mayo, County Mayo](/wiki/Mayo%2C_County_Mayo \"Mayo, County Mayo\"), a village\n",
"### Ivory Coast\n\n* [Mayo, Ivory Coast](/wiki/Mayo%2C_Ivory_Coast \"Mayo, Ivory Coast\"), a town and commune\n",
"### Sudan\n\n* [Mayo, Khartoum](/wiki/Mayo%2C_Khartoum \"Mayo, Khartoum\"), a neighborhood\n",
"### Thailand\n\n* [Mayo district, Pattani](/wiki/Mayo_district%2C_Pattani \"Mayo district, Pattani\")\n",
"### United Kingdom\n\n* Mayo, a [townland in County Down](/wiki/List_of_townlands_in_County_Down%23M \"List of townlands in County Down#M\"), Northern Ireland\n* [Mayo (UK Parliament constituency)](/wiki/Mayo_%28UK_Parliament_constituency%29 \"Mayo (UK Parliament constituency)\"), a former constituency encompassing the whole of County Mayo\n",
"### United States\n\n* [Mayo, Florida](/wiki/Mayo%2C_Florida \"Mayo, Florida\"), a town\n* [Mayo, Kentucky](/wiki/Mayo%2C_Kentucky \"Mayo, Kentucky\"), an unincorporated community\n* [Mayo, Maryland](/wiki/Mayo%2C_Maryland \"Mayo, Maryland\"), a census\\-designated place\n* [Mayo, South Carolina](/wiki/Mayo%2C_South_Carolina \"Mayo, South Carolina\"), a census\\-designated place\n* [Mayo Lake](/wiki/Mayo_Lake \"Mayo Lake\"), North Carolina, a reservoir\n",
"### Multiple places\n\n* [Mayo River (disambiguation)](/wiki/Mayo_River_%28disambiguation%29 \"Mayo River (disambiguation)\"), various rivers\n",
"Schools\n-------\n\n* [Mayo Clinic School of Medicine](/wiki/Mayo_Clinic_School_of_Medicine \"Mayo Clinic School of Medicine\"), formerly Mayo Medical School, an American medical school that is part of the Mayo Clinic and the Mayo Clinic College of Medicine and Science\n* [Mayo High School](/wiki/Mayo_High_School \"Mayo High School\"), a public high school in Rochester, Minnesota, United States\n* [Mayo College](/wiki/Mayo_College \"Mayo College\"), a secondary educational institution in Ajmer, Rajasthan, India\n",
"People\n------\n\n* [Mayo (surname)](/wiki/Mayo_%28surname%29 \"Mayo (surname)\")\n* [Mayo (given name)](/wiki/Mayo_%28given_name%29 \"Mayo (given name)\")\n* [Mayo people](/wiki/Mayo_people \"Mayo people\"), an indigenous ethnic group in the Mexican states of Sinaloa and Sonora\n* [Meo (ethnic group)](/wiki/Meo_%28ethnic_group%29 \"Meo (ethnic group)\") or Mayo, an Indian ethnic tribe of Rajputs\n* James Mayo, pen name of [Stephen Coulter](/wiki/Stephen_Coulter \"Stephen Coulter\") (born 1913/14\\), English author\n",
"Other uses\n----------\n\n* [Short Mayo Composite](/wiki/Short_Mayo_Composite \"Short Mayo Composite\"), a piggy\\-back long\\-range seaplane/flying boat combination built by Short Brothers in the late 1930s\n* , World War II US Navy destroyer\n* [Earl of Mayo](/wiki/Earl_of_Mayo \"Earl of Mayo\"), a title in the Peerage of Ireland\n* [Viscount Mayo](/wiki/Viscount_Mayo \"Viscount Mayo\"), a title that has been created twice in the Peerage of Ireland\n* [Mayo GAA](/wiki/Mayo_GAA \"Mayo GAA\"), a county board of the Gaelic Athletic Association\n\t+ [Mayo county football team](/wiki/Mayo_county_football_team \"Mayo county football team\")\n\t+ [Mayo county hurling team](/wiki/Mayo_county_hurling_team \"Mayo county hurling team\")\n* [Mayo language](/wiki/Mayo_language \"Mayo language\"), spoken by the Mayo people\n* [*Mayo* (TV series)](/wiki/Mayo_%28TV_series%29 \"Mayo (TV series)\"), a BBC television series first broadcast in 2006\n* [\"Mayo\" (song)](/wiki/Mayo_%28song%29 \"Mayo (song)\"), by DJ Speedsta\n* [Mayo Hospital](/wiki/Mayo_Hospital \"Mayo Hospital\"), in Lahore, Pakistan\n* [Mayo Hotel](/wiki/Mayo_Hotel \"Mayo Hotel\"), Tulsa, Oklahoma, on the National Register of Historic Places\n* [Mexican American Youth Organization](/wiki/Mexican_American_Youth_Organization \"Mexican American Youth Organization\")\n* Project Mayo, an open source project by [DivX, Inc.](/wiki/DivX%2C_Inc. \"DivX, Inc.\")\n* *[Mayo v. Prometheus](/wiki/Mayo_v._Prometheus \"Mayo v. Prometheus\")*, a U.S. Supreme Court case\n* *[Avenida de Mayo](/wiki/Avenida_de_Mayo \"Avenida de Mayo\")*, an avenue in Buenos Aires, Argentina\n\n"
]
} |
Namespace | {
"id": [
48143508
],
"name": [
"Partey Lover"
]
} | qz7l4juhbqov2feju0yede061gdfo8p | 2024-08-13T05:15:10Z | 1,240,040,029 | 0 | {"title":["Introduction","Name conflicts","Solution via prefix","Naming system","Examples","Delegati(...TRUNCATED) |
Saint Casimir | {
"id": [
36671443
],
"name": [
"Manannan67"
]
} | egzch7mna1i3yhrrl9c48yqtwobnl92 | 2024-09-01T19:12:18Z | 1,242,514,924 | 0 | {"title":["Introduction","Biography","Early life and education","Hungarian campaign","Later life and(...TRUNCATED) |
Overhand | {
"id": [
8164021
],
"name": [
"Termininja"
]
} | 9ui1gtowjhoz74szujmbjuk1ljzotnn | 2020-11-14T11:21:24Z | 933,033,312 | 0 | {"title":["Introduction"],"level":[1],"content":["\n**Overhand** may refer to:\n\n* [Overhand (boxin(...TRUNCATED) |
590s BC | {
"id": [
45595768
],
"name": [
"DervotNum4"
]
} | g2k0vdf4f7qtfcckz1i9onrn2hh601o | 2024-08-25T18:35:55Z | 1,230,720,687 | 0 | {"title":["Introduction","Events and trends","Significant people","References"],"level":[1,2,2,2],"c(...TRUNCATED) |
Thornton Wilder | {
"id": [
27823944
],
"name": [
"GreenC bot"
]
} | 1s4fhsl0fh4igh0efpk5b8dlb45k8ue | 2024-09-24T20:34:34Z | 1,246,706,946 | 0 | {"title":["Introduction","Early life and education","Education","Career","Personal life","Death","Bi(...TRUNCATED) |
Delaware County | {
"id": [
1492328
],
"name": [
"RickinBaltimore"
]
} | 1vapj9azu7jfntrvmoio9zncku06ckp | 2023-10-02T18:44:10Z | 1,178,288,429 | 0 | {"title":["Delaware County"],"level":[1],"content":["**Delaware County** is the name of six counties(...TRUNCATED) |
Alkali metal | {
"id": [
10248457
],
"name": [
"Orenburg1"
]
} | bsju8looop8t43b01fumic4u0s7d30a | 2024-10-10T15:30:50Z | 1,247,867,939 | 0 | {"title":["Introduction","History","Occurrence","In the Solar System","On Earth","Properties","Physi(...TRUNCATED) |
End of preview. Expand
in Dataset Viewer.
README.md exists but content is empty.
Use the Edit dataset card button to edit it.
- Downloads last month
- 23