In Part 3 of the Netflix original series The Chilling Adventures of Sabrina, the barker of the traveling amusement park and carnival is named Carcosa, and the carnival in turn is named, presumably, after him.
Throughout the season, it becomes apparent that the workers at the carnival are all mythological beings of old, with Carcosa himself being the god Pan, his true form being that of a satyr; in the show he is understood to be the god of madness.
The arc of the season revolves partially around the carnival workers' attempts to resurrect an older deity identified as The Green Man.
Themes of madness, death and resurrection parallel the works of Robert W. Chambers et al.
In the 1988 album "Passage to Arcturo" by Rotting Christ, the song "Inside The Eye of Algond" names the Mystical Carcosa as part of the singer's journey.
In 2016 DigiTech released a Fuzz pedal called the Carcosa. |
The pedal featured two modes, named "Hali" and "Demhe." |
"Maria", a film by King Abalos, takes place in a mysterious mountain called Carcosa. |
In the Mass Effect 3 universe there is a planet named Carcosa. |
In 2001 the Belgian black metal band Ancient Rites released the album Dim Carcosa. |
The title track's lyrics consist of excerpts from "Cassilda's Song". |
In the early 2000s, a Mysterious Package Company experience called The King in Yellow was introduced, heavily inspired by the story and its title.
Later, a sequel experience was created, clearly set in the same shared universe and connected to the original The King in Yellow.
The 2019 EP "On the Shores of Hali" by Cassilda and Carcosa makes numerous references to Chambers's version of Carcosa.
Two different publishers have used the name Carcosa. |
Carcosa House was a science fiction specialty publishing firm formed in 1947 by Frederick B. Shroyer, a boyhood friend of T. E. Dikty, and two Los Angeles science fiction fans, Russell Hodgkins and Paul Skeeters. |
Shroyer had secured a copy of the original newspaper appearance of the novel "Edison's Conquest of Mars" by Garrett P. Serviss which he wished to publish. |
Shroyer talked Hodgkins and Skeeters into going in on shares to form the publisher which issued the Serviss book in 1947. |
Dikty offered advice, and William L. Crawford of F.P.C.I. helped with production and distribution.
Carcosa House announced one other book, "Enter Ghost: A Study in Weird Fiction", by Sam Russell, but due to slow sales of the Serviss book, it was never published. |
Carcosa was a specialty publishing firm formed by David Drake, Karl Edward Wagner, and Jim Groce, who were concerned that Arkham House would cease publication after the death of its founder, August Derleth. |
Carcosa was founded in North Carolina in 1973 and put out four collections of pulp horror stories, all edited by Wagner. |
Their first book was a huge omnibus volume of the best non-series weird fiction by Manly Wade Wellman. |
It was enhanced by a group of chilling illustrations by noted fantasy artist Lee Brown Coye.
Their other three volumes were also giant omnibus collections (of work by Hugh B. Cave, E. Hoffman Price, and again by Manly Wade Wellman). |
A fifth collection was planned, "Death Stalks the Night," by Hugh B. Cave; Lee Brown Coye was working on illustrating it when he suffered a crippling stroke in 1977 and eventually died, causing Carcosa to abandon the project. |
The book was eventually published by Fedogan & Bremer. |
Carcosa also had plans to issue volumes by Leigh Brackett, H. Warner Munn and Jack Williamson; however, none of the projected volumes appeared. |
The Carcosa colophon depicts the silhouette of a towered city in front of three moons. |
In 1896-7 the Carcosa mansion was built as the official residence of the Resident-General of the Federated Malay States for the first holder of that office, Sir Frank Swettenham. |
It is currently in use as a luxury hotel, the Carcosa Seri Negara. |
Swettenham took the name from The King in Yellow. |
In the Quebec-based geopolitical/live action role play game "Bicolline", Carcosa is a kingdom in the west. |
It was established upon principles of freedom and is populated by pirates, gypsies, escaped slaves, and religious exiles. |
Approximation algorithm |
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to NP-hard optimization problems with provable guarantees on the distance of the returned solution to the optimal one. |
Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. |
Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. |
The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. |
In an overwhelming majority of cases, the guarantee of such algorithms is a multiplicative one, expressed as an approximation ratio or approximation factor: the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution.
However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. |
A notable example of an approximation algorithm that provides "both" is the classic approximation algorithm of Lenstra, Shmoys and Tardos for Scheduling on Unrelated Parallel Machines. |
The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. |
This distinguishes them from heuristics such as simulated annealing or genetic algorithms, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail.
There is widespread interest in theoretical computer science in better understanding the limits to which we can approximate certain famous optimization problems.
For example, one of the long-standing open questions in computer science is to determine whether there is an algorithm that outperforms the 1.5 approximation algorithm of Christofides to the Metric Traveling Salesman Problem. |
The desire to understand hard optimization problems from the perspective of approximability is motivated by the discovery of surprising mathematical connections and broadly applicable techniques to design algorithms for hard optimization problems. |
One well-known example of the former is the Goemans-Williamson algorithm for Maximum Cut which solves a graph theoretic problem using high dimensional geometry. |
A simple example of an approximation algorithm is one for the Minimum Vertex Cover problem, where the goal is to choose the smallest set of vertices such that every edge in the input graph contains at least one chosen vertex. |
One way to find a vertex cover is to repeat the following process: find an uncovered edge, add both its endpoints to the cover, and remove all edges incident to either vertex from the graph. |
As any vertex cover of the input graph must use a distinct vertex to cover each edge considered in the process (since the chosen edges form a matching), the vertex cover produced is at most twice as large as the optimal one.
In other words, this is a constant factor approximation algorithm with an approximation factor of 2. |
Under the recent Unique Games Conjecture, this factor is even the best possible one. |
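The process just described can be sketched in a few lines (a minimal illustration, with edges given as vertex pairs):

```python
def vertex_cover_2approx(edges):
    """Greedy matching-based 2-approximation for Minimum Vertex Cover:
    repeatedly pick an uncovered edge and add both of its endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Path graph 0-1-2-3: the optimal cover {1, 2} has size 2, so the
# algorithm is guaranteed to return at most 2 * 2 = 4 vertices.
edges = [(0, 1), (1, 2), (2, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
assert len(cover) <= 4
```

Note that the picked edges form a matching by construction: an edge is only taken when neither endpoint is already in the cover.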
NP-hard problems vary greatly in their approximability; some, such as the Knapsack Problem, can be approximated within a multiplicative factor formula_1, for any fixed formula_2, and therefore produce solutions arbitrarily close to the optimum (such a family of approximation algorithms is called a polynomial time approximation scheme or PTAS). |
Others are impossible to approximate within any constant, or even polynomial, factor unless P = NP, as in the case of the Maximum Clique Problem. |
Therefore, an important benefit of studying approximation algorithms is a fine-grained classification of the difficulty of various NP-hard problems beyond the one afforded by the theory of NP-completeness. |
In other words, although NP-complete problems may be equivalent (under polynomial time reductions) to each other from the perspective of exact solutions, the corresponding optimization problems behave very differently from the perspective of approximate solutions. |
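The Knapsack guarantees mentioned above can be illustrated without the full PTAS machinery: taking the better of a density-greedy packing and the single most valuable item that fits is a classical 1/2-approximation. A minimal sketch, assuming positive item weights:

```python
def knapsack_half_approx(items, capacity):
    """Classical 1/2-approximation for 0/1 Knapsack (not the PTAS):
    return the better of (a) greedy packing by value density and
    (b) the single most valuable item that fits.
    items: list of (value, weight) pairs with positive weights."""
    # (a) greedily pack items in order of decreasing value/weight ratio
    greedy_value, remaining = 0, capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        if weight <= remaining:
            greedy_value += value
            remaining -= weight
    # (b) the most valuable single item that fits on its own
    single_value = max((v for v, w in items if w <= capacity), default=0)
    return max(greedy_value, single_value)

# Example: the optimum is 220 (values 100 and 120 fit together);
# the algorithm achieves at least half of it.
best = knapsack_half_approx([(60, 10), (100, 20), (120, 30)], 50)
assert best >= 220 / 2
```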
By now there are several established techniques to design approximation algorithms.
These include greedy methods, local search, dynamic programming, rounding of linear or semidefinite programming relaxations, and primal-dual methods.
While approximation algorithms always provide an a priori worst case guarantee (be it additive or multiplicative), in some cases they also provide an a posteriori guarantee that is often much better. |
This is often the case for algorithms that work by solving a convex relaxation of the optimization problem on the given input. |
For example, there is a different approximation algorithm for Minimum Vertex Cover that solves a linear programming relaxation to find a vertex cover that is at most twice the value of the relaxation. |
Since the value of the relaxation is never larger than the size of the optimal vertex cover, this yields another 2-approximation algorithm. |
While this is similar to the a priori guarantee of the previous approximation algorithm, the guarantee of the latter can be much better (indeed when the value of the LP relaxation is far from the size of the optimal vertex cover). |
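A sketch of this a posteriori behaviour, using a star graph whose LP relaxation has a known closed-form optimum (value 1 on the center vertex, 0 elsewhere), so that no LP solver is needed:

```python
# LP-rounding 2-approximation for Vertex Cover on a star graph
# (center 0, leaves 1..3). The optimal fractional cover is known in
# closed form here, so we state it instead of calling an LP solver.
edges = [(0, 1), (0, 2), (0, 3)]
x = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}   # optimal LP solution
lp_value = sum(x.values())              # 1.0, a lower bound on OPT

cover = {v for v, xv in x.items() if xv >= 0.5}   # round up x_v >= 1/2
assert all(u in cover or v in cover for u, v in edges)

# A posteriori guarantee: |cover| / lp_value bounds the ratio achieved
# on this particular instance. Here it is 1.0, certifying that the
# cover is optimal, even though the a priori guarantee is only 2.
ratio = len(cover) / lp_value
```

Rounding every variable with value at least 1/2 covers each edge because at least one endpoint of every edge must have fractional value 1/2 or more, and it at most doubles the LP value.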
Approximation algorithms as a research area is closely related to and informed by inapproximability theory where the non-existence of efficient algorithms with certain approximation ratios is proved (conditioned on widely believed hypotheses such as the P ≠ NP conjecture) by means of reductions. |
In the case of the Metric Traveling Salesman Problem, the best known inapproximability result rules out algorithms with an approximation ratio less than 123/122 ≈ 1.008196 unless P = NP (Karpinski, Lampis, and Schmied).
Coupled with the knowledge of the existence of Christofides' 1.5 approximation algorithm, this tells us that the threshold of approximability for Metric Traveling Salesman (if it exists) is somewhere between 123/122 and 1.5. |
While inapproximability results have been proved since the 1970s, such results were obtained by ad-hoc means and no systematic understanding was available at the time. |
It is only since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set and the famous PCP theorem, that modern tools for proving inapproximability results were uncovered. |
The PCP theorem, for example, shows that Johnson's 1974 approximation algorithms for Max SAT, Set Cover, Independent Set and Coloring all achieve the optimal approximation ratio, assuming P ≠ NP. |
Not all approximation algorithms are suitable for direct practical applications. |
Some involve solving non-trivial linear programming/semidefinite relaxations (which may themselves invoke the ellipsoid algorithm), complex data structures, or sophisticated algorithmic techniques, leading to implementation difficulties, or to improved running time over exact algorithms only on impractically large inputs.
Implementation and running time issues aside, the guarantees provided by approximation algorithms may themselves not be strong enough to justify their consideration in practice. |
Despite their inability to be used "out of the box" in practical applications, the ideas and insights behind the design of such algorithms can often be incorporated in other ways in practical algorithms. |
In this way, the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights. |
In other cases, even if the initial results are of purely theoretical interest, over time, with an improved understanding, the algorithms may be refined to become more practical. |
One such example is the initial PTAS for Euclidean TSP by Sanjeev Arora (and independently by Joseph Mitchell) which had a prohibitive running time of formula_3 for a formula_4 approximation. |
Yet, within a year these ideas were incorporated into a near-linear time formula_5 algorithm for any constant formula_2. |
For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. |
For example, a "ρ"-approximation algorithm "A" is defined to be an algorithm for which it has been proven that the value/cost, "f"("x"), of the approximate solution "A"("x") to an instance "x" will not be more (or less, depending on the situation) than a factor "ρ" times the value, OPT, of an optimum solution. |
The factor "ρ" is called the "relative performance guarantee". |
An approximation algorithm has an "absolute performance guarantee" or "bounded error" "c", if it has been proven for every instance "x" that (OPT − "c") ≤ "f"("x") ≤ (OPT + "c").
Similarly, the "performance guarantee", "R"("x,y"), of a solution "y" to an instance "x" is defined as "R"("x,y") = max(OPT / "f"("y"), "f"("y") / OPT),
where "f"("y") is the value/cost of the solution "y" for the instance "x". |
Clearly, the performance guarantee is greater than or equal to 1 and equal to 1 if and only if "y" is an optimal solution. |
If an algorithm "A" guarantees to return solutions with a performance guarantee of at most "r"("n"), then "A" is said to be an "r"("n")-approximation algorithm and has an "approximation ratio" of "r"("n"). |
Likewise, a problem with an "r"("n")-approximation algorithm is said to be "r"("n")-"approximable" or have an approximation ratio of "r"("n").
For minimization problems, the two different guarantees provide the same result, while for maximization problems a relative performance guarantee of ρ is equivalent to a performance guarantee of formula_10.
In the literature, both definitions are common, but it is clear which definition is used since, for maximization problems, ρ ≤ 1 while r ≥ 1.
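Assuming the standard definition "R"("x,y") = max("f"("y")/OPT, OPT/"f"("y")), the guarantee can be computed directly; a minimal sketch:

```python
def performance_guarantee(f_y, opt):
    """Performance guarantee R(x, y) = max(f(y)/OPT, OPT/f(y)).

    Taking the max makes the ratio >= 1 for both minimization
    problems (where f(y) >= OPT) and maximization problems
    (where f(y) <= OPT), with equality iff y is optimal."""
    return max(f_y / opt, opt / f_y)

# Minimization: a returned cost of 12 against an optimum of 10.
assert performance_guarantee(12, 10) == 1.2
# Maximization: a returned value of 8 against an optimum of 10.
assert performance_guarantee(8, 10) == 1.25
```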
The "absolute performance guarantee" formula_11 of some approximation algorithm "A", where "x" refers to an instance of a problem, and where formula_12 is the performance guarantee of "A" on "x" (i.e. ρ for problem instance "x") is:
That is to say that formula_11 is the largest bound on the approximation ratio, "r", that one sees over all possible instances of the problem. |
Likewise, the "asymptotic performance ratio" formula_15 is: |
That is to say that it is the same as the "absolute performance ratio", with a lower bound "n" on the size of problem instances. |
These two types of ratios are used because there exist algorithms where the difference between these two is significant. |
In the literature, an approximation ratio for a maximization (minimization) problem of "c" - ϵ (min: "c" + ϵ) means that the algorithm has an approximation ratio of "c" ∓ ϵ for arbitrary ϵ > 0 but that the ratio has not been (or cannot be) shown for ϵ = 0.