Astronomy, Space, Space Exploration, Universe, Aliens.

The universe, assuming it is expanding, will reach infinite 'dimensions'. Source: abyss.uoregon.edu

So, assuming an infinite universe, why are we not able to find life outside of Earth? There are a few conditions a place must meet to be declared habitable, and we should take at least two of them into consideration while searching for a new Earth. First, there should be a magnetic field protecting against the harmful space radiation of the future parent star, for example the solar wind, a continuous stream of high-speed, high-energy particles streaming out of the Sun. Secondly, life will depend on the composition of the atmosphere, where oxygen particles are the only indication of future or past life. Where to look, then? There are a few new-Earths-to-be.

# Enceladus

Enceladus is the sixth largest moon of Saturn. It is about 500 kilometres in diameter, about a tenth that of Saturn's largest moon, Titan. It is covered mostly with fresh, clean ice, with the consequence that its noon temperature is around −198 °C. So why should we focus on this satellite? In 2005, NASA's Cassini spacecraft photographed geysers of frozen water spewing from cracks in Enceladus' southern hemisphere. Scientists think reservoirs of liquid water lie beneath the frozen surface and are warmed by gravitational interactions between Enceladus and other moons of Saturn. So one of the most important conditions for life is checked. Source: NASA/JPL/Space Science Institute, space.com, Seth Shostak comment

# Mars

Are there actually Martians? What is interesting about Mars are the dark stripes that appear in the Martian summertime at Horowitz crater. These are likely to be salty meltwater only inches beneath Mars' dusty epidermis. Scientists think that, in the past, water may have flowed across the surface in rivers and streams, and that vast oceans covered the planet; today Mars is dry and barren, with most of its water locked up in the polar ice caps. Source: Science/AAAS, Seth Shostak comment

# Europa

Europa is the smallest of the four Galilean moons orbiting Jupiter. It is primarily made of silicate rock and has a water-ice crust, an iron-nickel core, and an atmosphere containing oxygen. The amount of water on Europa is thought to exceed the amount of water on Earth. The downside is that Europa's vast, salty seas lie beneath roughly 10 miles of ice. Not only is it difficult to get a probe beneath this icy armour, but Europa's oceans are darker than a cave, which means photosynthesis won't work. Source: NASA/JPL-Caltech/SETI Institute

# Globular Star Clusters

Globular clusters are buckets full of stars, hundreds of thousands to millions of them, held together by gravity. The clusters date back to the early life of the Milky Way, nearly 10 billion years ago, unlike the Sun, which dates back 5 billion years. The biggest chance of finding a habitable spot is in the outermost ring of a cluster. The big problem is the instability of any possible planet: with two parent stars so close together, gravity will probably pull the planet out of its orbit.
Puffer Finance, Restaking, Eigenlayer, Ethereum Decentralization.

Puffers, it's a big day: our 🧛 has officially started! With this first transaction batch, $13.2M in stETH deposited into Puffer has been redeemed for ETH. Source: Etherscan

In the upcoming weeks, all stETH deposits in our protocol will be converted into ETH to fund our permissionless validators on Puffer's Mainnet and take another step toward a decentralized Ethereum.

Why 🧛?

The impetus for this campaign stems from a critical observation: a significant portion of Ethereum's staked ETH is concentrated under a single Liquid Staking Token (LST), stETH (🥕). Holding approximately 28.69% of all staked ETH, this concentration presents not only a theoretical risk but a tangible threat. Cartelization, transaction censorship, and manipulative extraction of MEV are just a few of the potential dangers. Such a scenario could undermine the integrity and security of the entire Ethereum network.

From its inception, Puffer has been an ardent advocate for the decentralization of Ethereum. The recent Crunchy Carrot Quest successfully amassed $1.36 billion in stETH deposits, which corresponds to approximately 4.48% of the entire stETH supply. By converting this stETH into ETH, Puffer will effectively decrease stETH's dominance over staked ETH on the beacon chain by approximately 1%. With the official launch of our Mainnet, these efforts herald the dawn of Puffer's ambitious journey to safeguard the decentralization of the Ethereum ecosystem.

How the 🧛's Gonna Work

The mechanics of the 🧛 have been meticulously designed to ensure seamless execution, keeping user funds secure while transforming stETH holdings into pristine ETH. As a Puffer staker, you don't have to do anything; all the work will be handled by our own Puffer dev team. Here's a step-by-step breakdown of how Puffer intends to execute this bold move:

1. Initiation of Withdrawals: Puffer will begin by queuing up significant amounts of stETH withdrawals from EigenLayer, strategically breaking them down into manageable chunks (e.g., ~450K stETH divided by 45).
2. Waiting Period: After initiating the withdrawals, there will be a compulsory waiting period of approximately one week. This allows the system to process and prepare the 45 chunks for final withdrawal. During this phase, stETH deposits will stop accruing EigenLayer Points.
3. Redemption: After the waiting period, Puffer will start redeeming these stETH chunks for ETH. The initial phase will involve withdrawing a few chunks at a time.
4. Claiming the Underlying ETH: Redeeming the chunks requires a waiting period of around three days. After this, Puffer can claim the underlying ETH, which will then be utilized to run Puffer's permissionless validators.
5. Repetition and Earnings: Steps 3 and 4 will be repeated for the remaining chunks. It's crucial to note that stETH continues to earn EigenLayer Points right up until it enters the withdrawal queue of step 2, so staking rewards keep accruing, albeit with a slight pause during the actual withdrawal.

This meticulous process is our commitment in action: a pledge to uphold the decentralized vision of Ethereum and to ensure a secure environment for all Ethereum enthusiasts for decades to come. Happy 🧛!

About Puffer

Puffer is the first native Liquid Restaking Protocol (nLRP) built on EigenLayer. It introduces native Liquid Restaking Tokens (nLRTs) that accrue PoS and restaking rewards. Nodes within the protocol leverage Puffer's anti-slashing technology to enjoy reduced risk and increased capital efficiency, while supercharging their rewards through
Haskell, Genetic Algorithm, Graph Theory, Programming.

Lately, I've been working on a side project that became a fun exercise in both graph theory and genetic algorithms. This is the story of that experience.

Beginnings

I recently took a break from my job to recover my excitement and sense of wonder about the world. During this break, I ended up building a board game. In this game, players navigate a maze built out of maze pieces on hexagon tiles, so that when these tiles are shuffled and connected at random, they build a complete maze, on a grid like this:

I wasn't at all sure that I could build a very good maze by the random placement of tiles. Still, I figured that I could wing it and see what happened, so I did.

Attempt #1: Doing the Simplest Thing

Mazes are almost the ideal application of graph theory. A graph (and here I always mean an undirected graph) is a bunch of vertices connected by edges. A maze, on the other hand, is a bunch of locations connected by paths. Same thing, different words! In this case, my original idea was that the hexagon tiles were the locations (i.e., vertices), and each of the six sides of a tile would either be open or blocked, potentially making a path to the adjacent tile. The question, then, was which of these edges should be blocked. I was guided by two concerns:

1. It should be possible to solve the maze. That is, there should be (at least with high probability) an open path from the start to the finish.
2. It should not be trivial to solve the maze. That is, it should not be the case that moving in any direction at all gets you from the start to the finish.

A key insight here is that adding an edge to a graph always does one of two things: it either connects two previously unconnected components of the graph, or it creates a cycle (a path from a vertex back to itself without repeating an edge) by connecting two previously connected vertices in a second way. The two concerns, then, say that we want to do the first whenever possible, but avoid the second. Unfortunately, since our edges are pointing in random directions, it's not clear how we can distinguish
between the two! But, at the very least, we can try to get about the right number of edges into our graph. We want the graph to be acyclic (i.e., to contain no cycles), but connected. Such a graph is called a tree. We'll use the following easy fact several times:

Proposition: In any acyclic graph, the number of edges is always equal to the number of vertices minus the number of connected components. (Proof: By induction on the number of edges. If there are no edges, then each vertex is its own connected component. If there are edges, removing any edge keeps the graph acyclic, so the equality holds; re-adding that edge then adds one edge and removes one connected component, preserving the equality.)

Corollary: A tree has one fewer edge than it has vertices. (Proof: The entire tree is one connected component. Apply the proposition above.)

So, I thought to myself, I have twenty hex tiles, because that's how many blank tiles came in the box. If I treat each of them as a vertex, then I need 19 connections between them.

There's one further complication. In order for a path to be open, it must not be blocked on either side, and the outside boundary of the maze is always blocked. So not every open edge of a tile adds an edge to the graph. This is easily solved with a slight change of perspective: instead of counting the number of edges, one can work with the desired probability of an edge being open. In my chosen configuration of tiles, there were 43 internal borders between tiles, not counting the outer boundary of the whole maze. So I wanted about a 19/43 (or about 44%) probability of an open edge on each of these borders. Because both tiles need an open side to make that edge, if each side is chosen independently, the probability of a tile side being open should be the square root of that, or about 66%. Since each of the 20 tiles has 6 edges on each of its 2 sides, there are 240 tile edges, and a random 160 of them (that's 240 * sqrt(19/43)) should be open, with the remaining 80 blocked. You can see a simulation of this here. (This simple simulation doesn't choose tiles.)

Unfortunately, this experiment failed. If you try it several times, you'll notice:

1. The mazes are just too simple. Twenty spaces in the maze are just not enough.
2. It's far too frequent that the open edges are in the wrong places, and you cannot get from start to finish.

Now, it's not imperative that a maze be acyclic, so I could (and, in practice, would) increase the probability of an open edge to solve the second problem... but that would just make the first problem even worse. There is no choice of probability that fixes both problems at once.

Attempt #2: More Interesting Tiles
To be honest, I expected this first attempt to fail. I never even drew the tiles; I just simulated enough to be confident that it would not work. Clearly, I needed more structure within the map tiles.

One thing that would have made the result look more compelling would be to draw interesting (but static) mazes onto each tile, each connected internally, with exits in all directions with the appropriate probabilities. This is all smoke and mirrors, though. Ultimately, it reduces to the same system as above: with a sufficiently increased probability of an open edge, this would satisfy my requirements.

Faking a better maze with static tiles

But it feels like cheating. I want the high-level structure of the maze to be interesting in its own right. And no matter how clever I get with the tiles, the previous section showed that's not possible in this model. The remaining option, then, is to consider that each tile, instead of being one location with a number of exits, actually contains multiple locations, with varied connectivity. Since I don't want to add artificial complexity inside the tiles, they will simply connect their edges as desired. Some of the tiles now look like this:

Tiles with (potentially) multiple locations per tile

At first glance, the analysis for connecting the maze becomes trickier, because the number of vertices in the graph changes! Three of the tiles above have two locations, but the last one has only one. There are even tiles with three different paths. But never fear: another change of perspective simplifies the matter. Instead of considering the graph where each tile or path is a vertex, we will consider the dual graph, where each border is a vertex, like this:

The dual of the maze graph

Now we once again have a fixed number of vertices (77 of them), and tiles that determine the edges between those vertices. To connect this graph, then, requires at least 76 edges spread across 20 tiles, or about 3.8 edges per tile. Maybe even a few more, since the resulting graph is sure to contain some cycles after all, so let's call it 4 per tile.

Here's the mistake I made when I first considered this graph. The last tile in the set of four examples above connects five different vertices together, so that it's possible to pass directly between any pair of them. Naively, you might think that corresponds to 10 edges: one for each pair of vertices that are connected. That fit my intuition that connecting five of the six sides should be way above average, as far as connections on a tile go. I thought this, and I worked out a full set of tiles on this assumption, before realizing that the resulting mazes were not nearly connected enough.

Here's the problem: those 10 edges contain a rather large number of cycles. The requirement for at least 76 edges was intended for an acyclic graph. We will inevitably end up with some cycles between tiles, but we should at least avoid counting cycles within a tile. To
connect those five vertices with an acyclic graph requires four edges rather than ten. It doesn't matter to us which four edges those are; they exist only in theory, and don't change how the tile is drawn. But it's important to count them as only four.

Now, 76 feels like an exorbitant number of edges! If we must connect, on average, nearly five of the six sides of each tile, it seems it will be possible to move almost anywhere. But on further reflection, this isn't unreasonable. Nearly half of those vertices are on the outside boundary of the maze! All it takes is one blocked path in an unfortunate place to isolate one of those vertices, and even a one-in-six chance will isolate about six of them. The edges that are not being used to connect those vertices will then go toward adding excess paths among the remaining pieces!

The fact that vertices on the boundary of the graph are so likely to be isolated is a problem. I decided to mitigate it creatively, by adding some of those 76 edges not on the tiles, but on the board itself, something like this:

Exterior edges on the maze

With these 14 edges on the exterior of the maze (remember that connecting three vertices only counts as two edges), we need only 64 on the interior, which is barely more than 3 per tile. Not only that, but the tiles on the edge are considerably less likely to be isolated in the resulting maze.

At this point, I drew a set of tiles and iterated on them for a bit by laying out mazes and getting a sense of when they were too easy, too hard, etc. I ended up with a mechanic that I was mostly happy with. Here is the actual game board with a random maze laid out. The layout is a little different from the examples above because of the need to work in other gameplay elements, but the idea is essentially the same.

The actual game board

If you look closely, you'll see that the resulting maze isn't entirely connected: there are two short disconnected sections on the bottom-left and bottom-right tiles. It definitely contains cycles, both because the disconnected components free up edges that cause cycles, and because I've included more than the minimum number of edges needed to connect the maze anyway. But it is non-trivial, visually interesting, and generally in pretty good shape. It appears fairly easy when laid out in full like this, but in the actual game, players only uncover tiles when they reach them, and the experience is much like navigating a real maze, where you're not sure where a trail will lead until you follow it. All in all, a success.

Attempt #3: Using Genetic Algorithms

I wasn't unhappy with this attempt, but I suspected that I could do better with some
computer simulation. A little bit of poking around turned up a Haskell library called moo, which provides the building blocks for genetic algorithms. The idea behind genetic algorithms is simple: you begin with an initial population of answers, which are sometimes randomly generated, and evaluate them according to a fitness function. You take the best answers in that set, and then expand it by generating random mutations and crosses of that population. This provides a new population, which is evaluated, mutated, cross-bred, and so on.

Let's start with some imports:

```haskell
import Algebra.Graph.AdjacencyMap.Algorithm (scc)
import Algebra.Graph.ToGraph
import qualified Algebra.Graph.Undirected as U
import Control.Monad
import Data.Containers.ListUtils
import Data.Function (on)
import Data.List
import Moo.GeneticAlgorithm.Binary
import System.IO.Unsafe
import System.Random
import System.Random.Shuffle
```

And some types:

```haskell
data Direction = N | NE | SE | S | SW | NW
  deriving (Show, Eq, Ord, Enum)

type Trail = [Direction]

data Tile = Tile [Trail] [Trail] deriving (Show, Eq)
```

A Direction is one of the six cardinal directions in a hex world, arranged in clockwise order. A Trail is a single trail from one of the tile designs. It's represented as a list of the directions from which one can exit the tile by that trail; everything beyond that is just visual design. And finally, a Tile has a top and a bottom side, and each of those has some list of Trails.

The moo library requires some way to encode and decode values as bit strings, which play the role of DNA in the genetic algorithm. The framework will make somewhat arbitrary modifications to these bit strings, and you're expected to be able to read the modified strings and get reasonable values. In fact, it's expected that small modifications to a bit string will yield similarly small changes to the encoded value.

Here's what I did. Each side of a tile is encoded into six numbers, each of which identifies the trail connected to one direction of the tile. If a tile does not have a trail leaving in some direction, then that direction gets a unique trail number, so essentially it becomes a dead end. Assuming we don't want a trail with only a single non-branching path (and we don't!), it suffices to leave room for four trail numbers, which requires only two bits in the string. A side then needs 12 bits, a two-sided tile needs 24 bits, and a full set of 20 tiles needs 480 bits. Here's the code to encode a stack of tiles this way, starting from the earlier representation:

```haskell
encodeTile :: Tile -> [Bool]
encodeTile (Tile a b) =
  concat
    [ encodeBinary (0 :: Int, 3) (trailNum side dir)
    | side <- [a, b]
    , dir <- [N .. NW]
    ]
  where
    trailNum trails d =
      case [i | (t, i) <- zip trails [0 .. 3], d `elem` t] of
        [i] -> i
        _ -> error "Duplicate or missing dir on tile"

encodeTiles :: [Tile] -> [Bool]
encodeTiles = concatMap encodeTile
```

The encodeBinary function is part of moo, and just encodes a number in binary as a list of Bools, given a range. The rest of this just matches trails with trail numbers and concatenates their encodings.

Decoding is a little more complex, mainly because we want to do some normalization that will come in handy down the road. A helper function first:

```haskell
clockwise :: Direction -> Direction
clockwise N = NE
clockwise NE = SE
clockwise SE = S
clockwise S = SW
clockwise SW = NW
clockwise NW = N
```

Now the decoding:

```haskell
decodeTile :: [Bool] -> Tile
decodeTile bs = Tile (postproc a) (postproc b)
  where
    trailNums = map (decodeBinary (0 :: Int, 3)) (splitEvery trailSize bs)
    trail nums n = [d | (d, i) <- zip [N .. NW] nums, i == n]
    a = map (trail (take 6 trailNums)) [0 .. 3]
    b = map (trail (drop 6 trailNums)) [0 .. 3]

postproc :: [Trail] -> [Trail]
postproc = normalize . filter (not . null)
  where
    normalize trails =
      minimum
        [ simplify ts
        | ts <- take 6 (iterate (map (map clockwise)) trails)
        ]
    simplify = sort . map sort

decodeTiles :: [Bool] -> [Tile]
decodeTiles = map decodeTile . splitEvery tileSize

trailSize :: Int
trailSize = bitsNeeded (0 :: Int, 3)

tileSize :: Int
tileSize = 12 * trailSize
```

Again, some of these functions (such as decodeBinary, bitsNeeded, and splitEvery) are utility functions provided by moo. The postproc function applies a bit of normalization to the tile design. Since reordering the exits of a trail, reordering the trails themselves, or rotating the tile does not change its meaning, we simply make a consistent choice here, so that equal tiles will compare equal.
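Since each trail number fits in two bits, a side's encoding is exactly 12 bits, and decoding inverts encoding. Here is a small, dependency-free sketch of that round trip; toBits and fromBits are simplified stand-ins for moo's encodeBinary and decodeBinary (assumed fixed two-bit, most-significant-first fields), not the library functions themselves:

```haskell
-- Sketch only: simplified stand-ins for moo's binary encoding utilities.

-- Encode one trail number (0..3) as two bits, most significant first.
toBits :: Int -> [Bool]
toBits n = [odd (n `div` 2), odd n]

-- Decode two bits back into a trail number.
fromBits :: [Bool] -> Int
fromBits [hi, lo] = 2 * fromEnum hi + fromEnum lo
fromBits _ = error "expected exactly two bits"

-- One side of a tile: six trail numbers become 6 * 2 = 12 bits.
encodeSide :: [Int] -> [Bool]
encodeSide = concatMap toBits

-- And back again, two bits at a time.
decodeSide :: [Bool] -> [Int]
decodeSide [] = []
decodeSide (hi : lo : rest) = fromBits [hi, lo] : decodeSide rest
decodeSide _ = error "expected an even number of bits"
```

For any side whose trail numbers are all in the range 0..3, decodeSide (encodeSide side) == side, mirroring the 12-bits-per-side, 24-bits-per-tile arithmetic above.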
Haskell, Genetic Algorithm, Graph Theory, Programming. The next thing we need is a way to score a set of tiles, so the best can be chosen. From the beginning, we know that the scoring will involve shuffling the tiles, which actually includes three kinds of randomization: Moving the tiles to different places on the board. Rotating the tiles to a random
medium
7,250
Haskell, Genetic Algorithm, Graph Theory, Programming. orientation. Flipping the tiles so either side is equally likely to be visible. Here’s that code: rotateTile :: Int -> Tile -> Tile rotateTile n | n > 0 = rotateTile (n - 1) . rot | n < 0 = rotateTile (n + 6) | otherwise = id where rot (Tile top bottom) = Tile (map (map clockwise) top) bottom
medium
7,251
Haskell, Genetic Algorithm, Graph Theory, Programming. flipTile :: Tile -> Tile flipTile (Tile top bottom) = Tile bottom top randomizeTile :: RandomGen g => g -> Tile -> (g, Tile) randomizeTile g t | flipped = (g3, rotateTile rots (flipTile t)) | otherwise = (g3, rotateTile rots t) where (flipped, g2) = random g (rots, g3) = randomR (0, 5) g2
medium
7,252
Haskell, Genetic Algorithm, Graph Theory, Programming. shuffleTiles :: RandomGen g => g -> [Tile] -> (g, [Tile]) shuffleTiles g tiles = (g3, result) where (g1, g2) = split g shuffledTiles = shuffle' tiles (length tiles) g1 (g3, result) = mapAccumL randomizeTile g2 shuffledTiles In order to score the random arrangements of tiles, it’s useful to have a
medium
7,253
Haskell, Genetic Algorithm, Graph Theory, Programming. graph library. In this case, I chose Alga for the task, mainly because it allows for graphs with an arbitrary node type (which I want here), it seems to be garnering some excitement, and I haven’t had a chance to play with it yet. Alga represents undirected graphs with a different Graph type
medium
7,254
Haskell, Genetic Algorithm, Graph Theory, Programming. (annoyingly called the same thing) in a subpackage, hence the qualified import. In order to share actual code, I’m switching to now work with the real game board, from the photo above. There are certain fixed locations that I care about because players will need to reach them in the game: the
medium
7,255
Haskell, Genetic Algorithm, Graph Theory, Programming. witch’s hut, the wishing well, the monster’s lair, the orchard, the spring, and the exit. These get their own named nodes. The tiles are named “1” through “20”. And finally, because each tile can have multiple trails, a node consists of a name and a trail number (which is always 0 for the built-in
medium
7,256
Haskell, Genetic Algorithm, Graph Theory, Programming. locations). Here’s the code to build a graph from a list of tiles: topSide :: Tile -> [[Direction]] topSide (Tile top _) = top tileGraph :: [Tile] -> U.Graph (String, Int) tileGraph tiles = U.edges $ [ ((show a, trailNum a dira), (show b, trailNum b dirb)) | (a, dira, b, dirb) <- connections ] ++ [
medium
7,257
Haskell, Genetic Algorithm, Graph Theory, Programming. (("Well", 0), (show c, trailNum c dir)) | (c, dir) <- [ (6, SE), (7, NE), (10, S), (11, N), (14, SW), (15, NW) ] ] ++ [ (("Hut", 0), (show c, trailNum c dir)) | (c, dir) <- [(1, NE), (5, N), (9, NW)] ] ++ [ (("Spring", 0), (show c, trailNum c dir)) | (c, dir) <- [(4, S), (8, SW)] ] ++ [
medium
7,258
Haskell, Genetic Algorithm, Graph Theory, Programming. (("Orchard", 0), (show c, trailNum c dir)) | (c, dir) <- [(13, NE), (17, N)] ] ++ [ (("Lair", 0), (show c, trailNum c dir)) | (c, dir) <- [(12, SE), (16, S), (20, SW)] ] ++ [ (("Exit", 0), (show c, trailNum c dir)) | (c, dir) <- [(19, SE), (20, NE), (20, SE)] ] where trailNum n dir = head [ i |
medium
7,259
```haskell
      (exits, i) <- zip (topSide (tiles !! (n - 1))) [0 ..],
      dir `elem` exits
  ]

connections =
  [ (1, S, 2, N), (1, SW, 2, NW), (1, SE, 5, NW),
    (2, NE, 5, SW), (2, SE, 6, NW), (2, S, 3, N), (2, SW, 3, NW),
    (3, NE, 6, SW), (3, SE, 7, NW), (3, S, 4, N), (3, SW, 4, NW),
    (4, NE, 7, SW), (4, SE, 8, NW),
    (5, NE, 9, SW), (5, SE, 10, NW), (5, S, 6, N),
    (6, NE, 10, SW), (6, S, 7, N),
    (7, SE, 11, NW), (7, S, 8, N),
    (8, NE, 11, SW), (8, SE, 12, NW), (8, S, 12, SW),
    (9, NE, 13, N), (9, SE, 13, NW), (9, S, 10, N),
    (10, NE, 13, SW), (10, SE, 14, NW),
    (11, NE, 15, SW), (11, SE, 16, NW), (11, S, 12, N),
    (12, NE, 16, SW),
    (13, SE, 17, NW), (13, S, 14, N),
    (14, NE, 17, SW), (14, SE, 18, NW), (14, S, 15, N),
    (15, NE, 18, SW), (15, SE, 19, NW), (15, S, 16, N),
    (16, NE, 19, SW), (16, SE, 20, NW),
    (17, SE, 18, NE), (17, S, 18, N),
    (18, SE, 19, NE), (18, S, 19, N),
    (19, S, 20, N)
  ]
```

Tedious, but it works! I can now score one of these graphs. There are two kinds of things I care about: the probability of being able to get between any two of the built-in locations, and the number of “excess” edges that create multiple paths.

```haskell
hasPath :: String -> String -> U.Graph (String, Int) -> Bool
hasPath a b g = (b, 0) `elem` reachable (a, 0 :: Int) (U.fromUndirected g)

extraEdges :: Ord a => U.Graph a -> Int
extraEdges g = edges - (vertices - components)
  where
    vertices = U.vertexCount g
    edges = U.edgeCount g
    components = vertexCount (scc (toAdjacencyMap (U.fromUndirected g)))

scoreGraph :: U.Graph (String, Int) -> [Double]
scoreGraph g =
  [ -0.1 * fromIntegral (extraEdges g),
    if hasPath "Hut" "Well" g then 1.0 else 0.0,
    if hasPath "Hut" "Spring" g then 1.0 else 0.0,
    if hasPath "Hut" "Lair" g then 1.0 else 0.0,
    if hasPath "Hut" "Orchard" g then 1.0 else 0.0,
    if hasPath "Hut" "Exit" g then 1.0 else 0.0,
    if hasPath "Well" "Spring" g then 1.0 else 0.0,
    if hasPath "Well" "Lair" g then 1.0 else 0.0,
    if hasPath "Well" "Orchard" g then 1.0 else 0.0,
    if hasPath "Well" "Exit" g then 1.0 else 0.0,
    if hasPath "Spring" "Lair" g then 1.0 else 0.0,
    if hasPath "Spring" "Orchard" g then 1.0 else 0.0,
    if hasPath "Spring" "Exit" g then 1.0 else 0.0,
    if hasPath "Lair" "Orchard" g then 1.0 else 0.0,
    if hasPath "Lair" "Exit" g then 1.0 else 0.0,
    if hasPath "Orchard" "Exit" g then 1.0 else 0.0
  ]
```

The result of scoring is a list of individual scores, which will be added together to determine the overall fitness.
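The extraEdges formula deserves a word: a forest on V vertices with C connected components has exactly V − C edges, so edges − (vertices − components) counts precisely the edges that create cycles, i.e. redundant paths between locations. Here is a stand-alone sketch of the same count with a hand-rolled component search instead of the algebraic-graphs machinery above (the names `components` and `extraEdges'` are mine, for illustration only, not from the game code):

```haskell
import Data.List (nub)

-- Toy component count: repeatedly grow a reachable set from an
-- unvisited vertex until no vertices remain.
components :: [Int] -> [(Int, Int)] -> Int
components vs es = go vs
  where
    go [] = 0
    go (v : rest) = 1 + go (filter (`notElem` grow [v]) rest)
    grow seen =
      let next = nub [ y | (x, y) <- es ++ map swap es
                         , x `elem` seen, y `notElem` seen ]
       in if null next then seen else grow (seen ++ next)
    swap (a, b) = (b, a)

-- Same formula as extraEdges: edges beyond a spanning forest.
extraEdges' :: [Int] -> [(Int, Int)] -> Int
extraEdges' vs es = length es - (length vs - components vs es)
```

A triangle scores 1 (one redundant edge), a path scores 0, and two disjoint edges also score 0 — only cycles are penalized, disconnection is not, which is why reachability gets its own separate scores.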
I’ve added a low negative weight to the extra edges, in order to express the fact that they are bad, but far less so than not being able to reach an important game location. (Unreachability isn’t fatal, though, since the game also has mechanisms by which the maze changes over time… that’s my backup plan.)

Now I just need to score the tiles themselves, by generating a bunch of random game board graphs and averaging the scores for those graphs. When I did so, though, I found that there were other things that went wrong with the optimized tiles:

- The algorithm reused the same designs over and over again, making the game less random. To fix this, I added a small cost associated with the sum of the squares of the number of each unique tile design. (This is why it was convenient to normalize the tiles, so I could find duplicates.) Some duplicates are okay, but by the time we get to four or five of the same design, adding more duplicates becomes very costly.
- The algorithm generated a lot of tiles that need bridges. Like salt, bridges make the map more interesting in small quantities, such as the five bridges in the photo above, out of 20 tiles in all. But I was getting tile sets that needed bridges on nearly every tile, and sometimes even stacks of them! To fix this, I added a small cost for each bridge in the final tile set.
- The tiles included a tile with all six directions connected to each other. For aesthetic reasons, I wanted the wishing well to be the only location on the map with that level of connectivity. So I added a large penalty for using that specific tile design.

Here’s the resulting code.

```haskell
needsBridge :: [Trail] -> Bool
needsBridge trails = or [conflicts a b | a <- trails, b <- trails, a /= b]
  where
    conflicts t1 t2 =
      or [d > minimum t1 && d < maximum t1 | d <- t2]
        && or [d > minimum t2 && d < maximum t2 | d <- t1]

numBridges :: [Tile] -> Int
numBridges tiles =
  length [() | Tile a _ <- tiles ++ map flipTile tiles, needsBridge a]

dupScore :: [Tile] -> Int
dupScore tiles = sum (map ((^ 2) . length) (group (sort sides)))
  where
    sides = map topSide tiles ++ map (topSide . flipTile) tiles

numFullyConnected :: [Tile] -> Int
numFullyConnected tiles =
  length [() | Tile a b <- tiles, length a == 1 || length b == 1]

scoreTiles :: RandomGen g => g -> Int -> [Tile] -> [Double]
scoreTiles g n tiles =
  [ -0.05 * fromIntegral (numBridges tiles),
    -0.02 * fromIntegral (dupScore tiles),
    -1 * fromIntegral (numFullyConnected tiles)
  ]
    ++ map (/ fromIntegral n) graphScores
  where
    (graphScores, _) = foldl' next (repeat 0, g) (replicate n tiles)
    next (soFar, g1) tiles =
      let (g2, shuffled) = shuffleTiles g1 tiles
       in (zipWith (+) soFar (scoreGraph (tileGraph shuffled)), g2)
```

That’s basically all the pieces. From here, I simply ask the moo library to run the optimization. I was too lazy to figure out how to pipe random number generation through moo, so I cheated with an unsafePerformIO there. I realize that means I’m kicked out of