code (string, length 5..1.03M) | repo_name (string, length 5..90) | path (string, length 4..158) | license (string, 15 classes) | size (int64, 5..1.03M) | n_ast_errors (int64, 0..53.9k) | ast_max_depth (int64, 2..4.17k) | n_whitespaces (int64, 0..365k) | n_ast_nodes (int64, 3..317k) | n_ast_terminals (int64, 1..171k) | n_ast_nonterminals (int64, 1..146k) | loc (int64, -1..37.3k) | cyclomatic_complexity (int64, -1..1.31k)
---|---|---|---|---|---|---|---|---|---|---|---|---|
{-# LANGUAGE Trustworthy #-}
{-# LANGUAGE CPP #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE TypeSynonymInstances #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE MagicHash #-}
-----------------------------------------------------------------------------
-- |
-- Module : GHC.Generics
-- Copyright : (c) Universiteit Utrecht 2010-2011, University of Oxford 2012-2013
-- License : see libraries/base/LICENSE
--
-- Maintainer : libraries@haskell.org
-- Stability : internal
-- Portability : non-portable
--
-- @since 4.6.0.0
--
-- If you're using @GHC.Generics@, you should consider using the
-- <http://hackage.haskell.org/package/generic-deriving> package, which
-- contains many useful generic functions.
module GHC.Generics (
-- * Introduction
--
-- |
--
-- Datatype-generic functions are based on the idea of converting values of
-- a datatype @T@ into corresponding values of a (nearly) isomorphic type @'Rep' T@.
-- The type @'Rep' T@ is
-- built from a limited set of type constructors, all provided by this module. A
-- datatype-generic function is then an overloaded function with instances
-- for most of these type constructors, together with a wrapper that performs
-- the mapping between @T@ and @'Rep' T@. By using this technique, we merely need
-- a few generic instances in order to implement functionality that works for any
-- representable type.
--
-- Representable types are collected in the 'Generic' class, which defines the
-- associated type 'Rep' as well as conversion functions 'from' and 'to'.
-- Typically, you will not define 'Generic' instances by hand, but have the compiler
-- derive them for you.
-- ** Representing datatypes
--
-- |
--
-- The key to defining your own datatype-generic functions is to understand how to
-- represent datatypes using the given set of type constructors.
--
-- Let us look at an example first:
--
-- @
-- data Tree a = Leaf a | Node (Tree a) (Tree a)
-- deriving 'Generic'
-- @
--
-- The above declaration (which requires the language pragma @DeriveGeneric@)
-- causes the following representation to be generated:
--
-- @
-- instance 'Generic' (Tree a) where
-- type 'Rep' (Tree a) =
-- 'D1' D1Tree
-- ('C1' C1_0Tree
-- ('S1' 'NoSelector' ('Par0' a))
-- ':+:'
-- 'C1' C1_1Tree
-- ('S1' 'NoSelector' ('Rec0' (Tree a))
-- ':*:'
-- 'S1' 'NoSelector' ('Rec0' (Tree a))))
-- ...
-- @
--
-- /Hint:/ You can obtain information about the code being generated from GHC by passing
-- the @-ddump-deriv@ flag. In GHCi, you can expand a type family such as 'Rep' using
-- the @:kind!@ command.
--
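-- For example, a (hypothetical) GHCi session might look like this, modulo
-- the exact output format of your GHC version:
--
-- @
-- ghci> :kind! 'Rep' Bool
-- 'Rep' Bool :: * -> *
-- = 'D1' D1Bool ('C1' C1_0Bool 'U1' ':+:' 'C1' C1_1Bool 'U1')
-- @
--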
#if 0
-- /TODO:/ Newer GHC versions abandon the distinction between 'Par0' and 'Rec0' and will
-- use 'Rec0' everywhere.
--
#endif
-- This is a lot of information! However, most of it is actually merely meta-information
-- that makes the names of datatypes, constructors, and the like available on the type level.
--
-- Here is a reduced representation for 'Tree' with nearly all meta-information removed,
-- for now keeping only the most essential aspects:
--
-- @
-- instance 'Generic' (Tree a) where
-- type 'Rep' (Tree a) =
-- 'Par0' a
-- ':+:'
-- ('Rec0' (Tree a) ':*:' 'Rec0' (Tree a))
-- @
--
-- The @Tree@ datatype has two constructors. The representation of individual constructors
-- is combined using the binary type constructor ':+:'.
--
-- The first constructor consists of a single field, which is the parameter @a@. This is
-- represented as @'Par0' a@.
--
-- The second constructor consists of two fields. Each is a recursive field of type @Tree a@,
-- represented as @'Rec0' (Tree a)@. Representations of individual fields are combined using
-- the binary type constructor ':*:'.
--
-- Now let us explain the additional tags being used in the complete representation:
--
-- * The @'S1' 'NoSelector'@ indicates that there is no record field selector associated with
-- this field of the constructor.
--
-- * The @'C1' C1_0Tree@ and @'C1' C1_1Tree@ invocations indicate that the enclosed part is
-- the representation of the first and second constructor of datatype @Tree@, respectively.
-- Here, @C1_0Tree@ and @C1_1Tree@ are datatypes generated by the compiler as part of
-- @deriving 'Generic'@. These datatypes are proxy types with no values. They are useful
-- because they are instances of the type class 'Constructor'. This type class can be used
-- to obtain information about the constructor in question, such as its name
-- or infix priority.
--
-- * The @'D1' D1Tree@ tag indicates that the enclosed part is the representation of the
-- datatype @Tree@. Again, @D1Tree@ is a datatype generated by the compiler. It is a
-- proxy type, and is useful by being an instance of class 'Datatype', which
-- can be used to obtain the name of a datatype, the module it has been defined in, and
-- whether it has been defined using @data@ or @newtype@.
-- ** Derived and fundamental representation types
--
-- |
--
-- There are many datatype-generic functions that do not distinguish between positions that
-- are parameters or positions that are recursive calls. There are also many datatype-generic
-- functions that do not care about the names of datatypes and constructors at all. To keep
-- the number of cases to consider in generic functions in such a situation to a minimum,
-- it turns out that many of the type constructors introduced above are actually synonyms
-- for variants of a smaller set of constructors.
-- *** Individual fields of constructors: 'K1'
--
-- |
--
-- The type constructors 'Par0' and 'Rec0' are variants of 'K1':
--
-- @
-- type 'Par0' = 'K1' 'P'
-- type 'Rec0' = 'K1' 'R'
-- @
--
-- Here, 'P' and 'R' are type-level proxies again that do not have any associated values.
-- *** Meta information: 'M1'
--
-- |
--
-- The type constructors 'S1', 'C1' and 'D1' are all variants of 'M1':
--
-- @
-- type 'S1' = 'M1' 'S'
-- type 'C1' = 'M1' 'C'
-- type 'D1' = 'M1' 'D'
-- @
--
-- The types 'S', 'C' and 'D' are once again type-level proxies, just used to create
-- several variants of 'M1'.
-- *** Additional generic representation type constructors
--
-- |
--
-- Next to 'K1', 'M1', ':+:' and ':*:' there are a few more type constructors that occur
-- in the representations of other datatypes.
-- **** Empty datatypes: 'V1'
--
-- |
--
-- For empty datatypes, 'V1' is used as a representation. For example,
--
-- @
-- data Empty deriving 'Generic'
-- @
--
-- yields
--
-- @
-- instance 'Generic' Empty where
-- type 'Rep' Empty = 'D1' D1Empty 'V1'
-- @
-- **** Constructors without fields: 'U1'
--
-- |
--
-- If a constructor has no arguments, then 'U1' is used as its representation. For example
-- the representation of 'Bool' is
--
-- @
-- instance 'Generic' Bool where
-- type 'Rep' Bool =
-- 'D1' D1Bool
-- ('C1' C1_0Bool 'U1' ':+:' 'C1' C1_1Bool 'U1')
-- @
-- *** Representation of types with many constructors or many fields
--
-- |
--
-- As ':+:' and ':*:' are just binary operators, one might ask what happens if the
-- datatype has more than two constructors, or a constructor with more than two
-- fields. The answer is simple: the operators are used several times, to combine
-- all the constructors and fields as needed. However, users /should not rely on
-- a specific nesting strategy/ for ':+:' and ':*:' being used. The compiler is
-- free to choose any nesting it prefers. (In practice, the current implementation
-- tries to produce a more or less balanced nesting, so that the traversal of the
-- structure of the datatype from the root to a particular component can be performed
-- in logarithmic rather than linear time.)
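--
-- For example, a (hypothetical) datatype
--
-- @
-- data T = A | B | C deriving 'Generic'
-- @
--
-- may be represented (ignoring meta-information) either as
-- @('U1' ':+:' 'U1') ':+:' 'U1'@ or as @'U1' ':+:' ('U1' ':+:' 'U1')@,
-- depending on the nesting the compiler chooses.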
-- ** Defining datatype-generic functions
--
-- |
--
-- A datatype-generic function comprises two parts:
--
-- 1. /Generic instances/ for the function, implementing it for most of the representation
-- type constructors introduced above.
--
-- 2. A /wrapper/ that, for any datatype in 'Generic', performs the conversion
-- between the original value and its 'Rep'-based representation and then invokes the
-- generic instances.
--
-- As an example, let us look at a function 'encode' that produces a naive, but lossless
-- bit encoding of values of various datatypes. So we are aiming to define a function
--
-- @
-- encode :: 'Generic' a => a -> [Bool]
-- @
--
-- where we use 'Bool' as our datatype for bits.
--
-- For part 1, we define a class @Encode'@. Perhaps surprisingly, this class is parameterized
-- over a type constructor @f@ of kind @* -> *@. This is a technicality: all the representation
-- type constructors operate with kind @* -> *@ as base kind. But the type argument is never
-- used. This may be changed at some point in the future. The class has a single method,
-- and we use the type we want our final function to have, but we replace the occurrences of
-- the generic type argument @a@ with @f p@ (where the @p@ is any argument; it will not be used).
--
-- > class Encode' f where
-- > encode' :: f p -> [Bool]
--
-- With the goal in mind to make @encode@ work on @Tree@ and other datatypes, we now define
-- instances for the representation type constructors 'V1', 'U1', ':+:', ':*:', 'K1', and 'M1'.
-- *** Definition of the generic representation types
--
-- |
--
-- In order to be able to do this, we need to know the actual definitions of these types:
--
-- @
-- data 'V1' p -- lifted version of Empty
-- data 'U1' p = 'U1' -- lifted version of ()
-- data (':+:') f g p = 'L1' (f p) | 'R1' (g p) -- lifted version of 'Either'
-- data (':*:') f g p = (f p) ':*:' (g p) -- lifted version of (,)
-- newtype 'K1' i c p = 'K1' { 'unK1' :: c } -- a container for a c
-- newtype 'M1' i t f p = 'M1' { 'unM1' :: f p } -- a wrapper
-- @
--
-- So, 'U1' is just the unit type, ':+:' is just a binary choice like 'Either',
-- ':*:' is a binary pair like the pair constructor @(,)@, and 'K1' is a value
-- of a specific type @c@, and 'M1' wraps a value of the generic type argument,
-- which in the lifted world is an @f p@ (where we do not care about @p@).
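--
-- For example (ignoring the 'M1' meta-information wrappers for a moment),
-- @'from' (Leaf x)@ is essentially @'L1' ('K1' x)@, and @'from' (Node l r)@
-- is essentially @'R1' ('K1' l ':*:' 'K1' r)@.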
-- *** Generic instances
--
-- |
--
-- The instance for 'V1' is slightly awkward (but also rarely used):
--
-- @
-- instance Encode' 'V1' where
-- encode' x = undefined
-- @
--
-- There are no values of type @V1 p@ to pass (except undefined), so this is
-- actually impossible. One might ask why it is useful to define an instance for
-- 'V1' at all in this case. Well, an empty type can be used as an argument to
-- a non-empty type, and you might still want to encode the resulting type.
-- As a somewhat contrived example, consider @[Empty]@, which is not an empty
-- type, but contains just the empty list. The 'V1' instance ensures that we
-- can call the generic function on such types.
--
-- There is exactly one value of type 'U1', so encoding it requires no
-- knowledge, and we can use zero bits:
--
-- @
-- instance Encode' 'U1' where
-- encode' 'U1' = []
-- @
--
-- In the case for ':+:', we produce 'False' or 'True' depending on whether
-- the constructor of the value provided is located on the left or on the right:
--
-- @
-- instance (Encode' f, Encode' g) => Encode' (f ':+:' g) where
-- encode' ('L1' x) = False : encode' x
-- encode' ('R1' x) = True : encode' x
-- @
--
-- In the case for ':*:', we append the encodings of the two subcomponents:
--
-- @
-- instance (Encode' f, Encode' g) => Encode' (f ':*:' g) where
-- encode' (x ':*:' y) = encode' x ++ encode' y
-- @
--
-- The case for 'K1' is rather interesting. Here, we call the final function
-- 'encode' that we yet have to define, recursively. We will use another type
-- class 'Encode' for that function:
--
-- @
-- instance (Encode c) => Encode' ('K1' i c) where
-- encode' ('K1' x) = encode x
-- @
--
-- Note how 'Par0' and 'Rec0' both being mapped to 'K1' allows us to define
-- a uniform instance here.
--
-- Similarly, we can define a uniform instance for 'M1', because we completely
-- disregard all meta-information:
--
-- @
-- instance (Encode' f) => Encode' ('M1' i t f) where
-- encode' ('M1' x) = encode' x
-- @
--
-- Unlike in 'K1', the instance for 'M1' refers to 'encode'', not 'encode'.
-- *** The wrapper and generic default
--
-- |
--
-- We now define class 'Encode' for the actual 'encode' function:
--
-- @
-- class Encode a where
-- encode :: a -> [Bool]
-- default encode :: ('Generic' a) => a -> [Bool]
-- encode x = encode' ('from' x)
-- @
--
-- The incoming 'x' is converted using 'from', then we dispatch to the
-- generic instances using 'encode''. We use this as a default definition
-- for 'encode'. We need the 'default encode' signature because ordinary
-- Haskell default methods must not introduce additional class constraints,
-- but our generic default does.
--
-- Defining a particular instance is now as simple as saying
--
-- @
-- instance (Encode a) => Encode (Tree a)
-- @
--
#if 0
-- /TODO:/ Add usage example?
--
#endif
-- The generic default is being used. In the future, it will hopefully be
-- possible to use @deriving Encode@ as well, but GHC does not yet support
-- that syntax for this situation.
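--
-- For example, if we additionally declare @instance Encode Bool@ (again
-- relying on the generic default), we would expect
--
-- @
-- encode (Leaf True)                     == [False,True]
-- encode (Node (Leaf False) (Leaf True)) == [True,False,False,False,True]
-- @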
--
-- Having 'Encode' as a class has the advantage that we can define
-- non-generic special cases, which is particularly useful for abstract
-- datatypes that have no structural representation. For example, given
-- a suitable integer encoding function 'encodeInt', we can define
--
-- @
-- instance Encode Int where
-- encode = encodeInt
-- @
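--
-- One possible (hypothetical) definition of @encodeInt@ is a fixed-width
-- encoding in terms of 'Data.Bits.testBit' and 'Data.Bits.finiteBitSize':
--
-- @
-- encodeInt :: Int -> [Bool]
-- encodeInt n = map (testBit n) [0 .. finiteBitSize n - 1]
-- @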
-- *** Omitting generic instances
--
-- |
--
-- It is not always required to provide instances for all the generic
-- representation types, but omitting instances restricts the set of
-- datatypes the functions will work for:
--
-- * If no ':+:' instance is given, the function may still work for
-- empty datatypes or datatypes that have a single constructor,
-- but will fail on datatypes with more than one constructor.
--
-- * If no ':*:' instance is given, the function may still work for
-- datatypes where each constructor has just zero or one field,
-- in particular for enumeration types.
--
-- * If no 'K1' instance is given, the function may still work for
-- enumeration types, where no constructor has any fields.
--
-- * If no 'V1' instance is given, the function may still work for
-- any datatype that is not empty.
--
-- * If no 'U1' instance is given, the function may still work for
-- any datatype where each constructor has at least one field.
--
-- An 'M1' instance is always required (but it can just ignore the
-- meta-information, as is the case for 'encode' above).
#if 0
-- *** Using meta-information
--
-- |
--
-- TODO
#endif
-- ** Generic constructor classes
--
-- |
--
-- Datatype-generic functions as defined above work for a large class
-- of datatypes, including parameterized datatypes. (We have used 'Tree'
-- as our example above, which is of kind @* -> *@.) However, the
-- 'Generic' class ranges over types of kind @*@, and therefore, the
-- resulting generic functions (such as 'encode') must be parameterized
-- by a generic type argument of kind @*@.
--
-- What if we want to define generic classes that range over type
-- constructors (such as 'Functor', 'Traversable', or 'Foldable')?
-- *** The 'Generic1' class
--
-- |
--
-- Like 'Generic', there is a class 'Generic1' that defines a
-- representation 'Rep1' and conversion functions 'from1' and 'to1',
-- only that 'Generic1' ranges over types of kind @* -> *@.
-- The 'Generic1' class is also derivable.
--
-- The representation 'Rep1' is ever so slightly different from 'Rep'.
-- Let us look at 'Tree' as an example again:
--
-- @
-- data Tree a = Leaf a | Node (Tree a) (Tree a)
-- deriving 'Generic1'
-- @
--
-- The above declaration causes the following representation to be generated:
--
-- @
-- instance 'Generic1' Tree where
-- type 'Rep1' Tree =
-- 'D1' D1Tree
-- ('C1' C1_0Tree
-- ('S1' 'NoSelector' 'Par1')
-- ':+:'
-- 'C1' C1_1Tree
-- ('S1' 'NoSelector' ('Rec1' Tree)
-- ':*:'
-- 'S1' 'NoSelector' ('Rec1' Tree)))
-- ...
-- @
--
-- The representation reuses 'D1', 'C1', 'S1' (and thereby 'M1') as well
-- as ':+:' and ':*:' from 'Rep'. (This reusability is the reason that we
-- carry around the dummy type argument for kind-@*@-types, but there are
-- already enough different names involved without duplicating each of
-- these.)
--
-- What's different is that we now use 'Par1' to refer to the parameter
-- (and that parameter, which used to be @a@, is not mentioned explicitly
-- by name anywhere); and we use 'Rec1' to refer to a recursive use of @Tree a@.
-- *** Representation of @* -> *@ types
--
-- |
--
-- Unlike 'Par0' and 'Rec0', the 'Par1' and 'Rec1' type constructors do not
-- map to 'K1'. They are defined directly, as follows:
--
-- @
-- newtype 'Par1' p = 'Par1' { 'unPar1' :: p } -- gives access to parameter p
-- newtype 'Rec1' f p = 'Rec1' { 'unRec1' :: f p } -- a wrapper
-- @
--
-- In 'Par1', the parameter @p@ is used for the first time, whereas 'Rec1' simply
-- wraps an application of @f@ to @p@.
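--
-- For example (ignoring the 'M1' wrappers once more), @'from1' (Leaf x)@ is
-- essentially @'L1' ('Par1' x)@, and @'from1' (Node l r)@ is essentially
-- @'R1' ('Rec1' l ':*:' 'Rec1' r)@.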
--
-- Note that 'K1' (in the guise of 'Rec0') can still occur in a 'Rep1' representation,
-- namely when the datatype has a field that does not mention the parameter.
--
-- The declaration
--
-- @
-- data WithInt a = WithInt Int a
-- deriving 'Generic1'
-- @
--
-- yields
--
-- @
-- instance 'Generic1' WithInt where
-- type 'Rep1' WithInt =
-- 'D1' D1WithInt
-- ('C1' C1_0WithInt
-- ('S1' 'NoSelector' ('Rec0' Int)
-- ':*:'
-- 'S1' 'NoSelector' 'Par1'))
-- @
--
-- If the parameter @a@ appears underneath a composition of other type constructors,
-- then the representation involves composition, too:
--
-- @
-- data Rose a = Fork a [Rose a]
-- @
--
-- yields
--
-- @
-- instance 'Generic1' Rose where
-- type 'Rep1' Rose =
-- 'D1' D1Rose
-- ('C1' C1_0Rose
-- ('S1' 'NoSelector' 'Par1'
-- ':*:'
-- 'S1' 'NoSelector' ([] ':.:' 'Rec1' Rose)))
-- @
--
-- where
--
-- @
-- newtype (':.:') f g p = 'Comp1' { 'unComp1' :: f (g p) }
-- @
-- *** Representation of unlifted types
--
-- |
--
-- If one were to attempt to derive a Generic instance for a datatype with an
-- unlifted argument (for example, 'Int#'), one might expect the occurrence of
-- the 'Int#' argument to be marked with @'Rec0' 'Int#'@. This won't work,
-- though, since 'Int#' is of kind @#@ and 'Rec0' expects a type of kind @*@.
-- In fact, polymorphism over unlifted types is disallowed completely.
--
-- One solution would be to represent an occurrence of 'Int#' with @'Rec0' Int@
-- instead. With this approach, however, the programmer has no way of knowing
-- whether the 'Int' is actually an 'Int#' in disguise.
--
-- Instead of reusing 'Rec0', a separate data family 'URec' is used to mark
-- occurrences of common unlifted types:
--
-- @
-- data family URec a p
--
-- data instance 'URec' ('Ptr' ()) p = 'UAddr' { 'uAddr#' :: 'Addr#' }
-- data instance 'URec' 'Char' p = 'UChar' { 'uChar#' :: 'Char#' }
-- data instance 'URec' 'Double' p = 'UDouble' { 'uDouble#' :: 'Double#' }
-- data instance 'URec' 'Float' p = 'UFloat' { 'uFloat#' :: 'Float#' }
-- data instance 'URec' 'Int' p = 'UInt' { 'uInt#' :: 'Int#' }
-- data instance 'URec' 'Word' p = 'UWord' { 'uWord#' :: 'Word#' }
-- @
--
-- Several type synonyms are provided for convenience:
--
-- @
-- type 'UAddr' = 'URec' ('Ptr' ())
-- type 'UChar' = 'URec' 'Char'
-- type 'UDouble' = 'URec' 'Double'
-- type 'UFloat' = 'URec' 'Float'
-- type 'UInt' = 'URec' 'Int'
-- type 'UWord' = 'URec' 'Word'
-- @
--
-- The declaration
--
-- @
-- data IntHash = IntHash Int#
-- deriving 'Generic'
-- @
--
-- yields
--
-- @
-- instance 'Generic' IntHash where
-- type 'Rep' IntHash =
-- 'D1' D1IntHash
-- ('C1' C1_0IntHash
-- ('S1' 'NoSelector' 'UInt'))
-- @
--
-- Currently, only the six unlifted types listed above are generated, but this
-- may be extended to encompass more unlifted types in the future.
#if 0
-- *** Limitations
--
-- |
--
-- /TODO/
--
-- /TODO:/ Also clear up confusion about 'Rec0' and 'Rec1' not really indicating recursion.
--
#endif
-----------------------------------------------------------------------------
-- * Generic representation types
V1, U1(..), Par1(..), Rec1(..), K1(..), M1(..)
, (:+:)(..), (:*:)(..), (:.:)(..)
-- ** Unboxed representation types
, URec(..)
, type UAddr, type UChar, type UDouble
, type UFloat, type UInt, type UWord
-- ** Synonyms for convenience
, Rec0, Par0, R, P
, D1, C1, S1, D, C, S
-- * Meta-information
, Datatype(..), Constructor(..), Selector(..), NoSelector
, Fixity(..), Associativity(..), Arity(..), prec
-- * Generic type classes
, Generic(..), Generic1(..)
) where
-- We use some base types
import GHC.Prim ( Addr#, Char#, Double#, Float#, Int#, Word# )
import GHC.Ptr ( Ptr )
import GHC.Types
import Data.Maybe ( Maybe(..) )
import Data.Either ( Either(..) )
-- Needed for instances
import GHC.Classes ( Eq, Ord )
import GHC.Read ( Read )
import GHC.Show ( Show )
import Data.Proxy
--------------------------------------------------------------------------------
-- Representation types
--------------------------------------------------------------------------------
-- | Void: used for datatypes without constructors
data V1 (p :: *)
-- | Unit: used for constructors without arguments
data U1 (p :: *) = U1
deriving (Eq, Ord, Read, Show, Generic)
-- | Used for marking occurrences of the parameter
newtype Par1 p = Par1 { unPar1 :: p }
deriving (Eq, Ord, Read, Show, Generic)
-- | Recursive calls of kind * -> *
newtype Rec1 f (p :: *) = Rec1 { unRec1 :: f p }
deriving (Eq, Ord, Read, Show, Generic)
-- | Constants, additional parameters and recursion of kind *
newtype K1 (i :: *) c (p :: *) = K1 { unK1 :: c }
deriving (Eq, Ord, Read, Show, Generic)
-- | Meta-information (constructor names, etc.)
newtype M1 (i :: *) (c :: *) f (p :: *) = M1 { unM1 :: f p }
deriving (Eq, Ord, Read, Show, Generic)
-- | Sums: encode choice between constructors
infixr 5 :+:
data (:+:) f g (p :: *) = L1 (f p) | R1 (g p)
deriving (Eq, Ord, Read, Show, Generic)
-- | Products: encode multiple arguments to constructors
infixr 6 :*:
data (:*:) f g (p :: *) = f p :*: g p
deriving (Eq, Ord, Read, Show, Generic)
-- | Composition of functors
infixr 7 :.:
newtype (:.:) f (g :: * -> *) (p :: *) = Comp1 { unComp1 :: f (g p) }
deriving (Eq, Ord, Read, Show, Generic)
-- | Constants of kind @#@
data family URec (a :: *) (p :: *)
-- | Used for marking occurrences of 'Addr#'
data instance URec (Ptr ()) p = UAddr { uAddr# :: Addr# }
deriving (Eq, Ord, Generic)
-- | Used for marking occurrences of 'Char#'
data instance URec Char p = UChar { uChar# :: Char# }
deriving (Eq, Ord, Show, Generic)
-- | Used for marking occurrences of 'Double#'
data instance URec Double p = UDouble { uDouble# :: Double# }
deriving (Eq, Ord, Show, Generic)
-- | Used for marking occurrences of 'Float#'
data instance URec Float p = UFloat { uFloat# :: Float# }
deriving (Eq, Ord, Show, Generic)
-- | Used for marking occurrences of 'Int#'
data instance URec Int p = UInt { uInt# :: Int# }
deriving (Eq, Ord, Show, Generic)
-- | Used for marking occurrences of 'Word#'
data instance URec Word p = UWord { uWord# :: Word# }
deriving (Eq, Ord, Show, Generic)
-- | Type synonym for 'URec': 'Addr#'
type UAddr = URec (Ptr ())
-- | Type synonym for 'URec': 'Char#'
type UChar = URec Char
-- | Type synonym for 'URec': 'Double#'
type UDouble = URec Double
-- | Type synonym for 'URec': 'Float#'
type UFloat = URec Float
-- | Type synonym for 'URec': 'Int#'
type UInt = URec Int
-- | Type synonym for 'URec': 'Word#'
type UWord = URec Word
-- | Tag for K1: recursion (of kind *)
data R
-- | Tag for K1: parameters (other than the last)
data P
-- | Type synonym for encoding recursion (of kind *)
type Rec0 = K1 R
-- | Type synonym for encoding parameters (other than the last)
type Par0 = K1 P
{-# DEPRECATED Par0 "'Par0' is no longer used; use 'Rec0' instead" #-} -- deprecated in 7.6
{-# DEPRECATED P "'P' is no longer used; use 'R' instead" #-} -- deprecated in 7.6
-- | Tag for M1: datatype
data D
-- | Tag for M1: constructor
data C
-- | Tag for M1: record selector
data S
-- | Type synonym for encoding meta-information for datatypes
type D1 = M1 D
-- | Type synonym for encoding meta-information for constructors
type C1 = M1 C
-- | Type synonym for encoding meta-information for record selectors
type S1 = M1 S
-- | Class for datatypes that represent datatypes
class Datatype (d :: *) where
-- | The name of the datatype (unqualified)
datatypeName :: t d (f :: * -> *) (a :: *) -> [Char]
-- | The fully-qualified name of the module where the type is declared
moduleName :: t d (f :: * -> *) (a :: *) -> [Char]
-- | The package name of the module where the type is declared
packageName :: t d (f :: * -> *) (a :: *) -> [Char]
-- | Marks if the datatype is actually a newtype
isNewtype :: t d (f :: * -> *) (a :: *) -> Bool
isNewtype _ = False
-- | Class for datatypes that represent records
class Selector (s :: *) where
-- | The name of the selector
selName :: t s (f :: * -> *) (a :: *) -> [Char]
-- | Used for constructor fields without a name
data NoSelector
instance Selector NoSelector where selName _ = ""
-- | Class for datatypes that represent data constructors
class Constructor (c :: *) where
-- | The name of the constructor
conName :: t c (f :: * -> *) (a :: *) -> [Char]
-- | The fixity of the constructor
conFixity :: t c (f :: * -> *) (a :: *) -> Fixity
conFixity _ = Prefix
-- | Marks if this constructor is a record
conIsRecord :: t c (f :: * -> *) (a :: *) -> Bool
conIsRecord _ = False
-- | Datatype to represent the arity of a tuple.
data Arity = NoArity | Arity Int
deriving (Eq, Show, Ord, Read, Generic)
-- | Datatype to represent the fixity of a constructor. An infix
-- declaration directly corresponds to an application of 'Infix'.
data Fixity = Prefix | Infix Associativity Int
deriving (Eq, Show, Ord, Read, Generic)
-- | Get the precedence of a fixity value.
prec :: Fixity -> Int
prec Prefix = 10
prec (Infix _ n) = n
-- | Datatype to represent the associativity of a constructor
data Associativity = LeftAssociative
| RightAssociative
| NotAssociative
deriving (Eq, Show, Ord, Read, Generic)
-- | Representable types of kind *.
-- This class is derivable in GHC with the DeriveGeneric flag on.
class Generic a where
-- | Generic representation type
type Rep a :: * -> *
-- | Convert from the datatype to its representation
from :: a -> (Rep a) x
-- | Convert from the representation to the datatype
to :: (Rep a) x -> a
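-- Note: 'from' and 'to' are expected to be mutually inverse (@to . from = id@
-- and @from . to = id@); this is the sense in which @a@ and @'Rep' a@ are
-- (nearly) isomorphic.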
-- | Representable types of kind * -> *.
-- This class is derivable in GHC with the DeriveGeneric flag on.
class Generic1 f where
-- | Generic representation type
type Rep1 f :: * -> *
-- | Convert from the datatype to its representation
from1 :: f a -> (Rep1 f) a
-- | Convert from the representation to the datatype
to1 :: (Rep1 f) a -> f a
--------------------------------------------------------------------------------
-- Derived instances
--------------------------------------------------------------------------------
deriving instance Generic [a]
deriving instance Generic (Maybe a)
deriving instance Generic (Either a b)
deriving instance Generic Bool
deriving instance Generic Ordering
deriving instance Generic ()
deriving instance Generic ((,) a b)
deriving instance Generic ((,,) a b c)
deriving instance Generic ((,,,) a b c d)
deriving instance Generic ((,,,,) a b c d e)
deriving instance Generic ((,,,,,) a b c d e f)
deriving instance Generic ((,,,,,,) a b c d e f g)
deriving instance Generic1 []
deriving instance Generic1 Maybe
deriving instance Generic1 (Either a)
deriving instance Generic1 ((,) a)
deriving instance Generic1 ((,,) a b)
deriving instance Generic1 ((,,,) a b c)
deriving instance Generic1 ((,,,,) a b c d)
deriving instance Generic1 ((,,,,,) a b c d e)
deriving instance Generic1 ((,,,,,,) a b c d e f)
--------------------------------------------------------------------------------
-- Primitive representations
--------------------------------------------------------------------------------
-- Int
data D_Int
data C_Int
instance Datatype D_Int where
datatypeName _ = "Int"
moduleName _ = "GHC.Int"
packageName _ = "base"
instance Constructor C_Int where
conName _ = "" -- JPM: I'm not sure this is the right implementation...
instance Generic Int where
type Rep Int = D1 D_Int (C1 C_Int (S1 NoSelector (Rec0 Int)))
from x = M1 (M1 (M1 (K1 x)))
to (M1 (M1 (M1 (K1 x)))) = x
-- Float
data D_Float
data C_Float
instance Datatype D_Float where
datatypeName _ = "Float"
moduleName _ = "GHC.Float"
packageName _ = "base"
instance Constructor C_Float where
conName _ = "" -- JPM: I'm not sure this is the right implementation...
instance Generic Float where
type Rep Float = D1 D_Float (C1 C_Float (S1 NoSelector (Rec0 Float)))
from x = M1 (M1 (M1 (K1 x)))
to (M1 (M1 (M1 (K1 x)))) = x
-- Double
data D_Double
data C_Double
instance Datatype D_Double where
datatypeName _ = "Double"
moduleName _ = "GHC.Float"
packageName _ = "base"
instance Constructor C_Double where
conName _ = "" -- JPM: I'm not sure this is the right implementation...
instance Generic Double where
type Rep Double = D1 D_Double (C1 C_Double (S1 NoSelector (Rec0 Double)))
from x = M1 (M1 (M1 (K1 x)))
to (M1 (M1 (M1 (K1 x)))) = x
-- Char
data D_Char
data C_Char
instance Datatype D_Char where
datatypeName _ = "Char"
moduleName _ = "GHC.Base"
packageName _ = "base"
instance Constructor C_Char where
conName _ = "" -- JPM: I'm not sure this is the right implementation...
instance Generic Char where
type Rep Char = D1 D_Char (C1 C_Char (S1 NoSelector (Rec0 Char)))
from x = M1 (M1 (M1 (K1 x)))
to (M1 (M1 (M1 (K1 x)))) = x
deriving instance Generic (Proxy t)
| ml9951/ghc | libraries/base/GHC/Generics.hs | bsd-3-clause | 30,581 | 0 | 14 | 6,424 | 3,476 | 2,243 | 1,233 | -1 | -1 |
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveTraversable #-}
{-# LANGUAGE DeriveFoldable #-}
-- | See <https://github.com/ezyang/ghc-proposals/blob/backpack/proposals/0000-backpack.rst>
module Distribution.Backpack.ModuleScope (
-- * Module scopes
ModuleScope(..),
ModuleProvides,
ModuleRequires,
ModuleSource(..),
dispModuleSource,
WithSource(..),
unWithSource,
getSource,
ModuleWithSource,
emptyModuleScope,
) where
import Prelude ()
import Distribution.Compat.Prelude
import Distribution.ModuleName
import Distribution.Types.IncludeRenaming
import Distribution.Types.PackageName
import Distribution.Types.ComponentName
import Distribution.Backpack
import Distribution.Backpack.ModSubst
import Distribution.Text
import qualified Data.Map as Map
import Text.PrettyPrint
-----------------------------------------------------------------------
-- Module scopes
-- Why is ModuleProvides so complicated? The basic problem is that
-- we want to support this:
--
-- package p where
-- include q (A)
-- include r (A)
-- module B where
-- import "q" A
-- import "r" A
--
-- Specifically, in Cabal today it is NOT an error to have two modules in
-- scope with the same identifier. So we need to preserve this for
-- Backpack. The modification is that an ambiguous module name is
-- OK... as long as it is NOT used to fill a requirement!
--
-- So as a first try, we might try deferring unifying provisions that
-- are being glommed together, and check for equality after the fact.
-- But this doesn't work, because what if a multi-module provision
-- is used to fill a requirement?! So you do the equality test
-- IMMEDIATELY before a requirement fill happens... or never at all.
--
-- Alternate strategy: go ahead and unify, and then if it is revealed
-- that some requirements got filled "out-of-thin-air", error.
-- | A 'ModuleScope' describes the modules and requirements that
-- are in-scope as we are processing a Cabal package. Unlike
-- a 'ModuleShape', there may be multiple modules in scope at
-- the same 'ModuleName'; this is only an error if we attempt
-- to use those modules to fill a requirement. A 'ModuleScope'
-- can influence the 'ModuleShape' via a reexport.
data ModuleScope = ModuleScope {
modScopeProvides :: ModuleProvides,
modScopeRequires :: ModuleRequires
}
-- | An empty 'ModuleScope'.
emptyModuleScope :: ModuleScope
emptyModuleScope = ModuleScope Map.empty Map.empty
-- | Every 'Module' in scope at a 'ModuleName' is annotated with
-- the 'PackageName' it comes from.
type ModuleProvides = Map ModuleName [ModuleWithSource]
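-- For example, in the @include q (A)@ / @include r (A)@ situation sketched
-- above, the provision map sends @A@ to two 'ModuleWithSource' entries, one
-- per include.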
-- | INVARIANT: entries for ModuleName @m@ have their underlying module equal to @OpenModuleVar m@
type ModuleRequires = Map ModuleName [ModuleWithSource]
-- TODO: consider newtyping the two types above.
-- | Description of where a module participating in mixin linking came
-- from.
data ModuleSource
= FromMixins PackageName ComponentName IncludeRenaming
| FromBuildDepends PackageName ComponentName
| FromExposedModules ModuleName
| FromOtherModules ModuleName
| FromSignatures ModuleName
-- We don't have line numbers, but if we did, we'd want to record that
-- too
-- TODO: Deduplicate this with Distribution.Backpack.UnifyM.ci_msg
dispModuleSource :: ModuleSource -> Doc
dispModuleSource (FromMixins pn cn incls)
= text "mixins:" <+> dispComponent pn cn <+> disp incls
dispModuleSource (FromBuildDepends pn cn)
= text "build-depends:" <+> dispComponent pn cn
dispModuleSource (FromExposedModules m)
= text "exposed-modules:" <+> disp m
dispModuleSource (FromOtherModules m)
= text "other-modules:" <+> disp m
dispModuleSource (FromSignatures m)
= text "signatures:" <+> disp m
-- Dependency
dispComponent :: PackageName -> ComponentName -> Doc
dispComponent pn cn =
-- NB: This syntax isn't quite the source syntax, but it
-- should be clear enough. To do source syntax, we'd
-- need to know what the package we're linking is.
case cn of
CLibName -> disp pn
CSubLibName ucn -> disp pn <<>> colon <<>> disp ucn
-- Case below shouldn't happen
_ -> disp pn <+> parens (disp cn)
-- | An 'OpenModule', annotated with where it came from in a Cabal file.
data WithSource a = WithSource ModuleSource a
deriving (Functor, Foldable, Traversable)
unWithSource :: WithSource a -> a
unWithSource (WithSource _ x) = x
getSource :: WithSource a -> ModuleSource
getSource (WithSource s _) = s
type ModuleWithSource = WithSource OpenModule
instance ModSubst a => ModSubst (WithSource a) where
modSubst subst (WithSource s m) = WithSource s (modSubst subst m)
| mydaum/cabal | Cabal/Distribution/Backpack/ModuleScope.hs | bsd-3-clause | 4,689 | 0 | 11 | 857 | 642 | 368 | 274 | 64 | 3 |
{-# LANGUAGE Trustworthy #-}
{-# LANGUAGE NoImplicitPrelude, UnboxedTuples, MagicHash #-}
-----------------------------------------------------------------------------
-- |
-- Module : Control.Concurrent.MVar
-- Copyright : (c) The University of Glasgow 2001
-- License : BSD-style (see the file libraries/base/LICENSE)
--
-- Maintainer : libraries@haskell.org
-- Stability : experimental
-- Portability : non-portable (concurrency)
--
-- An @'MVar' t@ is a mutable location that is either empty or contains a
-- value of type @t@. It has two fundamental operations: 'putMVar'
-- which fills an 'MVar' if it is empty and blocks otherwise, and
-- 'takeMVar' which empties an 'MVar' if it is full and blocks
-- otherwise. They can be used in multiple different ways:
--
-- 1. As synchronized mutable variables,
--
-- 2. As channels, with 'takeMVar' and 'putMVar' as receive and send, and
--
-- 3. As a binary semaphore @'MVar' ()@, with 'takeMVar' and 'putMVar' as
-- wait and signal.
--
-- They were introduced in the paper
-- <http://research.microsoft.com/~simonpj/papers/concurrent-haskell.ps.gz "Concurrent Haskell">
-- by Simon Peyton Jones, Andrew Gordon and Sigbjorn Finne, though
-- some details of their implementation have since then changed (in
-- particular, a put on a full 'MVar' used to error, but now merely
-- blocks.)
--
-- === Applicability
--
-- 'MVar's offer more flexibility than 'IORef's, but less flexibility
-- than 'STM'. They are appropriate for building synchronization
-- primitives and performing simple interthread communication; however
-- they are very simple and susceptible to race conditions, deadlocks or
-- uncaught exceptions. Do not use them if you need to perform larger
-- atomic operations such as reading from multiple variables: use 'STM'
-- instead.
--
-- In particular, the "bigger" functions in this module ('readMVar',
-- 'swapMVar', 'withMVar', 'modifyMVar_' and 'modifyMVar') are simply
-- the composition of a 'takeMVar' followed by a 'putMVar' with
-- exception safety.
-- These only have atomicity guarantees if all other threads
-- perform a 'takeMVar' before a 'putMVar' as well; otherwise, they may
-- block.
--
-- === Fairness
--
-- No thread can be blocked indefinitely on an 'MVar' unless another
-- thread holds that 'MVar' indefinitely. One usual implementation of
-- this fairness guarantee is that threads blocked on an 'MVar' are
-- served in a first-in-first-out fashion, but this is not guaranteed
-- in the semantics.
--
-- === Gotchas
--
-- Like many other Haskell data structures, 'MVar's are lazy. This
-- means that if you place an expensive unevaluated thunk inside an
-- 'MVar', it will be evaluated by the thread that consumes it, not the
-- thread that produced it. Be sure to 'evaluate' values to be placed
-- in an 'MVar' to the appropriate normal form, or utilize a strict
-- MVar provided by the strict-concurrency package.
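--
-- For example, a producer can force a value to weak head normal form before
-- storing it (a minimal sketch; @mv@ and @expensiveComputation@ are
-- placeholders):
--
-- @
-- do x <- evaluate expensiveComputation
--    putMVar mv x
-- @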
--
-- === Ordering
--
-- 'MVar' operations are always observed to take place in the order
-- they are written in the program, regardless of the memory model of
-- the underlying machine. This is in contrast to 'IORef' operations
-- which may appear out-of-order to another thread in some cases.
--
-- === Example
--
-- Consider the following concurrent data structure, a skip channel.
-- This is a channel for an intermittent source of high bandwidth
-- information (for example, mouse movement events.) Writing to the
-- channel never blocks, and reading from the channel only returns the
-- most recent value, or blocks if there are no new values. Multiple
-- readers are supported with a @dupSkipChan@ operation.
--
-- A skip channel is a pair of 'MVar's. The first 'MVar' contains the
-- current value, and a list of semaphores that need to be notified
-- when it changes. The second 'MVar' is a semaphore for this particular
-- reader: it is full if there is a value in the channel that this
-- reader has not read yet, and empty otherwise.
--
-- @
-- data SkipChan a = SkipChan (MVar (a, [MVar ()])) (MVar ())
--
-- newSkipChan :: IO (SkipChan a)
-- newSkipChan = do
-- sem <- newEmptyMVar
-- main <- newMVar (undefined, [sem])
-- return (SkipChan main sem)
--
-- putSkipChan :: SkipChan a -> a -> IO ()
-- putSkipChan (SkipChan main _) v = do
-- (_, sems) <- takeMVar main
-- putMVar main (v, [])
-- mapM_ (\sem -> putMVar sem ()) sems
--
-- getSkipChan :: SkipChan a -> IO a
-- getSkipChan (SkipChan main sem) = do
-- takeMVar sem
-- (v, sems) <- takeMVar main
-- putMVar main (v, sem:sems)
-- return v
--
-- dupSkipChan :: SkipChan a -> IO (SkipChan a)
-- dupSkipChan (SkipChan main _) = do
-- sem <- newEmptyMVar
-- (v, sems) <- takeMVar main
-- putMVar main (v, sem:sems)
-- return (SkipChan main sem)
-- @
--
-- This example was adapted from the original Concurrent Haskell paper.
-- For more examples of 'MVar's being used to build higher-level
-- synchronization primitives, see 'Control.Concurrent.Chan' and
-- 'Control.Concurrent.QSem'.
--
-----------------------------------------------------------------------------
module Control.Concurrent.MVar
(
-- * @MVar@s
MVar
, newEmptyMVar
, newMVar
, takeMVar
, putMVar
, readMVar
, swapMVar
, tryTakeMVar
, tryPutMVar
, isEmptyMVar
, withMVar
, withMVarMasked
, modifyMVar_
, modifyMVar
, modifyMVarMasked_
, modifyMVarMasked
, tryReadMVar
, mkWeakMVar
, addMVarFinalizer
) where
import GHC.MVar ( MVar(..), newEmptyMVar, newMVar, takeMVar, putMVar,
tryTakeMVar, tryPutMVar, isEmptyMVar, readMVar,
tryReadMVar
)
import qualified GHC.MVar
import GHC.Weak
import GHC.Base
import Control.Exception.Base
{-|
Take a value from an 'MVar', put a new value into the 'MVar' and
return the value taken. This function is atomic only if there are
no other producers for this 'MVar'.
-}
swapMVar :: MVar a -> a -> IO a
swapMVar mvar new =
mask_ $ do
old <- takeMVar mvar
putMVar mvar new
return old
{-|
'withMVar' is an exception-safe wrapper for operating on the contents
of an 'MVar'. This operation is exception-safe: it will replace the
original contents of the 'MVar' if an exception is raised (see
"Control.Exception"). However, it is only atomic if there are no
other producers for this 'MVar'.
-}
{-# INLINE withMVar #-}
-- inlining has been reported to have dramatic effects; see
-- http://www.haskell.org//pipermail/haskell/2006-May/017907.html
withMVar :: MVar a -> (a -> IO b) -> IO b
withMVar m io =
mask $ \restore -> do
a <- takeMVar m
b <- restore (io a) `onException` putMVar m a
putMVar m a
return b
{-|
Like 'withMVar', but the @IO@ action in the second argument is executed
with asynchronous exceptions masked.
/Since: 4.7.0.0/
-}
{-# INLINE withMVarMasked #-}
withMVarMasked :: MVar a -> (a -> IO b) -> IO b
withMVarMasked m io =
mask_ $ do
a <- takeMVar m
b <- io a `onException` putMVar m a
putMVar m a
return b
{-|
An exception-safe wrapper for modifying the contents of an 'MVar'.
Like 'withMVar', 'modifyMVar' will replace the original contents of
the 'MVar' if an exception is raised during the operation. This
function is only atomic if there are no other producers for this
'MVar'.
-}
{-# INLINE modifyMVar_ #-}
modifyMVar_ :: MVar a -> (a -> IO a) -> IO ()
modifyMVar_ m io =
mask $ \restore -> do
a <- takeMVar m
a' <- restore (io a) `onException` putMVar m a
putMVar m a'
{-|
A slight variation on 'modifyMVar_' that allows a value to be
returned (@b@) in addition to the modified value of the 'MVar'.
-}
{-# INLINE modifyMVar #-}
modifyMVar :: MVar a -> (a -> IO (a,b)) -> IO b
modifyMVar m io =
mask $ \restore -> do
a <- takeMVar m
(a',b) <- restore (io a >>= evaluate) `onException` putMVar m a
putMVar m a'
return b
{-|
Like 'modifyMVar_', but the @IO@ action in the second argument is executed with
asynchronous exceptions masked.
/Since: 4.6.0.0/
-}
{-# INLINE modifyMVarMasked_ #-}
modifyMVarMasked_ :: MVar a -> (a -> IO a) -> IO ()
modifyMVarMasked_ m io =
mask_ $ do
a <- takeMVar m
a' <- io a `onException` putMVar m a
putMVar m a'
{-|
Like 'modifyMVar', but the @IO@ action in the second argument is executed with
asynchronous exceptions masked.
/Since: 4.6.0.0/
-}
{-# INLINE modifyMVarMasked #-}
modifyMVarMasked :: MVar a -> (a -> IO (a,b)) -> IO b
modifyMVarMasked m io =
mask_ $ do
a <- takeMVar m
(a',b) <- (io a >>= evaluate) `onException` putMVar m a
putMVar m a'
return b
{-# DEPRECATED addMVarFinalizer "use 'mkWeakMVar' instead" #-} -- deprecated in 7.6
addMVarFinalizer :: MVar a -> IO () -> IO ()
addMVarFinalizer = GHC.MVar.addMVarFinalizer
-- | Make a 'Weak' pointer to an 'MVar', using the second argument as
-- a finalizer to run when the 'MVar' is garbage-collected
--
-- /Since: 4.6.0.0/
mkWeakMVar :: MVar a -> IO () -> IO (Weak (MVar a))
mkWeakMVar m@(MVar m#) f = IO $ \s ->
case mkWeak# m# m f s of (# s1, w #) -> (# s1, Weak w #)
| frantisekfarka/ghc-dsi | libraries/base/Control/Concurrent/MVar.hs | bsd-3-clause | 9,356 | 0 | 14 | 2,093 | 1,051 | 600 | 451 | 88 | 1 |
{-# LANGUAGE RecordWildCards, DeriveGeneric, CPP, TupleSections, TemplateHaskell #-}
module TipToiYaml
( tt2ttYaml, ttYaml2tt
, readTipToiYaml, writeTipToiYaml,writeTipToiCodeYaml
, ttyProduct_Id
, debugGame
)
where
import qualified Data.ByteString.Lazy as B
import qualified Data.ByteString.Lazy.Char8 as BC
import qualified Data.ByteString as SB
import qualified Data.ByteString.Char8 as SBC
import System.Exit
import System.FilePath
import Text.Printf
import Data.Char
import Data.Either
import Data.Functor
import Data.Maybe
import Control.Monad
import System.Directory
import qualified Data.Map as M
import qualified Data.Set as S
import qualified Data.Vector as V
import Control.Monad.Writer.Strict
#if MIN_VERSION_time(1,5,0)
import Data.Time.Format (defaultTimeLocale)
#else
import System.Locale (defaultTimeLocale)
#endif
import Data.Time (getCurrentTime, formatTime)
import Data.Yaml hiding ((.=), Parser)
import Data.Aeson.Types hiding ((.=), Parser)
import Data.Aeson.TH
import Text.Parsec hiding (Line, lookAhead, spaces)
import Text.Parsec.String
import qualified Text.Parsec as P
import qualified Text.Parsec.Token as P
import Text.Parsec.Language (emptyDef)
import GHC.Generics
import qualified Data.Foldable as F
import qualified Data.Traversable as T
import Data.Traversable (for, traverse)
import Control.Arrow
import Control.Applicative (Applicative(..), (<*>), (<*))
import TextToSpeech
import Language
import Types
import Constants
import KnownCodes
import PrettyPrint
import OneLineParser
import Utils
import TipToiYamlAux
data TipToiYAML = TipToiYAML
{ ttyScripts :: M.Map String [String]
, ttyComment :: Maybe String
, ttyMedia_Path :: Maybe String
, ttyInit :: Maybe String
, ttyWelcome :: Maybe String
, ttyProduct_Id :: Word32
, ttyScriptCodes :: Maybe CodeMap
, ttySpeak :: Maybe SpeakSpecs
, ttyLanguage :: Maybe Language
, ttyGames :: Maybe [GameYaml]
}
deriving Generic
data TipToiCodesYAML = TipToiCodesYAML
{ ttcScriptCodes :: CodeMap
}
deriving Generic
data SpeakSpec = SpeakSpec
{ ssLanguage :: Maybe Language
, ssSpeak :: M.Map String String
}
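-- In the YAML file, a speak specification is a plain mapping from sample
-- names to the text to synthesize; the key "language" is not a sample name
-- but (optionally) selects the text-to-speech language for that block, which
-- is why the instances below treat it specially.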
instance FromJSON SpeakSpec where
parseJSON v = do
m <- parseJSON v
l <- T.traverse parseJSON $ M.lookup "language" m
m' <- T.traverse parseJSON $ M.delete "language" m
return $ SpeakSpec l m'
instance ToJSON SpeakSpec where
toJSON (SpeakSpec (Just l) m) = toJSON $ M.insert "language" (ppLang l) m
toJSON (SpeakSpec Nothing m) = toJSON $ m
toSpeakMap :: Language -> Maybe SpeakSpecs -> M.Map String (Language, String)
toSpeakMap l Nothing = M.empty
toSpeakMap l (Just (SpeakSpecs specs)) = M.unionsWith e $ map go specs
where
go (SpeakSpec ml m) = M.map ((l',)) m
where l' = fromMaybe l ml
e = error "Conflicting definitions in section \"speak\""
newtype SpeakSpecs = SpeakSpecs [SpeakSpec]
instance FromJSON SpeakSpecs where
parseJSON (Array a) = SpeakSpecs <$> mapM parseJSON (V.toList a)
parseJSON v = SpeakSpecs . (:[]) <$> parseJSON v
instance ToJSON SpeakSpecs where
toJSON (SpeakSpecs [x]) = toJSON x
toJSON (SpeakSpecs l) = Array $ V.fromList $ map toJSON $ l
type PlayListListYaml = String
type OIDListYaml = String
data GameYaml = CommonGameYaml
{ gyGameType :: Word16
, gyRounds :: Word16
, gyUnknownC :: Word16
, gyEarlyRounds :: Word16
, gyRepeatLastMedia :: Word16
, gyUnknownX :: Word16
, gyUnknownW :: Word16
, gyUnknownV :: Word16
, gyStartPlayList :: PlayListListYaml
, gyRoundEndPlayList :: PlayListListYaml
, gyFinishPlayList :: PlayListListYaml
, gyRoundStartPlayList :: PlayListListYaml
, gyLaterRoundStartPlayList :: PlayListListYaml
, gySubgames :: [SubGameYaml]
, gyTargetScores :: [Word16]
, gyFinishPlayLists :: [PlayListListYaml]
}
| Game6Yaml
{ gyRounds :: Word16
, gyBonusSubgameCount :: Word16
, gyBonusRounds :: Word16
, gyBonusTarget :: Word16
, gyUnknownI :: Word16
, gyEarlyRounds :: Word16
, gyUnknownQ :: Word16
, gyRepeatLastMedia :: Word16
, gyUnknownX :: Word16
, gyUnknownW :: Word16
, gyUnknownV :: Word16
, gyStartPlayList :: PlayListListYaml
, gyRoundEndPlayList :: PlayListListYaml
, gyFinishPlayList :: PlayListListYaml
, gyRoundStartPlayList :: PlayListListYaml
, gyLaterRoundStartPlayList :: PlayListListYaml
, gyRoundStartPlayList2 :: PlayListListYaml
, gyLaterRoundStartPlayList2 :: PlayListListYaml
, gySubgames :: [SubGameYaml]
, gyTargetScores :: [Word16]
, gyBonusTargetScores :: [Word16]
, gyFinishPlayLists :: [PlayListListYaml]
, gyBonusFinishPlayLists :: [PlayListListYaml]
, gyBonusSubgameIds :: [Word16]
}
| Game7Yaml
{ gyRounds :: Word16
, gyUnknownC :: Word16
, gyEarlyRounds :: Word16
, gyRepeatLastMedia :: Word16
, gyUnknownX :: Word16
, gyUnknownW :: Word16
, gyUnknownV :: Word16
, gyStartPlayList :: PlayListListYaml
, gyRoundEndPlayList :: PlayListListYaml
, gyFinishPlayList :: PlayListListYaml
, gyRoundStartPlayList :: PlayListListYaml
, gyLaterRoundStartPlayList :: PlayListListYaml
, gySubgames :: [SubGameYaml]
, gyTargetScores :: [Word16]
, gyFinishPlayLists :: [PlayListListYaml]
, gySubgameGroups :: [[GameId]]
}
| Game8Yaml
{ gyRounds :: Word16
, gyUnknownC :: Word16
, gyEarlyRounds :: Word16
, gyRepeatLastMedia :: Word16
, gyUnknownX :: Word16
, gyUnknownW :: Word16
, gyUnknownV :: Word16
, gyStartPlayList :: PlayListListYaml
, gyRoundEndPlayList :: PlayListListYaml
, gyFinishPlayList :: PlayListListYaml
, gyRoundStartPlayList :: PlayListListYaml
, gyLaterRoundStartPlayList :: PlayListListYaml
, gySubgames :: [SubGameYaml]
, gyTargetScores :: [Word16]
, gyFinishPlayLists :: [PlayListListYaml]
, gyGameSelectOIDs :: OIDListYaml
, gyGameSelect :: [Word16]
, gyGameSelectErrors1 :: PlayListListYaml
, gyGameSelectErrors2 :: PlayListListYaml
}
| Game9Yaml
{ gyRounds :: Word16
, gyUnknownC :: Word16
, gyEarlyRounds :: Word16
, gyRepeatLastMedia :: Word16
, gyUnknownX :: Word16
, gyUnknownW :: Word16
, gyUnknownV :: Word16
, gyStartPlayList :: PlayListListYaml
, gyRoundEndPlayList :: PlayListListYaml
, gyFinishPlayList :: PlayListListYaml
, gyRoundStartPlayList :: PlayListListYaml
, gyLaterRoundStartPlayList :: PlayListListYaml
, gySubgames :: [SubGameYaml]
, gyTargetScores :: [Word16]
, gyFinishPlayLists :: [PlayListListYaml]
, gyExtraPlayLists :: [PlayListListYaml]
}
| Game10Yaml
{ gyRounds :: Word16
, gyUnknownC :: Word16
, gyEarlyRounds :: Word16
, gyRepeatLastMedia :: Word16
, gyUnknownX :: Word16
, gyUnknownW :: Word16
, gyUnknownV :: Word16
, gyStartPlayList :: PlayListListYaml
, gyRoundEndPlayList :: PlayListListYaml
, gyFinishPlayList :: PlayListListYaml
, gyRoundStartPlayList :: PlayListListYaml
, gyLaterRoundStartPlayList :: PlayListListYaml
, gySubgames :: [SubGameYaml]
, gyTargetScores :: [Word16]
, gyFinishPlayLists :: [PlayListListYaml]
, gyExtraPlayLists :: [PlayListListYaml]
}
| Game16Yaml
{ gyRounds :: Word16
, gyUnknownC :: Word16
, gyEarlyRounds :: Word16
, gyRepeatLastMedia :: Word16
, gyUnknownX :: Word16
, gyUnknownW :: Word16
, gyUnknownV :: Word16
, gyStartPlayList :: PlayListListYaml
, gyRoundEndPlayList :: PlayListListYaml
, gyFinishPlayList :: PlayListListYaml
, gyRoundStartPlayList :: PlayListListYaml
, gyLaterRoundStartPlayList :: PlayListListYaml
, gySubgames :: [SubGameYaml]
, gyTargetScores :: [Word16]
, gyFinishPlayLists :: [PlayListListYaml]
, gyExtraOIDs :: OIDListYaml
, gyExtraPlayLists :: [PlayListListYaml]
}
| Game253Yaml
data SubGameYaml = SubGameYaml
{ sgUnknown :: String
, sgOids1 :: OIDListYaml
, sgOids2 :: OIDListYaml
, sgOids3 :: OIDListYaml
, sgPlaylist :: [PlayListListYaml]
}
$(deriveJSON gameYamlOptions ''GameYaml)
$(deriveJSON gameYamlOptions ''SubGameYaml)
tipToiYamlOptions = defaultOptions { fieldLabelModifier = map fix . map toLower . drop 3 }
where fix '_' = '-'
fix c = c
instance FromJSON TipToiYAML where
parseJSON = genericParseJSON tipToiYamlOptions
instance ToJSON TipToiYAML where
toJSON = genericToJSON tipToiYamlOptions
#if MIN_VERSION_aeson(0,10,0)
toEncoding = genericToEncoding tipToiYamlOptions
#endif
instance FromJSON TipToiCodesYAML where
parseJSON = genericParseJSON tipToiYamlOptions
instance ToJSON TipToiCodesYAML where
toJSON = genericToJSON tipToiYamlOptions
#if MIN_VERSION_aeson(0,10,0)
toEncoding = genericToEncoding tipToiYamlOptions
#endif
tt2ttYaml :: String -> TipToiFile -> TipToiYAML
tt2ttYaml path (TipToiFile {..}) = TipToiYAML
{ ttyProduct_Id = ttProductId
, ttyInit = Just $ spaces $ [ ppCommand True M.empty [] (ArithOp Set (RegPos r) (Const n))
| (r,n) <- zip [0..] ttInitialRegs , n /= 0]
, ttyWelcome = Just $ playListList2Yaml ttWelcome
, ttyComment = Just $ BC.unpack ttComment
, ttyScripts = M.fromList
[ (show oid, map exportLine ls) | (oid, Just ls) <- ttScripts]
, ttyMedia_Path = Just path
, ttyScriptCodes = Nothing
, ttySpeak = Nothing
, ttyLanguage = Nothing
, ttyGames = list2Maybe $ map game2gameYaml ttGames
}
list2Maybe [] = Nothing
list2Maybe xs = Just xs
playListList2Yaml :: PlayListList -> PlayListListYaml
playListList2Yaml = commas . map show . concat
oidList2Yaml :: [OID] -> OIDListYaml
oidList2Yaml = unwords . map show
subGame2Yaml :: SubGame -> SubGameYaml
subGame2Yaml (SubGame u o1 o2 o3 pl) = SubGameYaml
{ sgUnknown = prettyHex u
, sgOids1 = oidList2Yaml o1
, sgOids2 = oidList2Yaml o2
, sgOids3 = oidList2Yaml o3
, sgPlaylist = map playListList2Yaml pl
}
game2gameYaml :: Game -> GameYaml
game2gameYaml CommonGame {..} = CommonGameYaml
{ gyGameType = gGameType
, gyRounds = gRounds
, gyUnknownC = gUnknownC
, gyEarlyRounds = gEarlyRounds
, gyRepeatLastMedia = gRepeatLastMedia
, gyUnknownX = gUnknownX
, gyUnknownW = gUnknownW
, gyUnknownV = gUnknownV
, gyStartPlayList = playListList2Yaml gStartPlayList
, gyRoundEndPlayList = playListList2Yaml gRoundEndPlayList
, gyFinishPlayList = playListList2Yaml gFinishPlayList
, gyRoundStartPlayList = playListList2Yaml gRoundStartPlayList
, gyLaterRoundStartPlayList = playListList2Yaml gLaterRoundStartPlayList
, gySubgames = map subGame2Yaml gSubgames
, gyTargetScores = gTargetScores
, gyFinishPlayLists = map playListList2Yaml gFinishPlayLists
}
game2gameYaml Game6 {..} = Game6Yaml
{ gyRounds = gRounds
, gyBonusSubgameCount = gBonusSubgameCount
, gyBonusRounds = gBonusRounds
, gyBonusTarget = gBonusTarget
, gyUnknownI = gUnknownI
, gyEarlyRounds = gEarlyRounds
, gyUnknownQ = gUnknownQ
, gyRepeatLastMedia = gRepeatLastMedia
, gyUnknownX = gUnknownX
, gyUnknownW = gUnknownW
, gyUnknownV = gUnknownV
, gyStartPlayList = playListList2Yaml gStartPlayList
, gyRoundEndPlayList = playListList2Yaml gRoundEndPlayList
, gyFinishPlayList = playListList2Yaml gFinishPlayList
, gyRoundStartPlayList = playListList2Yaml gRoundStartPlayList
, gyLaterRoundStartPlayList = playListList2Yaml gLaterRoundStartPlayList
, gyRoundStartPlayList2 = playListList2Yaml gRoundStartPlayList2
, gyLaterRoundStartPlayList2 = playListList2Yaml gLaterRoundStartPlayList2
, gySubgames = map subGame2Yaml gSubgames
, gyTargetScores = gTargetScores
, gyBonusTargetScores = gBonusTargetScores
, gyFinishPlayLists = map playListList2Yaml gFinishPlayLists
, gyBonusFinishPlayLists = map playListList2Yaml gBonusFinishPlayLists
, gyBonusSubgameIds = gBonusSubgameIds
}
game2gameYaml Game7 {..} = Game7Yaml
{ gyRounds = gRounds
, gyUnknownC = gUnknownC
, gyEarlyRounds = gEarlyRounds
, gyRepeatLastMedia = gRepeatLastMedia
, gyUnknownX = gUnknownX
, gyUnknownW = gUnknownW
, gyUnknownV = gUnknownV
, gyStartPlayList = playListList2Yaml gStartPlayList
, gyRoundEndPlayList = playListList2Yaml gRoundEndPlayList
, gyFinishPlayList = playListList2Yaml gFinishPlayList
, gyRoundStartPlayList = playListList2Yaml gRoundStartPlayList
, gyLaterRoundStartPlayList = playListList2Yaml gLaterRoundStartPlayList
, gySubgames = map subGame2Yaml gSubgames
, gyTargetScores = gTargetScores
, gyFinishPlayLists = map playListList2Yaml gFinishPlayLists
, gySubgameGroups = gSubgameGroups
}
game2gameYaml Game8 {..} = Game8Yaml
{ gyRounds = gRounds
, gyUnknownC = gUnknownC
, gyEarlyRounds = gEarlyRounds
, gyRepeatLastMedia = gRepeatLastMedia
, gyUnknownX = gUnknownX
, gyUnknownW = gUnknownW
, gyUnknownV = gUnknownV
, gyStartPlayList = playListList2Yaml gStartPlayList
, gyRoundEndPlayList = playListList2Yaml gRoundEndPlayList
, gyFinishPlayList = playListList2Yaml gFinishPlayList
, gyRoundStartPlayList = playListList2Yaml gRoundStartPlayList
, gyLaterRoundStartPlayList = playListList2Yaml gLaterRoundStartPlayList
, gySubgames = map subGame2Yaml gSubgames
, gyTargetScores = gTargetScores
, gyFinishPlayLists = map playListList2Yaml gFinishPlayLists
, gyGameSelectOIDs = oidList2Yaml gGameSelectOIDs
, gyGameSelect = gGameSelect
, gyGameSelectErrors1 = playListList2Yaml gGameSelectErrors1
, gyGameSelectErrors2 = playListList2Yaml gGameSelectErrors2
}
game2gameYaml Game9 {..} = Game9Yaml
{ gyRounds = gRounds
, gyUnknownC = gUnknownC
, gyEarlyRounds = gEarlyRounds
, gyRepeatLastMedia = gRepeatLastMedia
, gyUnknownX = gUnknownX
, gyUnknownW = gUnknownW
, gyUnknownV = gUnknownV
, gyStartPlayList = playListList2Yaml gStartPlayList
, gyRoundEndPlayList = playListList2Yaml gRoundEndPlayList
, gyFinishPlayList = playListList2Yaml gFinishPlayList
, gyRoundStartPlayList = playListList2Yaml gRoundStartPlayList
, gyLaterRoundStartPlayList = playListList2Yaml gLaterRoundStartPlayList
, gySubgames = map subGame2Yaml gSubgames
, gyTargetScores = gTargetScores
, gyFinishPlayLists = map playListList2Yaml gFinishPlayLists
, gyExtraPlayLists = map playListList2Yaml gExtraPlayLists
}
game2gameYaml Game10 {..} = Game10Yaml
{ gyRounds = gRounds
, gyUnknownC = gUnknownC
, gyEarlyRounds = gEarlyRounds
, gyRepeatLastMedia = gRepeatLastMedia
, gyUnknownX = gUnknownX
, gyUnknownW = gUnknownW
, gyUnknownV = gUnknownV
, gyStartPlayList = playListList2Yaml gStartPlayList
, gyRoundEndPlayList = playListList2Yaml gRoundEndPlayList
, gyFinishPlayList = playListList2Yaml gFinishPlayList
, gyRoundStartPlayList = playListList2Yaml gRoundStartPlayList
, gyLaterRoundStartPlayList = playListList2Yaml gLaterRoundStartPlayList
, gySubgames = map subGame2Yaml gSubgames
, gyTargetScores = gTargetScores
, gyFinishPlayLists = map playListList2Yaml gFinishPlayLists
, gyExtraPlayLists = map playListList2Yaml gExtraPlayLists
}
game2gameYaml Game16 {..} = Game16Yaml
{ gyRounds = gRounds
, gyUnknownC = gUnknownC
, gyEarlyRounds = gEarlyRounds
, gyRepeatLastMedia = gRepeatLastMedia
, gyUnknownX = gUnknownX
, gyUnknownW = gUnknownW
, gyUnknownV = gUnknownV
, gyStartPlayList = playListList2Yaml gStartPlayList
, gyRoundEndPlayList = playListList2Yaml gRoundEndPlayList
, gyFinishPlayList = playListList2Yaml gFinishPlayList
, gyRoundStartPlayList = playListList2Yaml gRoundStartPlayList
, gyLaterRoundStartPlayList = playListList2Yaml gLaterRoundStartPlayList
, gySubgames = map subGame2Yaml gSubgames
, gyTargetScores = gTargetScores
, gyFinishPlayLists = map playListList2Yaml gFinishPlayLists
, gyExtraOIDs = oidList2Yaml gExtraOIDs
, gyExtraPlayLists = map playListList2Yaml gExtraPlayLists
}
game2gameYaml Game253 = Game253Yaml
playListListFromYaml :: PlayListListYaml -> WithFileNames PlayListList
playListListFromYaml =
fmap listify .
traverse recordFilename .
either error id .
parseOneLinePure parsePlayList "playlist"
where listify [] = []
listify x = [x]
oidListFromYaml :: OIDListYaml -> [OID]
oidListFromYaml = map read . words
subGameFromYaml :: SubGameYaml -> WithFileNames SubGame
subGameFromYaml (SubGameYaml u o1 o2 o3 pl) = (\x -> SubGame
{ sgUnknown = either error id $ parseOneLinePure parsePrettyHex "unknown" u
, sgOids1 = oidListFromYaml o1
, sgOids2 = oidListFromYaml o2
, sgOids3 = oidListFromYaml o3
, sgPlaylist = x
}) <$> traverse playListListFromYaml pl
gameYaml2Game :: GameYaml -> WithFileNames Game
gameYaml2Game CommonGameYaml {..} = pure CommonGame
<*> pure gyGameType
<*> pure gyRounds
<*> pure gyUnknownC
<*> pure gyEarlyRounds
<*> pure gyRepeatLastMedia
<*> pure gyUnknownX
<*> pure gyUnknownW
<*> pure gyUnknownV
<*> playListListFromYaml gyStartPlayList
<*> playListListFromYaml gyRoundEndPlayList
<*> playListListFromYaml gyFinishPlayList
<*> playListListFromYaml gyRoundStartPlayList
<*> playListListFromYaml gyLaterRoundStartPlayList
<*> traverse subGameFromYaml gySubgames
<*> pure gyTargetScores
<*> traverse playListListFromYaml gyFinishPlayLists
gameYaml2Game Game6Yaml {..} = pure Game6
<*> pure gyRounds
<*> pure gyBonusSubgameCount
<*> pure gyBonusRounds
<*> pure gyBonusTarget
<*> pure gyUnknownI
<*> pure gyEarlyRounds
<*> pure gyUnknownQ
<*> pure gyRepeatLastMedia
<*> pure gyUnknownX
<*> pure gyUnknownW
<*> pure gyUnknownV
<*> playListListFromYaml gyStartPlayList
<*> playListListFromYaml gyRoundEndPlayList
<*> playListListFromYaml gyFinishPlayList
<*> playListListFromYaml gyRoundStartPlayList
<*> playListListFromYaml gyLaterRoundStartPlayList
<*> playListListFromYaml gyRoundStartPlayList2
<*> playListListFromYaml gyLaterRoundStartPlayList2
<*> traverse subGameFromYaml gySubgames
<*> pure gyTargetScores
<*> pure gyBonusTargetScores
<*> traverse playListListFromYaml gyFinishPlayLists
<*> traverse playListListFromYaml gyBonusFinishPlayLists
<*> pure gyBonusSubgameIds
gameYaml2Game Game7Yaml {..} = pure Game7
<*> pure gyRounds
<*> pure gyUnknownC
<*> pure gyEarlyRounds
<*> pure gyRepeatLastMedia
<*> pure gyUnknownX
<*> pure gyUnknownW
<*> pure gyUnknownV
<*> playListListFromYaml gyStartPlayList
<*> playListListFromYaml gyRoundEndPlayList
<*> playListListFromYaml gyFinishPlayList
<*> playListListFromYaml gyRoundStartPlayList
<*> playListListFromYaml gyLaterRoundStartPlayList
<*> traverse subGameFromYaml gySubgames
<*> pure gyTargetScores
<*> traverse playListListFromYaml gyFinishPlayLists
<*> pure gySubgameGroups
gameYaml2Game Game8Yaml {..} = pure Game8
<*> pure gyRounds
<*> pure gyUnknownC
<*> pure gyEarlyRounds
<*> pure gyRepeatLastMedia
<*> pure gyUnknownX
<*> pure gyUnknownW
<*> pure gyUnknownV
<*> playListListFromYaml gyStartPlayList
<*> playListListFromYaml gyRoundEndPlayList
<*> playListListFromYaml gyFinishPlayList
<*> playListListFromYaml gyRoundStartPlayList
<*> playListListFromYaml gyLaterRoundStartPlayList
<*> traverse subGameFromYaml gySubgames
<*> pure gyTargetScores
<*> traverse playListListFromYaml gyFinishPlayLists
<*> pure (oidListFromYaml gyGameSelectOIDs)
<*> pure gyGameSelect
<*> playListListFromYaml gyGameSelectErrors1
<*> playListListFromYaml gyGameSelectErrors2
gameYaml2Game Game9Yaml {..} = pure Game9
<*> pure gyRounds
<*> pure gyUnknownC
<*> pure gyEarlyRounds
<*> pure gyRepeatLastMedia
<*> pure gyUnknownX
<*> pure gyUnknownW
<*> pure gyUnknownV
<*> playListListFromYaml gyStartPlayList
<*> playListListFromYaml gyRoundEndPlayList
<*> playListListFromYaml gyFinishPlayList
<*> playListListFromYaml gyRoundStartPlayList
<*> playListListFromYaml gyLaterRoundStartPlayList
<*> traverse subGameFromYaml gySubgames
<*> pure gyTargetScores
<*> traverse playListListFromYaml gyFinishPlayLists
<*> traverse playListListFromYaml gyExtraPlayLists
gameYaml2Game Game10Yaml {..} = pure Game10
<*> pure gyRounds
<*> pure gyUnknownC
<*> pure gyEarlyRounds
<*> pure gyRepeatLastMedia
<*> pure gyUnknownX
<*> pure gyUnknownW
<*> pure gyUnknownV
<*> playListListFromYaml gyStartPlayList
<*> playListListFromYaml gyRoundEndPlayList
<*> playListListFromYaml gyFinishPlayList
<*> playListListFromYaml gyRoundStartPlayList
<*> playListListFromYaml gyLaterRoundStartPlayList
<*> traverse subGameFromYaml gySubgames
<*> pure gyTargetScores
<*> traverse playListListFromYaml gyFinishPlayLists
<*> traverse playListListFromYaml gyExtraPlayLists
gameYaml2Game Game16Yaml {..} = pure Game16
<*> pure gyRounds
<*> pure gyUnknownC
<*> pure gyEarlyRounds
<*> pure gyRepeatLastMedia
<*> pure gyUnknownX
<*> pure gyUnknownW
<*> pure gyUnknownV
<*> playListListFromYaml gyStartPlayList
<*> playListListFromYaml gyRoundEndPlayList
<*> playListListFromYaml gyFinishPlayList
<*> playListListFromYaml gyRoundStartPlayList
<*> playListListFromYaml gyLaterRoundStartPlayList
<*> traverse subGameFromYaml gySubgames
<*> pure gyTargetScores
<*> traverse playListListFromYaml gyFinishPlayLists
<*> pure (oidListFromYaml gyExtraOIDs)
<*> traverse playListListFromYaml gyExtraPlayLists
gameYaml2Game Game253Yaml = pure Game253
mergeOnlyEqual :: String -> Word16 -> Word16 -> Word16
mergeOnlyEqual _ c1 c2 | c1 == c2 = c1
mergeOnlyEqual s c1 c2 = error $
printf "The .yaml file specifies code %d for script \"%s\",\
\but the .codes.yamls file specifies %d. Please fix this!" c2 s c1
toWord16 :: Word32 -> Word16
toWord16 x = fromIntegral x
toWord32 :: Word16 -> Word32
toWord32 x = fromIntegral x
scriptCodes :: [String] -> CodeMap -> Word32 -> Either String (String -> Word16, CodeMap)
scriptCodes [] codeMap productId = Right (error "scriptCodes []", codeMap)
scriptCodes codes codeMap productId
| null strs || null nums = Right (lookupCode, totalMap)
| otherwise = Left "Cannot mix numbers and names in scripts."
where
(strs, nums) = partitionEithers $ map f codes
newStrs = filter (`M.notMember` codeMap) strs
usedCodes = S.fromList $ M.elems codeMap
f s = case readMaybe s of
Nothing -> Left s
Just n -> Right (n::Word16)
-- The following logic (for objectCodes) tries to use different object codes
-- for different projects, as far as possible. This makes the detection of not
-- having activated a book/product more robust.
-- We could theoretically set:
-- objectCodeOffsetMax = lastObjectCode - firstObjectCode.
-- This would assign perfectly usable object codes, and would minimize the
-- probability of object code collisions between products, but sometimes
-- object codes would wrap around from 14999 to 1000 even for small projects
-- which may be undesirable. We arbitrarily do not use the last 999 possible
-- offsets to avoid a wrap around in object codes for projects with <= 1000
-- object codes. This does not impose any limit on the number of object codes
-- per project. Every project can always use all 14000 object codes.
objectCodeOffsetMax = lastObjectCode - firstObjectCode - 999
-- Distribute the used object codes for different projects across the whole
-- range of usable object codes. We do this by multiplying the productId with
    -- the golden ratio to achieve a maximum distance between different projects,
-- independent of the total number of different projects.
-- 8035 = (14999-1000-999+1)*((sqrt(5)-1)/2)
objectCodeOffset = toWord16(rem (productId * 8035) (toWord32(objectCodeOffsetMax) + 1))
-- objectCodes always contains _all_ possible object codes [firstObjectCode..lastObjectCode],
-- starting at firstObjectCode+objectCodeOffset and then wrapping around.
objectCodes = [firstObjectCode + objectCodeOffset .. lastObjectCode] ++ [firstObjectCode .. firstObjectCode + objectCodeOffset - 1]
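    -- Worked example (a sketch assuming, as the comments above suggest, that
    -- firstObjectCode = 1000 and lastObjectCode = 14999, so objectCodeOffsetMax = 13000):
    -- for productId = 42 we get objectCodeOffset = (42 * 8035) `rem` 13001 = 12445,
    -- hence objectCodes = [13445..14999] ++ [1000..13444], i.e. all 14000 codes, merely rotated.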
newAssignments =
M.fromList $
zip newStrs $
filter (`S.notMember` usedCodes) $
objectCodes
totalMap = M.fromList
[ (str, fromJust $
readMaybe str `mplus`
M.lookup str codeMap `mplus`
M.lookup str newAssignments)
| str <- codes
]
readCode s = case readMaybe s of
Nothing -> error $ printf "Cannot jump to named script \"%s\" in a yaml with numbered scripts." s
Just c -> c
lookupCode s = case M.lookup s totalMap of
Nothing -> error $ printf "Cannot jump to unknown script \"%s\"." s
Just c -> c
resolveRegs ::
(M.Map Register Word16, [(a, Maybe [Line Register])]) ->
(M.Map ResReg Word16, [(a, Maybe [Line ResReg])])
resolveRegs x = everywhere x
where
-- Could use generics somehow
regs = S.fromList $
M.keys (fst x) ++ concatMap (maybe [] (concatMap F.toList) . snd) (snd x)
-- Could use generics somehow
everywhere = M.mapKeys resolve *** map (second (fmap (map (fmap resolve))))
regNums = S.fromList [ n | RegPos n <- S.toList regs ]
regNames = [ n | RegName n <- S.toList regs ]
mapping = M.fromList $ zip regNames [n | n <- [0..], n `S.notMember` regNums]
resolve (RegPos n) = n
resolve (RegName n) = fromMaybe (error "resolveRegs broken") (M.lookup n mapping)
resolveJumps :: (String -> Word16) -> [(a, Maybe [Line b])] -> [(a, Maybe [Line b])]
resolveJumps m = everywhere
where
everywhere = map (second (fmap (map resolveLine)))
resolveLine (Line o cond cmds acts) = (Line o cond (map resolve cmds) acts)
resolve (NamedJump n) = Jump (Const (m n))
resolve c = c
newtype WithFileNames a = WithFileNames
{ runWithFileNames :: ((String -> Word16) -> a, [String] -> [String])
}
instance Functor WithFileNames where
fmap f (WithFileNames (r,fns)) = WithFileNames (f . r, fns)
instance Applicative WithFileNames where
pure x = WithFileNames (const x, id)
WithFileNames (r1,fns1) <*> WithFileNames (r2,fns2)
= WithFileNames (\m -> r1 m (r2 m),fns1 . fns2)
recordFilename :: String -> WithFileNames Word16
recordFilename fn = WithFileNames (($ fn), (fn :))
resolveFileNames :: WithFileNames a -> (a, [String])
resolveFileNames (WithFileNames (r,fns)) = (r filename_lookup, filenames)
where
filenames = S.toList $ S.fromList $ fns []
filename_lookup = (M.fromList (zip filenames [0..]) M.!)
ttYaml2tt :: FilePath -> TipToiYAML -> CodeMap -> IO (TipToiFile, CodeMap)
ttYaml2tt dir (TipToiYAML {..}) extCodeMap = do
now <- getCurrentTime
let date = formatTime defaultTimeLocale "%Y%m%d" now
let codeMap = M.unionWithKey mergeOnlyEqual
extCodeMap
(fromMaybe M.empty ttyScriptCodes)
(scriptMap, totalMap) <- either fail return $ scriptCodes (M.keys ttyScripts) codeMap ttyProduct_Id
let m = M.mapKeys scriptMap ttyScripts
first = fst (M.findMin m)
last = fst (M.findMax m)
welcome_names <- parseOneLine parsePlayList "welcome" (fromMaybe "" ttyWelcome)
let ((prescripts, welcome, games), filenames) = resolveFileNames $
(,,) <$>
for [first ..last] (\oid ->
(oid ,) <$>
for (M.lookup oid m) (\raw_lines ->
forAn raw_lines (\i raw_line ->
let d = printf "Line %d of OID %d" i oid
(l,s) = either error id $ parseOneLinePure parseLine d raw_line
in l <$> traverse recordFilename s
)
)
) <*>
traverse recordFilename welcome_names <*>
traverse gameYaml2Game (fromMaybe [] ttyGames)
preInitRegs <- M.fromList <$> parseOneLine parseInitRegs "init" (fromMaybe "" ttyInit)
-- resolve registers
let (initRegs, scripts) = resolveRegs (preInitRegs, prescripts)
-- resolve named jumps
let scripts' = resolveJumps scriptMap scripts
let maxReg = maximum $ 0:
[ r
| (_, Just ls) <- scripts'
, Line _ cs as _ <- ls
, r <- concatMap F.toList cs ++ concatMap F.toList as ]
let ttySpeakMap = toSpeakMap (fromMaybe defaultLanguage ttyLanguage) ttySpeak
    -- Generate text-to-speech files
forM (M.elems ttySpeakMap) $ \(lang, txt) ->
textToSpeech lang txt
-- Check which files do not exist
-- Not very nice, better to use something like Control.Applicative.Error if
-- it were in base, and not fixed to String.
files_with_errors <- forM filenames $ \fn -> case M.lookup fn ttySpeakMap of
Just (lang, txt) -> do
Right <$> readFile' (ttsFileName lang txt)
Nothing -> do
let paths = [ combine dir relpath
| ext <- map snd fileMagics
, let pat = fromMaybe "%s" ttyMedia_Path
, let relpath = printf pat fn <.> ext
]
ex <- filterM doesFileExist paths
case ex of
[] -> do
return $ Left $ unlines $
"Could not find any of these files:" :
paths
[f] -> Right <$> readFile' f
_ -> do
return $ Left $ unlines $
"Multiple matching files found:" :
paths
files <- case partitionEithers files_with_errors of
([],files) -> return files
(errors, _) -> putStr (unlines errors) >> exitFailure
comment <- case ttyComment of
Nothing -> return $ BC.pack $ "created with tttool version " ++ tttoolVersion
Just c | length c > maxCommentLength -> do
printf "Comment is %d characters too long; the maximum is %d."
(length c - maxCommentLength) maxCommentLength
exitFailure
| otherwise -> return $ BC.pack c
return $ (TipToiFile
{ ttProductId = ttyProduct_Id
, ttRawXor = knownRawXOR
, ttComment = comment
, ttDate = BC.pack date
, ttWelcome = [welcome]
, ttInitialRegs = [fromMaybe 0 (M.lookup r initRegs) | r <- [0..maxReg]]
, ttScripts = scripts'
, ttGames = games
, ttAudioFiles = files
, ttAudioXor = knownXOR
, ttAudioFilesDoubles = False
, ttChecksum = 0x00
, ttChecksumCalc = 0x00
, ttBinaries1 = []
, ttBinaries2 = []
, ttBinaries3 = []
, ttBinaries4 = []
, ttSpecialOIDs = Nothing
}, totalMap)
lexer = P.makeTokenParser $
emptyDef
{ P.reservedOpNames = words ":= == /= < >="
, P.opLetter = oneOf ":!#%&*+./<=>?@\\^|-~" -- Removed $, used for registers
}
parseLine :: Parser ([Word16] -> Line Register, [String])
parseLine = do
conds <- many (P.try parseCond)
(acts, filenames) <- parseCommands 0
eof
return (Line 0 conds acts, filenames)
descP d p = p <?> d
parseCond :: Parser (Conditional Register)
parseCond = descP "Conditional" $ do
v1 <- parseTVal
op <- parseCondOp
v2 <- parseTVal
P.lexeme lexer (char '?')
return (Cond v1 op v2)
parseWord16 :: Parser Word16
parseWord16 = fromIntegral <$> P.natural lexer
parseReg :: Parser Register
parseReg = P.lexeme lexer $ char '$' >> (RegPos <$> parseWord16 <|> RegName <$> many1 (alphaNum <|> char '_'))
parseTVal :: Parser (TVal Register)
parseTVal = (Reg <$> parseReg <|> Const <$> parseWord16) <?> "Value"
parseCondOp :: Parser CondOp
parseCondOp = choice
[ P.reservedOp lexer "==" >> return Eq
, P.reservedOp lexer "<" >> return Lt
, P.reservedOp lexer ">" >> return Gt
, P.reservedOp lexer ">=" >> return GEq
, P.reservedOp lexer "<=" >> return LEq
, P.reservedOp lexer "/=" >> return NEq
, P.reservedOp lexer "!=" >> return NEq
]
parseInitRegs :: Parser [(Register, Word16)]
parseInitRegs = many $ do
r <- parseReg
P.reservedOp lexer ":="
v <- parseWord16
return (r,v)
parsePlayList :: Parser [String]
parsePlayList = P.commaSep lexer $ parseAudioRef
parseAudioRef :: Parser String
parseAudioRef = P.lexeme lexer $ many1 (alphaNum <|> char '_')
parseScriptRef :: Parser String
parseScriptRef = P.lexeme lexer $ many1 (alphaNum <|> char '_')
parsePrettyHex :: Parser B.ByteString
parsePrettyHex = B.pack <$> many1 (P.lexeme lexer nibble)
where
nibble = fromIntegral <$> number 16 hexDigit
number base baseDigit
= do{ digits <- many1 baseDigit
; let n = foldl (\x d -> base*x + toInteger (digitToInt d)) 0 digits
; seq n (return n)
}
parseCommands :: Int -> Parser ([Command Register], [String])
parseCommands i =
choice
[ eof >> return ([],[])
, descP "Register action" $
do r <- parseReg
op <- choice [ P.reservedOp lexer ":=" >> return Set
, P.reservedOp lexer "+=" >> return Inc
, P.reservedOp lexer "-=" >> return Dec
, P.reservedOp lexer "%=" >> return Mod
, P.reservedOp lexer "/=" >> return Div
, P.reservedOp lexer "*=" >> return Mult
, P.reservedOp lexer "&=" >> return And
, P.reservedOp lexer "|=" >> return Or
, P.reservedOp lexer "^=" >> return XOr
]
v <- parseTVal
(cmds, filenames) <- parseCommands i
return (ArithOp op r v : cmds, filenames)
, descP "Negation" $
do P.lexeme lexer $ string "Neg"
r <- P.parens lexer $ parseReg
(cmds, filenames) <- parseCommands i
return (Neg r : cmds, filenames)
, descP "Unknown action" $
do P.lexeme lexer $ char '?'
(r,v) <- P.parens lexer $
(,) <$> parseReg <* P.comma lexer <*> parseTVal
h <- P.parens lexer parsePrettyHex
(cmds, filenames) <- parseCommands i
return (Unknown h r v : cmds, filenames)
, descP "Play action" $
do (withA, withStar) <- P.lexeme lexer $ do
char 'P'
withA <- optionBool (char 'A')
withStar <- optionBool (char '*')
return (withA, withStar)
fns <- P.parens lexer $ P.commaSep1 lexer parseAudioRef
playAllUnknownArgument <- option (Const 0) $ P.parens lexer $ parseTVal
let n = length fns
let c = case (withA, withStar, fns) of
(False, False, [fn]) -> Play (fromIntegral i)
(False, False, _) -> Random (fromIntegral (i + n - 1)) (fromIntegral i)
(True, False, _) -> PlayAll (fromIntegral (i + n - 1)) (fromIntegral i)
(False, True, _) -> RandomVariant playAllUnknownArgument
(True, True, _) -> PlayAllVariant playAllUnknownArgument
(cmds, filenames) <- parseCommands (i+n)
return (c : cmds, fns ++ filenames)
, descP "Cancel" $
do P.lexeme lexer $ char 'C'
(cmds, filenames) <- parseCommands i
return (Cancel : cmds, filenames)
, descP "Jump action" $
do P.lexeme lexer $ char 'J'
cmd <- P.parens lexer $ choice
[ Jump <$> parseTVal
, NamedJump <$> parseScriptRef
]
(cmds, filenames) <- parseCommands i
return (cmd : cmds, filenames)
, descP "Timer action" $
do P.lexeme lexer $ char 'T'
(r,v) <- P.parens lexer $
(,) <$> parseReg <* P.comma lexer <*> parseTVal
(cmds, filenames) <- parseCommands i
return (Timer r v : cmds, filenames)
, descP "Start Game" $
do P.lexeme lexer $ char 'G'
n <- P.parens lexer $ parseWord16
(cmds, filenames) <- parseCommands i
return (Game n : cmds, filenames)
]
optionBool :: Stream s m t => ParsecT s u m a -> ParsecT s u m Bool
optionBool p = option False (const True <$> p)
encodeFileCommented :: ToJSON a => FilePath -> String -> a -> IO ()
encodeFileCommented fn c v = do
SBC.writeFile fn $ SBC.pack c <> encode v
readFile' :: String -> IO B.ByteString
readFile' filename =
B.fromStrict <$> SB.readFile filename
readTipToiYaml :: FilePath -> IO (TipToiYAML, CodeMap)
readTipToiYaml inf = do
content <- SBC.readFile inf
let etty = decodeEither' content
tty <- case etty of
Left e -> print e >> exitFailure
Right tty -> return tty
ex <- doesFileExist infCodes
codeMap <-
if ex
then do
ettcy <- decodeFileEither infCodes
case ettcy of
Left e -> print e >> exitFailure
Right ttcy -> return (ttcScriptCodes ttcy)
else return M.empty
return (tty, codeMap)
where
infCodes = codeFileName inf
writeTipToiYaml :: FilePath -> TipToiYAML -> IO ()
writeTipToiYaml out tty = encodeFile out tty
writeTipToiCodeYaml :: FilePath -> TipToiYAML -> CodeMap -> CodeMap -> IO ()
writeTipToiCodeYaml inf tty oldMap totalMap = do
let newCodeMap =
M.filterWithKey (\s v -> readMaybe s /= Just v) totalMap
M.\\ fromMaybe M.empty (ttyScriptCodes tty)
if M.null newCodeMap
then do
ex <- doesFileExist infCodes
when ex $ removeFile infCodes
else when (newCodeMap /= oldMap) $ encodeFileCommented infCodes codesComment (TipToiCodesYAML { ttcScriptCodes = newCodeMap })
where
infCodes = codeFileName inf
codesComment :: String
codesComment = unlines $ map ("# " ++)
[ "This file contains a mapping from script names to oid codes."
, "This way the existing scripts are always assigned to the the"
, "same codes, even if you add further scripts."
, ""
, "You can copy the contents of this file into the main .yaml file,"
, "if you want to have both together."
, ""
, "If you delete this file, the next run of \"ttool assemble\" might"
, "use different codes for your scripts, and you might have to re-"
, "create the images for your product."
]
codeFileName :: FilePath -> FilePath
codeFileName fn = base <.> "codes" <.> ext
where
base = dropExtension fn
ext = takeExtension fn
-- | Unused
debugGame :: ProductID -> IO TipToiYAML
debugGame productID = do
return $ TipToiYAML
{ ttyProduct_Id = productID
, ttyMedia_Path = Just "Audio/digits/%s"
, ttyInit = Nothing
, ttyScriptCodes = Nothing
, ttySpeak = Nothing
, ttyComment = Nothing
, ttyWelcome = Just $ "blob"
, ttyScripts = M.fromList [
(show oid, [line])
| oid <- [1..15000]
, let chars = [oid `div` 10^p `mod` 10| p <-[4,3,2,1,0]]
, let line = ppLine t $ Line 0 [] [Play n | n <- [0..5]] ([10] ++ chars)
]
, ttyLanguage = Nothing
, ttyGames = Nothing
}
where
t= M.fromList $
[ (n, "english_" ++ show n) | n <- [0..9]] ++ [(10, "blob")]
| colinba/tip-toi-reveng | src/TipToiYaml.hs | mit | 44,095 | 0 | 29 | 14,025 | 10,153 | 5,375 | 4,778 | 903 | 6 |
{-|
- This module adds servant combinators.
-}
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE RecordWildCards       #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeOperators #-}
module Apollo.Types.Servant
( -- * Core API
StrictQueryParam
) where
import Apollo.Reflection
import Control.Monad ( join )
import Data.Monoid ( (<>) )
import Data.String.Conversions ( cs )
import GHC.TypeLits
import Servant.Server.Internal
import Web.HttpApiData ( FromHttpApiData, parseQueryParamMaybe, ToHttpApiData )
import Network.HTTP.Types.URI ( parseQueryText )
import Network.Wai ( rawQueryString )
import Servant.API ( (:>), QueryParam )
import Servant.Utils.Links ( IsElem, IsElem', HasLink(..) )
------------------------------------------------------------------------
--- Core API ---
------------------------------------------------------------------------
-- | A strict query parameter is one whose presence is required and that must
-- correctly parse. This is essentially equivalent to a capture, but as a query
-- string.
data StrictQueryParam (sym :: Symbol) (ty :: *)
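-- A minimal usage sketch (hypothetical API; @Get@, @JSON@, @Item@ and @Handler@ are
-- assumptions and not part of this module):
--
-- > type ItemApi = "items" :> StrictQueryParam "limit" Int :> Get '[JSON] [Item]
--
-- The handler for such an endpoint receives the already-parsed value, i.e. it has type
-- @Int -> Handler [Item]@; requests with a missing or unparseable @limit@ are rejected
-- with a 400 (see the 'HasServer' instance below).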
------------------------------------------------------------------------
--- Links integration ---
------------------------------------------------------------------------
-- | A required query param in an API will only match endpoints also containing
-- a required query param. Furthermore, the names and decoded types must match.
type instance IsElem'
(StrictQueryParam sym ty :> e)
(StrictQueryParam sym ty :> api)
= IsElem e api
instance
( KnownSymbol sym
, ToHttpApiData a
, HasLink sub
)
=> HasLink (StrictQueryParam sym a :> sub) where
type MkLink (StrictQueryParam sym a :> sub) x = a -> MkLink sub x
toLink f _ l x = toLink f qp l (Just x) where
qp = Proxy :: Proxy (QueryParam sym a :> sub)
------------------------------------------------------------------------
--- Implementation details ---
------------------------------------------------------------------------
instance
( KnownSymbol sym
, FromHttpApiData a
, HasServer api context
)
=> HasServer (StrictQueryParam sym a :> api) context where
type ServerT (StrictQueryParam sym a :> api) m = a -> ServerT api m
route Proxy context subserver =
let queryText r = parseQueryText $ rawQueryString r
param r = let p = lookup paramname (queryText r) in
case join $ parseQueryParamMaybe <$> join p of
Nothing -> delayedFailFatal err400
{ errBody = cs $
"parameter " <> paramname <> " is absent or has no value"
}
Just e -> case e of
Left err -> delayedFailFatal err400
{ errBody = cs $
"failed to parse parameter " <> paramname <> ": " <> err
}
Right x -> pure x
delayed = addParameterCheck subserver . withRequest $ param
in route (Proxy :: Proxy api) context delayed
where
paramname = cs $ symbolVal (Proxy :: Proxy sym)
| tsani/apollo | src/Apollo/Types/Servant.hs | mit | 3,357 | 0 | 24 | 812 | 647 | 358 | 289 | -1 | -1 |
{-# LANGUAGE TupleSections #-}
-- | Environments implemented as association lists.
module ListEnv where
type Env k v = [(k,v)]
-- | Query
lookup :: (Eq k) => k -> Env k v -> Maybe v
lookup = Prelude.lookup
lookupSafe :: (Eq k, Show k) => k -> Env k v -> v
lookupSafe k = maybe (error $ "internal error: unbound key " ++ show k) id .
ListEnv.lookup k
-- | Construction
empty :: Env k v
empty = []
singleton :: k -> v -> Env k v
singleton k v = [(k,v)]
insert :: k -> v -> Env k v -> Env k v
insert x v rho = (x,v) : rho
update :: Env k v -> k -> v -> Env k v
update rho x v = (x,v) : rho
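-- For example, later insertions shadow earlier bindings:
-- ListEnv.lookup 'k' (insert 'k' 2 (singleton 'k' 1)) == Just 2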
-- | Left-biased union
union :: (Ord k) => Env k v -> Env k v -> Env k v
union = (++)
map :: (v -> w) -> Env k v -> Env k w
map f = Prelude.map (\ (k,v) -> (k, f v))
mapM :: (Monad m) => (v -> m w) -> Env k v -> m (Env k w)
mapM f = Prelude.mapM (\ (k,v) -> f v >>= return . (k,))
| andreasabel/helf | src/ListEnv.hs | mit | 881 | 0 | 10 | 228 | 484 | 257 | 227 | 22 | 1 |
{-|
Module: Flaw.Visual.Texture.Mip
Description: Mipmap generation support.
License: MIT
-}
{-# LANGUAGE Strict #-}
module Flaw.Visual.Texture.Mip
( generateMips
) where
import Control.Monad
import Data.Bits
import qualified Data.ByteArray as BA
import qualified Data.ByteString.Unsafe as B
import Data.Word
import Foreign.Ptr
import Foreign.Storable
import Flaw.Graphics.Texture
import Flaw.Math
import Flaw.Visual.Texture.Internal
-- | Generate the specified number (0 means full chain) of mipmap levels for a texture.
-- All existing mipmap levels except the top one are removed.
generateMips :: Int -> PackedTexture -> PackedTexture
generateMips mipsRequested PackedTexture
{ packedTextureBytes = bytes
, packedTextureInfo = textureInfo@TextureInfo
{ textureWidth = width
, textureHeight = height
, textureDepth = depth
, textureFormat = UncompressedTextureFormat
{ textureFormatComponents = pixelComponents
, textureFormatValueType = pixelValueType
, textureFormatPixelSize = pixelSize
}
, textureCount = count
}
} = PackedTexture
{ packedTextureBytes = newBytes
, packedTextureInfo = newTextureInfo
} where
-- mips to generate
mips = if mipsRequested > 0 then mipsRequested else let
maxMip mip = if (width `shiftR` mip) > 1 || (height `shiftR` mip) > 1 || (depth `shiftR` mip) > 1 then maxMip (mip + 1) else mip + 1
in maxMip 0
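    -- For example, a 256x256 texture with depth 1 and mipsRequested = 0 gives mips = 9,
    -- i.e. the full chain 256, 128, 64, 32, 16, 8, 4, 2, 1.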
-- new texture info
newTextureInfo = textureInfo
{ textureMips = mips
}
-- calculate metrics
TextureMetrics
{ textureImageSize = imageSize
} = calcTextureMetrics textureInfo
TextureMetrics
{ textureImageSize = newImageSize
, textureMipsMetrics = mipsMetrics@(TextureMipMetrics
{ textureMipWidth = topMipWidth
, textureMipHeight = topMipHeight
, textureMipDepth = topMipDepth
, textureMipLinePitch = topMipLinePitch
, textureMipSlicePitch = topMipSlicePitch
} : _)
} = calcTextureMetrics newTextureInfo
pixelPitch = pixelSizeByteSize pixelSize
-- number of images
ncount = if count > 0 then count else 1
-- downscale function
downscale :: (Storable a, Num b) => (a -> b) -> (Int -> b -> a) -> TextureMipMetrics -> Ptr a -> Ptr a -> IO ()
downscale fromSource toSource TextureMipMetrics
{ textureMipWidth = mipWidth
, textureMipHeight = mipHeight
, textureMipDepth = mipDepth
, textureMipLinePitch = mipLinePitch
, textureMipSlicePitch = mipSlicePitch
} sourcePtr destPtr = let
zLoop z = if z >= mipDepth then return () else let
z1 = (z * topMipDepth) `quot` mipDepth
z2 = ((z + 1) * topMipDepth) `quot` mipDepth
yLoop y = if y >= mipHeight then return () else let
y1 = (y * topMipHeight) `quot` mipHeight
y2 = ((y + 1) * topMipHeight) `quot` mipHeight
xLoop x = if x >= mipWidth then return () else let
x1 = (x * topMipWidth) `quot` mipWidth
x2 = ((x + 1) * topMipWidth) `quot` mipWidth
szLoop sz tz = if sz >= z2 then return tz else let
syLoop sy ty = if sy >= y2 then return ty else let
sxLoop sx tx = if sx >= x2 then return tx else do
s <- peekByteOff sourcePtr $ sz * topMipSlicePitch + sy * topMipLinePitch + sx * pixelPitch
sxLoop (sx + 1) (tx + fromSource s)
in syLoop (sy + 1) =<< sxLoop x1 ty
in szLoop (sz + 1) =<< syLoop y1 tz
in do
s <- szLoop z1 0
pokeByteOff destPtr (z * mipSlicePitch + y * mipLinePitch + x * pixelPitch) $ toSource ((x2 - x1) * (y2 - y1) * (z2 - z1)) s
xLoop $ x + 1
in do
xLoop 0
yLoop $ y + 1
in do
yLoop 0
zLoop $ z + 1
in zLoop 0
-- poly-format mip generation function
genMip mipMetrics sourcePtr destPtr = case (pixelComponents, pixelValueType, pixelSize) of
(PixelR, PixelUint, Pixel8bit) -> downscale (fromIntegral :: Word8 -> Word32) (\n b -> fromIntegral (b `quot` fromIntegral n)) mipMetrics (castPtr sourcePtr) (castPtr destPtr)
(PixelRG, PixelUint, Pixel16bit) -> downscale (vecfmap fromIntegral :: Word8_2 -> Word32_2) (\n -> vecfmap (fromIntegral . (`quot` fromIntegral n))) mipMetrics (castPtr sourcePtr) (castPtr destPtr)
(PixelRGB, PixelUint, Pixel24bit) -> downscale (vecfmap fromIntegral :: Word8_3 -> Word32_3) (\n -> vecfmap (fromIntegral . (`quot` fromIntegral n))) mipMetrics (castPtr sourcePtr) (castPtr destPtr)
(PixelRGBA, PixelUint, Pixel32bit) -> downscale (vecfmap fromIntegral :: Word8_4 -> Word32_4) (\n -> vecfmap (fromIntegral . (`quot` fromIntegral n))) mipMetrics (castPtr sourcePtr) (castPtr destPtr)
_ -> error $ "unsupported texture format for mipmap generation: " ++ show (pixelComponents, pixelValueType, pixelSize)
-- bytes
newBytes = BA.allocAndFreeze (newImageSize * ncount) $ \newBytesPtr -> B.unsafeUseAsCString bytes $ \bytesPtr ->
-- loop for textures in the array
forM_ [0..(ncount - 1)] $ \c -> do
let imageSourcePtr = bytesPtr `plusPtr` (c * imageSize)
let imageDestPtr = newBytesPtr `plusPtr` (c * newImageSize)
-- loop for mips
forM_ mipsMetrics $ \mipMetrics@TextureMipMetrics
{ textureMipOffset = mipOffset
} -> genMip mipMetrics imageSourcePtr (imageDestPtr `plusPtr` mipOffset)
generateMips _ _ = error "compressed textures are not supported for mipmap generation"
| quyse/flaw | flaw-visual/Flaw/Visual/Texture/Mip.hs | mit | 5,414 | 0 | 46 | 1,285 | 1,607 | 881 | 726 | 97 | 14 |
{-# LANGUAGE ImpredicativeTypes #-}
{-# LANGUAGE StandaloneDeriving #-}
module Examples.TranslationExample where
import Examples.SampleContracts
import HOAS
import PrettyPrinting
import qualified Contract as C
import qualified RebindableEDSL as R
import EDSL
import ContractTranslation
import Examples.PayoffToHaskell
import qualified Examples.PayoffToFuthark as F
import Data.Maybe
import qualified Data.Map as Map
deriving instance Show ILUnOp
deriving instance Show ILBinOp
deriving instance Show ILExpr
deriving instance Show TExpr
deriving instance Show ILTExpr
deriving instance Show ILTExprZ
deriving instance Show ILVal
deriving instance Show C.Val
deriving instance Eq ILVal
deriving instance Eq TExpr
deriving instance Eq ILExpr
deriving instance Eq ILTExpr
deriving instance Eq ILTExprZ
deriving instance Eq ObsLabel
deriving instance Eq ILUnOp
deriving instance Eq ILBinOp
tZero = (ILTexpr (C.Tnum 0))
tZeroZ = (ILTexprZ tZero)
empty_tenv :: String -> Int
empty_tenv = (\_ -> undefined)
-- Use fromHoas to convert from HOAS representation to the plain AST
translateEuropean = fromContr (fromHoas european) tZero
translateEuropean' = fromContr (fromHoas european') tZero
translateWorstOff = fromContr (fromHoas worstOff) tZero
tranlateTemplate = fromContr (fromHoas templateEx) tZero
-- printing the AST
printEuroOption = putStrLn (show (fromHoas european))
advSimple = fst (advance simple (mkExtEnvP [] []) empty_tenv)
transC c = fromMaybe (error "No translation") (fromContr c tZero)
trSimple = transC (fromHoas simple)
trAdvSimple = transC (fromHoas advSimple)
advTwoCF = fst (advance twoCF (mkExtEnvP [] []) empty_tenv)
trTwoCF = transC (fromHoas twoCF)
trAdvTwoCF = transC $ fromHoas advTwoCF
trTemplate = transC (fromHoas templateEx)
trTemplateCut = cutPayoff (transC (fromHoas templateEx))
eval_empty exp = iLsem exp (\l t -> ILRVal 0) empty_tenv 0 0 (\t -> 1) X Y
eval k (exp, extEnv) = iLsem exp extEnv empty_tenv 0 k (\t -> 1) X Y
sampleExt = mkExtEnvP [(Stock "DJ_Eurostoxx_50", 365, 4100)] []
sampleILExt :: C.ExtEnv' ILVal
sampleILExt = \l t -> if (t == 365) then (ILRVal 4100) else (ILRVal 0)
---------------------------------------------------------------
-- Testing that two paths (reduce, compile, then evaluate; and
-- compile, apply cutPayoff, then evaluate) commute
---------------------------------------------------------------
adv_n :: ExtEnvP -> Int -> Contr -> Contr
adv_n ext 0 c = c
adv_n ext k c = adv_n ext (k-1) (advance1 c ext)
adv_both :: Int -> (Contr, ExtEnvP) -> (Contr, ExtEnvP)
adv_both 0 (c, ext) = (c, ext)
adv_both k (c, ext) = adv_both (k-1) (advance1 c ext, C.adv_ext 1 ext)
fromJustExtEnv ext = \l t -> fromJust (ext l t)
adv_n' :: Int -> (Contr, ExtEnvP) -> (C.Contr, C.ExtEnv' ILVal)
adv_n' k (c, ext) = let (c', ext') = adv_both k (c, ext)
in (fromHoas c', fromExtEnv (fromJustExtEnv ext'))
advance1 :: Contr -> ExtEnvP -> Contr
advance1 c env = let (c', _) = fromJust (C.redfun (fromHoas c) [] env empty_tenv)
in toHoas c'
-- apply product of morphisms
appProd (f,g) (x,y) = (f x, g y)
path1 :: Int -> (Contr, ExtEnvP) -> Maybe ILVal
path1 k = (eval 0) . (appProd (transC, id)) . (adv_n' k)
path2 :: Int -> (Contr, ExtEnvP) -> Maybe ILVal
path2 k = (eval k) . (appProd (cutPayoff . transC . fromHoas, fromExtEnv . fromJustExtEnv))
commute k = path1 k (composite, sampleExt) == path2 k (composite, sampleExt)
-- testing up to the contract horizon
commute_horizon = and $ map commute [0..(horizon composite empty_tenv)]
-----------------------------------------------------------------
-- Some contract equivalences give equal payoff expressions
-- (or up to template expression evaluation)
-----------------------------------------------------------------
c1_eq1 = C.Scale (C.OpE (C.RLit 2) []) (C.Scale (C.OpE (C.RLit 3) []) (C.Transfer X Y EUR))
c2_eq1 = C.Scale (C.OpE C.Mult [C.OpE (C.RLit 2) [], C.OpE (C.RLit 3) []]) (C.Transfer X Y EUR)
c1_eq2 = C.Translate (C.Tnum 2) $ C.Translate (C.Tnum 3) $ C.Transfer X Y EUR
c2_eq2 = C.Translate (C.Tnum 5) $ C.Transfer X Y EUR
eq2 = transC c1_eq2 == transC c2_eq2
c1_eq3 = C.Translate (C.Tnum 5) $ C.Both (C.Transfer X Y EUR) (C.Transfer X Y EUR)
c2_eq3 = C.Both (C.Translate (C.Tnum 5) $ C.Transfer X Y EUR) (C.Translate (C.Tnum 5) $ C.Transfer X Y EUR)
eq3 = transC c1_eq3 == transC c2_eq3
nonObviouslyCausal = C.Scale (C.Obs (LabR (FX EUR DKK)) 1) (C.Translate (C.Tnum 1) $ C.Transfer X Y EUR)
obviouslyCausal = C.Translate (C.Tnum 1) $ C.Scale (C.Obs (LabR (FX EUR DKK)) 0) (C.Transfer X Y EUR)
eq_causal = transC nonObviouslyCausal == transC obviouslyCausal
| annenkov/contracts | Coq/Extraction/contracts-haskell/src/Examples/TranslationExample.hs | mit | 4,726 | 0 | 13 | 839 | 1,773 | 926 | 847 | 84 | 2 |
{-# OPTIONS_GHC -fno-warn-orphans #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
-- some js combinators and helpers
module Lucid.Js
( module Lucid.Js
, module Language.Javascript.JMacro
, module Language.Javascript.JMacro.D3Expr
) where
import Blaze.ByteString.Builder
import Control.Applicative
import Data.ByteString.Lazy as Lazy
import Data.Monoid
import Data.String
import GHC.Char
import Language.JavaScript.Parser
import Language.Javascript.JMacro
import Language.Javascript.JMacro.D3Expr
($.) :: JExpr -> String -> JExpr
x $. y = SelExpr x (StrI y)
($$) :: (ToJExpr a, ToJExpr b) => a -> b -> JExpr
x $$ y = ApplExpr (toJExpr x) (toJExprList y)
($$$) :: (ToJExpr a, ToJExpr b) => a -> b -> JStat
x $$$ y = ApplStat (toJExpr x) (toJExprList y)
instance IsString JExpr where
fromString x = r x
r :: String -> JExpr
r = ValExpr . JVar . StrI
lit :: String -> JExpr
lit s = (ValExpr . JStr) s
var :: String -> JStat
var sym = DeclStat (StrI sym) Nothing
declObj :: String -> JStat
declObj s =
var s
<> (r s =: r "{}")
-- | Wraps statements in a function and returns the parameter
wrapF :: String -> JStat -> JExpr
wrapF c stats = ValExpr (JFunc [StrI c] (stats <> ReturnStat (r c)))
wrapF2 :: String -> String -> JStat -> JExpr
wrapF2 c d stats = ValExpr (JFunc [StrI c,StrI d] (stats <> ReturnStat (r c)))
wrapF0 :: String -> JStat -> JExpr
wrapF0 c stats = ValExpr $ JFunc [StrI c] stats
render :: JStat -> String
render = Prelude.map (chr . fromIntegral) . Lazy.unpack . toLazyByteString . renderJS . readJs . show . renderJs
jsToFile :: FilePath -> JStat -> IO ()
jsToFile file js = Prelude.writeFile file (render js)
-- chain
chain :: ToJExpr a => String -> [a] -> JExpr -> JExpr
chain x args y = ApplExpr (SelExpr y (StrI x)) (fmap toJExpr args)
ch :: JExpr -> JExpr -> JExpr
ch (ApplExpr (ValExpr (JVar (StrI x))) args) = chain x args
-- ch (ApplExpr (SelExpr y (StrI x)) args) = (chain x args) y
ch e = error (show e)
-- fixme: sort through these
fun0 :: JStat -> JExpr
fun0 es = ValExpr (JFunc [] es)
fun :: [String] -> JStat -> JExpr
fun args es = ValExpr (JFunc (StrI <$> args) es)
switch :: JExpr -> [(JExpr,JStat)] -> JStat -> JStat
switch x cases def = SwitchStat (toJExpr x) ((\(b,s) -> (toJExpr b,s)) <$> cases) def
onLoad :: JStat -> JStat
onLoad es = r "window.onload" =: fun0 es
jq :: String -> JExpr
jq s = r "$" $$ lit s
infixl 2 =:
(=:) :: ToJExpr a => JExpr -> a -> JStat
x =: y = AssignStat x (toJExpr y)
null :: JExpr
null = jsv "null" -- fixme
if' :: (ToJExpr a, ToStat b) => a -> b -> JStat
if' x y = IfStat (toJExpr x) (toStat y) (BlockStat [])
ifElse :: (ToJExpr a, ToStat b, ToStat c) => a -> b -> c -> JStat
ifElse x y z = IfStat (toJExpr x) (toStat y) (toStat z)
while :: ToJExpr a => a -> JStat -> JStat
while x y = WhileStat False (toJExpr x) y
return :: ToJExpr a => a -> JStat
return x = ReturnStat (toJExpr x)
toJExprList :: ToJExpr a => a -> [JExpr]
toJExprList x = case toJExpr x of
(ValExpr (JList l)) -> l
x' -> [x']
jstr :: String -> JExpr
jstr = ValExpr . JStr
-- chain example
d3' :: JExpr
d3' = [jmacroE| d3 |]
selBody :: JExpr -> JExpr
selBody = ch [jmacroE| select("body")|]
styleBlack :: JExpr -> JExpr
styleBlack = ch [jmacroE| style("color", "black")|]
styleBgWhite :: JExpr -> JExpr
styleBgWhite = ch [jmacroE| style("background-color", "white")|]
ex1 :: JExpr
ex1 = styleBlack . styleBgWhite . selBody $ d3'
| tonyday567/hdcharts | src/Lucid/Js.hs | mit | 3,501 | 0 | 13 | 750 | 1,433 | 757 | 676 | 88 | 2 |
module Aws.DynamoDb.Commands.Conduit
( module Aws.DynamoDb.Commands
, module Aws.DynamoDb.Commands.Query.Conduit
) where
import Aws.DynamoDb.Commands
import Aws.DynamoDb.Commands.Query.Conduit
| srijs/haskell-aws-dynamodb-conduit | src/Aws/DynamoDb/Commands/Conduit.hs | mit | 200 | 0 | 5 | 21 | 40 | 29 | 11 | 5 | 0 |
module Complex (Complex((:+)), ii) where
-- This module is similar to Data.Complex, but with lighter constraints
infix 6 :+
data Complex a = a :+ a
ii :: Num a => Complex a
ii = 0 :+ 1
instance Num a => Num (Complex a) where
(a :+ b) + (c :+ d) = (a + c) :+ (b + d)
(a :+ b) - (c :+ d) = (a - c) :+ (b - d)
(a :+ b) * (c :+ d) = (a * c - b * d) :+ (b * c + a * d)
fromInteger n = fromInteger n :+ 0
abs = error "abs not implemented because it would need a Floating instance for the type variable"
signum = error "signum makes no sense for complex numbers"
instance Fractional a => Fractional (Complex a) where
recip (a :+ b) =
let
d = a * a + b * b
rp = a / d
ip = -1 * b / d
in
rp :+ ip
fromRational r = fromRational r :+ 0
instance Floating a => Floating (Complex a) where
sqrt z =
let
r = aabs z
rc = r :+ 0
aabs (a :+ b) = sqrt (a * a + b * b)
in
(sqrt r :+ 0) * (z + rc) / (aabs (z + rc) :+ 0)
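  -- Worked example: sqrt (0 :+ 4) has r = 4 and z + rc = 4 :+ 4 (magnitude sqrt 32),
  -- giving (2 :+ 0) * (4 :+ 4) / (sqrt 32 :+ 0), roughly 1.414 :+ 1.414;
  -- indeed (sqrt 2 :+ sqrt 2) * (sqrt 2 :+ sqrt 2) == 0 :+ 4 (up to rounding).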
pi = error "pi"
log = error "log not implemented"
exp = error "exp not implemented"
tan = error "tan not implemented"
atan = error "atan not implemented"
atanh = error "atanh not implemented"
sin = error "sin not implemented"
asin = error "asin not implemented"
sinh = error "sinh not implemented"
asinh = error "asinh not implemented"
cos = error "cos not implemented"
acos = error "acos not implemented"
cosh = error "cosh not implemented"
acosh = error "acosh not implemented"
instance (Show a, Eq a, Num a) => Show (Complex a) where
show (a :+ b)
| b == 0 = show a
| a == 0 = show b ++ "i"
| otherwise = show a ++ "+" ++ show b ++ "i"
| miniBill/entangle | src/lib/Complex.hs | mit | 1,824 | 0 | 14 | 641 | 707 | 361 | 346 | 48 | 1 |
module TestWriter where
import Control.Monad.Writer
logNumber :: Int -> Writer [String] Int
logNumber x = writer (x, ["Got number: " ++ show x])
multWithLog :: Writer [String] Int
multWithLog = do
a <- logNumber 3
b <- logNumber 5
return (a*b)
gcdReverse :: Int -> Int -> Writer [String] Int
gcdReverse a b
| b == 0 = do
tell ["Finished with " ++ show a]
return a
| otherwise = do
result <- gcdReverse b (a `mod` b)
tell [show a ++ " mod " ++ show b ++ " = " ++ show (a `mod` b)]
return result
newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }
toDiffList :: [a] -> DiffList a
toDiffList xs = DiffList (xs++)
fromDiffList :: DiffList a -> [a]
fromDiffList (DiffList f) = f []
instance Monoid (DiffList a) where
mempty = DiffList (\xs -> [] ++ xs)
(DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs))
| rockdragon/julia-programming | code/haskell/TestWriter.hs | mit | 874 | 0 | 14 | 213 | 416 | 212 | 204 | 26 | 1 |
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE OverloadedStrings #-}
module Xml.TrackSpec (spec) where
import Data.Text (Text)
import Data.Traversable (traverse)
import Lastfm
import Lastfm.Track
import Test.Hspec
import Text.Xml.Lens
import SpecHelper
spec :: Spec
spec = do
it "addTags" $
shouldHaveXml_ . privately $
addTags <*> artist "Jefferson Airplane" <*> track "White rabbit" <*> tags ["60s", "awesome"]
it "ban" $
shouldHaveXml_ . privately $
ban <*> artist "Eminem" <*> track "Kim"
it "love" $
shouldHaveXml_ . privately $
love <*> artist "Gojira" <*> track "Ocean"
it "removeTag" $
shouldHaveXml_ . privately $
removeTag <*> artist "Jefferson Airplane" <*> track "White rabbit" <*> tag "awesome"
it "share" $
shouldHaveXml_ . privately $
share <*> artist "Led Zeppelin" <*> track "When the Levee Breaks" <*> recipient "liblastfm" <* message "Just listen!"
it "unban" $
shouldHaveXml_ . privately $
unban <*> artist "Eminem" <*> track "Kim"
it "unlove" $
shouldHaveXml_ . privately $
unlove <*> artist "Gojira" <*> track "Ocean"
it "scrobble" $
privately (scrobble (pure (item <*> artist "Gojira" <*> track "Ocean" <*> timestamp 1300000000)))
`shouldHaveXml`
root.node "scrobbles".node "scrobble".node "track".text
it "updateNowPlaying" $
privately (updateNowPlaying <*> artist "Gojira" <*> track "Ocean")
`shouldHaveXml`
root.node "nowplaying".node "track".text
it "getBuylinks" $
publicly (getBuyLinks <*> country "United Kingdom" <*> artist "Pink Floyd" <*> track "Brain Damage")
`shouldHaveXml`
root.node "affiliations".node "downloads".node "affiliation".node "supplierName".text
it "getCorrection" $
publicly (getCorrection <*> artist "Pink Ployd" <*> track "Brain Damage")
`shouldHaveXml`
root.node "corrections".node "correction".node "track".node "artist".node "name".text
it "getFingerprintMetadata" $
publicly (getFingerprintMetadata <*> fingerprint 1234)
`shouldHaveXml`
root.node "tracks".node "track".node "name".text
describe "getInfo*" $ do
let xmlQuery :: Fold Document Text
xmlQuery = root.node "track".node "userplaycount".text
it "getInfo" $
publicly (getInfo <*> artist "Pink Floyd" <*> track "Comfortably Numb" <* username "aswalrus")
`shouldHaveXml`
xmlQuery
it "getInfo_mbid" $
publicly (getInfo <*> mbid "52d7c9ff-6ae4-48a6-acec-4c1a486f8c92" <* username "aswalrus")
`shouldHaveXml`
xmlQuery
describe "getShouts*" $ do
let xmlQuery :: Fold Document Text
xmlQuery = root.node "shouts".node "shout".node "author".text
it "getShouts" $
publicly (getShouts <*> artist "Pink Floyd" <*> track "Comfortably Numb" <* limit 7)
`shouldHaveXml`
xmlQuery
it "getShouts_mbid" $
publicly (getShouts <*> mbid "52d7c9ff-6ae4-48a6-acec-4c1a486f8c92" <* limit 7)
`shouldHaveXml`
xmlQuery
describe "getSimilar*" $ do
let xmlQuery :: Fold Document Text
xmlQuery = root.node "similartracks".node "track".node "name".text
it "getSimilar" $
publicly (getSimilar <*> artist "Pink Floyd" <*> track "Comfortably Numb" <* limit 4)
`shouldHaveXml`
xmlQuery
it "getSimilar_mbid" $
publicly (getSimilar <*> mbid "52d7c9ff-6ae4-48a6-acec-4c1a486f8c92" <* limit 4)
`shouldHaveXml`
xmlQuery
describe "getTags*" $ do
let xmlQuery :: Fold Document Text
xmlQuery = root.node "tags".attr "track".traverse
it "getTags" $
publicly (getTags <*> artist "Jefferson Airplane" <*> track "White Rabbit" <* user "liblastfm")
`shouldHaveXml`
xmlQuery
it "getTags_mbid" $
publicly (getTags <*> mbid "001b3337-faf4-421a-a11f-45e0b60a7703" <* user "liblastfm")
`shouldHaveXml`
xmlQuery
describe "getTopFans*" $ do
let xmlQuery :: Fold Document Text
xmlQuery = root.node "topfans".node "user".node "name".text
it "getTopFans" $
publicly (getTopFans <*> artist "Pink Floyd" <*> track "Comfortably Numb")
`shouldHaveXml`
xmlQuery
it "getTopFans_mbid" $
publicly (getTopFans <*> mbid "52d7c9ff-6ae4-48a6-acec-4c1a486f8c92")
`shouldHaveXml`
xmlQuery
describe "getTopTags*" $ do
let xmlQuery :: Fold Document Text
xmlQuery = root.node "toptags".node "tag".node "name".text
it "getTopTags" $
publicly (getTopTags <*> artist "Pink Floyd" <*> track "Comfortably Numb")
`shouldHaveXml`
xmlQuery
it "getTopTags_mbid" $
publicly (getTopTags <*> mbid "52d7c9ff-6ae4-48a6-acec-4c1a486f8c92")
`shouldHaveXml`
xmlQuery
it "search" $
publicly (search <*> track "Believe" <* limit 12)
`shouldHaveXml`
root.node "results".node "trackmatches".node "track".node "name".text
| supki/liblastfm | test/api/Xml/TrackSpec.hs | mit | 4,865 | 0 | 18 | 1,064 | 1,362 | 640 | 722 | -1 | -1 |
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses #-}
-- | Please see the README at:
--
-- https://github.com/jwiegley/fuzzcheck/blob/master/README.md
module Test.FuzzCheck
( Fuzz(..)
, FuzzException(..)
, arg
, gen
, rand
, branch
, jumble
, (?>)
, fuzzCheck'
, fuzzCheck
) where
import Control.Applicative
import Control.Exception.Lifted
import Control.Monad
import Control.Monad.IO.Class
import Control.Monad.Trans.Control
import Data.Functor.Compose
import Data.Functor.Identity
import Data.Functor.Product
import Data.List
import Data.Typeable
import Prelude hiding (ioError)
import Test.QuickCheck
import Test.QuickCheck.Gen (Gen(..))
import Test.QuickCheck.Random
newtype Fuzz a = Fuzz (Compose Gen (Product (Const [String]) Identity) a)
deriving Functor
instance Applicative Fuzz where
pure x = Fuzz (Compose (pure (Pair (Const ["<arg>"]) (Identity x))))
Fuzz f <*> Fuzz x = Fuzz (f <*> x)
data FuzzException = FuzzException String deriving (Eq, Show, Typeable)
instance Exception FuzzException
wrap :: Show a => a -> Product (Const [String]) Identity a
wrap x = Pair (Const [show x]) (Identity x)
arg :: Show a => a -> Fuzz a
arg = Fuzz . Compose . pure . wrap
gen :: Show a => Gen a -> Fuzz a
gen (MkGen m) = Fuzz (Compose (MkGen g))
where g r n = let x = m r n in wrap x
rand :: (Arbitrary a, Show a) => Fuzz a
rand = gen arbitrary
branch :: (MonadIO m, MonadBaseControl IO m) => [m a] -> m a
branch xs = do
let len = length xs
n <- "pick a random number"
?> return <$> gen (choose (0,len-1) :: Gen Int)
xs !! n
jumble :: (MonadIO m, MonadBaseControl IO m) => [m a] -> m [a]
jumble xs = do
let len = length xs
xs' <- sequence xs
foldM (\acc _x -> do
n <- "pick a random number"
?> return <$> gen (choose (1,len-1) :: Gen Int)
let (y:ys, z:zs) = splitAt n acc
return $ (z:ys) ++ (y:zs)) xs' xs'
infixr 1 ?>
(?>) :: (MonadIO m, MonadBaseControl IO m)
=> String -> Fuzz (m a) -> m a
lbl ?> Fuzz (Compose (MkGen g)) = do
rnd <- liftIO newQCGen
let Pair (Const args) (Identity x) = g rnd 100
runFuzz args x
where
runFuzz :: (MonadIO m, MonadBaseControl IO m)
=> [String] -> m a -> m a
runFuzz args m = m `catch` report
where report e = throwIO (FuzzException $
lbl ++ " " ++ unwords (map show args)
++ ": " ++ show (e :: SomeException))
fuzzCheck' :: (MonadIO m, MonadBaseControl IO m)
=> m a -> Int -> m () -> m ()
fuzzCheck' f runs cleanup = replicateM_ runs f `finally` cleanup
fuzzCheck :: (MonadIO m, MonadBaseControl IO m)
=> m a -> m ()
fuzzCheck f = fuzzCheck' f 100 $
liftIO $ putStrLn "+++ OK, passed 100 tests."
| jwiegley/fuzzcheck | Test/FuzzCheck.hs | mit | 3,083 | 0 | 18 | 848 | 1,163 | 603 | 560 | 82 | 1 |
data OperatingSystem =
GnuPlusLinux
| OpenBSDPlusNevermindJustBSDStill
| Mac
| Windows
deriving (Eq, Show, Enum)
data ProgLang =
Haskell
| Agda
| Idris
| PureScript
deriving (Eq, Show, Enum)
data Programmer =
Programmer { os :: OperatingSystem
, lang :: ProgLang }
deriving (Eq, Show)
nineToFive :: Programmer
nineToFive = Programmer { os = Mac, lang = Haskell }
feelingWizardly :: Programmer
feelingWizardly = Programmer { lang = Agda, os = GnuPlusLinux }
allOperatingSystems :: [OperatingSystem]
allOperatingSystems = [GnuPlusLinux ..]
allLanguages :: [ProgLang]
allLanguages = [Haskell ..]
allProgrammers :: [Programmer]
allProgrammers = [Programmer {os = os', lang = lang'} |
os' <- allOperatingSystems, lang' <- allLanguages] | candu/haskellbook | ch11/programmer.hs | mit | 780 | 0 | 8 | 154 | 223 | 134 | 89 | 27 | 1 |
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE UndecidableInstances #-}
module Betfair.APING.Types.PriceSize
( PriceSize(..)
) where
import Data.Aeson.TH (Options (omitNothingFields),
defaultOptions, deriveJSON)
import Protolude
import Text.PrettyPrint.GenericPretty
data PriceSize = PriceSize
{ price :: Double
, size :: Double
} deriving (Eq, Show, Generic, Pretty, Read, Ord)
$(deriveJSON defaultOptions {omitNothingFields = True} ''PriceSize)
| joe9/betfair-api | src/Betfair/APING/Types/PriceSize.hs | mit | 712 | 0 | 9 | 169 | 124 | 77 | 47 | 18 | 0 |
{-# LANGUAGE OverloadedStrings, DefaultSignatures, MultiParamTypeClasses, FunctionalDependencies, ForeignFunctionInterface #-}
module Web.Authenticate.SQRL.SecureStorage where
import Web.Authenticate.SQRL.Types
import Web.Authenticate.SQRL.Client.Types
import Data.Binary
import Data.Binary.Get
import Data.Binary.Put
import Data.Bits
import Data.Int (Int32)
import Data.IORef
import Data.Maybe (catMaybes, fromJust, listToMaybe)
import Data.Time.Clock
import Control.Applicative
import Crypto.Random
import Crypto.Cipher.AES
import qualified Crypto.Hash.SHA256
import qualified Crypto.Scrypt as Scrypt () --- needed for its c-files
import qualified Crypto.Ed25519.Exceptions as ED25519
import Control.Exception (catch)
--import Control.Monad (when)
import Control.Monad.IO.Class (liftIO, MonadIO)
import System.Directory (getModificationTime, getDirectoryContents, getAppUserDataDirectory, doesFileExist, createDirectoryIfMissing)
import System.IO (IOMode(..), withBinaryFile)
import qualified Data.ByteString.Base64.URL as B64U
import qualified Data.ByteString.Base64.URL.Lazy as LB64U
--import Control.Concurrent (runInBoundThread)
import System.IO.Unsafe (unsafePerformIO)
import Control.DeepSeq
import Data.Text (Text)
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE
import Data.ByteString (ByteString)
import Data.Byteable
import qualified Data.ByteString as BS
import qualified Data.ByteString.Unsafe as BS (unsafeUseAsCStringLen)
import qualified Data.ByteString.Lazy as LBS
import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Ptr (FunPtr, Ptr, castPtr)
import Foreign.C.Types (CInt(..), CSize(..))
getWord64 :: Get Word64
getWord64 = getWord64le
putWord64 :: Word64 -> Put
putWord64 = putWord64le
--getWord32 :: Get Word32
--getWord32 = getWord32le
--putWord32 :: Word32 -> Put
--putWord32 = putWord32le
putWord16 :: Word16 -> Put
putWord16 = putWord16le
getWord16 :: Get Word16
getWord16 = getWord16le
empty256 :: ByteString
empty256 = BS.replicate 32 0
emptySalt :: ByteString
emptySalt = empty256
-- | Create a hash of the bytestring.
sha256 :: ByteString -> ByteString
sha256 = Crypto.Hash.SHA256.hash
-- | A general interface to all blocks containing encrypted data. This allows 'ssDecrypt' to be applied to all blocks, with the exception of 'BlockOther'.
-- |
-- | Note: If all but 'ssDecrypt' are bound and the decrypted type @r@ is an instance of 'Binary', the default 'ssDecrypt' will do the job.
class SecureStorageEncrypted block r | block -> r where
-- | Any IV used for encryption and verification.
ssIV :: block -> ByteString
-- | Initial salt for the generation of the 'ProfilePasskey'.
ssSalt :: block -> ByteString
-- | Complexity of each iteration of Scrypt.
ssLogN :: block -> LogN
-- | Iteration count for Scrypt.
ssIter :: block -> ScryptIterations
-- | Any not encrypted values which sould also be verified.
ssAAD :: block -> ByteString
-- | The encrypted content to decrypt.
ssEncData :: block -> ByteString
-- | The tag to verify encrypted content to.
ssVerTag :: block -> ByteString
-- | Decrypt this block type and return decrypted data.
ssDecrypt :: Text -> block -> Maybe r
default ssDecrypt :: Binary r => Text -> block -> Maybe r
ssDecrypt pass b = ssDecrypt' (ssVerTag b) (ssIter b) (fromIntegral $ ssLogN b) (ssIV b) (ssSalt b) (ssAAD b) (ssEncData b) pass
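-- For instance, with the 'SecureStorageBlock1' instance further below, decrypting a
-- loaded block is just (illustrative sketch; @block1@ is assumed to come from a parsed
-- 'SecureStorage'):
--
-- > case ssDecrypt "my access password" block1 of
-- >   Just d  -> identityMasterKey d
-- >   Nothing -> error "wrong password or tampered block"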
ssDecrypt' :: Binary r => ByteString -> ScryptIterations -> LogN -> ByteString -> ByteString -> ByteString -> ByteString -> Text -> Maybe r
ssDecrypt' ver iter logn iv salt aad dta pass =
if toBytes tag /= ver then Nothing
else case decodeOrFail $ LBS.fromStrict r of
Left err -> error $ "ssDecrypt: tag matched but encrypted data is somehow corrupt (" ++ show err ++ ")."
Right (_, _, r') -> Just r'
where pssky = enScrypt iter logn salt pass
(r, tag) = decryptGCM (initAES pssky) iv aad dta
-- | Test if two 'ByteString's are the same in time @n@ even if the first bytes are different. This thwarts timing attacks, unlike the built-in '(==)'.
secureEq :: ByteString -> ByteString -> Bool
secureEq a b = BS.length a == BS.length b &&
BS.length a == sum (BS.zipWith (\a' b' -> if a' == b' then 1 else 0) a b)
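-- For example (with OverloadedStrings): secureEq "abc" "abc" == True and
-- secureEq "abc" "abd" == False, but the mismatching comparison still touches
-- every byte rather than stopping at the first difference.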
-- | A type representing a master pass key. Destroy as soon as possible.
newtype ProfilePasskey = PassKey { thePassKey :: ByteString }
{-
-- | Iterate the Scrypt function to get a 'ProfilePasskey'.
iterscrypt :: ScryptIterations -> Scrypt.ScryptParams -> ByteString -> Scrypt.Pass -> ProfilePasskey
iterscrypt i p x y = PassKey $ chain (fromIntegral i) xorBS (\a -> Scrypt.getHash $ Scrypt.scrypt p (Scrypt.Salt a) y) x emptySalt
-}
-- | Type 1 - User access password authenticated & encrypted data
--
-- The type 1 'SecureStorage' block supplies the EnScrypt parameters to convert a user-supplied “local access passcode” into a 256-bit symmetric key,
-- and also contains both the plaintext and encrypted data managed by that password.
data SecureStorageBlock1
= SecureStorageBlock1
{ ss1CryptoIV :: ByteString -- ^ init vector for auth/encrypt
, ss1ScryptSalt :: ByteString -- ^ update for password change
, ss1ScryptLogN :: LogN -- ^ memory consumption factor
, ss1ScryptIter :: ScryptIterations -- ^ time consumption factor
, ss1Flags :: ClientFlags -- ^ 16 binary flags
, ss1HintLen :: HintLength -- ^ number of chars in hint
, ss1PwVerifySec :: PWHashingTime -- ^ seconds to run PW EnScrypt
, ss1HintIdle :: Word16 -- ^ idle minutes before wiping PW
, ss1PlainExtra :: ByteString -- ^ extended binary data not in spec as of yet
, ss1Encrypted :: ByteString -- ^ encrypted master key, lock key, unlock key etc (see 'SecureStorageBlock1Decrypted')
, ss1VerifyTag :: ByteString -- ^ signature to validate no external changes has been made
}
deriving (Show)
-- | This is the decrypted data of 'SecureStorageBlock1' and contains decrypted keys and aditional decrypted data.
data SecureStorageBlock1Decrypted
= SecureStorageBlock1Decrypted
{ identityMasterKey :: PrivateMasterKey -- ^ decrypted identity master key
, identityLockKey :: PrivateLockKey -- ^ decrypted identity lock key
, previousUnlockKey :: Maybe PrivateUnlockKey -- ^ optional identity unlock key for previous identity
, ss1DecryptedExtra :: ByteString -- ^ extended encrypted data not in spec as of yet
}
instance NFData SecureStorageBlock1 where
rnf SecureStorageBlock1
{ ss1CryptoIV = iv
, ss1ScryptSalt = sa
, ss1ScryptLogN = ln
, ss1ScryptIter = si
, ss1Flags = cf
, ss1HintLen = hl
, ss1PwVerifySec= ht
, ss1HintIdle = hi
, ss1PlainExtra = px
, ss1Encrypted = ec
, ss1VerifyTag = tg
} = rnf iv `seq` rnf sa `seq` rnf ln `seq` rnf si `seq` rnf cf `seq` rnf hl `seq` rnf ht `seq` rnf hi `seq` rnf px `seq` rnf ec `seq` rnf tg `seq` ()
instance Binary SecureStorageBlock1Decrypted where
put b = let (PrivateMasterKey pmk) = identityMasterKey b
(PrivateLockKey plk) = identityLockKey b
puk = case previousUnlockKey b of { Nothing -> empty256 ; Just (PrivateUnlockKey k) -> k }
in putByteString pmk *> putByteString plk *> putByteString puk *> putByteString (ss1DecryptedExtra b)
get = SecureStorageBlock1Decrypted <$> (PrivateMasterKey <$> getByteString 32) <*> (PrivateLockKey <$> getByteString 32)
<*> ((\t -> if t == empty256 then Nothing else Just (PrivateUnlockKey t)) <$> getByteString 32)
<*> (LBS.toStrict <$> getRemainingLazyByteString)
-- | 'ssDecrypt' should be used to decrypt a 'SecureStorageBlock1' from a passphrase.
instance SecureStorageEncrypted SecureStorageBlock1 SecureStorageBlock1Decrypted where
ssIV = ss1CryptoIV
ssSalt = ss1ScryptSalt
ssLogN = ss1ScryptLogN
ssIter = ss1ScryptIter
ssAAD x = let x' = encode x in runGet (lookAhead (skip 4 *> getWord16) >>= getByteString . fromIntegral) x'
ssEncData = ss1Encrypted
ssVerTag = ss1VerifyTag
-- | Type 2 - Rescue code encrypted data
--
-- The type 2 'SecureStorage' block supplies the EnScrypt parameters to convert a user-supplied “emergency rescue code” into a 256-bit symmetric key
-- for use in decrypting the block's embedded encrypted identity unlock key.
data SecureStorageBlock2
= SecureStorageBlock2
{ ss2ScryptSalt :: ByteString -- ^ update for password change
, ss2ScryptLogN :: LogN -- ^ memory consumption factor
, ss2ScryptIter :: ScryptIterations -- ^ time consumption factor
, ss2Encrypted :: ByteString -- ^ encrypted emergency rescue code and any extended encrypted data not in spec as of yet (see 'SecureStorageBlock2Decrypted')
  , ss2VerifyTag :: ByteString -- ^ signature to validate no external changes have been made
}
deriving (Show)
-- | This is the decrypted data of 'SecureStorageBlock2' and contains the identity unlock key used to recover or transfer an identity.
data SecureStorageBlock2Decrypted
= SecureStorageBlock2Decrypted
{ identityUnlockKey :: PrivateUnlockKey -- ^ decrypted unlock key
, ss2DecryptedExtra :: ByteString -- ^ extended decrypted data not in spec as of yet
}
instance NFData SecureStorageBlock2 where
rnf SecureStorageBlock2
{ ss2ScryptSalt = sa
, ss2ScryptLogN = ln
, ss2ScryptIter = si
, ss2Encrypted = ec
, ss2VerifyTag = tg
} = rnf sa `seq` rnf ln `seq` rnf si `seq` rnf ec `seq` rnf tg `seq` ()
instance Binary SecureStorageBlock2Decrypted where
put b = let (PrivateUnlockKey erc) = identityUnlockKey b in putByteString erc *> putByteString (ss2DecryptedExtra b)
get = SecureStorageBlock2Decrypted <$> (PrivateUnlockKey <$> getByteString 32) <*> (LBS.toStrict <$> getRemainingLazyByteString)
-- | 'ssDecrypt' should be used to decrypt a 'SecureStorageBlock2' from a passphrase.
instance SecureStorageEncrypted SecureStorageBlock2 SecureStorageBlock2Decrypted where
ssIV _ = emptyIV
ssSalt = ss2ScryptSalt
ssLogN = ss2ScryptLogN
ssIter = ss2ScryptIter
ssAAD _ = BS.empty
ssEncData = ss2Encrypted
ssVerTag = ss2VerifyTag
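-- | A 12-byte all-zero initialisation vector, used where no explicit IV is stored (block type 2).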
emptyIV :: ByteString
emptyIV = BS.replicate 12 0
-- | This two-byte value contains a set of individual single-bit flags corresponding to options offered by SQRL's user-interface.
type ClientFlags = Word16
-- | This one-byte value specifies the number of characters used in password hints. The default is 4 characters. A value of zero disables hinting and causes the SQRL client to prompt its user for their full access password whenever it's required.
type HintLength = Word8
-- | This one-byte value specifies the length of time SQRL's EnScrypt function will run in order to deeply hash the user's password to generate the Identity Master Key's (IMK) symmetric key. SQRL clients are suggested to default this value to five seconds with one second as a minimum. It should not be possible for the user to circumvent at least one second of iterative hashing on any platform.
type PWHashingTime = Word8
type LogN = Word8
type ScryptIterations = Word32
-- | This requests, and gives the SQRL client permission, to briefly check-in with its publisher to see whether any updates to this software have been made available.
clientFlagAutoUpdate :: ClientFlags
clientFlagAutoUpdate = 0x0001
-- | Where a SQRL client is loaded with multiple identities, this prevents the client from assuming any “current user” and
-- causes it to prompt its operator for which identity should be used for every authentication.
-- This can be useful when multiple users share a computer to keep any user from inadvertently attempting to use another user's identity.
clientFlagNoCurrentProfile :: ClientFlags
clientFlagNoCurrentProfile = 0x0002
-- | This adds the @option=sqrlonly@ string to every client transaction. When this option string is present in any properly signed client transaction,
-- this requests the server to set a flag in the user account that will cause the web server to subsequently disable all traditional
-- non-SQRL account logon authentication such as username and password.
clientFlagSQRLOnly :: ClientFlags
clientFlagSQRLOnly = 0x0004
-- | This adds the @option=hardlock@ string to every client transaction. When this option string is present in any properly signed client transaction,
-- this requests the server to set a flag in the user account that will cause the web server to subsequently disable all “out of band” (non-SQRL)
-- account identity recovery options such as “what was your favorite pet's name.”
clientFlagHardLock :: ClientFlags
clientFlagHardLock = 0x0008
-- | When set, this bit instructs the SQRL client to notify its user when the web server indicates that an IP address mismatch exists between the entity
-- that requested the initial logon web page containing the SQRL link URL (and probably encoded into the SQRL link URL's “nut”) and the IP address
-- from which the SQRL client's query was received for this reply.
clientFlagWarnMITM :: ClientFlags
clientFlagWarnMITM = 0x0010
-- | When set, this bit instructs the SQRL client to wash any existing local password and hint data from RAM upon notification that the system
-- is going to sleep in any way such that it cannot be used. This would include sleeping, hibernating, screen blanking, etc.
clientFlagClearOnBlack :: ClientFlags
clientFlagClearOnBlack = 0x0020
-- | When set, this bit instructs the SQRL client to wash any existing local password and hint data from RAM upon notification that the current user is being switched.
--
-- Notice: This could be interpreted as referring to the SQRL profile as in 'clientFlagNoCurrentProfile', but in actuality the "user" above is the user controlled by the OS.
-- I could see it being used either way, though.
clientFlagClearOnUserSwitch :: ClientFlags
clientFlagClearOnUserSwitch = 0x0040
-- | When set, this bit instructs the SQRL client to wash any existing local password and hint data from RAM when the system has been user-idle (no mouse or keyboard activity)
-- for the number of minutes specified by the two-byte idle timeout.
--
-- Notice: The idle time in 'SecureStorageBlock1' is in minutes, when time=0 then no hint is allowed. It is quite clear that this is idle system-wide and not only in usage of SQRL.
-- But since the idle time is allowed to be more than a month;
-- a developer could see this as clearing the hint after being idle in the sense of no SQRL authentications for the specified amounts of minutes if there is no reliable way to detect other activity.
clientFlagClearOnIdle :: ClientFlags
clientFlagClearOnIdle = 0x0080
-- | The default configuration of active flags.
--
-- > def = 'clientFlagAutoUpdate' | 'clientFlagWarnMITM' | 'clientFlagClearOnBlack' | 'clientFlagClearOnUserSwitch' | 'clientFlagClearOnIdle'
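-- > --    i.e. 0x0001 .|. 0x0010 .|. 0x0020 .|. 0x0040 .|. 0x0080 == 0x00F1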
clientFlagsDefault :: ClientFlags
clientFlagsDefault = 0x00F1
instance Binary SecureStorageBlock1 where
get = do blocklen <- getWord16
if blocklen < 157 then fail "Block too small"
else do blockt <- getWord16
if blockt /= 1 then fail $ "Block type mismatch expected 1 got " ++ show blockt
else do ptlen <- getWord16
                             if ptlen < 45 then fail "Inner block too small"
                              else if ptlen + 112 > blocklen then fail "Inner block too large to fit outer block"
else SecureStorageBlock1
<$> getByteString 12 <*> getByteString 16 <*> getWord8 <*> getWord32 -- IV and scrypt params
<*> getWord16 <*> getWord8 <*> getWord8 <*> getWord16 -- client and hashing settings
<*> getByteString (fromIntegral ptlen - 45) -- additional plain text
<*> getByteString (fromIntegral $ blocklen - ptlen - 16) -- all encrypted data
<*> getByteString 16 -- auth tag
put b = putWord16 (fromIntegral $ 61 + BS.length (ss1PlainExtra b) + BS.length (ss1Encrypted b)) *> putWord16 1
*> putWord16 (fromIntegral $ 45 + BS.length (ss1PlainExtra b))
*> putByteString (ss1CryptoIV b) *> putByteString (ss1ScryptSalt b)
*> putWord8 (ss1ScryptLogN b) *> putWord32 (ss1ScryptIter b)
*> putWord16 (ss1Flags b) *> putWord8 (ss1HintLen b) *> putWord8 (ss1PwVerifySec b) *> putWord16 (ss1HintIdle b)
*> putByteString (ss1PlainExtra b)
*> putByteString (ss1Encrypted b)
*> putByteString (ss1VerifyTag b)
instance Binary SecureStorageBlock2 where
get = do blocklen <- getWord16
if blocklen < 73 then fail "Block too small"
else do blockt <- getWord16
if blockt /= 2 then fail $ "Block type mismatch expected 2 got " ++ show blockt
else SecureStorageBlock2
<$> getByteString 16 <*> getWord8 <*> getWord32 -- scrypt params
<*> getByteString (fromIntegral blocklen - 41) -- encrypted key and any additional encrypted data
<*> getByteString 16 -- auth tag
put b = putWord16 (41 + fromIntegral (BS.length (ss2Encrypted b))) *> putWord16 2
*> putByteString (ss2ScryptSalt b)
*> putWord8 (ss2ScryptLogN b) *> putWord32 (ss2ScryptIter b)
*> putByteString (ss2Encrypted b)
*> putByteString (ss2VerifyTag b)
-- | A collection of related data connected to a specific SQRL profile.
data SecureStorageBlock
= Block00001 SecureStorageBlock1 -- ^ The most basic of storage blocks. Contains information about master key and encryption settings.
| Block00002 SecureStorageBlock2 -- ^ Encrypted rescue code.
| BlockOther Int LBS.ByteString -- ^ Any other block not supported by the specification at the time of writing, or chosen not to implement. Pull requests are welcome.
deriving (Show)
-- | A secure storage for a SQRL profile. Contains encrypted keys and SQRL settings.
data SecureStorage = SecureStorage Bool String [SecureStorageBlock]
deriving (Show)
-- | Get the whole block as a lazy 'LBS.ByteString'.
secureStorageData :: SecureStorageBlock -> LBS.ByteString
secureStorageData (Block00001 b) = encode b
secureStorageData (Block00002 b) = encode b
secureStorageData (BlockOther n bs) = LBS.append (runPut (putWord16 (4 + fromIntegral (LBS.length bs)) *> putWord16 (fromIntegral n))) bs
-- | Get a structured version of the data contained by the block of type 1.
secureStorageData1 :: SecureStorage -> Maybe SecureStorageBlock1
secureStorageData1 (SecureStorage _ _ ss) = case listToMaybe $ filter ((==) 1 . secureStorageType) ss of
Just (Block00001 b) -> Just b
_ -> Nothing
-- | Get a structured version of the data contained by the block of type 2.
secureStorageData2 :: SecureStorage -> Maybe SecureStorageBlock2
secureStorageData2 (SecureStorage _ _ ss) = case listToMaybe $ filter ((==) 2 . secureStorageType) ss of
Just (Block00002 b) -> Just b
_ -> Nothing
-- | Get the numeric type identifier for the 'SecureStorageBlock'.
secureStorageType :: SecureStorageBlock -> Int
secureStorageType (Block00001 _) = 1
secureStorageType (Block00002 _) = 2
secureStorageType (BlockOther n _) = n
-- | Get something specific out of the 'SecureStorageBlock'. Accepts first block of each type.
secureStorageBlock :: Int -> SecureStorage -> Get a -> Maybe a
secureStorageBlock bt (SecureStorage _ _ ss) f = case listToMaybe $ filter ((==) bt . secureStorageType) ss of
Nothing -> Nothing
Just sb -> case runGetOrFail f $ secureStorageData sb of
Left _ -> Nothing
Right (_, _, r) -> Just r
-- | Open a 'SecureStorage' contained within a 'LBS.ByteString'.
openSecureStorage' :: String -> LBS.ByteString -> Either String SecureStorage
openSecureStorage' fn bs =
let (hdr, bs') = LBS.splitAt 8 bs
bs'' | hdr == "sqrldata" = Right bs'
| hdr == "SQRLDATA" = LB64U.decode bs'
| otherwise = Left "Header mismatch"
in bs'' >>= \bs_ -> case runGetOrFail (oss []) bs_ of
Left (_, pos, err) -> Left $ err ++ " (at position " ++ show pos ++ ")"
Right (_, _, rslt) -> let slt = reverse rslt in seq slt $ Right $ SecureStorage False fn slt
where oss :: [SecureStorageBlock] -> Get [SecureStorageBlock]
oss p = isEmpty >>= \e -> if e then return p else do
(l, t) <- lookAhead $ (,) <$> getWord16 <*> getWord16
r <- case t of
1 -> Block00001 <$> get
2 -> Block00002 <$> get
_ -> BlockOther (fromIntegral t) <$> (skip 32 *> getLazyByteString (fromIntegral l - 32))
let r' = r : p in seq r' $ oss r'
-- | Open a 'SecureStorage' contained within a file.
openSecureStorage :: FilePath -> IO (Either String SecureStorage)
openSecureStorage fp = do
var <- newIORef $ Left "Nothing read from SecureStorage"
withBinaryFile fp ReadMode (\h -> fmap (openSecureStorage' fp) (LBS.hGetContents h) >>= \r -> writeIORef var $! r)
readIORef var
-- | Turn a 'SecureStorage' into a lazy 'LBS.ByteString'.
saveSecureStorage' :: SecureStorage -> LBS.ByteString
saveSecureStorage' (SecureStorage _ _ ss) = runPut $ putByteString "sqrldata" *> mapM_ sss ss
where sss :: SecureStorageBlock -> Put
sss x = case x of
Block00001 x' -> put x'
Block00002 x' -> put x'
BlockOther i b -> putWord16 (fromIntegral $ LBS.length b + 32) >> putWord16 (fromIntegral i) >> putLazyByteString b
-- | Saves any changes made to the SecureStorage.
saveSecureStorage :: SecureStorage -> IO ()
saveSecureStorage ss@(SecureStorage True fp _) = LBS.writeFile fp $ saveSecureStorage' ss
saveSecureStorage _ = return ()
-- | Creates an in-memory copy of the 'SecureStorage'. This may then be changed and/or saved without affecting the previous storage.
--
-- > -- make a copy of the storage
-- > openSecureStorage "original.ssss" >>= saveSecureStorage . copySecureStorage "copy.ssss" . either (\err -> error err) id
copySecureStorage :: FilePath -> SecureStorage -> SecureStorage
copySecureStorage fp (SecureStorage _ _ ss) = SecureStorage True fp ss
data SQRLProfile
= SQRLProfile
{ profileName :: Text
, profileUsed :: Maybe UTCTime
, profileSecureStorage :: IO (Either String SecureStorage)
}
-- | The separator to use to separate directories in paths.
dirSep :: String
dirSep = "/"
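-- | Compute the path of the 'SecureStorage' file for a named profile, creating the profiles
-- directory if needed. Returns 'Right' the path when the file already exists and 'Left' otherwise.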
profilePath :: Text -> IO (Either FilePath FilePath)
profilePath n = let n' = T.unpack $ TE.decodeUtf8 $ B64U.encode $ TE.encodeUtf8 n in profilesDirectory >>= \d ->
let f = d ++ dirSep ++ n' ++ ".ssss"
in createDirectoryIfMissing True d >> fmap (\b -> if b then Right f else Left f) (doesFileExist f)
-- | List all profiles contained within a file system directory. It's recommended to use 'listProfiles' unless there is a good reason for not using the default directory.
listProfilesInDir :: FilePath -> IO [SQRLProfile]
listProfilesInDir dir = do
dd <- map (init . dropWhileEnd ('.'/=)) <$> filter (isSuffixOf ".ssss") <$> getDirectoryContents dir
catMaybes <$> mapM openProfile dd
where openProfile d' = case (if all (\x -> x > ' ' && x < 'z') d' then B64U.decode (BS.pack $ map (fromIntegral . fromEnum) d') else Left undefined) of
Left _ -> return Nothing
Right bs -> let f = dir ++ dirSep ++ d' in do
t <- catch (fmap Just $ getModificationTime $ f ++ ".time") (const (return Nothing) :: IOError -> IO (Maybe UTCTime))
return $ Just $ SQRLProfile (TE.decodeUtf8 bs) t $ openSecureStorage (f ++ ".ssss")
isSuffixOf suff txt = let sl = length suff
tl = length txt
in sl <= tl && drop (tl - sl) txt == suff
dropWhileEnd _ [] = ""
dropWhileEnd f (c:cs) = let t = dropWhileEnd f cs in if null t then if f c then "" else [c] else c:t
-- | List all profiles available in the default profile directory.
listProfiles :: MonadIO io => io [SQRLProfile]
listProfiles = liftIO $ profilesDirectory >>= \d -> createDirectoryIfMissing True d >> listProfilesInDir d
-- | The default file system directory for profiles.
profilesDirectory :: IO FilePath
profilesDirectory = getAppUserDataDirectory $ "sqrl" ++ dirSep ++ "profiles"
-- | ADT representing different types of errors which may occur during profile creation.
data ProfileCreationError
= ProfileExists
| RandomError0 GenError
| RandomError1 GenError
deriving (Show, Eq)
data ProfileCreationState
= ProfileCreationFailed ProfileCreationError
| ProfileCreationSuccess (SQRLProfile, RescueCode)
| ProfileCreationGeneratingExternal
| ProfileCreationGeneratingKeys
| ProfileCreationGeneratingParameters
| ProfileCreationHashingMasterKey Int
| ProfileCreationEncryptingUnlock (Int, Int, Int)
| ProfileCreationEncryptingMaster (Int, Int, Int)
-- | Get a default message describing any 'ProfileCreationState'.
profileCreationMessage :: ProfileCreationState -> String
profileCreationMessage (ProfileCreationFailed x) = "Creation failed: " ++ show x
profileCreationMessage (ProfileCreationSuccess (x, _)) = "Creation succeeded: " ++ show (profileName x)
profileCreationMessage (ProfileCreationGeneratingExternal) = "Generating external entropy"
profileCreationMessage (ProfileCreationGeneratingKeys) = "Generating keys"
profileCreationMessage (ProfileCreationGeneratingParameters) = "Generating parameters"
profileCreationMessage p@(ProfileCreationHashingMasterKey _) = "Hashing master key - " ++ show (profileCreationInternalPercentage p) ++ "%"
profileCreationMessage p@(ProfileCreationEncryptingUnlock _) = "Encrypting unlock key - " ++ show (profileCreationInternalPercentage p) ++ "%"
profileCreationMessage p@(ProfileCreationEncryptingMaster _) = "Encrypting master key - " ++ show (profileCreationInternalPercentage p) ++ "%"
-- | Get an approximate internal percentage (0 just begun - 100 complete) of the completion for the current state.
profileCreationInternalPercentage :: ProfileCreationState -> Int
profileCreationInternalPercentage (ProfileCreationFailed _) = 0
profileCreationInternalPercentage (ProfileCreationSuccess _) = 100
profileCreationInternalPercentage (ProfileCreationGeneratingExternal) = 0
profileCreationInternalPercentage (ProfileCreationGeneratingKeys) = 0
profileCreationInternalPercentage (ProfileCreationGeneratingParameters) = 0
profileCreationInternalPercentage (ProfileCreationHashingMasterKey i) = truncate (fromIntegral i / (16 :: Double))
profileCreationInternalPercentage (ProfileCreationEncryptingUnlock (i,_,_)) = i
profileCreationInternalPercentage (ProfileCreationEncryptingMaster (i,_,_)) = i
-- | Get an approximate percentage for the current state. (A failed state returns @-1@.)
profileCreationPercentage :: ProfileCreationState -> Int
profileCreationPercentage (ProfileCreationFailed _) = -1
profileCreationPercentage (ProfileCreationSuccess _) = 100
profileCreationPercentage (ProfileCreationGeneratingExternal) = 0
profileCreationPercentage (ProfileCreationGeneratingKeys) = 0
profileCreationPercentage (ProfileCreationGeneratingParameters) = 12
profileCreationPercentage (ProfileCreationHashingMasterKey i) = 17 + (i `shiftR` 2)
profileCreationPercentage (ProfileCreationEncryptingUnlock (i,_,_)) = 25 + (i `shiftR` 2)
profileCreationPercentage (ProfileCreationEncryptingMaster (i,_,_)) = 75 + (i `div` 5)
-- a "wrapper" import gives a factory for converting a Haskell function to a foreign function pointer
foreign import ccall "wrapper"
enscryptwrap :: (CInt -> Int32 -> Int32 -> IO ()) -> IO (FunPtr (CInt -> Int32 -> Int32 -> IO ()))
-- import the foreign function as normal
foreign import ccall safe "enscrypt.h sqrl_enscrypt_time"
c_sqrl_enscrypt_time :: FunPtr (CInt -> Int32 -> Int32 -> IO ()) -> Int32 -> Word8
-> Ptr Word8 -> CSize -> Ptr Word8 -> CSize
-> Ptr Word8 -> CSize -> IO Word32
-- import the foreign function as normal
foreign import ccall safe "enscrypt.h sqrl_enscrypt_iter"
c_sqrl_enscrypt_iter :: Word32 -> Word8
-> Ptr Word8 -> CSize -> Ptr Word8 -> CSize
-> Ptr Word8 -> CSize -> IO Word32
-- | Hash a password for a given number of iterations.
enScrypt :: ScryptIterations -- ^ the amount of iterations
-> LogN -- ^ the 'LogN' to be used in the hashing
-> ByteString -- ^ salt to be used
-> Text -- ^ the password to be hashed
-> ByteString
enScrypt iters logn salt pass = unsafePerformIO $
BS.unsafeUseAsCStringLen salt $ \(salt', saltlen) ->
BS.unsafeUseAsCStringLen (TE.encodeUtf8 pass) $ \(pass', passlen) ->
allocaBytes (fromIntegral bufflen) $ \buff' -> do
putStrLn $ "TRACE: Calling out to EnScrypt for " ++ show iters ++ " iterations..."
r <-
--runInBoundThread $
c_sqrl_enscrypt_iter (fromIntegral iters) (fromIntegral logn) (castPtr salt') (fromIntegral saltlen) (castPtr pass') (fromIntegral passlen) (castPtr buff') (fromIntegral bufflen)
putStrLn "TRACE: Calling out to EnScrypt done."
if r < 0 then fail "enScrypt: enscrypt failed." else BS.packCStringLen (buff', bufflen)
where bufflen = 32
-- | Hash a password for approximately the given amount of time (in seconds). Actual time varies depending on the device.
enScryptForSecs :: ((Int, Int, Int) -> IO ()) -- ^ progress callback which will be called at most once every second @(percentage done, seconds left)@
-> Int -- ^ the amount of seconds to iterate hashing
-> LogN -- ^ the 'LogN' to be used in the hashing
-> ByteString -- ^ salt to be used
-> Text -- ^ the password to be hashed
-> IO (ByteString, ScryptIterations)
enScryptForSecs f time logn salt pass = do
putStrLn "TRACE: enScryptForSecs wrapping callback..."
callback <- enscryptwrap (\a b c -> f (fromIntegral a, fromIntegral b, fromIntegral c))
putStrLn "TRACE: enScryptForSecs wrapping complete."
BS.unsafeUseAsCStringLen salt $ \(salt', saltlen) ->
BS.unsafeUseAsCStringLen (TE.encodeUtf8 pass) $ \(pass', passlen) ->
allocaBytes (fromIntegral bufflen) $ \buff' -> do
putStrLn $ "TRACE: Calling out to EnScrypt for " ++ show time ++ "s..."
r <-
--runInBoundThread $
c_sqrl_enscrypt_time callback (fromIntegral time) (fromIntegral logn) (castPtr salt') (fromIntegral saltlen) (castPtr pass') (fromIntegral passlen) (castPtr buff') (fromIntegral bufflen)
putStrLn $ "TRACE: Calling out to EnScrypt lasted " ++ show r ++ " iterations."
if r < 0 then fail "enScryptForSecs: enscrypt failed." else (\x -> (x, fromIntegral r)) <$> BS.packCStringLen (buff', bufflen)
where bufflen = 32
{- -- This is too slow. It uses too much RAM constructing ADTs.
now <- getCurrentTime
let r = Scrypt.getHash $ Scrypt.scrypt p (Scrypt.Salt salt) pass' in iterscrypt' now (fromIntegral time `addUTCTime` now) r r 0 now
where pass' = Scrypt.Pass $ TE.encodeUtf8 pass
p = fromJust $ Scrypt.scryptParamsLen (fromIntegral logn) 256 1 32
iterscrypt' :: UTCTime -> UTCTime -> ByteString -> ByteString -> ScryptIterations -> UTCTime -> IO (ByteString, ScryptIterations)
iterscrypt' startTime targetTime salt' passon iter ltime =
let r = Scrypt.getHash $ Scrypt.scrypt p (Scrypt.Salt salt') pass'
r' = xorBS passon r
iter' = iter + 1
spant = truncate $ targetTime `diffUTCTime` startTime
update :: UTCTime -> IO UTCTime
update t = if truncate (t `diffUTCTime` ltime) /= (0 :: Int) then f ((100 * truncate (t `diffUTCTime` startTime)) `div` spant, truncate $ targetTime `diffUTCTime` t) >> return t else return ltime
next :: UTCTime -> IO (ByteString, ScryptIterations)
next t = if t >= targetTime then return (r', iter') else iterscrypt' startTime targetTime r r' iter' t
in seq iter' ((if iter' .&. 3 /= 0 then return ltime else getCurrentTime >>= update) >>= next)
-}
-- TODO: remove trace
{-# INLINE pe #-}
pe :: NFData a => String -> a -> a
pe s a = let
b = unsafePerformIO (putStrLn $ "TRACE> Evaluating " ++ s) `deepseq` a
c = b `deepseq` unsafePerformIO (putStrLn $ "TRACE< Evaluated " ++ s)
in c `deepseq` b
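-- | A chunked source of external entropy; 'NoEntropy' marks the end of the stream.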
data SQRLEntropy
= NoEntropy
| SQRLEntropy [ByteString] (IO SQRLEntropy)
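-- | Creates a new SQRL profile in the given directory. See 'createProfile' for the variant which uses the default profile directory.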
createProfileInDir :: (ProfileCreationState -> IO ()) -- ^ a callback which gets notified when the state changes
-> IO SQRLEntropy -- ^ an external source of entropy (recommended minimum list length of 20 and bytestring length of 32), if none is available @return 'NoEntropy'@ should still generate an acceptable result.
-> Text -- ^ name of this profile (may not collide with another)
-> Text -- ^ password for this profile
-> HintLength -- ^ the length the password hint should be (see 'HintLength')
-> Word16 -- ^ the time, in minutes, before a hint should be wiped
-> PWHashingTime -- ^ the amount of time should be spent hashing the password
-> ClientFlags -- ^ client settings for this profile
-> FilePath -- ^ the directory which contains the profile
-> IO (Either ProfileCreationError (SQRLProfile, RescueCode))
createProfileInDir callback extent name pass hintl hintt time flags dir =
let f = (++) (dir ++ dirSep) $ map (toEnum . fromIntegral) $ BS.unpack $ B64U.encode $ TE.encodeUtf8 name
in doesFileExist f >>= \fx -> if fx then return $ Left ProfileExists else (flip genKeys extent <$> newGenIO) >>= \ekeys -> case ekeys of
Left err -> return $ Left $ RandomError0 err
Right iof -> putStrLn "TRACE: Generating keys." >> iof >>= \(lockKey, unlockKey, rcode) -> putStrLn "TRACE: Generating params." >> (genEncParams <$> newGenIO) >>= \eencp -> case eencp of
Left err -> return $ Left $ RandomError1 err
Right (unlockKeySalt, unlockKeyLogN, unlockKeyTime, idKeyIV, idKeySalt, idKeyLogN) -> do
putStrLn "TRACE: All random data gathered and allocated."
(unlockKeyPass, unlockKeyIter) <- enScryptForSecs (callback . ProfileCreationEncryptingUnlock) (fromIntegral unlockKeyTime) unlockKeyLogN emptySalt $ rescueCode rcode
(idKeyPass, idKeyIter) <- enScryptForSecs (callback . ProfileCreationEncryptingMaster) (fromIntegral time) idKeyLogN idKeySalt pass
putStrLn "TRACE: Scrypt iterations has completed."
let idKey = PrivateMasterKey $ enHash $ privateUnlockKey unlockKey
(block1enc, idKeyTag) = encryptGCM (initAES idKeyPass) idKeyIV ("ss1ssAAD" `pe` ssAAD block1) $ BS.concat [ privateMasterKey idKey, privateLockKey lockKey, empty256 ]
(block2enc, unlockKeyTag) = encryptGCM (initAES unlockKeyPass) emptyIV (ssAAD block2) $ privateUnlockKey unlockKey
block1 =
SecureStorageBlock1
{ ss1CryptoIV = "idKeyIV" `pe` idKeyIV
, ss1ScryptSalt = "idKeySalt" `pe` idKeySalt
, ss1ScryptLogN = "idKeyLogN" `pe` idKeyLogN
, ss1ScryptIter = "idKeyIter" `pe` idKeyIter
, ss1Flags = "flags" `pe` flags
, ss1HintLen = "hintl" `pe` hintl
, ss1PwVerifySec = "time" `pe` time
, ss1HintIdle = "hintt" `pe` hintt
, ss1PlainExtra = BS.empty
, ss1Encrypted = bs96 -- waiting for encryption
, ss1VerifyTag = bs16 -- waiting for encryption
}
block1' =
"block1" `pe`
(block1 { ss1Encrypted = block1enc, ss1VerifyTag = "idKeyTag" `pe` toBytes idKeyTag })
block2 =
SecureStorageBlock2
{ ss2ScryptSalt = unlockKeySalt
, ss2ScryptIter = unlockKeyIter
, ss2ScryptLogN = unlockKeyLogN
, ss2Encrypted = bs32
, ss2VerifyTag = bs16
}
block2' =
block2 { ss2Encrypted = block2enc, ss2VerifyTag = toBytes unlockKeyTag }
f' = f ++ ".ssss"
ss = SecureStorage True f' [Block00001 block1', Block00002 block2']
putStrLn "TRACE: Saving secure storage..."
saveSecureStorage ss
putStrLn "TRACE: Secure storage has been saved."
return $ Right (SQRLProfile { profileName = name, profileUsed = Nothing, profileSecureStorage = openSecureStorage f' }, rcode)
where genKeys :: SystemRandom -> IO SQRLEntropy -> Either GenError (IO (PrivateLockKey, PrivateUnlockKey, RescueCode))
genKeys g ntrpy = (genKeys' ntrpy . fst) <$> genBytes 768 g
genKeys' ntrpy genbytes = do
let cryptoinit = Crypto.Hash.SHA256.update Crypto.Hash.SHA256.init $ BS.take 512 genbytes
ntrpy0 <- ntrpy
(shastate, rest) <- case ntrpy0 of
NoEntropy -> return (cryptoinit, [])
SQRLEntropy ent0 ntrpy' -> ntrpy' >>= \ntrpy1 -> case ntrpy1 of
NoEntropy -> return (Crypto.Hash.SHA256.updates cryptoinit ent0, [])
SQRLEntropy ent1 ntrpy'' -> (\x -> (x, ent0)) <$> updateEntropy Crypto.Hash.SHA256.updates (Crypto.Hash.SHA256.updates cryptoinit ent1) ntrpy''
let unlockKey = Crypto.Hash.SHA256.finalize shastate
lockKey = ED25519.exportPublic $ ED25519.generatePublic $ fromJust $ ED25519.importPrivate unlockKey
rcode = Crypto.Hash.SHA256.finalize $ Crypto.Hash.SHA256.updates (Crypto.Hash.SHA256.update shastate $ BS.drop 512 genbytes) rest
in return (PrivateLockKey lockKey, PrivateUnlockKey unlockKey, genRcode rcode)
updateEntropy f a ntrpy = ntrpy >>= \r -> case r of
NoEntropy -> return a
SQRLEntropy bs ntrpy' -> let a' = f a bs in a' `seq` updateEntropy f a' ntrpy'
genEncParams :: SystemRandom -> Either GenError (ByteString, LogN, Int, ByteString, ByteString, LogN)
genEncParams g = (genEncParams' . fst) <$> genBytes (16 + 1 + 1 + 12 + 16 + 1) g
genEncParams' :: ByteString -> (ByteString, LogN, Int, ByteString, ByteString, LogN)
genEncParams' bs =
let unlockKeySalt = BS.take 16 bs
unlockKeyLogN = (BS.index bs 16 .&. 0x03) + 0x9
unlockKeyTime = 60 --fromIntegral (complement (BS.index bs 17 .&. 0x7F)) `shiftR` 4
idKeyIV = BS.take 12 $ BS.drop 18 bs
idKeySalt = BS.take 16 $ BS.drop 30 bs
idKeyLogN = (BS.index bs 46 .&. 0x03) + 0x9
in (unlockKeySalt, unlockKeyLogN, unlockKeyTime, idKeyIV, idKeySalt, idKeyLogN)
bsToNatural :: ByteString -> Integer
bsToNatural = BS.foldl (\i w -> (i `shiftL` 8) + fromIntegral w) 0
genRcode :: ByteString -> RescueCode
genRcode rcodeb = RescueCode $ T.pack $ take 24 (genRcode' $ bsToNatural rcodeb)
genRcode' i = let (i', r) = i `quotRem` 10 in head (show r) : genRcode' i'
bs32 = BS.replicate 32 0
bs96 = BS.replicate 96 0
bs16 = BS.replicate 16 0
xorBS :: ByteString -> ByteString -> ByteString
xorBS a = BS.pack . BS.zipWith xor a
enHash :: ByteString -> ByteString
enHash inp = chain 16 xorBS sha256 inp empty256
-- | Do chained operations. @chain i f h a b@ means derive a new @a' = h a@ which then gets used to derive a new @b' = f a' b@. The new @a'@ and @b'@ are used recursively for a total of @i@ times before the last @b'@ is returned.
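-- For example, @chain 2 f h a b == f (h (h a)) (f (h a) b)@.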
chain :: Int -> (a -> b -> b) -> (a -> a) -> a -> b -> b
chain 0 _ _ _ b = b
chain i f h a b = let { i' = i - 1 ; a' = h a ; b' = f a' b} in i' `seq` (b' `seq` chain i' f h a' b')
-- | Creates a new SQRL profile. This includes generating keys, a 'RescueCode', hashing passwords and creating a 'SecureStorage'.
--
-- The resulting profile is returned if no error occurred during the creation.
createProfile :: (MonadIO io)
=> (ProfileCreationState -> IO ()) -- ^ a callback which gets notified when the state changes
-> IO SQRLEntropy -- ^ an external source of entropy (recommended minimum list length of 20 and bytestring length of 32), if none is available @return NoEntropy@ should still generate a working result.
-> Text -- ^ name of this profile (may not collide with another)
-> Text -- ^ password for this profile
-> HintLength -- ^ the length the password hint should be (see 'HintLength')
-> Word16 -- ^ the time, in minutes, before a hint should be wiped
-> PWHashingTime -- ^ the amount of time should be spent hashing the password
-> ClientFlags -- ^ client settings for this profile
-> io (Either ProfileCreationError (SQRLProfile, RescueCode))
createProfile callback extent name pass hintl hintt time flags = liftIO (profilesDirectory >>= createProfileInDir callback extent name pass hintl hintt time flags)
| TimLuq/sqrl-auth-client-hs | src/Web/Authenticate/SQRL/SecureStorage.hs | mit | 42,264 | 0 | 32 | 10,242 | 8,261 | 4,410 | 3,851 | 511 | 7 |
import Probability
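-- A two-component mixture: with probability 1/2, x is exactly 0; otherwise x is a draw from a standard normal.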
model = do
i <- bernoulli 0.5
y <- normal 0.0 1.0
let x = if (i == 1) then y else 0.0
return ["i" %=% i, "x" %=% x]
main = do
mcmc model
| bredelings/BAli-Phy | tests/prob_prog/demos/3/Main.hs | gpl-2.0 | 193 | 0 | 12 | 74 | 86 | 42 | 44 | 8 | 2 |
{- |
Module : $Header$
Description : Interface to the theorem prover e-krhyper in CASC-mode.
Copyright : (c) Dominik Luecke, Uni Bremen 2010
License : GPLv2 or higher, see LICENSE.txt
Maintainer : [email protected]
Stability : provisional
Portability : needs POSIX
Check out
http://www.uni-koblenz.de/~bpelzer/ekrhyper/
for details. For the ease of maintenance we are using e-krhyper in
its CASC-mode, aka tptp-input. It works for single input files and
fof-style.
-}
module SoftFOL.ProveHyperHyper (hyperS, hyperProver, hyperConsChecker)
where
import Logic.Prover
import Common.ProofTree
import qualified Common.Result as Result
import Common.AS_Annotation as AS_Anno
import Common.SZSOntology
import Common.Timing
import Common.Utils
import SoftFOL.Sign
import SoftFOL.Translate
import SoftFOL.ProverState
import GUI.GenericATP
import Proofs.BatchProcessing
import Interfaces.GenericATPState
import System.Directory
import Control.Monad (when)
import qualified Control.Concurrent as Concurrent
import Data.Char
import Data.List
import Data.Maybe
import Data.Time.LocalTime (TimeOfDay, midnight)
-- Prover
hyperS :: String
hyperS = "ekrh"
-- | The Prover implementation.
hyperProver :: Prover Sign Sentence SoftFOLMorphism () ProofTree
hyperProver = mkAutomaticProver hyperS () hyperGUI hyperCMDLautomaticBatch
{- |
Record for prover specific functions. This is used by both GUI and command
line interface.
-}
atpFun :: String -- ^ theory name
-> ATPFunctions Sign Sentence SoftFOLMorphism ProofTree SoftFOLProverState
atpFun thName = ATPFunctions
{ initialProverState = spassProverState
, atpTransSenName = transSenName
, atpInsertSentence = insertSentenceGen
, goalOutput = showTPTPProblem thName
, proverHelpText = "for more information visit " ++
"http://www.uni-koblenz.de/~bpelzer/ekrhyper/"
, batchTimeEnv = "HETS_HYPER_BATCH_TIME_LIMIT"
, fileExtensions = FileExtensions
{ problemOutput = ".tptp"
, proverOutput = ".hyper"
, theoryConfiguration = ".hypcf" }
, runProver = runHyper
, createProverOptions = extraOpts }
{- |
Invokes the generic prover GUI.
-}
hyperGUI :: String -- ^ theory name
-> Theory Sign Sentence ProofTree
{- ^ theory consisting of a SoftFOL.Sign.Sign
and a list of Named SoftFOL.Sign.Sentence -}
-> [FreeDefMorphism SPTerm SoftFOLMorphism] -- ^ freeness constraints
-> IO [ProofStatus ProofTree] -- ^ proof status for each goal
hyperGUI thName th freedefs =
genericATPgui (atpFun thName) True hyperS thName th
freedefs emptyProofTree
{- |
Implementation of 'Logic.Prover.proveCMDLautomaticBatch' which provides an
automatic command line interface to the prover.
-}
hyperCMDLautomaticBatch ::
Bool -- ^ True means include proved theorems
-> Bool -- ^ True means save problem file
-> Concurrent.MVar (Result.Result [ProofStatus ProofTree])
-- ^ used to store the result of the batch run
-> String -- ^ theory name
-> TacticScript -- ^ default tactic script
-> Theory Sign Sentence ProofTree {- ^ theory consisting of a
'SoftFOL.Sign.Sign' and a list of Named 'SoftFOL.Sign.Sentence' -}
-> [FreeDefMorphism SPTerm SoftFOLMorphism] -- ^ freeness constraints
-> IO (Concurrent.ThreadId, Concurrent.MVar ())
{- ^ fst: identifier of the batch thread for killing it
snd: MVar to wait for the end of the thread -}
hyperCMDLautomaticBatch inclProvedThs saveProblem_batch resultMVar
thName defTS th freedefs =
genericCMDLautomaticBatch (atpFun thName) inclProvedThs saveProblem_batch
resultMVar hyperS thName
(parseTacticScript batchTimeLimit [] defTS) th freedefs emptyProofTree
prelTxt :: String -> String
prelTxt t =
"% only print essential output\n" ++
"#(set_verbosity(1)).\n\n" ++
"% assume all input to be in tptp-syntax\n" ++
"#(set_parameter(input_type, 2)).\n\n" ++
"% To prevent blowing up my memory\n" ++
"#(set_memory_limit(500)).\n\n" ++
"% produce SZS results\n" ++
"#(set_flag(szs_output_flag, true)).\n\n" ++
"% do not use special evaluable symbols\n" ++
"#(clear_builtins).\n\n" ++
"% initial term weight bound, 3 recommended for TPTP-problems\n" ++
"#(set_parameter(max_weight_initial, 3)).\n\n" ++
"% Terminate if out of memory\n" ++
"#(set_parameter(limit_termination_method,0)).\n\n" ++
"% Terminate if out of time\n" ++
"#(set_parameter(timeout_termination_method,0)).\n\n" ++
"% Start timer\n" ++
"#(start_wallclock_timer(" ++ t ++ ".0)).\n"
checkOption :: String -> Bool
checkOption a = isPrefixOf "#(" a && isSuffixOf ")." a
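-- | Rejoin any option that was split over two consecutive strings, so that each element is a complete command.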
uniteOptions :: [String] -> [String]
uniteOptions opts =
case opts of
a : b : cs ->
if checkOption a
then
a : uniteOptions (b : cs)
else
(a ++ b) : uniteOptions cs
_ -> opts
runTxt :: String
runTxt =
"% start derivation with the input received so far\n" ++
"#(run).\n\n" ++
"% print normal E-KRHyper proof\n" ++
"%#(print_proof).\n\n" ++
"% print result and proof using SZS terminology;\n" ++
"% requires postprocessing with post_szs script for proper legibility\n" ++
"#(print_szs_proof).\n"
runHyper :: SoftFOLProverState
{- ^ logical part containing the input Sign and axioms and possibly
goals that have been proved earlier as additional axioms -}
-> GenericConfig ProofTree -- ^ configuration to use
-> Bool -- ^ True means save TPTP file
-> String -- ^ name of the theory in the DevGraph
-> AS_Anno.Named SPTerm -- ^ goal to prove
-> IO (ATPRetval, GenericConfig ProofTree)
-- ^ (retval, configuration with proof status and complete output)
runHyper sps cfg saveTPTP thName nGoal =
let
saveFile = basename thName ++ '_' : AS_Anno.senAttr nGoal ++ ".tptp"
simpleOptions = uniteOptions $ extraOpts cfg
tl = configTimeLimit cfg
tScript = TacticScript $ show ATPTacticScript
{ tsTimeLimit = tl
, tsExtraOpts = filter (isPrefixOf "#")
$ lines $ prelTxt (show tl) ++ runTxt }
defProofStat = ProofStatus
{ goalName = senAttr nGoal
, goalStatus = openGoalStatus
, usedAxioms = []
, usedProver = hyperS
, proofTree = emptyProofTree
, usedTime = midnight
, tacticScript = tScript }
in
if all checkOption simpleOptions
then
do
prob <- showTPTPProblem thName sps nGoal []
when saveTPTP (writeFile saveFile prob)
(stdoutC, stderrC, t_u) <- runHyperProcess prob saveFile (show tl)
('\n' : unlines simpleOptions) runTxt
let (pStat, ret) = examineProof sps stdoutC stderrC
defProofStat { usedTime = t_u }
return (pStat, cfg
{ proofStatus = ret
, resultOutput = lines (stdoutC ++ stderrC)
, timeUsed = usedTime ret })
else return
(ATPError "Syntax error in options"
, cfg
{ proofStatus = defProofStat
, resultOutput = ["Parse Error"]
, timeUsed = midnight
})
-- | call ekrh
runHyperProcess
:: String -- ^ problem
-> String -- ^ file name template
-> String -- ^ time limit
-> String -- ^ extra options
-> String -- ^ run text
-> IO (String, String, TimeOfDay) -- ^ out, err, diff time
runHyperProcess prob saveFile tl opts runTxtA = do
stpTmpFile <- getTempFile prob saveFile
let stpPrelFile = stpTmpFile ++ ".prelude.tme"
stpRunFile = stpTmpFile ++ ".run.tme"
writeFile stpPrelFile $ prelTxt tl ++ opts
writeFile stpRunFile runTxtA
t_start <- getHetsTime
(_, stdoutC, stderrC) <- executeProcess hyperS
[stpPrelFile, stpTmpFile, stpRunFile] ""
t_end <- getHetsTime
removeFile stpPrelFile
removeFile stpRunFile
removeFile stpTmpFile
return (stdoutC, stderrC, diffHetsTime t_end t_start)
-- | Mapping type from SZS to Hets
data HyperResult = HProved | HDisproved | HTimeout | HError | HMemout
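-- | Determine the prover outcome from the @% SZS status@ line of the output.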
getHyperResult :: [String] -> HyperResult
getHyperResult out = case map (takeWhile isAlpha . dropWhile isSpace)
$ mapMaybe (stripPrefix "% SZS status ") out of
[s] | szsProved s -> HProved
| szsDisproved s -> HDisproved
| szsTimeout s -> HTimeout
| szsMemoryOut s -> HMemout
_ -> HError
-- | examine SZS output
examineProof :: SoftFOLProverState
-> String
-> String
-> ProofStatus ProofTree
-> (ATPRetval, ProofStatus ProofTree)
examineProof sps stdoutC stderrC defStatus =
let outText = "\nOutput was:\n\n" ++ stdoutC ++ stderrC
provenStat = defStatus
{ usedAxioms = getAxioms sps
, proofTree = ProofTree stdoutC }
in case getHyperResult $ lines stdoutC of
HProved -> (ATPSuccess, provenStat { goalStatus = Proved True })
HTimeout -> (ATPTLimitExceeded, defStatus)
HDisproved -> (ATPSuccess, provenStat { goalStatus = Disproved })
HMemout -> (ATPError ("Out of Memory." ++ outText), defStatus)
HError -> ( ATPError ("Internal Error in ekrhyper." ++ outText)
, defStatus)
-- Consistency Checker
hyperConsChecker :: ConsChecker Sign Sentence () SoftFOLMorphism ProofTree
hyperConsChecker = (mkConsChecker hyperS () consCheck)
{ ccNeedsTimer = False }
{- |
Runs the krhyper consistency checker. The tactic script only contains a string for the
time limit.
-}
runTxtC :: String
runTxtC =
"% start derivation with the input received so far\n" ++
"#(run).\n\n" ++
"% print Hyper proof\n" ++
"%#(print_proof).\n\n" ++
"% print result and proof using SZS terminology;\n" ++
"% requires postprocessing with post_szs script for proper legibility\n" ++
"%#(print_szs_proof).\n\n" ++
"% Show the model\n" ++
"#(print_model).\n"
consCheck :: String
-> TacticScript
-> TheoryMorphism Sign Sentence SoftFOLMorphism ProofTree
-> [FreeDefMorphism SPTerm SoftFOLMorphism] -- ^ freeness constraints
-> IO (CCStatus ProofTree)
consCheck thName (TacticScript tl) tm freedefs =
case tTarget tm of
Theory sig nSens -> do
let proverStateI = spassProverState sig (toNamedList nSens) freedefs
saveFile = basename thName ++ ".tptp"
prob <- showTPTPProblemM thName proverStateI []
(stdoutC, stderrC, t_u) <-
runHyperProcess prob saveFile tl "" runTxtC
return CCStatus
{ ccResult = case getHyperResult $ lines stdoutC of
HProved -> Just True
HDisproved -> Just False
_ -> Nothing
, ccProofTree = ProofTree $ stdoutC ++ stderrC
, ccUsedTime = t_u }
| nevrenato/HetsAlloy | SoftFOL/ProveHyperHyper.hs | gpl-2.0 | 11,196 | 0 | 23 | 2,990 | 2,065 | 1,096 | 969 | 223 | 5 |
{-# LANGUAGE RankNTypes #-}
-- |
-- Module: BitStream
-- Copyright: (C) 2015-2018, Virtual Forge GmbH
-- License: GPL2
-- Maintainer: Hans-Christian Esperer <[email protected]>
-- Stability: experimental
-- Portability: portable
-- |
-- (De-)compress SAPCAR files
module Codec.Archive.SAPCAR.BitStream
( BitStream
, makeStream
, getBits
, consume
, getAndConsume
, Codec.Archive.SAPCAR.BitStream.isEmpty
) where
import Control.Monad.ST
import Control.Monad.State.Strict
import Data.Array.MArray
import Data.Array.ST
import Data.Bits
import Data.ByteString
import Data.ByteString.Char8
import Data.Char
import Data.STRef
import Data.Word
import Debug.Trace
import qualified Data.ByteString as S
-- |Opaque data type that contains a bitstream
data BitStream s = BitStreamy
{ bytes :: STUArray s Int Word8
, len :: Int
, number :: STRef s Int
, offset :: STRef s Int
, position :: STRef s Int
}
-- |Make a bitstream out of a ByteString
makeStream :: ByteString -> ST s (BitStream s)
makeStream theBytes = do
array <- newArray (0, S.length theBytes) 0 :: ST s (STUArray s Int Word8)
mapM_ (\i -> writeArray array i $ S.index theBytes i) [0..(S.length theBytes - 1)]
number <- newSTRef 0
offset <- newSTRef 0
position <- newSTRef 0
return BitStreamy
{ bytes=array
, len=S.length theBytes
, number=number
, offset=offset
, position=position }
-- |Return the specified number of bits from a BitStream,
-- converted to an integer using big endian coding
getBits :: BitStream s -> Int -> ST s Int
getBits _ 0 = return 0
getBits stream numBits = do
offs <- readSTRef $ offset stream
if numBits > offs
then case numBits - offs of
n | n < 9 -> refill False stream >> returnBits stream numBits
n | n < 17 -> refill True stream >> returnBits stream numBits
_ -> refill True stream >> getBits stream numBits
else returnBits stream numBits
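-- Extract the lowest @numBits@ bits of the accumulator without consuming them.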
returnBits :: BitStream s -> Int -> ST s Int
returnBits stream numBits = do
num <- readSTRef $ number stream
return $ num .&. ((1 `shiftL` numBits) - 1)
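-- Load one more byte (or two, when the first argument is True) from the backing array into the accumulator.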
refill :: Bool -> BitStream s-> ST s ()
{-# INLINE refill #-}
refill two stream = do
pos <- readSTRef $ position stream
num <- readSTRef $ number stream
offs <- readSTRef $ offset stream
if two
then refillTwoBits pos num offs stream
else refillOneBit pos num offs stream
refillOneBit :: Int -> Int -> Int -> BitStream s -> ST s ()
{-# INLINE refillOneBit #-}
refillOneBit pos num offs stream = do
newByte <- fromIntegral <$> readArray (bytes stream) pos
writeSTRef (position stream) $ pos + 1
let num' = num .|. newByte `shiftL` offs
writeSTRef (offset stream) $ offs + 8
writeSTRef (number stream) num'
refillTwoBits :: Int -> Int -> Int -> BitStream s -> ST s ()
{-# INLINE refillTwoBits #-}
refillTwoBits pos num offs stream = do
newByte1 <- fromIntegral <$> readArray (bytes stream) pos
newByte2 <- fromIntegral <$> readArray (bytes stream) (pos + 1)
writeSTRef (position stream) $ pos + 2
let num' = num .|. newByte1 `shiftL` offs .|. newByte2 `shiftL` (offs + 8)
writeSTRef (offset stream) $ offs + 16
writeSTRef (number stream) num'
-- |Consume the specified number of bits
consume :: BitStream s -> Int -> ST s ()
consume stream numBits = do
modifySTRef (offset stream) $ subtract numBits
modifySTRef (number stream) $ \n -> if numBits == 32 then 0 else n `shiftR` numBits
-- |A combination of the getBits and consume functions
getAndConsume :: BitStream s -> Int -> ST s Int
getAndConsume stream numBits = do
res <- getBits stream numBits
consume stream numBits
return res
-- | Is the BitStream empty?
isEmpty :: BitStream s -> ST s Bool
isEmpty bs = (==) (len bs) <$> readSTRef (position bs)
| VirtualForgeGmbH/hascar | src/Codec/Archive/SAPCAR/BitStream.hs | gpl-2.0 | 3,894 | 0 | 14 | 923 | 1,240 | 628 | 612 | 90 | 4 |
{-# LANGUAGE TemplateHaskell #-}
module TestHuffman where
import qualified Data.ByteString.Lazy as ByteString
import Test.QuickCheck
import Test.QuickCheck.Instances
import qualified Huffman
prop_convertWord32ToBits w n =
1 <= n && n <= 32 ==> length (Huffman.convertWord32ToBits w n) == fromIntegral n
prop_decodeEncode buf =
ByteString.length buf <= 100 ==> Huffman.decode (Huffman.encode buf) == buf
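-- This empty Template Haskell splice closes the declaration group so that 'forAllProperties' can discover the properties defined above.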
return []
tests :: IO Bool
tests = $forAllProperties (quickCheckWithResult stdArgs { maxSuccess = 1000 })
| authchir/SoSe17-FFP-haskell-http2-server | test/TestHuffman.hs | gpl-3.0 | 487 | 0 | 10 | 74 | 151 | 79 | 72 | -1 | -1 |
import Prelude
import System.Process
import QFeldspar.QDSL
{-
Syntax:
GHC Haskell (almost)
Typed Quotations
domain-specific constructs
=
fully applied variables
Primitives:
- constructors and destructors for each datatype
- overloaded arithmetic operators
- overloaded comparative operators (for base types)
- coercion functions (for base types)
- bitwise operators (for Word32)
-}
-- while loop in QFeldspar
-- while :: Rep a =>
-- (a -> Bool) -> (a -> a) -> a -> a
for :: Rep a => Qt (Word32 -> a -> (Word32 -> a -> a) -> a)
for = [|| \ n x0 f ->
snd (while (\ (i , _x) -> i < n)
(\ (i , x) -> (i + 1 , f i x))
(0 , x0)) ||]
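-- Quoted Fibonacci: computes the n-th Fibonacci number (with fib 0 == 0) via the 'for' combinator above.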
fib :: Qt (Word32 -> Word32)
fib = [|| \ n -> fst ($$for n (0, 1)
(\ _i -> \ (a, b) -> (b, a + b))) ||]
fibCodeC :: String
fibCodeC = qdsl fib
makeExec :: IO ()
makeExec = do let mainFunction = "int main (int argc, char *argv[]) {\n"++
" unsigned int inp;\n" ++
" sscanf(argv[1],\"%d\",&inp);\n" ++
" unsigned int out;\n" ++
" out = func(inp);\n" ++
" printf (\"%d\\n\",out);\n" ++
" return 0;\n" ++
" }"
writeFile "./Examples/DSLDISS/fib.c" (fibCodeC ++ "\n" ++ mainFunction)
_ <- runCommand
("gcc -o ./Examples/DSLDISS/fib ./Examples/DSLDISS/fib.c -lm -std=c99")
return ()
{- Above produces the following after macro expansion:
typedef struct {unsigned int fst;
unsigned int snd;} TplWrdWrd;
typedef struct {unsigned int fst;
TplWrdWrd snd;} TplWrdTplWrdWrd;
unsigned int func (unsigned int v0)
{
unsigned int v3;
TplWrdWrd v2;
TplWrdTplWrdWrd v1;
v1 = (TplWrdTplWrdWrd){.fst = 0u
,.snd = (TplWrdWrd)
{.fst = 0u
,.snd = 1u}};
while (v1.fst < v0)
{
v2 = v1.snd;
v3 = v2.snd;
v1 = (TplWrdTplWrdWrd) {.fst = v1.fst + 1u
,.snd = (TplWrdWrd)
{.fst = v3
,.snd = v2.fst + v3}};
}
return v1.snd.fst;
}
-}
{-
Types:
A,B,C ::= Word32 | Float | Bool | Complex_Float
| A -> B | (A , B) | Maybe A
| Array_Word32 A | Vector A
-}
{-
Array Word32 a
Constructor
mkArr :: Word32->(Word32->a)->Array Word32 a
Destructors
lnArr :: Array Word32 a -> Word32
ixArr :: Array Word32 a -> Word32 -> a
Vec a
Constructor
Vec :: Word32 -> (Word32 -> a) -> Vec a
Destructors by pattern matching
-}
toVec :: Rep a => Qt (Array Word32 a -> Vec a)
toVec = [|| \a -> Vec (lnArr a) (\i -> ixArr a i) ||]
fromVec :: Rep a => Qt (Vec a -> Array Word32 a)
fromVec = [|| \(Vec n g) -> mkArr n g ||]
minim :: Ord a => Qt (a -> a -> a)
minim = [|| \x y -> if x < y then x else y ||]
zipVec :: Qt ((a -> b -> c) -> Vec a -> Vec b -> Vec c)
zipVec = [|| \f -> \ (Vec m g) -> \ (Vec n h) ->
Vec ($$minim m n) (\i -> f (g i) (h i)) ||]
sumVec :: (Rep a, Num a) => Qt (Vec a -> a)
sumVec = [|| \(Vec n g) -> $$for n 0 (\i x -> x + g i) ||]
dotVec :: (Rep a, Num a) => Qt (Vec a -> Vec a -> a)
dotVec = [|| \u v -> $$sumVec ($$zipVec (*) u v) ||]
normVec :: Qt (Vec Float -> Float)
normVec = [|| \v -> sqrt ($$dotVec v v) ||]
normAry :: Qt (Array Word32 Float -> Float)
normAry = [|| \ v -> $$normVec ($$toVec v) ||]
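-- The generated C code for 'normAry'; the comment blocks below show the fusion derivation and the resulting C.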
testFusion :: String
testFusion = qdsl normAry
{-
dotVec (Vec m g) (Vec n h)
=
sumVec (zipVec (*) (Vec m g) (Vec n h))
=
sumVec (Vec (m `minim` n) (\i -> g i * h i))
=
for (m `minim` n) 0 (\i x -> x + g i * h i)
-}
{- Above produces the following after macro expansion:
typedef struct {unsigned int size;
float* elems;} AryFlt;
typedef struct {unsigned int fst;
float snd;} TplWrdFlt;
float func (AryFlt v0)
{
float v3;
unsigned int v2;
TplWrdFlt v1;
v1 = (TplWrdFlt) {.fst = 0u
,.snd = 0.0f};
while (v1.fst < v0.size)
{
v2 = v1.fst;
v3 = v0.elems[v2];
v1 = (TplWrdFlt) {.fst = v2 + 1u
,.snd = v1.snd + (v3 * v3)};
}
return sqrtf (v1.snd);
}
-}
{-
Pixel
Constructor
$$mkPixel :: Word32->Word32->Word32->Pixel
Destructors
$$red :: Pixel -> Word32
$$green :: Pixel -> Word32
$$blue :: Pixel -> Word32
Image
Constructor
mkImage :: Word32->Word32->
(Word32->Word32->Pixel)->Image
Destructors
$$heightImage :: Image -> Word32
$$widthImage :: Image -> Word32
$$getPixel :: Image->Word32->Word32->Pixel
Conversions
$$aryToImage :: Word32 -> Word32 ->
Array Word32 Word32 -> Image
$$imageToAry :: Image -> Array Word32 Word32
Compiler
compileImageProcessor ::
String -> Qt (Image -> Image) -> IO ()
-}
| shayan-najd/QFeldspar | Examples/DSLDISS/Live.hs | gpl-3.0 | 5,078 | 20 | 16 | 1,787 | 928 | 494 | 434 | -1 | -1 |
{- A new approach to solving Wadler's expression
   problem. Similar to data types a la carte, but without the problem of
   injections. The same technique was used for causal functions.
   We study here whether it is possible to add new operations, since
   in the implementation of causal functions only one was necessary.
-}
{-# LANGUAGE
DeriveFunctor
-- , FlexibleInstances
, GADTs
, ConstraintKinds
#-}
module ModularDatatypes.Existential where
import Auxiliary.Composition(res2)
{-
We create a modular signature functor by providing the arity
of each operation independently.
-}
data Val e = Val Int deriving Functor
data Add e = Add e e deriving Functor
data Mul e = Mul e e deriving Functor
data Fix f = In {out :: f (Fix f)}
type Alg f a = f a -> a
-- needs `ConstraintKinds` for `c`
data Abstract c x where
Abs :: (Functor f, c f) => f x -> Abstract c x
instance Functor (Abstract c) where
fmap m (Abs y) = Abs (fmap m y)
type Expr c = Fix (Abstract c)
-- all types are inferred with NoMonomorphismRestriction
inAbs :: (Functor f, c f) => f (Expr c) -> Expr c
inAbs = In . Abs
val :: (c Val) => Int -> Expr c
add :: (c Add) => Expr c -> Expr c -> Expr c
mul :: (c Mul) => Expr c -> Expr c -> Expr c
val = inAbs . Val
add = inAbs `res2` Add
mul = inAbs `res2` Mul
example1 :: (c Mul, c Val, c Add) => Expr c
example1 = val 80 `mul` (val 5 `add` val 4)
--------------------------------------------------
-- operations
class Eval f where
evalAlg :: f Int -> Int
eval :: Expr Eval -> Int
eval (In (Abs y)) = evalAlg $ fmap eval y
instance Eval Val where
evalAlg (Val x) = x
instance Eval Add where
evalAlg (Add x y) = x + y
instance Eval Mul where
evalAlg (Mul x y) = x * y
--------------------------------------------------
class Render f where
render' :: f (Expr Render) -> String
{-
instance Render (Abstract Render) where
render' (Abs y) = render' y
-- inferred
render :: Expr Render -> String
render (In x) = render' x
-}
render :: Expr Render -> String
render (In (Abs y)) = render' y
instance Render Val where
render' (Val i) = show i
instance Render Add where
render' (Add x y) = "(" ++ render x ++ " + " ++ render y ++ ")"
instance Render Mul where
render' (Mul x y) = "(" ++ render x ++ " * " ++ render y ++ ")"
example2 :: String
example2 = render example1 ++ " == " ++ show (eval example1)
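-- example2 == "(80 * (5 + 4)) == 720"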
--ex2 x = render x ++ " == " ++ show (eval x)
{- The one problem: when the values are polymorphic, they're not
   in constructor form, so each function call, like `render`
   or `eval`, would allocate the same isomorphic term representation.
   The only way around this is to specialise the value by giving it a type, but
   that restricts further use of it (the operations it can be used with).
-}
ex1Render :: Expr Render
ex1Render = example1
{- Now we cannot eval ex1Render, only render it. It would be good
   to be able to choose a specialisation a la carte, and thus
   build the constructor normal form only once, when all its usage is known.
-}
| balez/lambda-coinduction | ModularDatatypes/Existential.hs | gpl-3.0 | 2,992 | 0 | 11 | 665 | 785 | 412 | 373 | -1 | -1 |
module Functors where
data Barry t k p = Barry { yabba :: p, dabba :: t k }
instance Functor (Barry a b) where
fmap f (Barry {yabba = x, dabba = y}) = Barry {yabba = f x, dabba = y}
-- BOOOOM, head explosion
-- fmap (++ "!") getLine
-- fmap (fmap (*6) (subtract 1)) $ Just 4
-- fmap ((*6) . (subtract 1)) $ Just 4
data CMaybe a = CNothing | CJust Int a deriving (Show)
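-- Note: this instance breaks the functor laws, e.g. fmap id (CJust 0 'a') == CJust 1 'a', not CJust 0 'a'.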
instance Functor CMaybe where
fmap f CNothing = CNothing
fmap f (CJust counter x) = CJust (succ counter) (f x) | cevaris/LYAHFGG | src/Functors.hs | gpl-3.0 | 510 | 0 | 10 | 131 | 171 | 97 | 74 | 8 | 0 |
{- ============================================================================
| Copyright 2011 Matthew D. Steele <[email protected]> |
| |
| This file is part of Fallback. |
| |
| Fallback is free software: you can redistribute it and/or modify it under |
| the terms of the GNU General Public License as published by the Free |
| Software Foundation, either version 3 of the License, or (at your option) |
| any later version. |
| |
| Fallback is distributed in the hope that it will be useful, but WITHOUT |
| ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or |
| FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for |
| more details. |
| |
| You should have received a copy of the GNU General Public License along |
| with Fallback. If not, see <http://www.gnu.org/licenses/>. |
============================================================================ -}
module Fallback.State.Doodad
(Doodad(..), DoodadHeight(..),
Doodads, emptyDoodads, tickDoodads, paintDoodads,
appendDoodad, appendFloatingWord, appendFloatingNumber,
Message(..), makeMessage, decayMessage)
where
import Control.Applicative ((<$>))
import Control.Monad (forM_, when)
import Data.Ix (Ix)
import qualified Data.Map as Map
import Data.Maybe (fromMaybe, mapMaybe)
import Fallback.Constants (secondsPerFrame)
import Fallback.Data.Color (Tint)
import Fallback.Data.Point
import qualified Fallback.Data.TotalMap as TM
import Fallback.Draw
import Fallback.State.Resources
(Resources, WordTag, rsrcDigitsStripBig, rsrcWordSprite)
import Fallback.State.Terrain (prectRect)
-------------------------------------------------------------------------------
data Doodad = Doodad
{ doodadCountdown :: Int,
doodadHeight :: DoodadHeight,
doodadPaint :: Int -> IPoint -> Paint () }
data DoodadHeight = LowDood | MidDood | HighDood
deriving (Bounded, Enum, Eq, Ix, Ord)
-- delayDoodad :: Int -> Doodad -> Doodad
-- slowDownDoodad :: Int -> Doodad -> Doodad
-- composeDoodads :: Doodad -> Doodad -> Doodad
-------------------------------------------------------------------------------
data Floater = Floater
{ flCount :: Int,
flLimit :: Int,
flPaint :: IPoint -> Paint () }
makeFloater :: (IPoint -> Paint ()) -> Floater
makeFloater fn = Floater { flCount = 0, flLimit = 30, flPaint = fn }
-------------------------------------------------------------------------------
data Doodads = Doodads
{ dsDoodads :: TM.TotalMap DoodadHeight [Doodad],
dsFloaters :: Map.Map PRect [Floater] }
emptyDoodads :: Doodads
emptyDoodads = Doodads
{ dsDoodads = TM.make (const []),
dsFloaters = Map.empty }
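-- | Advance all doodads and floaters by one frame, discarding those whose time is up.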
tickDoodads :: Doodads -> Doodads
tickDoodads ds = ds { dsDoodads = doodads', dsFloaters = floaters' } where
doodads' = mapMaybe tickDoodad <$> dsDoodads ds
tickDoodad doodad =
let count' = doodadCountdown doodad - 1
in if count' < 1 then Nothing else Just doodad { doodadCountdown = count' }
floaters' = Map.mapMaybe updateFloaters $ dsFloaters ds
updateFloaters floaters =
case mapMaybe tickFloater floaters of { [] -> Nothing; fs -> Just fs }
tickFloater fl =
let count' = flCount fl + 1
in if count' >= flLimit fl then Nothing else Just fl { flCount = count' }
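-- | Paint all doodads at the given height layer; floaters are drawn together with the 'HighDood' layer.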
paintDoodads :: IPoint -> DoodadHeight -> Doodads -> Paint ()
paintDoodads cameraTopleft dh ds = do
let paintDoodad d = doodadPaint d (doodadCountdown d - 1) cameraTopleft
mapM_ paintDoodad $ TM.get dh $ dsDoodads ds
when (dh == HighDood) $ do
forM_ (Map.assocs $ dsFloaters ds) $ \(prect, floaters) -> do
let Point cx cy = rectCenter (prectRect prect) `pSub` cameraTopleft
forM_ floaters $ \floater -> do
flPaint floater $ Point cx (cy - flCount floater)
appendDoodad :: Doodad -> Doodads -> Doodads
appendDoodad doodad ds = ds { dsDoodads =
TM.adjust (doodadHeight doodad) (++ [doodad]) (dsDoodads ds) }
appendFloatingWord :: Resources -> WordTag -> PRect -> Doodads -> Doodads
appendFloatingWord resources wordTag = appendFloater floater where
floater = makeFloater (blitLoc sprite . LocCenter)
sprite = rsrcWordSprite resources wordTag
appendFloatingNumber :: Resources -> Tint -> Int -> PRect -> Doodads -> Doodads
appendFloatingNumber resources tint number = appendFloater floater where
floater = makeFloater (paintNumberTinted digits tint number . LocCenter)
digits = rsrcDigitsStripBig resources
appendFloater :: Floater -> PRect -> Doodads -> Doodads
appendFloater floater prect ds = ds { dsFloaters = floaters' } where
floaters' = Map.alter (Just . (floater :) . pushup step . fromMaybe [])
prect (dsFloaters ds)
pushup _ [] = []
pushup to (f : fs) =
let delta = to - flCount f
in if delta <= 0 then f : fs
else f { flCount = to, flLimit = flLimit f + delta } :
pushup (to + step) fs
step = 8 :: Int
-------------------------------------------------------------------------------
data Message = Message Double String
makeMessage :: String -> Message
makeMessage string = Message (2.3 + fromIntegral (length string) / 30) string
decayMessage :: Message -> Maybe Message
decayMessage (Message t s) =
let t' = t - secondsPerFrame in
if t' <= 0 then Nothing else Just (Message t' s)
-------------------------------------------------------------------------------
| mdsteele/fallback | src/Fallback/State/Doodad.hs | gpl-3.0 | 5,866 | 0 | 22 | 1,481 | 1,353 | 731 | 622 | 87 | 4 |
{-# LANGUAGE TypeSynonymInstances , FlexibleInstances #-}
module HMeans.Data where
import HMeans.Common
import HMeans.Algebra
import qualified Data.IntSet as ISet
import qualified Data.IntMap.Strict as IMap
import qualified Data.Map.Strict as Map
import Debug.Trace
class Data a where
getId :: a b -> Int
toCluster :: a b -> Cluster b
instance Data BasicData where
getId = snd . getData
toCluster (BasicData (d, i)) = Cluster 1 d d (ISet.singleton i) ISet.empty
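-- | Wrap each datum in 'BasicData', pairing it with a 1-based index.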
toBasicData :: [d] -> [BasicData d]
toBasicData a = map BasicData . flip zip [1..] $ a
| ehlemur/HMeans | src/HMeans/Data.hs | gpl-3.0 | 605 | 0 | 9 | 141 | 187 | 104 | 83 | 16 | 1 |
{-# language TypeFamilies #-}
module M where
data family F a
data instance F Int = D Int
| lspitzner/brittany | data/Test214.hs | agpl-3.0 | 89 | 0 | 6 | 18 | 24 | 15 | 9 | 4 | 0 |
{-
passman
Copyright (C) 2018-2021 Jonathan Lamothe
<[email protected]>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this program. If not, see
<https://www.gnu.org/licenses/>.
-}
module Spec.PWSetService (tests) where
import qualified Data.Map as M
import System.Random (mkStdGen, StdGen)
import Test.HUnit (Test (..), (~?=))
import Password
tests :: Test
tests = TestLabel "pwSetService" $ TestList
[ addToEmpty, addToNonEmpty, addToExisting ]
addToEmpty :: Test
addToEmpty = tests' "empty database" newPWDatabase 1
addToNonEmpty :: Test
addToNonEmpty = tests' "non-empty database" nonEmpty 3
addToExisting :: Test
addToExisting = tests' "existing database" existing 3
tests' :: String -> PWDatabase -> Int -> Test
tests' label db size = TestLabel label $ TestList
[ dbSize result size
, find result
] where
result = pwSetService "foo" foo db
dbSize :: M.Map String PWData -> Int -> Test
dbSize db expect = TestLabel "database size" $
length db ~?= expect
find :: M.Map String PWData -> Test
find db = TestLabel "record" $
M.lookup "foo" db ~?= Just foo
nonEmpty :: M.Map String PWData
nonEmpty = M.fromList
[ ( "bar", bar )
, ( "baz", baz )
]
existing :: M.Map String PWData
existing = M.fromList
[ ( "foo", foo' )
, ( "bar", bar )
, ( "baz", baz )
]
foo :: PWData
g1 :: StdGen
(foo, g1) = newPWData g
foo' :: PWData
g2 :: StdGen
(foo', g2) = newPWData g1
bar :: PWData
g3 :: StdGen
(bar, g3) = newPWData g2
baz :: PWData
(baz, _) = newPWData g3
g :: StdGen
g = mkStdGen 1
--jl
| jlamothe/passman | test/Spec/PWSetService.hs | lgpl-3.0 | 2,071 | 0 | 8 | 401 | 489 | 268 | 221 | 47 | 1 |
-----------------------------------------------------------------------------
--
-- Module : TestEncoder
-- Description :
-- Copyright   :  (c) Tobias Reinhardt, 2015 <[email protected]>
-- License : Apache License, Version 2.0
--
-- Maintainer : Tobias Reinhardt <[email protected]>
-- Portability : tested only on linux
-- |
--
-----------------------------------------------------------------------------
module TestEncoder(
describtion
) where
import IniConfiguration
import Test.Hspec (it, shouldBe)
import System.IO (openFile, hClose, IOMode (WriteMode))
describtion = do
it "Properties of default section are the first elements to be decoded" $ do
let result = encode [("section1", [("op1", "val1"), ("op2", "val2")]),
("", [("op3", "val3"), ("op4", "val4")]),
("section2", [("op5", "val5")])]
result `shouldBe` "op3=val3\nop4=val4\n\n[section1]\n\nop1=val1\nop2=val2\n\n[section2]\n\nop5=val5\n"
it "Can directly write to a file" $ do
h <- openFile "tests/output/example.ini" WriteMode
hClose h
result <- writeConfiguration "tests/output/example.ini" [("section1", [("op1", "val1"), ("op2", "val2")]),
("", [("op3", "val3"), ("op4", "val4")]),
("section2", [("op5", "val5")])]
content <- readFile "tests/output/example.ini"
True `shouldBe` True
| tobiasreinhardt/show | IniConfiguration/tests/TestEncoder.hs | apache-2.0 | 1,449 | 8 | 17 | 326 | 304 | 183 | 121 | 19 | 1 |
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE TemplateHaskell #-}
module Seraph.Free where
-------------------------------------------------------------------------------
import Control.Exception
import Prelude hiding (log)
import System.Posix.IO (OpenFileFlags, OpenMode)
import System.Posix.Process (ProcessStatus)
import System.Posix.Signals (Signal)
import System.Posix.Types (Fd, GroupID, ProcessID, UserID)
-------------------------------------------------------------------------------
import Control.Monad.Free
import Control.Monad.Free.TH
-------------------------------------------------------------------------------
data SeraphView next = Log String next deriving (Functor)
makeFree ''SeraphView
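-- Note (annotation, not in the original source): the 'makeFree' splice above
-- should generate one action per constructor, roughly
-- log :: MonadFree SeraphView m => String -> m (),
-- which is why Prelude's 'log' is hidden in the imports.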
-------------------------------------------------------------------------------
type SeraphViewM = Free SeraphView
-------------------------------------------------------------------------------
-- I think these "IO" ops here need to come back as SeraphChildM
data SeraphChild next = SetUserID UserID next
| SetGroupID GroupID next
| ChangeWorkingDirectory FilePath next
| ExecuteFile String [String] [(String, String)] (Either IOException ProcessID -> next)
| OpenFd FilePath OpenMode OpenFileFlags (Either IOException Fd -> next)
| DupTo Fd Fd next deriving (Functor)
makeFree ''SeraphChild
-------------------------------------------------------------------------------
type SeraphChildM = Free SeraphChild
-------------------------------------------------------------------------------
data SeraphProcess next = SignalProcess Signal ProcessID next
| WaitSecs Int next
| GetUserEntryForName String (Maybe UserID -> next)
| GetGroupEntryForName String (Maybe GroupID -> next)
| ForkProcess (SeraphChildM ()) (ProcessID -> next)
| GetProcessStatus ProcessID (Maybe ProcessStatus -> next)
deriving (Functor)
makeFree ''SeraphProcess
-------------------------------------------------------------------------------
type SeraphProcessM = Free SeraphProcess
| MichaelXavier/Seraph | src/Seraph/Free.hs | bsd-2-clause | 2,375 | 0 | 9 | 558 | 372 | 216 | 156 | 32 | 0 |
{-# OPTIONS_GHC -fno-warn-missing-import-lists #-}
{-|
Module : Reflex.Dom.HTML5.Component
Description : Components providing pre-defined functionality.
Copyright : (c) gspia 2017 -
License : BSD
Maintainer : gspia
= Components
A set of components with some helper functions to ease the writing
of user interfaces.
-}
module Reflex.Dom.HTML5.Component
( module Reflex.Dom.HTML5.Component.Table
, module Reflex.Dom.HTML5.Component.Tree
) where
import Reflex.Dom.HTML5.Component.Table
import Reflex.Dom.HTML5.Component.Tree
| gspia/reflex-dom-htmlea | lib/src/Reflex/Dom/HTML5/Component.hs | bsd-3-clause | 556 | 0 | 5 | 90 | 46 | 35 | 11 | 6 | 0 |
{-# LANGUAGE CPP, ForeignFunctionInterface #-}
#ifdef STMHASKELL
import Control.STMHaskell.STM --full abort STM (STM Haskell)
#elif defined(FABORT)
import Control.Full.STM --full abort STM (NoRec)
#elif defined(ORDERED)
import Control.Ordered.STM
#elif defined(CPSFULL)
import Control.CPSFull.STM
#elif defined(PTL2)
import Control.PartialTL2.STM
#elif defined(TL2)
import Control.FullTL2.STM
#elif defined(CHUNKED)
import Control.Chunked.STM
#elif defined(PABORT)
import Control.Partial.STM
#elif defined(FF)
import Control.Ordered.STM
#else
#error No STM Specified
#endif
import Prelude hiding (lookup)
import GHC.Conc(numCapabilities, forkOn)
import Control.Concurrent.MVar
import Control.Exception
import Dump
import Data.Map(Map, empty)
import Text.Printf
import Control.Monad(foldM_)
data STMList a = Head (TVar (STMList a))
| Null
| Node a (TVar (STMList a))
newList :: IO (TVar (STMList a))
newList = do
nullPtr <- newTVarIO Null
l <- newTVarIO (Head nullPtr)
return(l)
lookup :: Eq a => TVar (STMList a) -> a -> STM Bool
lookup l x = do
y <- readTVar l
case y of
Head t -> lookup t x
Null -> return(False)
Node hd tl ->
if x == hd
then return(True)
else lookup tl x
insert :: TVar (STMList a) -> a -> IO()
insert l x = do
raw <- readTVarIO l
case raw of
Head t -> do
newNode <- newTVarIO (Node x t)
writeTVarIO l (Head newNode)
loop 0 l = return()
loop i l = atomically(lookup l 100000000) >>= \_ -> loop (i-1) l
main = do
stmList <- newList
foldM_ (\b -> \a -> insert stmList a) () [0..10000]
putStrLn "Done initializing"
start <- getTime
loop 1000 stmList
end <- getTime
printf "Time = %0.3f\n" (end - start :: Double)
printStats
return()
foreign import ccall unsafe "hs_gettime" getTime :: IO Double
| ml9951/ghc | libraries/pastm/examples/Synthetic.hs | bsd-3-clause | 2,029 | 0 | 15 | 577 | 572 | 285 | 287 | -1 | -1 |
{- | This module contains the definition of the TimeStepSummary type class.
-}
module TimeStepSummary (
TimeStepSummary(..)
) where
{- | The TimeStepSummary type class summarizes all of the operations that
were carried out on the object in a single time step. Summaries should
be capable of being "concatenated" together.
-}
class (Monoid s) => TimeStepSummary s where
-- | This indicates that the object did not do anything during the operation.
nothing :: s
nothing = mempty
-- | This indicates that the object was killed during the time step. If the
-- object cannot die, this will be nothing by default.
killed :: s
killed = nothing
-- | This indicates that the object is dying, its remaining life span decremented
-- by one time step. If the object cannot die, this will be nothing by default.
dying :: s
dying = nothing
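-- A minimal sketch of an instance (illustrative, not part of the original
-- module; assumes a base where Monoid is a subclass of Semigroup): a summary
-- that counts how many kills and dying ticks occurred, so concatenating
-- summaries simply adds the counts.
data ExampleSummary = ExampleSummary
  { exKilled :: Int
  , exDying  :: Int
  } deriving (Show, Eq)

instance Semigroup ExampleSummary where
  ExampleSummary k1 d1 <> ExampleSummary k2 d2 =
    ExampleSummary (k1 + k2) (d1 + d2)

instance Monoid ExampleSummary where
  mempty = ExampleSummary 0 0

instance TimeStepSummary ExampleSummary where
  killed = ExampleSummary 1 0
  dying  = ExampleSummary 0 1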
| ekinan/HaskellTurtleGraphics | src/TimeStep/TimeStepSummary.hs | bsd-3-clause | 889 | 0 | 6 | 218 | 68 | 43 | 25 | 9 | 0 |
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
module Data.Sparse where
import qualified Data.List as L
import Debug.Trace
--import qualified Data.Vector.Generic as V
import qualified Data.Vector.Unboxed as U
import Prelude hiding (pred, (!!))
type Index = (Int, Int)
type Size = Index
class (Num val, Pattern p) => MatrixClass m p val where
-- multiply a matrix by a vector
(*:) :: m p val -> U.Vector val -> U.Vector val
generateMatrix :: (Num val, Pattern p) => p -> (Index -> val) -> m p val
-- get a row:
(!!) :: m p val -> Int -> U.Vector val
{-(+) :: m p val -> m p val -> m p val-}
{-(-) :: m p val -> m p val -> m p val-}
class Pattern p where
union :: p -> p -> p
eye :: Size -> p
index :: p -> Int -> Index
size :: p -> Size
generatePattern :: Size -> (Index -> Bool) -> p
nnz :: p -> Int
data CompressedSparseRow = CompressedSparseRow { _colIndexes :: !(U.Vector Int)
--the start index in values and
                                               --colIndexes of each row.
, _rowIndexes :: !(U.Vector Int)
--if length rowIndexes
-- is < (fst size), then
-- there are empty rows.
, _sizeCSR :: !Size} deriving Show
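-- Illustrative value (not part of the original module): the pattern that
-- 'generatePattern' should produce for a 3x4 matrix whose nonzero positions
-- are (0,1), (0,3) and (2,0). Row r spans the slice of _colIndexes starting
-- at _rowIndexes U.! r, with nnz acting as the implicit final row boundary.
exampleCSR :: CompressedSparseRow
exampleCSR = CompressedSparseRow
  { _colIndexes = U.fromList [1, 3, 0]
  , _rowIndexes = U.fromList [0, 2, 2]
  , _sizeCSR    = (3, 4)
  }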
data RowMajorDense = RowMajorDense { _sizeRMD :: !Index } deriving Show
instance Pattern RowMajorDense where
eye s = RowMajorDense { _sizeRMD = s }
index p = \i -> (i `div` snd (size p), i `mod` snd (size p))
union p1 p2 | size p1 == size p2 = RowMajorDense { _sizeRMD = size p1 }
| otherwise = undefined
size = _sizeRMD
generatePattern s _ = RowMajorDense { _sizeRMD = s }
nnz p = (fst $ size p) * (snd $ size p)
instance Pattern CompressedSparseRow where
eye s = CompressedSparseRow { _colIndexes = U.generate (min (fst s) (snd s)) (\i -> i)
, _rowIndexes = U.generate (min (fst s) (snd s)) (\i -> i)
, _sizeCSR = s }
index p = \i -> let c = _colIndexes p U.! i
rs = _rowIndexes p
in (U.ifoldl' ( \le i' e -> if e <= i then i' else le) (fst $ size p) rs, c)
union p1 p2 | size p1 /= size p2 = undefined
| size p1 == size p2 = CompressedSparseRow { _colIndexes = U.concat colslist
, _rowIndexes = U.scanl' (+) 0 $ U.fromList $ map U.length colslist
, _sizeCSR = size p1}
where
colslist = map (\ row -> mergeUnion (partcolIndexes row p1) (partcolIndexes row p2)) [0..(r p1)]
partcolIndexes row p = U.slice (slice p row) (slice p (row + 1) - slice p row) (_colIndexes p)
slice p = (U.!) (U.snoc (_rowIndexes p) $ r p)
r p = fst $ size p
size = _sizeCSR
generatePattern s pred = CompressedSparseRow { _colIndexes = L.foldl1' (U.++) filteredvalues
, _rowIndexes = U.prescanl (+) 0 $ U.fromList $ map U.length $ filteredvalues
, _sizeCSR = s}
where filteredvalues = map (\i -> snd $ U.unzip (U.filter pred i)) $ allvalues
allvalues = map (\r -> U.zip (U.replicate cols r) (U.enumFromN 0 cols)) $ [0..rows-1]
rows = fst s
cols = snd s
nnz = U.length . _colIndexes
traced :: (Show a) => a -> a
traced a = traceShow a a
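-- | Merge two ascending vectors into their sorted union (an element present in
-- both appears once), e.g. mergeUnion (U.fromList [1,3,5]) (U.fromList [3,4])
-- yields U.fromList [1,3,4,5].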
mergeUnion :: (U.Unbox val, Ord val) => U.Vector val -> U.Vector val -> U.Vector val
mergeUnion a b | U.null a = b
| U.null b = a
| (U.head a) == (U.head b) = (U.head a) `U.cons` mergeUnion (U.tail a) (U.tail b)
| (U.head a) < (U.head b) = (U.head a) `U.cons` mergeUnion (U.tail a) b
| (U.head a) > (U.head b) = (U.head b) `U.cons` mergeUnion a (U.tail b)
| otherwise = undefined
data Matrix p val = Matrix { _values :: !(U.Vector val), _pattern :: p }
instance (Show val, U.Unbox val, Num val) => MatrixClass Matrix CompressedSparseRow val where
(*:) m v = U.generate r rowCalc
where rowCalc row = U.sum $ U.ifilter (\ i _ -> i `U.elem` partcolIndexes row) $ U.accumulate_ (*) v (partcolIndexes row) (partValues row)
partValues row = U.slice (slice row) (slice (row + 1) - slice row) (_values m)
partcolIndexes row = U.slice (slice row) (slice (row + 1) - slice row) (_colIndexes pat)
slice row = U.snoc (_rowIndexes pat) (nnz pat) U.! row
r = fst $ size pat
pat = _pattern m
generateMatrix p f = Matrix { _values = U.generate (nnz p) $ f . (index p), _pattern = p }
(!!) m row = let vec = U.replicate c 0 in
U.update_ vec partcolIndexes partValues
where
partValues = U.slice (slice row) (slice (row + 1) - slice row) (_values m)
partcolIndexes = U.slice (slice row) (slice (row + 1) - slice row) (_colIndexes p)
slice = (U.!) (U.snoc (_rowIndexes p) $ nnz p)
c = snd $ size p
p = _pattern m
{-(+) m1 m2 = Matrix { _values = -}
{-, _pattern = newpattern}-}
{-where v1 = generateMatrix newpattern (\(r,c) -> )-}
{-newpattern = union (_pattern m1) (_pattern m2)-}
instance (Show val, U.Unbox val, Num val) => Show (Matrix CompressedSparseRow val) where
show m = concatMap (\x -> showVec (m !! x) ++ "\n") [0..fst (size $ _pattern m) - 1]
where showVec v = U.foldl' (\a b -> a ++ ", " ++ show b) ("[" ++ show (U.head v)) (U.tail v) ++ "]"
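-- Usage sketch (illustrative, not part of the original module): build a 2x2
-- identity pattern, fill every stored entry with 1.0, and multiply by a
-- vector; for this identity matrix the product should equal the input,
-- i.e. U.fromList [10.0, 20.0].
exampleProduct :: U.Vector Double
exampleProduct = m *: U.fromList [10.0, 20.0]
  where
    m :: Matrix CompressedSparseRow Double
    m = generateMatrix (eye (2, 2) :: CompressedSparseRow) (const 1.0)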
{---This might want to be in a function...-}
{-imap :: (U.Unbox a, U.Unbox b) => (Index -> a -> b) -> CSparseRow a -> CSparseRow b-}
{-imap f m = m {values = U.map (uncurry f) (U.zip index (values m)) }-}
{-where index :: U.Vector Index-}
{-index = U.zip rows (colIndexes m)-}
{-rows = U.convert $ V.foldl1' (V.++) $ V.zipWith (\a b -> V.replicate (b - a) a) ri (V.drop 1 ri)-}
{-ri = V.convert $ U.snoc (rowIndexes m) (fst $ size m)-}
| spott/sparse | src/Data/Sparse.hs | bsd-3-clause | 6,823 | 0 | 16 | 2,715 | 2,194 | 1,146 | 1,048 | 95 | 1 |
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE CPP #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE BangPatterns #-}
module ContractGen where
import Data.Aeson
import Data.Text as T
import qualified Data.ByteString.Lazy as BSL
import Data.HashMap.Strict.InsOrd as HMSIns
import Language.Haskell.Exts as LHE hiding (OPTIONS)
import Data.Vector.Sized as SV hiding ((++), foldM, forM, mapM)
import Safe
import Data.Finite.Internal
-- import Network.HTTP.Types.Method
import Data.Maybe
import Data.List.Split as DLS (splitOn)
import qualified Data.List as DL
import qualified Data.Map.Lazy as Map
import qualified Data.Char as Char
import Control.Monad
import Control.Monad.Trans.State.Strict
import Control.Monad.IO.Class
import System.Directory
-- import Data.String.Interpolate
import Data.Swagger hiding (get, paramSchema)
import Data.Yaml (decodeEither')
import Control.Applicative ((<|>))
import ContractGenTypes
import Constants
import qualified Data.HashMap.Strict as HMS
import qualified SwaggerGen as SG hiding (Tuple)
import Debug.Trace as DT
runDefaultPathCodeGen :: IO ()
runDefaultPathCodeGen = runCodeGen "sampleFiles/swagger-petstore-noXml.json" "/Users/kahlil/projects/ByteAlly/tmp/" "swagger-gen-proj"
runCodeGen :: FilePath -> FilePath -> String -> IO ()
runCodeGen swaggerJsonInputFilePath outputPath projectName = do
let projectFolderGenPath = outputPath ++ projectName ++ "/"
createDirectoryIfMissing True (projectFolderGenPath ++ "src/")
(globalModuleNames, newTypeCreationList) <- runStateT (readSwaggerGenerateDefnModels swaggerJsonInputFilePath projectFolderGenPath projectName) HMS.empty
createNewTypes newTypeCreationList projectFolderGenPath globalModuleNames
writeFile (projectFolderGenPath ++ "src/CommonTypes.hs") commonTypesModuleContent
where
createNewTypes :: HMS.HashMap LevelInfo [TypeInfo] -> FilePath -> [String] -> IO ()
createNewTypes stateHM genPath globalModules = do
createdModuleNames <- HMS.foldlWithKey' (writeGeneratedTypesToFile genPath) (pure globalModules) stateHM
-- TODO : Setting xmlImport to False for now by default since it's not in scope!
writeCabalAndProjectFiles genPath projectName False (DL.nub createdModuleNames)
createTypeDeclFromCDT :: [Decl SrcSpanInfo] -> (CreateDataType, NamingCounter) -> [Decl SrcSpanInfo]
createTypeDeclFromCDT accValue (tyInfo, mNameCounter) = do
let typeInfo = addCtrToConsName mNameCounter tyInfo
case typeInfo of
ProductType newData _ -> do
let toParamInstances =
case (DL.isInfixOf "QueryParam" $ mName newData) of
True -> [defaultToParamInstance (mName newData) "QueryParam"]
False ->
case (DL.isInfixOf "FormParam" $ mName newData) of
True -> [defaultToParamInstance (mName newData) "FormParam"]
False -> []
let (modifiedRecords, dataDecl) = dataDeclaration (DataType noSrcSpan) (mName newData) (Right $ mRecordTypes newData) ["P.Eq", "P.Show", "P.Generic"]
-- TODO: Commenting out all Instances for now
-- let jsonInsts = jsonInstances (mName newData) modifiedRecords
accValue ++ [dataDecl] ++ [] ++ [] ++ [] -- [defaultToSchemaInstance (mName newData)]
SumType (BasicEnum tName tConstructors ogConstructors) -> do
let toParamEncodeParamQueryParamInstance = [toParamQueryParamInstance tName] ++ [encodeParamSumTypeInstance tName (DL.zip tConstructors ogConstructors ) ]
let fromParamDecodeParamQueryParamInstance = [fromParamQueryParamInstance tName] ++ [decodeParamSumTypeInstance tName (DL.zip ogConstructors tConstructors ) ]
let toSchemaInstances = toSchemaInstanceForSumType tName (DL.zip ogConstructors tConstructors )
accValue ++
([enumTypeDeclaration tName tConstructors ["P.Eq", "P.Show","P.Generic"] ])
-- ++ (instanceDeclForShow tName)
-- ++ (instanceDeclForJSONForSumType tName)
-- ++ toParamEncodeParamQueryParamInstance
-- ++ fromParamDecodeParamQueryParamInstance
-- ++ toSchemaInstances)
SumType (ComplexSumType tName constructorTypeList ) -> do
accValue ++
([complexSumTypeDecl tName constructorTypeList ["P.Eq", "P.Generic", "P.Show"] ])
-- ++ jsonInstances tName [] )
HNewType tName alias _ -> accValue ++ [snd $ dataDeclaration (NewType noSrcSpan) (tName) (Left alias) ["P.Eq", "P.Show", "P.Generic"] ]
where
addCtrToConsName :: Maybe Int -> CreateDataType -> CreateDataType
addCtrToConsName mCounterVal cdt =
case mCounterVal of
Just counterVal ->
case cdt of
SumType (BasicEnum consName names ogNames) -> SumType (BasicEnum (consName ++ (show counterVal) ) names ogNames)
SumType (ComplexSumType consName consList ) -> SumType (ComplexSumType (consName ++ (show counterVal) ) consList )
ProductType (NewData consName recList ) ogName -> ProductType (NewData (consName ++ (show counterVal) ) recList ) ogName
HNewType consName ty ogName -> HNewType (consName ++ (show counterVal) ) ty ogName
Nothing -> cdt
traceVal :: Maybe Int -> CreateDataType -> String
traceVal mCounterVal cdt =
case mCounterVal of
Just x -> "CDT : " ++ (show cdt) ++ "\t Counter Val : " ++ (show x)
Nothing -> ""
qualifiedGlobalImports :: [String] -> [(String, (Bool, Maybe (ModuleName SrcSpanInfo)))]
qualifiedGlobalImports moduleList =
let moduleWithQuals = fmap (\modName ->
if DL.isInfixOf globalDefnsModuleName modName
then (modName, globalDefnsQualName)
else if DL.isInfixOf globalRespTypesModuleName modName
then (modName, globalRespTypesQualName)
else if DL.isInfixOf globalParamTypesModuleName modName
then (modName, globalParamsQualName)
else error "Expected one of the 3 Global Defn modules!") (DL.nub moduleList)
-- TODO : find out how duplicate entries are coming into imports so we can remove the above call to `nub`
in fmap (\(modName, qualName) -> (modName, (True, Just $ ModuleName noSrcSpan qualName) ) ) moduleWithQuals
readSwaggerGenerateDefnModels :: FilePath -> FilePath -> String -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO [String]
readSwaggerGenerateDefnModels swaggerJsonInputFilePath contractOutputFolderPath projectName = do
swaggerJSONContents <- liftIO $ BSL.readFile swaggerJsonInputFilePath
let decodedVal = eitherDecode swaggerJSONContents <|> either (Left . show) Right (decodeEither' (BSL.toStrict swaggerJSONContents))
case decodedVal of
Left errMsg -> error $ errMsg -- "Panic: not a valid JSON or yaml"
Right (swaggerData :: Swagger) -> do
(apiNameHs, contractDetails) <- getSwaggerData swaggerData
let xmlImport = needsXmlImport contractDetails
newDefnTypesHM <- generateSwaggerDefinitionData (_swaggerDefinitions swaggerData)
globalResponseTypesHM <- generateGlobalResponseData (_swaggerResponses swaggerData)
globalParamTypesHM <- generateGlobalParamData (_swaggerParameters swaggerData)
-- We ignore the keys (level info) and just concat all the CDTs
-- let newDefnCDTList = fmap (getInnerTyFromTypeInfo) $ DL.concat $ HMS.elems newDefnTypesHM
-- modify' (\stateValue -> DT.trace ("Adding Defns to State!") $ HMS.unionWith (++) stateValue newDefnTypesHM );
let langExts = ["TypeFamilies", "MultiParamTypeClasses", "DeriveGeneric", "TypeOperators", "DataKinds", "TypeSynonymInstances", "FlexibleInstances"]
let contractImports = ["Types", "Data.Int", "Data.Text"]
let qualifiedImportsForContract =
let webApiImports = [ ("WebApi.Contract", (True, Just $ ModuleName noSrcSpan "W"))
, ("WebApi.Param", (True, Just $ ModuleName noSrcSpan "W"))
]
webApiXmlImp = ("WebApi.XML", (True, Just $ ModuleName noSrcSpan "W"))
in if xmlImport
then webApiXmlImp : webApiImports
else webApiImports
let hContractModule =
Module noSrcSpan
(Just $ ModuleHead noSrcSpan (ModuleName noSrcSpan "Contract") Nothing Nothing)
(fmap languageExtension langExts)
-- ((fmap (\modName -> moduleImport (modName,(False, Nothing)) ) contractImports) -- CommonTypes
-- ++ fmap moduleImport qualifiedImportsForContract)
-- (generateContractBody apiNameHs contractDetails)
-- TODO : Setting contract to be empty for now
[]
[]
liftIO $ writeFile (contractOutputFolderPath ++ "src/Contract.hs") $ prettyPrint hContractModule
-- let qualifiedImportsForTypes =
-- [("Data.ByteString.Char8", (True, Just $ ModuleName noSrcSpan "ASCII")),
-- ("Data.HashMap.Lazy", (True, Just $ ModuleName noSrcSpan "HM") ),
-- ("Data.Swagger", (True, Just $ ModuleName noSrcSpan "SW") ),
-- ("Data.Text", (True, Just $ ModuleName noSrcSpan "P") ),
-- ("Data.Int", (True, Just $ ModuleName noSrcSpan "P") ),
-- ("Data.Time.Clock", (True, Just $ ModuleName noSrcSpan "P") ),
-- ("GHC.Generics", (True, Just $ ModuleName noSrcSpan "P") ),
-- ("Data.Aeson", (True, Just $ ModuleName noSrcSpan "P") ),
-- ("WebApi.Param", (True, Just $ ModuleName noSrcSpan "P") ),
-- ("Data.Text.Encoding", (True, Just $ ModuleName noSrcSpan "P") ),
-- ("Prelude", (True, Just $ ModuleName noSrcSpan "P") )
-- ]
-- let hTypesModule =
-- Module noSrcSpan
-- (Just $ ModuleHead noSrcSpan (ModuleName noSrcSpan "Types") Nothing Nothing)
-- (fmap languageExtension ["TypeFamilies", "MultiParamTypeClasses", "DeriveGeneric", "TypeOperators", "DataKinds", "TypeSynonymInstances", "FlexibleInstances", "DuplicateRecordFields", "OverloadedStrings"])
-- (fmap moduleImport
-- ( (DL.zip ["Prelude ()",
-- "Data.Swagger.Schema",
-- "CommonTypes",
-- "Control.Lens",
-- "Data.Swagger.Internal.Schema",
-- "Data.Swagger.ParamSchema",
-- -- TODO : This is kind of a hack!
-- "Data.Swagger.Internal hiding (Tag)"
-- ] (cycle [(False, Nothing)]) ) ++ qualifiedImportsForTypes ) ) --"GHC.Generics", "Data.Time.Calendar"
-- (createDataDeclarations newDefnCDTList)
let respsAndDefns = HMS.unionWith (++) globalResponseTypesHM newDefnTypesHM
let globalTypesWithParams = HMS.unionWith (++) respsAndDefns globalParamTypesHM
liftIO $ do
-- writeFile (contractOutputFolderPath ++ "src/Types.hs") $ prettyPrint hTypesModule ++ "\n\n"
HMS.foldlWithKey' (writeGeneratedTypesToFile contractOutputFolderPath) (pure []) globalTypesWithParams
-- createdModuleNames <-
where
needsXmlImport :: [ContractDetails] -> Bool
needsXmlImport = flip DL.foldl' False (\accBool cDetail ->
case accBool of
True -> True
False ->
let methodMap = methodData cDetail
in Map.foldl' (\innerAcc apiDetails -> hasXML apiDetails || innerAcc) accBool methodMap )
-- createDataDeclarations :: [CreateDataType] -> [Decl SrcSpanInfo]
-- createDataDeclarations = DL.foldl' createTypeDeclFromCDT []
-- (\accValue cNewTy ->
-- case cNewTy of
-- ProductType newDataInfo ->
-- let (modifiedRecords, dataDecl) = dataDeclaration (DataType noSrcSpan) (mName newDataInfo) (Right $ mRecordTypes newDataInfo) ["P.Eq", "P.Show", "P.Generic"]
-- jsonInsts = jsonInstances (mName newDataInfo) modifiedRecords
-- in accValue ++ [dataDecl] ++ jsonInsts ++ [defaultToSchemaInstance (mName newDataInfo)]
-- HNewType tName alias -> (snd $ dataDeclaration (NewType noSrcSpan) (tName) (Left alias) ["Eq", "Show", "Generic"] ):accValue
-- SumType _ -> error $ "Encountered a Sum Type creation while constructing initial types for Types.hs "
-- ++ "\n The value is : " ++ (show cNewTy) ) [] newDataList
writeGeneratedTypesToFile :: FilePath -> IO [String] -> LevelInfo -> [TypeInfo] -> IO [String]
writeGeneratedTypesToFile genPath ioModuleNames levelInfo typeInfos = do
moduleNames <- ioModuleNames
let (typesModuleDir, typesModuleName) =
case levelInfo of
Global gType ->
case gType of
DefinitionTy -> (genPath ++ globalTypesModulePath, hsModuleToFileName globalDefnsModuleName)
ResponseTy -> (genPath ++ globalTypesModulePath, hsModuleToFileName globalRespTypesModuleName )
ParamTy -> (genPath ++ globalTypesModulePath, hsModuleToFileName globalParamTypesModuleName)
Local _ (rName, stdMethod) ->
( genPath ++ (localRouteMethodTypesPath rName stdMethod), hsModuleToFileName localRouteMethodTypesModuleName)
tyModuleExists <- doesFileExist (typesModuleDir ++ typesModuleName)
let (modName:: String) =
case levelInfo of
Global glType ->
case glType of
DefinitionTy -> globalTypesHsModuleName ++ globalDefnsModuleName
ResponseTy -> globalTypesHsModuleName ++ globalRespTypesModuleName
ParamTy -> globalTypesHsModuleName ++ globalParamTypesModuleName
Local _ (rtName, sMethod) -> (localRouteMethodTypesModName rtName sMethod) ++ localRouteMethodTypesModuleName
case tyModuleExists of
True -> do
let createDataTyList = fmap (getInnerTyAndCtr) typeInfos
newContents = "\n\n" ++ (DL.concat $ fmap (++ "\n\n" ) $ fmap prettyPrint $ createDataDeclarations createDataTyList)
appendFile (typesModuleDir ++ typesModuleName) newContents
pure $ modName:moduleNames
False -> do
let createDataTyList = fmap (getInnerTyAndCtr) typeInfos
newTyModuleContents =
prettyPrint $
Module noSrcSpan
(Just $ ModuleHead noSrcSpan (ModuleName noSrcSpan modName) Nothing Nothing)
(fmap languageExtension languageExtensionsForTypesModule)
(fmap moduleImport
( (DL.zip importsForTypesModule (cycle [(False, Nothing)]) )
++ (fmap (\(fullModuleName, qual) -> (fullModuleName, (True, Just $ ModuleName noSrcSpan qual)) ) qualifiedImportsForTypesModule) ) )-- ++ ( qualifiedGlobalImports (getGlobalModuleNames moduleNames) ) ) )
(createDataDeclarations $ createDataTyList)
createDirectoryIfMissing True typesModuleDir
writeFile (typesModuleDir ++ typesModuleName) newTyModuleContents
pure $ modName:moduleNames
where
createDataDeclarations :: [(CreateDataType, NamingCounter)] -> [Decl SrcSpanInfo]
createDataDeclarations = DL.foldl' createTypeDeclFromCDT []
getGlobalModuleNames :: [String] -> [String]
getGlobalModuleNames = DL.filter (DL.isInfixOf ".GlobalDefinitions.")
-- TODO: This function assumes SwaggerObject to be the type and directly reads from schemaProperties. We need to also take additionalProperties into consideration.
generateSwaggerDefinitionData :: InsOrdHashMap Text Schema -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (HMS.HashMap LevelInfo [TypeInfo])
generateSwaggerDefinitionData defDataHM = foldlWithKey' parseSwaggerDefinition (pure HMS.empty) defDataHM
where
parseSwaggerDefinition :: StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (HMS.HashMap LevelInfo [TypeInfo]) -> Text -> Schema -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (HMS.HashMap LevelInfo [TypeInfo])
parseSwaggerDefinition scAccValue modelName modelSchema = do
accValue <- scAccValue
let (schemaProperties::InsOrdHashMap Text (Referenced Schema) ) = _schemaProperties modelSchema
case HMSIns.null schemaProperties of
True -> do
hsType <- getTypeFromSwaggerType (Global DefinitionTy) DefinitionI (T.unpack modelName) (Just modelSchema) (_schemaParamSchema modelSchema)
if hsType == (T.unpack modelName)
-- If the name of the type returned is the same, it would mean that it's a sum type.
-- An alias is not necessary here as the sum type details would be stored in the State
-- And the type will be generated later when the State value is read.
then pure accValue
-- Along with the Restructuring of Generated types into modules, we have scrapped `TypeAlias` and replaced it with `NewType`
else
let createTypeInfo = HNewType (setValidConstructorId $ T.unpack modelName) hsType (T.unpack modelName)
in pure $ HMS.insertWith checkForDuplicateAndAdd (Global DefinitionTy) [DefinitionType createTypeInfo Nothing] accValue
False -> do
prodType <- parseSchemaToCDT (Global DefinitionTy) DefinitionI modelName modelSchema
pure $ HMS.insertWith checkForDuplicateAndAdd (Global DefinitionTy) [DefinitionType prodType Nothing] accValue
parseSchemaToCDT :: LevelInfo -> TInfo -> Text -> Schema -> StateConfig (CreateDataType)
parseSchemaToCDT levelInfo tInfo mainTypeName ilSchema = do
let mandatoryFields = fmap T.unpack (_schemaRequired ilSchema)
recordNamesAndTypes <- foldlWithKey' (\scAccList innerRecord iRefSchema -> do
accList <- scAccList
let innerRecordName = T.unpack innerRecord
let innerRecordTypeName = T.unpack $ T.append (T.toTitle mainTypeName) (T.toTitle innerRecord)
innerRecordType <- case iRefSchema of
Ref referenceName -> pure $ T.unpack $ getReference referenceName
Inline irSchema -> ((getTypeFromSwaggerType levelInfo tInfo innerRecordTypeName (Just irSchema)) . _schemaParamSchema) irSchema
let recordTypeWithMaybe =
case (innerRecordName `DL.elem` mandatoryFields) of
True -> setValidConstructorId innerRecordType
False -> "P.Maybe " ++ innerRecordType
pure $ (innerRecordName, recordTypeWithMaybe):accList ) (pure []) (_schemaProperties ilSchema)
pure $ ProductType (NewData (setValidConstructorId $ T.unpack mainTypeName) recordNamesAndTypes) (T.unpack mainTypeName)
generateGlobalResponseData :: InsOrdHashMap Text Response -> StateConfig (HMS.HashMap LevelInfo [TypeInfo])
generateGlobalResponseData globalRespHM = foldlWithKey' parseResponseDefn (pure HMS.empty) globalRespHM
where
parseResponseDefn :: StateConfig (HMS.HashMap LevelInfo [TypeInfo]) -> Text -> Response -> StateConfig (HMS.HashMap LevelInfo [TypeInfo])
parseResponseDefn scAccValue responseDefName responseObj = do
accValue <- scAccValue
case _responseSchema responseObj of
Just (Ref refSchema) -> do
let refText = getReference refSchema
-- NOTE : We will assume that any references here are only to Definitions types.
let respDataTy = HNewType (setValidConstructorId $ T.unpack responseDefName) (T.unpack refText) (T.unpack responseDefName)
-- TODO: Verify if DefinitionType is okay here.
let newRespHM = HMS.singleton (Global ResponseTy) [DefinitionType respDataTy Nothing]
pure $ HMS.unionWith checkForDuplicateAndAdd accValue newRespHM
Just (Inline ilSchema) -> do
let levelInfo = Global ResponseTy
cdt <- parseSchemaToCDT levelInfo DefinitionI responseDefName ilSchema
pure $ HMS.insertWith checkForDuplicateAndAdd levelInfo [DefinitionType cdt Nothing] accValue
-- TODO: we should probably log this in the error reporting as it doesn't make much sense if it's a `Nothing`
Nothing -> scAccValue
generateGlobalParamData :: InsOrdHashMap Text Param -> StateConfig (HMS.HashMap LevelInfo [TypeInfo])
generateGlobalParamData globalParamsHM = pure $ HMS.empty
-- foldlWithKey' parseParamDefn (pure HMS.empty) globalParamsHM
-- where
-- parseParamDefn :: StateConfig (HMS.HashMap LevelInfo [TypeInfo]) -> Text -> Param -> StateConfig (HMS.HashMap LevelInfo [TypeInfo])
-- parseParamDefn scAccValue paramDefName paramObj = do
-- accValue <- scAccValue
getSwaggerData :: Swagger -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (String, [ContractDetails])
getSwaggerData swaggerData = do
let apiNameFromSwagger = (_infoTitle . _swaggerInfo) swaggerData
validHsApiName = setValidConstructorId (T.unpack apiNameFromSwagger)
contractDetailList <- HMSIns.foldlWithKey' (parseSwaggerPaths swaggerData) (pure []) (_swaggerPaths swaggerData)
pure (validHsApiName, contractDetailList)
where
parseSwaggerPaths :: Swagger -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO [ContractDetails] -> FilePath -> PathItem -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO [ContractDetails]
parseSwaggerPaths swaggerData contractDetailsList swFilePath swPathDetails = do
let (refParamsHM:: InsOrdHashMap Text Param) = _swaggerParameters swaggerData
cDetailsList <- contractDetailsList
let swaggerPath ::[SwPathComponent] = fmap constructSwPathComps $ DLS.splitOn "/" $ removeLeadingSlash swFilePath
mainRouteName = setValidConstructorId $ (prettifyRouteName swaggerPath) ++ "R"
-- TODO: Add a `Static` Type component at the start if the list has just one element of type PathComp
finalPathWithParamTypes::[PathComponent] <- forM swaggerPath (\pathComponent ->
case pathComponent of
PathParamName pathParamName -> do
let pathLvlParams = _pathItemParameters swPathDetails
(mParamNameList::[Maybe String]) <- mapM (getPathParamTypeFromOperation mainRouteName pathParamName refParamsHM pathLvlParams) (getListOfPathOperations swPathDetails::[(SG.Method, Maybe Operation)])
case (DL.nub . catMaybes) mParamNameList of
[] -> error "TODO : Please report this as a bug. Need to handle the use of Common Params!"
singleParamType:[] -> pure (PathParamType singleParamType)
-- TODO : If the below case is encountered we need to handle it. (add separate Routes!)
otherVal -> error $ "Expected only a single Param Type to be present in all Methods of this path."
++ "Instead got : " ++ show otherVal ++ " Path : " ++ swFilePath
PathPiece staticPathCompStr -> pure (PathComp staticPathCompStr)
)
let currentRoutePath = finalPathWithParamTypes
methodList = [SG.GET, SG.PUT, SG.POST, SG.PATCH, SG.DELETE, SG.OPTIONS, SG.HEAD]
currentMethodData <- Control.Monad.foldM (processPathItem mainRouteName swPathDetails swaggerData) (Map.empty) methodList
-- TODO : Remove the routeID from ContractDetails, it is not used. Set 0 for now.
let currentContractDetails = ContractDetails 0 mainRouteName currentRoutePath currentMethodData
pure (currentContractDetails:cDetailsList)
constructSwPathComps :: String -> SwPathComponent
constructSwPathComps routeComponent =
if isParam routeComponent
then PathParamName $ removeCurlyBraces routeComponent
else PathPiece routeComponent
removeLeadingSlash :: String -> String
removeLeadingSlash inputRoute = fromMaybe inputRoute (DL.stripPrefix "/" inputRoute)
prettifyRouteName :: [SwPathComponent] -> String
prettifyRouteName swSinglePathComps = case swSinglePathComps of
[] -> error "Expected atleast one element in the route! Got an empty list!"
(PathPiece ""):[] -> "BaseRoute"
pathComps -> DL.concat $ flip fmap pathComps (\swPathComp ->
case swPathComp of
PathParamName (firstChar:remainingChar) -> (Char.toUpper firstChar):remainingChar
PathPiece (firstChar:remainingChar) -> (Char.toUpper firstChar):remainingChar
PathParamName [] ->
error $ "Path Param Name is an empty String. This should be impossible!"
++ "\nPlease check the Swagger Doc"
++ "\nFull Path : " ++ (show swSinglePathComps)
PathPiece [] ->
error $ "PathPiece is an empty String. This should be impossible! "
++ "Please check the Swagger Doc! \n Full Path : " ++ (show swSinglePathComps) )
isParam :: String -> Bool
isParam pathComponent = (DL.isPrefixOf "{" pathComponent) && (DL.isSuffixOf "}" pathComponent)
removeCurlyBraces :: String -> String
removeCurlyBraces = DL.filter (\x -> not (x == '{' || x == '}') )
getListOfPathOperations :: PathItem -> [(SG.Method, Maybe Operation)]
getListOfPathOperations pathItem = [(SG.GET, _pathItemGet pathItem), (SG.PUT, _pathItemPut pathItem), (SG.POST,_pathItemPost pathItem), (SG.DELETE, _pathItemDelete pathItem), (SG.OPTIONS, _pathItemOptions pathItem), (SG.HEAD, _pathItemHead pathItem), (SG.PATCH, _pathItemPatch pathItem)]
getPathParamTypeFromOperation :: RouteName -> String -> InsOrdHashMap Text Param -> [Referenced Param] -> (SG.Method, Maybe Operation) -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (Maybe String)
getPathParamTypeFromOperation routeNameStr paramPathName refParamsHM pathLvlParams (stdMethod, mOperation) = case mOperation of
Just operation -> do
let opParamList = _operationParameters operation
let finalParams = filterOutOverriddenParams refParamsHM opParamList pathLvlParams
mParamType <- foldM (\existingParamType refOrInlineParam ->
case refOrInlineParam of
Ref (Reference pmText) ->
case HMSIns.lookup pmText refParamsHM of
Just refParam ->
if (_paramName refParam) == T.pack paramPathName
then
case existingParamType of
Nothing -> do
pathParamType <- getParamTypeForPathParam (routeNameStr, stdMethod) refParam
pure $ Just pathParamType
Just _ -> error $ "Atleast two or more Params in the Params Ref HM match this param."
++ "This should be impossible. Please check the Swagger Spec!"
++ "\nDebug Info (Path Param Name) : " ++ (show paramPathName)
else pure existingParamType
Nothing -> pure existingParamType
Inline param ->
case (_paramName param == T.pack paramPathName ) of
True -> do
let pSchema = _paramSchema param
case pSchema of
ParamOther pOSchema ->
case _paramOtherSchemaIn pOSchema of
ParamPath -> do
pathParamType <- getParamTypeForPathParam (routeNameStr, stdMethod) param
pure $ Just pathParamType
_ -> pure existingParamType
ParamBody _ -> pure existingParamType
False -> pure existingParamType
) Nothing finalParams
pure mParamType
Nothing -> pure $ Nothing
getParamTypeForPathParam :: RouteAndMethod -> Param -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO String
getParamTypeForPathParam (routeNameStr, stdMethod) param =
case (_paramSchema param) of
ParamOther paramOtherSchema ->
case _paramOtherSchemaIn paramOtherSchema of
ParamPath ->
-- TODO : Verify that it's okay to put DefinitionI for Path Param.
-- Is it okay if this is generated in the local Types.hs file?
-- TODO : Since Path Params can be only primitive types, we should have another function
-- to calculate the type.
getTypeFromSwaggerType (Local ParamTy (routeNameStr, stdMethod)) DefinitionI (T.unpack $ _paramName param) Nothing (_paramOtherSchemaParamSchema paramOtherSchema)
_ -> error $ "Expected Path Param but got another Param Type. \nParam : " ++ (show param)
ParamBody _ -> error $ "Param matched by name in the Ref Params HM. "
++ "This means it should be a Path Param but it is a Body Param. "
++ "This is theoretically impossible. Please check the Swagger Doc!"
++ "\nDebug Info : (Path) Param -> \n" ++ (show param)
processPathItem :: String -> PathItem -> Swagger -> (Map.Map SG.Method ApiTypeDetails) -> SG.Method -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (Map.Map SG.Method ApiTypeDetails)
processPathItem mainRouteName pathItem swaggerData methodDataAcc currentMethod = do
let commonPathParams = _pathItemParameters pathItem
case currentMethod of
SG.GET -> (processOperation commonPathParams mainRouteName methodDataAcc swaggerData) SG.GET $ _pathItemGet pathItem
SG.PUT -> (processOperation commonPathParams mainRouteName methodDataAcc swaggerData) SG.PUT $ _pathItemPut pathItem
SG.POST -> (processOperation commonPathParams mainRouteName methodDataAcc swaggerData) SG.POST $ _pathItemPost pathItem
SG.DELETE -> (processOperation commonPathParams mainRouteName methodDataAcc swaggerData) SG.DELETE $ _pathItemDelete pathItem
SG.OPTIONS -> (processOperation commonPathParams mainRouteName methodDataAcc swaggerData) SG.OPTIONS $ _pathItemOptions pathItem
SG.HEAD -> (processOperation commonPathParams mainRouteName methodDataAcc swaggerData) SG.HEAD $ _pathItemHead pathItem
SG.PATCH -> (processOperation commonPathParams mainRouteName methodDataAcc swaggerData) SG.PATCH $ _pathItemPatch pathItem
-- TODO: If the following case is hit, we need to add it to error/log reporting.
_ -> pure $ Map.empty
processOperation :: [Referenced Param] -> String -> Map.Map SG.Method ApiTypeDetails -> Swagger -> SG.Method -> Maybe Operation -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (Map.Map SG.Method ApiTypeDetails)
processOperation commonPathLvlParams currentRouteName methodAcc swaggerData stdMethod mOperationData =
case mOperationData of
Just operationData -> do
let refParamsHM = _swaggerParameters swaggerData
let apiResponses = _responsesResponses $ _operationResponses operationData
(mApiOutType, apiErrType) <- getApiType apiResponses swaggerData
-- TODO: Case match on ApiOut and if `Nothing` then check for default responses in `_responsesDefault $ _operationResponses operationData`
let apiOutType = fromMaybe "()" mApiOutType
let addPlainText =
case apiOutType of
"()" -> True
"P.Text" -> True
_ -> False
let pathLvlAndLocalParam = filterOutOverriddenParams refParamsHM (_operationParameters operationData) commonPathLvlParams
-- Group the Referenced Params by ParamLocation and then go through each group separately.
let (formParamList, queryParamList, fileParamList, headerInList, bodyParamList) = DL.foldl' (groupParamTypes refParamsHM) ([], [], [], [], []) pathLvlAndLocalParam
mFormParamType <- getParamTypes formParamList FormParam
mQueryParamType <- getParamTypes queryParamList QueryParam
mFileParamType <- getParamTypes fileParamList FileParam
mHeaderInType <- getParamTypes headerInList HeaderParam
mReqBodyType <- getParamTypes bodyParamList BodyParam
let (mContentTypes, xmlPresent) = getContentTypes (_operationProduces operationData) addPlainText
let finalReqBodyType = flip fmap mReqBodyType (\reqBodyType ->
case (DL.isPrefixOf "[" reqBodyType) of
True -> "'" ++ reqBodyType
False -> "'[" ++ reqBodyType ++ "]" )
let apiTypeDetails =
ApiTypeDetails
{
apiOut = apiOutType
, apiErr = apiErrType
, formParam = mFormParamType
, queryParam = mQueryParamType
, fileParam = mFileParamType
, headerIn = mHeaderInType
, requestBody = finalReqBodyType
, contentTypes = mContentTypes
, hasXML = xmlPresent
}
pure $ Map.insert stdMethod apiTypeDetails methodAcc
Nothing -> pure methodAcc
where
groupParamTypes :: InsOrdHashMap Text Param -> ([Param], [Param], [Param], [Param], [Param]) -> Referenced Param -> ([Param], [Param], [Param], [Param], [Param])
groupParamTypes refParamsHM allParamLists refParam =
case refParam of
Ref (Reference paramRefName) ->
case HMSIns.lookup paramRefName refParamsHM of
Just paramVal -> putParamInMatchingPList allParamLists paramVal
Nothing -> error $ "Could not find referenced params value in the Ref Params HM! "
++ "Please check the Swagger Doc! "
++ "\nParam Name : " ++ (show paramRefName)
Inline param -> putParamInMatchingPList allParamLists param
putParamInMatchingPList :: ([Param], [Param], [Param], [Param], [Param]) -> Param -> ([Param], [Param], [Param], [Param], [Param])
putParamInMatchingPList (formParamList, queryParamList, fileParamList, headerInList, bodyParamList) param =
case _paramSchema param of
ParamBody _ -> (formParamList, queryParamList, fileParamList, headerInList, param:bodyParamList)
ParamOther pOtherSchema ->
case _paramOtherSchemaIn pOtherSchema of
ParamQuery -> (formParamList, param:queryParamList, fileParamList, headerInList, bodyParamList)
ParamHeader -> (formParamList, queryParamList, fileParamList, param:headerInList, bodyParamList)
ParamPath -> (formParamList, queryParamList, fileParamList, headerInList, bodyParamList)
ParamFormData ->
case (_paramSchema param) of
ParamOther pSchema ->
case (_paramSchemaType $ _paramOtherSchemaParamSchema pSchema) of
Just SwaggerFile -> (formParamList, queryParamList, param:fileParamList, headerInList, bodyParamList)
_ -> (param:formParamList, queryParamList, fileParamList, headerInList, bodyParamList)
otherParamSchema -> error $ "Expected ParamOther but encountered : " ++ (show otherParamSchema)
getContentTypes :: Maybe MimeList -> Bool -> (Maybe String, Bool)
getContentTypes mContentList addPlainText = do
let plainTextList::[String] = if addPlainText then ["W.PlainText"] else []
case mContentList of
Just contentList ->
case getMimeList contentList of
[] -> (Nothing, False)
mimeList ->
let mimeTypes = '\'':DL.filter (/= '"') (show $ plainTextList ++ flip fmap mimeList
(\mimeType -> case mimeType of
"application/xml" -> "W.XML"
"application/json" -> "W.JSON"
otherMime -> error $ "Encountered unknown MIME type. Please report this as a bug!"
++ "\nMIME Type encountered is : " ++ (show otherMime) ) )
in (Just mimeTypes, DL.isInfixOf "XML" mimeTypes)
Nothing -> (Nothing, False)
getApiType :: InsOrdHashMap HttpStatusCode (Referenced Response) -> Swagger -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (Maybe String, Maybe String)
getApiType responsesHM swaggerData = foldlWithKey' (\stateConfigWrappedTypes currentCode currentResponse -> do
let newTypeName = currentRouteName ++ (show stdMethod)
(apiOutType, apiErrType) <- stateConfigWrappedTypes
let lvlInfo = Local ResponseTy (currentRouteName, stdMethod)
case (currentCode >= 200 && currentCode < 300) of
True -> do
finalOutType <- do
let newTypeNameConstructor = "ApiOut"
(tyLevelInfo, currentResponseType) <- parseResponseContentGetType (lvlInfo, ApiOutI) currentResponse swaggerData newTypeNameConstructor
fOutType <- addTypeToState (lvlInfo, ApiOutI) (tyLevelInfo, currentResponseType) newTypeNameConstructor
pure $ Just fOutType
pure (finalOutType, apiErrType)
False -> do
finalErrType <- do
let newTypeNameConstructor = "ApiErr"
(tyLevelInfo, currentResponseType) <- parseResponseContentGetType (lvlInfo, ApiErrI) currentResponse swaggerData newTypeNameConstructor
fErrType <- addTypeToState (lvlInfo, ApiErrI) (tyLevelInfo, currentResponseType) newTypeNameConstructor
pure $ Just fErrType
pure (apiOutType, finalErrType)
) (pure (Nothing, Nothing)) responsesHM
parseResponseContentGetType :: (LevelInfo, TInfo) -> Referenced Response -> Swagger -> String -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (LevelInfo, String)
parseResponseContentGetType (levelInfo, tInfo) referencedResp swaggerData newTypeConsName = do
let swResponses :: InsOrdHashMap Text Response = _swaggerResponses swaggerData
-- let swDataDefns :: InsOrdHashMap Text Schema = _swaggerDefinitions swaggerData
case referencedResp of
Ref refText -> do
let refRespName = getReference refText
let globalRespLevel = Global ResponseTy
pure (globalRespLevel, T.unpack refRespName)
Inline responseSchema ->
case (_responseSchema responseSchema) of
Just (Ref refText) -> pure (Global DefinitionTy, T.unpack $ getReference refText)
Just (Inline respSchema) -> do
typeNameStr <- ((getTypeFromSwaggerType levelInfo tInfo newTypeConsName (Just respSchema) ) . _schemaParamSchema) respSchema
pure (levelInfo, typeNameStr)
-- NOTE : The following case means no content is returned with the Response!
Nothing -> pure (levelInfo, "()")
getParamTypes :: [Param] -> ParamType -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO (Maybe String)
getParamTypes paramList paramType =
case paramList of
[] -> pure $ Nothing
_ -> -- TODO : Refactor handling of adding Maybes and adding to State into a single function and call from all places.
case paramType of
FormParam -> do
let paramNames = fmap (\param -> T.unpack $ _paramName param) paramList
hTypesWithIsMandatory <- forM paramList (\param -> do
hType <- getParamTypeParam param (T.unpack ( _paramName param) ) Nothing
pure (isMandatory param, hType) )
let finalHaskellTypes = fmap (\(isMandatoryType, hType) -> (addMaybeToType isMandatoryType hType) ) hTypesWithIsMandatory
let recordTypesInfo = DL.zip paramNames finalHaskellTypes
let newDataTypeName = setValidConstructorId "HFormParam"
let formParamDataInfo = ProductType (NewData newDataTypeName recordTypesInfo) newDataTypeName
let levelInfo = Local ParamTy (currentRouteName, stdMethod)
let fParamHM = HMS.singleton levelInfo [FormParamTy formParamDataInfo Nothing]
modifyState fParamHM
pure $ Just newDataTypeName
QueryParam -> do
let paramNames = fmap (\param -> T.unpack $ _paramName param) paramList
hTypesWithIsMandatory <- forM paramList (\param -> do
hType <- getParamTypeParam param (T.unpack ( _paramName param) ) Nothing
pure (isMandatory param, hType) )
let finalHaskellTypes = fmap (\(isMandatoryType, hType) -> (addMaybeToType isMandatoryType hType) ) hTypesWithIsMandatory
let recordTypesInfo = DL.zip paramNames finalHaskellTypes
let newDataTypeName = setValidConstructorId "HQueryParam"
let queryParamDataInfo = ProductType (NewData newDataTypeName recordTypesInfo) newDataTypeName
let levelInfo = Local ParamTy (currentRouteName, stdMethod)
let qParamHM = HMS.singleton levelInfo [QueryParamTy queryParamDataInfo Nothing]
modifyState qParamHM
pure $ Just newDataTypeName
HeaderParam -> do
let paramNames = fmap (\param -> T.unpack $ _paramName param) paramList
typeListWithIsMandatory <- forM paramList (\param -> do
hType <- getParamTypeParam param (T.unpack ( _paramName param) ) Nothing
pure (isMandatory param, hType) )
let finalHaskellTypes = fmap (\(isMandatoryType, hType) -> (addMaybeToType isMandatoryType hType) ) typeListWithIsMandatory
let recordTypesInfo = DL.zip paramNames finalHaskellTypes
let newDataTypeName = setValidConstructorId "HHeaderParam"
let headerParamDataInfo = ProductType (NewData newDataTypeName recordTypesInfo) newDataTypeName
let levelInfo = Local ParamTy (currentRouteName, stdMethod)
let headerParamHM = HMS.singleton levelInfo [HeaderInTy headerParamDataInfo Nothing]
modifyState headerParamHM
pure $ Just newDataTypeName
FileParam -> do
typeList <- forM paramList (\param -> getParamTypeParam param (T.unpack ( _paramName param) ) Nothing )
case typeList of
[] -> pure Nothing
x:[] -> pure $ Just x
_ -> error $ "Encountered list of FileParam. This is not yet handled! "
++ "\nDebug Info: " ++ (show paramList)
BodyParam -> do
listOfTypes <- forM paramList (\param -> getParamTypeParam param (T.unpack (_paramName param)) Nothing )
case listOfTypes of
[] -> error $ "Tried to Get Body Param type but got an empty list/string! Debug Info: " ++ show paramList
x:[] -> pure $ Just x
_ -> error $ "Encountered a list of Body Params. WebApi/Swagger does not support this currently! Debug Info: " ++ show paramList
getParamTypeParam :: Param -> String -> Maybe Schema -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO String
getParamTypeParam inputParam paramName mOuterSchema = do
-- TODO : This may need to be calculated or passed here (or passed back from getTypeFromSwaggerType) if/when we consider route-level common params.
let levelInfo = Local ParamTy (currentRouteName, stdMethod)
case _paramSchema inputParam of
ParamBody refSchema ->
case refSchema of
Ref refType -> pure $ T.unpack (getReference refType)
Inline rSchema -> getTypeFromSwaggerType levelInfo ReqBodyI paramName (Just rSchema) (_schemaParamSchema rSchema)
ParamOther pSchema ->
let tInfo =
case _paramOtherSchemaIn pSchema of
ParamQuery -> QueryParamI
ParamHeader -> HeaderInI
ParamFormData -> FormParamI
-- TODO : Verify if this should ever be ParamPath.
-- If not, we should log this and error it when adding logging/error mechanism.
ParamPath -> DefinitionI
in getTypeFromSwaggerType levelInfo tInfo paramName mOuterSchema $ _paramOtherSchemaParamSchema pSchema
isMandatory :: Param -> Bool
isMandatory param =
case _paramRequired param of
Just True -> True
_ -> False
addMaybeToType :: Bool -> String -> String
addMaybeToType isNotNull haskellType =
case isNotNull of
True -> haskellType
False -> "P.Maybe " ++ haskellType
filterOutOverriddenParams :: InsOrdHashMap Text Param -> [Referenced Param] -> [Referenced Param] -> [Referenced Param]
filterOutOverriddenParams globalParams pathLvlParams localOpParams = do
let pathLvlParamNames = fmap getParamName pathLvlParams
let localOpParamNames = fmap getParamName localOpParams
case DL.intersect pathLvlParamNames localOpParamNames of
[] -> pathLvlParams ++ localOpParams
ovrdnParams ->
let modPathLvlParams = DL.filter (\param -> not $ DL.elem (getParamName param) ovrdnParams) pathLvlParams
in localOpParams ++ modPathLvlParams
where
getParamName :: Referenced Param -> String
getParamName refParam =
case refParam of
Inline paramObj -> T.unpack $ _paramName paramObj
Ref refObj -> do
let paramNameTxt = getReference refObj
case HMSIns.lookup paramNameTxt globalParams of
Just paramVal -> T.unpack $ _paramName paramVal
Nothing -> error $ "Could not find referenced params value in the Ref Params HM! "
++ "Please check the Swagger Doc! "
++ "\nParam Name : " ++ (T.unpack paramNameTxt)
addTypeToState :: (LevelInfo, TInfo) -> (LevelInfo, String) -> String -> (StateT (HMS.HashMap LevelInfo [TypeInfo]) IO String)
addTypeToState (levelInfo, tInfo) (tLevelInfo, currentType) newTypeName = do
let typeWithQual = addQual levelInfo currentType tLevelInfo
let sumTyConsName =
if currentType == "()"
then "NoContent"
else setValidConstructorId currentType
sumTypeConstructors = [(sumTyConsName, typeWithQual)]
sumTypeInfo = SumType (ComplexSumType newTypeName sumTypeConstructors) -- [currentType, eType] [currentType, eType] -- Note : OgNames not really applicable here so putting Haskell names
modify' (\existingState -> HMS.insertWith (insertIntoExistingSumTy sumTyConsName) levelInfo [tInfoToTypeInfo tInfo sumTypeInfo] existingState )
pure newTypeName
where
addQual :: LevelInfo -> String -> LevelInfo -> String
addQual currentLvlInfo typeStr typeLvlInfo =
case currentLvlInfo == typeLvlInfo of
True -> typeStr
False ->
let qualImportName =
case typeLvlInfo of
Global DefinitionTy -> globalDefnsQualName
Global ResponseTy -> globalRespTypesQualName
Global ParamTy -> globalParamsQualName
_ -> error $ "Expected a Global LevelInfo if LevelInfos don't match. Got : " ++ (show typeLvlInfo)
in qualImportName ++ "." ++ typeStr
insertIntoExistingSumTy :: String -> [TypeInfo] -> [TypeInfo] -> [TypeInfo]
insertIntoExistingSumTy currentTyCons newTyInfo existingTyInfos =
case newTyInfo of
(ApiErrTy ty nc):[] -> (addIfNotChanged (ApiErrTy ty nc)) $ DL.foldl' (addToApiErrTyInfo currentTyCons) (False, []) existingTyInfos
(ApiOutTy ty nc):[] -> (addIfNotChanged (ApiOutTy ty nc)) $ DL.foldl' (addToApiOutTyInfo currentTyCons) (False, []) existingTyInfos
_ -> error $ "Encountered empty or multiple value TypeInfo list."
++ "Expected only one value. Got : " ++ (show newTyInfo)
addToApiErrTyInfo :: String -> (Bool, [TypeInfo]) -> TypeInfo -> (Bool, [TypeInfo])
addToApiErrTyInfo currentTyCons (isChanged, accVal) currentTyInfo =
case currentTyInfo of
ApiErrTy (SumType (ComplexSumType tyName tyList )) nCtr ->
let modTy = ApiErrTy (SumType (ComplexSumType tyName ((currentTyCons,currentTyCons):tyList) )) nCtr
in (True, modTy:accVal)
_ -> (isChanged, currentTyInfo:accVal)
addToApiOutTyInfo :: String -> (Bool, [TypeInfo]) -> TypeInfo -> (Bool, [TypeInfo])
addToApiOutTyInfo currentTyCons (isChanged, accVal) currentTyInfo =
case currentTyInfo of
ApiOutTy (SumType (ComplexSumType tyName tyList )) nCtr ->
let modTy = ApiOutTy (SumType (ComplexSumType tyName ((currentTyCons,currentTyCons):tyList) )) nCtr
in (True, modTy:accVal)
_ -> (isChanged, currentTyInfo:accVal)
addIfNotChanged :: TypeInfo -> (Bool, [TypeInfo]) -> [TypeInfo]
addIfNotChanged newTyInfo (isChanged, tyInfoList) =
if isChanged
then tyInfoList
else newTyInfo:tyInfoList
addToStateSumType :: LevelInfo -> String -> String -> (HMS.HashMap LevelInfo [TypeInfo], Bool) -> LevelInfo -> [TypeInfo] -> (HMS.HashMap LevelInfo [TypeInfo], Bool)
addToStateSumType currentTyLvlInfo newSumTypeName currentTypeStr (accVal, isChanged) lvlInfo tyInfoList =
let (modTypeInfoList, valueChanged) = DL.foldl' (addToSSTypeInfo newSumTypeName currentTypeStr) ([], isChanged) tyInfoList
in (HMS.insertWith checkForDuplicateAndAdd lvlInfo modTypeInfoList accVal, valueChanged)
addToSSTypeInfo :: String -> String -> ([TypeInfo], Bool) -> TypeInfo -> ([TypeInfo], Bool)
addToSSTypeInfo newSumTypeName currentTypeStr (accTyInfoList, isChanged) currentTyInfo =
case getTypeName currentTyInfo == newSumTypeName of
True ->
case getInnerTyFromTypeInfo currentTyInfo of
SumType (ComplexSumType dataName consAndTypes) ->
let consName = setValidConstructorId $ dataName ++ currentTypeStr
modSumType = SumType (ComplexSumType dataName ((consName, currentTypeStr):consAndTypes) )
modTyInfo = updateTypeInfoDataTy currentTyInfo modSumType
in (modTyInfo:accTyInfoList, True)
_ -> (currentTyInfo:accTyInfoList, isChanged)
False -> (currentTyInfo:accTyInfoList, isChanged)
getTypeName :: TypeInfo -> String
getTypeName tyInfo =
let innerTy = getInnerTyFromTypeInfo tyInfo
in case innerTy of
SumType (BasicEnum consName _ _) -> consName
SumType (ComplexSumType consName _ ) -> consName
ProductType (NewData consName _ ) _ -> consName
HNewType consName _ _ -> consName
getInnerTyAndCtr :: TypeInfo -> (CreateDataType, NamingCounter)
getInnerTyAndCtr tyInfo =
case tyInfo of
ApiErrTy cdt nCtr -> (cdt, nCtr)
ApiOutTy cdt nCtr -> (cdt, nCtr)
FormParamTy cdt nCtr -> (cdt, nCtr)
QueryParamTy cdt nCtr -> (cdt, nCtr)
FileParamTy cdt nCtr -> (cdt, nCtr)
HeaderInTy cdt nCtr -> (cdt, nCtr)
ReqBodyTy cdt nCtr -> (cdt, nCtr)
ContentTypesTy cdt nCtr -> (cdt, nCtr)
HeaderOutTy cdt nCtr -> (cdt, nCtr)
DefinitionType cdt nCtr -> (cdt, nCtr)
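-- Extracts the wrapped CreateDataType from any TypeInfo constructor, discarding its naming counter.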
getInnerTyFromTypeInfo :: TypeInfo -> CreateDataType
getInnerTyFromTypeInfo tyInfo =
case tyInfo of
ApiErrTy cdt _ -> cdt
ApiOutTy cdt _ -> cdt
FormParamTy cdt _ -> cdt
QueryParamTy cdt _ -> cdt
FileParamTy cdt _ -> cdt
HeaderInTy cdt _ -> cdt
ReqBodyTy cdt _ -> cdt
ContentTypesTy cdt _ -> cdt
HeaderOutTy cdt _ -> cdt
DefinitionType cdt _ -> cdt
getTypeNameWithCounter :: TypeInfo -> (String, Maybe Int)
getTypeNameWithCounter tyInfo =
let (innerTy, namingCtr) = getInnerTyFromTypeInfoWithCtr tyInfo
cName =
case innerTy of
SumType (BasicEnum consName _ _) -> consName
SumType (ComplexSumType consName _ ) -> consName
ProductType (NewData consName _ ) _ -> consName
HNewType consName _ _ -> consName
in (cName, namingCtr)
getInnerTyFromTypeInfoWithCtr :: TypeInfo -> (CreateDataType, NamingCounter)
getInnerTyFromTypeInfoWithCtr tyInfo =
case tyInfo of
ApiErrTy cdt nCtr -> (cdt, nCtr)
ApiOutTy cdt nCtr -> (cdt, nCtr)
FormParamTy cdt nCtr -> (cdt, nCtr)
QueryParamTy cdt nCtr -> (cdt, nCtr)
FileParamTy cdt nCtr -> (cdt, nCtr)
HeaderInTy cdt nCtr -> (cdt, nCtr)
ReqBodyTy cdt nCtr -> (cdt, nCtr)
ContentTypesTy cdt nCtr -> (cdt, nCtr)
HeaderOutTy cdt nCtr -> (cdt, nCtr)
DefinitionType cdt nCtr -> (cdt, nCtr)
addCtrToTypeInfo :: NamingCounter -> TypeInfo -> TypeInfo
addCtrToTypeInfo mCtrVal tyInfo =
case tyInfo of
ApiErrTy cdt _ -> ApiErrTy cdt mCtrVal
ApiOutTy cdt _ -> ApiOutTy cdt mCtrVal
FormParamTy cdt _ -> FormParamTy cdt mCtrVal
QueryParamTy cdt _ -> QueryParamTy cdt mCtrVal
FileParamTy cdt _ -> FileParamTy cdt mCtrVal
HeaderInTy cdt _ -> HeaderInTy cdt mCtrVal
ReqBodyTy cdt _ -> ReqBodyTy cdt mCtrVal
ContentTypesTy cdt _ -> ContentTypesTy cdt mCtrVal
HeaderOutTy cdt _ -> HeaderOutTy cdt mCtrVal
DefinitionType cdt _ -> DefinitionType cdt mCtrVal
updateTypeInfoDataTy :: TypeInfo -> CreateDataType -> TypeInfo
updateTypeInfoDataTy tyInfo newCdt =
case tyInfo of
ApiErrTy _ nameCtr -> ApiErrTy newCdt nameCtr
ApiOutTy _ nameCtr -> ApiOutTy newCdt nameCtr
FormParamTy _ nameCtr -> FormParamTy newCdt nameCtr
QueryParamTy _ nameCtr -> QueryParamTy newCdt nameCtr
FileParamTy _ nameCtr -> FileParamTy newCdt nameCtr
HeaderInTy _ nameCtr -> HeaderInTy newCdt nameCtr
ReqBodyTy _ nameCtr -> ReqBodyTy newCdt nameCtr
ContentTypesTy _ nameCtr -> ContentTypesTy newCdt nameCtr
HeaderOutTy _ nameCtr -> HeaderOutTy newCdt nameCtr
DefinitionType _ nameCtr -> DefinitionType newCdt nameCtr
-- NOTE: In the below function, we set the value to Nothing because in all places
-- where the function is called, there is no chance for a naming counter to be set.
tInfoToTypeInfo :: TInfo -> CreateDataType -> TypeInfo
tInfoToTypeInfo tInfo cdt =
case tInfo of
ApiErrI -> ApiErrTy cdt Nothing
ApiOutI -> ApiOutTy cdt Nothing
FormParamI -> FormParamTy cdt Nothing
QueryParamI -> QueryParamTy cdt Nothing
FileParamI -> FileParamTy cdt Nothing
HeaderInI -> HeaderInTy cdt Nothing
ReqBodyI -> ReqBodyTy cdt Nothing
ContentTypesI -> ContentTypesTy cdt Nothing
HeaderOutI -> HeaderOutTy cdt Nothing
DefinitionI -> DefinitionType cdt Nothing
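-- Core type translation: maps a Swagger ParamSchema (plus an optional enclosing Schema) to the textual
-- Haskell type used in the generated code, registering any newly created enum/record/sum types in the state.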
getTypeFromSwaggerType :: LevelInfo -> TInfo -> String -> Maybe Schema -> ParamSchema t -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO String
getTypeFromSwaggerType levelInfo tInfo paramNameOrRecordName mOuterSchema paramSchema =
-- let mRouteName =
case (_paramSchemaType paramSchema) of
Just SwaggerString ->
case _paramSchemaFormat paramSchema of
Just "date" -> pure "P.Day"
Just "date-time" -> pure "P.UTCTime"
Just "password" -> error $ "Encountered SwaggerString with Format as `password`. This needs to be handled! Debug Info : " ++ show paramSchema
Just "byte" -> pure "P.ByteString"
-- TODO : Map binary to ByteString
Just "binary" -> error $ "Encountered SwaggerString with Format as `binary`. This needs to be handled! Debug Info: " ++ show paramSchema
Nothing ->
case _paramSchemaEnum paramSchema of
Nothing -> pure "P.Text"
Just valueEnumList -> do
let newSumTypeName = setValidConstructorId $ paramNameOrRecordName
let (enumVals, ogVals) =
DL.unzip $
fmap (\(Data.Aeson.String enumVal) ->
(setValidConstructorId (T.unpack enumVal), T.unpack enumVal) ) valueEnumList
let haskellNewTypeInfo = SumType (BasicEnum newSumTypeName enumVals ogVals )
let sumTyHM = HMS.singleton levelInfo [tInfoToTypeInfo tInfo haskellNewTypeInfo]
modifyState sumTyHM
pure newSumTypeName
-- currentState <- get
-- let onlySumTypes = DL.filter (\newTypeObj -> do
-- case newTypeObj of
-- SumType (BasicEnum _ _ _) -> True
-- _ -> False ) $ fmap getInnerTyFromTypeInfo $ DL.concat $ HMS.elems currentState
-- let createSumType = DL.foldl' checkIfSumTypeExists (CreateSumType haskellNewTypeInfo) onlySumTypes
-- case createSumType of
-- CreateSumType (SumType (BasicEnum sName sNewVals sOgVals) ) -> do
-- let sumTyInfo = (SumType (BasicEnum sName sNewVals sOgVals) )
-- let sumTyHM = HMS.singleton levelInfo [tInfoToTypeInfo tInfo sumTyInfo]
-- modify' (\existingState -> HMS.unionWith (++) existingState sumTyHM)
-- pure sName
-- CreateSumType otherTy ->
-- error $ "Expected only SumTypes here, but got : " ++ (show otherTy)
-- ++ "\nSince we have already filtered for only SumTypes, this should not be possible!"
-- ExistingType existingTyName -> pure existingTyName
_ -> pure "P.Text" -- error $ "Encountered SwaggerString with unknown Format! Debug Info: " ++ show paramSchema
Just SwaggerNumber ->
case _paramSchemaFormat paramSchema of
Just "float" -> pure "P.Float"
Just "double" -> pure "P.Double"
_ -> pure "SwaggerNumber"
Just SwaggerInteger ->
case _paramSchemaFormat paramSchema of
Just "int32" -> pure "P.Int32"
Just "int64" -> pure "P.Int64"
_ -> pure "P.Int"
Just SwaggerBoolean -> pure "P.Bool"
-- As per the pattern in `PetStore`, for SwaggerArray, we check the Param Schema Items field and look for a reference Name there.
Just SwaggerArray -> case _paramSchemaItems paramSchema of
Just (SwaggerItemsObject obj) ->
case obj of
Ref reference -> pure $ "[" ++ (setValidConstructorId $ T.unpack $ getReference reference) ++ "]"
Inline recursiveSchema -> do
let innerTypeName = (setValidConstructorId $ paramNameOrRecordName ++ "Contents")
hType <- ( ( (getTypeFromSwaggerType levelInfo tInfo innerTypeName (Just recursiveSchema) ) . _schemaParamSchema) recursiveSchema)
pure $ "[" ++ hType ++ "]"
-- NOTE : SwaggerItemsArray is used in a case where there are tuple Schemas.
-- So we need to represent the following list of Ref Schemas as a tuple or newtype (over tuple).
-- TODO : This needs to be changed. Ref to GitHub ticket #34
Just (SwaggerItemsArray innerArray) -> pure $ "SwaggerItemsArrayType"
-- checkIfArray $ flip Control.Monad.mapM innerArray (\singleElem -> do
-- case singleElem of
-- Ref ref -> pure $ setValidConstructorId $ T.unpack $ getReference ref
-- Inline innerSchema -> ((getTypeFromSwaggerType mRouteName mParamNameOrRecordName (Just innerSchema) ) . _schemaParamSchema) innerSchema)
Just (SwaggerItemsPrimitive mCollectionFormat innerParamSchema) -> do
typeName <- do
let paramName = paramNameOrRecordName
let titleCaseParamName = setValidConstructorId $ T.unpack $ T.toTitle $ T.pack paramName
case _paramSchemaEnum innerParamSchema of
Just enumVals -> do
let (enumValList, ogVals)::([String], [String]) =
DL.unzip $ fmap (\(Data.Aeson.String val) -> (T.unpack $ T.toTitle val, T.unpack val) ) enumVals
let haskellNewTypeInfo = SumType (BasicEnum titleCaseParamName enumValList ogVals)
let hNewTyHM = HMS.singleton levelInfo [tInfoToTypeInfo tInfo haskellNewTypeInfo]
modifyState hNewTyHM
pure titleCaseParamName
-- TODO: Is it allowed for a CollectionFmt type to have this inner schema (complex type)?
Nothing -> getTypeFromSwaggerType levelInfo tInfo ("Collection" ++ titleCaseParamName) Nothing innerParamSchema
case mCollectionFormat of
                        (Just CollectionMulti) -> pure $ "P.MultiSet " ++ typeName
                        (Just CollectionTSV) -> pure $ "DelimitedCollection \"\t\" " ++ typeName
                        (Just CollectionSSV) -> pure $ "DelimitedCollection \" \" " ++ typeName
                        (Just CollectionPipes) -> pure $ "DelimitedCollection \"|\" " ++ typeName
                        -- Since CSV is the default, the case below covers (Just CollectionCSV) as well as Nothing
                        _ -> pure $ "DelimitedCollection \",\" " ++ typeName
Nothing -> error "Expected a SwaggerItems type due to SwaggerArray ParamSchema Type. But it did not find any! Please check the swagger spec!"
Just SwaggerObject -> do
let recordTypeName =
setValidConstructorId paramNameOrRecordName
case mOuterSchema of
Just outerSchema ->
case (HMSIns.toList $ _schemaProperties outerSchema) of
[] ->
case (_schemaAdditionalProperties outerSchema) of
Just additionalProps ->
case additionalProps of
AdditionalPropertiesSchema (Ref ref) -> pure $ "(HM.HashMap P.Text " ++ (setValidConstructorId $ T.unpack $ getReference ref) ++ ")"
AdditionalPropertiesSchema (Inline internalSchema) -> ((getTypeFromSwaggerType levelInfo tInfo recordTypeName (Just internalSchema)) . _schemaParamSchema) internalSchema
AdditionalPropertiesAllowed _ -> error "TODO: unhandled case of additional props"
Nothing ->
case (_paramSchemaType . _schemaParamSchema) outerSchema of
Just SwaggerObject -> pure $ "(HM.HashMap P.Text P.Text)"
_ -> error $ "Type SwaggerObject but swaggerProperties and additionalProperties are both absent! "
++ "Also, the paramSchema type in the ParamSchema is not an Object! Please check the JSON! "
++ "Debug Info (Schema): " ++ show outerSchema
propertyList -> do -- TODO: This needs to be changed when we encounter _schemaProperties in some swagger doc/schema.
innerRecordsInfo <- forM propertyList (\(recordName, iRefSchema) -> do
let recordNameStr = T.unpack recordName
innerRecordType <- case iRefSchema of
Ref refName -> pure $ setValidConstructorId $ T.unpack $ getReference refName
Inline irSchema -> ((getTypeFromSwaggerType levelInfo tInfo recordNameStr (Just irSchema)) . _schemaParamSchema) irSchema
let isRequired = isRequiredType outerSchema recordNameStr
let typeWithMaybe = if isRequired then innerRecordType else setMaybeType innerRecordType
pure (recordNameStr, typeWithMaybe) )
let finalProductTypeInfo = ProductType (NewData recordTypeName innerRecordsInfo) paramNameOrRecordName
let hNewTyHM = HMS.singleton levelInfo [tInfoToTypeInfo tInfo finalProductTypeInfo]
modifyState hNewTyHM
pure recordTypeName
Nothing -> error $ "Expected outer schema to be present when trying to construct type of SwaggerObject. Debug Info (ParamSchema): " ++ show paramSchema
Just SwaggerFile -> pure "W.FileInfo" -- TODO
Just SwaggerNull -> pure "()"
-- NOTE: what are types which have no type info?
Nothing -> pure "()"
-- x -> ("Got Unexpected Primitive Value : " ++ show x)
where
-- setNewTypeConsName :: LevelInfo -> CreateDataType -> CreateDataType
-- setNewTypeConsName lvlInfo (ProductType (NewData oldConsName inRecordInfo) ogName) =
-- routeNameStr = fromJustNote noRouteErrMsg maybeRouteName
-- newConsName = setValidConstructorId $ routeNameStr ++ oldConsName
-- in (ProductType (NewData newConsName inRecordInfo) ogName )
-- setNewTypeConsName _ otherType =
-- error $ "Expected RouteName along with Product Type. "
-- ++ "\nWe are trying to avoid a name clash so we are trying to set a new name "
-- ++ "for the type : " ++ (show otherType)
-- ++ "\nWe expected it to be a Product Type!"
isRequiredType :: Schema -> String -> Bool
isRequiredType tSchema recordFieldName = DL.elem (T.pack recordFieldName) (_schemaRequired tSchema)
setMaybeType :: String -> String
setMaybeType = ("P.Maybe " ++ )
isNamePresent :: String -> HMS.HashMap LevelInfo [TypeInfo] -> Bool
isNamePresent newTypeName stateVals =
let tInfos = DL.concat $ HMS.elems stateVals
stateTypeNames = fmap getTypeName tInfos
in DL.elem newTypeName stateTypeNames
checkIfArray :: StateT (HMS.HashMap LevelInfo [TypeInfo]) IO [String] -> StateT (HMS.HashMap LevelInfo [TypeInfo]) IO String
checkIfArray scStringList = do
stringList <- scStringList
case DL.nub stringList of
sameElem:[] -> pure $ "[" ++ sameElem ++ "]"
x -> error $ "Got different types in the same list. Not sure how to proceed! Please check the swagger doc! " ++ show x
-- checkIfSumTypeExists :: SumTypeCreation -> CreateDataType -> SumTypeCreation
-- checkIfSumTypeExists sumTypeCreation (SumType (BasicEnum typeName tVals _) ) =
-- case sumTypeCreation of
-- ExistingType eTyName -> ExistingType eTyName
-- CreateSumType (SumType (BasicEnum newTypeName newTypeVals ogVals ) ) -> do
-- case (newTypeVals == tVals) of
-- True -> ExistingType typeName
-- False ->
-- case (newTypeVals `DL.intersect` tVals) of
-- [] -> CreateSumType (SumType (BasicEnum newTypeName newTypeVals ogVals ) )
-- _ ->
-- let modConstructorNames = fmap (\oldCons -> setValidConstructorId $ newTypeName ++ oldCons) newTypeVals
-- in CreateSumType (SumType (BasicEnum newTypeName modConstructorNames ogVals ) )
-- CreateSumType xType ->
-- error $ "Expected only SumTypes here but got : " ++ (show xType)
-- ++ "\nSince we already filtered for only SumTypes, this should not be possible!"
-- checkIfSumTypeExists newType existingType =
-- error $ "PANIC : We already filtered for only Sum Types but encountered non-sum type constructor!"
-- ++ "\nDebugInfo : New Type to be created is : " ++ (show newType)
-- ++ "\nExisting type is : " ++ (show existingType)
modifyState :: HMS.HashMap LevelInfo [TypeInfo] -> StateConfig ()
modifyState hNewTyHM = modify' (\existingState -> HMS.unionWith checkForDuplicateAndAdd existingState hNewTyHM)
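-- Merge function used with insertWith/unionWith: removes exact duplicates and, when a type with the
-- same constructor name already exists, attaches an incremented naming counter to the new entry.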
checkForDuplicateAndAdd :: [TypeInfo] -> [TypeInfo] -> [TypeInfo]
checkForDuplicateAndAdd newList oldList =
let duplicatesRemovedTyInfoList = DL.union newList oldList
in DL.foldl' checkForSameNamedType [] duplicatesRemovedTyInfoList
checkForSameNamedType :: [TypeInfo] -> TypeInfo -> [TypeInfo]
checkForSameNamedType accList currentTyInfo = do
let (currentConsName, _) = getTypeNameWithCounter currentTyInfo
-- NOTE: We discard the naming counter of current TypeInfo because we expect it to always be `Nothing`
let mHighestCtr = DL.foldl' (getCtrIfIdenticalCons currentConsName) Nothing accList
case mHighestCtr of
Just ctr -> (addCtrToTypeInfo (Just (ctr + 1) ) currentTyInfo):accList
Nothing -> currentTyInfo:accList
where
getCtrIfIdenticalCons :: String -> NamingCounter -> TypeInfo -> NamingCounter
getCtrIfIdenticalCons currentConsName accNCtr tyInfo = do
let (cName, mCtr) = getTypeNameWithCounter tyInfo
case cName == currentConsName of
True ->
case mCtr of
Just ctr ->
case accNCtr of
Just accCtrVal -> Just $ max accCtrVal ctr
Nothing -> Just ctr
Nothing ->
case accNCtr of
Just accCtrVal -> Just accCtrVal
Nothing -> Just 0
False -> accNCtr
-- where
-- (trace (traceStateValIfAuthor newList existingList) existingList)
-- traceStateValIfAuthor :: [TypeInfo] -> [TypeInfo] -> String
-- traceStateValIfAuthor newList exList = show newList
-- case newList of
-- someTy:[] ->
-- case getTypeName someTy of
-- "Author" -> "NewList : " ++ (show newList) ++ "\nExistingList : " ++ (show exList)
-- "Tree" -> "NewList : " ++ (show newList) ++ "\nExistingList : " ++ (show exList)
-- _ -> ""
-- _ -> "more than 1 elem or not Def Ty"
parseHaskellSrcContract :: String -> IO ()
parseHaskellSrcContract pathToFile = do
parseResult <- parseFile pathToFile
case parseResult of
ParseOk hModule ->
case hModule of
Module _ (Just _) _ _ declarations -> putStrLn $ show declarations
_ -> error "Module is not in the correct format?!"
ParseFailed srcLoc errMsg -> putStrLn $ (show srcLoc) ++ " : " ++ errMsg
instanceTopVec :: Vector 4 String
instanceTopVec = fromJustNote "Expected a list with 4 elements for WebApi instance!" $ SV.fromList ["ApiContract", "EDITranslatorApi", "POST", "EdiToJsonR" ]
instanceTypeVec :: [Vector 4 String]
instanceTypeVec = [
( fromMaybeSV $ SV.fromList ["ApiOut", "POST", "EdiToJsonR", "Value" ])
, ( fromMaybeSV $ SV.fromList ["ApiErr", "POST", "EdiToJsonR", "Text" ])
, ( fromMaybeSV $ SV.fromList ["FormParam", "POST", "EdiToJsonR", "EdiStr" ])
, ( fromMaybeSV $ SV.fromList ["QueryParam", "POST", "EdiToJsonR", "Maybe CharacterSet"])
]
where
fromMaybeSV :: Maybe a -> a
fromMaybeSV = fromJustNote "Expected a list with 4 elements for WebApi instance! "
fromParamVec :: Vector 3 String
fromParamVec = fromJustNote "Expected a list with 3 elements for WebApi instance!" $ SV.fromList ["FromParam", "FormParam", "EdiStr"]
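-- Emits the contract module body: an empty data type for the contract, one route type alias per route,
-- the WebApi instance listing routes and methods, and the ApiContract instances with their associated types.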
generateContractBody :: String -> [ContractDetails] -> [Decl SrcSpanInfo]
generateContractBody contractName contractDetails =
[emptyDataDeclaration contractName] ++ flip fmap contractDetails (\cDetail -> routeDeclaration (routeName cDetail) (routePath cDetail) ) ++
[webApiInstance contractName (fmap (\ctDetail -> (routeName ctDetail , fmap qualMethod (Map.keys $ methodData ctDetail))) contractDetails ) ] ++
(fmap (\(topVec, innerVecList) -> apiInstanceDeclaration topVec innerVecList ) $ DL.concat $ fmap (constructVectorForRoute contractName) contractDetails)
where
qualMethod :: SG.Method -> String
qualMethod = ("W." ++) . show
constructVectorForRoute :: String -> ContractDetails -> [(Vector 4 String, [Vector 4 String])]
constructVectorForRoute ctrtName ctrDetails =
let currentRouteName = routeName ctrDetails
in Map.foldlWithKey' (routeDetailToVector ctrtName currentRouteName) [] (methodData ctrDetails)
routeDetailToVector :: String -> String -> [(Vector 4 String, [Vector 4 String])] -> SG.Method -> ApiTypeDetails -> [(Vector 4 String, [Vector 4 String])]
routeDetailToVector ctrtName routeNameStr accValue currentMethod apiDetails =
let qualMethodName = "W." ++ show currentMethod
topLevelVector = fromMaybeSV $ SV.fromList ["W.ApiContract", ctrtName, qualMethodName, routeNameStr]
respType = Just $ apiOut apiDetails
errType = apiErr apiDetails
formParamType = formParam apiDetails
queryParamType = queryParam apiDetails
fileParamType = fileParam apiDetails
headerParamType = headerIn apiDetails
requestBodyType = requestBody apiDetails
contentType = contentTypes apiDetails
instanceVectorList =
catMaybes $ fmap (\(typeInfo, typeLabel) -> fmap (\tInfo -> fromMaybeSV $ SV.fromList [typeLabel, qualMethodName, routeNameStr, tInfo] ) typeInfo)
$ DL.zip (respType:errType:formParamType:queryParamType:fileParamType:headerParamType:requestBodyType:contentType:[])
["ApiOut", "ApiErr","FormParam", "QueryParam", "FileParam", "HeaderIn", "RequestBody", "ContentTypes"]
in (topLevelVector, instanceVectorList):accValue
fromMaybeSV = fromJustNote "Expected a list with 4 elements for WebApi instance! "
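-- Emits `type <Name> = <Alias>`.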
typeAliasForDecl :: String -> String -> Decl SrcSpanInfo
typeAliasForDecl typeNameStr typeAliasStr =
TypeDecl noSrcSpan (DHead noSrcSpan (nameDecl typeNameStr)) (typeConstructor typeAliasStr)
-- (DataDecl noSrcSpan
-- (NewType noSrcSpan)
-- Nothing
-- (DHead noSrcSpan (Ident noSrcSpan "X"))
-- [QualConDecl noSrcSpan Nothing Nothing (ConDecl noSrcSpan (Ident noSrcSpan "X") [TyCon noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "String"))])]
-- [Deriving noSrcSpan Nothing [IRule noSrcSpan Nothing Nothing (IHCon noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "Eq"))),IRule noSrcSpan Nothing Nothing (IHCon noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "Show"))),IRule noSrcSpan Nothing Nothing (IHCon noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "Generic")))]])
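-- Builds a data/newtype declaration either from a wrapped type name (newtype style) or from a record
-- field list, also returning any field names that had to be sanitized. The CPP split below only
-- reflects the changed Deriving representation in haskell-src-exts >= 1.20.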
#if MIN_VERSION_haskell_src_exts(1,20,0)
dataDeclaration :: (DataOrNew SrcSpanInfo) -> String -> Either String InnerRecords -> [DerivingClass] -> (ModifiedRecords, Decl SrcSpanInfo)
dataDeclaration dataOrNew dataName eStringInnerRecords derivingList =
let (modRecords, constructorDecl) = either (\tName -> ([], newTypeConstructorDecl dataName tName)) (constructorDeclaration dataName ) eStringInnerRecords
decl =
DataDecl noSrcSpan
dataOrNew
Nothing
(declarationHead dataName)
constructorDecl
[derivingDecl derivingList]
in (modRecords, decl)
#else
dataDeclaration :: (DataOrNew SrcSpanInfo) -> String -> Either String InnerRecords -> [DerivingClass] -> (ModifiedRecords, Decl SrcSpanInfo)
dataDeclaration dataOrNew dataName eStringInnerRecords derivingList =
let (modRecords, constructorDecl) = either (\tName -> ([], newTypeConstructorDecl dataName tName)) (constructorDeclaration dataName ) eStringInnerRecords
decl =
DataDecl noSrcSpan
dataOrNew
Nothing
(declarationHead dataName)
constructorDecl
(Just $ derivingDecl derivingList)
in (modRecords, decl)
#endif
declarationHead :: String -> DeclHead SrcSpanInfo
declarationHead declHeadName = (DHead noSrcSpan (Ident noSrcSpan declHeadName) )
newTypeConstructorDecl :: String -> String -> [QualConDecl SrcSpanInfo]
newTypeConstructorDecl consName tyName =
[QualConDecl noSrcSpan Nothing Nothing (ConDecl noSrcSpan (nameDecl consName) [typeConstructor tyName])]
constructorDeclaration :: String -> InnerRecords -> (ModifiedRecords, [QualConDecl SrcSpanInfo])
constructorDeclaration constructorName innerRecords =
let (mModRecords, fieldDecls) = DL.unzip (fmap fieldDecl innerRecords)
modRecords = catMaybes mModRecords
qualConDecl = [QualConDecl noSrcSpan Nothing Nothing (RecDecl noSrcSpan (nameDecl constructorName) fieldDecls )]
in (modRecords, qualConDecl)
stringLiteral :: String -> Exp SrcSpanInfo
stringLiteral str = (Lit noSrcSpan (LHE.String noSrcSpan str str))
variableName :: String -> Exp SrcSpanInfo
variableName nameStr = (Var noSrcSpan (UnQual noSrcSpan (nameDecl nameStr) ) )
nameDecl :: String -> Name SrcSpanInfo
nameDecl = Ident noSrcSpan
fieldDecl :: (String, String) -> (Maybe (String, String), FieldDecl SrcSpanInfo)
fieldDecl (fieldName, fieldType) = do
let (isChanged, fName) = setValidFieldName fieldName
let mModRecord =
case isChanged of
True -> Just (fieldName, fName)
False -> Nothing
let fDecl = FieldDecl noSrcSpan [nameDecl fName] (TyCon noSrcSpan (UnQual noSrcSpan (nameDecl fieldType)))
(mModRecord, fDecl)
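-- Sanitizes a string into a valid constructor name: invalid characters and keywords are fixed up
-- and the first character is upper-cased.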
setValidConstructorId :: String -> String
setValidConstructorId str =
let (_, validName) = setValidFieldName str
in (Char.toUpper $ DL.head validName):(DL.tail validName)
setValidFieldName :: String -> (Bool, String)
setValidFieldName fldName = do
-- Replace invalid identifier characters; if the result is a Haskell keyword, append an underscore.
let (isChanged, invalidsFixed) = fixInvalidId fldName
case isHsKeyword invalidsFixed of
True -> (True, invalidsFixed ++ "_")
False -> (isChanged, invalidsFixed)
where
isHsKeyword :: String -> Bool
isHsKeyword str = DL.elem str haskellKeywords
fixInvalidId :: String -> (Bool, String)
fixInvalidId idVal
| idVal == "" = error "Encountered potential empty Haskell Identifier! Please check the Swagger JSON!"
| idVal == "_" = (True, "holeName") -- ?? TODO : Is this allowed? Discuss
| idVal == "\'" = (True, "singleQuoteId") -- TODO : Is this allowed?
| DL.length idVal == 1 && isValidHsIdChar (DL.head idVal) = (False, fmap Char.toLower idVal)
| otherwise = do
let newVal = replaceInvalidChars ("",DL.tail idVal) (DL.head idVal)
let lCaseNewVal = makeFirstCharAlpha $ (Char.toLower $ DL.head newVal):(DL.tail newVal)
case lCaseNewVal == idVal of
True -> (False, lCaseNewVal)
False -> (True, lCaseNewVal)
where
replaceInvalidChars :: (String, String) -> Char -> String
replaceInvalidChars (prev, next) currentChar =
if isValidHsIdChar currentChar && (not $ DL.null next)
then replaceInvalidChars (prev ++ [currentChar], DL.tail next) (DL.head next)
else if isValidHsIdChar currentChar
then prev ++ [currentChar]
-- check for a prefix of invalid chars and return the rest of the next chars
else do
let newNext = snd $ DL.break isValidHsIdChar next
case DL.null newNext of
True -> prev ++ "_"
False -> replaceInvalidChars (prev ++ "_", DL.tail newNext ) (DL.head newNext)
isValidHsIdChar :: Char -> Bool
isValidHsIdChar x = (Char.isAlphaNum x) || x == '_' || x == '\''
makeFirstCharAlpha :: String -> String
makeFirstCharAlpha inpString =
case inpString of
[] -> error "Encountered potential empty Haskell Identifier! Please check the Swagger JSON!"
firstChar:_ ->
case Char.isAlpha firstChar of
True -> inpString
False -> 'h':inpString
derivingDecl :: [String] -> Deriving SrcSpanInfo
#if MIN_VERSION_haskell_src_exts(1,20,0)
derivingDecl derivingList = Deriving noSrcSpan Nothing $ fmap iRule derivingList
#else
derivingDecl derivingList = Deriving noSrcSpan $ fmap iRule derivingList
#endif
where
iRule :: String -> InstRule SrcSpanInfo
iRule tClass = IRule noSrcSpan Nothing Nothing (IHCon noSrcSpan (UnQual noSrcSpan (nameDecl tClass)))
emptyDataDeclaration :: String -> Decl SrcSpanInfo
emptyDataDeclaration declName =
DataDecl noSrcSpan
(DataType noSrcSpan)
Nothing
(declarationHead declName)
[]
#if MIN_VERSION_haskell_src_exts(1,20,0)
[]
#else
Nothing
#endif
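-- Declares a sum type in which every constructor carries exactly one payload type.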
complexSumTypeDecl :: String -> [(String, String)] -> [DerivingClass] -> Decl SrcSpanInfo
complexSumTypeDecl dataName constructorsAndTypes derivingList =
DataDecl noSrcSpan
(DataType noSrcSpan) Nothing
(declarationHead dataName)
(fmap tConstructors constructorsAndTypes)
#if MIN_VERSION_haskell_src_exts(1,20,0)
[derivingDecl derivingList]
#else
(Just $ derivingDecl derivingList)
#endif
where
tConstructors :: (String, String) -> QualConDecl SrcSpanInfo
tConstructors (cName, consType) =
QualConDecl noSrcSpan Nothing Nothing
(ConDecl noSrcSpan (nameDecl cName) [typeConstructor consType])
enumTypeDeclaration :: String -> [String] -> [DerivingClass] -> Decl SrcSpanInfo
enumTypeDeclaration dataName listOfComponents derivingList =
DataDecl noSrcSpan
(DataType noSrcSpan) Nothing
(declarationHead dataName)
(sumTypeConstructor listOfComponents)
#if MIN_VERSION_haskell_src_exts(1,20,0)
[derivingDecl derivingList]
#else
(Just $ derivingDecl derivingList)
#endif
where
sumTypeConstructor :: [String] -> [QualConDecl SrcSpanInfo]
sumTypeConstructor =
          fmap (\constructorVal -> QualConDecl noSrcSpan Nothing Nothing
                                     (ConDecl noSrcSpan
                                        (nameDecl constructorVal) [] ) )
languageExtension :: String -> ModulePragma SrcSpanInfo
languageExtension langExtName = LanguagePragma noSrcSpan [nameDecl langExtName]
moduleImport :: (String, (Bool, Maybe (ModuleName SrcSpanInfo)) )-> ImportDecl SrcSpanInfo
moduleImport (moduleNameStr, (isQualified, qualifiedName) ) =
ImportDecl {
importAnn = noSrcSpan,
importModule = ModuleName noSrcSpan moduleNameStr,
importQualified = isQualified,
importSrc = False,
importSafe = False,
importPkg = Nothing,
importAs = qualifiedName,
importSpecs = Nothing
}
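-- Builds an instance whose head applies a class to three type arguments
-- (e.g. `instance W.ApiContract <Contract> <Method> <Route>`), filling in the
-- associated type instances from the inner vectors.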
apiInstanceDeclaration :: Vector 4 String -> [Vector 4 String] -> Decl SrcSpanInfo
apiInstanceDeclaration topLevelDecl innerTypesInstList =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(IHApp noSrcSpan
(IHApp noSrcSpan
(instanceHead (SV.index topLevelDecl (Finite 0) ) )
(typeConstructor $ SV.index topLevelDecl (Finite 1) )
)
(typeConstructor $ SV.index topLevelDecl (Finite 2) )
)
(typeConstructor $ SV.index topLevelDecl (Finite 3) )
)
) (Just $ fmap apiInstanceTypeDecl innerTypesInstList)
apiInstanceTypeDecl :: Vector 4 String -> InstDecl SrcSpanInfo
apiInstanceTypeDecl innerTypes =
InsType noSrcSpan
(TyApp noSrcSpan
(TyApp noSrcSpan
(typeConstructor (SV.index innerTypes (Finite 0) ) )
(typeConstructor (SV.index innerTypes (Finite 1) ) )
)
(typeConstructor (SV.index innerTypes (Finite 2) ) )
)
(typeConstructor (SV.index innerTypes (Finite 3) ) )
instanceHead :: String -> InstHead SrcSpanInfo
instanceHead instName = (IHCon noSrcSpan
(UnQual noSrcSpan $ nameDecl instName)
)
typeConstructor :: String -> Type SrcSpanInfo
typeConstructor typeConName = (TyCon noSrcSpan
(UnQual noSrcSpan $ nameDecl typeConName)
)
dataConstructor :: String -> Exp SrcSpanInfo
dataConstructor dataConName = Con noSrcSpan (UnQual noSrcSpan $ nameDecl dataConName)
fromParamInstanceDecl :: Vector 3 String -> Decl SrcSpanInfo
fromParamInstanceDecl instTypes =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(IHApp noSrcSpan
(instanceHead $ SV.index instTypes (Finite 0) )
(TyPromoted noSrcSpan (PromotedCon noSrcSpan True (UnQual noSrcSpan (nameDecl $ SV.index instTypes (Finite 1) ))))
)
(typeConstructor $ SV.index instTypes (Finite 2) )
)
)
Nothing
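-- Folds the path components into a type-level route joined with W.:/ ; static segments become
-- promoted strings and captured params become their types.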
recursiveTypeForRoute :: [PathComponent] -> Type SrcSpanInfo
recursiveTypeForRoute routeComponents =
case routeComponents of
[] -> error "Did not expect an empty list here! "
x:[] -> processPathComponent x
prevElem:lastElem:[] ->
(TyInfix noSrcSpan
(processPathComponent prevElem)
(unPromotedUnQualSymDecl "W.:/")
(processPathComponent lastElem)
)
currentRoute:remainingRoute ->
(TyInfix noSrcSpan
(processPathComponent currentRoute)
(unPromotedUnQualSymDecl "W.:/")
(recursiveTypeForRoute remainingRoute)
)
where
processPathComponent :: PathComponent -> Type SrcSpanInfo
processPathComponent pathComp =
case pathComp of
PathComp pComp -> promotedType pComp
PathParamType pType -> typeConstructor pType
promotedType :: String -> Type SrcSpanInfo
promotedType typeNameData =
(TyPromoted noSrcSpan
(PromotedString noSrcSpan typeNameData typeNameData)
)
#if MIN_VERSION_haskell_src_exts(1,20,0)
unPromotedUnQualSymDecl :: String -> MaybePromotedName SrcSpanInfo
unPromotedUnQualSymDecl str =
(UnpromotedName noSrcSpan
(UnQual noSrcSpan
(Symbol noSrcSpan str)
))
#else
unPromotedUnQualSymDecl :: String -> QName SrcSpanInfo
unPromotedUnQualSymDecl = unQualSymDecl
#endif
unQualSymDecl :: String -> QName SrcSpanInfo
unQualSymDecl str =
(UnQual noSrcSpan
(Symbol noSrcSpan str)
)
patternVariable :: String -> Pat SrcSpanInfo
patternVariable varName = PVar noSrcSpan (nameDecl varName)
-- Show Instance for Enum Type
instanceDeclForShow :: String -> [Decl SrcSpanInfo]
instanceDeclForShow dataTypeName =
[InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "P.Show")
(typeConstructor dataTypeName)
)
)
(Just
[InsDecl noSrcSpan
(FunBind noSrcSpan
[Match noSrcSpan (Ident noSrcSpan "show")
[PVar noSrcSpan (Ident noSrcSpan "st'")]
(UnGuardedRhs noSrcSpan (InfixApp noSrcSpan (Var noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "ASCII") (nameDecl "unpack")))
(QVarOp noSrcSpan (unQualSymDecl "$") )
(App noSrcSpan
(variableName "encodeParam")
(variableName "st'")
))) Nothing])]) ]
-- Instances for ToJSON and FromJSON For Sum Types
instanceDeclForJSONForSumType :: String -> [Decl SrcSpanInfo]
instanceDeclForJSONForSumType dataTypeName = [toJsonInstance, fromJsonInstance]
where
toJsonInstance =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "P.ToJSON")
(typeConstructor dataTypeName)
)
)
(Just
[InsDecl noSrcSpan
(FunBind noSrcSpan
[Match noSrcSpan
(nameDecl "toJSON")
[PVar noSrcSpan (nameDecl "enumVal")]
(UnGuardedRhs noSrcSpan (InfixApp noSrcSpan (dataConstructor "String")
(QVarOp noSrcSpan (unQualSymDecl "$") )
(InfixApp noSrcSpan (variableName "pack")
(QVarOp noSrcSpan (unQualSymDecl "$"))
(App noSrcSpan
(variableName "show")
(variableName "enumVal")
)
)
)) Nothing])])
fromJsonInstance =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "P.FromJSON")
(typeConstructor dataTypeName)
)
)
(Just
[InsDecl noSrcSpan
(FunBind noSrcSpan
[Match noSrcSpan (nameDecl "parseJSON")
[PVar noSrcSpan (nameDecl "jsonVal")]
(UnGuardedRhs noSrcSpan
(App noSrcSpan
(App noSrcSpan
(App noSrcSpan (variableName "withText") (stringLiteral "Expected Text in the JSON!" ) )
(Paren noSrcSpan (Lambda noSrcSpan [PVar noSrcSpan (nameDecl "textVal")]
(Case noSrcSpan (InfixApp noSrcSpan (variableName "decodeParam") (QVarOp noSrcSpan (unQualSymDecl "$")) (App noSrcSpan (variableName "encodeUtf8") (variableName "textVal") ))
[Alt noSrcSpan
(PApp noSrcSpan (UnQual noSrcSpan (nameDecl "P.Just")) [PVar noSrcSpan (nameDecl "x")])
(UnGuardedRhs noSrcSpan
(App noSrcSpan (variableName "pure") (variableName "x") ))
Nothing
,Alt noSrcSpan
(PApp noSrcSpan (UnQual noSrcSpan (nameDecl "P.Nothing")) [])
(UnGuardedRhs noSrcSpan (App noSrcSpan (variableName "error") (stringLiteral "Failed while parsing Status value from JSON")))
Nothing ]
                    )))) (variableName "jsonVal") )) Nothing])])
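-- ToJSON/FromJSON instances for a generated record; when field names were sanitized, the instances
-- carry a fieldLabelModifier built (via keyMapping) from the original/modified name pairs.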
jsonInstances :: String -> ModifiedRecords -> [Decl SrcSpanInfo]
jsonInstances dataTypeName modRecords = [jsonInstance ToJson, jsonInstance FromJson]
where
jsonInstance :: JsonDirection -> Decl SrcSpanInfo
jsonInstance jsonDirection =
case modRecords of
[] ->
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead $ show jsonDirection)
(typeConstructor dataTypeName)
)
) Nothing
modRecList -> do
let (outerFn, genericFn) = getEncodingFnStr jsonDirection
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead $ show jsonDirection)
(typeConstructor dataTypeName)
)
)
(Just [
InsDecl noSrcSpan
(PatBind noSrcSpan
(PVar noSrcSpan (nameDecl outerFn))
(UnGuardedRhs noSrcSpan (
InfixApp noSrcSpan (variableName genericFn)
(QVarOp noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "P") (Symbol noSrcSpan "$")))
(RecUpdate noSrcSpan (variableName "defaultOptions")
[FieldUpdate noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "P") (Ident noSrcSpan "fieldLabelModifier"))
(InfixApp noSrcSpan (variableName "keyMapping")
(QVarOp noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "P") (Symbol noSrcSpan "$")))
(App noSrcSpan (Var noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "HM") (nameDecl "fromList")))
(List noSrcSpan (fmap changedFieldModsHM modRecList))
)
) ] ) ) ) Nothing)
] )
changedFieldModsHM :: (String, String) -> Exp SrcSpanInfo
changedFieldModsHM (modFieldName, ogFieldName) =
Tuple noSrcSpan Boxed [stringLiteral ogFieldName, stringLiteral modFieldName]
getEncodingFnStr :: JsonDirection -> (String, String)
getEncodingFnStr jsonDir =
case jsonDir of
ToJson -> ("toEncoding", "genericToEncoding")
FromJson -> ("parseJSON", "genericParseJSON")
queryParamInstanceIRule :: String -> String -> InstRule SrcSpanInfo
queryParamInstanceIRule paramDirection sumTypeName =
IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(IHApp noSrcSpan
(instanceHead paramDirection)
(typeConstructor "P.QueryParam") )
-- (TyPromoted noSrcSpan (PromotedCon noSrcSpan True (UnQual noSrcSpan (nameDecl "QueryParam") ) ) ) )
(typeConstructor sumTypeName) )
-- The ToParam 'QueryParam instance for Sum Type
toParamQueryParamInstance :: String -> Decl SrcSpanInfo
toParamQueryParamInstance sumTypeName =
let sumTypeVarName = (fmap Char.toLower sumTypeName) ++ "'"
in InstDecl noSrcSpan Nothing
(queryParamInstanceIRule "P.ToParam" sumTypeName)
( Just [InsDecl noSrcSpan
(FunBind noSrcSpan
[Match noSrcSpan
(nameDecl "toParam")
[PWildCard noSrcSpan , PVar noSrcSpan (nameDecl "pfx'"),PVar noSrcSpan (nameDecl sumTypeVarName)]
(UnGuardedRhs noSrcSpan
(List noSrcSpan
[Tuple noSrcSpan
Boxed
[Var noSrcSpan
(UnQual noSrcSpan
(nameDecl "pfx'")
),
InfixApp noSrcSpan
(dataConstructor "P.Just")
(QVarOp noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "P") (Symbol noSrcSpan "$")))
(App noSrcSpan
(variableName "encodeParam")
(variableName sumTypeVarName )
)
]]))
Nothing ])])
encodeCaseStatementOption :: (String, String) -> Alt SrcSpanInfo
encodeCaseStatementOption (caseMatchOn, caseResult) =
Alt noSrcSpan
(PApp noSrcSpan
(UnQual noSrcSpan
(nameDecl caseMatchOn)
)
[]
)
(UnGuardedRhs noSrcSpan (stringLiteral caseResult) )
Nothing
decodeCaseStatementOption :: (String, String) -> Alt SrcSpanInfo
decodeCaseStatementOption (caseMatchOnStr, resultOfCaseMatch) =
Alt noSrcSpan
(PLit noSrcSpan (Signless noSrcSpan) (LHE.String noSrcSpan caseMatchOnStr caseMatchOnStr ) )
(UnGuardedRhs noSrcSpan (App noSrcSpan (dataConstructor "P.Just") (dataConstructor resultOfCaseMatch) ))
Nothing
-- the EncodeParam instance for Sum Type
encodeParamSumTypeInstance :: String -> [(String, String)] -> Decl SrcSpanInfo
encodeParamSumTypeInstance sumTypeName caseOptions =
let sumTypeVarName = (fmap Char.toLower sumTypeName) ++ "'"
in InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing (IHApp noSrcSpan (instanceHead "P.EncodeParam") (typeConstructor sumTypeName) ))
(Just [InsDecl noSrcSpan
(FunBind noSrcSpan
[Match noSrcSpan
(nameDecl "encodeParam")
[PVar noSrcSpan (nameDecl sumTypeVarName)]
(UnGuardedRhs noSrcSpan
(Case noSrcSpan
(variableName sumTypeVarName)
(fmap encodeCaseStatementOption caseOptions) ))
Nothing
])])
-- The DecodeParam Instance for Sum Type
decodeParamSumTypeInstance :: String -> [(String, String)] -> Decl SrcSpanInfo
decodeParamSumTypeInstance sumTypeName caseOptions =
let sumTypeVarName = (fmap Char.toLower sumTypeName) ++ "'"
in InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing (IHApp noSrcSpan (instanceHead "P.DecodeParam") (typeConstructor sumTypeName) ))
(Just [InsDecl noSrcSpan
(FunBind noSrcSpan
[Match noSrcSpan (nameDecl "decodeParam")
[PVar noSrcSpan (nameDecl sumTypeVarName)]
(UnGuardedRhs noSrcSpan
(Case noSrcSpan
(variableName sumTypeVarName)
((fmap decodeCaseStatementOption caseOptions) ++ [Alt noSrcSpan (PWildCard noSrcSpan) (UnGuardedRhs noSrcSpan (dataConstructor "P.Nothing") ) Nothing] ) ))
Nothing
])])
-- The FromParam 'QueryParam instance for Sum Type
fromParamQueryParamInstance :: String -> Decl SrcSpanInfo
fromParamQueryParamInstance sumTypeName =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "P.DecodeParam")
(typeConstructor sumTypeName) ))
-- (queryParamInstanceIRule "P.FromParam" sumTypeName)
(Just
[InsDecl noSrcSpan
(FunBind noSrcSpan
[Match noSrcSpan
(nameDecl "fromParam")
(fmap patternVariable ["pt'","key'","kvs'"])
(UnGuardedRhs noSrcSpan
(Case noSrcSpan
(App noSrcSpan
(App noSrcSpan
(App noSrcSpan
(variableName "lookupParam")
(variableName "pt'") )
(variableName "key'") )
(variableName"kvs'")
)
[Alt noSrcSpan
(PApp noSrcSpan
(Qual noSrcSpan (ModuleName noSrcSpan "P")
(nameDecl "Just")
)
[PParen noSrcSpan
(PApp noSrcSpan
(Qual noSrcSpan (ModuleName noSrcSpan "P")
(nameDecl "Just")
) [patternVariable "par'"]
)
]
)
(UnGuardedRhs noSrcSpan
(Do noSrcSpan
[Qualifier noSrcSpan
(Case noSrcSpan
(App noSrcSpan
(variableName "decodeParam")
(variableName "par'")
)
[Alt noSrcSpan
(PApp noSrcSpan
(Qual noSrcSpan (ModuleName noSrcSpan "P") (nameDecl "Just"))
[patternVariable "v"]
)
(UnGuardedRhs noSrcSpan
(InfixApp noSrcSpan
(dataConstructor "Validation")
(QVarOp noSrcSpan
(unQualSymDecl "$")
)
(App noSrcSpan
(dataConstructor "Right")
(variableName "v")
)
)
)
Nothing,
Alt noSrcSpan
(PWildCard noSrcSpan)
(UnGuardedRhs noSrcSpan
(InfixApp noSrcSpan
(dataConstructor "Validation")
(QVarOp noSrcSpan
(unQualSymDecl "$")
)
(App noSrcSpan
(dataConstructor "Left")
(List noSrcSpan
[App noSrcSpan
(App noSrcSpan
(dataConstructor "ParseErr")
(variableName "key'")
)
(stringLiteral ("Unable to cast to " ++ sumTypeName) )
]
)
)
)
)
Nothing
]
)
]
)
)
Nothing,
Alt noSrcSpan
(PWildCard noSrcSpan)
(UnGuardedRhs noSrcSpan
(InfixApp noSrcSpan
(dataConstructor "Validation")
(QVarOp noSrcSpan
(unQualSymDecl "$")
)
(App noSrcSpan
(dataConstructor "Left")
(List noSrcSpan
[App noSrcSpan
(dataConstructor "NotFound")
(variableName "key'")
]
)
)
)
)
Nothing
]
)
)
Nothing
]
)
]
)
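-- Declares `type <RouteName> = ...` from the path components; a single static segment is wrapped in W.Static.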
routeDeclaration :: String -> [PathComponent] -> Decl SrcSpanInfo
routeDeclaration routeNameStr routePathComponents =
case routePathComponents of
(PathComp _):[] ->
TypeDecl noSrcSpan
(declarationHead routeNameStr)
(TyApp noSrcSpan
(typeConstructor "W.Static")
(recursiveTypeForRoute routePathComponents) )
_ ->
TypeDecl noSrcSpan
(declarationHead routeNameStr)
(recursiveTypeForRoute routePathComponents)
webApiInstance :: String -> [(String, [String])] -> Decl SrcSpanInfo
webApiInstance mainTypeName routeAndMethods =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "W.WebApi")
(typeConstructor mainTypeName)
)
)
(Just
[InsType noSrcSpan
(TyApp noSrcSpan
(typeConstructor "Apis")
(typeConstructor mainTypeName)
)
(TyPromoted noSrcSpan
(PromotedList noSrcSpan True
(fmap innerRouteInstance routeAndMethods)
)
)
]
)
where
innerRouteInstance :: (String, [String]) -> Type SrcSpanInfo
innerRouteInstance (rName, listOfMethods) =
TyApp noSrcSpan
(TyApp noSrcSpan
(typeConstructor "W.Route")
(TyPromoted noSrcSpan
(PromotedList noSrcSpan True
(fmap typeConstructor listOfMethods)
)
)
)
(typeConstructor rName)
defaultToParamInstance :: String -> String -> Decl SrcSpanInfo
defaultToParamInstance dataTypeName paramType =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(IHApp noSrcSpan
(instanceHead "P.ToParam")
(TyPromoted noSrcSpan (PromotedCon noSrcSpan True (UnQual noSrcSpan (nameDecl paramType)))))
(typeConstructor dataTypeName) ))
Nothing
defaultToSchemaInstance :: String -> Decl SrcSpanInfo
defaultToSchemaInstance dataTypeName =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "ToSchema")
(typeConstructor dataTypeName)
)
) Nothing
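-- ToSchema instance for a generated enum sum type (mapping constructor names back to their original
-- strings), plus ToSchema/ToParamSchema instances for `P.MultiSet` of that type.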
toSchemaInstanceForSumType :: String -> [(String, String)] -> [Decl SrcSpanInfo]
toSchemaInstanceForSumType typeName constructorValues =
toSchemaInst:multiSetToSchemaInst:multiSetToParamSchemaInst:[]
where
toSchemaInst :: Decl SrcSpanInfo
toSchemaInst =
InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "ToSchema")
(typeConstructor typeName)
)
)
(Just [InsDecl noSrcSpan
(PatBind noSrcSpan
(PVar noSrcSpan
(nameDecl "declareNamedSchema")
)
(UnGuardedRhs noSrcSpan
(App noSrcSpan
(Var noSrcSpan (UnQual noSrcSpan (nameDecl "genericDeclareNamedSchema")))
(Paren noSrcSpan
(App noSrcSpan
(App noSrcSpan
(App noSrcSpan
(App noSrcSpan
(App noSrcSpan
(dataConstructor "SchemaOptions")
(Var noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "P") (nameDecl "id"))))
(Paren noSrcSpan
(Lambda noSrcSpan [PVar noSrcSpan (nameDecl "inputConst")]
(Case noSrcSpan
(Var noSrcSpan (UnQual noSrcSpan (nameDecl "inputConst")))
(fmap caseMatchStatement constructorValues ++ errorCaseMatch)
)
)
)
)
(Var noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "P") (nameDecl "id")))
)
(dataConstructor "True")
)
(dataConstructor "False")
)
)
)
)
Nothing)])
caseMatchStatement :: (String, String) -> Alt SrcSpanInfo
caseMatchStatement (lowerCaseCons, typeCons) =
(Alt noSrcSpan
(PLit noSrcSpan (Signless noSrcSpan) (LHE.String noSrcSpan typeCons typeCons))
(UnGuardedRhs noSrcSpan (stringLiteral lowerCaseCons) ) Nothing)
errorCaseMatch :: [Alt SrcSpanInfo]
errorCaseMatch =
[Alt noSrcSpan
(PWildCard noSrcSpan)
(UnGuardedRhs noSrcSpan
(App noSrcSpan (variableName "error") (stringLiteral "Encountered invalid constructor value for sum type!"))
) Nothing]
multiSetToSchemaInst :: Decl SrcSpanInfo
multiSetToSchemaInst =
(InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "ToSchema")
(TyParen noSrcSpan
(TyApp noSrcSpan
(typeConstructor "P.MultiSet")
(typeConstructor typeName)
)
)
)
)
(Just
[InsDecl noSrcSpan
(PatBind noSrcSpan
(PVar noSrcSpan (Ident noSrcSpan "declareNamedSchema"))
(UnGuardedRhs noSrcSpan
(InfixApp noSrcSpan
(Var noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "plain")))
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan ".")))
(Var noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "paramSchemaToSchema")))
)
) Nothing ) ] ) )
multiSetToParamSchemaInst :: Decl SrcSpanInfo
multiSetToParamSchemaInst =
(InstDecl noSrcSpan Nothing
(IRule noSrcSpan Nothing Nothing
(IHApp noSrcSpan
(instanceHead "ToParamSchema")
(TyParen noSrcSpan
(TyApp noSrcSpan
(typeConstructor "P.MultiSet")
(typeConstructor typeName)
)
)
)
)
(Just
[InsDecl noSrcSpan
(FunBind noSrcSpan
[Match noSrcSpan
(Ident noSrcSpan "toParamSchema")
[PWildCard noSrcSpan]
(UnGuardedRhs noSrcSpan
(InfixApp noSrcSpan
(InfixApp noSrcSpan
(InfixApp noSrcSpan
(InfixApp noSrcSpan
(Var noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "mempty")))
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan "&")))
(Var noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "SW") (Ident noSrcSpan "type_")))
)
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan ".~")))
(Con noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "SwaggerArray")))
)
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan "&")))
(Var noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "SW") (Ident noSrcSpan "items")))
)
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan "?~")))
(App noSrcSpan
(App noSrcSpan
(Con noSrcSpan
(UnQual noSrcSpan (Ident noSrcSpan "SwaggerItemsPrimitive")))
(Con noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "Nothing")))
)
(Paren noSrcSpan
(InfixApp noSrcSpan
(InfixApp noSrcSpan
(InfixApp noSrcSpan
(InfixApp noSrcSpan
(Var noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "mempty")))
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan "&")))
(Var noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "SW") (Ident noSrcSpan "type_")))
)
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan ".~")))
(Con noSrcSpan (UnQual noSrcSpan (Ident noSrcSpan "SwaggerString")))
)
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan "&")))
(Var noSrcSpan (Qual noSrcSpan (ModuleName noSrcSpan "SW") (Ident noSrcSpan "enum_")))
)
(QVarOp noSrcSpan (UnQual noSrcSpan (Symbol noSrcSpan "?~")))
(List noSrcSpan
(fmap enumConstructor constructorValues)
)
)
)))) Nothing])]))
enumConstructor :: (String, String) -> Exp SrcSpanInfo
enumConstructor (ogCons, _) =
App noSrcSpan
(dataConstructor "String")
(stringLiteral ogCons)
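-- Writes the generated project's .cabal file, LICENSE and cabal.project into the generation directory.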
writeCabalAndProjectFiles :: FilePath -> String -> Bool -> [String] -> IO ()
writeCabalAndProjectFiles generationPath projectName needsWebapiXml modulesForImport = do
writeFile (generationPath ++ projectName ++ ".cabal") (cabalFileContents projectName needsWebapiXml modulesForImport)
writeFile (generationPath ++ "LICENSE") licenseFileContents
-- TODO : Once webapi-xml is pushed to GitHub, it needs to be added to the cabal.project file
writeFile (generationPath ++ "cabal.project") cabalProjectFileContents
---------------------------------------------------------------------------------------
-- Support multiple versions of GHC (Use ifndef )
-- for LTS 9.0 -> 1.18.2
| byteally/webapi | webapi-swagger/src/ContractGen.hs | bsd-3-clause | 107,791 | 0 | 43 | 29,935 | 23,387 | 11,854 | 11,533 | 1,574 | 38 |
--------------------------------------------------------------------------------
module Parse.Territory where
import Data.Char (isLetter)
import Text.ParserCombinators.ReadP
import Parse.Common
--------------------------------------------------------------------------------
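-- Parses a two-letter territory code into its full name, trying US states and territories first,
-- then country codes, and finally falling back to the raw two-letter string.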
visaTerritory :: ReadP String
visaTerritory =
leftBiasedChoice
[ "AL" ~> "Alabama"
, "AK" ~> "Alaska"
, "AZ" ~> "Arizona"
, "AR" ~> "Arkansas"
, "CA" ~> "California"
, "CO" ~> "Colorado"
, "CT" ~> "Connecticut"
, "DC" ~> "District of Columbia"
, "DE" ~> "Delaware"
, "FL" ~> "Florida"
, "GA" ~> "Georgia"
, "HI" ~> "Hawaii"
, "ID" ~> "Idaho"
, "IL" ~> "Illinois"
, "IN" ~> "Indiana"
, "IA" ~> "Iowa"
, "KS" ~> "Kansas"
, "KY" ~> "Kentucky"
, "LA" ~> "Louisiana"
, "ME" ~> "Maine"
, "MD" ~> "Maryland"
, "MA" ~> "Massachusetts"
, "MI" ~> "Michigan"
, "MN" ~> "Minnesota"
, "MS" ~> "Mississippi"
, "MO" ~> "Missouri"
, "MT" ~> "Montana"
, "NE" ~> "Nebraska"
, "NV" ~> "Nevada"
, "NH" ~> "New Hampshire"
, "NJ" ~> "New Jersey"
, "NM" ~> "New Mexico"
, "NY" ~> "New York"
, "NC" ~> "North Carolina"
, "ND" ~> "North Dakota"
, "OH" ~> "Ohio"
, "OK" ~> "Oklahoma"
, "OR" ~> "Oregon"
, "PA" ~> "Pennsylvania"
, "RI" ~> "Rhode Island"
, "SC" ~> "South Carolina"
, "SD" ~> "South Dakota"
, "TN" ~> "Tennessee"
, "TX" ~> "Texas"
, "UT" ~> "Utah"
, "VT" ~> "Vermont"
, "VA" ~> "Virginia"
, "WA" ~> "Washington"
, "WV" ~> "West Virginia"
, "WI" ~> "Wisconsin"
, "WY" ~> "Wyoming"
, "AD" ~> "Andorra"
, "AE" ~> "United Arab Emirates"
, "AF" ~> "Afghanistan"
, "AG" ~> "Antigua and Barbuda"
, "AI" ~> "Anguilla"
, "AL" ~> "Albania"
, "AM" ~> "Armenia"
, "AO" ~> "Angola"
, "AQ" ~> "Antarctica"
, "AR" ~> "Argentina"
, "AS" ~> "American Samoa"
, "AT" ~> "Austria"
, "AU" ~> "Australia"
, "AW" ~> "Aruba"
, "AX" ~> "Åland Islands"
, "AZ" ~> "Azerbaijan"
, "BA" ~> "Bosnia and Herzegovina"
, "BB" ~> "Barbados"
, "BD" ~> "Bangladesh"
, "BE" ~> "Belgium"
, "BF" ~> "Burkina Faso"
, "BG" ~> "Bulgaria"
, "BH" ~> "Bahrain"
, "BI" ~> "Burundi"
, "BJ" ~> "Benin"
, "BL" ~> "Saint Barthélemy"
, "BM" ~> "Bermuda"
, "BN" ~> "Brunei" -- Brunei Darussalam
, "BO" ~> "Bolivia" -- Bolivia, Plurinational State of
, "BQ" ~> "Caribbean Netherlands" -- Bonaire, Sint Eustatius and Saba
, "BR" ~> "Brazil"
, "BS" ~> "The Bahamas" -- Bahamas
, "BT" ~> "Bhutan"
, "BV" ~> "Bouvet Island"
, "BW" ~> "Botswana"
, "BY" ~> "Belarus"
, "BZ" ~> "Belize"
, "CA" ~> "Canada"
, "CC" ~> "Cocos (Keeling) Islands"
, "CD" ~> "Democratic Republic of the Congo" -- Congo, the Democratic Republic of the
, "CF" ~> "Central African Republic"
, "CG" ~> "Republic of the Congo" -- Congo
, "CH" ~> "Switzerland"
, "CI" ~> "Côte d'Ivoire"
, "CK" ~> "Cook Islands"
, "CL" ~> "Chile"
, "CM" ~> "Cameroon"
, "CN" ~> "China"
, "CO" ~> "Colombia"
, "CR" ~> "Costa Rica"
, "CU" ~> "Cuba"
, "CV" ~> "Cabo Verde"
, "CW" ~> "Curaçao"
, "CX" ~> "Christmas Island"
, "CY" ~> "Cyprus"
, "CZ" ~> "Czech Republic"
, "DE" ~> "Germany"
, "DJ" ~> "Djibouti"
, "DK" ~> "Denmark"
, "DM" ~> "Dominica"
, "DO" ~> "Dominican Republic"
, "DZ" ~> "Algeria"
, "EC" ~> "Ecuador"
, "EE" ~> "Estonia"
, "EG" ~> "Egypt"
, "EH" ~> "Western Sahara"
, "ER" ~> "Eritrea"
, "ES" ~> "Spain"
, "ET" ~> "Ethiopia"
, "FI" ~> "Finland"
, "FJ" ~> "Fiji"
, "FK" ~> "Falkland Islands" -- Falkland Islands (Malvinas)
, "FM" ~> "Federated States of Micronesia" -- Micronesia, Federated States of
, "FO" ~> "Faroe Islands"
, "FR" ~> "France"
, "GA" ~> "Gabon"
, "GB" ~> "United Kingdom"
, "GD" ~> "Grenada"
, "GE" ~> "Georgia"
, "GF" ~> "French Guiana"
, "GG" ~> "Guernsey"
, "GH" ~> "Ghana"
, "GI" ~> "Gibraltar"
, "GL" ~> "Greenland"
, "GM" ~> "Gambia"
, "GN" ~> "Guinea"
, "GP" ~> "Guadeloupe"
, "GQ" ~> "Equatorial Guinea"
, "GR" ~> "Greece"
, "GS" ~> "South Georgia and the South Sandwich Islands"
, "GT" ~> "Guatemala"
, "GU" ~> "Guam"
, "GW" ~> "Guinea-Bissau"
, "GY" ~> "Guyana"
, "HK" ~> "Hong Kong"
, "HM" ~> "Heard Island and McDonald Islands"
, "HN" ~> "Honduras"
, "HR" ~> "Croatia"
, "HT" ~> "Haiti"
, "HU" ~> "Hungary"
, "ID" ~> "Indonesia"
, "IE" ~> "Ireland" -- Ireland, Republic of
, "IL" ~> "Israel"
, "IM" ~> "Isle of Man"
, "IN" ~> "India"
, "IO" ~> "British Indian Ocean Territory"
, "IQ" ~> "Iraq"
, "IR" ~> "Iran" -- Iran, Islamic Republic of
, "IS" ~> "Iceland"
, "IT" ~> "Italy"
, "JE" ~> "Jersey"
, "JM" ~> "Jamaica"
, "JO" ~> "Jordan"
, "JP" ~> "Japan"
, "KE" ~> "Kenya"
, "KG" ~> "Kyrgyzstan"
, "KH" ~> "Cambodia"
, "KI" ~> "Kiribati"
, "KM" ~> "Comoros"
, "KN" ~> "Saint Kitts and Nevis"
, "KP" ~> "North Korea" -- Korea, Democratic People's Republic of
, "KR" ~> "South Korea" -- Korea, Republic of
, "KW" ~> "Kuwait"
, "KY" ~> "Cayman Islands"
, "KZ" ~> "Kazakhstan"
, "LA" ~> "Laos" -- Lao People's Democratic Republic
, "LB" ~> "Lebanon"
, "LC" ~> "Saint Lucia"
, "LI" ~> "Liechtenstein"
, "LK" ~> "Sri Lanka"
, "LR" ~> "Liberia"
, "LS" ~> "Lesotho"
, "LT" ~> "Lithuania"
, "LU" ~> "Luxembourg"
, "LV" ~> "Latvia"
, "LY" ~> "Libya"
, "MA" ~> "Morocco"
, "MC" ~> "Monaco"
, "MD" ~> "Moldova" -- Moldova, Republic of
, "ME" ~> "Montenegro"
, "MF" ~> "Collectivity of Saint Martin" -- Saint Martin (French part)
, "MG" ~> "Madagascar"
, "MH" ~> "Marshall Islands"
, "MK" ~> "Republic of Macedonia" -- Macedonia, the former Yugoslav Republic of
, "ML" ~> "Mali"
, "MM" ~> "Myanmar"
, "MN" ~> "Mongolia"
, "MO" ~> "Macau" -- Macao
, "MP" ~> "Northern Mariana Islands"
, "MQ" ~> "Martinique"
, "MR" ~> "Mauritania"
, "MS" ~> "Montserrat"
, "MT" ~> "Malta"
, "MU" ~> "Mauritius"
, "MV" ~> "Maldives"
, "MW" ~> "Malawi"
, "MX" ~> "Mexico"
, "MY" ~> "Malaysia"
, "MZ" ~> "Mozambique"
, "NA" ~> "Namibia"
, "NC" ~> "New Caledonia"
, "NE" ~> "Niger"
, "NF" ~> "Norfolk Island"
, "NG" ~> "Nigeria"
, "NI" ~> "Nicaragua"
, "NL" ~> "Netherlands"
, "NO" ~> "Norway"
, "NP" ~> "Nepal"
, "NR" ~> "Nauru"
, "NU" ~> "Niue"
, "NZ" ~> "New Zealand"
, "OM" ~> "Oman"
, "PA" ~> "Panama"
, "PE" ~> "Peru"
, "PF" ~> "French Polynesia"
, "PG" ~> "Papua New Guinea"
, "PH" ~> "Philippines"
, "PK" ~> "Pakistan"
, "PL" ~> "Poland"
, "PM" ~> "Saint Pierre and Miquelon"
, "PN" ~> "Pitcairn Islands" -- Pitcairn
, "PR" ~> "Puerto Rico"
, "PS" ~> "State of Palestine" -- Palestine, State of
, "PT" ~> "Portugal"
, "PW" ~> "Palau"
, "PY" ~> "Paraguay"
, "QA" ~> "Qatar"
, "RE" ~> "Réunion"
, "RO" ~> "Romania"
, "RS" ~> "Serbia"
, "RU" ~> "Russia" -- Russian Federation
, "RW" ~> "Rwanda"
, "SA" ~> "Saudi Arabia"
, "SB" ~> "Solomon Islands"
, "SC" ~> "Seychelles"
, "SD" ~> "Sudan"
, "SE" ~> "Sweden"
, "SG" ~> "Singapore"
, "SH" ~> "Saint Helena and Ascension and Tristan da Cunha" -- Saint Helena, Ascension and Tristan da Cunha
, "SI" ~> "Slovenia"
, "SJ" ~> "Svalbard and Jan Mayen"
, "SK" ~> "Slovakia"
, "SL" ~> "Sierra Leone"
, "SM" ~> "San Marino"
, "SN" ~> "Senegal"
, "SO" ~> "Somalia"
, "SR" ~> "Suriname"
, "SS" ~> "South Sudan"
, "ST" ~> "São Tomé and Príncipe"
, "SV" ~> "El Salvador"
, "SX" ~> "Sint Maarten" -- Sint Maarten (Dutch part)
, "SY" ~> "Syria" -- Syrian Arab Republic
, "SZ" ~> "Swaziland"
, "TC" ~> "Turks and Caicos Islands"
, "TD" ~> "Chad"
, "TF" ~> "French Southern and Antarctic Lands" -- French Southern Territories
, "TG" ~> "Togo"
, "TH" ~> "Thailand"
, "TJ" ~> "Tajikistan"
, "TK" ~> "Tokelau"
, "TL" ~> "East Timor" -- Timor-Leste
, "TM" ~> "Turkmenistan"
, "TN" ~> "Tunisia"
, "TO" ~> "Tonga"
, "TR" ~> "Turkey"
, "TT" ~> "Trinidad and Tobago"
, "TV" ~> "Tuvalu"
, "TW" ~> "Taiwan" -- Taiwan, Province of China
, "TZ" ~> "Tanzania" -- Tanzania, United Republic of
, "UA" ~> "Ukraine"
, "UG" ~> "Uganda"
, "UM" ~> "United States Minor Outlying Islands"
, "US" ~> "United States"
, "UY" ~> "Uruguay"
, "UZ" ~> "Uzbekistan"
, "VA" ~> "Vatican City" -- Holy See (Vatican City State)
, "VC" ~> "Saint Vincent and the Grenadines"
, "VE" ~> "Venezuela" -- Venezuela, Bolivarian Republic of
, "VG" ~> "British Virgin Islands" -- Virgin Islands, British
, "VI" ~> "United States Virgin Islands" -- Virgin Islands, U.S.
, "VN" ~> "Vietnam" -- Viet Nam
, "VU" ~> "Vanuatu"
, "WF" ~> "Wallis and Futuna"
, "WS" ~> "Samoa"
, "YE" ~> "Yemen"
, "YT" ~> "Mayotte"
, "ZA" ~> "South Africa"
, "ZM" ~> "Zambia"
, "ZW" ~> "Zimbabwe"
  , count 2 (satisfy isLetter)  -- fallback: any other two-letter code is returned as-is
]
--------------------------------------------------------------------------------
| mietek/catools | src/caparse/Parse/Territory.hs | bsd-3-clause | 10,076 | 0 | 9 | 3,336 | 2,192 | 1,267 | 925 | 308 | 1 |
import Factorial
num = 20
main = print $ (round (fac (num * 2) / (fac num * fac num)) :: Integer)
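-- This appears to be Project Euler 15 (lattice paths through a 20x20 grid),
-- i.e. the central binomial coefficient C(40,20).  The rounded division above
-- relies on 'fac' returning a Fractional (presumably Double) whose error stays
-- well below 0.5.  A minimal exact-integer sketch (no Factorial import needed;
-- the name main' is illustrative) would be:
--
--   main' :: IO ()
--   main' = print (product [1..40] `div` (product [1..20] ^ (2 :: Int)))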
| stulli/projectEuler | eu15.hs | bsd-3-clause | 100 | 0 | 12 | 24 | 57 | 30 | 27 | 3 | 1 |
{-# LANGUAGE CPP #-}
{-# LANGUAGE QuasiQuotes #-}
module Ivory.Compile.C.Modules where
import Paths_ivory_backend_c (version)
import Prelude ()
import Prelude.Compat
import Text.PrettyPrint.Mainland
#if MIN_VERSION_mainland_pretty(0,6,0)
import Text.PrettyPrint.Mainland.Class
#endif
import qualified Ivory.Language.Syntax.AST as I
import Ivory.Compile.C.Gen
import Ivory.Compile.C.Types
import Control.Monad (unless, when)
import Data.Char (toUpper)
import Data.Maybe (fromJust, isJust)
import Data.Version (showVersion)
import MonadLib (put, runM)
import System.FilePath.Posix ((<.>))
--------------------------------------------------------------------------------
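-- | Render the source and header sections of a compilation unit as one
-- string, each section preceded by a banner comment naming the module;
-- sections with no definitions are omitted.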
showModule :: CompileUnits -> String
showModule m = unlines $ map unlines $
[ mk (lbl "Source") (sources m)
, mk (lbl "Header") (headers m)
]
where
lbl l = "// module " ++ unitName m ++ " " ++ l ++ ":\n"
mk _ (_,[]) = []
mk str (incls,units) = str : pp (mkDefs (incls, units))
pp = map (pretty maxWidth . ppr)
--------------------------------------------------------------------------------
compilerVersion :: String
compilerVersion = showVersion version
---
topComments :: Doc
topComments = text "/* This file has been autogenerated by Ivory" </>
text " * Compiler version " <+> text compilerVersion </>
text " */"
renderHdr :: (Includes, Sources) -> String -> String
renderHdr s unitname = displayS (render maxWidth guardedHeader) ""
where
guardedHeader = stack [ topComments
, topGuard
, topExternC
, ppr (mkDefs s)
, botExternC
, botGuard
]
topGuard = text "#ifndef" <+> guardName </> text "#define"
<+> guardName
botGuard = text "#endif" <+> text "/*" <+> guardName <+> text "*/\n"
unitname' = map (\c -> if c == '-' then '_' else c) unitname
guardName = text "__" <> text (toUpper <$> unitname') <> text "_H__"
topExternC = stack $ text <$> [ "#ifdef __cplusplus"
, "extern \"C\" {"
, "#endif"]
botExternC = stack $ text <$> [ "#ifdef __cplusplus"
, "}"
, "#endif"]
renderSrc :: (Includes, Sources) -> String
renderSrc s = displayS (render maxWidth srcdoc) ""
where
srcdoc = topComments </> out </> text ""
out = stack $ punctuate line $ map ppr $ mkDefs s
--------------------------------------------------------------------------------
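-- | Apply a per-procedure transformation to all public and private
-- procedures of a module.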
runOpt :: (I.Proc -> I.Proc) -> I.Module -> I.Module
runOpt opt m =
m { I.modProcs = procs' }
where
procs' = procs { I.public = map' I.public, I.private = map' I.private }
procs = I.modProcs m
map' acc = map opt (acc procs)
--------------------------------------------------------------------------------
-- | Compile a module.
compileModule :: Maybe String -> I.Module -> CompileUnits
compileModule hdr I.Module { I.modName = nm
, I.modDepends = deps
, I.modHeaders = hdrs
, I.modImports = imports
, I.modExterns = externs
, I.modProcs = procs
, I.modStructs = structs
, I.modAreas = areas
, I.modAreaImports = ais
}
= CompileUnits
{ unitName = nm
, sources = sources res
, headers = headers res
}
where
res = compRes comp
compRes = (snd . runM . unCompile)
unitHdr = LocalInclude (nm <.> "h")
comp = do
let c = compRes comp0
unless (null (snd (headers c))) (putSrcInc unitHdr)
Compile (put c)
comp0 :: Compile
comp0 = do
putHdrInc (LocalInclude "ivory.h")
when (isJust hdr)
(putHdrInc (LocalInclude (fromJust hdr)))
-- module names don't have a .h on the end
mapM_ (putHdrInc . LocalInclude . ((<.> "h"))) deps
mapM_ (putHdrInc . LocalInclude) hdrs
mapM_ (compileStruct Public) (I.public structs)
mapM_ (compileStruct Private) (I.private structs)
mapM_ fromImport imports
mapM_ fromExtern externs
mapM_ (extractAreaProto Public) (I.public areas)
mapM_ (extractAreaProto Private) (I.private areas)
mapM_ (compileArea Public) (I.public areas)
mapM_ (compileArea Private) (I.private areas)
mapM_ compileAreaImport ais
mapM_ (extractProto Public) (I.public procs)
mapM_ (extractProto Private) (I.private procs)
mapM_ compileUnit (I.public procs ++ I.private procs)
--------------------------------------------------------------------------------
fromImport :: I.Import -> Compile
fromImport p = putHdrInc (SysInclude (I.importFile p))
--------------------------------------------------------------------------------
fromExtern :: I.Extern -> Compile
fromExtern p = putHdrInc (SysInclude (I.externFile p))
--------------------------------------------------------------------------------
outputProcSyms :: [I.Module] -> IO ()
outputProcSyms mods = putStrLn $ unwords $ concatMap go mods
where
go :: I.Module -> [String]
go m = map I.procSym (pub ++ priv)
where I.Visible pub priv = I.modProcs m
--------------------------------------------------------------------------------
-- This is generated code, and sometimes, we have large expressions. In
-- practice, this means that once the width is reached, one token is placed per
-- line(!). So we'll make a high limit for width, somewhat ironically making
-- generated C more readable.
maxWidth :: Int
maxWidth = 400
mkDefs :: ([Include], Sources) -> Sources
mkDefs (incls, defs) = map includeDef incls ++ defs
| GaloisInc/ivory | ivory-backend-c/src/Ivory/Compile/C/Modules.hs | bsd-3-clause | 6,083 | 0 | 15 | 1,800 | 1,567 | 827 | 740 | 112 | 2 |
{-# LANGUAGE PatternGuards, ScopedTypeVariables, RecordWildCards, ViewPatterns #-}
-- | Check the input/output pairs in the tests/ directory
module Test.InputOutput(testInputOutput) where
import Control.Applicative
import Data.Tuple.Extra
import Control.Exception
import Control.Monad
import Control.Monad.IO.Class
import Data.List.Extra
import Data.IORef
import System.Directory
import System.FilePath
import System.Console.CmdArgs.Explicit
import System.Console.CmdArgs.Verbosity
import System.Exit
import System.IO.Extra
import Prelude
import Test.Util
testInputOutput :: ([String] -> IO ()) -> Test ()
testInputOutput main = do
xs <- liftIO $ getDirectoryContents "tests"
xs <- pure $ filter ((==) ".test" . takeExtension) xs
forM_ xs $ \file -> do
ios <- liftIO $ parseInputOutputs <$> readFile ("tests" </> file)
forM_ (zipFrom 1 ios) $ \(i,io@InputOutput{..}) -> do
progress
liftIO $ forM_ files $ \(name,contents) -> do
createDirectoryIfMissing True $ takeDirectory name
writeFile name contents
checkInputOutput main io{name= "_" ++ takeBaseName file ++ "_" ++ show i}
liftIO $ mapM_ (removeFile . fst) $ concatMap files ios
data InputOutput = InputOutput
{name :: String
,files :: [(FilePath, String)]
,run :: [String]
,output :: String
,exit :: Maybe ExitCode
} deriving Eq
parseInputOutputs :: String -> [InputOutput]
parseInputOutputs = f z . lines
where
z = InputOutput "unknown" [] [] "" Nothing
interest x = any (`isPrefixOf` x) ["----","FILE","RUN","OUTPUT","EXIT"]
f io ((stripPrefix "RUN " -> Just flags):xs) = f io{run = splitArgs flags} xs
f io ((stripPrefix "EXIT " -> Just code):xs) = f io{exit = Just $ let i = read code in if i == 0 then ExitSuccess else ExitFailure i} xs
f io ((stripPrefix "FILE " -> Just file):xs) | (str,xs) <- g xs = f io{files = files io ++ [(file,unlines str)]} xs
f io ("OUTPUT":xs) | (str,xs) <- g xs = f io{output = unlines str} xs
f io ((isPrefixOf "----" -> True):xs) = [io | io /= z] ++ f z xs
f io [] = [io | io /= z]
f io (x:xs) = error $ "Unknown test item, " ++ x
g = first (reverse . dropWhile null . reverse) . break interest
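-- An example stanza, as implied by the parser above (file contents, flags and
-- expected output are illustrative only):
--
--   RUN --report=report.html Example.hs
--   FILE Example.hs
--   main = print 1
--   OUTPUT
--   No hints
--   ----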
---------------------------------------------------------------------
-- CHECK INPUT/OUTPUT PAIRS
checkInputOutput :: ([String] -> IO ()) -> InputOutput -> Test ()
checkInputOutput main InputOutput{..} = do
code <- liftIO $ newIORef ExitSuccess
got <- liftIO $ fmap (reverse . dropWhile null . reverse . map trimEnd . lines . fst) $ captureOutput $
handle (\(e::SomeException) -> print e) $
handle (\(e::ExitCode) -> writeIORef code e) $
bracket getVerbosity setVerbosity $ const $ setVerbosity Normal >> main run
code <- liftIO $ readIORef code
(want,got) <- pure $ matchStarStar (lines output) got
if maybe False (/= code) exit then
failed
["TEST FAILURE IN tests/" ++ name
,"WRONG EXIT CODE"
,"GOT : " ++ show code
,"WANT: " ++ show exit
]
else if length got == length want && and (zipWith matchStar want got) then
passed
else do
let trail = replicate (max (length got) (length want)) "<EOF>"
let (i,g,w):_ = [(i,g,w) | (i,g,w) <- zip3 [1..] (got++trail) (want++trail), not $ matchStar w g]
failed $
["TEST FAILURE IN tests/" ++ name
,"DIFFER ON LINE: " ++ show i
,"GOT : " ++ g
,"WANT: " ++ w
,"FULL OUTPUT FOR GOT:"] ++ got
-- | First string may have stars in it (the want)
matchStar :: String -> String -> Bool
matchStar ('*':xs) ys = any (matchStar xs) $ tails ys
matchStar ('/':x:xs) ('\\':'\\':ys) | x /= '/' = matchStar (x:xs) ys -- JSON escaped newlines
matchStar (x:xs) (y:ys) = eq x y && matchStar xs ys
where
-- allow path differences between Windows and Linux
eq '/' y = isPathSeparator y
eq x y = x == y
matchStar [] [] = True
matchStar _ _ = False
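-- A few illustrative cases ('*' in the expected line matches any substring,
-- and '/' also matches the platform's path separator):
--
--   matchStar "foo*bar" "foo123bar"  == True
--   matchStar "foo*"    "foobar"     == True
--   matchStar "abc"     "abd"        == False
--
-- A line consisting of "**" in the expected output (see matchStarStar below)
-- matches any number of actual lines.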
matchStarStar :: [String] -> [String] -> ([String], [String])
matchStarStar want got = case break (== "**") want of
(_, []) -> (want, got)
(w1,_:w2) -> (w1++w2, g1 ++ takeEnd (length w2) g2)
where (g1,g2) = splitAt (length w1) got
| ndmitchell/hlint | src/Test/InputOutput.hs | bsd-3-clause | 4,369 | 0 | 23 | 1,125 | 1,698 | 885 | 813 | 88 | 8 |
{-# LANGUAGE InstanceSigs #-}
module EitherT where
import Control.Monad.Trans.Class
import Control.Monad
newtype EitherT e m a = EitherT { runEitherT :: m (Either e a) }
instance Functor m => Functor (EitherT e m) where
fmap f (EitherT ma) = EitherT $ (fmap.fmap) f ma
instance Applicative m => Applicative (EitherT e m) where
pure a = EitherT $ (pure.pure) a
(EitherT fnAToB) <*> (EitherT ma) = EitherT $ (fmap (<*>) fnAToB) <*> ma
instance Monad m => Monad (EitherT e m) where
return = pure
(>>=) :: EitherT e m a -> (a -> EitherT e m b) -> EitherT e m b
(EitherT ma) >>= f = EitherT $ do
v <- ma
case v of
Left x -> return (Left x)
Right y -> runEitherT (f y)
swapEither :: Either e a -> Either a e
swapEither (Left x) = Right x
swapEither (Right x) = Left x
swapEitherT :: (Functor m) => EitherT e m a -> EitherT a m e
swapEitherT ema = EitherT $ fmap swapEither (runEitherT ema)
eitherT :: Monad m => (a -> m c) -> (b -> m c) -> EitherT a m b -> m c
eitherT aToMC bToMC eamb = runEitherT eamb >>= either aToMC bToMC
instance MonadTrans (EitherT e) where
lift = EitherT . liftM Right
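-- A minimal usage sketch ('safeDiv' is illustrative, not part of this module):
--
--   safeDiv :: Monad m => Int -> Int -> EitherT String m Int
--   safeDiv _ 0 = EitherT (return (Left "divide by zero"))
--   safeDiv x y = return (x `div` y)
--
--   runEitherT (safeDiv 10 2) :: IO (Either String Int)   -- Right 5
--   runEitherT (safeDiv 10 0) :: IO (Either String Int)   -- Left "divide by zero"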
| stites/composition | src/EitherT.hs | bsd-3-clause | 1,136 | 0 | 14 | 266 | 535 | 267 | 268 | 27 | 1 |
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE OverloadedStrings #-}
module Main where
import Control.Applicative ((<*>))
import Control.Exception (try, SomeException)
import Data.Text as T
import qualified Network.Lastfm.Album as LastAlbum
import Network.Lastfm (lastfm, (<*), artist, album, apiKey,
json, Response, Format(JSON))
import Data.Aeson.Types
import System.Environment (getArgs)
import qualified Network.HTTP.Conduit as C
import qualified Network.HTTP.Types as C
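-- Expects exactly two command-line arguments (artist and album); with fewer
-- arguments the unchecked (!!) indexing below will crash.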
main = do args <- getArgs
res <- query (args !! 0) (args !! 1)
case res of
Left e -> putStrLn ("left: " ++ show e)
Right r -> putStrLn ("right: " ++ show r)
query :: String -> String -> IO (Either SomeException String)
query art alb = do
eresp <- try (loadWebsite "http://127.0.0.1:8080")
--eresp <- try (getAlbInfo (T.pack art) (T.pack alb))
case eresp of
Left e -> return (Left e)
Right resp -> return (Right "successfully parsed")
getAlbInfo :: Text -> Text -> IO (Response JSON)
getAlbInfo art alb = lastfm $ LastAlbum.getInfo
<*> artist art
<*> album alb
<*> apiKey "a2c21e95ab7239f87f2e5ff716fc6374"
<* json
loadWebsite :: String -> IO String
loadWebsite rawUrl = do
resp <- C.withManager $ \m ->
C.parseUrl rawUrl >>= flip C.httpLbs m >>= \t ->
return ( show (C.responseHeaders t))
return (show resp)
| rethab/tagger | src/Tagger/Main.hs | bsd-3-clause | 1,649 | 0 | 17 | 565 | 456 | 242 | 214 | 36 | 2 |
{-# LANGUAGE OverloadedStrings #-}
module LineParsers (
ParserDef,
ParserDefs,
loadParsers,
findParser,
isEmptyLine,
isSingleLineComment,
isMultiLineCommentStart,
isMultiLineCommentEnd,
getName,
getExt
) where
import Text.Regex.Posix
import Data.Map as Map
import Data.Maybe
import Data.List as List
import Data.List.Split (splitOn)
import Control.Applicative
import System.FilePath (takeBaseName, takeExtension)
import Paths_hstats
import qualified Data.Yaml as Y
import Data.Yaml (FromJSON(..), (.:), (.:?))
import qualified Data.ByteString.Char8 as BS
type RegExp = String
data ParserDef = ParserDef { ext :: String
, name :: String
, singleLine :: Maybe RegExp
, multiLineStart :: Maybe RegExp
, multiLineEnd :: Maybe RegExp
} deriving (Show)
instance FromJSON ParserDef where
parseJSON (Y.Object v) =
ParserDef <$>
v .: "ext" <*>
v .: "name" <*>
v .:? "singleLine" <*>
v .:? "multiLineStart" <*>
v .:? "multiLineEnd"
type ParserDefs = Map String ParserDef
loadParsers :: IO ParserDefs
loadParsers = do
conf <- getDataFileName "data/lineParsers.yaml" >>= BS.readFile
let defs = fromMaybe [] $ (Y.decode conf :: Maybe [ParserDef])
return $ keyBy ext defs
findParser :: ParserDefs -> FilePath -> Maybe ParserDef
findParser parsers path = findParserByExtensions parsers $ splitOn "." path
findParserByExtensions :: ParserDefs -> [String] -> Maybe ParserDef
findParserByExtensions parsers [] = Nothing
findParserByExtensions parsers (b:[]) = Nothing
findParserByExtensions parsers (b:exts) =
if isNothing value then findParserByExtensions parsers exts else value
where value = Map.lookup toExt parsers
toExt = intercalate "." exts
keyBy :: (Ord b) => (a -> b) -> [a] -> Map b a
keyBy toKey = Map.fromList . (List.map toTuple)
where toTuple x = (toKey x, x)
getName :: ParserDef -> String
getName = name
getExt :: ParserDef -> String
getExt = ext
isEmptyLine :: BS.ByteString -> Bool
isEmptyLine x = x =~ ("^\\s*$" :: BS.ByteString)
isSingleLineComment :: ParserDef -> BS.ByteString -> Bool
isSingleLineComment parser = matchMaybe $ singleLine parser
isMultiLineCommentStart :: ParserDef -> BS.ByteString -> Bool
isMultiLineCommentStart parser = matchMaybe $ multiLineStart parser
isMultiLineCommentEnd :: ParserDef -> BS.ByteString -> Bool
isMultiLineCommentEnd parser = matchMaybe $ multiLineEnd parser
matchMaybe :: Maybe RegExp -> BS.ByteString -> Bool
matchMaybe Nothing _ = False
matchMaybe (Just regexp) line = line =~ regexp
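-- A minimal usage sketch (the file path is illustrative; results depend on the
-- bundled data/lineParsers.yaml):
--
--   demo :: IO ()
--   demo = do
--     defs <- loadParsers
--     case findParser defs "src/Main.hs" of
--       Nothing  -> putStrLn "no parser for this extension"
--       Just def -> putStrLn (getName def ++ " handles ." ++ getExt def)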
| LFDM/hstats | src/lib/LineParsers.hs | bsd-3-clause | 2,663 | 0 | 15 | 566 | 767 | 418 | 349 | 71 | 2 |
{-# LANGUAGE ExplicitNamespaces #-}
-----------------------------------------------------------------------------
-- |
-- Module : Data.Promotion.TH
-- Copyright : (C) 2013 Richard Eisenberg
-- License : BSD-style (see LICENSE)
-- Maintainer : Richard Eisenberg ([email protected])
-- Stability : experimental
-- Portability : non-portable
--
-- This module contains everything you need to promote your own functions via
-- Template Haskell.
--
----------------------------------------------------------------------------
module Data.Promotion.TH (
-- * Primary Template Haskell generation functions
promote, promoteOnly, genDefunSymbols, genPromotions,
-- ** Functions to generate @Eq@ instances
promoteEqInstances, promoteEqInstance,
-- ** Functions to generate @Ord@ instances
promoteOrdInstances, promoteOrdInstance,
-- ** Functions to generate @Bounded@ instances
promoteBoundedInstances, promoteBoundedInstance,
-- ** Functions to generate @Enum@ instances
promoteEnumInstances, promoteEnumInstance,
-- ** defunctionalization
TyFun, Apply, type (@@),
-- * Auxiliary definitions
-- | These definitions might be mentioned in code generated by Template Haskell,
-- so they must be in scope.
PEq(..), If, (:&&),
POrd(..),
Any,
Proxy(..), ThenCmp, Foldl,
Error, ErrorSym0,
TrueSym0, FalseSym0,
LTSym0, EQSym0, GTSym0,
Tuple0Sym0,
Tuple2Sym0, Tuple2Sym1, Tuple2Sym2,
Tuple3Sym0, Tuple3Sym1, Tuple3Sym2, Tuple3Sym3,
Tuple4Sym0, Tuple4Sym1, Tuple4Sym2, Tuple4Sym3, Tuple4Sym4,
Tuple5Sym0, Tuple5Sym1, Tuple5Sym2, Tuple5Sym3, Tuple5Sym4, Tuple5Sym5,
Tuple6Sym0, Tuple6Sym1, Tuple6Sym2, Tuple6Sym3, Tuple6Sym4, Tuple6Sym5, Tuple6Sym6,
Tuple7Sym0, Tuple7Sym1, Tuple7Sym2, Tuple7Sym3, Tuple7Sym4, Tuple7Sym5, Tuple7Sym6, Tuple7Sym7,
ThenCmpSym0, FoldlSym0,
SuppressUnusedWarnings(..)
) where
import Data.Singletons
import Data.Singletons.Promote
import Data.Singletons.Prelude.Instances
import Data.Singletons.Prelude.Bool
import Data.Singletons.Prelude.Eq
import Data.Singletons.Prelude.Ord
import Data.Singletons.TypeLits
import Data.Singletons.SuppressUnusedWarnings
import GHC.Exts
| int-index/singletons | src/Data/Promotion/TH.hs | bsd-3-clause | 2,176 | 0 | 5 | 306 | 311 | 218 | 93 | 33 | 0 |
{-# LANGUAGE PatternSynonyms #-}
--------------------------------------------------------------------------------
-- |
-- Module : Graphics.GL.ARB.PolygonOffsetClamp
-- Copyright : (c) Sven Panne 2019
-- License : BSD3
--
-- Maintainer : Sven Panne <[email protected]>
-- Stability : stable
-- Portability : portable
--
--------------------------------------------------------------------------------
module Graphics.GL.ARB.PolygonOffsetClamp (
-- * Extension Support
glGetARBPolygonOffsetClamp,
gl_ARB_polygon_offset_clamp,
-- * Enums
pattern GL_POLYGON_OFFSET_CLAMP,
-- * Functions
glPolygonOffsetClamp
) where
import Graphics.GL.ExtensionPredicates
import Graphics.GL.Tokens
import Graphics.GL.Functions
| haskell-opengl/OpenGLRaw | src/Graphics/GL/ARB/PolygonOffsetClamp.hs | bsd-3-clause | 745 | 0 | 5 | 101 | 57 | 43 | 14 | 9 | 0 |
{-# LANGUAGE CPP #-}
module TcInteract (
solveSimpleGivens, -- Solves [EvVar],GivenLoc
solveSimpleWanteds -- Solves Cts
) where
#include "HsVersions.h"
import BasicTypes ()
import HsTypes ( hsIPNameFS )
import FastString
import TcCanonical
import TcFlatten
import VarSet
import Type
import Kind (isKind)
import Unify
import InstEnv( DFunInstType, lookupInstEnv, instanceDFunId )
import CoAxiom(sfInteractTop, sfInteractInert)
import Var
import TcType
import PrelNames ( knownNatClassName, knownSymbolClassName, ipClassNameKey,
callStackTyConKey, typeableClassName )
import Id( idType )
import Class
import TyCon
import FunDeps
import FamInst
import Inst( tyVarsOfCt )
import TcEvidence
import Outputable
import TcRnTypes
import TcErrors
import TcSMonad
import Bag
import Data.List( partition, foldl', deleteFirstsBy )
import SrcLoc
import VarEnv
import Control.Monad
import Maybes( isJust )
import Pair (Pair(..))
import Unique( hasKey )
import DynFlags
import Util
{-
**********************************************************************
* *
* Main Interaction Solver *
* *
**********************************************************************
Note [Basic Simplifier Plan]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Pick an element from the WorkList if there exists one with depth
less than our context-stack depth.
2. Run it down the 'stage' pipeline. Stages are:
- canonicalization
- inert reactions
- spontaneous reactions
       - top-level interactions
     Each stage returns a StopOrContinue and may have side-effected
the inerts or worklist.
The threading of the stages is as follows:
- If (Stop) is returned by a stage then we start again from Step 1.
- If (ContinueWith ct) is returned by a stage, we feed 'ct' on to
the next stage in the pipeline.
4. If the element has survived (i.e. ContinueWith x) the last stage
     then we add it to the inerts and jump back to Step 1.
If in Step 1 no such element exists, we have exceeded our context-stack
depth and will simply fail.
Note [Unflatten after solving the simple wanteds]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We unflatten after solving the wc_simples of an implication, and before attempting
to float. This means that
* The fsk/fmv flatten-skolems only survive during solveSimples. We don't
   need to worry about them across successive passes over the constraint tree.
   (E.g. we don't need the old ic_fsk field of an implication.)
* When floating an equality outwards, we don't need to worry about floating its
associated flattening constraints.
* Another tricky case becomes easy: Trac #4935
type instance F True a b = a
type instance F False a b = b
[w] F c a b ~ gamma
(c ~ True) => a ~ gamma
(c ~ False) => b ~ gamma
Obviously this is soluble with gamma := F c a b, and unflattening
will do exactly that after solving the simple constraints and before
attempting the implications. Before, when we were not unflattening,
we had to push Wanted funeqs in as new givens. Yuk!
Another example that becomes easy: indexed_types/should_fail/T7786
[W] BuriedUnder sub k Empty ~ fsk
[W] Intersect fsk inv ~ s
[w] xxx[1] ~ s
[W] forall[2] . (xxx[1] ~ Empty)
=> Intersect (BuriedUnder sub k Empty) inv ~ Empty
Note [Running plugins on unflattened wanteds]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There is an annoying mismatch between solveSimpleGivens and
solveSimpleWanteds, because the latter needs to fiddle with the inert
set, unflatten and zonk the wanteds. It passes the zonked wanteds
to runTcPluginsWanteds, which produces a replacement set of wanteds,
some additional insolubles and a flag indicating whether to go round
the loop again. If so, prepareInertsForImplications is used to remove
the previous wanteds (which will still be in the inert set). Note
that prepareInertsForImplications will discard the insolubles, so we
must keep track of them separately.
-}
solveSimpleGivens :: CtLoc -> [EvVar] -> TcS ()
solveSimpleGivens loc givens
| null givens -- Shortcut for common case
= return ()
| otherwise
= go (map mk_given_ct givens)
where
mk_given_ct ev_id = mkNonCanonical (CtGiven { ctev_evtm = EvId ev_id
, ctev_pred = evVarPred ev_id
, ctev_loc = loc })
go givens = do { solveSimples (listToBag givens)
; new_givens <- runTcPluginsGiven
; when (notNull new_givens) (go new_givens)
}
solveSimpleWanteds :: Cts -> TcS WantedConstraints
solveSimpleWanteds = go emptyBag
where
go insols0 wanteds
= do { solveSimples wanteds
; (implics, tv_eqs, fun_eqs, insols, others) <- getUnsolvedInerts
; unflattened_eqs <- unflatten tv_eqs fun_eqs
-- See Note [Unflatten after solving the simple wanteds]
; zonked <- zonkSimples (others `andCts` unflattened_eqs)
-- Postcondition is that the wl_simples are zonked
; (wanteds', insols', rerun) <- runTcPluginsWanted zonked
-- See Note [Running plugins on unflattened wanteds]
; let all_insols = insols0 `unionBags` insols `unionBags` insols'
; if rerun then do { updInertTcS prepareInertsForImplications
; go all_insols wanteds' }
else return (WC { wc_simple = wanteds'
, wc_insol = all_insols
, wc_impl = implics }) }
-- The main solver loop implements Note [Basic Simplifier Plan]
---------------------------------------------------------------
solveSimples :: Cts -> TcS ()
-- Returns the final InertSet in TcS
-- Has no effect on work-list or residual-implications
-- The constraints are initially examined in left-to-right order
solveSimples cts
= {-# SCC "solveSimples" #-}
do { dyn_flags <- getDynFlags
; updWorkListTcS (\wl -> foldrBag extendWorkListCt wl cts)
; solve_loop (maxSubGoalDepth dyn_flags) }
where
solve_loop max_depth
= {-# SCC "solve_loop" #-}
do { sel <- selectNextWorkItem max_depth
; case sel of
NoWorkRemaining -- Done, successfuly (modulo frozen)
-> return ()
MaxDepthExceeded cnt ct -- Failure, depth exceeded
-> wrapErrTcS $ solverDepthErrorTcS cnt (ctEvidence ct)
NextWorkItem ct -- More work, loop around!
-> do { runSolverPipeline thePipeline ct; solve_loop max_depth } }
-- | Extract the (inert) givens and invoke the plugins on them.
-- Remove solved givens from the inert set and emit insolubles, but
-- return new work produced so that 'solveSimpleGivens' can feed it back
-- into the main solver.
runTcPluginsGiven :: TcS [Ct]
runTcPluginsGiven = do
(givens,_,_) <- fmap splitInertCans getInertCans
if null givens
then return []
else do
p <- runTcPlugins (givens,[],[])
let (solved_givens, _, _) = pluginSolvedCts p
updInertCans (removeInertCts solved_givens)
mapM_ emitInsoluble (pluginBadCts p)
return (pluginNewCts p)
-- | Given a bag of (flattened, zonked) wanteds, invoke the plugins on
-- them and produce an updated bag of wanteds (possibly with some new
-- work) and a bag of insolubles. The boolean indicates whether
-- 'solveSimpleWanteds' should feed the updated wanteds back into the
-- main solver.
runTcPluginsWanted :: Cts -> TcS (Cts, Cts, Bool)
runTcPluginsWanted zonked_wanteds
| isEmptyBag zonked_wanteds = return (zonked_wanteds, emptyBag, False)
| otherwise = do
(given,derived,_) <- fmap splitInertCans getInertCans
p <- runTcPlugins (given, derived, bagToList zonked_wanteds)
let (solved_givens, solved_deriveds, solved_wanteds) = pluginSolvedCts p
(_, _, wanteds) = pluginInputCts p
updInertCans (removeInertCts $ solved_givens ++ solved_deriveds)
mapM_ setEv solved_wanteds
return ( listToBag $ pluginNewCts p ++ wanteds
, listToBag $ pluginBadCts p
, notNull (pluginNewCts p) )
where
setEv :: (EvTerm,Ct) -> TcS ()
setEv (ev,ct) = case ctEvidence ct of
CtWanted {ctev_evar = evar} -> setWantedEvBind evar ev
_ -> panic "runTcPluginsWanted.setEv: attempt to solve non-wanted!"
-- | A triple of (given, derived, wanted) constraints to pass to plugins
type SplitCts = ([Ct], [Ct], [Ct])
-- | A solved triple of constraints, with evidence for wanteds
type SolvedCts = ([Ct], [Ct], [(EvTerm,Ct)])
-- | Represents collections of constraints generated by typechecker
-- plugins
data TcPluginProgress = TcPluginProgress
{ pluginInputCts :: SplitCts
-- ^ Original inputs to the plugins with solved/bad constraints
-- removed, but otherwise unmodified
, pluginSolvedCts :: SolvedCts
-- ^ Constraints solved by plugins
, pluginBadCts :: [Ct]
-- ^ Constraints reported as insoluble by plugins
, pluginNewCts :: [Ct]
-- ^ New constraints emitted by plugins
}
-- | Starting from a triple of (given, derived, wanted) constraints,
-- invoke each of the typechecker plugins in turn and return
--
-- * the remaining unmodified constraints,
-- * constraints that have been solved,
-- * constraints that are insoluble, and
-- * new work.
--
-- Note that new work generated by one plugin will not be seen by
-- other plugins on this pass (but the main constraint solver will be
-- re-invoked and they will see it later). There is no check that new
-- work differs from the original constraints supplied to the plugin:
-- the plugin itself should perform this check if necessary.
runTcPlugins :: SplitCts -> TcS TcPluginProgress
runTcPlugins all_cts = do
gblEnv <- getGblEnv
foldM do_plugin initialProgress (tcg_tc_plugins gblEnv)
where
do_plugin :: TcPluginProgress -> TcPluginSolver -> TcS TcPluginProgress
do_plugin p solver = do
result <- runTcPluginTcS (uncurry3 solver (pluginInputCts p))
return $ progress p result
progress :: TcPluginProgress -> TcPluginResult -> TcPluginProgress
progress p (TcPluginContradiction bad_cts) =
p { pluginInputCts = discard bad_cts (pluginInputCts p)
, pluginBadCts = bad_cts ++ pluginBadCts p
}
progress p (TcPluginOk solved_cts new_cts) =
p { pluginInputCts = discard (map snd solved_cts) (pluginInputCts p)
, pluginSolvedCts = add solved_cts (pluginSolvedCts p)
, pluginNewCts = new_cts ++ pluginNewCts p
}
initialProgress = TcPluginProgress all_cts ([], [], []) [] []
discard :: [Ct] -> SplitCts -> SplitCts
discard cts (xs, ys, zs) =
(xs `without` cts, ys `without` cts, zs `without` cts)
without :: [Ct] -> [Ct] -> [Ct]
without = deleteFirstsBy eqCt
eqCt :: Ct -> Ct -> Bool
eqCt c c' = case (ctEvidence c, ctEvidence c') of
(CtGiven pred _ _, CtGiven pred' _ _) -> pred `eqType` pred'
(CtWanted pred _ _, CtWanted pred' _ _) -> pred `eqType` pred'
(CtDerived pred _ , CtDerived pred' _ ) -> pred `eqType` pred'
(_ , _ ) -> False
add :: [(EvTerm,Ct)] -> SolvedCts -> SolvedCts
add xs scs = foldl' addOne scs xs
addOne :: SolvedCts -> (EvTerm,Ct) -> SolvedCts
addOne (givens, deriveds, wanteds) (ev,ct) = case ctEvidence ct of
CtGiven {} -> (ct:givens, deriveds, wanteds)
CtDerived{} -> (givens, ct:deriveds, wanteds)
CtWanted {} -> (givens, deriveds, (ev,ct):wanteds)
type WorkItem = Ct
type SimplifierStage = WorkItem -> TcS (StopOrContinue Ct)
data SelectWorkItem
= NoWorkRemaining -- No more work left (effectively we're done!)
| MaxDepthExceeded SubGoalCounter Ct
-- More work left to do but this constraint has exceeded
-- the maximum depth for one of the subgoal counters and we
-- must stop
| NextWorkItem Ct -- More work left, here's the next item to look at
selectNextWorkItem :: SubGoalDepth -- Max depth allowed
-> TcS SelectWorkItem
selectNextWorkItem max_depth
= updWorkListTcS_return pick_next
where
pick_next :: WorkList -> (SelectWorkItem, WorkList)
pick_next wl
= case selectWorkItem wl of
(Nothing,_)
-> (NoWorkRemaining,wl) -- No more work
(Just ct, new_wl)
| Just cnt <- subGoalDepthExceeded max_depth (ctLocDepth (ctLoc ct)) -- Depth exceeded
-> (MaxDepthExceeded cnt ct,new_wl)
(Just ct, new_wl)
-> (NextWorkItem ct, new_wl) -- New workitem and worklist
runSolverPipeline :: [(String,SimplifierStage)] -- The pipeline
-> WorkItem -- The work item
-> TcS ()
-- Run this item down the pipeline, leaving behind new work and inerts
runSolverPipeline pipeline workItem
= do { initial_is <- getTcSInerts
; traceTcS "Start solver pipeline {" $
vcat [ ptext (sLit "work item = ") <+> ppr workItem
, ptext (sLit "inerts = ") <+> ppr initial_is]
; bumpStepCountTcS -- One step for each constraint processed
; final_res <- run_pipeline pipeline (ContinueWith workItem)
; final_is <- getTcSInerts
; case final_res of
Stop ev s -> do { traceFireTcS ev s
; traceTcS "End solver pipeline (discharged) }"
(ptext (sLit "inerts =") <+> ppr final_is)
; return () }
ContinueWith ct -> do { traceFireTcS (ctEvidence ct) (ptext (sLit "Kept as inert"))
; traceTcS "End solver pipeline (not discharged) }" $
vcat [ ptext (sLit "final_item =") <+> ppr ct
, pprTvBndrs (varSetElems $ tyVarsOfCt ct)
, ptext (sLit "inerts =") <+> ppr final_is]
; insertInertItemTcS ct }
}
where run_pipeline :: [(String,SimplifierStage)] -> StopOrContinue Ct
-> TcS (StopOrContinue Ct)
run_pipeline [] res = return res
run_pipeline _ (Stop ev s) = return (Stop ev s)
run_pipeline ((stg_name,stg):stgs) (ContinueWith ct)
= do { traceTcS ("runStage " ++ stg_name ++ " {")
(text "workitem = " <+> ppr ct)
; res <- stg ct
; traceTcS ("end stage " ++ stg_name ++ " }") empty
; run_pipeline stgs res }
{-
Example 1:
Inert: {c ~ d, F a ~ t, b ~ Int, a ~ ty} (all given)
Reagent: a ~ [b] (given)
React with (c~d) ==> IR (ContinueWith (a~[b])) True []
React with (F a ~ t) ==> IR (ContinueWith (a~[b])) False [F [b] ~ t]
  React with (b ~ Int) ==> IR (ContinueWith (a~[Int])) True []
Example 2:
Inert: {c ~w d, F a ~g t, b ~w Int, a ~w ty}
Reagent: a ~w [b]
React with (c ~w d) ==> IR (ContinueWith (a~[b])) True []
React with (F a ~g t) ==> IR (ContinueWith (a~[b])) True [] (can't rewrite given with wanted!)
etc.
Example 3:
Inert: {a ~ Int, F Int ~ b} (given)
Reagent: F a ~ b (wanted)
React with (a ~ Int) ==> IR (ContinueWith (F Int ~ b)) True []
React with (F Int ~ b) ==> IR Stop True [] -- after substituting we re-canonicalize and get nothing
-}
thePipeline :: [(String,SimplifierStage)]
thePipeline = [ ("canonicalization", TcCanonical.canonicalize)
, ("interact with inerts", interactWithInertsStage)
, ("top-level reactions", topReactionsStage) ]
{-
*********************************************************************************
* *
The interact-with-inert Stage
* *
*********************************************************************************
Note [The Solver Invariant]
~~~~~~~~~~~~~~~~~~~~~~~~~~~
We always add Givens first. So you might think that the solver has
the invariant
If the work-item is Given,
   then the inert item must be Given
But this isn't quite true. Suppose we have,
c1: [W] beta ~ [alpha], c2 : [W] blah, c3 :[W] alpha ~ Int
After processing the first two, we get
c1: [G] beta ~ [alpha], c2 : [W] blah
Now, c3 does not interact with the given c1, so when we spontaneously
solve c3, we must re-react it with the inert set. So we can attempt a
reaction between inert c2 [W] and work-item c3 [G].
It *is* true that [Solver Invariant]
If the work-item is Given,
AND there is a reaction
   then the inert item must be Given
or, equivalently,
If the work-item is Given,
and the inert item is Wanted/Derived
then there is no reaction
-}
-- Interaction result of WorkItem <~> Ct
type StopNowFlag = Bool -- True <=> stop after this interaction
interactWithInertsStage :: WorkItem -> TcS (StopOrContinue Ct)
-- Precondition: if the workitem is a CTyEqCan then it will not be able to
-- react with anything at this stage.
interactWithInertsStage wi
= do { inerts <- getTcSInerts
; let ics = inert_cans inerts
; case wi of
CTyEqCan {} -> interactTyVarEq ics wi
CFunEqCan {} -> interactFunEq ics wi
CIrredEvCan {} -> interactIrred ics wi
CDictCan {} -> interactDict ics wi
_ -> pprPanic "interactWithInerts" (ppr wi) }
-- CHoleCan are put straight into inert_frozen, so never get here
-- CNonCanonical have been canonicalised
data InteractResult
= IRKeep -- Keep the existing inert constraint in the inert set
| IRReplace -- Replace the existing inert constraint with the work item
| IRDelete -- Delete the existing inert constraint from the inert set
instance Outputable InteractResult where
ppr IRKeep = ptext (sLit "keep")
ppr IRReplace = ptext (sLit "replace")
ppr IRDelete = ptext (sLit "delete")
solveOneFromTheOther :: CtEvidence -- Inert
-> CtEvidence -- WorkItem
-> TcS (InteractResult, StopNowFlag)
-- Preconditions:
-- 1) inert and work item represent evidence for the /same/ predicate
-- 2) ip/class/irred evidence (no coercions) only
solveOneFromTheOther ev_i ev_w
| isDerived ev_w
= return (IRKeep, True)
| isDerived ev_i -- The inert item is Derived, we can just throw it away,
-- The ev_w is inert wrt earlier inert-set items,
-- so it's safe to continue on from this point
= return (IRDelete, False)
| CtWanted { ctev_evar = ev_id } <- ev_w
= do { setWantedEvBind ev_id (ctEvTerm ev_i)
; return (IRKeep, True) }
| CtWanted { ctev_evar = ev_id } <- ev_i
= do { setWantedEvBind ev_id (ctEvTerm ev_w)
; return (IRReplace, True) }
-- So they are both Given
-- See Note [Replacement vs keeping]
| lvl_i == lvl_w
= do { binds <- getTcEvBindsMap
; if has_binding binds ev_w && not (has_binding binds ev_i)
then return (IRReplace, True)
else return (IRKeep, True) }
| otherwise -- Both are Given
= return (if use_replacement then IRReplace else IRKeep, True)
where
pred = ctEvPred ev_i
loc_i = ctEvLoc ev_i
loc_w = ctEvLoc ev_w
lvl_i = ctLocLevel loc_i
lvl_w = ctLocLevel loc_w
has_binding binds ev
| EvId v <- ctEvTerm ev = isJust (lookupEvBind binds v)
| otherwise = True
use_replacement
| isIPPred pred = lvl_w > lvl_i
| otherwise = lvl_w < lvl_i
{-
Note [Replacement vs keeping]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When we have two Given constraints both of type (C tys), say, which should
we keep?
* For implicit parameters we want to keep the innermost (deepest)
one, so that it overrides the outer one.
See Note [Shadowing of Implicit Parameters]
* For everything else, we want to keep the outermost one. Reason: that
makes it more likely that the inner one will turn out to be unused,
and can be reported as redundant. See Note [Tracking redundant constraints]
in TcSimplify.
It transpires that using the outermost one is responsible for an
8% performance improvement in nofib cryptarithm2, compared to
just rolling the dice. I didn't investigate why.
* If there is no "outermost" one, we keep the one that has a non-trivial
evidence binding. Note [Tracking redundant constraints] again.
Example: f :: (Eq a, Ord a) => blah
then we may find [G] sc_sel (d1::Ord a) :: Eq a
[G] d2 :: Eq a
We want to discard d2 in favour of the superclass selection from
the Ord dictionary.
* Finally, when there is still a choice, use IRKeep rather than
  IRReplace, to avoid unnecessary munging of the inert set.
Doing the depth-check for implicit parameters, rather than making the work item
always override, is important. Consider
data T a where { T1 :: (?x::Int) => T Int; T2 :: T a }
f :: (?x::a) => T a -> Int
f T1 = ?x
f T2 = 3
We have a [G] (?x::a) in the inert set, and at the pattern match on T1 we add
two new givens in the work-list: [G] (?x::Int)
[G] (a ~ Int)
Now consider these steps
- process a~Int, kicking out (?x::a)
- process (?x::Int), the inner given, adding to inert set
- process (?x::a), the outer given, overriding the inner given
Wrong! The depth-check ensures that the inner implicit parameter wins.
(Actually I think that the order in which the work-list is processed means
that this chain of events won't happen, but that's very fragile.)
*********************************************************************************
* *
interactIrred
* *
*********************************************************************************
-}
-- Two pieces of irreducible evidence: if their types are *exactly identical*
-- we can rewrite them. We can never improve using this:
-- if we want ty1 :: Constraint and have ty2 :: Constraint it clearly does not
-- mean that (ty1 ~ ty2)
interactIrred :: InertCans -> Ct -> TcS (StopOrContinue Ct)
interactIrred inerts workItem@(CIrredEvCan { cc_ev = ev_w })
| let pred = ctEvPred ev_w
(matching_irreds, others) = partitionBag (\ct -> ctPred ct `tcEqType` pred)
(inert_irreds inerts)
, (ct_i : rest) <- bagToList matching_irreds
, let ctev_i = ctEvidence ct_i
= ASSERT( null rest )
do { (inert_effect, stop_now) <- solveOneFromTheOther ctev_i ev_w
; case inert_effect of
IRKeep -> return ()
IRDelete -> updInertIrreds (\_ -> others)
IRReplace -> updInertIrreds (\_ -> others `snocCts` workItem)
-- These const upd's assume that solveOneFromTheOther
-- has no side effects on InertCans
; if stop_now then
return (Stop ev_w (ptext (sLit "Irred equal") <+> parens (ppr inert_effect)))
; else
continueWith workItem }
| otherwise
= continueWith workItem
interactIrred _ wi = pprPanic "interactIrred" (ppr wi)
{-
*********************************************************************************
* *
interactDict
* *
*********************************************************************************
-}
interactDict :: InertCans -> Ct -> TcS (StopOrContinue Ct)
interactDict inerts workItem@(CDictCan { cc_ev = ev_w, cc_class = cls, cc_tyargs = tys })
-- don't ever try to solve CallStack IPs directly from other dicts,
-- we always build new dicts instead.
-- See Note [Overview of implicit CallStacks]
| [_ip, ty] <- tys
, isWanted ev_w
, Just mkEvCs <- isCallStackIP (ctEvLoc ev_w) cls ty
= do let ev_cs =
case lookupInertDict inerts (ctEvLoc ev_w) cls tys of
Just ev | isGiven ev -> mkEvCs (ctEvTerm ev)
_ -> mkEvCs (EvCallStack EvCsEmpty)
-- now we have ev_cs :: CallStack, but the evidence term should
-- be a dictionary, so we have to coerce ev_cs to a
-- dictionary for `IP ip CallStack`
let ip_ty = mkClassPred cls tys
let ev_tm = mkEvCast (EvCallStack ev_cs) (TcCoercion $ wrapIP ip_ty)
addSolvedDict ev_w cls tys
setWantedEvBind (ctEvId ev_w) ev_tm
stopWith ev_w "Wanted CallStack IP"
| Just ctev_i <- lookupInertDict inerts (ctEvLoc ev_w) cls tys
= do { (inert_effect, stop_now) <- solveOneFromTheOther ctev_i ev_w
; case inert_effect of
IRKeep -> return ()
IRDelete -> updInertDicts $ \ ds -> delDict ds cls tys
IRReplace -> updInertDicts $ \ ds -> addDict ds cls tys workItem
; if stop_now then
return (Stop ev_w (ptext (sLit "Dict equal") <+> parens (ppr inert_effect)))
else
continueWith workItem }
| cls `hasKey` ipClassNameKey
, isGiven ev_w
= interactGivenIP inerts workItem
| otherwise
= do { mapBagM_ (addFunDepWork workItem)
(findDictsByClass (inert_dicts inerts) cls)
-- Create derived fds and keep on going.
-- No need to check flavour; fundeps work between
-- any pair of constraints, regardless of flavour
-- Importantly we don't throw workitem back in the
         -- worklist because this can cause loops (see #5236)
; continueWith workItem }
interactDict _ wi = pprPanic "interactDict" (ppr wi)
interactGivenIP :: InertCans -> Ct -> TcS (StopOrContinue Ct)
-- Work item is Given (?x:ty)
-- See Note [Shadowing of Implicit Parameters]
interactGivenIP inerts workItem@(CDictCan { cc_ev = ev, cc_class = cls
, cc_tyargs = tys@(ip_str:_) })
= do { updInertCans $ \cans -> cans { inert_dicts = addDict filtered_dicts cls tys workItem }
; stopWith ev "Given IP" }
where
dicts = inert_dicts inerts
ip_dicts = findDictsByClass dicts cls
other_ip_dicts = filterBag (not . is_this_ip) ip_dicts
filtered_dicts = addDictsByClass dicts cls other_ip_dicts
-- Pick out any Given constraints for the same implicit parameter
is_this_ip (CDictCan { cc_ev = ev, cc_tyargs = ip_str':_ })
= isGiven ev && ip_str `tcEqType` ip_str'
is_this_ip _ = False
interactGivenIP _ wi = pprPanic "interactGivenIP" (ppr wi)
addFunDepWork :: Ct -> Ct -> TcS ()
-- Add derived constraints from type-class functional dependencies.
addFunDepWork work_ct inert_ct
= emitFunDepDeriveds $
improveFromAnother derived_loc inert_pred work_pred
-- We don't really rewrite tys2, see below _rewritten_tys2, so that's ok
-- NB: We do create FDs for given to report insoluble equations that arise
-- from pairs of Givens, and also because of floating when we approximate
-- implications. The relevant test is: typecheck/should_fail/FDsFromGivens.hs
-- Also see Note [When improvement happens]
where
work_pred = ctPred work_ct
inert_pred = ctPred inert_ct
work_loc = ctLoc work_ct
inert_loc = ctLoc inert_ct
derived_loc = work_loc { ctl_origin = FunDepOrigin1 work_pred work_loc
inert_pred inert_loc }
{-
Note [Shadowing of Implicit Parameters]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Consider the following example:
f :: (?x :: Char) => Char
f = let ?x = 'a' in ?x
The "let ?x = ..." generates an implication constraint of the form:
?x :: Char => ?x :: Char
Furthermore, the signature for `f` also generates an implication
constraint, so we end up with the following nested implication:
?x :: Char => (?x :: Char => ?x :: Char)
Note that the wanted (?x :: Char) constraint may be solved in
two incompatible ways: either by using the parameter from the
signature, or by using the local definition. Our intention is
that the local definition should "shadow" the parameter of the
signature, and we implement this as follows: when we add a new
*given* implicit parameter to the inert set, it replaces any existing
givens for the same implicit parameter.
This works for the normal cases but it has an odd side effect
in some pathological programs like this:
-- This is accepted, the second parameter shadows
f1 :: (?x :: Int, ?x :: Char) => Char
f1 = ?x
-- This is rejected, the second parameter shadows
f2 :: (?x :: Int, ?x :: Char) => Int
f2 = ?x
Both of these are actually wrong: when we try to use either one,
we'll get two incompatible wanted constraints (?x :: Int, ?x :: Char),
which would lead to an error.
I can think of two ways to fix this:
  1. Simply disallow multiple constraints for the same implicit
parameter---this is never useful, and it can be detected completely
syntactically.
2. Move the shadowing machinery to the location where we nest
implications, and add some code here that will produce an
error if we get multiple givens for the same implicit parameter.
*********************************************************************************
* *
interactFunEq
* *
*********************************************************************************
-}
interactFunEq :: InertCans -> Ct -> TcS (StopOrContinue Ct)
-- Try interacting the work item with the inert set
interactFunEq inerts workItem@(CFunEqCan { cc_ev = ev, cc_fun = tc
, cc_tyargs = args, cc_fsk = fsk })
| Just (CFunEqCan { cc_ev = ev_i, cc_fsk = fsk_i }) <- matching_inerts
= if ev_i `canRewriteOrSame` ev
then -- Rewrite work-item using inert
do { traceTcS "reactFunEq (discharge work item):" $
vcat [ text "workItem =" <+> ppr workItem
, text "inertItem=" <+> ppr ev_i ]
; reactFunEq ev_i fsk_i ev fsk
; stopWith ev "Inert rewrites work item" }
    else -- Rewrite inert using work-item
do { traceTcS "reactFunEq (rewrite inert item):" $
vcat [ text "workItem =" <+> ppr workItem
, text "inertItem=" <+> ppr ev_i ]
; updInertFunEqs $ \ feqs -> insertFunEq feqs tc args workItem
-- Do the updInertFunEqs before the reactFunEq, so that
-- we don't kick out the inertItem as well as consuming it!
; reactFunEq ev fsk ev_i fsk_i
; stopWith ev "Work item rewrites inert" }
| Just ops <- isBuiltInSynFamTyCon_maybe tc
= do { let matching_funeqs = findFunEqsByTyCon funeqs tc
; let interact = sfInteractInert ops args (lookupFlattenTyVar eqs fsk)
do_one (CFunEqCan { cc_tyargs = iargs, cc_fsk = ifsk, cc_ev = iev })
= mapM_ (unifyDerived (ctEvLoc iev) Nominal)
(interact iargs (lookupFlattenTyVar eqs ifsk))
do_one ct = pprPanic "interactFunEq" (ppr ct)
; mapM_ do_one matching_funeqs
; traceTcS "builtInCandidates 1: " $ vcat [ ptext (sLit "Candidates:") <+> ppr matching_funeqs
, ptext (sLit "TvEqs:") <+> ppr eqs ]
; return (ContinueWith workItem) }
| otherwise
= return (ContinueWith workItem)
where
eqs = inert_eqs inerts
funeqs = inert_funeqs inerts
matching_inerts = findFunEqs funeqs tc args
interactFunEq _ wi = pprPanic "interactFunEq" (ppr wi)
lookupFlattenTyVar :: TyVarEnv EqualCtList -> TcTyVar -> TcType
-- ^ Look up a flatten-tyvar in the inert nominal TyVarEqs;
-- this is used only when dealing with a CFunEqCan
lookupFlattenTyVar inert_eqs ftv
= case lookupVarEnv inert_eqs ftv of
Just (CTyEqCan { cc_rhs = rhs, cc_eq_rel = NomEq } : _) -> rhs
_ -> mkTyVarTy ftv
reactFunEq :: CtEvidence -> TcTyVar -- From this :: F tys ~ fsk1
-> CtEvidence -> TcTyVar -- Solve this :: F tys ~ fsk2
-> TcS ()
reactFunEq from_this fsk1 (CtGiven { ctev_evtm = tm, ctev_loc = loc }) fsk2
= do { let fsk_eq_co = mkTcSymCo (evTermCoercion tm)
`mkTcTransCo` ctEvCoercion from_this
-- :: fsk2 ~ fsk1
fsk_eq_pred = mkTcEqPred (mkTyVarTy fsk2) (mkTyVarTy fsk1)
; new_ev <- newGivenEvVar loc (fsk_eq_pred, EvCoercion fsk_eq_co)
; emitWorkNC [new_ev] }
reactFunEq from_this fuv1 (CtWanted { ctev_evar = evar }) fuv2
= dischargeFmv evar fuv2 (ctEvCoercion from_this) (mkTyVarTy fuv1)
reactFunEq _ _ solve_this@(CtDerived {}) _
= pprPanic "reactFunEq" (ppr solve_this)
{-
Note [Cache-caused loops]
~~~~~~~~~~~~~~~~~~~~~~~~~
It is very dangerous to cache a rewritten wanted family equation as 'solved' in our
solved cache (which is the default behaviour of xCtEvidence), because the interaction
may not be contributing towards a solution. Here is an example:
Initial inert set:
[W] g1 : F a ~ beta1
Work item:
[W] g2 : F a ~ beta2
The work item will react with the inert yielding the _same_ inert set plus:
i) Will set g2 := g1 `cast` g3
ii) Will add to our solved cache that [S] g2 : F a ~ beta2
iii) Will emit [W] g3 : beta1 ~ beta2
Now, the g3 work item will be spontaneously solved to [G] g3 : beta1 ~ beta2
and then it will react with the item in the inert ([W] g1 : F a ~ beta1). So it
will set
g1 := g ; sym g3
and what is g? Well it would ideally be a new goal of type (F a ~ beta2) but
remember that we have this in our solved cache, and it is ... g2! In short we
created the evidence loop:
g2 := g1 ; g3
g3 := refl
g1 := g2 ; sym g3
To avoid this situation we do not cache as solved any workitems (or inert)
which did not really make a 'step' towards proving some goal. Solved's are
just an optimization so we don't lose anything in terms of completeness of
solving.
Note [Efficient Orientation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Suppose we are interacting two FunEqCans with the same LHS:
(inert) ci :: (F ty ~ xi_i)
(work) cw :: (F ty ~ xi_w)
We prefer to keep the inert (else we pass the work item on down
the pipeline, which is a bit silly). If we keep the inert, we
will (a) discharge 'cw'
(b) produce a new equality work-item (xi_w ~ xi_i)
Notice the orientation (xi_w ~ xi_i) NOT (xi_i ~ xi_w):
new_work :: xi_w ~ xi_i
cw := ci ; sym new_work
Why? Consider the simplest case when xi1 is a type variable. If
we generate xi1~xi2, processing that constraint will kick out 'ci'.
If we generate xi2~xi1, there is less chance of that happening.
Of course it can and should still happen if xi1=a, xi1=Int, say.
But we want to avoid it happening needlessly.
Similarly, if we *can't* keep the inert item (because inert is Wanted,
and work is Given, say), we prefer to orient the new equality (xi_i ~
xi_w).
Note [Carefully solve the right CFunEqCan]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
---- OLD COMMENT, NOW NOT NEEDED
---- because we now allow multiple
---- wanted FunEqs with the same head
Consider the constraints
c1 :: F Int ~ a -- Arising from an application line 5
c2 :: F Int ~ Bool -- Arising from an application line 10
Suppose that 'a' is a unification variable, arising only from
flattening. So there is no error on line 5; it's just a flattening
variable. But there is (or might be) an error on line 10.
Two ways to combine them, leaving either (Plan A)
c1 :: F Int ~ a -- Arising from an application line 5
c3 :: a ~ Bool -- Arising from an application line 10
or (Plan B)
c2 :: F Int ~ Bool -- Arising from an application line 10
c4 :: a ~ Bool -- Arising from an application line 5
Plan A will unify c3, leaving c1 :: F Int ~ Bool as an error
on the *totally innocent* line 5. An example is test SimpleFail16
where the expected/actual message comes out backwards if we use
the wrong plan.
The second is the right thing to do. Hence the isMetaTyVarTy
test when solving pairwise CFunEqCan.
*********************************************************************************
* *
interactTyVarEq
* *
*********************************************************************************
-}
interactTyVarEq :: InertCans -> Ct -> TcS (StopOrContinue Ct)
-- CTyEqCans are always consumed, so always returns Stop
interactTyVarEq inerts workItem@(CTyEqCan { cc_tyvar = tv
, cc_rhs = rhs
, cc_ev = ev
, cc_eq_rel = eq_rel })
| (ev_i : _) <- [ ev_i | CTyEqCan { cc_ev = ev_i, cc_rhs = rhs_i }
<- findTyEqs inerts tv
, ev_i `canRewriteOrSame` ev
, rhs_i `tcEqType` rhs ]
= -- Inert: a ~ b
-- Work item: a ~ b
do { setEvBindIfWanted ev (ctEvTerm ev_i)
; stopWith ev "Solved from inert" }
| Just tv_rhs <- getTyVar_maybe rhs
, (ev_i : _) <- [ ev_i | CTyEqCan { cc_ev = ev_i, cc_rhs = rhs_i }
<- findTyEqs inerts tv_rhs
, ev_i `canRewriteOrSame` ev
, rhs_i `tcEqType` mkTyVarTy tv ]
= -- Inert: a ~ b
-- Work item: b ~ a
do { setEvBindIfWanted ev
(EvCoercion (mkTcSymCo (ctEvCoercion ev_i)))
; stopWith ev "Solved from inert (r)" }
| otherwise
= do { tclvl <- getTcLevel
; if canSolveByUnification tclvl ev eq_rel tv rhs
then do { solveByUnification ev tv rhs
; n_kicked <- kickOutRewritable Given NomEq tv
-- Given because the tv := xi is given
-- NomEq because only nom. equalities are solved
-- by unification
; return (Stop ev (ptext (sLit "Spontaneously solved") <+> ppr_kicked n_kicked)) }
else do { traceTcS "Can't solve tyvar equality"
(vcat [ text "LHS:" <+> ppr tv <+> dcolon <+> ppr (tyVarKind tv)
, ppWhen (isMetaTyVar tv) $
nest 4 (text "TcLevel of" <+> ppr tv
<+> text "is" <+> ppr (metaTyVarTcLevel tv))
, text "RHS:" <+> ppr rhs <+> dcolon <+> ppr (typeKind rhs)
, text "TcLevel =" <+> ppr tclvl ])
; n_kicked <- kickOutRewritable (ctEvFlavour ev)
(ctEvEqRel ev)
tv
; updInertCans (\ ics -> addInertCan ics workItem)
; return (Stop ev (ptext (sLit "Kept as inert") <+> ppr_kicked n_kicked)) } }
interactTyVarEq _ wi = pprPanic "interactTyVarEq" (ppr wi)
-- @trySpontaneousSolve wi@ solves equalities where one side is a
-- touchable unification variable.
-- Returns True <=> spontaneous solve happened
canSolveByUnification :: TcLevel -> CtEvidence -> EqRel
-> TcTyVar -> Xi -> Bool
canSolveByUnification tclvl gw eq_rel tv xi
| ReprEq <- eq_rel -- we never solve representational equalities this way.
= False
| isGiven gw -- See Note [Touchables and givens]
= False
| isTouchableMetaTyVar tclvl tv
= case metaTyVarInfo tv of
SigTv -> is_tyvar xi
_ -> True
| otherwise -- Untouchable
= False
where
is_tyvar xi
= case tcGetTyVar_maybe xi of
Nothing -> False
Just tv -> case tcTyVarDetails tv of
MetaTv { mtv_info = info }
-> case info of
SigTv -> True
_ -> False
SkolemTv {} -> True
FlatSkol {} -> False
RuntimeUnk -> True
solveByUnification :: CtEvidence -> TcTyVar -> Xi -> TcS ()
-- Solve with the identity coercion
-- Precondition: kind(xi) is a sub-kind of kind(tv)
-- Precondition: CtEvidence is Wanted or Derived
-- Precondition: CtEvidence is nominal
-- Returns: workItem where
-- workItem = the new Given constraint
--
-- NB: No need for an occurs check here, because solveByUnification always
-- arises from a CTyEqCan, a *canonical* constraint. Its invariants
-- say that in (a ~ xi), the type variable a does not appear in xi.
-- See TcRnTypes.Ct invariants.
--
-- Post: tv is unified (by side effect) with xi;
-- we often write tv := xi
solveByUnification wd tv xi
= do { let tv_ty = mkTyVarTy tv
; traceTcS "Sneaky unification:" $
vcat [text "Unifies:" <+> ppr tv <+> ptext (sLit ":=") <+> ppr xi,
text "Coercion:" <+> pprEq tv_ty xi,
text "Left Kind is:" <+> ppr (typeKind tv_ty),
text "Right Kind is:" <+> ppr (typeKind xi) ]
; let xi' = defaultKind xi
-- We only instantiate kind unification variables
-- with simple kinds like *, not OpenKind or ArgKind
-- cf TcUnify.uUnboundKVar
; setWantedTyBind tv xi'
; setEvBindIfWanted wd (EvCoercion (mkTcNomReflCo xi')) }
ppr_kicked :: Int -> SDoc
ppr_kicked 0 = empty
ppr_kicked n = parens (int n <+> ptext (sLit "kicked out"))
kickOutRewritable :: CtFlavour -- Flavour of the equality that is
-- being added to the inert set
-> EqRel -- of the new equality
-> TcTyVar -- The new equality is tv ~ ty
-> TcS Int
kickOutRewritable new_flavour new_eq_rel new_tv
| not ((new_flavour, new_eq_rel) `eqCanRewriteFR` (new_flavour, new_eq_rel))
= return 0 -- If new_flavour can't rewrite itself, it can't rewrite
-- anything else, so no need to kick out anything
-- This is a common case: wanteds can't rewrite wanteds
| otherwise
= do { ics <- getInertCans
; let (kicked_out, ics') = kick_out new_flavour new_eq_rel new_tv ics
; setInertCans ics'
; updWorkListTcS (appendWorkList kicked_out)
; unless (isEmptyWorkList kicked_out) $
csTraceTcS $
hang (ptext (sLit "Kick out, tv =") <+> ppr new_tv)
2 (vcat [ text "n-kicked =" <+> int (workListSize kicked_out)
, text "n-kept fun-eqs =" <+> int (sizeFunEqMap (inert_funeqs ics'))
, ppr kicked_out ])
; return (workListSize kicked_out) }
kick_out :: CtFlavour -> EqRel -> TcTyVar -> InertCans -> (WorkList, InertCans)
kick_out new_flavour new_eq_rel new_tv (IC { inert_eqs = tv_eqs
, inert_dicts = dictmap
, inert_funeqs = funeqmap
, inert_irreds = irreds
, inert_insols = insols })
= (kicked_out, inert_cans_in)
where
    -- NB: Notice that we don't rewrite
-- inert_solved_dicts, and inert_solved_funeqs
-- optimistically. But when we lookup we have to
-- take the substitution into account
inert_cans_in = IC { inert_eqs = tv_eqs_in
, inert_dicts = dicts_in
, inert_funeqs = feqs_in
, inert_irreds = irs_in
, inert_insols = insols_in }
kicked_out = WL { wl_eqs = tv_eqs_out
, wl_funeqs = feqs_out
, wl_rest = bagToList (dicts_out `andCts` irs_out
`andCts` insols_out)
, wl_implics = emptyBag }
(tv_eqs_out, tv_eqs_in) = foldVarEnv kick_out_eqs ([], emptyVarEnv) tv_eqs
(feqs_out, feqs_in) = partitionFunEqs kick_out_ct funeqmap
(dicts_out, dicts_in) = partitionDicts kick_out_ct dictmap
(irs_out, irs_in) = partitionBag kick_out_irred irreds
(insols_out, insols_in) = partitionBag kick_out_ct insols
-- Kick out even insolubles; see Note [Kick out insolubles]
can_rewrite :: CtEvidence -> Bool
can_rewrite = ((new_flavour, new_eq_rel) `eqCanRewriteFR`) . ctEvFlavourRole
kick_out_ct :: Ct -> Bool
kick_out_ct ct = kick_out_ctev (ctEvidence ct)
kick_out_ctev :: CtEvidence -> Bool
kick_out_ctev ev = can_rewrite ev
&& new_tv `elemVarSet` tyVarsOfType (ctEvPred ev)
-- See Note [Kicking out inert constraints]
kick_out_irred :: Ct -> Bool
kick_out_irred ct = can_rewrite (cc_ev ct)
&& new_tv `elemVarSet` closeOverKinds (tyVarsOfCt ct)
-- See Note [Kicking out Irreds]
kick_out_eqs :: EqualCtList -> ([Ct], TyVarEnv EqualCtList)
-> ([Ct], TyVarEnv EqualCtList)
kick_out_eqs eqs (acc_out, acc_in)
= (eqs_out ++ acc_out, case eqs_in of
[] -> acc_in
(eq1:_) -> extendVarEnv acc_in (cc_tyvar eq1) eqs_in)
where
(eqs_in, eqs_out) = partition keep_eq eqs
-- implements criteria K1-K3 in Note [The inert equalities] in TcFlatten
keep_eq (CTyEqCan { cc_tyvar = tv, cc_rhs = rhs_ty, cc_ev = ev
, cc_eq_rel = eq_rel })
| tv == new_tv
= not (can_rewrite ev) -- (K1)
| otherwise
= check_k2 && check_k3
where
check_k2 = not (ev `eqCanRewrite` ev)
|| not (can_rewrite ev)
|| not (new_tv `elemVarSet` tyVarsOfType rhs_ty)
check_k3
| can_rewrite ev
= case eq_rel of
NomEq -> not (rhs_ty `eqType` mkTyVarTy new_tv)
ReprEq -> not (isTyVarExposed new_tv rhs_ty)
| otherwise
= True
keep_eq ct = pprPanic "keep_eq" (ppr ct)
{-
Note [Kicking out inert constraints]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Given a new (a -> ty) inert, we want to kick out an existing inert
constraint if
a) the new constraint can rewrite the inert one
  b) 'a' is free in the inert constraint (so that it *will*
     rewrite it if we kick it out).
For (b) we use tyVarsOfCt, which returns the type variables /and
the kind variables/ that are directly visible in the type. Hence we
will have exposed all the rewriting we care about to make the most
precise kinds visible for matching classes etc. No need to kick out
constraints that mention type variables whose kinds contain this
variable! (Except see Note [Kicking out Irreds].)
Note [Kicking out Irreds]
~~~~~~~~~~~~~~~~~~~~~~~~~
There is an awkward special case for Irreds. When we have a
kind-mis-matched equality constraint (a:k1) ~ (ty:k2), we turn it into
an Irred (see Note [Equalities with incompatible kinds] in
TcCanonical). So in this case the free kind variables of k1 and k2
are not visible. More precisely, the type looks like
(~) k1 (a:k1) (ty:k2)
because (~) has kind forall k. k -> k -> Constraint. So the constraint
itself is ill-kinded. We can "see" k1 but not k2. That's why we use
closeOverKinds to make sure we see k2.
This is not pretty. Maybe (~) should have kind
   (~) :: forall k1 k2. k1 -> k2 -> Constraint
Note [Kick out insolubles]
~~~~~~~~~~~~~~~~~~~~~~~~~~
Suppose we have an insoluble alpha ~ [alpha], which is insoluble
because of an occurs check.  And then we unify alpha := [Int].
Then we really want to rewrite the insoluble to [Int] ~ [[Int]].
Now it can be decomposed. Otherwise we end up with a "Can't match
[Int] ~ [[Int]]" which is true, but a bit confusing because the
outer type constructors match.
Note [Avoid double unifications]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The spontaneous solver has to return a given which mentions the unified unification
variable *on the left* of the equality. Here is what happens if not:
Original wanted: (a ~ alpha), (alpha ~ Int)
We spontaneously solve the first wanted, without changing the order!
given : a ~ alpha [having unified alpha := a]
Now the second wanted comes along, but it cannot rewrite the given, so we simply continue.
At the end we spontaneously solve that guy, *reunifying* [alpha := Int]
We avoid this problem by orienting the resulting given so that the unification
variable is on the left. [Note that alternatively we could attempt to
enforce this at canonicalization]
See also Note [No touchables as FunEq RHS] in TcSMonad; avoiding
double unifications is the main reason we disallow touchable
unification variables as RHS of type family equations: F xis ~ alpha.
************************************************************************
* *
* Functional dependencies, instantiation of equations
* *
************************************************************************
When we spot an equality arising from a functional dependency,
we now use that equality (a "wanted") to rewrite the work-item
constraint right away. This avoids two dangers
Danger 1: If we send the original constraint on down the pipeline
it may react with an instance declaration, and in delicate
situations (when a Given overlaps with an instance) that
may produce new insoluble goals: see Trac #4952
Danger 2: If we don't rewrite the constraint, it may re-react
with the same thing later, and produce the same equality
again --> termination worries.
To achieve this required some refactoring of FunDeps.hs (nicer
now!).
-}
emitFunDepDeriveds :: [FunDepEqn CtLoc] -> TcS ()
emitFunDepDeriveds fd_eqns
= mapM_ do_one_FDEqn fd_eqns
where
do_one_FDEqn (FDEqn { fd_qtvs = tvs, fd_eqs = eqs, fd_loc = loc })
| null tvs -- Common shortcut
= mapM_ (unifyDerived loc Nominal) eqs
| otherwise
= do { (subst, _) <- instFlexiTcS tvs -- Takes account of kind substitution
; mapM_ (do_one_eq loc subst) eqs }
do_one_eq loc subst (Pair ty1 ty2)
= unifyDerived loc Nominal $
Pair (Type.substTy subst ty1) (Type.substTy subst ty2)
{-
*********************************************************************************
* *
The top-reaction Stage
* *
*********************************************************************************
-}
topReactionsStage :: WorkItem -> TcS (StopOrContinue Ct)
topReactionsStage wi
= do { inerts <- getTcSInerts
; tir <- doTopReact inerts wi
; case tir of
ContinueWith wi -> return (ContinueWith wi)
Stop ev s -> return (Stop ev (ptext (sLit "Top react:") <+> s)) }
doTopReact :: InertSet -> WorkItem -> TcS (StopOrContinue Ct)
-- The work item does not react with the inert set, so try interaction with top-level
-- instances. Note:
--
-- (a) The place to add superclasses is not here in the doTopReact stage.
-- Instead superclasses are added in the worklist as part of the
-- canonicalization process. See Note [Adding superclasses].
doTopReact inerts work_item
= do { traceTcS "doTopReact" (ppr work_item)
; case work_item of
CDictCan {} -> doTopReactDict inerts work_item
CFunEqCan {} -> doTopReactFunEq work_item
_ -> -- Any other work item does not react with any top-level equations
return (ContinueWith work_item) }
--------------------
doTopReactDict :: InertSet -> Ct -> TcS (StopOrContinue Ct)
-- Try to use type-class instance declarations to simplify the constraint
doTopReactDict inerts work_item@(CDictCan { cc_ev = fl, cc_class = cls
, cc_tyargs = xis })
| not (isWanted fl) -- Never use instances for Given or Derived constraints
= try_fundeps_and_return
| Just ev <- lookupSolvedDict inerts dict_loc cls xis -- Cached
= do { setWantedEvBind dict_id (ctEvTerm ev);
; stopWith fl "Dict/Top (cached)" }
| otherwise -- Not cached
= do { lkup_inst_res <- matchClassInst inerts cls xis dict_loc
; case lkup_inst_res of
GenInst wtvs ev_term -> do { addSolvedDict fl cls xis
; solve_from_instance wtvs ev_term }
NoInstance -> try_fundeps_and_return }
where
dict_id = ASSERT( isWanted fl ) ctEvId fl
dict_pred = mkClassPred cls xis
dict_loc = ctEvLoc fl
dict_origin = ctLocOrigin dict_loc
deeper_loc = bumpCtLocDepth CountConstraints dict_loc
solve_from_instance :: [CtEvidence] -> EvTerm -> TcS (StopOrContinue Ct)
-- Precondition: evidence term matches the predicate workItem
solve_from_instance evs ev_term
| null evs
= do { traceTcS "doTopReact/found nullary instance for" $
ppr dict_id
; setWantedEvBind dict_id ev_term
; stopWith fl "Dict/Top (solved, no new work)" }
| otherwise
= do { traceTcS "doTopReact/found non-nullary instance for" $
ppr dict_id
; setWantedEvBind dict_id ev_term
; let mk_new_wanted ev
= mkNonCanonical (ev {ctev_loc = deeper_loc })
; updWorkListTcS (extendWorkListCts (map mk_new_wanted evs))
; stopWith fl "Dict/Top (solved, more work)" }
-- We didn't solve it; so try functional dependencies with
-- the instance environment, and return
-- NB: even if there *are* some functional dependencies against the
-- instance environment, there might be a unique match, and if
-- so we make sure we get on and solve it first. See Note [Weird fundeps]
try_fundeps_and_return
= do { instEnvs <- getInstEnvs
; emitFunDepDeriveds $
improveFromInstEnv instEnvs mk_ct_loc dict_pred
; continueWith work_item }
mk_ct_loc :: PredType -- From instance decl
              -> SrcSpan    -- also from instance decl
-> CtLoc
mk_ct_loc inst_pred inst_loc
= dict_loc { ctl_origin = FunDepOrigin2 dict_pred dict_origin
inst_pred inst_loc }
doTopReactDict _ w = pprPanic "doTopReactDict" (ppr w)
--------------------
doTopReactFunEq :: Ct -> TcS (StopOrContinue Ct)
-- Note [Short cut for top-level reaction]
doTopReactFunEq work_item@(CFunEqCan { cc_ev = old_ev, cc_fun = fam_tc
, cc_tyargs = args , cc_fsk = fsk })
= ASSERT(isTypeFamilyTyCon fam_tc) -- No associated data families
-- have reached this far
ASSERT( not (isDerived old_ev) ) -- CFunEqCan is never Derived
-- Look up in top-level instances, or built-in axiom
do { match_res <- matchFam fam_tc args -- See Note [MATCHING-SYNONYMS]
; case match_res of {
Nothing -> do { try_improvement; continueWith work_item } ;
Just (ax_co, rhs_ty)
-- Found a top-level instance
| Just (tc, tc_args) <- tcSplitTyConApp_maybe rhs_ty
, isTypeFamilyTyCon tc
, tc_args `lengthIs` tyConArity tc -- Short-cut
-> shortCutReduction old_ev fsk ax_co tc tc_args
-- Try shortcut; see Note [Short cut for top-level reaction]
| isGiven old_ev -- Not shortcut
-> do { let final_co = mkTcSymCo (ctEvCoercion old_ev) `mkTcTransCo` ax_co
-- final_co :: fsk ~ rhs_ty
; new_ev <- newGivenEvVar deeper_loc (mkTcEqPred (mkTyVarTy fsk) rhs_ty,
EvCoercion final_co)
               ; emitWorkNC [new_ev] -- Non-canonical; that will mean we flatten rhs_ty
; stopWith old_ev "Fun/Top (given)" }
| not (fsk `elemVarSet` tyVarsOfType rhs_ty)
-> do { dischargeFmv (ctEvId old_ev) fsk ax_co rhs_ty
; traceTcS "doTopReactFunEq" $
vcat [ text "old_ev:" <+> ppr old_ev
, nest 2 (text ":=") <+> ppr ax_co ]
; stopWith old_ev "Fun/Top (wanted)" }
| otherwise -- We must not assign ufsk := ...ufsk...!
-> do { alpha_ty <- newFlexiTcSTy (tyVarKind fsk)
; new_ev <- newWantedEvVarNC loc (mkTcEqPred alpha_ty rhs_ty)
; emitWorkNC [new_ev]
-- By emitting this as non-canonical, we deal with all
-- flattening, occurs-check, and ufsk := ufsk issues
; let final_co = ax_co `mkTcTransCo` mkTcSymCo (ctEvCoercion new_ev)
-- ax_co :: fam_tc args ~ rhs_ty
-- new_ev :: alpha ~ rhs_ty
-- ufsk := alpha
-- final_co :: fam_tc args ~ alpha
; dischargeFmv (ctEvId old_ev) fsk final_co alpha_ty
; traceTcS "doTopReactFunEq (occurs)" $
vcat [ text "old_ev:" <+> ppr old_ev
, nest 2 (text ":=") <+> ppr final_co
, text "new_ev:" <+> ppr new_ev ]
; stopWith old_ev "Fun/Top (wanted)" } } }
where
loc = ctEvLoc old_ev
deeper_loc = bumpCtLocDepth CountTyFunApps loc
try_improvement
| Just ops <- isBuiltInSynFamTyCon_maybe fam_tc
= do { inert_eqs <- getInertEqs
; let eqns = sfInteractTop ops args (lookupFlattenTyVar inert_eqs fsk)
; mapM_ (unifyDerived loc Nominal) eqns }
| otherwise
= return ()
doTopReactFunEq w = pprPanic "doTopReactFunEq" (ppr w)
shortCutReduction :: CtEvidence -> TcTyVar -> TcCoercion
-> TyCon -> [TcType] -> TcS (StopOrContinue Ct)
-- See Note [Top-level reductions for type functions]
shortCutReduction old_ev fsk ax_co fam_tc tc_args
| isGiven old_ev
= ASSERT( ctEvEqRel old_ev == NomEq )
runFlatten $
do { (xis, cos) <- flattenManyNom old_ev tc_args
-- ax_co :: F args ~ G tc_args
-- cos :: xis ~ tc_args
-- old_ev :: F args ~ fsk
-- G cos ; sym ax_co ; old_ev :: G xis ~ fsk
; new_ev <- newGivenEvVar deeper_loc
( mkTcEqPred (mkTyConApp fam_tc xis) (mkTyVarTy fsk)
, EvCoercion (mkTcTyConAppCo Nominal fam_tc cos
`mkTcTransCo` mkTcSymCo ax_co
`mkTcTransCo` ctEvCoercion old_ev) )
; let new_ct = CFunEqCan { cc_ev = new_ev, cc_fun = fam_tc, cc_tyargs = xis, cc_fsk = fsk }
; emitFlatWork new_ct
; stopWith old_ev "Fun/Top (given, shortcut)" }
| otherwise
= ASSERT( not (isDerived old_ev) ) -- Caller ensures this
ASSERT( ctEvEqRel old_ev == NomEq )
do { (xis, cos) <- flattenManyNom old_ev tc_args
-- ax_co :: F args ~ G tc_args
-- cos :: xis ~ tc_args
-- G cos ; sym ax_co ; old_ev :: G xis ~ fsk
-- new_ev :: G xis ~ fsk
-- old_ev :: F args ~ fsk := ax_co ; sym (G cos) ; new_ev
; new_ev <- newWantedEvVarNC deeper_loc
(mkTcEqPred (mkTyConApp fam_tc xis) (mkTyVarTy fsk))
; setWantedEvBind (ctEvId old_ev)
(EvCoercion (ax_co `mkTcTransCo` mkTcSymCo (mkTcTyConAppCo Nominal fam_tc cos)
`mkTcTransCo` ctEvCoercion new_ev))
; let new_ct = CFunEqCan { cc_ev = new_ev, cc_fun = fam_tc, cc_tyargs = xis, cc_fsk = fsk }
; emitFlatWork new_ct
; stopWith old_ev "Fun/Top (wanted, shortcut)" }
where
loc = ctEvLoc old_ev
deeper_loc = bumpCtLocDepth CountTyFunApps loc
dischargeFmv :: EvVar -> TcTyVar -> TcCoercion -> TcType -> TcS ()
-- (dischargeFmv x fmv co ty)
-- [W] x :: F tys ~ fuv
-- co :: F tys ~ ty
-- Precondition: fuv is not filled, and fuv `notElem` ty
--
-- Then set fuv := ty,
-- set x := co
-- kick out any inert things that are now rewritable
dischargeFmv evar fmv co xi
= ASSERT2( not (fmv `elemVarSet` tyVarsOfType xi), ppr evar $$ ppr fmv $$ ppr xi )
do { setWantedTyBind fmv xi
; setWantedEvBind evar (EvCoercion co)
; n_kicked <- kickOutRewritable Given NomEq fmv
; traceTcS "dischargeFuv" (ppr fmv <+> equals <+> ppr xi $$ ppr_kicked n_kicked) }
{- Note [Top-level reductions for type functions]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
c.f. Note [The flattening story] in TcFlatten
Suppose we have a CFunEqCan F tys ~ fmv/fsk, and a matching axiom.
Here is what we do, in four cases:
* Wanteds: general firing rule
(work item) [W] x : F tys ~ fmv
instantiate axiom: ax_co : F tys ~ rhs
Then:
Discharge fmv := alpha
Discharge x := ax_co ; sym x2
New wanted [W] x2 : alpha ~ rhs (Non-canonical)
This is *the* way that fmv's get unified; even though they are
"untouchable".
NB: it can be the case that fmv appears in the (instantiated) rhs.
In that case the new Non-canonical wanted will be loopy, but that's
ok. But it's a good reason NOT to claim that it is canonical!
* Wanteds: short cut firing rule
Applies when the RHS of the axiom is another type-function application
(work item) [W] x : F tys ~ fmv
instantiate axiom: ax_co : F tys ~ G rhs_tys
It would be a waste to create yet another fmv for (G rhs_tys).
Instead (shortCutReduction):
- Flatten rhs_tys (cos : rhs_tys ~ rhs_xis)
- Add G rhs_xis ~ fmv to flat cache (note: the same old fmv)
- New canonical wanted [W] x2 : G rhs_xis ~ fmv (CFunEqCan)
- Discharge x := ax_co ; G cos ; x2
* Givens: general firing rule
(work item) [G] g : F tys ~ fsk
instantiate axiom: ax_co : F tys ~ rhs
Now add non-canonical given (since rhs is not flat)
[G] (sym g ; ax_co) : fsk ~ rhs (Non-canonical)
* Givens: short cut firing rule
Applies when the RHS of the axiom is another type-function application
(work item) [G] g : F tys ~ fsk
instantiate axiom: ax_co : F tys ~ G rhs_tys
It would be a waste to create yet another fsk for (G rhs_tys).
Instead (shortCutReduction):
- Flatten rhs_tys: flat_cos : tys ~ flat_tys
- Add new Canonical given
[G] (sym (G flat_cos) ; co ; g) : G flat_tys ~ fsk (CFunEqCan)
Note [Cached solved FunEqs]
~~~~~~~~~~~~~~~~~~~~~~~~~~~
When trying to solve, say (FunExpensive big-type ~ ty), it's important
to see if we have reduced (FunExpensive big-type) before, lest we
simply repeat it. Hence the lookup in inert_solved_funeqs. Moreover
we must use `canRewriteOrSame` because both uses might (say) be Wanteds,
and we *still* want to save the re-computation.
Note [MATCHING-SYNONYMS]
~~~~~~~~~~~~~~~~~~~~~~~~
When trying to match a dictionary (D tau) to a top-level instance, or a
type family equation (F taus_1 ~ tau_2) to a top-level family instance,
we do *not* need to expand type synonyms because the matcher will do that for us.
Note [RHS-FAMILY-SYNONYMS]
~~~~~~~~~~~~~~~~~~~~~~~~~~
The RHS of a family instance is represented as yet another constructor which is
like a type synonym for the real RHS the programmer declared. Eg:
type instance F (a,a) = [a]
Becomes:
:R32 a = [a] -- internal type synonym introduced
F (a,a) ~ :R32 a -- instance
When we react a family instance with a type family equation in the work list
we keep the synonym-using RHS without expansion.
Note [FunDep and implicit parameter reactions]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Currently, our story of interacting two dictionaries (or a dictionary
and top-level instances) for functional dependencies, and implicit
parameters, is that we simply produce new Derived equalities. So for example
class D a b | a -> b where ...
Inert:
d1 :g D Int Bool
WorkItem:
d2 :w D Int alpha
We generate the extra work item
cv :d alpha ~ Bool
where 'cv' is currently unused. However, this new item can perhaps be
spontaneously solved to become given and react with d2,
discharging it in favour of a new constraint d2' thus:
d2' :w D Int Bool
d2 := d2' |> D Int cv
Now d2' can be discharged from d1
We could be more aggressive and try to *immediately* solve the dictionary
using those extra equalities, but that requires those equalities to carry
evidence and derived do not carry evidence.
If that were the case, with the same inert set and work item we might discharge
d2 directly:
cv :w alpha ~ Bool
d2 := d1 |> D Int cv
But in general it's a bit painful to figure out the necessary coercion,
so we just take the first approach. Here is a better example. Consider:
class C a b c | a -> b
And:
[Given] d1 : C T Int Char
[Wanted] d2 : C T beta Int
In this case, it's *not even possible* to solve the wanted immediately.
So we should simply output the functional dependency and add this guy
[but NOT its superclasses] back in the worklist. Even worse:
[Given] d1 : C T Int beta
[Wanted] d2: C T beta Int
Then it is solvable, but it's very hard to detect this on the spot.
It's exactly the same with implicit parameters, except that the
"aggressive" approach would be much easier to implement.
Note [When improvement happens]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We fire an improvement rule when
* Two constraints match (modulo the fundep)
e.g. C t1 t2, C t1 t3 where C a b | a->b
The two match because the first arg is identical
Note that we *do* fire the improvement if one is Given and one is Derived (e.g. a
superclass of a Wanted goal) or if both are Given.
Example (tcfail138)
class L a b | a -> b
class (G a, L a b) => C a b
instance C a b' => G (Maybe a)
instance C a b => C (Maybe a) a
instance L (Maybe a) a
When solving the superclasses of the (C (Maybe a) a) instance, we get
  Given: C a b ... and hence by superclasses, (G a, L a b)
Wanted: G (Maybe a)
Use the instance decl to get
Wanted: C a b'
The (C a b') is inert, so we generate its Derived superclasses (L a b'),
and now we need improvement between that derived superclass and the Given (L a b).
Test typecheck/should_fail/FDsFromGivens also shows why it's a good idea to
emit Derived FDs for givens as well.
Note [Weird fundeps]
~~~~~~~~~~~~~~~~~~~~
Consider class Het a b | a -> b where
het :: m (f c) -> a -> m b
class GHet (a :: * -> *) (b :: * -> *) | a -> b
instance GHet (K a) (K [a])
instance Het a b => GHet (K a) (K b)
The two instances don't actually conflict on their fundeps,
although it's pretty strange. So they are both accepted. Now
try [W] GHet (K Int) (K Bool)
This triggers fundeps from both instance decls; but it also
matches a *unique* instance decl, and we should go ahead and
pick that one right now. Otherwise, if we don't, it ends up
unsolved in the inert set and is reported as an error.
Trac #7875 is a case in point.
Note [Overriding implicit parameters]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Consider
f :: (?x::a) -> Bool -> a
g v = let ?x::Int = 3
in (f v, let ?x::Bool = True in f v)
This should probably be well typed, with
g :: Bool -> (Int, Bool)
So the inner binding for ?x::Bool *overrides* the outer one.
Hence a work-item Given overrides an inert-item Given.
-}
data LookupInstResult
= NoInstance
| GenInst [CtEvidence] EvTerm
instance Outputable LookupInstResult where
ppr NoInstance = text "NoInstance"
ppr (GenInst ev t) = text "GenInst" <+> ppr ev <+> ppr t
matchClassInst :: InertSet -> Class -> [Type] -> CtLoc -> TcS LookupInstResult
matchClassInst _ clas [ ty ] _
| className clas == knownNatClassName
, Just n <- isNumLitTy ty = makeDict (EvNum n)
| className clas == knownSymbolClassName
, Just s <- isStrLitTy ty = makeDict (EvStr s)
where
{- This adds a coercion that will convert the literal into a dictionary
of the appropriate type. See Note [KnownNat & KnownSymbol and EvLit]
in TcEvidence. The coercion happens in 2 steps:
Integer -> SNat n -- representation of literal to singleton
SNat n -> KnownNat n -- singleton to dictionary
The process is mirrored for Symbols:
String -> SSymbol n
SSymbol n -> KnownSymbol n
-}
makeDict evLit
| Just (_, co_dict) <- tcInstNewTyCon_maybe (classTyCon clas) [ty]
-- co_dict :: KnownNat n ~ SNat n
, [ meth ] <- classMethods clas
, Just tcRep <- tyConAppTyCon_maybe -- SNat
$ funResultTy -- SNat n
$ dropForAlls -- KnownNat n => SNat n
$ idType meth -- forall n. KnownNat n => SNat n
, Just (_, co_rep) <- tcInstNewTyCon_maybe tcRep [ty]
-- SNat n ~ Integer
= return (GenInst [] $ mkEvCast (EvLit evLit) (mkTcSymCo (mkTcTransCo co_dict co_rep)))
| otherwise
= panicTcS (text "Unexpected evidence for" <+> ppr (className clas)
$$ vcat (map (ppr . idType) (classMethods clas)))
matchClassInst _ clas [k,t] loc
| className clas == typeableClassName = matchTypeableClass clas k t loc
matchClassInst inerts clas tys loc
= do { dflags <- getDynFlags
; tclvl <- getTcLevel
; traceTcS "matchClassInst" $ vcat [ text "pred =" <+> ppr pred
, text "inerts=" <+> ppr inerts
, text "untouchables=" <+> ppr tclvl ]
; instEnvs <- getInstEnvs
; case lookupInstEnv instEnvs clas tys of
([], _, _) -- Nothing matches
-> do { traceTcS "matchClass not matching" $
vcat [ text "dict" <+> ppr pred ]
; return NoInstance }
([(ispec, inst_tys)], [], _) -- A single match
| not (xopt Opt_IncoherentInstances dflags)
, given_overlap tclvl
-> -- See Note [Instance and Given overlap]
do { traceTcS "Delaying instance application" $
vcat [ text "Workitem=" <+> pprType (mkClassPred clas tys)
, text "Relevant given dictionaries=" <+> ppr givens_for_this_clas ]
; return NoInstance }
| otherwise
-> do { let dfun_id = instanceDFunId ispec
; traceTcS "matchClass success" $
vcat [text "dict" <+> ppr pred,
text "witness" <+> ppr dfun_id
<+> ppr (idType dfun_id) ]
-- Record that this dfun is needed
; match_one dfun_id inst_tys }
(matches, _, _) -- More than one matches
-- Defer any reactions of a multitude
-- until we learn more about the reagent
-> do { traceTcS "matchClass multiple matches, deferring choice" $
vcat [text "dict" <+> ppr pred,
text "matches" <+> ppr matches]
; return NoInstance } }
where
pred = mkClassPred clas tys
match_one :: DFunId -> [DFunInstType] -> TcS LookupInstResult
-- See Note [DFunInstType: instantiating types] in InstEnv
match_one dfun_id mb_inst_tys
= do { checkWellStagedDFun pred dfun_id loc
; (tys, theta) <- instDFunType dfun_id mb_inst_tys
; evc_vars <- mapM (newWantedEvVar loc) theta
; let new_ev_vars = freshGoals evc_vars
-- new_ev_vars are only the real new variables that can be emitted
dfun_app = EvDFunApp dfun_id tys (map (ctEvTerm . fst) evc_vars)
; return $ GenInst new_ev_vars dfun_app }
givens_for_this_clas :: Cts
givens_for_this_clas
= filterBag isGivenCt (findDictsByClass (inert_dicts $ inert_cans inerts) clas)
given_overlap :: TcLevel -> Bool
given_overlap tclvl = anyBag (matchable tclvl) givens_for_this_clas
matchable tclvl (CDictCan { cc_class = clas_g, cc_tyargs = sys
, cc_ev = fl })
| isGiven fl
= ASSERT( clas_g == clas )
case tcUnifyTys (\tv -> if isTouchableMetaTyVar tclvl tv &&
tv `elemVarSet` tyVarsOfTypes tys
then BindMe else Skolem) tys sys of
-- We can't learn anything more about any variable at this point, so the only
-- cause of overlap can be by an instantiation of a touchable unification
-- variable. Hence we only bind touchable unification variables. In addition,
-- we use tcUnifyTys instead of tcMatchTys to rule out cyclic substitutions.
Nothing -> False
Just _ -> True
| otherwise = False -- No overlap with a solved, already been taken care of
-- by the overlap check with the instance environment.
matchable _tys ct = pprPanic "Expecting dictionary!" (ppr ct)
{-
Note [Instance and Given overlap]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example, from the OutsideIn(X) paper:
instance P x => Q [x]
instance (x ~ y) => R y [x]
wob :: forall a b. (Q [b], R b a) => a -> Int
g :: forall a. Q [a] => [a] -> Int
g x = wob x
This will generate the implication constraint:
Q [a] => (Q [beta], R beta [a])
If we react (Q [beta]) with its top-level axiom, we end up with a
(P beta), which we have no way of discharging. On the other hand,
if we react R beta [a] with the top-level we get (beta ~ a), which
is solvable and can help us rewrite (Q [beta]) to (Q [a]) which is
now solvable by the given Q [a].
The solution is that:
In matchClassInst (and thus in topReact), we return a matching
instance only when there is no Given in the inerts which is
unifiable to this particular dictionary.
The end effect is that, much as we do for overlapping instances, we delay choosing a
class instance if there is a possibility of another instance OR a given to match our
constraint later on. This fixes bugs #4981 and #5002.
This is arguably unlikely to arise in practice, due to our aggressive prioritization
of equality solving over other constraints, but it is possible. I've added a test case
in typecheck/should-compile/GivenOverlapping.hs
We ignore the overlap problem if -XIncoherentInstances is in force: see
Trac #6002 for a worked-out example where this makes a difference.
Moreover notice that our goals here are different than the goals of the top-level
overlapping checks. There we are interested in validating the following principle:
If we inline a function f at a site where the same global instance environment
is available as the instance environment at the definition site of f then we
should get the same behaviour.
But for the Given Overlap check our goal is just related to completeness of
constraint solving.
-}
-- | Is the constraint for an implicit CallStack parameter?
isCallStackIP :: CtLoc -> Class -> Type -> Maybe (EvTerm -> EvCallStack)
isCallStackIP loc cls ty
| Just (tc, []) <- splitTyConApp_maybe ty
, cls `hasKey` ipClassNameKey && tc `hasKey` callStackTyConKey
= occOrigin (ctLocOrigin loc)
where
-- We only want to grab constraints that arose due to the use of an IP or a
-- function call. See Note [Overview of implicit CallStacks]
occOrigin (OccurrenceOf n)
= Just (EvCsPushCall n locSpan)
occOrigin (IPOccOrigin n)
= Just (EvCsTop ('?' `consFS` hsIPNameFS n) locSpan)
occOrigin _
= Nothing
locSpan
= ctLocSpan loc
isCallStackIP _ _ _
= Nothing
-- | Assumes that we've checked that this is the 'Typeable' class,
-- and it was applied to the correct argument.
matchTypeableClass :: Class -> Kind -> Type -> CtLoc -> TcS LookupInstResult
matchTypeableClass clas k t loc
| isForAllTy k = return NoInstance
| Just (tc, ks) <- splitTyConApp_maybe t
, all isKind ks = doTyCon tc ks
| Just (f,kt) <- splitAppTy_maybe t = doTyApp f kt
| Just _ <- isNumLitTy t = mkSimpEv (EvTypeableTyLit t)
| Just _ <- isStrLitTy t = mkSimpEv (EvTypeableTyLit t)
| otherwise = return NoInstance
where
-- Representation for type constructor applied to some kinds
doTyCon tc ks =
case mapM kindRep ks of
Nothing -> return NoInstance
Just kReps -> mkSimpEv (EvTypeableTyCon tc kReps)
{- Representation for an application of a type to a type-or-kind.
This may happen when the type expression starts with a type variable.
Example (ignoring kind parameter):
Typeable (f Int Char) -->
(Typeable (f Int), Typeable Char) -->
(Typeable f, Typeable Int, Typeable Char) --> (after some simp. steps)
Typeable f
-}
doTyApp f tk
| isKind tk = return NoInstance -- We can't solve until we know the ctr.
| otherwise =
do ct1 <- subGoal f
ct2 <- subGoal tk
let realSubs = [ c | (c,Fresh) <- [ct1,ct2] ]
return $ GenInst realSubs
$ EvTypeable $ EvTypeableTyApp (getEv ct1,f) (getEv ct2,tk)
-- Representation for concrete kinds. We just use the kind itself,
-- but first check to make sure that it is "simple" (i.e., made entirely
-- out of kind constructors).
kindRep ki = do (_,ks) <- splitTyConApp_maybe ki
mapM_ kindRep ks
return ki
getEv (ct,_fresh) = ctEvTerm ct
-- Emit a `Typeable` constraint for the given type.
subGoal ty = do let goal = mkClassPred clas [ typeKind ty, ty ]
newWantedEvVar loc goal
mkSimpEv ev = return (GenInst [] (EvTypeable ev))
| gcampax/ghc | compiler/typecheck/TcInteract.hs | bsd-3-clause | 79,229 | 110 | 23 | 23,625 | 11,462 | 5,959 | 5,503 | -1 | -1 |
module SchemeParser where
import Text.ParserCombinators.Parsec hiding (spaces)
import System.Environment
import Control.Monad
import Control.Monad.Error
import Types
import Error
symbol :: Parser Char
symbol = oneOf "!#$%&|*+-/:<=>?@^_~"
spaces :: Parser ()
spaces = skipMany1 space
parseString :: Parser LispVal
parseString = do
char '"'
x <- many (noneOf "\"")
char '"'
return $ String x
parseAtom :: Parser LispVal
parseAtom = do
first <- letter <|> symbol
rest <- many (letter <|> digit <|> symbol)
let atom = first:rest
return $ case atom of
"#t" -> Bool True
"#f" -> Bool False
_ -> Atom atom
parseNumber :: Parser LispVal
parseNumber = liftM (Number . read) $ many1 digit
parseList :: Parser LispVal
parseList = liftM List $ sepBy parseExpr spaces
parseDottedList :: Parser LispVal
parseDottedList = do
head <- endBy parseExpr spaces
tail <- char '.' >> spaces >> parseExpr
return $ DottedList head tail
parseQuoted :: Parser LispVal
parseQuoted = do
char '\''
x <- parseExpr
return $ List [Atom "quote", x]
parseExpr :: Parser LispVal
parseExpr = parseAtom
<|> parseString
<|> parseNumber
<|> parseQuoted
<|> do char '('
x <- try parseList <|> parseDottedList
char ')'
return x
readExpr :: String -> ThrowsError LispVal
readExpr input = case parse parseExpr "lisp " input of
Left err -> throwError $ Parser err
Right val -> return val
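-- A rough usage sketch (illustrative, not part of the original module); the
-- printed form depends on the 'Show' instance for 'LispVal' defined in Types:
--
-- > readExpr "(quote (1 2))" ==> Right (List [Atom "quote", List [Number 1, Number 2]])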
| programming-language-kungfu/scheme | src/SchemeParser.hs | bsd-3-clause | 1,647 | 0 | 11 | 521 | 494 | 236 | 258 | 53 | 3 |
{-# LANGUAGE CPP #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TemplateHaskell #-}
-- |
-- Module : Main
-- Copyright : (c) The President and Fellows of Harvard College 2009-2010
-- Copyright : (c) Geoffrey Mainland 2012
-- License : BSD-style
--
-- Maintainer : Geoffrey Mainland <[email protected]>
-- Stability : experimental
-- Portability : non-portable
module Main where
import Prelude hiding (map, zipWith, zipWith3)
import qualified Control.Exception as E
import Control.Monad (forM_)
import qualified Criterion as C
import qualified Criterion.Main as C
import qualified Data.Vector.Storable as V
import System.Environment
import Text.Printf
#if !MIN_VERSION_vector(0,10,0)
import Control.DeepSeq
import Foreign (Storable)
#endif /* !MIN_VERSION_vector(0,10,0) */
import qualified Data.Array.Nikola.Backend.CUDA as N
import qualified Data.Array.Nikola.Backend.CUDA.Haskell as NH
import qualified Data.Array.Nikola.Backend.CUDA.TH as NTH
import Data.Array.Nikola.Util.Statistics
import Data.Array.Nikola.Util.Random
import qualified BlackScholes.Nikola as BSN
import qualified BlackScholes.Vector as BSV
type F = Double
rISKFREE :: F
rISKFREE = 0.02
vOLATILITY :: F
vOLATILITY = 0.30;
main :: IO ()
main = do
args <- System.Environment.getArgs
N.initializeCUDACtx
case args of
["--validate"] -> validate
_ -> mapM benchmarksForN [0,2..20] >>=
C.defaultMain
benchmarksForN :: Double -> IO C.Benchmark
benchmarksForN logn = do
(ss, xs, ts) <- generateData n
return $ C.bgroup (printf "2**%-2.0f" logn)
[ C.bench (printf " vector 2**%-2.0f" logn) $
C.nf blackscholesVector (ss, xs, ts)
, C.bench (printf "nikola interpreter 2**%-2.0f" logn) $
C.nf blackscholesNikola (ss, xs, ts)
, C.bench (printf " nikola compiled 2**%-2.0f" logn) $
C.nf blackscholesNikolaCompiled (ss, xs, ts)
]
where
n :: Int
n = truncate (2**logn)
generateData :: Int -> IO (V.Vector F, V.Vector F, V.Vector F)
generateData n = do
ss <- randomsRange n (5.0, 30.0)
xs <- randomsRange n (1.0, 100.0)
ts <- randomsRange n (0.25, 10.0)
E.evaluate ss
E.evaluate xs
E.evaluate ts
return (ss, xs, ts)
blackscholesNikola :: (V.Vector F, V.Vector F, V.Vector F)
-> V.Vector F
blackscholesNikola (ss, xs, ts) =
NH.compile BSN.blackscholes ss xs ts rISKFREE vOLATILITY
blackscholesNikolaCompiled :: (V.Vector F, V.Vector F, V.Vector F)
-> V.Vector F
blackscholesNikolaCompiled (ss, xs, ts) =
blackscholes ss xs ts rISKFREE vOLATILITY
where
blackscholes :: V.Vector F
-> V.Vector F
-> V.Vector F
-> F
-> F
-> V.Vector F
blackscholes = $(NTH.compileSig BSN.blackscholes
(undefined :: V.Vector F
-> V.Vector F
-> V.Vector F
-> F
-> F
-> V.Vector F))
blackscholesVector :: (V.Vector F, V.Vector F, V.Vector F)
-> V.Vector F
{-# INLINE blackscholesVector #-}
blackscholesVector (ss, xs, ts) =
V.zipWith3 (\s x t -> BSV.blackscholes True s x t rISKFREE vOLATILITY) ss xs ts
validate :: IO ()
validate =
forM_ [0,2..16] $ \(logn :: Double) -> do
let n = truncate (2**logn)
(ss, xs, ts) <- generateData n
let v1 = blackscholesNikolaCompiled (ss, xs, ts)
let v2 = blackscholesVector (ss, xs, ts)
validateL1Norm ePSILON (printf "2**%-2.0f elements" logn) v1 v2
where
ePSILON :: F
ePSILON = 1e-10
#if !MIN_VERSION_vector(0,10,0)
instance Storable a => NFData (V.Vector a) where
rnf v = V.length v `seq` ()
#endif /* !MIN_VERSION_vector(0,10,0) */
| mainland/nikola | examples/blackscholes/Main.hs | bsd-3-clause | 4,080 | 0 | 17 | 1,248 | 1,135 | 620 | 515 | 93 | 2 |
module GSS
( GState, Pos, create, add, pop, fetchDescriptor, mkGState )
where
import Data.Map as M
import Data.Set as S
import Data.Maybe
import Data.List
-- | Position in the input stream
type Pos = Int
-- | Type of GSS node. Nodes should be created using 'create'
data Node lab = Root | Node lab Pos deriving (Eq,Ord,Show) -- TODO: remove Show in production
type Descriptor lab = (lab, Node lab, Pos)
type R lab = [(lab, Node lab, Pos)]
type U lab = Set (lab,Node lab,Pos)
type P lab = Map (Node lab) [Pos]
type G lab = Set (Node lab)
type E lab = Map (Node lab) (Set (Node lab)) -- parents
-- | The state of GSS, whose nodes are labeled with labels of type @lab@
data GState lab = GState
{ gee :: G lab
, er :: R lab
, pe :: P lab
, curr_u :: Node lab
, parents :: E lab
, yu :: U lab
}
-- | Creates initial 'GState'
mkGState :: GState lab
mkGState =
GState { gee = S.singleton Root
, parents = M.empty
, curr_u = Root
, pe = M.empty
, er = []
, yu = S.empty
}
-- | Fetch descriptor from the /R/ set. Return 'Nothing' if /R/ is empty.
-- If /R/ is not empty, it is undefined what descriptor from the /R/ set will be
-- returned.
-- If a descriptor is returned, it is removed from /R/, and current node is
-- updated.
fetchDescriptor :: GState lab -> (GState lab, Maybe (lab, Pos))
fetchDescriptor gstate =
case er gstate of
[] -> (gstate, Nothing)
d@(l,u,i):ds -> (gstate { er = ds, curr_u = u }, Just (l,i))
-- | @create l u i@ ensures that:
--
-- * node @v@ with label @l@ and input position @i@ exists in GSS
--
-- * node @v@ is a child of the current node
--
-- * if @v@ was already popped on position @i@, then descriptors are added to
-- /R/, as if @pop@ was executed after @create@
--
-- * node @v@ is made the current node
--
-- 'create' returns an updated GSS and an indicator of whether a new node has
-- been created (True) or the node already existed (False)
create :: (Eq lab, Ord lab) => lab -> Pos -> GState lab
-> (GState lab, Bool)
create label i oldgs =
if node_exists && u `S.member` (parents oldgs M.! v)
then -- nothing to do
(oldgs, False)
else
(set_current.add_popped.connect_v.insert_v $ oldgs, not node_exists)
where
node_exists = v `S.member` g
g = gee oldgs
p = pe oldgs
v = Node label i
u = curr_u oldgs
insert_v gstate = gstate { gee = S.insert v g }
connect_v gstate = gstate { parents = M.insertWith (S.union) v (S.singleton u) (parents gstate) }
add_popped gstate = foldl' (\gs j -> add1 (label, u, j) gs) gstate popped
popped = fromMaybe [] (M.lookup v p)
set_current gstate = gstate { curr_u = v }
-- | Adds descriptor to /R/ if it hasn't been added yet
--
-- For internal use only! Outside the module use 'add'
add1 :: (Eq lab, Ord lab) => Descriptor lab -> GState lab -> GState lab
add1 desc oldgs =
if not (desc `S.member` yu_)
then oldgs {er = desc:r, yu = S.insert desc yu_}
else oldgs
where
r = er oldgs
yu_ = yu oldgs
-- | Adds descriptor to /R/ if it hasn't been added yet
add :: (Eq lab, Ord lab) => lab -> Pos -> GState lab -> GState lab
add l i oldgs =
if not (desc `S.member` yu_)
then oldgs {er = desc:r, yu = S.insert desc yu_}
else oldgs
where
desc = (l, curr_u oldgs, i)
r = er oldgs
yu_ = yu oldgs
-- | @pop i@ adds descriptors (@label(u)@, @v@, @i@) to /R/ for every parent @v@ of
-- @u@, where @u@ is the current node.
pop :: (Eq lab, Ord lab) => Pos -> GState lab -> GState lab
pop i oldgs = if u == Root then oldgs else newgs
where
u = curr_u oldgs
Node label _ = u
prnts = parents oldgs
update_pe gstate = gstate { pe = M.insertWith (++) u [i] (pe gstate) }
create_descriptors gstate = foldl (\gs parent -> add1 (label, parent, i) gs) gstate (S.elems (prnts!u))
newgs = create_descriptors . update_pe $ oldgs
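{- A minimal driver sketch (illustrative, not part of the original module):
   repeatedly fetch descriptors until the R set is exhausted.  The @step@
   argument is a hypothetical stand-in for the grammar-specific action run
   for each descriptor.

     drive :: (lab -> Pos -> GState lab -> GState lab)
           -> GState lab -> GState lab
     drive step gs =
       case fetchDescriptor gs of
         (gs', Nothing)     -> gs'
         (gs', Just (l, i)) -> drive step (step l i gs')
-}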
| adept/hagll | GSS.hs | bsd-3-clause | 3,947 | 0 | 12 | 1,028 | 1,228 | 692 | 536 | 73 | 2 |
module Utils.Compat where
import MonadLib
import Network.CGI.Monad
import qualified Control.Monad.Trans as MTL
instance MonadT CGIT where
lift = MTL.lift
instance BaseM m n => BaseM (CGIT m) n where
inBase = lift . inBase
instance MonadCGI m => MonadCGI (ReaderT r m) where
cgiAddHeader h s = lift (cgiAddHeader h s)
cgiGet f = lift (cgiGet f)
instance MonadCGI m => MonadCGI (StateT r m) where
cgiAddHeader h s = lift (cgiAddHeader h s)
cgiGet f = lift (cgiGet f)
instance MTL.MonadIO m => MTL.MonadIO (ReaderT r m) where
liftIO = lift . MTL.liftIO
instance MTL.MonadIO m => MTL.MonadIO (StateT r m) where
liftIO = lift . MTL.liftIO
| glguy/hpaste | src/Utils/Compat.hs | bsd-3-clause | 673 | 0 | 8 | 147 | 271 | 138 | 133 | -1 | -1 |
-- |Types used during code generation, shared between 'Generate' and
-- Generate.PostProcessing
module Generate.Types where
import Control.Applicative
import Control.Monad.RW
import Data.Monoid
import BuildData
import Data.XCB.Types
{-
output from generation:
an HsModule
However, as an intermediate result, we need:
+ a partial HsModule (represented by a morphism (HsModule -> HsModule))
+ meta data about the module:
+ Event meta data
+ Errors meta data
+ Request meta data
+ ???
information needed during parsing:
+ Stuff from the header
+ the list of declarations
+ a mapping from all other module names to fancy-names
the list of declarations is provided as the input.
the other information is provided in a reader module.
since the output data does not need to be read during processing, it will be dumped out using a writer monad
-}
-- during processing
data ReaderData = ReaderData
{readerData_current :: XHeader
,readerData_all :: [XHeader]
}
type Generate a = RW ReaderData BuildData a
type Gen = Generate ()
-- post processing is a function :: BuildResult -> ReadData -> HsModule
| aslatter/xhb | build-utils/src/Generate/Types.hs | bsd-3-clause | 1,137 | 0 | 9 | 220 | 85 | 54 | 31 | 11 | 0 |
-- -----------------------------------------------------------------------------
-- Copyright 2002, Simon Marlow.
-- Copyright 2006, Bjorn Bringert.
-- Copyright 2009, Henning Thielemann.
-- All rights reserved.
--
-- Redistribution and use in source and binary forms, with or without
-- modification, are permitted provided that the following conditions are
-- met:
--
-- * Redistributions of source code must retain the above copyright notice,
-- this list of conditions and the following disclaimer.
--
-- * Redistributions in binary form must reproduce the above copyright
-- notice, this list of conditions and the following disclaimer in the
-- documentation and/or other materials provided with the distribution.
--
-- * Neither the name of the copyright holder(s) nor the names of
-- contributors may be used to endorse or promote products derived from
-- this software without specific prior written permission.
--
-- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-- "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-- A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-- OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-- SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-- LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-- DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-- THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-- OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-- -----------------------------------------------------------------------------
module Network.MoHWS.HTTP.Request (
T(Cons), command, uri, httpVersion, headers, body,
toHTTPbis, fromHTTPbis,
Command, HTTP.RequestMethod(..),
Connection(..),
Expect(..),
pHeaders,
getHost,
getConnection,
) where
import Text.ParserCombinators.Parsec (Parser, skipMany1, many, noneOf, )
import Network.MoHWS.ParserUtility (pCRLF, pSP, pToken, parseList, )
import qualified Network.MoHWS.HTTP.Header as Header
import qualified Network.MoHWS.HTTP.Version as HTTPVersion
import Network.MoHWS.HTTP.Header (HasHeaders, )
import Network.MoHWS.Utility (readM, )
import qualified Network.HTTP.Base as HTTP
import qualified Network.HTTP.Headers
-- make getHeaders visible for instance declaration
import Network.Socket (HostName, )
import Network.URI (URI, nullURI, uriPath, uriQuery, )
import qualified Data.Map as Map
import Data.Monoid (Monoid, mempty, )
import Data.Char (toLower, )
-----------------------------------------------------------------------------
-- Requests
-- Request-Line = Method SP Request-URI SP HTTP-Version CRLF
type Command = HTTP.RequestMethod
data T body =
Cons {
command :: Command,
uri :: URI,
httpVersion :: HTTPVersion.T,
headers :: Header.Group,
body :: body
}
toHTTPbis :: T body -> HTTP.Request body
toHTTPbis req =
HTTP.Request {
HTTP.rqURI = uri req,
HTTP.rqMethod = command req,
HTTP.rqHeaders = Header.ungroup $ headers req,
HTTP.rqBody = body req
}
fromHTTPbis :: HTTP.Request body -> T body
fromHTTPbis req =
Cons {
command = HTTP.rqMethod req,
uri = HTTP.rqURI req,
httpVersion = HTTPVersion.http1_1,
headers = Header.group $ HTTP.rqHeaders req,
body = HTTP.rqBody req
}
instance Show (T body) where
showsPrec _ Cons{command = cmd, uri = loc, httpVersion = ver} =
shows cmd . (' ':) . shows loc . (' ':) . shows ver
instance HasHeaders (T body) where
getHeaders = Header.ungroup . headers
setHeaders req hs = req { headers = Header.group hs}
instance Functor T where
fmap f req =
Cons {
command = command req,
uri = uri req,
httpVersion = httpVersion req,
headers = headers req,
body = f $ body req
}
-- Request parsing
-- Parse the request line and the headers, but not the body.
pHeaders :: Monoid body => Parser (T body)
pHeaders =
do (cmd,loc,ver) <- pCommandLine
hdrs <- Header.pGroup
_ <- pCRLF
return $ Cons cmd loc ver hdrs mempty
pCommandLine :: Parser (Command, URI, HTTPVersion.T)
pCommandLine =
do cmd <- pCommand
skipMany1 pSP
loc <- pURI
skipMany1 pSP
ver <- HTTPVersion.pInRequest
_ <- pCRLF
return (cmd,loc,ver)
commandDictionary :: Map.Map String Command
commandDictionary =
Map.fromList $
("HEAD", HTTP.HEAD) :
("PUT", HTTP.PUT) :
("GET", HTTP.GET) :
("POST", HTTP.POST) :
("DELETE", HTTP.DELETE) :
("OPTIONS", HTTP.OPTIONS) :
("TRACE", HTTP.TRACE) :
-- ("CONNECT", HTTP.CONNECT) :
[]
pCommand :: Parser Command
pCommand =
fmap (\tok -> Map.findWithDefault (HTTP.Custom tok) tok commandDictionary) $
pToken
pURI :: Parser URI
pURI =
do u <- many (noneOf [' '])
-- FIXME: this does not handle authority Request-URIs
-- maybe (fail "Bad Request-URI") return $ parseURIReference u
return $ laxParseURIReference u
-- also accepts characters [ ] " in queries, which is sometimes quite handy
laxParseURIReference :: String -> URI
laxParseURIReference u =
let (p,q) = break ('?'==) u
in nullURI{uriPath=p, uriQuery=q}
-----------------------------------------------------------------------------
-- Getting specific request headers
data Connection =
ConnectionClose
| ConnectionKeepAlive -- non-std? Netscape generates it.
| ConnectionOther String
deriving (Eq, Show)
parseConnection :: String -> [Connection]
parseConnection =
let fn "close" = ConnectionClose
fn "keep-alive" = ConnectionKeepAlive
fn other = ConnectionOther other
in map (fn . map toLower) . parseList
getConnection :: HasHeaders a => a -> [Connection]
getConnection =
concatMap parseConnection . Header.lookupMany Header.HdrConnection
data Expect = ExpectContinue
deriving Show
-- parseExpect :: String -> Maybe Expect
-- parseExpect s =
-- case parseList s of
-- ["100-continue"] -> Just ExpectContinue
-- _ -> Nothing
getHost :: HasHeaders a => a -> Maybe (HostName, Maybe Int)
getHost x = Header.lookup Header.HdrHost x >>= parseHost
parseHost :: String -> Maybe (HostName, Maybe Int)
parseHost s =
let (host,prt) = break (==':') s
in case prt of
"" -> Just (host, Nothing)
':':port -> readM port >>= \p -> Just (host, Just p)
_ -> Nothing
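-- Illustrative behaviour (assuming 'readM' reads the port as an 'Int'):
--
-- > parseHost "example.com:8080" == Just ("example.com", Just 8080)
-- > parseHost "example.com" == Just ("example.com", Nothing)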
| xpika/mohws | src/Network/MoHWS/HTTP/Request.hs | bsd-3-clause | 6,818 | 0 | 14 | 1,555 | 1,419 | 809 | 610 | 124 | 3 |
--
--
--
-----------------
-- Exercise 3.15.
-----------------
--
--
--
module E'3'15 where
import E'3'13 ( max, maxThree )
import E'3'14 ( min, minThree )
import Prelude hiding ( min , max )
import qualified Prelude ( min , max )
import Test.QuickCheck ( quickCheck , verboseCheck )
prop_maxThree :: Integer -> Integer -> Integer -> Bool
prop_maxThree a b c
= ( maxThree a a a ) == a
&& ( maxThree ( a + 1 ) a a ) == a + 1
&& ( maxThree a ( a + 1 ) a ) == a + 1
&& ( maxThree a a ( a + 1 ) ) == a + 1
-- GHCi> quickCheck prop_maxThree
-- Note: If this test fails on your implementation use "verboseCheck"
-- to get the test data that caused it.
prop_min :: Int -> Int -> Bool
prop_min a b
= ( min a b ) == ( Prelude.min a b )
-- GHCi> quickCheck prop_min
prop_minThree :: Int -> Int -> Int -> Bool
prop_minThree a b c
= ( minThree a a a ) == a
&& ( minThree ( a + 1 ) a a ) == a + 1
&& ( minThree a ( a + 1 ) a ) == a + 1
&& ( minThree a a ( a + 1 ) ) == a + 1
-- GHCi> quickCheck prop_minThree
-- Note: If this test fails on your implementation use "verboseCheck"
-- to get the test data that caused it.
prop_maxMinThree :: Integer -> Integer -> Integer -> Bool
prop_maxMinThree a b c
= maxThree a b c >= toInteger (
minThree ( fromInteger a )
( fromInteger b )
( fromInteger c )
)
-- GHCi> quickCheck prop_maxMinThree
{- GHCi>
quickCheck prop_maxThree
quickCheck prop_min
quickCheck prop_minThree
quickCheck prop_maxMinThree
-}
| pascal-knodel/haskell-craft | _/links/E'3'15.hs | mit | 1,828 | 0 | 17 | 704 | 457 | 251 | 206 | 27 | 1 |
{-# LANGUAGE CPP #-}
-----------------------------------------------------------------------------
-- |
-- Module : Control.Monad.Trans.Error
-- Copyright : (c) Michael Weber <[email protected]> 2001,
-- (c) Jeff Newbern 2003-2006,
-- (c) Andriy Palamarchuk 2006
-- License : BSD-style (see the file LICENSE)
--
-- Maintainer : [email protected]
-- Stability : experimental
-- Portability : portable
--
-- This monad transformer adds the ability to fail or throw exceptions
-- to a monad.
--
-- A sequence of actions succeeds, producing a value, only if all the
-- actions in the sequence are successful. If one fails with an error,
-- the rest of the sequence is skipped and the composite action fails
-- with that error.
--
-- If the value of the error is not required, the variant in
-- "Control.Monad.Trans.Maybe" may be used instead.
-----------------------------------------------------------------------------
module Control.Monad.Trans.Error (
-- * The ErrorT monad transformer
Error(..),
ErrorList(..),
ErrorT(..),
mapErrorT,
-- * Error operations
throwError,
catchError,
-- * Lifting other operations
liftCallCC,
liftListen,
liftPass,
-- * Examples
-- $examples
) where
import Control.Monad.IO.Class
import Control.Monad.Trans.Class
import Control.Applicative
import Control.Exception (IOException)
import Control.Monad
import Control.Monad.Fix
import Control.Monad.Instances ()
import Data.Foldable (Foldable(foldMap))
import Data.Monoid (mempty)
import Data.Traversable (Traversable(traverse))
import System.IO.Error
instance MonadPlus IO where
mzero = ioError (userError "mzero")
m `mplus` n = m `catchIOError` \_ -> n
#if !(MIN_VERSION_base(4,4,0))
-- exported by System.IO.Error from base-4.4
catchIOError :: IO a -> (IOError -> IO a) -> IO a
catchIOError = catch
#endif
-- | An exception to be thrown.
--
-- Minimal complete definition: 'noMsg' or 'strMsg'.
class Error a where
-- | Creates an exception without a message.
-- The default implementation is @'strMsg' \"\"@.
noMsg :: a
-- | Creates an exception with a message.
-- The default implementation of @'strMsg' s@ is 'noMsg'.
strMsg :: String -> a
noMsg = strMsg ""
strMsg _ = noMsg
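-- A possible user-defined instance (illustrative, not part of this module):
--
-- > data ParseError = ParseError String deriving Show
-- >
-- > instance Error ParseError where
-- >     strMsg = ParseError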
instance Error IOException where
strMsg = userError
-- | A string can be thrown as an error.
instance ErrorList a => Error [a] where
strMsg = listMsg
-- | Workaround so that we can have a Haskell 98 instance @'Error' 'String'@.
class ErrorList a where
listMsg :: String -> [a]
instance ErrorList Char where
listMsg = id
-- ---------------------------------------------------------------------------
-- Our parameterizable error monad
#if !(MIN_VERSION_base(4,3,0))
-- These instances are in base-4.3
instance Applicative (Either e) where
pure = Right
Left e <*> _ = Left e
Right f <*> r = fmap f r
instance Monad (Either e) where
return = Right
Left l >>= _ = Left l
Right r >>= k = k r
instance MonadFix (Either e) where
mfix f = let
a = f $ case a of
Right r -> r
_ -> error "empty mfix argument"
in a
#endif /* base to 4.2.0.x */
instance (Error e) => Alternative (Either e) where
empty = Left noMsg
Left _ <|> n = n
m <|> _ = m
instance (Error e) => MonadPlus (Either e) where
mzero = Left noMsg
Left _ `mplus` n = n
m `mplus` _ = m
-- | The error monad transformer. It can be used to add error handling
-- to other monads.
--
-- The @ErrorT@ Monad structure is parameterized over two things:
--
-- * e - The error type.
--
-- * m - The inner monad.
--
-- The 'return' function yields a successful computation, while @>>=@
-- sequences two subcomputations, failing on the first error.
newtype ErrorT e m a = ErrorT { runErrorT :: m (Either e a) }
-- | Map the unwrapped computation using the given function.
--
-- * @'runErrorT' ('mapErrorT' f m) = f ('runErrorT' m)@
mapErrorT :: (m (Either e a) -> n (Either e' b))
-> ErrorT e m a
-> ErrorT e' n b
mapErrorT f m = ErrorT $ f (runErrorT m)
instance (Functor m) => Functor (ErrorT e m) where
fmap f = ErrorT . fmap (fmap f) . runErrorT
instance (Foldable f) => Foldable (ErrorT e f) where
foldMap f (ErrorT a) = foldMap (either (const mempty) f) a
instance (Traversable f) => Traversable (ErrorT e f) where
traverse f (ErrorT a) =
ErrorT <$> traverse (either (pure . Left) (fmap Right . f)) a
instance (Functor m, Monad m) => Applicative (ErrorT e m) where
pure a = ErrorT $ return (Right a)
f <*> v = ErrorT $ do
mf <- runErrorT f
case mf of
Left e -> return (Left e)
Right k -> do
mv <- runErrorT v
case mv of
Left e -> return (Left e)
Right x -> return (Right (k x))
instance (Functor m, Monad m, Error e) => Alternative (ErrorT e m) where
empty = mzero
(<|>) = mplus
instance (Monad m, Error e) => Monad (ErrorT e m) where
return a = ErrorT $ return (Right a)
m >>= k = ErrorT $ do
a <- runErrorT m
case a of
Left l -> return (Left l)
Right r -> runErrorT (k r)
fail msg = ErrorT $ return (Left (strMsg msg))
instance (Monad m, Error e) => MonadPlus (ErrorT e m) where
mzero = ErrorT $ return (Left noMsg)
m `mplus` n = ErrorT $ do
a <- runErrorT m
case a of
Left _ -> runErrorT n
Right r -> return (Right r)
instance (MonadFix m, Error e) => MonadFix (ErrorT e m) where
mfix f = ErrorT $ mfix $ \a -> runErrorT $ f $ case a of
Right r -> r
_ -> error "empty mfix argument"
instance (Error e) => MonadTrans (ErrorT e) where
lift m = ErrorT $ do
a <- m
return (Right a)
instance (Error e, MonadIO m) => MonadIO (ErrorT e m) where
liftIO = lift . liftIO
-- | Signal an error value @e@.
--
-- * @'runErrorT' ('throwError' e) = 'return' ('Left' e)@
--
-- * @'throwError' e >>= m = 'throwError' e@
throwError :: (Monad m, Error e) => e -> ErrorT e m a
throwError l = ErrorT $ return (Left l)
-- | Handle an error.
--
-- * @'catchError' h ('lift' m) = 'lift' m@
--
-- * @'catchError' h ('throwError' e) = h e@
catchError :: (Monad m, Error e) =>
ErrorT e m a -- ^ the inner computation
-> (e -> ErrorT e m a) -- ^ a handler for errors in the inner
-- computation
-> ErrorT e m a
m `catchError` h = ErrorT $ do
a <- runErrorT m
case a of
Left l -> runErrorT (h l)
Right r -> return (Right r)
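-- A small usage sketch (illustrative): with the @'Error' 'String'@ instance
-- provided via 'ErrorList',
--
-- > runErrorT (throwError "boom" `catchError` (\e -> return (length e)))
--
-- evaluates to @return (Right 4)@ in the underlying monad.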
-- | Lift a @callCC@ operation to the new monad.
liftCallCC :: (((Either e a -> m (Either e b)) -> m (Either e a)) ->
m (Either e a)) -> ((a -> ErrorT e m b) -> ErrorT e m a) -> ErrorT e m a
liftCallCC callCC f = ErrorT $
callCC $ \c ->
runErrorT (f (\a -> ErrorT $ c (Right a)))
-- | Lift a @listen@ operation to the new monad.
liftListen :: Monad m =>
(m (Either e a) -> m (Either e a,w)) -> ErrorT e m a -> ErrorT e m (a,w)
liftListen listen = mapErrorT $ \ m -> do
(a, w) <- listen m
return $! fmap (\ r -> (r, w)) a
-- | Lift a @pass@ operation to the new monad.
liftPass :: Monad m => (m (Either e a,w -> w) -> m (Either e a)) ->
ErrorT e m (a,w -> w) -> ErrorT e m a
liftPass pass = mapErrorT $ \ m -> pass $ do
a <- m
return $! case a of
Left l -> (Left l, id)
Right (r, f) -> (Right r, f)
{- $examples
Wrapping an IO action that can throw an error @e@:
> type ErrorWithIO e a = ErrorT e IO a
> ==> ErrorT (IO (Either e a))
An IO monad wrapped in @StateT@ inside of @ErrorT@:
> type ErrorAndStateWithIO e s a = ErrorT e (StateT s IO) a
> ==> ErrorT (StateT s IO (Either e a))
> ==> ErrorT (StateT (s -> IO (Either e a,s)))
-}
| jwiegley/ghc-release | libraries/transformers/Control/Monad/Trans/Error.hs | gpl-3.0 | 7,992 | 0 | 21 | 2,189 | 2,268 | 1,184 | 1,084 | 141 | 2 |
module DevelMain (update) where
import Rapid
import Control.Wire.Controller (control)
import Holotype (holotype)
update :: IO ()
update =
rapid 0 $ \r ->
restart r "holotype" $
(control holotype)
| deepfire/mood | proto/DevelMain.hs | agpl-3.0 | 235 | 0 | 9 | 68 | 73 | 41 | 32 | 9 | 1 |
{-# LANGUAGE OverloadedStrings #-}
import Util
import Test.Hspec
import Test.Hspec.HUnit
import Test.HUnit
import Model
main = hspec $ describe "validateContent"
[ it "adds IDs to DITA content" $
"<p id=\"x2\">foo</p><p id=\"x1\">bar</p>" @=? validateContent TFDitaConcept "<p>foo</p><p id=\"x1\">bar</p>"
, it "strips duplicate ids" $
"<p id=\"y\">foo</p><p id=\"x1\">bar</p>" @=? validateContent TFDitaConcept "<p id='y'>foo</p><p id=\"y\">bar</p>"
]
| snoyberg/yesodwiki | runtests.hs | bsd-2-clause | 482 | 0 | 10 | 81 | 76 | 40 | 36 | 11 | 1 |
{-# LANGUAGE OverloadedStrings #-}
module HollaBack.Date.Conversion (decideTimestamp,
timestamp,
dowDiff) where
import Control.Applicative ((<$>),
(<*>),
pure)
import Data.Time.Calendar (Day(..),
addDays,
addGregorianYearsRollOver,
addGregorianMonthsRollOver,
toGregorian,
fromGregorian)
import Data.Time.Calendar.WeekDate (toWeekDate)
import Data.Time.Clock (UTCTime(..),
DiffTime,
getCurrentTime)
import Data.Time.Clock.POSIX (utcTimeToPOSIXSeconds)
import Data.Time.LocalTime (timeOfDayToTime,
midnight)
import HollaBack.Date.Types
decideTime :: DateTimeSpec -> IO UTCTime
decideTime (RelativeDateTime tu) = offsetTime <$> tu' <*> getCurrentTime
where tu' = pure tu
decideTime (SpecificDateTime date tod) = UTCTime <$> day <*> diffTime
where diffTime = pure $ timeOfDayToTime tod
day = dateToDay date
decideTime (SpecificWeekdayTime dow tod) = UTCTime <$> day <*> diffTime
where day = dowToDay dow
diffTime = pure $ timeOfDayToTime tod
decideTime (SpecificWeekday dow) = UTCTime <$> day <*> diffTime
where day = dowToDay dow
diffTime = pure startOfDay
decideTime (SpecificTime tod) = UTCTime <$> today <*> diffTime
where diffTime = pure $ timeOfDayToTime tod
timestamp :: UTCTime -> Integer
timestamp = floor . utcTimeToPOSIXSeconds
decideTimestamp :: Int -> DateTimeSpec -> IO Integer
decideTimestamp _ dts@(RelativeDateTime _) = timestamp <$> decideTime dts
decideTimestamp offset dts = offsetTimestamp offset . timestamp <$> decideTime dts
-- We negate the sender's offset seconds to compensate for it. If they are in
-- PST (-8:00) and they ask for 6AM, we store it as 2PM UTC.
offsetTimestamp :: Int -> Integer -> Integer
offsetTimestamp secs = (+ compensatedOffset)
where compensatedOffset = negate . fromIntegral $ secs
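-- For example (illustrative): a sender at UTC-8 reports an offset of -28800
-- seconds, so @offsetTimestamp (-28800) t == t + 28800@; 6AM PST is stored
-- as 2PM UTC.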
dowDiff :: DayOfWeek -> DayOfWeek -> Int
dowDiff start finish
| start > finish = dowNum finish + 7 - dowNum start
| otherwise = dowNum finish - dowNum start
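-- Illustrative only, assuming the 'DayOfWeek' constructors are the weekday
-- names and 'dowNum' maps Monday..Sunday to 1..7 (both defined in
-- HollaBack.Date.Types): @dowDiff Friday Monday == 1 + 7 - 5 == 3@, the
-- number of days until the next Monday.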
addDow :: Day -> DayOfWeek -> Day
addDow start finish = addDays diff start
where diff = toInteger $ adjustedFinish - startDowNum
adjustedFinish
| startDowNum > finishDowNum = finishDowNum + 7
| otherwise = finishDowNum
startDowNum = dayToDowNum start
finishDowNum = dowNum finish
---- Helpers
today :: IO Day
today = utctDay <$> getCurrentTime
thisYear :: IO Integer
thisYear = getYear . toGregorian . utctDay <$> getCurrentTime
where getYear (year, _, _) = year
dowToDay :: DayOfWeek -> IO Day
dowToDay dowFinish = addDow <$> today <*> finish
where finish = pure dowFinish
dayToDowNum :: Day -> Int
dayToDowNum day = fromIntegral dn
where (_, _, dn) = toWeekDate day
startOfDay :: DiffTime
startOfDay = timeOfDayToTime midnight
dateToDay :: Date -> IO Day
dateToDay (Date month dom) = fromGregorian <$> year <*> month' <*> dom'
where year = thisYear
month' = pure $ monthNum month
dom' = pure dom
offsetTime :: TimeUnit -> UTCTime -> UTCTime
offsetTime (TimeUnit ms Minutes)
utct@UTCTime { utctDayTime = dt } = utct { utctDayTime = newTime }
where newTime = dt + seconds
seconds = fromInteger $ ms * 60
offsetTime (TimeUnit hs Hours)
utct@UTCTime { utctDayTime = dt } = utct { utctDayTime = newTime }
where newTime = dt + seconds
seconds = fromInteger $ hs * 60 * 60
offsetTime (TimeUnit ds Days)
utct@UTCTime { utctDay = day } = utct { utctDay = newDay }
where newDay = addDays ds day
offsetTime (TimeUnit ws Weeks)
utct@UTCTime { utctDay = day } = utct { utctDay = newDay }
where newDay = addDays days day
days = ws * 7
offsetTime (TimeUnit ms Months)
utct@UTCTime { utctDay = day } = utct { utctDay = newDay }
where newDay = addGregorianMonthsRollOver ms day
offsetTime (TimeUnit ys Years)
utct@UTCTime { utctDay = day } = utct { utctDay = newDay }
where newDay = addGregorianYearsRollOver ys day
| bitemyapp/HollaBack | HollaBack/Date/Conversion.hs | bsd-2-clause | 4,367 | 0 | 10 | 1,251 | 1,195 | 640 | 555 | 95 | 1 |
{-# LANGUAGE OverloadedStrings #-}
-- | Processor/memory topology
module Haskus.System.Linux.Topology
( CPUMap(..)
, parseMemInfo
, readMemInfo
, parseCPUMap
, readCPUMap
, member
, Node(..)
, NUMA(..)
, loadNUMA
, nodeMemoryStatus
, nodeCPUs
)
where
import Text.Megaparsec
import Text.Megaparsec.Char
import Text.Megaparsec.Char.Lexer hiding (space)
import System.Directory
import Data.Map (Map)
import qualified Data.Map as Map
import qualified Data.Vector as V
import Data.Void
import Haskus.Format.Binary.Buffer (bufferReadFile)
import Haskus.Format.Binary.Word
import Haskus.Format.Binary.Bits
import qualified Haskus.Format.Text as Text
import Haskus.Format.Text (Text)
import Haskus.Utils.List (isPrefixOf,stripPrefix)
import Haskus.Utils.Maybe (fromJust,mapMaybe)
import Haskus.Utils.Flow
type Parser = Parsec Void Text
-- | A CPUMap is a set of CPU identifiers
--
-- TODO: replace Vector of Word32 with a variable length bitset
data CPUMap = CPUMap (V.Vector Word32) deriving (Show)
-- | Read meminfo files
readMemInfo :: FilePath -> IO (Map Text Word64)
readMemInfo p = do
buf <- bufferReadFile p
case parse parseMemInfo p (Text.bufferDecodeUtf8 buf) of
Right v -> return v
Left err -> error ("meminfo parsing error: " ++ show err)
-- | Parse meminfo files
parseMemInfo :: Parser (Map Text Word64)
parseMemInfo = parseFile
where
parseFile = do
es <- manyTill parseLine eof
return (Map.fromList es)
parseLine :: Parser (Text, Word64)
parseLine = do
void (string "Node ")
void (decimal :: Parser Int)
void (char ' ')
lbl <- someTill anySingle (char ':')
space
value <- decimal
kb <- (string " kB" *> return (*1024)) <|> return id
void eol
return (Text.pack lbl, kb value)
-- | Read cpumap files
readCPUMap :: FilePath -> IO CPUMap
readCPUMap p = do
buf <- bufferReadFile p
case parse parseCPUMap p (Text.bufferDecodeUtf8 buf) of
Right v -> return v
Left err -> error ("cpumap parsing error: " ++ show err)
-- | Parse CPU map files
parseCPUMap :: Parser CPUMap
parseCPUMap = parseFile
where
parseFile = do
es <- hexadecimal `sepBy1` char ','
void eol
void eof
return $ CPUMap . V.fromList . reverse . dropWhile (==0) $ es
-- | Check that a CPU belongs to a CPU Map
member :: Word -> CPUMap -> Bool
member idx (CPUMap v) = q < fromIntegral (V.length v) && testBit (v V.! fromIntegral q) r
where
(q,r) = idx `quotRem` 32
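-- For example, a sysfs mask "00000000,00000003" parses (via 'parseCPUMap')
-- into a 'CPUMap' whose only stored word is 3, so CPUs 0 and 1 are members
-- while CPU 2 is not.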
-- | Transform a CPUMap into a list of identifiers
fromCPUMap :: CPUMap -> [Word]
fromCPUMap (CPUMap v) = go 0 (V.toList v)
where
go _ [] = []
go n (x:xs) = mapMaybe (f x n) [0..31] ++ go (n+1) xs
f x n idx = if testBit x idx then Just (n * 32 + fromIntegral idx) else Nothing
-- | A set of NUMA nodes
data NUMA = NUMA
{ numaNodes :: [Node]
} deriving (Show)
-- | A NUMA node
data Node = Node
{ nodeId :: Word
, nodeCPUMap :: CPUMap
, nodeMemory :: NodeMemory
} deriving (Show)
-- | A memory node
newtype NodeMemory = NodeMemory FilePath deriving (Show)
-- | Load the NUMA topology from sysfs (Linux)
loadNUMA :: FilePath -> IO NUMA
loadNUMA sysfsPath = do
let nodePath = sysfsPath ++ "/devices/system/node/"
nDirs <- filter ("node" `isPrefixOf`) <$> getDirectoryContents nodePath
ndes <- forM nDirs $ \nDir -> do
let nid = read (fromJust $ stripPrefix "node" nDir)
cpus <- readCPUMap (nodePath ++ nDir ++ "/cpumap")
return $ Node nid cpus (NodeMemory $ nodePath ++ nDir ++ "/meminfo")
return $ NUMA ndes
-- | Return (total,free) memory for the given node
nodeMemoryStatus :: NodeMemory -> IO (Word64,Word64)
nodeMemoryStatus (NodeMemory path) = do
infos <- readMemInfo path
let
lookupEntry e = case Map.lookup (Text.pack e) infos of
Just x -> x
Nothing -> error $ "Cannot find \"" ++ e ++ "\" entry in file: " ++ show path
return (lookupEntry "MemTotal", lookupEntry "MemFree")
-- | Return a list of CPU numbers from a map in a node
nodeCPUs :: Node -> [Word]
nodeCPUs = fromCPUMap . nodeCPUMap
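-- A minimal usage sketch (illustrative; assumes a standard sysfs mounted at
-- "/sys"):
--
-- > do numa <- loadNUMA "/sys"
-- >    mapM_ (print . nodeCPUs) (numaNodes numa)
-- >    mapM_ (\n -> nodeMemoryStatus (nodeMemory n) >>= print) (numaNodes numa)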
| hsyl20/ViperVM | haskus-system/src/lib/Haskus/System/Linux/Topology.hs | bsd-3-clause | 4,197 | 0 | 18 | 1,016 | 1,316 | 692 | 624 | 102 | 3 |
{-# Language PatternGuards #-}
module Blub
( blub
, foo
, bar
) where
import Control.Monad
f :: Int -> Int
f = (+ 3)
| jystic/hsimport | tests/goldenFiles/ModuleTest16.hs | bsd-3-clause | 129 | 0 | 5 | 37 | 39 | 25 | 14 | 8 | 1 |
-- | Functions and utilities to detect the importent modules, classes
-- and types of the plugin.
module Control.Super.Plugin.Detect
( -- * Searching for Modules
ModuleQuery(..)
, findModuleByQuery
, findModule
, defaultFindEitherModuleErrMsg
-- * Searching for Classes
, ClassQuery(..)
, isOptionalClassQuery
, queriedClasses, moduleQueryOf
, findClassesByQuery
, findClass
, isClass
-- * Searching for Instances
--, findInstancesInScope
--, findClassAndInstancesInScope
, findClassesAndInstancesInScope
, findMonoTopTyConInstances
-- * Instance implications
, InstanceImplication
, (===>), (<==>)
, clsDictInstImp, clsDictInstEquiv
, checkInstanceImplications
-- * Validation
, checkInstances
) where
import Data.List ( find )
import Data.Either ( isLeft, isRight )
import Data.Maybe ( isNothing, maybeToList, catMaybes, fromMaybe )
import Control.Monad ( forM )
import BasicTypes ( Arity )
import TcRnTypes
( TcGblEnv(..)
, ImportAvails( imp_mods ) )
import TyCon ( TyCon )
import TcPluginM
( TcPluginM
, getEnvs, getInstEnvs )
import Name
( nameModule
, getOccName )
import OccName
( occNameString )
--import RdrName
-- ( GlobalRdrElt(..)
-- , Parent( NoParent ) )
import Module
( Module, ModuleName
, moduleName
, moduleEnvKeys
, mkModuleName )
import Class
( Class(..)
, className, classArity )
import InstEnv
( ClsInst(..)
, instEnvElts
, ie_global
, classInstances )
import PrelNames ( mAIN_NAME )
import Outputable ( SDoc, ($$), text, vcat, ppr, hang )
import qualified Outputable as O
import Control.Super.Plugin.Collection.Set ( Set )
import qualified Control.Super.Plugin.Collection.Set as S
import qualified Control.Super.Plugin.Collection.Map as M
--import Control.Super.Plugin.Log ( printObj, printObjTrace, printMsg )
import Control.Super.Plugin.Wrapper
( UnitId, moduleUnitId )
import Control.Super.Plugin.Instance
( instanceTopTyCons
, isMonoTyConInstance
, isPolyTyConInstance )
import Control.Super.Plugin.Utils
( errIndent
, removeDupByIndexEq
, fromRight, fromLeft
, getClassName, getTyConName )
import Control.Super.Plugin.ClassDict
( ClassDict, Optional
, allClsDictEntries
, lookupClsDictClass )
import Control.Super.Plugin.InstanceDict
( InstanceDict
, insertInstDict, emptyInstDict
, allInstDictTyCons
, lookupInstDictByTyCon )
import Control.Super.Plugin.Names
-- -----------------------------------------------------------------------------
-- Validation
-- -----------------------------------------------------------------------------
-- | Check if there are any supermonad instances that clearly
-- do not belong to a specific supermonad.
checkInstances
:: ClassDict
-- ^ The class dictionary to lookup instances of specific classes.
-> InstanceDict
-- ^ The instance dictionary to check for validity.
   -- This should be the instance dictionary calculated by 'findMonoTopTyConInstances'
   -- from the given class dict. If not, the validity of the checks cannot be
   -- guaranteed.
-> [InstanceImplication]
-- ^ The instance implications to check on the given instance dictionary.
-> [(Either (TyCon, Class) ClsInst, SDoc)]
checkInstances clsDict instDict instImplications =
monoCheckErrMsgs ++ polyCheckErrMsgs
where
-- Check if all instances for each supermonad type constructor exist.
monoCheckErrMsgs :: [(Either (TyCon, Class) ClsInst, SDoc)]
monoCheckErrMsgs = fmap (\(tc, msg) -> (Left tc, msg))
$ removeDupByIndexEq
$ checkInstanceImplications instDict
$ instImplications
    -- Check if there are any instances that involve different type constructors...
polyCheckErrMsgs :: [(Either (TyCon, Class) ClsInst, SDoc)]
polyCheckErrMsgs = do
(_opt, mClsInsts) <- allClsDictEntries clsDict
case mClsInsts of
Just (cls, insts) -> do
        polyInst <- filter (isPolyTyConInstance cls) insts
        return (Right polyInst, text "Instance involves more than one top-level type constructor: " $$ ppr polyInst)
Nothing -> []
-- -----------------------------------------------------------------------------
-- Searching for Modules
-- -----------------------------------------------------------------------------
-- | Formulates queries to find modules.
data ModuleQuery
= ThisModule PluginModuleName (Maybe UnitId)
-- ^ Search for a specific module using its name and optionally its
-- unit ID (module ID).
| EitherModule [ModuleQuery] (Maybe ([Either SDoc Module] -> SDoc))
-- ^ Find either of the modules described by the given subqueries.
-- If only one of the queries delivers a result, it will be used
-- otherwise an error will be returned. The error message is customizable
-- using the optional function.
| AnyModule [ModuleQuery]
-- ^ Find any of the modules described by the given subqueries.
-- The first one found (in order of the queries) will be delivered as result,
-- the rest will be ignored. If no module could be found an error message
-- will be returned.
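-- For example (module names are purely illustrative), a query that expects a
-- class to live in exactly one of two alternative modules could be written as:
--
-- > EitherModule [ ThisModule "Control.Super.Monad" Nothing
-- >              , ThisModule "Control.Super.Monad.Prelude" Nothing
-- >              ] Nothing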
instance O.Outputable ModuleQuery where
ppr (ThisModule mdlName mUnitId) = O.text mdlName O.<> (maybe (O.text "") O.ppr $ mUnitId)
ppr (EitherModule mdlQueries _errF) = O.text "XOR " O.<> (O.brackets $ O.hcat $ O.punctuate (O.text ", ") $ fmap O.ppr mdlQueries)
ppr (AnyModule mdlQueries) = O.text "OR " O.<> (O.brackets $ O.hcat $ O.punctuate (O.text ", ") $ fmap O.ppr mdlQueries)
collectModuleNames :: ModuleQuery -> Set ModuleName
collectModuleNames (ThisModule name _) = S.singleton $ mkModuleName name
collectModuleNames (EitherModule qs _) = S.unions $ fmap collectModuleNames qs
collectModuleNames (AnyModule qs) = S.unions $ fmap collectModuleNames qs
isModuleInQuery :: ModuleQuery -> Module -> Bool
isModuleInQuery query mdl = S.member (moduleName mdl)
$ S.insert mAIN_NAME
$ collectModuleNames query
-- | Tries to find a module using the given module query.
findModuleByQuery :: ModuleQuery -> TcPluginM (Either SDoc Module)
findModuleByQuery (ThisModule mdlName mdlUnit) = findModule mdlUnit mdlName
findModuleByQuery (EitherModule queries mErrFun) = do
queryResults <- forM queries findModuleByQuery
return $ findEitherModule mErrFun queryResults
findModuleByQuery (AnyModule queries) = do
queryResults <- forM queries findModuleByQuery
return $ findAnyModule Nothing queryResults
-- | Checks if the module with the given name is imported and,
-- if so, returns that module.
findModule :: Maybe UnitId -> String -> TcPluginM (Either SDoc Module)
findModule pkgKeyToFind mdlNameToFind = do
(gblEnv, _lclEnv) <- getEnvs
let mdls = moduleEnvKeys $ imp_mods $ tcg_imports $ gblEnv
case find (isModule . splitModule) mdls of
Just mdl -> return $ Right mdl
Nothing -> return $ Left $ text $ "Could not find module '" ++ mdlNameToFind ++ "'"
where
isModule :: (UnitId, ModuleName) -> Bool
isModule (pkgKey, mdlName)
= maybe True (pkgKey ==) pkgKeyToFind
&& mdlName == mkModuleName mdlNameToFind
splitModule :: Module -> (UnitId, ModuleName)
splitModule mdl = (moduleUnitId mdl, moduleName mdl)
-- | Makes sure that only one of the given modules was found and returns that
-- found module. If many or none of them were found an error is returned.
-- The returned error message can be customized using the optional
-- function.
findEitherModule :: Maybe ([Either SDoc Module] -> SDoc) -> [Either SDoc Module] -> (Either SDoc Module)
findEitherModule mErrFun eMdls =
case fmap fromRight $ filter isRight eMdls of
[] -> Left $ fromMaybe defaultFindEitherModuleErrMsg mErrFun $ eMdls
[mdl] -> Right mdl
_ -> Left $ fromMaybe defaultFindEitherModuleErrMsg mErrFun $ eMdls
-- | Makes sure that at least one of the given modules was found and, if so, returns
-- the first one found (in order of the list). If none of the modules
-- was found an error message will be returned.
-- The returned error message can be customized using the optional
-- function.
findAnyModule :: Maybe ([SDoc] -> SDoc) -> [Either SDoc Module] -> (Either SDoc Module)
findAnyModule mErrFun eMdls =
case fmap fromRight $ filter isRight eMdls of
[] -> Left $ fromMaybe defaultFindAnyModuleErrMsg mErrFun $ fmap fromLeft eMdls
(mdl : _) -> Right mdl
-- | Default error message function in case 'EitherModule' fails.
defaultFindEitherModuleErrMsg :: [Either SDoc Module] -> SDoc
defaultFindEitherModuleErrMsg mdls = case found of
[] -> hang (text "Failed to find either module!") errIndent $ vcat notFound
_ -> hang (text "Found several modules, unclear which one to use:") errIndent $ vcat $ fmap ppr found
where
found = fmap fromRight $ filter isRight mdls
notFound = fmap fromLeft $ filter isLeft mdls
-- | Default error message function in case 'AnyModule' fails.
defaultFindAnyModuleErrMsg :: [SDoc] -> SDoc
defaultFindAnyModuleErrMsg mdlErrs = hang (text "Could not find any of the modules!") errIndent $ vcat mdlErrs
-- -----------------------------------------------------------------------------
-- Searching for Classes
-- -----------------------------------------------------------------------------
-- | Find a collection of classes in the given module.
data ClassQuery = ClassQuery Optional ModuleQuery [(PluginClassName, Arity)]
instance O.Outputable ClassQuery where
ppr (ClassQuery opt mdlQuery clsNames)
= O.hang (O.text "In module:") errIndent (O.ppr mdlQuery)
O.<> O.hang (O.text $ (if opt then "optionally " else "") ++ "find classes:") errIndent (O.ppr clsNames)
-- | Check if the classes requested by the given query are optional.
isOptionalClassQuery :: ClassQuery -> Bool
isOptionalClassQuery (ClassQuery opt _mdlQ _clss) = opt
-- | Get the names of the classes that are queried for by the given query.
queriedClasses :: ClassQuery -> [PluginClassName]
queriedClasses (ClassQuery _opt _mdlQ clss) = fmap fst clss
-- | Get the module query this class query uses to lookup modules of classes.
moduleQueryOf :: ClassQuery -> ModuleQuery
moduleQueryOf (ClassQuery _opt mdlQ _clss) = mdlQ
-- | Search for a collection of classes using the given query.
-- If any one of the classes could not be found an error is returned.
findClassesByQuery :: ClassQuery -> TcPluginM (Either SDoc [(PluginClassName, Class)])
findClassesByQuery (ClassQuery opt mdlQuery toFindCls) = do
eClss <- forM toFindCls $ \(clsName, clsArity) -> do
eCls <- findClass (isClass (isModuleInQuery mdlQuery) clsName clsArity)
return (clsName, eCls, clsArity)
let notFound = filter (\(_, c, _) -> isNothing c) eClss
let errMsg :: (PluginClassName, Maybe Class, Arity) -> SDoc
errMsg (n, _, a) = text $ "Could not find class '" ++ n ++ "' with arity " ++ show a ++ "!"
return $ case notFound of
-- We found the classes.
[] -> Right $ fmap (\(n, Just c, _) -> (n, c)) $ eClss
-- If the classes are optional, we return an empty list of classes.
_ | opt -> Right []
-- The classes aren't optional therefore we return an error.
_ -> Left $ vcat $ fmap errMsg notFound
-- | Checks if a type class matching the shape of the given
-- predicate is in scope.
findClass :: (Class -> Bool) -> TcPluginM (Maybe Class)
findClass isCls' = do
let isCls = isCls' . is_cls
envs <- fst <$> getEnvs
-- This is needed while compiling the package itself...
let foundInstsLcl = (filter isCls . instEnvElts . tcg_inst_env $ envs)
++ (filter isCls . tcg_insts $ envs)
-- This is needed while compiling an external package depending on it...
foundInstsGbl <- filter isCls . instEnvElts . ie_global <$> getInstEnvs
return $ case foundInstsLcl ++ foundInstsGbl of
(inst : _) -> Just $ is_cls inst
[] -> Nothing
-- | Check if the given class has the given name, arity and if the classes
-- module fulfills the given predicate.
isClass :: (Module -> Bool) -> String -> Arity -> Class -> Bool
isClass isModule targetClassName targetArity cls =
let clsName = className cls
clsMdl = nameModule clsName
clsNameStr = occNameString $ getOccName clsName
clsArity = classArity cls
in isModule clsMdl
&& clsNameStr == targetClassName
&& clsArity == targetArity
-- -----------------------------------------------------------------------------
-- Searching for Instances
-- -----------------------------------------------------------------------------
-- | Use a class query to find classes and their instances.
findClassesAndInstancesInScope :: ClassQuery -> TcPluginM (Either SDoc [(PluginClassName, Class, [ClsInst])])
findClassesAndInstancesInScope clsQuery = do
eClss <- findClassesByQuery clsQuery
case eClss of
Left err -> return $ Left err
Right clss -> fmap Right $ forM clss $ \(n, c) -> do
insts <- findInstancesInScope c
return (n, c, insts)
-- | Returns a list of all instances for the given class that are currently in scope.
findInstancesInScope :: Class -> TcPluginM [ClsInst]
findInstancesInScope cls = do
instEnvs <- TcPluginM.getInstEnvs
return $ classInstances instEnvs cls
-- | Constructs the map between type constructors and their supermonad instances.
-- It essentially collects all of the instances that only use a single top-level
-- constructor and stores them in the instance dictionary. If there are several
-- instances for a single type constructor none is added to the dictionary.
-- This function only searches for the instances and constructs the lookup table.
findMonoTopTyConInstances
:: ClassDict
-- ^ The set of classes and instances to calculate the instance dictionary from.
-> InstanceDict
-- ^ Association between type constructors and their supermonad instances.
findMonoTopTyConInstances clsDict =
mconcat $ do
tc <- supermonadTyCons
(cls, insts) <- dictEntries
return $ findMonoClassInstance tc cls insts
where
dictEntries :: [(Class, [ClsInst])]
dictEntries = catMaybes $ fmap snd $ allClsDictEntries clsDict
-- Collect all type constructors that are used for supermonads
supermonadTyCons :: [TyCon]
supermonadTyCons = S.toList
$ S.unions
$ fmap instanceTopTyCons
$ concat $ fmap snd dictEntries
findMonoClassInstance :: TyCon -> Class -> [ClsInst] -> InstanceDict
findMonoClassInstance tc cls insts =
case filter (isMonoTyConInstance tc cls) insts of
[foundInst] -> insertInstDict tc cls foundInst $ emptyInstDict
_ -> emptyInstDict
-- -----------------------------------------------------------------------------
-- Instance implications
-- -----------------------------------------------------------------------------
-- | Representation of instance implying the existence of other instances that
-- are using the same type constructor.
data InstanceImplication = InstanceImplies Class Class
instance O.Outputable InstanceImplication where
ppr (InstanceImplies ca cb) = O.text (getClassName ca) O.<> O.text " ===> " O.<> O.text (getClassName cb)
infix 7 ===>
infix 7 <==>
-- | Instance of first class implies the existence of an instance from the second class.
(===>) :: Class -> Class -> [InstanceImplication]
(===>) ca cb = [InstanceImplies ca cb]
-- | Instances of either class imply the respective other instance.
(<==>) :: Class -> Class -> [InstanceImplication]
(<==>) ca cb = ca ===> cb ++ cb ===> ca
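-- For example (names are illustrative): given classes @bindCls@ and
-- @returnCls@, the implication @bindCls ===> returnCls@ demands that every
-- type constructor with a monomorphic @bindCls@ instance also has a
-- @returnCls@ instance, while @bindCls <==> returnCls@ demands both
-- directions.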
-- | Instance implication based on lookup of names in class dictionary. See '===>'.
clsDictInstImp :: ClassDict -> PluginClassName -> PluginClassName -> [InstanceImplication]
clsDictInstImp clsDict caName cbName = do
clsA <- maybeToList $ lookupClsDictClass caName clsDict
clsB <- maybeToList $ lookupClsDictClass cbName clsDict
clsA ===> clsB
-- | Instance equivalence based on lookup of names in class dictionary. See '<==>'.
clsDictInstEquiv :: ClassDict -> PluginClassName -> PluginClassName -> [InstanceImplication]
clsDictInstEquiv clsDict caName cbName = do
clsA <- maybeToList $ lookupClsDictClass caName clsDict
clsB <- maybeToList $ lookupClsDictClass cbName clsDict
clsA <==> clsB
-- | Check a given instance dictionary against a set of 'InstanceImplication's
-- and return error messages if there are violations.
checkInstanceImplications :: InstanceDict -> [InstanceImplication] -> [((TyCon,Class), SDoc)]
checkInstanceImplications _instDict [] = []
checkInstanceImplications instDict (imp : imps) = do
tc <- S.toList $ allInstDictTyCons instDict
let tcDict = lookupInstDictByTyCon tc instDict
case imp of
InstanceImplies ca cb -> case (M.member ca tcDict, M.member cb tcDict) of
(False, _ ) -> rest
(True , True ) -> rest
(True , False) ->
let errMsg = text $ "There is no unique instance of '" ++ getClassName cb ++ "' for the type '" ++ getTyConName tc ++ "'!"
in ((tc,cb), errMsg) : rest
where
rest = checkInstanceImplications instDict imps
| jbracker/supermonad-plugin | src/Control/Super/Plugin/Detect.hs | bsd-3-clause | 17,094 | 0 | 21 | 3,309 | 3,624 | 1,950 | 1,674 | 258 | 3 |
fix :: (a -> a) -> a
fix f = f (fix f)
fix2 :: (a -> b -> a) -> (b -> a -> b) -> a
fix2 f g = f (fix2 f g) (fix2 g f)
| YoshikuniJujo/funpaala | samples/others/recLocal.hs | bsd-3-clause | 119 | 0 | 9 | 40 | 100 | 51 | 49 | 4 | 1 |
{-# LANGUAGE PArr #-}
{-# OPTIONS -fvectorise #-}
module Sort ( medianIndex, collect, sort ) where
import Data.Array.Parallel.Prelude
import qualified Data.Array.Parallel.Prelude.Double as D
import Data.Array.Parallel.Prelude.Int as I
import qualified Prelude as P
kthSmallestIndex :: [:(Int,D.Double):] -> Int -> Int
kthSmallestIndex xs k
| k >= lengthP lts && k < n - lengthP gts = i
| otherwise = (kthSmallestIndex ys k')
where
n = lengthP xs
(i,x) = xs !: (n `div` 2)
lts = [:(j,y) | (j,y) <- xs, y D.< x:]
gts = [:(j,y) | (j,y) <- xs, y D.> x:]
(ys, k') | k < lengthP lts = (lts, k)
| otherwise = (gts, k - (n - lengthP gts))
medianIndex :: [:D.Double:] -> Int
medianIndex xs = (kthSmallestIndex (indexedP xs) (lengthP xs `div` 2))
collect :: [:(Int,Int):] -> [:(Int,[:Int:]):]
collect ps
| lengthP ps I.== 0 = [::]
| otherwise = (
let
(pivot,_) = ps !: (lengthP ps `I.div` 2)
ls = [:(i,j) | (i,j) <- ps, i I.< pivot:]
gs = [:(i,j) | (i,j) <- ps, i I.> pivot:]
js = [:j | (i,j) <- ps, i I.== pivot:]
ss = mapP collect [:ls,gs:]
in
(ss!:0) +:+ [:(pivot,sort js):] +:+ (ss!:1)
)
sort :: [:Int:] -> [:Int:]
sort xs | lengthP xs I.<= 1 = xs
sort xs = ((ss!:0) +:+ [:pivot:] +:+ (ss!:1))
where
pivot = xs !: (lengthP xs `I.div` 2)
ls = [:x | x <- xs, x < pivot:]
gs = [:x | x <- xs, x > pivot:]
ss = mapP sort [:ls,gs:]
| mainland/dph | icebox/examples/delaunay/Sort.hs | bsd-3-clause | 1,470 | 84 | 12 | 405 | 597 | 396 | 201 | -1 | -1 |
{- -*- Mode: haskell; -*-
Haskell LDAP Interface
This code is under a 3-clause BSD license; see COPYING for details.
-}
{- |
Module : LDAP
Copyright : Copyright (C) 2005-2007 John Goerzen
License : BSD
Maintainer : John Goerzen,
Maintainer : jgoerzen\@complete.org
Stability : provisional
Portability: portable
Top-level LDAP module.
Written by John Goerzen, jgoerzen\@complete.org
Welcome to the LDAP interface for Haskell. Please see one of the sections
below for more information.
This package comes from:
<http://software.complete.org/ldap-haskell>
-}
module LDAP (-- * Basic Types
module LDAP.Types,
-- * Initialization
module LDAP.Init,
-- * Searching
module LDAP.Search,
-- * Adding, Deleting, and Altering
module LDAP.Modify,
-- * Error Handling
module LDAP.Exceptions,
-- * Haskell enumerated LDAP types
module LDAP.Data,
-- * Other LDAP constants
module LDAP.Constants
)
where
import LDAP.Exceptions
import LDAP.Types
import LDAP.Init
import LDAP.Data
import LDAP.Constants
import LDAP.Search hiding (LDAPScope(..))
import LDAP.Modify hiding (LDAPModOp(..))
| jgoerzen/ldap-haskell | LDAP.hs | bsd-3-clause | 1,290 | 0 | 6 | 368 | 112 | 76 | 36 | 15 | 0 |
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE Rank2Types #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}
--------------------------------------------------------------------------------
-- |
-- Module : Data.Comp.Multi.Annotation
-- Copyright : (c) 2011 Patrick Bahr
-- License : BSD3
-- Maintainer : Patrick Bahr <[email protected]>
-- Stability : experimental
-- Portability : non-portable (GHC Extensions)
--
-- This module defines annotations on signatures. All definitions are
-- generalised versions of those in "Data.Comp.Annotation".
--
--------------------------------------------------------------------------------
module Data.Comp.Multi.Annotation
(
(:&:) (..),
DistAnn (..),
RemA (..),
liftA,
ann,
liftA',
stripA,
propAnn,
project'
) where
import Data.Comp.Multi.Algebra
import Data.Comp.Multi.HFunctor
import Data.Comp.Multi.Ops
import Data.Comp.Multi.Term
import qualified Data.Comp.Ops as O
-- | This function transforms a function with a domain constructed
-- from a functor to a function with a domain constructed with the
-- same functor but with an additional annotation.
liftA :: (RemA s s') => (s' a :-> t) -> s a :-> t
liftA f v = f (remA v)
-- | This function annotates each sub term of the given term with the
-- given value (of type a).
ann :: (DistAnn f p g, HFunctor f) => p -> CxtFun f g
ann c = appSigFun (injectA c)
-- | This function transforms a function with a domain constructed
-- from a functor to a function with a domain constructed with the
-- same functor but with an additional annotation.
liftA' :: (DistAnn s' p s, HFunctor s')
=> (s' a :-> Cxt h s' a) -> s a :-> Cxt h s a
liftA' f v = let (v' O.:&: p) = projectA v
in ann p (f v')
{-| This function strips the annotations from a term over a
functor with annotations. -}
stripA :: (RemA g f, HFunctor g) => CxtFun g f
stripA = appSigFun remA
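-- | Lifts a term homomorphism over unannotated signatures to a term
-- homomorphism over the corresponding annotated signatures, propagating each
-- node's annotation to its image.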
propAnn :: (DistAnn f p f', DistAnn g p g', HFunctor g)
=> Hom f g -> Hom f' g'
propAnn alg f' = ann p (alg f)
where (f O.:&: p) = projectA f'
-- | This function is similar to 'project' but applies to signatures
-- with an annotation which is then ignored.
project' :: (RemA f f', s :<: f') => Cxt h f a i -> Maybe (s (Cxt h f a) i)
project' (Term x) = proj $ remA x
project' _ = Nothing
| spacekitteh/compdata | src/Data/Comp/Multi/Annotation.hs | bsd-3-clause | 2,590 | 0 | 11 | 582 | 542 | 304 | 238 | 42 | 1 |
module Cis194.Hw.CalcSpec (main, spec) where
import Test.Hspec
import Cis194.Hw.Calc
import Cis194.Hw.ExprT
main :: IO ()
main = hspec spec
spec :: Spec
spec = do
describe "Calculator - eval" $ do
it "should add and multiply numbers" $ do
eval (Mul (Add (Lit 2) (Lit 3)) (Lit 4)) `shouldBe` 20
describe "Calculator - evalStr" $ do
it "should evaluate well formed strings" $ do
evalStr "(2+3)*4" `shouldBe` Just 20
it "evals the multiplication operator before addition" $ do
evalStr "2+3*4" `shouldBe` Just 14
it "returns Nothing if the string is malformed" $ do
evalStr "2+3*" `shouldBe` Nothing
describe "Expr ExprT" $ do
it "should generate ExprT expression" $ do
(mul (add (lit 2) (lit 3)) (lit 4)) `shouldBe` (Mul (Add (Lit 2) (Lit 3)) (Lit 4))
describe "Expr Integer" $ do
it "should calculate integer value" $ do
(mul (add (lit 2) (lit 3)) (lit 4)) `shouldBe` (20::Integer)
describe "Expr Bool" $ do
it "should treat positive literals as True" $ do
(lit 3) `shouldBe` True
it "should treat negative literals as False" $ do
(lit (-2)) `shouldBe` False
it "should treat additon as a logical OR" $ do
(add (lit 1) (lit 1)) `shouldBe` True
(add (lit (-1)) (lit 1)) `shouldBe` True
(add (lit 1) (lit (-1))) `shouldBe` True
(add (lit (-1)) (lit (-1))) `shouldBe` False
it "should treat multiplicsation as a logical AND" $ do
(mul (lit 1) (lit 1)) `shouldBe` True
(mul (lit (-1)) (lit 1)) `shouldBe` False
(mul (lit 1) (lit (-1))) `shouldBe` False
(mul (lit (-1)) (lit (-1))) `shouldBe` False
it "should calculate integer value" $ do
(mul (add (lit 2) (lit 3)) (lit 4)) `shouldBe` True
describe "Expr MinMax" $ do
it "should return integer for lit" $ do
(lit 4) `shouldBe` (MinMax 4)
it "should return the largest for add" $ do
(add (lit 4) (lit 8)) `shouldBe` (MinMax 8)
it "should return the smallest for mul" $ do
(mul (lit 4) (lit 8)) `shouldBe` (MinMax 4)
describe "Expr Mod7" $ do
it "should return integer for within the range 0-7" $ do
(lit 4) `shouldBe` (Mod7 4)
(lit 8) `shouldBe` (Mod7 1)
(lit (-2)) `shouldBe` (Mod7 5)
it "should perform addition and multiplication modulo 7" $ do
(add (lit 6) (lit 9)) `shouldBe` (Mod7 1)
(mul (lit 4) (lit 10)) `shouldBe` (Mod7 5)
| halarnold2000/cis194 | test/Cis194/Hw/CalcSpec.hs | bsd-3-clause | 2,421 | 0 | 20 | 625 | 1,083 | 540 | 543 | 56 | 1 |
module PatternMatch6 where
g = f undefined
f x = (\(p:ps) -> (case p of
x | x == 45 -> 12
_ -> 52))
| kmate/HaRe | old/testing/foldDef/PatternMatch6_TokOut.hs | bsd-3-clause | 132 | 0 | 14 | 57 | 64 | 34 | 30 | 5 | 2 |
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE helpset PUBLIC "-//Sun Microsystems Inc.//DTD JavaHelp HelpSet Version 2.0//EN" "http://java.sun.com/products/javahelp/helpset_2_0.dtd">
<helpset version="2.0" xml:lang="sr-CS">
<title>Active Scan Rules - Blpha | ZAP Extension</title>
<maps>
<homeID>top</homeID>
<mapref location="map.jhm"/>
</maps>
<view>
<name>TOC</name>
<label>Contents</label>
<type>org.zaproxy.zap.extension.help.ZapTocView</type>
<data>toc.xml</data>
</view>
<view>
<name>Index</name>
<label>Index</label>
<type>javax.help.IndexView</type>
<data>index.xml</data>
</view>
<view>
<name>Search</name>
<label>Search</label>
<type>javax.help.SearchView</type>
<data engine="com.sun.java.help.search.DefaultSearchEngine">
JavaHelpSearch
</data>
</view>
<view>
<name>Favorites</name>
<label>Favorites</label>
<type>javax.help.FavoritesView</type>
</view>
</helpset> | 0xkasun/security-tools | src/org/zaproxy/zap/extension/wavsepRpt/resources/help_sr_CS/helpset_sr_CS.hs | apache-2.0 | 987 | 80 | 67 | 163 | 422 | 213 | 209 | -1 | -1 |
-- Copyright (c) 2000 Galois Connections, Inc.
-- All rights reserved. This software is distributed as
-- free software under the license in the file "LICENSE",
-- which is included in the distribution.
module Eval where
import Control.Monad
import Data.Array
import Geometry
import CSG
import Surface
import Data
import Parse (rayParse, rayParseF)
class Monad m => MonadEval m where
doOp :: PrimOp -> GMLOp -> Stack -> m Stack
tick :: m ()
err :: String -> m a
tick = return ()
newtype Pure a = Pure a deriving Show
instance Functor Pure where
fmap = liftM
instance Applicative Pure where
pure = Pure
(<*>) = ap
instance Monad Pure where
Pure x >>= k = k x
return = pure
fail s = error s
instance MonadEval Pure where
doOp = doPureOp
err s = error s
instance MonadEval IO where
doOp prim op stk = do { -- putStrLn ("Calling " ++ show op
-- ++ " << " ++ show stk ++ " >>")
doAllOp prim op stk
}
err s = error s
data State
= State { env :: Env
, stack :: Stack
, code :: Code
} deriving Show
callback :: Env -> Code -> Stack -> Stack
callback env code stk
= case eval (State { env = env, stack = stk, code = code}) of
Pure stk -> stk
{-# SPECIALIZE eval :: State -> Pure Stack #-}
{-# SPECIALIZE eval :: State -> IO Stack #-}
eval :: MonadEval m => State -> m Stack
eval st =
do { () <- return () -- $ unsafePerformIO (print st) -- Functional debugger
; if moreCode st then
do { tick -- tick first, so as to catch loops on new eval.
; st' <- step st
; eval st'
}
else return (stack st)
}
moreCode :: State -> Bool
moreCode (State {code = []}) = False
moreCode _ = True
-- Step has a precondition that there *is* code to run
{-# SPECIALIZE step :: State -> Pure State #-}
{-# SPECIALIZE step :: State -> IO State #-}
step :: MonadEval m => State -> m State
-- Rule 1: Pushing BaseValues
step st@(State{ stack = stack, code = (TBool b):cs })
= return (st { stack = (VBool b):stack, code = cs })
step st@(State{ stack = stack, code = (TInt i):cs })
= return (st { stack = (VInt i):stack, code = cs })
step st@(State{ stack = stack, code = (TReal r):cs })
= return (st { stack = (VReal r):stack, code = cs })
step st@(State{ stack = stack, code = (TString s):cs })
= return (st { stack = (VString s):stack, code = cs })
-- Rule 2: Name binding
step st@(State{ env = env, stack = (v:stack), code = (TBind id):cs }) =
return (State { env = extendEnv env id v, stack = stack, code = cs })
step st@(State{ env = env, stack = [], code = (TBind id):cs }) =
err "Attempt to bind the top of an empty stack"
-- Rule 3: Name lookup
step st@(State{ env = env, stack = stack, code = (TId id):cs }) =
case (lookupEnv env id) of
Just v -> return (st { stack = v:stack, code = cs })
Nothing -> err ("Cannot find value for identifier: " ++ id)
-- Rule 4: Closure creation
step st@(State{ env = env, stack = stack, code = (TBody body):cs }) =
return (st { stack = (VClosure env body):stack, code = cs })
-- Rule 5: Application
step st@(State{ env = env, stack = (VClosure env' code'):stack, code = TApply:cs }) =
do { stk <- eval (State {env = env', stack = stack, code = code'})
; return (st { stack = stk, code = cs })
}
step st@(State{ env = env, stack = [], code = TApply:cs }) =
err "Application with an empty stack"
step st@(State{ env = env, stack = _:_, code = TApply:cs }) =
err "Application of a non-closure"
-- Rule 6: Arrays
step st@(State{ env = env, stack = stack, code = TArray code':cs }) =
do { stk <- eval (State {env = env, stack = [], code = code'})
; let last = length stk-1
; let arr = array (0,last) (zip [last,last-1..] stk)
; return (st { stack = (VArray arr):stack, code = cs })
}
-- Rule 7 & 8: If statement
step st@(State{ env = env, stack = (VClosure e2 c2):(VClosure e1 c1):(VBool True):stack, code = TIf:cs }) =
do { stk <- eval (State {env = e1, stack = stack, code = c1})
; return (st { stack = stk, code = cs })
}
step st@(State{ env = env, stack = (VClosure e2 c2):(VClosure e1 c1):(VBool False):stack, code = TIf:cs }) =
do { stk <- eval (State {env = e2, stack = stack, code = c2})
; return (st { stack = stk, code = cs })
}
step st@(State{ env = env, stack = _, code = TIf:cs }) =
err "Incorrect use of if (bad and/or inappropriate values on the stack)"
-- Rule 9: Operators
step st@(State{ env = env, stack = stack, code = (TOp op):cs }) =
do { stk <- doOp (opFnTable ! op) op stack
; return (st { stack = stk, code = cs })
}
-- Rule Opps
step _ = err "Tripped on sidewalk while stepping."
--------------------------------------------------------------------------
-- Operator code
opFnTable :: Array GMLOp PrimOp
opFnTable = array (minBound,maxBound)
[ (op,prim) | (_,TOp op,prim) <- opcodes ]
doPureOp :: (MonadEval m) => PrimOp -> GMLOp -> Stack -> m Stack
doPureOp _ Op_render _ =
err ("\nAttempting to call render from inside a purely functional callback.")
doPureOp primOp op stk = doPrimOp primOp op stk -- call the purely functional operators
{-# SPECIALIZE doPrimOp :: PrimOp -> GMLOp -> Stack -> Pure Stack #-}
{-# SPECIALIZE doPrimOp :: PrimOp -> GMLOp -> Stack -> IO Stack #-}
{-# SPECIALIZE doPrimOp :: PrimOp -> GMLOp -> Stack -> Abs Stack #-}
doPrimOp :: (MonadEval m) => PrimOp -> GMLOp -> Stack -> m Stack
-- 1 argument.
doPrimOp (Int_Int fn) _ (VInt i1:stk)
= return ((VInt (fn i1)) : stk)
doPrimOp (Real_Real fn) _ (VReal r1:stk)
= return ((VReal (fn r1)) : stk)
doPrimOp (Point_Real fn) _ (VPoint x y z:stk)
= return ((VReal (fn x y z)) : stk)
-- This is where the callbacks happen from...
doPrimOp (Surface_Obj fn) _ (VClosure env code:stk)
= case absapply env code [VAbsObj AbsFACE,VAbsObj AbsU,VAbsObj AbsV] of
Just [VReal r3,VReal r2,VReal r1,VPoint c1 c2 c3] ->
let
res = prop (color c1 c2 c3) r1 r2 r3
in
return ((VObject (fn (SConst res))) : stk)
_ -> return ((VObject (fn (SFun call))) : stk)
where
-- The most general case
call i r1 r2 =
case callback env code [VReal r2,VReal r1,VInt i] of
[VReal r3,VReal r2,VReal r1,VPoint c1 c2 c3]
-> prop (color c1 c2 c3) r1 r2 r3
stk -> error ("callback failed: incorrectly typed return arguments"
++ show stk)
doPrimOp (Real_Int fn) _ (VReal r1:stk)
= return ((VInt (fn r1)) : stk)
doPrimOp (Int_Real fn) _ (VInt r1:stk)
= return ((VReal (fn r1)) : stk)
doPrimOp (Arr_Int fn) _ (VArray arr:stk)
= return ((VInt (fn arr)) : stk)
-- 2 arguments.
doPrimOp (Int_Int_Int fn) _ (VInt i2:VInt i1:stk)
= return ((VInt (fn i1 i2)) : stk)
doPrimOp (Int_Int_Bool fn) _ (VInt i2:VInt i1:stk)
= return ((VBool (fn i1 i2)) : stk)
doPrimOp (Real_Real_Real fn) _ (VReal r2:VReal r1:stk)
= return ((VReal (fn r1 r2)) : stk)
doPrimOp (Real_Real_Bool fn) _ (VReal r2:VReal r1:stk)
= return ((VBool (fn r1 r2)) : stk)
doPrimOp (Arr_Int_Value fn) _ (VInt i:VArray arr:stk)
= return ((fn arr i) : stk)
-- Many arguments, typically image mangling
doPrimOp (Obj_Obj_Obj fn) _ (VObject o2:VObject o1:stk)
= return ((VObject (fn o1 o2)) : stk)
doPrimOp (Point_Color_Light fn) _ (VPoint r g b:VPoint x y z : stk)
= return (VLight (fn (x,y,z) (color r g b)) : stk)
doPrimOp (Point_Point_Color_Real_Real_Light fn) _
(VReal r2:VReal r1:VPoint r g b:VPoint x2 y2 z2:VPoint x1 y1 z1 : stk)
= return (VLight (fn (x1,y1,z1) (x2,y2,z2) (color r g b) r1 r2) : stk)
doPrimOp (Real_Real_Real_Point fn) _ (VReal r3:VReal r2:VReal r1:stk)
= return ((fn r1 r2 r3) : stk)
doPrimOp (Obj_Real_Obj fn) _ (VReal r:VObject o:stk)
= return (VObject (fn o r) : stk)
doPrimOp (Obj_Real_Real_Real_Obj fn) _ (VReal r3:VReal r2:VReal r1:VObject o:stk)
= return (VObject (fn o r1 r2 r3) : stk)
-- This one is our testing harness
doPrimOp (Value_String_Value fn) _ (VString s:o:stk)
= res `seq` return (res : stk)
where
res = fn o s
doPrimOp primOp op args
= err ("\n\ntype error when attempting to execute builtin primitive \"" ++
show op ++ "\"\n\n| " ++
show op ++ " takes " ++ show (length types) ++ " argument" ++ s
++ " with" ++ the ++ " type" ++ s ++ "\n|\n|" ++
" " ++ unwords [ show ty | ty <- types ] ++ "\n|\n|" ++
" currently, the relevant argument" ++ s ++ " on the stack " ++
are ++ "\n|\n| " ++
unwords [ "(" ++ show arg ++ ")"
| arg <- reverse (take (length types) args) ] ++ "\n|\n| "
++ " (top of stack is on the right hand side)\n\n")
where
len = length types
s = (if len /= 1 then "s" else "")
are = (if len /= 1 then "are" else "is")
the = (if len /= 1 then "" else " the")
types = getPrimOpType primOp
-- Render is somewhat funny, because it can only get called at top level.
-- All other operations are purely functional.
doAllOp :: PrimOp -> GMLOp -> Stack -> IO Stack
doAllOp (Render render) Op_render
(VString str:VInt ht:VInt wid:VReal fov
:VInt dep:VObject obj:VArray arr
:VPoint r g b : stk)
= do { render (color r g b) lights obj dep (fov * (pi / 180.0)) wid ht str
; return stk
}
where
lights = [ light | (VLight light) <- elems arr ]
doAllOp primOp op stk = doPrimOp primOp op stk -- call the purely functional operators
------------------------------------------------------------------------------
{-
- Abstract evaluation.
-
- The idea is you check for constant code that
- (1) does not look at its arguments
- (2) gives a fixed result
-
- We run for 100 steps.
-
-}
absapply :: Env -> Code -> Stack -> Maybe Stack
absapply env code stk =
case runAbs (eval (State env stk code)) 100 of
AbsState stk _ -> Just stk
AbsFail m -> Nothing
newtype Abs a = Abs { runAbs :: Int -> AbsState a }
data AbsState a = AbsState a !Int
| AbsFail String
instance Functor Abs where
fmap = liftM
instance Applicative Abs where
pure x = Abs (\ n -> AbsState x n)
(<*>) = ap
instance Monad Abs where
(Abs fn) >>= k = Abs (\ s -> case fn s of
AbsState r s' -> runAbs (k r) s'
AbsFail m -> AbsFail m)
return = pure
fail s = Abs (\ n -> AbsFail s)
instance MonadEval Abs where
doOp = doAbsOp
err = fail
tick = Abs (\ n -> if n <= 0
then AbsFail "run out of time"
else AbsState () (n-1))
doAbsOp :: PrimOp -> GMLOp -> Stack -> Abs Stack
doAbsOp _ Op_point (VReal r3:VReal r2:VReal r1:stk)
= return ((VPoint r1 r2 r3) : stk)
-- here, you could have an (AbsPoint :: AbsObj) which you put on the
-- stack, with any object in the three fields.
doAbsOp _ op _ = err ("operator not understood (" ++ show op ++ ")")
------------------------------------------------------------------------------
-- Driver
mainEval :: Code -> IO ()
mainEval prog = do { stk <- eval (State emptyEnv [] prog)
; return ()
}
{-
 * Oops, one of the examples actually has something
* on the stack at the end.
* Oh well...
; if null stk
then return ()
else do { putStrLn done
; print stk
}
-}
done = "Items still on stack at (successful) termination of program"
------------------------------------------------------------------------------
-- testing
test :: String -> Pure Stack
test is = eval (State emptyEnv [] (rayParse is))
testF :: String -> IO Stack
testF is = do prog <- rayParseF is
eval (State emptyEnv [] prog)
testA :: String -> Either String (Stack,Int)
testA is = case runAbs (eval (State emptyEnv
[VAbsObj AbsFACE,VAbsObj AbsU,VAbsObj AbsV]
(rayParse is))) 100 of
AbsState a n -> Right (a,n)
AbsFail m -> Left m
abstest1 = "1.0 0.0 0.0 point /red { /v /u /face red 1.0 0.0 1.0 } apply"
-- should be [3:: Int]
et1 = test "1 /x { x } /f 2 /x f apply x addi"
| shlevy/ghc | testsuite/tests/programs/galois_raytrace/Eval.hs | bsd-3-clause | 12,604 | 0 | 30 | 3,666 | 4,666 | 2,455 | 2,211 | 232 | 6 |
-- | Settings are centralized, as much as possible, into this file. This
-- includes database connection settings, static file locations, etc.
-- In addition, you can configure a number of different aspects of Yesod
-- by overriding methods in the Yesod typeclass. That instance is
-- declared in the Foundation.hs file.
module Settings
( widgetFile
, PersistConfig
, staticRoot
, staticDir
, Extra (..)
, parseExtra
) where
import Prelude
import Text.Shakespeare.Text (st)
import Language.Haskell.TH.Syntax
import Database.Persist.Sqlite (SqliteConf)
import Yesod.Default.Config
import qualified Yesod.Default.Util
import Data.Text (Text)
import Data.Yaml
import Control.Applicative
import Settings.Development
-- | Which Persistent backend this site is using.
type PersistConfig = SqliteConf
-- Static setting below. Changing these requires a recompile
-- | The location of static files on your system. This is a file system
-- path. The default value works properly with your scaffolded site.
staticDir :: FilePath
staticDir = "static"
-- | The base URL for your static files. As you can see by the default
-- value, this can simply be "static" appended to your application root.
-- A powerful optimization can be serving static files from a separate
-- domain name. This allows you to use a web server optimized for static
-- files, more easily set expires and cache values, and avoid possibly
-- costly transference of cookies on static files. For more information,
-- please see:
-- http://code.google.com/speed/page-speed/docs/request.html#ServeFromCookielessDomain
--
-- If you change the resource pattern for StaticR in Foundation.hs, you will
-- have to make a corresponding change here.
--
-- To see how this value is used, see urlRenderOverride in Foundation.hs
staticRoot :: AppConfig DefaultEnv x -> Text
staticRoot conf = [st|#{appRoot conf}/static|]
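-- For example (assuming an approot of "http://localhost:3000" in the loaded
-- configuration), @staticRoot conf@ renders as "http://localhost:3000/static".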
-- The rest of this file contains settings which rarely need changing by a
-- user.
widgetFile :: String -> Q Exp
widgetFile = if development then Yesod.Default.Util.widgetFileReload
else Yesod.Default.Util.widgetFileNoReload
data Extra = Extra
{ extraCopyright :: Text
, extraAnalytics :: Maybe Text -- ^ Google Analytics
} deriving Show
parseExtra :: DefaultEnv -> Object -> Parser Extra
parseExtra _ o = Extra
<$> o .: "copyright"
<*> o .:? "analytics"
| Jxck/yesod-tutorial | Settings.hs | bsd-2-clause | 2,411 | 0 | 9 | 436 | 267 | 171 | 96 | -1 | -1 |
{-# LANGUAGE TypeFamilies, NamedWildCards, PolyKinds #-}
-- All declarations below are accepted when the NamedWildCards extension is not
-- enabled and the identifiers starting with _ are parsed as type variables.
-- They should remain valid when the extension is on.
--
-- See Trac #11098 and comments in #10982
module NamedWildcardsAsTyVars where
type Synonym _a = _a -> _a
data A a _b = ACon a a Int
data B _a b = BCon _a (_a, Bool)
type family C a b where
C _a _b = _a -> _a
type family D a b where
D _a b = _a -> (_a, Int)
data family E a b
data instance E a _b = ECon a (a, Int)
data family F a b
data instance F _a b = FCon _a _a Bool
class G _a where
gfoo :: _a -> _a
instance G Int where
gfoo = (*2)
type family H a b where
H _a _a = Int
H _a _b = Bool
hfoo :: H String String
hfoo = 10
hbar :: H String Int
hbar = False
type family I (_a :: k) where
I _t = Int
| olsner/ghc | testsuite/tests/partial-sigs/should_compile/NamedWildcardsAsTyVars.hs | bsd-3-clause | 906 | 0 | 7 | 229 | 275 | 164 | 111 | -1 | -1 |
-- The instance decl is illegal without UndecidableInstances
module ShouldFail where
data Rec f = In (f (Rec f))
instance Eq (f (Rec f)) => Eq (Rec f) where
(In x) == (In y) = x == y
| siddhanathan/ghc | testsuite/tests/typecheck/should_fail/tcfail108.hs | bsd-3-clause | 196 | 0 | 10 | 51 | 87 | 45 | 42 | -1 | -1 |
module Y2016.M10.D26.Exercise where
import Data.Map (Map)
import Data.Time
import Analytics.Trading.Data.Row (Symbol) -- http://lpaste.net/109658
import Analytics.Trading.Scan.Top5s -- http://lpaste.net/1443717939034324992
-- below imports available from 1HaskellADay git repository
import Y2016.M10.D19.Exercise (LeadersLosers)
{--
Yesterday we counted occurences of a security in the Top5s daily reports. Great!
That information is useful, and an indicator of staying power. Another such
indicator is runs of a stock.
Today's Haskell problem.
Load the top5s daily archive from the URL:
https://raw.githubusercontent.com/geophf/1HaskellADay/master/exercises/HAD/Y2016/M10/D17/top5s.csv
Now let's compute some runs.
--}
-- 1. What are all the runs (ordered by longest first) of all the stocks?
-- A run is successive daily showings in the top5s lists
{--
type Run = Int
top5sRuns :: Archive Top5s -> Map Symbol [Run]
nope! nope! nope! That isn't very helpful at all! What if I wish to investigate
a run of a stock in the Top5s? Where do I start looking if I don't know the
dates of the run?
Starting over:
--}
data Run = Run { ndays :: Int, from, to :: Day }
deriving Show
-- n.b. ndays IS NOT diffDays to from (weekends and holidays don't count)
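-- For example (dates are illustrative): a stock appearing Monday through
-- Wednesday, 2016-10-03 .. 2016-10-05, would be recorded as
--
-- > Run { ndays = 3, from = fromGregorian 2016 10 3, to = fromGregorian 2016 10 5 }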
top5sRuns :: Archive Top5s -> Map Symbol [Run]
top5sRuns top5s = undefined
-- there!
-- 2. Now, let's do more focused runs. If a stock is hi-lo-hi-lo that's one
-- thing but if it's hi-hi-hi-hi, that's quite another thing. We're going to
-- focus runs on leaders or losers, then: leaders or losers by category
top5sLeadersRuns, top5sLosersRuns :: Archive Top5s -> Map Symbol [Run]
top5sLeadersRuns top5s = undefined
top5sLosersRuns top5s = undefined
-- You may think a losers' run is ignominious, but much good analysis (and
-- profitability) can be gained by following the losers, too.
-- 3. And, finally the BAC ('Bank of America') case. Some stocks excell in a
-- particular category. What is the Leaders Volume runs for BAC?
top5sLeadersByCategoryRuns :: Archive Top5s
-> Map Symbol (Map LeadersLosers [Run])
top5sLeadersByCategoryRuns top5s = undefined
| geophf/1HaskellADay | exercises/HAD/Y2016/M10/D26/Exercise.hs | mit | 2,167 | 0 | 9 | 383 | 198 | 121 | 77 | 16 | 1 |
module State where
import Control.Monad
import Test.QuickCheck
newtype State s a = MkState {unState :: s -> (a,s)}
instance Functor (State s) where
fmap f (MkState x) = MkState (\s -> let (a, s') = x s in (f a, s'))
instance Applicative (State s) where
pure x = MkState (\s -> (x,s))
MkState f <*> MkState x =
MkState (\s ->
let
(f', s') = f s
(a, s'') = x s'
in
(f' a, s''))
instance Monad (State s) where
return x = MkState (\s -> (x,s))
MkState f >>= g = MkState (\s -> let (a,s') = f s in
unState (g a) s')
get :: State s s
get = MkState (\s -> (s, s))
put :: s -> State s ()
put s = MkState (\_ -> ((), s))
(===) ::
Eq a => State Integer a -> State Integer a -> Integer -> Bool
(f === g) s = unState f s == unState g s
prop_get_get =
do x <- get
y <- get
return (x,y)
State.===
do x <- get
return (x,x)
prop_put_put =
do put 10
put 20
x <- get
return x
State.===
do put 20
x <- get
return x
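-- A small illustrative example: a counter action that returns the old state
-- and stores the incremented one, so @unState exampleCounter 0 == (0, 1)@.
exampleCounter :: State Integer Integer
exampleCounter = do
  n <- get
  put (n + 1)
  return n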
| NickAger/LearningHaskell | Monads and all that/State Monad.hsproj/State.hs | mit | 1,100 | 0 | 13 | 405 | 571 | 289 | 282 | 41 | 1 |
{-# LANGUAGE OverloadedStrings #-}
module DataController where
import Control.Monad (unless, when)
import Data.Acid
import qualified Data.Text as Text
import Data.Time.Clock
import Control.Concurrent (forkIO, ThreadId)
import Subscriptions
import LatestStore
import Shared
import Duration
import Protocol (ImmediateResponse, Response (..))
-- | Initialize intervalThread
startIntervalThread :: Shared -> IO ThreadId
startIntervalThread = forkIO . intervalLoop
-- | Loops and executes interval subscriptions
intervalLoop :: Shared -> IO ()
intervalLoop shared@Shared {sISubDB = intervalSubs
, sLatestStore = latestStore
} = loop
where
loop :: IO ()
loop = do
currentTime <- getCurrentTime
(mTriggered, mNextTriggerTime) <- update intervalSubs $ LookupNextISub currentTime
case mTriggered of
Just ISub {isSubData = subData, isSensors = sensors} -> do
sensorDatas <- query latestStore $ LookupSensorDatas sensors
sendCallback shared subData sensorDatas
Nothing -> return ()
newCurrentTime <- getCurrentTime
case mNextTriggerTime of
Just triggerTime ->
           let delay = triggerTime `diffUTCTime` newCurrentTime
in unless (delay < 0) $ -- skip waiting, we have more triggered already
threadDelay delay -- wait, threadDelay should be accurate enough
Nothing ->
threadDelay $ seconds (10 :: Int) -- sleep this interval until we get more ISubs
loop
-- | trigger events and save data
processData :: Shared -> SensorData -> IO ()
processData shared@Shared { sESubDB = eventSubsDB
, sLatestStore = latestStore
}
newData@SensorData { sdSensor = sensor
, sdValue = newValue
, sdTimestamp = newTime
} = do
eventSubs <- query eventSubsDB $ LookupESub sensor
let filterSubs event = filter ((== event) . esEvent) eventSubs
onChangeSubs = filterSubs OnChange
onUpdateSubs = filterSubs OnUpdate
onAttachSubs = filterSubs OnAttach
callback = flip (sendCallback shared) [newData] . esSubData
-- Update events happen always
mapM_ callback onUpdateSubs
mOldData <- query latestStore $ LookupSensorData sensor
case mOldData of
Just SensorData {sdValue = oldValue, sdTimestamp = oldTime} ->
when (oldTime < newTime) $ do
when (oldValue /= newValue) $
mapM_ callback onChangeSubs
update latestStore $ SetSensorData newData
Nothing -> do -- New data
mapM_ callback onAttachSubs
mapM_ callback onChangeSubs
update latestStore $ SetSensorData newData
return ()
triggerEventSubs :: Shared -> Sensor -> Event -> IO ImmediateResponse
triggerEventSubs shared@Shared { sESubDB = eventSubsDB
, sLatestStore = latestStore
} sensor event = do
eventSubs <- query eventSubsDB $ LookupESub sensor
-- TODO: same line as in processData
let filteredSubs = filter ((== event) . esEvent) eventSubs
if null filteredSubs then
return $ Success 0 -- Event triggered but nobody wanted it
else do
mLastData <- query latestStore $ LookupSensorData sensor
case mLastData of
Just lastData ->
let callback = flip (sendCallback shared) [lastData] . esSubData
in do
mapM_ callback filteredSubs
return $ Success 0
Nothing ->
return $ Failure 0 404 $ "No data for sensor: " `Text.append` sensor
| TK009/sensorbox | src/DataController.hs | mit | 3,948 | 0 | 20 | 1,352 | 873 | 443 | 430 | 80 | 3 |
-- -------------------------------------------------------------------------------------
-- Author: Sourabh S Joshi (cbrghostrider); Copyright - All rights reserved.
-- For email, run on linux (perl v5.8.5):
-- perl -e 'print pack "H*","736f75726162682e732e6a6f73686940676d61696c2e636f6d0a"'
-- -------------------------------------------------------------------------------------
fibs = 1: 2 : zipWith (+) fibs (tail fibs)
ans2 n = sum . filter (\x -> x `mod` 2 == 0) . (takeWhile (<=n)) $ fibs
main = do
ip <- getContents
let ns = map read . tail . lines $ ip
mapM_ (putStrLn . show) $ map ans2 ns
| cbrghostrider/Hacking | HackerRank/Contests/ProjectEuler/002_evenFibs.hs | mit | 639 | 0 | 13 | 121 | 143 | 75 | 68 | 6 | 1 |
import Data.Array
import qualified Data.Ix as Ix
import Data.Text (Text)
import qualified Data.Text as Text
import qualified Data.Text.IO as IO
import Text.Parsec
import Text.Parsec.Text
data Instruction
= Skip
| Copy Value Register
| Bump Direction Register
| Add Direction Register Register
| MultiplyAndAdd Direction Register Register Register
| Jump Value Value
| Toggle Value
| InvalidToggle Instruction Int
deriving (Eq, Show)
type Instructions = Array Int Instruction
data Register = A | B | C | D
deriving (Eq, Ord, Enum, Bounded, Ix, Show)
data Value
= Direct Int
| Indirect Register
deriving (Eq, Show)
data Direction = Up | Down
deriving (Eq, Show)
data RunState = RunState {programCounter :: Int, registers :: Array Register Int}
deriving (Eq, Show)
numberOfEggs = 12
main = do
input <- Text.lines <$> IO.getContents
let instructionList = complicate $ map parseInput input
let instructions = listArray (0, length instructionList - 1) instructionList
let finalState = execute initialState instructions
print finalState
parseInput :: Text -> Instruction
parseInput text = either (error . show) id $ parse parser "" text
where
parser = try cpy <|> try inc <|> try dec <|> try jnz <|> try tgl
cpy = do
string "cpy "
from <- value
string " "
to <- register
return $ Copy from to
inc = Bump Up <$> (string "inc " >> register)
dec = Bump Down <$> (string "dec " >> register)
jnz = do
string "jnz "
condition <- value
string " "
distance <- value
return $ Jump condition distance
tgl = do
string "tgl "
distance <- value
return $ Toggle distance
value = (Direct <$> try number) <|> (Indirect <$> try register)
register =
try (string "a" >> return A)
<|> try (string "b" >> return B)
<|> try (string "c" >> return C)
<|> try (string "d" >> return D)
number = read <$> (many1 digit <|> ((:) <$> char '-' <*> many1 digit))
initialState :: RunState
initialState = RunState 0 (listArray (minBound, maxBound) (numberOfEggs : repeat 0))
execute :: RunState -> Instructions -> RunState
execute state@(RunState counter registers) instructions =
if counter >= length instructions
then state
else execute' (instructions ! counter)
where
execute' Skip =
execute nextState instructions
execute' (Copy (Direct value) destination) =
execute (nextState {registers = registers // [(destination, value)]}) instructions
execute' (Copy (Indirect source) destination) =
execute' (Copy (Direct (registers ! source)) destination)
execute' (Bump Up register) =
execute (nextState {registers = registers // [(register, (registers ! register) + 1)]}) instructions
execute' (Bump Down register) =
execute (nextState {registers = registers // [(register, (registers ! register) - 1)]}) instructions
execute' (Add direction destination source) =
execute (nextState {registers = registers // [(destination, result)]}) instructions
where
result = op direction (registers ! destination) (registers ! source)
execute' (MultiplyAndAdd direction destination multiplicand multiplier) =
execute (nextState {registers = registers // [(destination, result)]}) instructions
where
result = op direction (registers ! destination) ((registers ! multiplicand) * (registers ! multiplier))
execute' (Jump (Direct 0) distance) =
execute nextState instructions
execute' (Jump (Direct _) (Direct distance)) =
execute (nextState {programCounter = counter + distance}) instructions
execute' (Jump (Indirect source) distance) =
execute' (Jump (Direct (registers ! source)) distance)
execute' (Jump condition (Indirect source)) =
execute' (Jump condition (Direct (registers ! source)))
execute' (Toggle (Direct distance)) =
execute nextState (toggleInstruction (counter + distance) instructions)
execute' (Toggle (Indirect source)) =
execute' (Toggle (Direct (registers ! source)))
nextState = state {programCounter = counter + 1}
toggleInstruction :: Int -> Instructions -> Instructions
toggleInstruction index instructions
| Ix.inRange (bounds instructions) index = instructions // [(index, toggled (instructions ! index))]
| otherwise = instructions
where
toggled instruction@(Toggle (Direct value)) = InvalidToggle instruction 1
toggled (Bump Up register) = Bump Down register
toggled (Bump Down register) = Bump Up register
toggled (Copy value destination) = Jump value (Indirect destination)
toggled (Jump condition (Indirect source)) = Copy condition source
toggled instruction@(Jump _ _) = InvalidToggle instruction 1
toggled instruction = instruction
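-- Peephole-optimise the program before execution: an inc/dec counting loop is
-- collapsed into a single Add, and an Add-based loop into MultiplyAndAdd,
-- padding with Skip and zeroing copies so the instruction count (and hence
-- every jump and toggle offset) is preserved.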
complicate :: [Instruction] -> [Instruction]
complicate (a@(Bump direction destination) : b@(Bump _ source) : c@(Jump (Indirect source') (Direct (-2))) : rest)
| source == source' =
complicate $
Add direction destination source :
Copy (Direct 0) source :
Skip :
rest
| otherwise = a : b : c : complicate rest
complicate
( a@(Add direction destination multiplicand) : b@(Copy (Direct 0) multiplicand') : c@Skip
: d@(Bump _ multiplier)
: e@(Jump (Indirect multiplier') (Direct (-5)))
: rest
)
| multiplicand == multiplicand' && multiplier == multiplier' =
complicate $
MultiplyAndAdd direction destination multiplicand multiplier :
Copy (Direct 0) multiplicand :
Copy (Direct 0) multiplier :
Skip :
Skip :
rest
| otherwise = a : b : c : d : e : complicate rest
complicate [] = []
complicate (x : xs) = x : complicate xs
op :: Direction -> (Int -> Int -> Int)
op Up = (+)
op Down = (-)
| SamirTalwar/advent-of-code | 2016/AOC_23_2.hs | mit | 5,821 | 0 | 18 | 1,330 | 2,158 | 1,108 | 1,050 | -1 | -1 |
{-# LANGUAGE OverloadedStrings, RecordWildCards #-}
module Galua.Debugger.View.Analysis where
import qualified Data.Aeson as JS
import qualified Data.Aeson.Types as JS
import Data.Aeson ((.=))
import Data.Text(Text)
import qualified Data.Text as Text
import Data.Text.Encoding(decodeUtf8)
import qualified Data.Set as Set
import Data.Map (Map)
import qualified Data.Map as Map
import qualified Data.ByteString.Lazy as BS (fromStrict)
import Language.Lua.StringLiteral (constructStringLiteral)
import Galua.Micro.Type.Value
import Galua.Micro.Type.Eval(Result(..))
import Galua.Debugger.View.Utils
exportResult :: Result -> JS.Value
exportResult Result { .. } =
JS.object
[ "returns" .= exportListVals maps resReturns
, "raises" .= exportValue maps resRaises
, "post" .= exportGlobalState maps resGlobals
]
where
maps = idMaps resGlobals
exportGlobalState :: IdMaps -> GlobalState -> JS.Value
exportGlobalState maps GlobalState { .. } =
JS.object [ "tables" .= [ exportTableV maps v | v <- Map.elems tables ]
, "heap" .= [ exportValue maps v | v <- Map.elems heap ]
, "functions" .= [ exportFunV maps v | v <- Map.elems functions ]
]
data IdMaps = IdMaps
{ tableIds :: Map TableId Int
, refIds :: Map RefId Int
, cloIds :: Map ClosureId Int
}
idMaps :: GlobalState -> IdMaps
idMaps GlobalState { .. } = IdMaps { tableIds = toIds tables
, refIds = toIds heap
, cloIds = toIds functions
}
where
toIds m = Map.fromList (zip (Map.keys m) [ 0 .. ])
--------------------------------------------------------------------------------
exportType :: Type -> Text
exportType t =
case t of
Nil -> "nil"
Number -> "number"
UserData -> "user data"
LightUserData -> "light user data"
Thread -> "thread"
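-- 'exportValue' encodes an abstract value as an object with three fields:
-- "simple" holds human-readable descriptions (basic types and known string or
-- boolean literals), while "table" and "function" hold indices into the
-- "tables" and "functions" arrays produced by 'exportGlobalState' (the
-- indices come from 'IdMaps').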
exportValue :: IdMaps -> Value -> JS.Value
exportValue maps v = JS.object
[ "simple" .= names
, "table" .= seeAlso valueTable tableIds
, "function" .= seeAlso valueFunction cloIds
]
where
names = Set.unions [ name "table" valueTable
, name "function" valueFunction
, booleanName
, stringName
, Set.map exportType (valueBasic v)
]
stringName = case valueString v of
NoValue -> Set.empty
OneValue s -> Set.singleton (shStr s)
MultipleValues -> Set.singleton "string"
booleanName = case valueBoolean v of
NoValue -> Set.empty
OneValue True -> Set.singleton "true"
OneValue False -> Set.singleton "false"
MultipleValues -> Set.singleton "boolean"
shStr x = Text.pack (constructStringLiteral (BS.fromStrict x))
name x f = case f v of
Top -> Set.singleton x
NotTop _ -> Set.empty
seeAlso f g = case f v of
NotTop xs -> Set.map (g maps Map.!) xs
_ -> Set.empty
exportListVals :: IdMaps -> List Value -> Maybe JS.Value
exportListVals maps lv =
case lv of
ListBottom -> Nothing
List n xs a -> Just $ JS.object [ "min_len" .= n
, "elements" .= map (exportValue maps) xs
, "default" .= exportValue maps a
]
exportTableV :: IdMaps -> TableV -> JS.Value
exportTableV maps TableV { .. } =
JS.object [ "meta" .= exportValue maps tableMeta
, "key" .= exportValue maps tableKeys
, "value" .= exportValue maps tableValues
, "attrs" .= JS.object [ x .= exportValue maps v | (x,v) <- attrs ]
]
where
attrs = case tableFields of
FFun mp _ -> [ (decodeUtf8 f,v) | (f, v) <- Map.toList mp ]
exportFunV :: IdMaps -> FunV -> JS.Value
exportFunV IdMaps { .. } FunV { .. } =
JS.object
[ "fid" .= expFun functionFID
, "upvals" .= map expRef (Map.elems functionUpVals)
]
where
expFun x = case x of
NoValue -> "(no function)"
MultipleValues -> "(unknown)"
OneValue (CFunImpl _) -> "(C function)"
OneValue (LuaFunImpl f) -> exportFID f
expRef x = case x of
Top -> [-2]
NotTop xs -> map (refIds Map.!) (Set.toList xs)
--------------------------------------------------------------------------------
-- Misc/helpers
tag :: String -> JS.Pair
tag x = "tag" .= x
tagged :: String -> [JS.Pair] -> JS.Value
tagged x xs = JS.object (tag x : xs)
| GaloisInc/galua | galua-dbg/src/Galua/Debugger/View/Analysis.hs | mit | 4,975 | 0 | 14 | 1,778 | 1,377 | 720 | 657 | 103 | 8 |
{-# LANGUAGE CPP #-}
module GHCJS.DOM.URLUtils (
#if (defined(ghcjs_HOST_OS) && defined(USE_JAVASCRIPTFFI)) || !defined(USE_WEBKIT)
module GHCJS.DOM.JSFFI.Generated.URLUtils
#else
#endif
) where
#if (defined(ghcjs_HOST_OS) && defined(USE_JAVASCRIPTFFI)) || !defined(USE_WEBKIT)
import GHCJS.DOM.JSFFI.Generated.URLUtils
#else
#endif
| plow-technologies/ghcjs-dom | src/GHCJS/DOM/URLUtils.hs | mit | 337 | 0 | 5 | 33 | 33 | 26 | 7 | 4 | 0 |
main = do
print $ filter issqrt [a*1100+b*11 | a <- [1..9], b <- [0..9]] where
isqrt = round . sqrt . fromIntegral
issqrt x = (isqrt x) * (isqrt x) == x
| Voleking/ICPC | references/aoapc-book/BeginningAlgorithmContests/haskell/ch2/p2-1.hs | mit | 163 | 0 | 12 | 43 | 102 | 52 | 50 | 4 | 1 |
module Main (main) where
import qualified Sudoku
main :: IO ()
main = Sudoku.main
| rmcgibbo/sudoku | executable/Main.hs | mit | 84 | 0 | 6 | 16 | 30 | 18 | 12 | 4 | 1 |
{- |
Module : $Header$
Description : definition of the datatype describing
the abstract FreeCAD terms and a few tools implementing simple
mathematical operations on those building-blocks (3d vectors,
rotation matrices, rotation quaternions)
Copyright : (c) Robert Savu and Uni Bremen 2011
License : GPLv2 or higher, see LICENSE.txt
Maintainer : [email protected]
Stability : experimental
Portability : portable
Declaration of the abstract datatypes of FreeCAD terms
-}
module FreeCAD.As where
import qualified Data.Set as Set
data Vector3 =
Vector3 { x::Double, y::Double, z::Double } deriving (Show, Eq, Ord)
data Matrix33 = Matrix33 { a11::Double ,a12::Double ,a13::Double
,a21::Double ,a22::Double ,a23::Double
,a31::Double ,a32::Double ,a33::Double
} deriving (Show, Eq, Ord) --used as a rotation matrix
data Vector4 = Vector4 { q0::Double, q1::Double, q2::Double, q3::Double}
deriving (Show, Eq, Ord)
-- quaternion rotational representation
data Placement = Placement { position::Vector3, orientation::Vector4 }
deriving (Show, Eq, Ord)
{-
-- the placement is determined by 2 vectors:
-- the first one points to the 'center' of the object in the space
-- the second one determines the orientation of the object in the given space
data Edgelist = []
| 1:Edgelist
| 0:Edgelist
compound objects refer to their 'building-block' objects either by name
(strings) or by directly containing the other objects
(see 'ExtendedObject' below)
-}
data BaseObject = Box Double Double Double -- Height, Width, Length
| Cylinder Double Double Double -- Angle, Height, Radius
| Sphere Double Double Double Double --Angle1,Angle2,Angle3,Radius
| Cone Double Double Double Double --Angle,Radius1,Radius2,Height
| Torus Double Double Double Double Double
--Angle1, Angle2, Angle3, Radius1, Radius2
| Line Double -- length
| Circle Double Double Double --StartAngle, EndAngle, Radius
| Rectangle Double Double --Height, Length
deriving (Show, Eq, Ord)
--TODO: Plane, Vertex, etc..
data Object = BaseObject BaseObject
| Cut ExtendedObject ExtendedObject
| Common ExtendedObject ExtendedObject
| Fusion ExtendedObject ExtendedObject
| Extrusion ExtendedObject Vector3
| Section ExtendedObject ExtendedObject
deriving (Show, Eq, Ord)
{- --| Fillet, (Base::String, Edges::Edgelist, Radius::Double)),
--not enough data in the xml
--| Chamfer, (Base::String, Edges::Edgelist, Amount::Double)),
--not enough data in the xml
--| Mirror, (Base::String, Position2::Vector))
--mirroring of an object
-}
data ExtendedObject = Placed PlacedObject | Ref String deriving (Show, Eq, Ord)
data PlacedObject =
PlacedObject {p::Placement, o::Object} deriving (Show, Eq, Ord)
data NamedObject = NamedObject { name::String
, object:: PlacedObject}
| EmptyObject --for objects that are WIP
deriving (Show, Eq, Ord)
-- the first parameter is the name of the object as it is stored in the
-- FreeCAD document. the second parameter determines the placement of the object
-- (a pair of vectors) the third parameter contains the type of the object and
--a list of doubles (numbers) describing the characteristics
--of the object (e.g. dimensions, angles, etc)
type Document = [NamedObject]
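-- A minimal illustrative 'Document' (not taken from any real FreeCAD file):
-- a unit box named "Box" placed at the origin; the quaternion components are
-- arbitrary example values.
exampleDocument :: Document
exampleDocument =
  [ NamedObject "Box"
      (PlacedObject (Placement (Vector3 0 0 0) (Vector4 1 0 0 0))
                    (BaseObject (Box 1 1 1))) ]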
-- | Datatype for FreeCAD Signatures
-- Signatures are just sets of named objects
data Sign = Sign { objects :: Set.Set NamedObject } deriving (Eq, Ord, Show)
| nevrenato/Hets_Fork | FreeCAD/As.hs | gpl-2.0 | 3,829 | 0 | 10 | 1,028 | 540 | 325 | 215 | 37 | 0 |
module FalseLangermann where
import Data.List
import Control.Monad.State.Lazy
import System.Random
import Text.Printf
import Data.ByteString.Lazy (ByteString)
import GDCN.Trusted.Data.Binary (encode, decode)
type Vector = [Double]
type Matrix = [Vector]
-- Constant variables
dimensions :: Int
dimensions = 2
mConst :: Int
mConst = 5
cConst :: Vector
cConst = [1 , 2, 5, 2, 3]
aConst :: Matrix
aConst = [[3, 5], [5, 2], [2, 1], [1, 4], [7, 9]]
bot :: Double
bot = 0
top :: Double
top = 10
tests :: Int
tests = 1
-- Vector and matrix functions
lookupVector :: Vector -> Int -> Double
lookupVector vector x = vector !! (x - 1)
lookupMatrix :: Matrix -> (Int, Int) -> Double
lookupMatrix matrix (x, y) = (matrix !! (x - 1)) !! (y - 1)
-- Random functions
randomVector :: State StdGen Vector
randomVector = randomVector' dimensions []
randomVector' :: Int -> Vector -> State StdGen Vector
randomVector' 0 nums = return nums
randomVector' n nums = do
gen <- get
let (num, gen') = randomR (bot, top) gen
put gen'
randomVector' (n-1) (num:nums)
-- Calculation function
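-- 'langermann' evaluates the Langermann test function as implemented below:
-- f(x) = sum over i = 1..mConst of c_i * exp(-s_i / pi) * cos(pi * s_i),
-- where s_i = sum over j = 1..dimensions of (x_j - a_ij)^2, using the
-- constants 'cConst' and 'aConst' defined above.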
langermann :: Vector -> Double
langermann x = sum [ let sumValue = theSum x i
in lookupVector cConst i * exp (invPi * sumValue) * cos (pi * sumValue)
| i <- [1..mConst] ]
invPi :: Double
invPi = - (1 / pi)
theSum :: Vector -> Int -> Double
theSum x i = sum [ (lookupVector x j - lookupMatrix aConst (i, j)) ^ 2 | j <- [1..dimensions] ]
--
randomLangermann :: State StdGen (Vector, Double)
randomLangermann = do
vector <- randomVector
return (vector, langermann vector)
search :: Int -> State StdGen (Vector, Double)
search depth = do
results <- repeatM randomLangermann depth
return $ maximumBy (\a b -> compare (snd a) (snd b)) results
multiSearch :: State StdGen (Vector, Double)
multiSearch = do
results <- repeatM (search 100) (tests `div` 100)
return $ maximumBy (\a b -> compare (snd a) (snd b)) results
fullSearch :: Int -> (Vector, Double)
fullSearch seed = evalState (search tests) (mkStdGen seed)
run :: [ByteString] -> (ByteString, String)
run (seedData:_) = let seed = decode seedData :: Int
(resultVec, resultN) = fullSearch seed
resultData = encode resultVec
in (resultData, show2DecVec resultVec ++ " -> " ++ show2Dec resultN)
--
show2Dec :: Double -> String
show2Dec d = printf "%.3f" d
show2DecVec :: Vector -> String
show2DecVec [x, y] = "[" ++ show2Dec x ++ ", " ++ show2Dec y ++ "]"
repeatM :: Monad m => m a -> Int -> m [a]
repeatM m 0 = return []
repeatM m n = do
a <- m
rest <- repeatM m (n-1)
return (a : rest)
| GDCN/GDCN | GDCN_proj/dGDCN/data/FalseWork/code/FalseLangermann.hs | gpl-3.0 | 2,686 | 0 | 13 | 634 | 1,100 | 590 | 510 | 73 | 1 |
module HumanPlayer where
import System.IO
import Games
humanPlayer :: Game b m => Player b m
humanPlayer game =
do --putStrLn $ "Valid moves: " ++ show (moves (turn game) (state game))
putStr $ "Move for player " ++ show (turn game + 1) ++ ": "
hFlush stdout
moveTxt <- getLine
case readMove (turn game) moveTxt (board game) of
Left move ->
if elem move $ moves (turn game) (board game)
then return move
else do putStrLn $ "Invalid move"-- ++ show move
humanPlayer game
Right msg ->
do putStrLn msg
humanPlayer game
| krame505/board_games | haskell/HumanPlayer.hs | gpl-3.0 | 626 | 0 | 14 | 204 | 186 | 87 | 99 | 18 | 3 |
{-
- Copyright (C) 2013 Alexander Berntsen <[email protected]>
- Copyright (C) 2013 Stian Ellingsen <[email protected]>
-
- This file is part of bweakfwu.
-
- bweakfwu is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- bweakfwu is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with bweakfwu. If not, see <http://www.gnu.org/licenses/>.
-} module Movable where
import Graphics.Gloss.Data.Vector (Vector)
import Geometry (Normal)
import Time (StepTime)
import Vector ((^*^), (^.^), vecLimitMag)
-- | All objects that are capable of moving in the 'World' are 'Movable'
-- objects.
class Movable a where
-- | 'vel' is the 'Velocity' of a 'Movable'.
vel :: a -> Velocity
-- | 'move' steps a 'Movable' one step forward by moving it according to its
-- 'Velocity'.
move :: a -> StepTime -> a
-- | 'targetVel' is the target 'Velocity' a 'Movable' wants to achieve.
targetVel :: a -> Velocity
-- | 'acceleration' is the current 'Acceleration' of a movable.
acceleration :: a -> Acceleration
-- | 'Acceleration' is the acceleration of a 'Movable'.
type Acceleration = Float
-- | 'Speed' is the speed of a 'Movable'.
type Speed = Float
-- | 'Velocity' is the velocity of a 'Movable'.
type Velocity = Vector
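-- A purely illustrative instance (not part of the original module): a
-- hypothetical projectile that never accelerates. It shows the intent of
-- 'move': advance the position by the current 'Velocity' times the step time.
data Projectile = Projectile { projPos :: Vector, projVel :: Velocity }
instance Movable Projectile where
  vel          = projVel
  move p dt    = p { projPos = projPos p + projVel p ^*^ dt }
  targetVel    = projVel
  acceleration = const 0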
dvMag :: Float -> Normal -> Velocity -> Float
-- | 'dvMag' finds the magnitude of the delta 'Velocity' from a reflection.
dvMag cor n v = max 0 (n ^.^ v * (-1 - cor))
dvApply :: Velocity -> Normal -> Float -> Velocity
-- | 'dvApply' applies a delta 'Velocity' given its magnitude and the
-- reflection 'Normal'.
dvApply v n dvm = v + n ^*^ dvm
updateVelocity :: Movable a => a -> StepTime -> Velocity
-- | 'updateVelocity' steps the 'Velocity' of a 'Movable' towards its target
-- 'Velocity', limiting the change per step to its 'Acceleration' times the
-- step time.
updateVelocity m dt = vel m + vecLimitMag (dt * acceleration m) dv
where dv = targetVel m - vel m
reflect :: Float -> Normal -> Velocity -> Velocity -> Velocity
-- | 'reflect' calculates a new 'Velocity' based on frictionless collision on
-- the collision 'Normal' with the 'Velocity's of the two objects that crash.
reflect cor n v w =
dvApply v n dvm -- New velocity from frictionless collision.
where rv = v - w -- Relative velocity between colliders.
dvm = dvMag cor n rv -- Magnitude of velocity change.
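-- For intuition (an illustrative calculation, not part of the original code):
-- with a coefficient of restitution of 1, a collision normal of (0, 1) and an
-- incoming velocity of (0, -3) against a stationary object, the relative
-- velocity is (0, -3), the magnitude of the velocity change is 6, and the
-- reflected velocity is (0, 3): a perfectly elastic bounce straight back up.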
| plaimi/bweakfwu | src/bweakfwu/Movable.hs | gpl-3.0 | 2,781 | 0 | 10 | 575 | 363 | 206 | 157 | 25 | 1 |
module Main where
import System.Term
main :: IO ()
main = initialize
| mdibaiee/serverman | app/Main.hs | gpl-3.0 | 75 | 0 | 6 | 18 | 24 | 14 | 10 | 4 | 1 |
module Fathens.Bitcoin.Binary.Num (
Word256(..)
, BigEndianFixed(..)
, toWord256
, fromBigEndian
, toBigEndian
, putBigEndianFixed
) where
import Control.Monad
import Data.Bits
import Data.ByteString.Lazy (ByteString)
import qualified Data.ByteString.Lazy as BS
import Data.Word (Word32)
import GHC.Enum
import GHC.Real
import System.Random
-- Constants
bitsWord256 = 256
-- Data
data Word256 = Word256 Integer deriving (Show, Eq)
instance Ord Word256 where
(<=) (Word256 a) (Word256 b) = a <= b
instance Num Word256 where
(+) (Word256 a) (Word256 b) = Word256 (a + b)
(*) (Word256 a) (Word256 b) = Word256 (a * b)
negate (Word256 a) = Word256 (-a)
abs (Word256 a) = Word256 (abs a)
signum (Word256 0) = 0
signum (Word256 _) = 1
fromInteger i = Word256 i
instance Bounded Word256 where
minBound = 0
maxBound = 2 ^ bitsWord256 - 1
instance Enum Word256 where
succ a
| a < maxBound = a + 1
| otherwise = succError "Word256"
pred a
| a > minBound = a - 1
| otherwise = predError "Word256"
toEnum i
| minBound <= i && i <= maxBound = Word256 $ fromIntegral i
| otherwise = toEnumError "Word256" i (minBound, maxBound :: Word256)
fromEnum a@(Word256 i)
| i <= fromIntegral (maxBound :: Int) = fromIntegral i
| otherwise = fromEnumError "Word256" a
instance Real Word256 where
toRational (Word256 i) = toRational i
instance Integral Word256 where
quotRem (Word256 a) (Word256 b) = let (i, j) = quotRem a b
in (Word256 i, Word256 j)
toInteger (Word256 i) = i
instance Bits Word256 where
isSigned _ = False
bitSize = finiteBitSize
bitSizeMaybe = Just . finiteBitSize
shift (Word256 a) = Word256 . shift a
rotate (Word256 a) = Word256 . rotate a
(.&.) (Word256 a) (Word256 b) = Word256 (a .&. b)
(.|.) (Word256 a) (Word256 b) = Word256 (a .|. b)
xor (Word256 a) (Word256 b) = Word256 (a `xor` b)
complement (Word256 a) = let (Word256 b) = maxBound
in Word256 (a `xor` b)
popCount = popCountDefault
testBit = testBitDefault
bit = bitDefault
instance FiniteBits Word256 where
finiteBitSize _ = fromIntegral bitsWord256
instance Random Word256 where
randomR ((Word256 a), (Word256 b)) g = let (x, y) = randomR (a, b) g
in (Word256 x, y)
random g = let (x, y) = random g
in (Word256 x, y)
instance BigEndianFixed Word256 where
lengthOfBytes = bitsWord256 `div` 8
decodeBigEndian bs = do
let (d, o) = BS.splitAt (fromIntegral (lengthOfBytes :: Word256)) bs
v <- fromBigEndianFixed d
return (v, o)
fromBigEndianFixed bs = do
guard $ BS.length bs == fromIntegral (lengthOfBytes :: Word256)
return $ fromIntegral $ fromBigEndian bs
toBigEndianFixed (Word256 i) = putBigEndianFixed (lengthOfBytes :: Word256) i
instance BigEndianFixed Word32 where
lengthOfBytes = 32 `div` 8
decodeBigEndian bs = do
let (d, o) = BS.splitAt (fromIntegral (lengthOfBytes :: Word32)) bs
v <- fromBigEndianFixed d
return (v, o)
fromBigEndianFixed bs = do
guard $ BS.length bs == fromIntegral (lengthOfBytes :: Word32)
return $ fromIntegral $ fromBigEndian bs
toBigEndianFixed i = putBigEndianFixed (lengthOfBytes :: Word32) $ toInteger i
-- Classes
class FiniteBits a => BigEndianFixed a where
lengthOfBytes :: a
decodeBigEndian :: ByteString -> Maybe (a, ByteString)
fromBigEndianFixed :: ByteString -> Maybe a
toBigEndianFixed :: a -> ByteString
-- Functions
toWord256 :: Integer -> Maybe Word256
toWord256 i = do
guard $ min <= i && i <= max
return $ Word256 i
where
min = toInteger (minBound :: Word256)
max = toInteger (maxBound :: Word256)
fromBigEndian :: ByteString -> Integer
fromBigEndian = BS.foldr f 0 . BS.reverse
where
f v i = shiftL i 8 .|. fromIntegral v
toBigEndian :: Integer -> ByteString
toBigEndian = BS.reverse . BS.unfoldr f
where
f 0 = Nothing
f i = Just (fromInteger i, shiftR i 8)
putBigEndianFixed :: (Integral n) => n -> Integer -> ByteString
putBigEndianFixed n = padLeft . toBigEndian
where
len = fromIntegral n
padLeft d = BS.replicate (len - BS.length d') 0 `BS.append` d'
where
d' = BS.take len d
| sawatani/bitcoin-hall | src/Fathens/Bitcoin/Binary/Num.hs | gpl-3.0 | 4,334 | 0 | 14 | 1,090 | 1,651 | 847 | 804 | 113 | 2 |
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TypeFamilies #-}
{-# OPTIONS_GHC -fno-warn-unused-imports #-}
-- Module : Network.AWS.StorageGateway.DescribeVTLDevices
-- Copyright : (c) 2013-2014 Brendan Hay <[email protected]>
-- License : This Source Code Form is subject to the terms of
-- the Mozilla Public License, v. 2.0.
-- A copy of the MPL can be found in the LICENSE file or
-- you can obtain it at http://mozilla.org/MPL/2.0/.
-- Maintainer : Brendan Hay <[email protected]>
-- Stability : experimental
-- Portability : non-portable (GHC extensions)
--
-- Derived from AWS service descriptions, licensed under Apache 2.0.
-- | Returns a description of virtual tape library (VTL) devices for the specified
-- gateway. In the response, AWS Storage Gateway returns VTL device information.
--
-- The list of VTL devices must be from one gateway.
--
-- <http://docs.aws.amazon.com/storagegateway/latest/APIReference/API_DescribeVTLDevices.html>
module Network.AWS.StorageGateway.DescribeVTLDevices
(
-- * Request
DescribeVTLDevices
-- ** Request constructor
, describeVTLDevices
-- ** Request lenses
, dvtldGatewayARN
, dvtldLimit
, dvtldMarker
, dvtldVTLDeviceARNs
-- * Response
, DescribeVTLDevicesResponse
-- ** Response constructor
, describeVTLDevicesResponse
-- ** Response lenses
, dvtldrGatewayARN
, dvtldrMarker
, dvtldrVTLDevices
) where
import Network.AWS.Prelude
import Network.AWS.Request.JSON
import Network.AWS.StorageGateway.Types
import qualified GHC.Exts
data DescribeVTLDevices = DescribeVTLDevices
{ _dvtldGatewayARN :: Text
, _dvtldLimit :: Maybe Nat
, _dvtldMarker :: Maybe Text
, _dvtldVTLDeviceARNs :: List "VTLDeviceARNs" Text
} deriving (Eq, Ord, Read, Show)
-- | 'DescribeVTLDevices' constructor.
--
-- The fields accessible through corresponding lenses are:
--
-- * 'dvtldGatewayARN' @::@ 'Text'
--
-- * 'dvtldLimit' @::@ 'Maybe' 'Natural'
--
-- * 'dvtldMarker' @::@ 'Maybe' 'Text'
--
-- * 'dvtldVTLDeviceARNs' @::@ ['Text']
--
describeVTLDevices :: Text -- ^ 'dvtldGatewayARN'
-> DescribeVTLDevices
describeVTLDevices p1 = DescribeVTLDevices
{ _dvtldGatewayARN = p1
, _dvtldVTLDeviceARNs = mempty
, _dvtldMarker = Nothing
, _dvtldLimit = Nothing
}
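-- For illustration only (the ARN below is made up): a request for at most ten
-- devices of a single gateway can be built from the smart constructor and the
-- lenses, e.g.
--
-- > describeVTLDevices "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B"
-- >     & dvtldLimit ?~ 10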
dvtldGatewayARN :: Lens' DescribeVTLDevices Text
dvtldGatewayARN = lens _dvtldGatewayARN (\s a -> s { _dvtldGatewayARN = a })
-- | Specifies that the number of VTL devices described be limited to the
-- specified number.
dvtldLimit :: Lens' DescribeVTLDevices (Maybe Natural)
dvtldLimit = lens _dvtldLimit (\s a -> s { _dvtldLimit = a }) . mapping _Nat
-- | An opaque string that indicates the position at which to begin describing the
-- VTL devices.
dvtldMarker :: Lens' DescribeVTLDevices (Maybe Text)
dvtldMarker = lens _dvtldMarker (\s a -> s { _dvtldMarker = a })
-- | An array of strings, where each string represents the Amazon Resource Name
-- (ARN) of a VTL device.
--
-- All of the specified VTL devices must be from the same gateway. If no VTL
-- devices are specified, the result will contain all devices on the specified
-- gateway.
dvtldVTLDeviceARNs :: Lens' DescribeVTLDevices [Text]
dvtldVTLDeviceARNs =
lens _dvtldVTLDeviceARNs (\s a -> s { _dvtldVTLDeviceARNs = a })
. _List
data DescribeVTLDevicesResponse = DescribeVTLDevicesResponse
{ _dvtldrGatewayARN :: Maybe Text
, _dvtldrMarker :: Maybe Text
, _dvtldrVTLDevices :: List "VTLDevices" VTLDevice
} deriving (Eq, Read, Show)
-- | 'DescribeVTLDevicesResponse' constructor.
--
-- The fields accessible through corresponding lenses are:
--
-- * 'dvtldrGatewayARN' @::@ 'Maybe' 'Text'
--
-- * 'dvtldrMarker' @::@ 'Maybe' 'Text'
--
-- * 'dvtldrVTLDevices' @::@ ['VTLDevice']
--
describeVTLDevicesResponse :: DescribeVTLDevicesResponse
describeVTLDevicesResponse = DescribeVTLDevicesResponse
{ _dvtldrGatewayARN = Nothing
, _dvtldrVTLDevices = mempty
, _dvtldrMarker = Nothing
}
dvtldrGatewayARN :: Lens' DescribeVTLDevicesResponse (Maybe Text)
dvtldrGatewayARN = lens _dvtldrGatewayARN (\s a -> s { _dvtldrGatewayARN = a })
-- | An opaque string that indicates the position at which the VTL devices that
-- were fetched for description ended. Use the marker in your next request to
-- fetch the next set of VTL devices in the list. If there are no more VTL
-- devices to describe, this field does not appear in the response.
dvtldrMarker :: Lens' DescribeVTLDevicesResponse (Maybe Text)
dvtldrMarker = lens _dvtldrMarker (\s a -> s { _dvtldrMarker = a })
-- | An array of VTL device objects composed of the Amazon Resource Name(ARN) of
-- the VTL devices.
dvtldrVTLDevices :: Lens' DescribeVTLDevicesResponse [VTLDevice]
dvtldrVTLDevices = lens _dvtldrVTLDevices (\s a -> s { _dvtldrVTLDevices = a }) . _List
instance ToPath DescribeVTLDevices where
toPath = const "/"
instance ToQuery DescribeVTLDevices where
toQuery = const mempty
instance ToHeaders DescribeVTLDevices
instance ToJSON DescribeVTLDevices where
toJSON DescribeVTLDevices{..} = object
[ "GatewayARN" .= _dvtldGatewayARN
, "VTLDeviceARNs" .= _dvtldVTLDeviceARNs
, "Marker" .= _dvtldMarker
, "Limit" .= _dvtldLimit
]
instance AWSRequest DescribeVTLDevices where
type Sv DescribeVTLDevices = StorageGateway
type Rs DescribeVTLDevices = DescribeVTLDevicesResponse
request = post "DescribeVTLDevices"
response = jsonResponse
instance FromJSON DescribeVTLDevicesResponse where
parseJSON = withObject "DescribeVTLDevicesResponse" $ \o -> DescribeVTLDevicesResponse
<$> o .:? "GatewayARN"
<*> o .:? "Marker"
<*> o .:? "VTLDevices" .!= mempty
instance AWSPager DescribeVTLDevices where
page rq rs
| stop (rs ^. dvtldrMarker) = Nothing
| otherwise = (\x -> rq & dvtldMarker ?~ x)
<$> (rs ^. dvtldrMarker)
| dysinger/amazonka | amazonka-storagegateway/gen/Network/AWS/StorageGateway/DescribeVTLDevices.hs | mpl-2.0 | 6,420 | 0 | 14 | 1,374 | 895 | 528 | 367 | 92 | 1 |
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# OPTIONS_GHC -fno-warn-duplicate-exports #-}
{-# OPTIONS_GHC -fno-warn-unused-binds #-}
{-# OPTIONS_GHC -fno-warn-unused-imports #-}
-- |
-- Module : Network.Google.Resource.ConsumerSurveys.MobileApppanels.Update
-- Copyright : (c) 2015-2016 Brendan Hay
-- License : Mozilla Public License, v. 2.0.
-- Maintainer : Brendan Hay <[email protected]>
-- Stability : auto-generated
-- Portability : non-portable (GHC extensions)
--
-- Updates a MobileAppPanel. Currently the only property that can be
-- updated is the owners property.
--
-- /See:/ <https://developers.google.com/surveys/ Consumer Surveys API Reference> for @consumersurveys.mobileapppanels.update@.
module Network.Google.Resource.ConsumerSurveys.MobileApppanels.Update
(
-- * REST Resource
MobileApppanelsUpdateResource
-- * Creating a Request
, mobileApppanelsUpdate
, MobileApppanelsUpdate
-- * Request Lenses
, mauPayload
, mauPanelId
) where
import Network.Google.ConsumerSurveys.Types
import Network.Google.Prelude
-- | A resource alias for @consumersurveys.mobileapppanels.update@ method which the
-- 'MobileApppanelsUpdate' request conforms to.
type MobileApppanelsUpdateResource =
"consumersurveys" :>
"v2" :>
"mobileAppPanels" :>
Capture "panelId" Text :>
QueryParam "alt" AltJSON :>
ReqBody '[JSON] MobileAppPanel :>
Put '[JSON] MobileAppPanel
-- | Updates a MobileAppPanel. Currently the only property that can be
-- updated is the owners property.
--
-- /See:/ 'mobileApppanelsUpdate' smart constructor.
data MobileApppanelsUpdate =
MobileApppanelsUpdate'
{ _mauPayload :: !MobileAppPanel
, _mauPanelId :: !Text
}
deriving (Eq, Show, Data, Typeable, Generic)
-- | Creates a value of 'MobileApppanelsUpdate' with the minimum fields required to make a request.
--
-- Use one of the following lenses to modify other fields as desired:
--
-- * 'mauPayload'
--
-- * 'mauPanelId'
mobileApppanelsUpdate
:: MobileAppPanel -- ^ 'mauPayload'
-> Text -- ^ 'mauPanelId'
-> MobileApppanelsUpdate
mobileApppanelsUpdate pMauPayload_ pMauPanelId_ =
MobileApppanelsUpdate'
{_mauPayload = pMauPayload_, _mauPanelId = pMauPanelId_}
-- | Multipart request metadata.
mauPayload :: Lens' MobileApppanelsUpdate MobileAppPanel
mauPayload
= lens _mauPayload (\ s a -> s{_mauPayload = a})
-- | External URL ID for the panel.
mauPanelId :: Lens' MobileApppanelsUpdate Text
mauPanelId
= lens _mauPanelId (\ s a -> s{_mauPanelId = a})
instance GoogleRequest MobileApppanelsUpdate where
type Rs MobileApppanelsUpdate = MobileAppPanel
type Scopes MobileApppanelsUpdate =
'["https://www.googleapis.com/auth/consumersurveys",
"https://www.googleapis.com/auth/userinfo.email"]
requestClient MobileApppanelsUpdate'{..}
= go _mauPanelId (Just AltJSON) _mauPayload
consumerSurveysService
where go
= buildClient
(Proxy :: Proxy MobileApppanelsUpdateResource)
mempty
| brendanhay/gogol | gogol-consumersurveys/gen/Network/Google/Resource/ConsumerSurveys/MobileApppanels/Update.hs | mpl-2.0 | 3,476 | 0 | 13 | 737 | 387 | 234 | 153 | 63 | 1 |
{-# LANGUAGE BangPatterns
, FlexibleContexts #-}
module Vision.Image.Parallel (computeP) where
import Control.Concurrent (
forkIO, getNumCapabilities, newEmptyMVar, putMVar, takeMVar)
import Control.Monad.ST (ST, stToIO)
import Data.Vector (enumFromN, forM, forM_)
import Foreign.Storable (Storable)
import System.IO.Unsafe (unsafePerformIO)
import Vision.Image.Class (MaskedImage (..), Image (..), (!))
import Vision.Image.Type (Manifest (..))
import Vision.Image.Mutable (MutableManifest, linearWrite, new, unsafeFreeze)
import Vision.Primitive (Z (..), (:.) (..), ix2)
-- | Parallel version of 'compute'.
--
-- Computes the value of an image into a manifest representation in parallel.
--
-- The monad ensures that the image is fully evaluated before continuing.
computeP :: (Monad m, Image i, Storable (ImagePixel i))
=> i -> m (Manifest (ImagePixel i))
computeP !src =
return $! unsafePerformIO $ do
dst <- stToIO newManifest
-- Forks 'nCapabilities' threads.
childs <- forM (enumFromN 0 nCapabilities) $ \c -> do
child <- newEmptyMVar
_ <- forkIO $ do
let nLines | c == 0 = nLinesPerThread + remain
| otherwise = nLinesPerThread
stToIO $ fillFromN dst (c * nLinesPerThread) nLines
-- Sends a signal to the main thread.
putMVar child ()
return child
-- Waits for all threads to finish.
forM_ childs takeMVar
stToIO $ unsafeFreeze dst
where
!size@(Z :. h :. w) = shape src
!nCapabilities = unsafePerformIO getNumCapabilities
!(nLinesPerThread, remain) = h `quotRem` nCapabilities
-- Computes 'n' lines starting at 'from' of the image.
fillFromN !dst !from !n =
forM_ (enumFromN from n) $ \y -> do
let !lineOffset = y * w
forM_ (enumFromN 0 w) $ \x -> do
let !offset = lineOffset + x
!val = src ! (ix2 y x)
linearWrite dst offset val
newManifest :: Storable p => ST s (MutableManifest p s)
newManifest = new size
{-# INLINE computeP #-} | RaphaelJ/friday | src/Vision/Image/Parallel.hs | lgpl-3.0 | 2,177 | 0 | 23 | 617 | 599 | 317 | 282 | 41 | 1 |
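-- Illustrative usage (the bindings are hypothetical, not part of this
-- module): force a delayed image in parallel before inspecting it.
--
-- > do img <- computeP delayed
-- >    print (shape img)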
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE RecordWildCards #-}
{- |
Module : Neovim.RPC.FunctionCall
Description : Functions for calling functions
Copyright : (c) Sebastian Witte
License : Apache-2.0
Maintainer : [email protected]
Stability : experimental
-}
module Neovim.RPC.FunctionCall (
acall,
acall',
scall,
scall',
atomically',
wait,
wait',
waitErr,
waitErr',
respond,
) where
import Neovim.Classes
import Neovim.Context
import qualified Neovim.Context.Internal as Internal
import Neovim.Plugin.Classes (FunctionName)
import Neovim.Plugin.IPC.Classes
import qualified Neovim.RPC.Classes as MsgpackRPC
import Control.Applicative
import Control.Concurrent.STM
import Control.Monad.Reader
import Data.MessagePack
import Data.Monoid
import qualified Text.PrettyPrint.ANSI.Leijen as P
import Prelude
-- | Simply fail and call 'error' in case an unexpected exception is thrown.
-- This fails with a runtime exception. It is used by the Template Haskell API
-- generator for functions that are defined as not being able to fail. If this
-- exception occurs, it is a bug in neovim.
unexpectedException :: String -> err -> a
unexpectedException fn _ = error $
"Function threw an exception even though it was declared not to throw one: "
<> fn
-- | Strip the error result from the function call. This should only be used by
-- the Template Haskell API generated code for functions that declare
-- themselves as unfailable.
withIgnoredException :: (Functor f, NvimObject result)
=> FunctionName -- ^ For better error messages
-> f (Either err result)
-> f result
withIgnoredException fn = fmap (either ((unexpectedException . show) fn) id)
-- | Helper function that concurrently puts a 'Message' in the event queue and returns an 'STM' action that returns the result.
acall :: (NvimObject result)
=> FunctionName
-> [Object]
-> Neovim r st (STM (Either Object result))
acall fn parameters = do
q <- Internal.asks' Internal.eventQueue
mv <- liftIO newEmptyTMVarIO
timestamp <- liftIO getCurrentTime
atomically' . writeTQueue q . SomeMessage $ FunctionCall fn parameters mv timestamp
return $ convertObject <$> readTMVar mv
where
convertObject = \case
Left e -> Left e
Right o -> case fromObject o of
Left e -> Left (toObject e)
Right r -> Right r
-- | Helper function similar to 'acall' that throws a runtime exception if the
-- result is an error object.
acall' :: (NvimObject result)
=> FunctionName
-> [Object]
-> Neovim r st (STM result)
acall' fn parameters = withIgnoredException fn <$> acall fn parameters
-- | Call a neovim function synchronously. This function blocks until the
-- result is available.
scall :: (NvimObject result)
=> FunctionName
-> [Object] -- ^ Parameters in an 'Object' array
-> Neovim r st (Either Object result)
-- ^ result value of the call or the thrown exception
scall fn parameters = acall fn parameters >>= atomically'
-- | Helper function similar to 'scall' that throws a runtime exception if the
-- result is an error object.
scall' :: NvimObject result => FunctionName -> [Object] -> Neovim r st result
scall' fn = withIgnoredException fn . scall fn
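-- An illustrative usage sketch (hypothetical; assumes some @fn ::
-- FunctionName@ and a suitable plugin context are in scope): start a call
-- asynchronously, do other work, then block on the result.
--
-- > example fn = do
-- >     pending <- acall' fn [toObject (42 :: Int)]
-- >     -- ... other work ...
-- >     atomically' pending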
-- | Lifted variant of 'atomically'.
atomically' :: (MonadIO io) => STM result -> io result
atomically' = liftIO . atomically
-- | Wait for the result of the STM action.
--
-- This action possibly blocks as it is an alias for
-- @ \ioSTM -> ioSTM >>= liftIO . atomically@.
wait :: Neovim r st (STM result) -> Neovim r st result
wait = (=<<) atomically'
-- | Variant of 'wait' that discards the result.
wait' :: Neovim r st (STM result) -> Neovim r st ()
wait' = void . wait
-- | Wait for the result of the 'STM' action and call 'err' with the
-- pretty-printed error, prefixed by the given location string, if the action
-- returned an error.
waitErr :: (P.Pretty e)
=> String -- ^ Prefix error message with this.
-> Neovim r st (STM (Either e result)) -- ^ Function call to neovim
-> Neovim r st result
waitErr loc act = wait act >>= either (err . (P.<>) (P.text loc) . P.pretty) return
-- | 'waitErr' that discards the result.
waitErr' :: (P.Pretty e)
=> String
-> Neovim r st (STM (Either e result))
-> Neovim r st ()
waitErr' loc = void . waitErr loc
-- | Send the result back to the neovim instance.
respond :: (NvimObject result) => Request -> Either String result -> Neovim r st ()
respond Request{..} result = do
q <- Internal.asks' Internal.eventQueue
atomically' . writeTQueue q . SomeMessage . MsgpackRPC.Response reqId $
either (Left . toObject) (Right . toObject) result
| lslah/nvim-hs | library/Neovim/RPC/FunctionCall.hs | apache-2.0 | 4,925 | 0 | 15 | 1,234 | 1,025 | 542 | 483 | 83 | 3 |
module Walk.A293689 (a293689, a293689_list) where
import Helpers.Primes (isPrime)
-- The Prime Ant
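-- Each pair of bracketed lines below is one step of the walk: the top list is
-- the tape ahead of the ant and the bottom list is the tape at and behind the
-- ant, with the ant's current number first.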
-- [3 4 5 6 ...]
-- [2]
-- [4 5 6 ...]
-- [3 2]
-- [5 6 7 ...]
-- [4 3 2]
-- [2 5 6 7 ...]
-- [5 2]
a293689 :: Int -> Int
a293689 = (!!) a293689_list
a293689_list :: [Int]
a293689_list = scanl (+) 0 firstDifferences where
firstDifferences = recurse [3..] [2] where
recurse (f:fs) (c:cs)
| isPrime c = 1 : recurse fs (f:c:cs)
| otherwise = (-1) : recurse newFs newCs where
d = leastPrimeDivisor c
newCs = case cs of (h:t) -> h + d : t
newFs = c `div` d : f : fs
-- least prime divisor
leastPrimeDivisor :: Int -> Int
leastPrimeDivisor n = head $ filter (\i -> n `mod` i == 0) [2..]
------------------------------------------------------------------------------
-- Another idea for the prime ant:
-- -- Find the composite furthest to the right. Divide it by its least prime
-- -- divisor, and increment the number to the right of it.
--
-- primeAnt = recurse ([3, 2], [4..]) where
-- recurse (beginning, end) = (1 + length beginning) : recurse (nextGen beginning end)
--
-- leastPrimeDivisor n = head $ filter (\i -> n `mod` i == 0) [2..]
--
-- -- [2] [4, 2, 5..] => [3] [2, 2, 5..]
-- nextGen (h1:l1) (h2:l2) =
-- transferLists (h1+1:l1) ((h2 `div` leastPrimeDivisor h2):l2)
--
-- -- [3] [2, 2, 5, 6..] => [5, 2, 2, 3] [6..]
-- -- [6, 2, 2, 3] [3, 7..] => [2, 2, 3] [6, 3, 7..]
-- transferLists (s:begList) (c:endList)
-- | not $ isPrime s = (begList, s : c : endList)
-- | isPrime c = transferLists (c:s:begList) endList
-- | otherwise = (s:begList, c:endList)
--
--
-- -- 2, 3, 4, 5, 6, 7, 8, ...
-- -- ^ a(1) = 3
-- -- 4, 2
-- -- ^ a(2) = 2
-- -- 3, 2, 2, 5, 6
-- -- ^ a(3) = 5
-- -- 6, 3
-- -- ^ a(4) = 4
-- -- 3, 3, 3, 7, 8
-- -- ^ a(5) = 7
-- -- 8, 4
-- -- ^ a(6) = 6
| peterokagey/haskellOEIS | src/Walk/A293689.hs | apache-2.0 | 1,996 | 0 | 15 | 594 | 311 | 189 | 122 | 15 | 1 |
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE helpset PUBLIC "-//Sun Microsystems Inc.//DTD JavaHelp HelpSet Version 2.0//EN" "http://java.sun.com/products/javahelp/helpset_2_0.dtd">
<helpset version="2.0" xml:lang="id-ID">
<title>Passive Scan Rules - Alpha | ZAP Extension</title>
<maps>
<homeID>top</homeID>
<mapref location="map.jhm"/>
</maps>
<view>
<name>TOC</name>
<label>Contents</label>
<type>org.zaproxy.zap.extension.help.ZapTocView</type>
<data>toc.xml</data>
</view>
<view>
<name>Index</name>
<label>Index</label>
<type>javax.help.IndexView</type>
<data>index.xml</data>
</view>
<view>
<name>Search</name>
<label>Telusuri</label>
<type>javax.help.SearchView</type>
<data engine="com.sun.java.help.search.DefaultSearchEngine">
JavaHelpSearch
</data>
</view>
<view>
<name>Favorites</name>
<label>Favorites</label>
<type>javax.help.FavoritesView</type>
</view>
</helpset> | 0xkasun/security-tools | src/org/zaproxy/zap/extension/pscanrulesAlpha/resources/help_id_ID/helpset_id_ID.hs | apache-2.0 | 990 | 80 | 67 | 163 | 422 | 213 | 209 | -1 | -1 |
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE TemplateHaskell #-}
module Openshift.V1.BuildSource where
import GHC.Generics
import Data.Text
import Openshift.V1.BinaryBuildSource
import Openshift.V1.GitBuildSource
import Openshift.V1.ImageSource
import Openshift.V1.LocalObjectReference
import Openshift.V1.SecretBuildSource
import Data.Aeson.TH (deriveJSON, defaultOptions, fieldLabelModifier)
-- |
data BuildSource = BuildSource
{ type_ :: Text -- ^ type of build input to accept
, binary :: Maybe BinaryBuildSource -- ^ the binary will be provided by the builder as an archive or file to be placed within the input directory; allows Dockerfile to be optionally set; may not be set with git source type also set
, dockerfile :: Maybe Text -- ^ the contents of a Dockerfile to build; FROM may be overridden by your strategy source, and additional ENV from your strategy will be placed before the rest of the Dockerfile stanzas
, git :: Maybe GitBuildSource -- ^ optional information about git build source
, images :: Maybe [ImageSource] -- ^ optional images for build source.
, contextDir :: Maybe Text -- ^ specifies sub-directory where the source code for the application exists, allows for sources to be built from a directory other than the root of a repository
, sourceSecret :: Maybe LocalObjectReference -- ^ supported auth methods are: ssh-privatekey
, secrets :: [SecretBuildSource] -- ^ list of build secrets and destination directories
} deriving (Show, Eq, Generic)
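-- The field label modifier below drops a single trailing underscore, so the
-- record field 'type_' (named with an underscore because 'type' is a reserved
-- word) is serialised under the JSON key "type".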
$(deriveJSON defaultOptions{fieldLabelModifier = (\n -> if Prelude.last n == '_' then Prelude.take ((Prelude.length n) - 1 ) n else n)} ''BuildSource)
| minhdoboi/deprecated-openshift-haskell-api | openshift/lib/Openshift/V1/BuildSource.hs | apache-2.0 | 1,815 | 0 | 18 | 303 | 244 | 148 | 96 | 26 | 0 |
module FractalFlame.Flam3.Flame
( postProcessFlame
, module FractalFlame.Flam3.Types.Flame
)
where
import System.Random
import FractalFlame.Flam3.Types.Flame
import FractalFlame.Generator.Types.Generator
import FractalFlame.Palette
import FractalFlame.Symmetry
-- | Postprocessing steps to get a Flame ready for rendering after it has been parsed.
postProcessFlame :: StdGen -> Flame -> (Flame, StdGen)
postProcessFlame s = (addSymmetry s) . addPalette
-- | Convert a flame with a list of colors to a flame with a Palette
addPalette :: Flame -> Flame
addPalette flame@(Flame {colors=(ColorList colors')}) =
let palette = ColorPalette $ buildPalette colors'
in
flame {colors = palette}
| anthezium/fractal_flame_renderer_haskell | FractalFlame/Flam3/Flame.hs | bsd-2-clause | 705 | 0 | 12 | 108 | 152 | 89 | 63 | 14 | 1 |
{-# LANGUAGE OverloadedStrings #-}
module TestText where
import Test.Hspec
import Pretty()
import Types
import TestCommon
testBuiltInText :: Spec
testBuiltInText =
describe "Built-in Text Functions" $ do
describe "CONCATENATE" $
checkBuiltIn BiCheck
{ emit = "CONCATENATE([First Name],\" \",[Last Name])"
, value = Right $ VText "John Smith"
, expr = "concatenate [fn,\" \",ln];"
, defs = "let fn = field \"First Name\" : Text as \"John\";\
\let ln = field \"Last Name\" : Text as \"Smith\";"
}
describe "FIND" $ do
checkBuiltIn BiCheck
{ emit = "FIND(\"Sci\",[Subject],0)"
, value = Right $ VNum 10
, expr = "find \"Sci\" sub 0;"
, defs = "let sub = field \"Subject\" : Text as \"Arts and Sciences\";"
}
checkBuiltIn BiCheck
{ emit = "FIND(\"s\",[Subject],5)"
, value = Right $ VNum 17
, expr = "find \"s\" sub 5;"
, defs = "let sub = field \"Subject\" : Text as \"Arts and Sciences\";"
}
describe "LEFT" $ do
checkBuiltIn BiCheck
{ emit = "LEFT([Text],4)"
, value = Right $ VText "Sale"
, expr = "left txt 4;"
, defs = "let txt = field \"Text\" : Text as \"Sale Price\";"
}
checkBuiltIn BiCheck
{ emit = "LEFT([Text],1)"
, value = Right $ VText "S"
, expr = "left txt 1;"
, defs = "let txt = field \"Text\" : Text as \"Sweden\";"
}
describe "LEN" $
checkBuiltIn BiCheck
{ emit = "LEN([Last Name])"
, value = Right $ VNum 5
, expr = "len ln;"
, defs = "let ln = field \"Last Name\" : Text as \"Jones\";"
}
describe "LOWER" $ do
checkBuiltIn BiCheck
{ emit = "LOWER([Name])"
, value = Right $ VText "jake miller"
, expr = "lower nm;"
, defs = "let nm = field \"Name\" : Text as \"Jake Miller\";"
}
checkBuiltIn BiCheck
{ emit = "LOWER([Email Address])"
, value = Right $ VText "[email protected]"
, expr = "lower ea;"
, defs = "let ea = field \"Email Address\" : Text \
\as \"[email protected]\";"
}
describe "PROPER" $ do
checkBuiltIn BiCheck
{ emit = "PROPER([Last Name])"
, value = Right $ VText "Jane Pearson-Wyatt"
, expr = "proper ln;"
, defs = "let ln = field \"Last Name\" : Text \
\as \"jane pearson-wyatt\";"
}
checkBuiltIn BiCheck
{ emit = "PROPER([Last Name])"
, value = Right $ VText "O’Neil"
, expr = "proper ln;"
, defs = "let ln = field \"Last Name\" : Text as \"O’NEIL\";"
}
checkBuiltIn BiCheck
{ emit = "PROPER([Last Name])"
, value = Right $ VText "St. John"
, expr = "proper ln;"
, defs = "let ln = field \"Last Name\" : Text as \"ST. JOHN\";"
}
checkBuiltIn BiCheck
{ emit = "PROPER([Web Page])"
, value = Right $ VText "Www.Archer-Tech.Com"
, expr = "proper wp;"
, defs = "let wp = field \"Web Page\" : Text as \"www.archer-tech.com\";"
}
checkBuiltIn BiCheck
{ emit = "PROPER([Equipment Note])"
, value = Right $ VText "This Is Mike’S Laptop."
, expr = "proper en;"
, defs = "let en = field \"Equipment Note\" : Text as\
\ \"This is Mike’s laptop.\";"
}
describe "RIGHT" $
checkBuiltIn BiCheck
{ emit = "RIGHT([Department Name],4)"
, value = Right $ VText "ting"
, expr = "right dn 4;"
, defs = "let dn = field \"Department Name\" : Text as \"Marketing\";"
}
{- checkBuiltIn BiCheck
{ emit = "RIGHT([Department Name],(-1))"
, value = Left $ EvBuiltInError ""
, expr = "right dn -1;"
, defs = "let dn = field \"Department Name\" : Text as \"Marketing\";"
} -}
describe "SUBSTRING" $
checkBuiltIn BiCheck
{ emit = "SUBSTRING([Department Name],1,4)"
, value = Right $ VText "Mark"
, expr = "substring dn 1 4;"
, defs = "let dn = field \"Department Name\" : Text as \"Marketing\";"
}
describe "TRIM" $
checkBuiltIn BiCheck
{ emit = "TRIM([Asset Description])"
, value = Right $ VText "The HR-DB Server is used to store our human resources information."
, expr = "trim ad;"
, defs = "let ad = field \"Asset Description\" : Text as \" The HR-DB Server is used to store our human resources information. \";"
}
describe "UPPER" $ do
checkBuiltIn BiCheck
{ emit = "UPPER([Name])"
, value = Right $ VText "JAKE MILLER"
, expr = "upper nm;"
, defs = "let nm = field \"Name\" : Text as \"Jake Miller\";"
}
checkBuiltIn BiCheck
{ emit = "UPPER([Web Site])"
, value = Right $ VText "WWW.ARCHER-TECH.COM"
, expr = "upper ws;"
, defs = "let ws = field \"Web Site\" : Text as \"www.archer-tech.com\";"
}
| ahodgen/archer-calc | tests/TestText.hs | bsd-2-clause | 5,687 | 0 | 15 | 2,290 | 827 | 456 | 371 | 109 | 1 |
{-
(c) The University of Glasgow 2006-2008
(c) The GRASP/AQUA Project, Glasgow University, 1993-1998
-}
{-# LANGUAGE CPP, NondecreasingIndentation #-}
-- | Module for constructing @ModIface@ values (interface files),
-- writing them to disk and comparing two versions to see if
-- recompilation is required.
module MkIface (
mkIface, -- Build a ModIface from a ModGuts,
-- including computing version information
mkIfaceTc,
writeIfaceFile, -- Write the interface file
checkOldIface, -- See if recompilation is required, by
-- comparing version information
RecompileRequired(..), recompileRequired,
tyThingToIfaceDecl -- Converting things to their Iface equivalents
) where
{-
-----------------------------------------------
Recompilation checking
-----------------------------------------------
A complete description of how recompilation checking works can be
found in the wiki commentary:
http://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/RecompilationAvoidance
Please read the above page for a top-down description of how this all
works. Notes below cover specific issues related to the implementation.
Basic idea:
* In the mi_usages information in an interface, we record the
fingerprint of each free variable of the module
* In mkIface, we compute the fingerprint of each exported thing A.f.
For each external thing that A.f refers to, we include the fingerprint
of the external reference when computing the fingerprint of A.f. So
if anything that A.f depends on changes, then A.f's fingerprint will
change.
Also record any dependent files added with
* addDependentFile
* #include
* -optP-include
* In checkOldIface we compare the mi_usages for the module with
the actual fingerprint of each thing recorded in mi_usages
-}
#include "HsVersions.h"
import IfaceSyn
import LoadIface
import FlagChecker
import Desugar ( mkUsageInfo, mkUsedNames, mkDependencies )
import Id
import IdInfo
import Demand
import Coercion( tidyCo )
import Annotations
import CoreSyn
import Class
import TyCon
import CoAxiom
import ConLike
import DataCon
import PatSyn
import Type
import TcType
import InstEnv
import FamInstEnv
import TcRnMonad
import HsSyn
import HscTypes
import Finder
import DynFlags
import VarEnv
import VarSet
import Var
import Name
import Avail
import RdrName
import NameEnv
import NameSet
import Module
import BinIface
import ErrUtils
import Digraph
import SrcLoc
import Outputable
import BasicTypes hiding ( SuccessFlag(..) )
import Unique
import Util hiding ( eqListBy )
import FastString
import FastStringEnv
import Maybes
import Binary
import Fingerprint
import Exception
import Control.Monad
import Data.Function
import Data.List
import qualified Data.Map as Map
import Data.Ord
import Data.IORef
import System.Directory
import System.FilePath
{-
************************************************************************
* *
\subsection{Completing an interface}
* *
************************************************************************
-}
mkIface :: HscEnv
-> Maybe Fingerprint -- The old fingerprint, if we have it
-> ModDetails -- The trimmed, tidied interface
-> ModGuts -- Usages, deprecations, etc
-> IO (ModIface, -- The new one
Bool) -- True <=> there was an old Iface, and the
-- new one is identical, so no need
-- to write it
mkIface hsc_env maybe_old_fingerprint mod_details
ModGuts{ mg_module = this_mod,
mg_hsc_src = hsc_src,
mg_usages = usages,
mg_used_th = used_th,
mg_deps = deps,
mg_rdr_env = rdr_env,
mg_fix_env = fix_env,
mg_warns = warns,
mg_hpc_info = hpc_info,
mg_safe_haskell = safe_mode,
mg_trust_pkg = self_trust
}
= mkIface_ hsc_env maybe_old_fingerprint
this_mod hsc_src used_th deps rdr_env fix_env
warns hpc_info self_trust
safe_mode usages mod_details
-- | make an interface from the results of typechecking only. Useful
-- for non-optimising compilation, or where we aren't generating any
-- object code at all ('HscNothing').
mkIfaceTc :: HscEnv
-> Maybe Fingerprint -- The old fingerprint, if we have it
-> SafeHaskellMode -- The safe haskell mode
-> ModDetails -- gotten from mkBootModDetails, probably
-> TcGblEnv -- Usages, deprecations, etc
-> IO (ModIface, Bool)
mkIfaceTc hsc_env maybe_old_fingerprint safe_mode mod_details
tc_result@TcGblEnv{ tcg_mod = this_mod,
tcg_src = hsc_src,
tcg_imports = imports,
tcg_rdr_env = rdr_env,
tcg_fix_env = fix_env,
tcg_warns = warns,
tcg_hpc = other_hpc_info,
tcg_th_splice_used = tc_splice_used,
tcg_dependent_files = dependent_files
}
= do
let used_names = mkUsedNames tc_result
deps <- mkDependencies tc_result
let hpc_info = emptyHpcInfo other_hpc_info
used_th <- readIORef tc_splice_used
dep_files <- (readIORef dependent_files)
usages <- mkUsageInfo hsc_env this_mod (imp_mods imports) used_names dep_files
mkIface_ hsc_env maybe_old_fingerprint
this_mod hsc_src
used_th deps rdr_env
fix_env warns hpc_info
(imp_trust_own_pkg imports) safe_mode usages mod_details
mkIface_ :: HscEnv -> Maybe Fingerprint -> Module -> HscSource
-> Bool -> Dependencies -> GlobalRdrEnv
-> NameEnv FixItem -> Warnings -> HpcInfo
-> Bool
-> SafeHaskellMode
-> [Usage]
-> ModDetails
-> IO (ModIface, Bool)
mkIface_ hsc_env maybe_old_fingerprint
this_mod hsc_src used_th deps rdr_env fix_env src_warns
hpc_info pkg_trust_req safe_mode usages
ModDetails{ md_insts = insts,
md_fam_insts = fam_insts,
md_rules = rules,
md_anns = anns,
md_vect_info = vect_info,
md_types = type_env,
md_exports = exports }
-- NB: notice that mkIface does not look at the bindings
-- only at the TypeEnv. The previous Tidy phase has
-- put exactly the info into the TypeEnv that we want
-- to expose in the interface
= do
let entities = typeEnvElts type_env
decls = [ tyThingToIfaceDecl entity
| entity <- entities,
let name = getName entity,
not (isImplicitTyThing entity),
-- No implicit Ids and class tycons in the interface file
not (isWiredInName name),
-- Nor wired-in things; the compiler knows about them anyhow
nameIsLocalOrFrom this_mod name ]
-- Sigh: see Note [Root-main Id] in TcRnDriver
fixities = sortBy (comparing fst)
[(occ,fix) | FixItem occ fix <- nameEnvElts fix_env]
-- The order of fixities returned from nameEnvElts is not
-- deterministic, so we sort by OccName to canonicalize it.
-- See Note [Deterministic UniqFM] in UniqDFM for more details.
warns = src_warns
iface_rules = map coreRuleToIfaceRule rules
iface_insts = map instanceToIfaceInst $ fixSafeInstances safe_mode insts
iface_fam_insts = map famInstToIfaceFamInst fam_insts
iface_vect_info = flattenVectInfo vect_info
trust_info = setSafeMode safe_mode
annotations = map mkIfaceAnnotation anns
sig_of = getSigOf dflags (moduleName this_mod)
intermediate_iface = ModIface {
mi_module = this_mod,
mi_sig_of = sig_of,
mi_hsc_src = hsc_src,
mi_deps = deps,
mi_usages = usages,
mi_exports = mkIfaceExports exports,
-- Sort these lexicographically, so that
-- the result is stable across compilations
mi_insts = sortBy cmp_inst iface_insts,
mi_fam_insts = sortBy cmp_fam_inst iface_fam_insts,
mi_rules = sortBy cmp_rule iface_rules,
mi_vect_info = iface_vect_info,
mi_fixities = fixities,
mi_warns = warns,
mi_anns = annotations,
mi_globals = maybeGlobalRdrEnv rdr_env,
-- Left out deliberately: filled in by addFingerprints
mi_iface_hash = fingerprint0,
mi_mod_hash = fingerprint0,
mi_flag_hash = fingerprint0,
mi_exp_hash = fingerprint0,
mi_used_th = used_th,
mi_orphan_hash = fingerprint0,
mi_orphan = False, -- Always set by addFingerprints, but
-- it's a strict field, so we can't omit it.
mi_finsts = False, -- Ditto
mi_decls = deliberatelyOmitted "decls",
mi_hash_fn = deliberatelyOmitted "hash_fn",
mi_hpc = isHpcUsed hpc_info,
mi_trust = trust_info,
mi_trust_pkg = pkg_trust_req,
-- And build the cached values
mi_warn_fn = mkIfaceWarnCache warns,
mi_fix_fn = mkIfaceFixCache fixities }
(new_iface, no_change_at_all)
<- {-# SCC "versioninfo" #-}
addFingerprints hsc_env maybe_old_fingerprint
intermediate_iface decls
-- Debug printing
dumpIfSet_dyn dflags Opt_D_dump_hi "FINAL INTERFACE"
(pprModIface new_iface)
-- bug #1617: on reload we weren't updating the PrintUnqualified
-- correctly. This stems from the fact that the interface had
-- not changed, so addFingerprints returns the old ModIface
-- with the old GlobalRdrEnv (mi_globals).
let final_iface = new_iface{ mi_globals = maybeGlobalRdrEnv rdr_env }
return (final_iface, no_change_at_all)
where
cmp_rule = comparing ifRuleName
-- Compare these lexicographically by OccName, *not* by unique,
-- because the latter is not stable across compilations:
cmp_inst = comparing (nameOccName . ifDFun)
cmp_fam_inst = comparing (nameOccName . ifFamInstTcName)
dflags = hsc_dflags hsc_env
-- We only fill in mi_globals if the module was compiled to byte
-- code. Otherwise, the compiler may not have retained all the
-- top-level bindings and they won't be in the TypeEnv (see
-- Desugar.addExportFlagsAndRules). The mi_globals field is used
-- by GHCi to decide whether the module has its full top-level
-- scope available. (#5534)
maybeGlobalRdrEnv :: GlobalRdrEnv -> Maybe GlobalRdrEnv
maybeGlobalRdrEnv rdr_env
| targetRetainsAllBindings (hscTarget dflags) = Just rdr_env
| otherwise = Nothing
deliberatelyOmitted :: String -> a
deliberatelyOmitted x = panic ("Deliberately omitted: " ++ x)
ifFamInstTcName = ifFamInstFam
flattenVectInfo (VectInfo { vectInfoVar = vVar
, vectInfoTyCon = vTyCon
, vectInfoParallelVars = vParallelVars
, vectInfoParallelTyCons = vParallelTyCons
}) =
IfaceVectInfo
{ ifaceVectInfoVar = [Var.varName v | (v, _ ) <- varEnvElts vVar]
, ifaceVectInfoTyCon = [tyConName t | (t, t_v) <- nameEnvElts vTyCon, t /= t_v]
, ifaceVectInfoTyConReuse = [tyConName t | (t, t_v) <- nameEnvElts vTyCon, t == t_v]
, ifaceVectInfoParallelVars = [Var.varName v | v <- dVarSetElems vParallelVars]
, ifaceVectInfoParallelTyCons = nameSetElemsStable vParallelTyCons
}
-----------------------------
writeIfaceFile :: DynFlags -> FilePath -> ModIface -> IO ()
writeIfaceFile dflags hi_file_path new_iface
= do createDirectoryIfMissing True (takeDirectory hi_file_path)
writeBinIface dflags hi_file_path new_iface
-- -----------------------------------------------------------------------------
-- Look up parents and versions of Names
-- This is like a global version of the mi_hash_fn field in each ModIface.
-- Given a Name, it finds the ModIface, and then uses mi_hash_fn to get
-- the parent and version info.
mkHashFun
:: HscEnv -- needed to look up versions
-> ExternalPackageState -- ditto
-> (Name -> Fingerprint)
mkHashFun hsc_env eps
= \name ->
let
mod = ASSERT2( isExternalName name, ppr name ) nameModule name
occ = nameOccName name
iface = lookupIfaceByModule (hsc_dflags hsc_env) hpt pit mod `orElse`
pprPanic "lookupVers2" (ppr mod <+> ppr occ)
in
snd (mi_hash_fn iface occ `orElse`
pprPanic "lookupVers1" (ppr mod <+> ppr occ))
where
hpt = hsc_HPT hsc_env
pit = eps_PIT eps
-- ---------------------------------------------------------------------------
-- Compute fingerprints for the interface
addFingerprints
:: HscEnv
-> Maybe Fingerprint -- the old fingerprint, if any
-> ModIface -- The new interface (lacking decls)
-> [IfaceDecl] -- The new decls
-> IO (ModIface, -- Updated interface
Bool) -- True <=> no changes at all;
-- no need to write Iface
addFingerprints hsc_env mb_old_fingerprint iface0 new_decls
= do
eps <- hscEPS hsc_env
let
-- The ABI of a declaration represents everything that is made
-- visible about the declaration that a client can depend on.
-- see IfaceDeclABI below.
declABI :: IfaceDecl -> IfaceDeclABI
declABI decl = (this_mod, decl, extras)
where extras = declExtras fix_fn ann_fn non_orph_rules non_orph_insts
non_orph_fis decl
edges :: [(IfaceDeclABI, Unique, [Unique])]
edges = [ (abi, getUnique (ifName decl), out)
| decl <- new_decls
, let abi = declABI decl
, let out = localOccs $ freeNamesDeclABI abi
]
name_module n = ASSERT2( isExternalName n, ppr n ) nameModule n
localOccs = map (getUnique . getParent . getOccName)
. filter ((== this_mod) . name_module)
. nameSetElems
where getParent occ = lookupOccEnv parent_map occ `orElse` occ
-- maps OccNames to their parents in the current module.
-- e.g. a reference to a constructor must be turned into a reference
-- to the TyCon for the purposes of calculating dependencies.
parent_map :: OccEnv OccName
parent_map = foldr extend emptyOccEnv new_decls
where extend d env =
extendOccEnvList env [ (b,n) | b <- ifaceDeclImplicitBndrs d ]
where n = ifName d
-- strongly-connected groups of declarations, in dependency order
groups = stronglyConnCompFromEdgedVertices edges
global_hash_fn = mkHashFun hsc_env eps
-- how to output Names when generating the data to fingerprint.
-- Here we want to output the fingerprint for each top-level
-- Name, whether it comes from the current module or another
-- module. In this way, the fingerprint for a declaration will
-- change if the fingerprint for anything it refers to (transitively)
-- changes.
mk_put_name :: (OccEnv (OccName,Fingerprint))
-> BinHandle -> Name -> IO ()
mk_put_name local_env bh name
| isWiredInName name = putNameLiterally bh name
-- wired-in names don't have fingerprints
| otherwise
= ASSERT2( isExternalName name, ppr name )
let hash | nameModule name /= this_mod = global_hash_fn name
| otherwise = snd (lookupOccEnv local_env (getOccName name)
`orElse` pprPanic "urk! lookup local fingerprint"
(ppr name)) -- (undefined,fingerprint0))
-- This panic indicates that we got the dependency
-- analysis wrong, because we needed a fingerprint for
-- an entity that wasn't in the environment. To debug
-- it, turn the panic into a trace, uncomment the
-- pprTraces below, run the compile again, and inspect
-- the output and the generated .hi file with
-- --show-iface.
in put_ bh hash
-- take a strongly-connected group of declarations and compute
-- its fingerprint.
fingerprint_group :: (OccEnv (OccName,Fingerprint),
[(Fingerprint,IfaceDecl)])
-> SCC IfaceDeclABI
-> IO (OccEnv (OccName,Fingerprint),
[(Fingerprint,IfaceDecl)])
fingerprint_group (local_env, decls_w_hashes) (AcyclicSCC abi)
= do let hash_fn = mk_put_name local_env
decl = abiDecl abi
--pprTrace "fingerprinting" (ppr (ifName decl) ) $ do
hash <- computeFingerprint hash_fn abi
env' <- extend_hash_env local_env (hash,decl)
return (env', (hash,decl) : decls_w_hashes)
fingerprint_group (local_env, decls_w_hashes) (CyclicSCC abis)
= do let decls = map abiDecl abis
local_env1 <- foldM extend_hash_env local_env
(zip (repeat fingerprint0) decls)
let hash_fn = mk_put_name local_env1
-- pprTrace "fingerprinting" (ppr (map ifName decls) ) $ do
let stable_abis = sortBy cmp_abiNames abis
-- put the cycle in a canonical order
hash <- computeFingerprint hash_fn stable_abis
let pairs = zip (repeat hash) decls
local_env2 <- foldM extend_hash_env local_env pairs
return (local_env2, pairs ++ decls_w_hashes)
-- we have fingerprinted the whole declaration, but we now need
-- to assign fingerprints to all the OccNames that it binds, to
-- use when referencing those OccNames in later declarations.
--
extend_hash_env :: OccEnv (OccName,Fingerprint)
-> (Fingerprint,IfaceDecl)
-> IO (OccEnv (OccName,Fingerprint))
extend_hash_env env0 (hash,d) = do
return (foldr (\(b,fp) env -> extendOccEnv env b (b,fp)) env0
(ifaceDeclFingerprints hash d))
--
(local_env, decls_w_hashes) <-
foldM fingerprint_group (emptyOccEnv, []) groups
-- when calculating fingerprints, we always need to use canonical
-- ordering for lists of things. In particular, the mi_deps has various
-- lists of modules and suchlike, so put these all in canonical order:
let sorted_deps = sortDependencies (mi_deps iface0)
-- the export hash of a module depends on the orphan hashes of the
-- orphan modules below us in the dependency tree. This is the way
-- that changes in orphans get propagated all the way up the
-- dependency tree. We only care about orphan modules in the current
-- package, because changes to orphans outside this package will be
-- tracked by the usage on the ABI hash of package modules that we import.
let orph_mods
= filter (/= this_mod) -- Note [Do not update EPS with your own hi-boot]
. filter ((== this_pkg) . moduleUnitId)
$ dep_orphs sorted_deps
dep_orphan_hashes <- getOrphanHashes hsc_env orph_mods
-- Note [Do not update EPS with your own hi-boot]
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- (See also Trac #10182). When your hs-boot file includes an orphan
-- instance declaration, you may find that the dep_orphs of a module you
-- import contains reference to yourself. DO NOT actually load this module
-- or add it to the orphan hashes: you're going to provide the orphan
-- instances yourself, no need to consult hs-boot; if you do load the
-- interface into EPS, you will see a duplicate orphan instance.
orphan_hash <- computeFingerprint (mk_put_name local_env)
(map ifDFun orph_insts, orph_rules, orph_fis)
-- the export list hash doesn't depend on the fingerprints of
-- the Names it mentions, only the Names themselves, hence putNameLiterally.
export_hash <- computeFingerprint putNameLiterally
(mi_exports iface0,
orphan_hash,
dep_orphan_hashes,
dep_pkgs (mi_deps iface0),
-- dep_pkgs: see "Package Version Changes" on
-- wiki/Commentary/Compiler/RecompilationAvoidance
mi_trust iface0)
-- Make sure change of Safe Haskell mode causes recomp.
-- put the declarations in a canonical order, sorted by OccName
let sorted_decls = Map.elems $ Map.fromList $
[(ifName d, e) | e@(_, d) <- decls_w_hashes]
-- the flag hash depends on:
-- - (some of) dflags
-- it returns two hashes, one that shouldn't change
-- the abi hash and one that should
flag_hash <- fingerprintDynFlags dflags this_mod putNameLiterally
-- the ABI hash depends on:
-- - decls
-- - export list
-- - orphans
-- - deprecations
-- - vect info
-- - flag abi hash
mod_hash <- computeFingerprint putNameLiterally
(map fst sorted_decls,
export_hash, -- includes orphan_hash
mi_warns iface0,
mi_vect_info iface0)
-- The interface hash depends on:
-- - the ABI hash, plus
-- - the module level annotations,
-- - usages
-- - deps (home and external packages, dependent files)
-- - hpc
iface_hash <- computeFingerprint putNameLiterally
(mod_hash,
ann_fn (mkVarOcc "module"), -- See mkIfaceAnnCache
mi_usages iface0,
sorted_deps,
mi_hpc iface0)
let
no_change_at_all = Just iface_hash == mb_old_fingerprint
final_iface = iface0 {
mi_mod_hash = mod_hash,
mi_iface_hash = iface_hash,
mi_exp_hash = export_hash,
mi_orphan_hash = orphan_hash,
mi_flag_hash = flag_hash,
mi_orphan = not ( all ifRuleAuto orph_rules
-- See Note [Orphans and auto-generated rules]
&& null orph_insts
&& null orph_fis
&& isNoIfaceVectInfo (mi_vect_info iface0)),
mi_finsts = not . null $ mi_fam_insts iface0,
mi_decls = sorted_decls,
mi_hash_fn = lookupOccEnv local_env }
--
return (final_iface, no_change_at_all)
where
this_mod = mi_module iface0
dflags = hsc_dflags hsc_env
this_pkg = thisPackage dflags
(non_orph_insts, orph_insts) = mkOrphMap ifInstOrph (mi_insts iface0)
(non_orph_rules, orph_rules) = mkOrphMap ifRuleOrph (mi_rules iface0)
(non_orph_fis, orph_fis) = mkOrphMap ifFamInstOrph (mi_fam_insts iface0)
fix_fn = mi_fix_fn iface0
ann_fn = mkIfaceAnnCache (mi_anns iface0)
getOrphanHashes :: HscEnv -> [Module] -> IO [Fingerprint]
getOrphanHashes hsc_env mods = do
eps <- hscEPS hsc_env
let
hpt = hsc_HPT hsc_env
pit = eps_PIT eps
dflags = hsc_dflags hsc_env
get_orph_hash mod =
case lookupIfaceByModule dflags hpt pit mod of
Nothing -> pprPanic "moduleOrphanHash" (ppr mod)
Just iface -> mi_orphan_hash iface
--
return (map get_orph_hash mods)
sortDependencies :: Dependencies -> Dependencies
sortDependencies d
= Deps { dep_mods = sortBy (compare `on` (moduleNameFS.fst)) (dep_mods d),
dep_pkgs = sortBy (stableUnitIdCmp `on` fst) (dep_pkgs d),
dep_orphs = sortBy stableModuleCmp (dep_orphs d),
dep_finsts = sortBy stableModuleCmp (dep_finsts d) }
-- | Creates cached lookup for the 'mi_anns' field of ModIface
-- Hackily, we use "module" as the OccName for any module-level annotations
mkIfaceAnnCache :: [IfaceAnnotation] -> OccName -> [AnnPayload]
mkIfaceAnnCache anns
= \n -> lookupOccEnv env n `orElse` []
where
pair (IfaceAnnotation target value) =
(case target of
NamedTarget occn -> occn
ModuleTarget _ -> mkVarOcc "module"
, [value])
-- flipping (++), so the first argument is always short
env = mkOccEnv_C (flip (++)) (map pair anns)
{-
************************************************************************
* *
The ABI of an IfaceDecl
* *
************************************************************************
Note [The ABI of an IfaceDecl]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ABI of a declaration consists of:
(a) the full name of the identifier (inc. module and package,
because these are used to construct the symbol name by which
the identifier is known externally).
(b) the declaration itself, as exposed to clients. That is, the
definition of an Id is included in the fingerprint only if
it is made available as an unfolding in the interface.
(c) the fixity of the identifier (if it exists)
(d) for Ids: rules
(e) for classes: instances, fixity & rules for methods
(f) for datatypes: instances, fixity & rules for constrs
Items (c)-(f) are not stored in the IfaceDecl, but instead appear
elsewhere in the interface file. But they are *fingerprinted* with
the declaration itself. This is done by grouping (c)-(f) in IfaceDeclExtras,
and fingerprinting that as part of the declaration.
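A small worked example (illustrative only, not part of the original Note):
for a top-level function
    f :: Int -> Int
that carries an INLINE pragma, a fixity declaration and a rewrite rule, the
ABI of f consists of its fully-qualified Name, its type together with the
unfolding exposed by the INLINE pragma, the fixity, and the rule. The fixity
and the rule are stored elsewhere in the interface file, but declExtras
collects them into IfaceDeclExtras so that they are hashed together with f's
declaration.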
-}
type IfaceDeclABI = (Module, IfaceDecl, IfaceDeclExtras)
data IfaceDeclExtras
= IfaceIdExtras IfaceIdExtras
| IfaceDataExtras
(Maybe Fixity) -- Fixity of the tycon itself (if it exists)
[IfaceInstABI] -- Local class and family instances of this tycon
-- See Note [Orphans] in InstEnv
[AnnPayload] -- Annotations of the type itself
[IfaceIdExtras] -- For each constructor: fixity, RULES and annotations
| IfaceClassExtras
(Maybe Fixity) -- Fixity of the class itself (if it exists)
[IfaceInstABI] -- Local instances of this class *or*
-- of its associated data types
-- See Note [Orphans] in InstEnv
[AnnPayload] -- Annotations of the type itself
[IfaceIdExtras] -- For each class method: fixity, RULES and annotations
| IfaceSynonymExtras (Maybe Fixity) [AnnPayload]
| IfaceFamilyExtras (Maybe Fixity) [IfaceInstABI] [AnnPayload]
| IfaceOtherDeclExtras
data IfaceIdExtras
= IdExtras
(Maybe Fixity) -- Fixity of the Id (if it exists)
[IfaceRule] -- Rules for the Id
[AnnPayload] -- Annotations for the Id
-- When hashing a class or family instance, we hash only the
-- DFunId or CoAxiom, because that depends on all the
-- information about the instance.
--
type IfaceInstABI = IfExtName -- Name of DFunId or CoAxiom that is evidence for the instance
abiDecl :: IfaceDeclABI -> IfaceDecl
abiDecl (_, decl, _) = decl
cmp_abiNames :: IfaceDeclABI -> IfaceDeclABI -> Ordering
cmp_abiNames abi1 abi2 = ifName (abiDecl abi1) `compare`
ifName (abiDecl abi2)
freeNamesDeclABI :: IfaceDeclABI -> NameSet
freeNamesDeclABI (_mod, decl, extras) =
freeNamesIfDecl decl `unionNameSet` freeNamesDeclExtras extras
freeNamesDeclExtras :: IfaceDeclExtras -> NameSet
freeNamesDeclExtras (IfaceIdExtras id_extras)
= freeNamesIdExtras id_extras
freeNamesDeclExtras (IfaceDataExtras _ insts _ subs)
= unionNameSets (mkNameSet insts : map freeNamesIdExtras subs)
freeNamesDeclExtras (IfaceClassExtras _ insts _ subs)
= unionNameSets (mkNameSet insts : map freeNamesIdExtras subs)
freeNamesDeclExtras (IfaceSynonymExtras _ _)
= emptyNameSet
freeNamesDeclExtras (IfaceFamilyExtras _ insts _)
= mkNameSet insts
freeNamesDeclExtras IfaceOtherDeclExtras
= emptyNameSet
freeNamesIdExtras :: IfaceIdExtras -> NameSet
freeNamesIdExtras (IdExtras _ rules _) = unionNameSets (map freeNamesIfRule rules)
instance Outputable IfaceDeclExtras where
ppr IfaceOtherDeclExtras = Outputable.empty
ppr (IfaceIdExtras extras) = ppr_id_extras extras
ppr (IfaceSynonymExtras fix anns) = vcat [ppr fix, ppr anns]
ppr (IfaceFamilyExtras fix finsts anns) = vcat [ppr fix, ppr finsts, ppr anns]
ppr (IfaceDataExtras fix insts anns stuff) = vcat [ppr fix, ppr_insts insts, ppr anns,
ppr_id_extras_s stuff]
ppr (IfaceClassExtras fix insts anns stuff) = vcat [ppr fix, ppr_insts insts, ppr anns,
ppr_id_extras_s stuff]
ppr_insts :: [IfaceInstABI] -> SDoc
ppr_insts _ = text "<insts>"
ppr_id_extras_s :: [IfaceIdExtras] -> SDoc
ppr_id_extras_s stuff = vcat (map ppr_id_extras stuff)
ppr_id_extras :: IfaceIdExtras -> SDoc
ppr_id_extras (IdExtras fix rules anns) = ppr fix $$ vcat (map ppr rules) $$ vcat (map ppr anns)
-- This instance is used only to compute fingerprints
instance Binary IfaceDeclExtras where
get _bh = panic "no get for IfaceDeclExtras"
put_ bh (IfaceIdExtras extras) = do
putByte bh 1; put_ bh extras
put_ bh (IfaceDataExtras fix insts anns cons) = do
putByte bh 2; put_ bh fix; put_ bh insts; put_ bh anns; put_ bh cons
put_ bh (IfaceClassExtras fix insts anns methods) = do
putByte bh 3; put_ bh fix; put_ bh insts; put_ bh anns; put_ bh methods
put_ bh (IfaceSynonymExtras fix anns) = do
putByte bh 4; put_ bh fix; put_ bh anns
put_ bh (IfaceFamilyExtras fix finsts anns) = do
putByte bh 5; put_ bh fix; put_ bh finsts; put_ bh anns
put_ bh IfaceOtherDeclExtras = putByte bh 6
instance Binary IfaceIdExtras where
get _bh = panic "no get for IfaceIdExtras"
put_ bh (IdExtras fix rules anns)= do { put_ bh fix; put_ bh rules; put_ bh anns }
declExtras :: (OccName -> Maybe Fixity)
-> (OccName -> [AnnPayload])
-> OccEnv [IfaceRule]
-> OccEnv [IfaceClsInst]
-> OccEnv [IfaceFamInst]
-> IfaceDecl
-> IfaceDeclExtras
declExtras fix_fn ann_fn rule_env inst_env fi_env decl
= case decl of
IfaceId{} -> IfaceIdExtras (id_extras n)
IfaceData{ifCons=cons} ->
IfaceDataExtras (fix_fn n)
(map ifFamInstAxiom (lookupOccEnvL fi_env n) ++
map ifDFun (lookupOccEnvL inst_env n))
(ann_fn n)
(map (id_extras . ifConOcc) (visibleIfConDecls cons))
IfaceClass{ifSigs=sigs, ifATs=ats} ->
IfaceClassExtras (fix_fn n)
(map ifDFun $ (concatMap at_extras ats)
++ lookupOccEnvL inst_env n)
-- Include instances of the associated types
-- as well as instances of the class (Trac #5147)
(ann_fn n)
[id_extras op | IfaceClassOp op _ _ <- sigs]
IfaceSynonym{} -> IfaceSynonymExtras (fix_fn n)
(ann_fn n)
IfaceFamily{} -> IfaceFamilyExtras (fix_fn n)
(map ifFamInstAxiom (lookupOccEnvL fi_env n))
(ann_fn n)
_other -> IfaceOtherDeclExtras
where
n = ifName decl
id_extras occ = IdExtras (fix_fn occ) (lookupOccEnvL rule_env occ) (ann_fn occ)
at_extras (IfaceAT decl _) = lookupOccEnvL inst_env (ifName decl)
lookupOccEnvL :: OccEnv [v] -> OccName -> [v]
lookupOccEnvL env k = lookupOccEnv env k `orElse` []
-- used when we want to fingerprint a structure without depending on the
-- fingerprints of external Names that it refers to.
putNameLiterally :: BinHandle -> Name -> IO ()
putNameLiterally bh name = ASSERT( isExternalName name )
do
put_ bh $! nameModule name
put_ bh $! nameOccName name
{-
-- for testing: use the md5sum command to generate fingerprints and
-- compare the results against our built-in version.
fp' <- oldMD5 dflags bh
if fp /= fp' then pprPanic "computeFingerprint" (ppr fp <+> ppr fp')
else return fp
oldMD5 dflags bh = do
tmp <- newTempName dflags "bin"
writeBinMem bh tmp
tmp2 <- newTempName dflags "md5"
let cmd = "md5sum " ++ tmp ++ " >" ++ tmp2
r <- system cmd
case r of
ExitFailure _ -> throwGhcExceptionIO (PhaseFailed cmd r)
ExitSuccess -> do
hash_str <- readFile tmp2
return $! readHexFingerprint hash_str
-}
----------------------
-- mkOrphMap partitions instance decls or rules into
-- (a) an OccEnv for ones that are not orphans,
-- mapping the local OccName to a list of its decls
-- (b) a list of orphan decls
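-- A hypothetical illustration (comments only; assumes T is defined in this
-- module while the class C and the types in the second instance are not):
--
--   instance C T where ...      -- NotOrphan, keyed on T's OccName
--   instance C [Int] where ...  -- IsOrphan
--
-- mkOrphMap ifInstOrph files the first under T's OccName in the OccEnv and
-- returns the second in the orphan list.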
mkOrphMap :: (decl -> IsOrphan) -- Extract orphan status from decl
-> [decl] -- Sorted into canonical order
-> (OccEnv [decl], -- Non-orphan decls associated with their key;
-- each sublist in canonical order
[decl]) -- Orphan decls; in canonical order
mkOrphMap get_key decls
= foldl go (emptyOccEnv, []) decls
where
go (non_orphs, orphs) d
| NotOrphan occ <- get_key d
= (extendOccEnv_Acc (:) singleton non_orphs occ d, orphs)
| otherwise = (non_orphs, d:orphs)
{-
************************************************************************
* *
Keeping track of what we've slurped, and fingerprints
* *
************************************************************************
-}
mkIfaceAnnotation :: Annotation -> IfaceAnnotation
mkIfaceAnnotation (Annotation { ann_target = target, ann_value = payload })
= IfaceAnnotation {
ifAnnotatedTarget = fmap nameOccName target,
ifAnnotatedValue = payload
}
mkIfaceExports :: [AvailInfo] -> [IfaceExport] -- Sort to make canonical
mkIfaceExports exports
= sortBy stableAvailCmp (map sort_subs exports)
where
sort_subs :: AvailInfo -> AvailInfo
sort_subs (Avail b n) = Avail b n
sort_subs (AvailTC n [] fs) = AvailTC n [] (sort_flds fs)
sort_subs (AvailTC n (m:ms) fs)
| n==m = AvailTC n (m:sortBy stableNameCmp ms) (sort_flds fs)
| otherwise = AvailTC n (sortBy stableNameCmp (m:ms)) (sort_flds fs)
-- Maintain the AvailTC Invariant
sort_flds = sortBy (stableNameCmp `on` flSelector)
{-
Note [Original module]
~~~~~~~~~~~~~~~~~~~~~~
Consider this:
module X where { data family T }
module Y( T(..) ) where { import X; data instance T Int = MkT Int }
The exported Avail from Y will look like
X.T{X.T, Y.MkT}
That is, in Y,
- only MkT is brought into scope by the data instance;
- but the parent (used for grouping and naming in T(..) exports) is X.T
- and in this case we export X.T too
In the result of mkIfaceExports, the names are grouped by defining module,
so we may need to split up a single Avail into multiple ones.
Note [Internal used_names]
~~~~~~~~~~~~~~~~~~~~~~~~~~
Most of the used_names are External Names, but we can have Internal
Names too: see Note [Binders in Template Haskell] in Convert, and
Trac #5362 for an example. Such Names are always:
- for locally-defined things, for which we don't gather usage info,
  so we can just ignore them in ent_map
- System Names, hence the assert, just as a double check.
************************************************************************
* *
Load the old interface file for this module (unless
we have it already), and check whether it is up to date
* *
************************************************************************
-}
data RecompileRequired
= UpToDate
-- ^ everything is up to date, recompilation is not required
| MustCompile
-- ^ The .hs file has been touched, or the .o/.hi file does not exist
| RecompBecause String
-- ^ The .o/.hi files are up to date, but something else has changed
-- to force recompilation; the String says what (one-line summary)
deriving Eq
recompileRequired :: RecompileRequired -> Bool
recompileRequired UpToDate = False
recompileRequired _ = True
-- | Top level function to check if the version of an old interface file
-- is equivalent to the current source file the user asked us to compile.
-- If they are the same, we can avoid recompilation. We return a tuple where
-- the first element says whether we should recompile the object file, and
-- the second is maybe the interface file, where Nothing means to rebuild
-- the interface file rather than use the existing one.
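-- For illustration (an assumed summary of the cases below, not exhaustive):
--
--   (UpToDate,        Just iface) -> reuse both the object file and iface
--   (RecompBecause r, Just iface) -> recompile, but the old interface was
--                                    read successfully and is handed back
--   (MustCompile,     Nothing)    -> recompile; no usable old interface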
checkOldIface
:: HscEnv
-> ModSummary
-> SourceModified
-> Maybe ModIface -- Old interface from compilation manager, if any
-> IO (RecompileRequired, Maybe ModIface)
checkOldIface hsc_env mod_summary source_modified maybe_iface
= do let dflags = hsc_dflags hsc_env
showPass dflags $
"Checking old interface for " ++
(showPpr dflags $ ms_mod mod_summary)
initIfaceCheck hsc_env $
check_old_iface hsc_env mod_summary source_modified maybe_iface
check_old_iface
:: HscEnv
-> ModSummary
-> SourceModified
-> Maybe ModIface
-> IfG (RecompileRequired, Maybe ModIface)
check_old_iface hsc_env mod_summary src_modified maybe_iface
= let dflags = hsc_dflags hsc_env
getIface =
case maybe_iface of
Just _ -> do
traceIf (text "We already have the old interface for" <+>
ppr (ms_mod mod_summary))
return maybe_iface
Nothing -> loadIface
loadIface = do
let iface_path = msHiFilePath mod_summary
read_result <- readIface (ms_mod mod_summary) iface_path
case read_result of
Failed err -> do
traceIf (text "FYI: cannot read old interface file:" $$ nest 4 err)
return Nothing
Succeeded iface -> do
traceIf (text "Read the interface file" <+> text iface_path)
return $ Just iface
src_changed
| gopt Opt_ForceRecomp (hsc_dflags hsc_env) = True
| SourceModified <- src_modified = True
| otherwise = False
in do
when src_changed $
traceHiDiffs (nest 4 $ text "Source file changed or recompilation check turned off")
case src_changed of
-- If the source has changed and we're in interactive mode,
-- avoid reading an interface; just return the one we might
-- have been supplied with.
True | not (isObjectTarget $ hscTarget dflags) ->
return (MustCompile, maybe_iface)
-- Try and read the old interface for the current module
-- from the .hi file left from the last time we compiled it
True -> do
maybe_iface' <- getIface
return (MustCompile, maybe_iface')
False -> do
maybe_iface' <- getIface
case maybe_iface' of
-- We can't retrieve the iface
Nothing -> return (MustCompile, Nothing)
-- We have got the old iface; check its versions
-- even in the SourceUnmodifiedAndStable case we
-- should check versions because some packages
-- might have changed or gone away.
Just iface -> checkVersions hsc_env mod_summary iface
-- | Check if a module is still the same 'version'.
--
-- This function is called in the recompilation checker after we have
-- determined that the module M being checked hasn't had any changes
-- to its source file since we last compiled M. So at this point in general
-- two things may have changed that mean we should recompile M:
-- * The interface exported by a dependency of M has changed.
-- * The compiler flags specified this time for M have changed
-- in a manner that is significant for recompilation.
-- We return not only whether we should recompile the object file, but also
-- whether we should rebuild the interface file.
checkVersions :: HscEnv
-> ModSummary
-> ModIface -- Old interface
-> IfG (RecompileRequired, Maybe ModIface)
checkVersions hsc_env mod_summary iface
= do { traceHiDiffs (text "Considering whether compilation is required for" <+>
ppr (mi_module iface) <> colon)
; recomp <- checkFlagHash hsc_env iface
; if recompileRequired recomp then return (recomp, Nothing) else do {
; if getSigOf (hsc_dflags hsc_env) (moduleName (mi_module iface))
/= mi_sig_of iface
then return (RecompBecause "sig-of changed", Nothing) else do {
; recomp <- checkDependencies hsc_env mod_summary iface
; if recompileRequired recomp then return (recomp, Just iface) else do {
-- Source code unchanged and no errors yet... carry on
--
-- First put the dependent-module info, read from the old
-- interface, into the envt, so that when we look for
-- interfaces we look for the right one (.hi or .hi-boot)
--
-- It's just temporary because either the usage check will succeed
-- (in which case we are done with this module) or it'll fail (in which
-- case we'll compile the module from scratch anyhow).
--
-- We do this regardless of compilation mode, although in --make mode
-- all the dependent modules should be in the HPT already, so it's
-- quite redundant
; updateEps_ $ \eps -> eps { eps_is_boot = mod_deps }
; recomp <- checkList [checkModUsage this_pkg u | u <- mi_usages iface]
; return (recomp, Just iface)
}}}}
where
this_pkg = thisPackage (hsc_dflags hsc_env)
-- This is a bit of a hack really
mod_deps :: ModuleNameEnv (ModuleName, IsBootInterface)
mod_deps = mkModDeps (dep_mods (mi_deps iface))
-- | Check the flags haven't changed
checkFlagHash :: HscEnv -> ModIface -> IfG RecompileRequired
checkFlagHash hsc_env iface = do
let old_hash = mi_flag_hash iface
new_hash <- liftIO $ fingerprintDynFlags (hsc_dflags hsc_env)
(mi_module iface)
putNameLiterally
case old_hash == new_hash of
True -> up_to_date (text "Module flags unchanged")
False -> out_of_date_hash "flags changed"
(text " Module flags have changed")
old_hash new_hash
-- If the direct imports of this module are resolved to targets that
-- are not among the dependencies of the previous interface file,
-- then we definitely need to recompile. This catches cases like
-- - an exposed package has been upgraded
-- - we are compiling with different package flags
-- - a home module that was shadowing a package module has been removed
-- - a new home module has been added that shadows a package module
-- See bug #1372.
--
-- Returns a non-UpToDate value if recompilation is required.
checkDependencies :: HscEnv -> ModSummary -> ModIface -> IfG RecompileRequired
checkDependencies hsc_env summary iface
= checkList (map dep_missing (ms_imps summary ++ ms_srcimps summary))
where
prev_dep_mods = dep_mods (mi_deps iface)
prev_dep_pkgs = dep_pkgs (mi_deps iface)
this_pkg = thisPackage (hsc_dflags hsc_env)
dep_missing (mb_pkg, L _ mod) = do
find_res <- liftIO $ findImportedModule hsc_env mod (mb_pkg)
let reason = moduleNameString mod ++ " changed"
case find_res of
Found _ mod
| pkg == this_pkg
-> if moduleName mod `notElem` map fst prev_dep_mods
then do traceHiDiffs $
text "imported module " <> quotes (ppr mod) <>
text " not among previous dependencies"
return (RecompBecause reason)
else
return UpToDate
| otherwise
-> if pkg `notElem` (map fst prev_dep_pkgs)
then do traceHiDiffs $
text "imported module " <> quotes (ppr mod) <>
text " is from package " <> quotes (ppr pkg) <>
text ", which is not among previous dependencies"
return (RecompBecause reason)
else
return UpToDate
where pkg = moduleUnitId mod
_otherwise -> return (RecompBecause reason)
needInterface :: Module -> (ModIface -> IfG RecompileRequired)
-> IfG RecompileRequired
needInterface mod continue
= do -- Load the imported interface if possible
let doc_str = sep [text "need version info for", ppr mod]
traceHiDiffs (text "Checking usages for module" <+> ppr mod)
mb_iface <- loadInterface doc_str mod ImportBySystem
-- Load the interface, but don't complain on failure;
-- Instead, get an Either back which we can test
case mb_iface of
Failed _ -> do
traceHiDiffs (sep [text "Couldn't load interface for module",
ppr mod])
return MustCompile
-- Couldn't find or parse a module mentioned in the
-- old interface file. Don't complain: it might
-- just be that the current module doesn't need that
-- import and it's been deleted
Succeeded iface -> continue iface
-- | Given the usage information extracted from the old
-- M.hi file for the module being compiled, figure out
-- whether M needs to be recompiled.
checkModUsage :: UnitId -> Usage -> IfG RecompileRequired
checkModUsage _this_pkg UsagePackageModule{
usg_mod = mod,
usg_mod_hash = old_mod_hash }
= needInterface mod $ \iface -> do
let reason = moduleNameString (moduleName mod) ++ " changed"
checkModuleFingerprint reason old_mod_hash (mi_mod_hash iface)
-- We only track the ABI hash of package modules, rather than
-- individual entity usages, so if the ABI hash changes we must
-- recompile. This is safe but may entail more recompilation when
-- a dependent package has changed.
checkModUsage this_pkg UsageHomeModule{
usg_mod_name = mod_name,
usg_mod_hash = old_mod_hash,
usg_exports = maybe_old_export_hash,
usg_entities = old_decl_hash }
= do
let mod = mkModule this_pkg mod_name
needInterface mod $ \iface -> do
let
new_mod_hash = mi_mod_hash iface
new_decl_hash = mi_hash_fn iface
new_export_hash = mi_exp_hash iface
reason = moduleNameString mod_name ++ " changed"
-- CHECK MODULE
recompile <- checkModuleFingerprint reason old_mod_hash new_mod_hash
if not (recompileRequired recompile)
then return UpToDate
else do
-- CHECK EXPORT LIST
checkMaybeHash reason maybe_old_export_hash new_export_hash
(text " Export list changed") $ do
-- CHECK ITEMS ONE BY ONE
recompile <- checkList [ checkEntityUsage reason new_decl_hash u
| u <- old_decl_hash]
if recompileRequired recompile
then return recompile -- This one failed, so just bail out now
else up_to_date (text " Great! The bits I use are up to date")
checkModUsage _this_pkg UsageFile{ usg_file_path = file,
usg_file_hash = old_hash } =
liftIO $
handleIO handle $ do
new_hash <- getFileHash file
if (old_hash /= new_hash)
then return recomp
else return UpToDate
where
recomp = RecompBecause (file ++ " changed")
handle =
#ifdef DEBUG
\e -> pprTrace "UsageFile" (text (show e)) $ return recomp
#else
\_ -> return recomp -- if we can't find the file, just recompile, don't fail
#endif
------------------------
checkModuleFingerprint :: String -> Fingerprint -> Fingerprint
-> IfG RecompileRequired
checkModuleFingerprint reason old_mod_hash new_mod_hash
| new_mod_hash == old_mod_hash
= up_to_date (text "Module fingerprint unchanged")
| otherwise
= out_of_date_hash reason (text " Module fingerprint has changed")
old_mod_hash new_mod_hash
------------------------
checkMaybeHash :: String -> Maybe Fingerprint -> Fingerprint -> SDoc
-> IfG RecompileRequired -> IfG RecompileRequired
checkMaybeHash reason maybe_old_hash new_hash doc continue
| Just hash <- maybe_old_hash, hash /= new_hash
= out_of_date_hash reason doc hash new_hash
| otherwise
= continue
------------------------
checkEntityUsage :: String
-> (OccName -> Maybe (OccName, Fingerprint))
-> (OccName, Fingerprint)
-> IfG RecompileRequired
checkEntityUsage reason new_hash (name,old_hash)
= case new_hash name of
Nothing -> -- We used it before, but it ain't there now
out_of_date reason (sep [text "No longer exported:", ppr name])
Just (_, new_hash) -- It's there, but is it up to date?
| new_hash == old_hash -> do traceHiDiffs (text " Up to date" <+> ppr name <+> parens (ppr new_hash))
return UpToDate
| otherwise -> out_of_date_hash reason (text " Out of date:" <+> ppr name)
old_hash new_hash
up_to_date :: SDoc -> IfG RecompileRequired
up_to_date msg = traceHiDiffs msg >> return UpToDate
out_of_date :: String -> SDoc -> IfG RecompileRequired
out_of_date reason msg = traceHiDiffs msg >> return (RecompBecause reason)
out_of_date_hash :: String -> SDoc -> Fingerprint -> Fingerprint -> IfG RecompileRequired
out_of_date_hash reason msg old_hash new_hash
= out_of_date reason (hsep [msg, ppr old_hash, text "->", ppr new_hash])
----------------------
checkList :: [IfG RecompileRequired] -> IfG RecompileRequired
-- This helper is used in two places
checkList [] = return UpToDate
checkList (check:checks) = do recompile <- check
if recompileRequired recompile
then return recompile
else checkList checks
{-
************************************************************************
* *
Converting things to their Iface equivalents
* *
************************************************************************
-}
tyThingToIfaceDecl :: TyThing -> IfaceDecl
tyThingToIfaceDecl (AnId id) = idToIfaceDecl id
tyThingToIfaceDecl (ATyCon tycon) = snd (tyConToIfaceDecl emptyTidyEnv tycon)
tyThingToIfaceDecl (ACoAxiom ax) = coAxiomToIfaceDecl ax
tyThingToIfaceDecl (AConLike cl) = case cl of
RealDataCon dc -> dataConToIfaceDecl dc -- for ppr purposes only
PatSynCon ps -> patSynToIfaceDecl ps
--------------------------
idToIfaceDecl :: Id -> IfaceDecl
-- The Id is already tidied, so that locally-bound names
-- (lambdas, for-alls) already have non-clashing OccNames
-- We can't tidy it here, locally, because it may have
-- free variables in its type or IdInfo
idToIfaceDecl id
= IfaceId { ifName = getOccName id,
ifType = toIfaceType (idType id),
ifIdDetails = toIfaceIdDetails (idDetails id),
ifIdInfo = toIfaceIdInfo (idInfo id) }
--------------------------
dataConToIfaceDecl :: DataCon -> IfaceDecl
dataConToIfaceDecl dataCon
= IfaceId { ifName = getOccName dataCon,
ifType = toIfaceType (dataConUserType dataCon),
ifIdDetails = IfVanillaId,
ifIdInfo = NoInfo }
--------------------------
patSynToIfaceDecl :: PatSyn -> IfaceDecl
patSynToIfaceDecl ps
= IfacePatSyn { ifName = getOccName . getName $ ps
, ifPatMatcher = to_if_pr (patSynMatcher ps)
, ifPatBuilder = fmap to_if_pr (patSynBuilder ps)
, ifPatIsInfix = patSynIsInfix ps
, ifPatUnivBndrs = map binderToIfaceForAllBndr univ_bndrs'
, ifPatExBndrs = map binderToIfaceForAllBndr ex_bndrs'
, ifPatProvCtxt = tidyToIfaceContext env2 prov_theta
, ifPatReqCtxt = tidyToIfaceContext env2 req_theta
, ifPatArgs = map (tidyToIfaceType env2) args
, ifPatTy = tidyToIfaceType env2 rhs_ty
, ifFieldLabels = (patSynFieldLabels ps)
}
where
(_univ_tvs, req_theta, _ex_tvs, prov_theta, args, rhs_ty) = patSynSig ps
univ_bndrs = patSynUnivTyBinders ps
ex_bndrs = patSynExTyBinders ps
(env1, univ_bndrs') = tidyTyBinders emptyTidyEnv univ_bndrs
(env2, ex_bndrs') = tidyTyBinders env1 ex_bndrs
to_if_pr (id, needs_dummy) = (idName id, needs_dummy)
--------------------------
coAxiomToIfaceDecl :: CoAxiom br -> IfaceDecl
-- We *do* tidy Axioms, because they are not (and cannot
-- conveniently be) built in tidy form
coAxiomToIfaceDecl ax@(CoAxiom { co_ax_tc = tycon, co_ax_branches = branches
, co_ax_role = role })
= IfaceAxiom { ifName = name
, ifTyCon = toIfaceTyCon tycon
, ifRole = role
, ifAxBranches = map (coAxBranchToIfaceBranch tycon
(map coAxBranchLHS branch_list))
branch_list }
where
branch_list = fromBranches branches
name = getOccName ax
-- 2nd parameter is the list of branch LHSs, for conversion from incompatible branches
-- to incompatible indices
-- See Note [Storing compatibility] in CoAxiom
coAxBranchToIfaceBranch :: TyCon -> [[Type]] -> CoAxBranch -> IfaceAxBranch
coAxBranchToIfaceBranch tc lhs_s
branch@(CoAxBranch { cab_incomps = incomps })
= (coAxBranchToIfaceBranch' tc branch) { ifaxbIncomps = iface_incomps }
where
iface_incomps = map (expectJust "iface_incomps"
. (flip findIndex lhs_s
. eqTypes)
. coAxBranchLHS) incomps
-- use this one for standalone branches without incompatibles
coAxBranchToIfaceBranch' :: TyCon -> CoAxBranch -> IfaceAxBranch
coAxBranchToIfaceBranch' tc (CoAxBranch { cab_tvs = tvs, cab_cvs = cvs
, cab_lhs = lhs
, cab_roles = roles, cab_rhs = rhs })
= IfaceAxBranch { ifaxbTyVars = toIfaceTvBndrs tv_bndrs
, ifaxbCoVars = map toIfaceIdBndr cvs
, ifaxbLHS = tidyToIfaceTcArgs env1 tc lhs
, ifaxbRoles = roles
, ifaxbRHS = tidyToIfaceType env1 rhs
, ifaxbIncomps = [] }
where
(env1, tv_bndrs) = tidyTyClTyCoVarBndrs emptyTidyEnv tvs
-- Don't re-bind in-scope tyvars
-- See Note [CoAxBranch type variables] in CoAxiom
-----------------
tyConToIfaceDecl :: TidyEnv -> TyCon -> (TidyEnv, IfaceDecl)
-- We *do* tidy TyCons, because they are not (and cannot
-- conveniently be) built in tidy form
-- The returned TidyEnv is the one after tidying the tyConTyVars
tyConToIfaceDecl env tycon
| Just clas <- tyConClass_maybe tycon
= classToIfaceDecl env clas
| Just syn_rhs <- synTyConRhs_maybe tycon
= ( tc_env1
, IfaceSynonym { ifName = getOccName tycon,
ifRoles = tyConRoles tycon,
ifSynRhs = if_syn_type syn_rhs,
ifBinders = if_binders,
ifResKind = if_res_kind
})
| Just fam_flav <- famTyConFlav_maybe tycon
= ( tc_env1
, IfaceFamily { ifName = getOccName tycon,
ifResVar = if_res_var,
ifFamFlav = to_if_fam_flav fam_flav,
ifBinders = if_binders,
ifResKind = if_res_kind,
ifFamInj = familyTyConInjectivityInfo tycon
})
| isAlgTyCon tycon
= ( tc_env1
, IfaceData { ifName = getOccName tycon,
ifBinders = if_binders,
ifResKind = if_res_kind,
ifCType = tyConCType tycon,
ifRoles = tyConRoles tycon,
ifCtxt = tidyToIfaceContext tc_env1 (tyConStupidTheta tycon),
ifCons = ifaceConDecls (algTyConRhs tycon) (algTcFields tycon),
ifRec = boolToRecFlag (isRecursiveTyCon tycon),
ifGadtSyntax = isGadtSyntaxTyCon tycon,
ifParent = parent })
| otherwise -- FunTyCon, PrimTyCon, promoted TyCon/DataCon
-- For pretty printing purposes only.
= ( env
, IfaceData { ifName = getOccName tycon,
ifBinders = if_degenerate_binders,
ifResKind = if_degenerate_res_kind,
-- These don't have `tyConTyVars`, hence "degenerate"
ifCType = Nothing,
ifRoles = tyConRoles tycon,
ifCtxt = [],
ifCons = IfDataTyCon [] False [],
ifRec = boolToRecFlag False,
ifGadtSyntax = False,
ifParent = IfNoParent })
where
-- NOTE: Not all TyCons have `tyConTyVars` field. Forcing this when `tycon`
-- is one of these TyCons (FunTyCon, PrimTyCon, PromotedDataCon) will cause
-- an error.
(tc_env1, tc_tyvars) = tidyTyClTyCoVarBndrs env (tyConTyVars tycon)
if_binders = zipIfaceBinders tc_tyvars (tyConBinders tycon)
if_res_kind = tidyToIfaceType tc_env1 (tyConResKind tycon)
if_syn_type ty = tidyToIfaceType tc_env1 ty
if_res_var = getOccFS `fmap` tyConFamilyResVar_maybe tycon
-- use these when you don't have tyConTyVars
(degenerate_binders, degenerate_res_kind)
= splitPiTys (tidyType env (tyConKind tycon))
if_degenerate_binders = toDegenerateBinders degenerate_binders
if_degenerate_res_kind = toIfaceType degenerate_res_kind
parent = case tyConFamInstSig_maybe tycon of
Just (tc, ty, ax) -> IfDataInstance (coAxiomName ax)
(toIfaceTyCon tc)
(tidyToIfaceTcArgs tc_env1 tc ty)
Nothing -> IfNoParent
to_if_fam_flav OpenSynFamilyTyCon = IfaceOpenSynFamilyTyCon
to_if_fam_flav (ClosedSynFamilyTyCon (Just ax))
= IfaceClosedSynFamilyTyCon (Just (axn, ibr))
where defs = fromBranches $ coAxiomBranches ax
ibr = map (coAxBranchToIfaceBranch' tycon) defs
axn = coAxiomName ax
to_if_fam_flav (ClosedSynFamilyTyCon Nothing)
= IfaceClosedSynFamilyTyCon Nothing
to_if_fam_flav AbstractClosedSynFamilyTyCon = IfaceAbstractClosedSynFamilyTyCon
to_if_fam_flav (DataFamilyTyCon {}) = IfaceDataFamilyTyCon
to_if_fam_flav (BuiltInSynFamTyCon {}) = IfaceBuiltInSynFamTyCon
ifaceConDecls (NewTyCon { data_con = con }) flds = IfNewTyCon (ifaceConDecl con) (ifaceOverloaded flds) (ifaceFields flds)
ifaceConDecls (DataTyCon { data_cons = cons }) flds = IfDataTyCon (map ifaceConDecl cons) (ifaceOverloaded flds) (ifaceFields flds)
ifaceConDecls (TupleTyCon { data_con = con }) _ = IfDataTyCon [ifaceConDecl con] False []
ifaceConDecls (AbstractTyCon distinct) _ = IfAbstractTyCon distinct
-- The AbstractTyCon case happens when a TyCon has been trimmed
-- during tidying.
-- Furthermore, tyThingToIfaceDecl is also used in TcRnDriver
-- for GHCi, when browsing a module, in which case the
-- AbstractTyCon and TupleTyCon cases are perfectly sensible.
-- (Tuple declarations are not serialised into interface files.)
ifaceConDecl data_con
= IfCon { ifConOcc = getOccName (dataConName data_con),
ifConInfix = dataConIsInfix data_con,
ifConWrapper = isJust (dataConWrapId_maybe data_con),
ifConExTvs = map binderToIfaceForAllBndr ex_bndrs',
ifConEqSpec = map (to_eq_spec . eqSpecPair) eq_spec,
ifConCtxt = tidyToIfaceContext con_env2 theta,
ifConArgTys = map (tidyToIfaceType con_env2) arg_tys,
ifConFields = map (nameOccName . flSelector)
(dataConFieldLabels data_con),
ifConStricts = map (toIfaceBang con_env2)
(dataConImplBangs data_con),
ifConSrcStricts = map toIfaceSrcBang
(dataConSrcBangs data_con)}
where
(univ_tvs, _ex_tvs, eq_spec, theta, arg_tys, _)
= dataConFullSig data_con
ex_bndrs = dataConExTyBinders data_con
-- Tidy the univ_tvs of the data constructor to be identical
-- to the tyConTyVars of the type constructor. This means
-- (a) we don't need to redundantly put them into the interface file
-- (b) when pretty-printing an Iface data declaration in H98-style syntax,
-- we know that the type variables will line up
-- The latter (b) is important because we pretty-print type constructors
-- by converting to IfaceSyn and pretty-printing that
con_env1 = (fst tc_env1, mkVarEnv (zipEqual "ifaceConDecl" univ_tvs tc_tyvars))
-- A bit grimy, perhaps, but it's simple!
(con_env2, ex_bndrs') = tidyTyBinders con_env1 ex_bndrs
to_eq_spec (tv,ty) = (toIfaceTyVar (tidyTyVar con_env2 tv), tidyToIfaceType con_env2 ty)
ifaceOverloaded flds = case fsEnvElts flds of
fl:_ -> flIsOverloaded fl
[] -> False
ifaceFields flds = sort $ map flLabel $ fsEnvElts flds
-- We need to sort the labels because they come out
-- of FastStringEnv in arbitrary order, because
-- FastStringEnv is keyed on Uniques.
-- Sorting FastString is ok here, because Uniques
-- are only used for equality checks in the Ord
-- instance for FastString.
-- See Note [Unique Determinism] in Unique.
toIfaceBang :: TidyEnv -> HsImplBang -> IfaceBang
toIfaceBang _ HsLazy = IfNoBang
toIfaceBang _ (HsUnpack Nothing) = IfUnpack
toIfaceBang env (HsUnpack (Just co)) = IfUnpackCo (toIfaceCoercion (tidyCo env co))
toIfaceBang _ HsStrict = IfStrict
toIfaceSrcBang :: HsSrcBang -> IfaceSrcBang
toIfaceSrcBang (HsSrcBang _ unpk bang) = IfSrcBang unpk bang
classToIfaceDecl :: TidyEnv -> Class -> (TidyEnv, IfaceDecl)
classToIfaceDecl env clas
= ( env1
, IfaceClass { ifCtxt = tidyToIfaceContext env1 sc_theta,
ifName = getOccName tycon,
ifRoles = tyConRoles (classTyCon clas),
ifBinders = binders,
ifFDs = map toIfaceFD clas_fds,
ifATs = map toIfaceAT clas_ats,
ifSigs = map toIfaceClassOp op_stuff,
ifMinDef = fmap getOccFS (classMinimalDef clas),
ifRec = boolToRecFlag (isRecursiveTyCon tycon) })
where
(clas_tyvars, clas_fds, sc_theta, _, clas_ats, op_stuff)
= classExtraBigSig clas
tycon = classTyCon clas
(env1, clas_tyvars') = tidyTyCoVarBndrs env clas_tyvars
binders = zipIfaceBinders clas_tyvars' (tyConBinders tycon)
toIfaceAT :: ClassATItem -> IfaceAT
toIfaceAT (ATI tc def)
= IfaceAT if_decl (fmap (tidyToIfaceType env2 . fst) def)
where
(env2, if_decl) = tyConToIfaceDecl env1 tc
toIfaceClassOp (sel_id, def_meth)
= ASSERT(sel_tyvars == clas_tyvars)
IfaceClassOp (getOccName sel_id)
(tidyToIfaceType env1 op_ty)
(fmap toDmSpec def_meth)
where
-- Be careful when splitting the type, because of things
-- like class Foo a where
-- op :: (?x :: String) => a -> a
-- and class Baz a where
-- op :: (Ord a) => a -> a
(sel_tyvars, rho_ty) = splitForAllTys (idType sel_id)
op_ty = funResultTy rho_ty
toDmSpec :: (Name, DefMethSpec Type) -> DefMethSpec IfaceType
toDmSpec (_, VanillaDM) = VanillaDM
toDmSpec (_, GenericDM dm_ty) = GenericDM (tidyToIfaceType env1 dm_ty)
toIfaceFD (tvs1, tvs2) = (map (getOccFS . tidyTyVar env1) tvs1,
map (getOccFS . tidyTyVar env1) tvs2)
--------------------------
tidyToIfaceType :: TidyEnv -> Type -> IfaceType
tidyToIfaceType env ty = toIfaceType (tidyType env ty)
tidyToIfaceTcArgs :: TidyEnv -> TyCon -> [Type] -> IfaceTcArgs
tidyToIfaceTcArgs env tc tys = toIfaceTcArgs tc (tidyTypes env tys)
tidyToIfaceContext :: TidyEnv -> ThetaType -> IfaceContext
tidyToIfaceContext env theta = map (tidyToIfaceType env) theta
tidyTyClTyCoVarBndrs :: TidyEnv -> [TyCoVar] -> (TidyEnv, [TyCoVar])
tidyTyClTyCoVarBndrs env tvs = mapAccumL tidyTyClTyCoVarBndr env tvs
tidyTyClTyCoVarBndr :: TidyEnv -> TyCoVar -> (TidyEnv, TyCoVar)
-- If the type variable "binder" is in scope, don't re-bind it
-- In a class decl, for example, the ATD binders mention
-- (and must mention) the class tyvars
tidyTyClTyCoVarBndr env@(_, subst) tv
| Just tv' <- lookupVarEnv subst tv = (env, tv')
| otherwise = tidyTyCoVarBndr env tv
tidyTyVar :: TidyEnv -> TyVar -> TyVar
tidyTyVar (_, subst) tv = lookupVarEnv subst tv `orElse` tv
-- TcType.tidyTyVarOcc messes around with FlatSkols
--------------------------
instanceToIfaceInst :: ClsInst -> IfaceClsInst
instanceToIfaceInst (ClsInst { is_dfun = dfun_id, is_flag = oflag
, is_cls_nm = cls_name, is_cls = cls
, is_tcs = mb_tcs
, is_orphan = orph })
= ASSERT( cls_name == className cls )
IfaceClsInst { ifDFun = dfun_name,
ifOFlag = oflag,
ifInstCls = cls_name,
ifInstTys = map do_rough mb_tcs,
ifInstOrph = orph }
where
do_rough Nothing = Nothing
do_rough (Just n) = Just (toIfaceTyCon_name n)
dfun_name = idName dfun_id
--------------------------
famInstToIfaceFamInst :: FamInst -> IfaceFamInst
famInstToIfaceFamInst (FamInst { fi_axiom = axiom,
fi_fam = fam,
fi_tcs = roughs })
= IfaceFamInst { ifFamInstAxiom = coAxiomName axiom
, ifFamInstFam = fam
, ifFamInstTys = map do_rough roughs
, ifFamInstOrph = orph }
where
do_rough Nothing = Nothing
do_rough (Just n) = Just (toIfaceTyCon_name n)
fam_decl = tyConName $ coAxiomTyCon axiom
mod = ASSERT( isExternalName (coAxiomName axiom) )
nameModule (coAxiomName axiom)
is_local name = nameIsLocalOrFrom mod name
lhs_names = filterNameSet is_local (orphNamesOfCoCon axiom)
orph | is_local fam_decl
= NotOrphan (nameOccName fam_decl)
| otherwise
= chooseOrphanAnchor $ nameSetElems lhs_names
--------------------------
toIfaceLetBndr :: Id -> IfaceLetBndr
toIfaceLetBndr id = IfLetBndr (occNameFS (getOccName id))
(toIfaceType (idType id))
(toIfaceIdInfo (idInfo id))
-- Put into the interface file any IdInfo that CoreTidy.tidyLetBndr
-- has left on the Id. See Note [IdInfo on nested let-bindings] in IfaceSyn
--------------------------
toIfaceIdDetails :: IdDetails -> IfaceIdDetails
toIfaceIdDetails VanillaId = IfVanillaId
toIfaceIdDetails (DFunId {}) = IfDFunId
toIfaceIdDetails (RecSelId { sel_naughty = n
, sel_tycon = tc }) =
let iface = case tc of
RecSelData ty_con -> Left (toIfaceTyCon ty_con)
RecSelPatSyn pat_syn -> Right (patSynToIfaceDecl pat_syn)
in IfRecSelId iface n
-- The remaining cases are all "implicit Ids" which don't
-- appear in interface files at all
toIfaceIdDetails other = pprTrace "toIfaceIdDetails" (ppr other)
                                              IfVanillaId -- Unexpected; fall back to a vanilla Id
toIfaceIdInfo :: IdInfo -> IfaceIdInfo
toIfaceIdInfo id_info
= case catMaybes [arity_hsinfo, caf_hsinfo, strict_hsinfo,
inline_hsinfo, unfold_hsinfo] of
[] -> NoInfo
infos -> HasInfo infos
-- NB: strictness and arity must appear in the list before unfolding
-- See TcIface.tcUnfolding
where
------------ Arity --------------
arity_info = arityInfo id_info
arity_hsinfo | arity_info == 0 = Nothing
| otherwise = Just (HsArity arity_info)
------------ Caf Info --------------
caf_info = cafInfo id_info
caf_hsinfo = case caf_info of
NoCafRefs -> Just HsNoCafRefs
_other -> Nothing
------------ Strictness --------------
-- No point in explicitly exporting TopSig
sig_info = strictnessInfo id_info
strict_hsinfo | not (isNopSig sig_info) = Just (HsStrictness sig_info)
| otherwise = Nothing
------------ Unfolding --------------
unfold_hsinfo = toIfUnfolding loop_breaker (unfoldingInfo id_info)
loop_breaker = isStrongLoopBreaker (occInfo id_info)
------------ Inline prag --------------
inline_prag = inlinePragInfo id_info
inline_hsinfo | isDefaultInlinePragma inline_prag = Nothing
| otherwise = Just (HsInline inline_prag)
--------------------------
toIfUnfolding :: Bool -> Unfolding -> Maybe IfaceInfoItem
toIfUnfolding lb (CoreUnfolding { uf_tmpl = rhs
, uf_src = src
, uf_guidance = guidance })
= Just $ HsUnfold lb $
case src of
InlineStable
-> case guidance of
UnfWhen {ug_arity = arity, ug_unsat_ok = unsat_ok, ug_boring_ok = boring_ok }
-> IfInlineRule arity unsat_ok boring_ok if_rhs
_other -> IfCoreUnfold True if_rhs
InlineCompulsory -> IfCompulsory if_rhs
InlineRhs -> IfCoreUnfold False if_rhs
-- Yes, even if guidance is UnfNever, expose the unfolding
-- If we didn't want to expose the unfolding, TidyPgm would
-- have stuck in NoUnfolding. For supercompilation we want
-- to see that unfolding!
where
if_rhs = toIfaceExpr rhs
toIfUnfolding lb (DFunUnfolding { df_bndrs = bndrs, df_args = args })
= Just (HsUnfold lb (IfDFunUnfold (map toIfaceBndr bndrs) (map toIfaceExpr args)))
-- No need to serialise the data constructor;
-- we can recover it from the type of the dfun
toIfUnfolding _ _
= Nothing
--------------------------
coreRuleToIfaceRule :: CoreRule -> IfaceRule
coreRuleToIfaceRule (BuiltinRule { ru_fn = fn})
= pprTrace "toHsRule: builtin" (ppr fn) $
bogusIfaceRule fn
coreRuleToIfaceRule (Rule { ru_name = name, ru_fn = fn,
ru_act = act, ru_bndrs = bndrs,
ru_args = args, ru_rhs = rhs,
ru_orphan = orph, ru_auto = auto })
= IfaceRule { ifRuleName = name, ifActivation = act,
ifRuleBndrs = map toIfaceBndr bndrs,
ifRuleHead = fn,
ifRuleArgs = map do_arg args,
ifRuleRhs = toIfaceExpr rhs,
ifRuleAuto = auto,
ifRuleOrph = orph }
where
-- For type args we must remove synonyms from the outermost
-- level. Reason: so that when we read it back in we'll
-- construct the same ru_rough field as we have right now;
-- see tcIfaceRule
do_arg (Type ty) = IfaceType (toIfaceType (deNoteType ty))
do_arg (Coercion co) = IfaceCo (toIfaceCoercion co)
do_arg arg = toIfaceExpr arg
bogusIfaceRule :: Name -> IfaceRule
bogusIfaceRule id_name
= IfaceRule { ifRuleName = fsLit "bogus", ifActivation = NeverActive,
ifRuleBndrs = [], ifRuleHead = id_name, ifRuleArgs = [],
ifRuleRhs = IfaceExt id_name, ifRuleOrph = IsOrphan,
ifRuleAuto = True }
---------------------
toIfaceExpr :: CoreExpr -> IfaceExpr
toIfaceExpr (Var v) = toIfaceVar v
toIfaceExpr (Lit l) = IfaceLit l
toIfaceExpr (Type ty) = IfaceType (toIfaceType ty)
toIfaceExpr (Coercion co) = IfaceCo (toIfaceCoercion co)
toIfaceExpr (Lam x b) = IfaceLam (toIfaceBndr x, toIfaceOneShot x) (toIfaceExpr b)
toIfaceExpr (App f a) = toIfaceApp f [a]
toIfaceExpr (Case s x ty as)
| null as = IfaceECase (toIfaceExpr s) (toIfaceType ty)
| otherwise = IfaceCase (toIfaceExpr s) (getOccFS x) (map toIfaceAlt as)
toIfaceExpr (Let b e) = IfaceLet (toIfaceBind b) (toIfaceExpr e)
toIfaceExpr (Cast e co) = IfaceCast (toIfaceExpr e) (toIfaceCoercion co)
toIfaceExpr (Tick t e)
| Just t' <- toIfaceTickish t = IfaceTick t' (toIfaceExpr e)
| otherwise = toIfaceExpr e
toIfaceOneShot :: Id -> IfaceOneShot
toIfaceOneShot id | isId id
, OneShotLam <- oneShotInfo (idInfo id)
= IfaceOneShot
| otherwise
= IfaceNoOneShot
---------------------
toIfaceTickish :: Tickish Id -> Maybe IfaceTickish
toIfaceTickish (ProfNote cc tick push) = Just (IfaceSCC cc tick push)
toIfaceTickish (HpcTick modl ix) = Just (IfaceHpcTick modl ix)
toIfaceTickish (SourceNote src names) = Just (IfaceSource src names)
toIfaceTickish (Breakpoint {}) = Nothing
-- Ignore breakpoints, since they are relevant only to GHCi, and
-- should not be serialised (Trac #8333)
---------------------
toIfaceBind :: Bind Id -> IfaceBinding
toIfaceBind (NonRec b r) = IfaceNonRec (toIfaceLetBndr b) (toIfaceExpr r)
toIfaceBind (Rec prs) = IfaceRec [(toIfaceLetBndr b, toIfaceExpr r) | (b,r) <- prs]
---------------------
toIfaceAlt :: (AltCon, [Var], CoreExpr)
-> (IfaceConAlt, [FastString], IfaceExpr)
toIfaceAlt (c,bs,r) = (toIfaceCon c, map getOccFS bs, toIfaceExpr r)
---------------------
toIfaceCon :: AltCon -> IfaceConAlt
toIfaceCon (DataAlt dc) = IfaceDataAlt (getName dc)
toIfaceCon (LitAlt l) = IfaceLitAlt l
toIfaceCon DEFAULT = IfaceDefault
---------------------
toIfaceApp :: Expr CoreBndr -> [Arg CoreBndr] -> IfaceExpr
toIfaceApp (App f a) as = toIfaceApp f (a:as)
toIfaceApp (Var v) as
= case isDataConWorkId_maybe v of
-- We convert the *worker* for tuples into IfaceTuples
Just dc | saturated
, Just tup_sort <- tyConTuple_maybe tc
-> IfaceTuple tup_sort tup_args
where
val_args = dropWhile isTypeArg as
saturated = val_args `lengthIs` idArity v
tup_args = map toIfaceExpr val_args
tc = dataConTyCon dc
_ -> mkIfaceApps (toIfaceVar v) as
toIfaceApp e as = mkIfaceApps (toIfaceExpr e) as
mkIfaceApps :: IfaceExpr -> [CoreExpr] -> IfaceExpr
mkIfaceApps f as = foldl (\f a -> IfaceApp f (toIfaceExpr a)) f as
---------------------
toIfaceVar :: Id -> IfaceExpr
toIfaceVar v
| Just fcall <- isFCallId_maybe v = IfaceFCall fcall (toIfaceType (idType v))
-- Foreign calls have special syntax
| isExternalName name = IfaceExt name
| otherwise = IfaceLcl (getOccFS name)
where name = idName v
| GaloisInc/halvm-ghc | compiler/iface/MkIface.hs | bsd-3-clause | 78,588 | 3 | 22 | 24,783 | 14,100 | 7,408 | 6,692 | -1 | -1 |
module FRP.Sodium.GameEngine2D.Geometry where
type Coord = Float
type Point = (Coord, Coord)
type Vector = (Coord, Coord)
type Rect = (Point, Vector) -- Central point and size from centre to edge
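-- For example (illustrative values): ((0, 0), (50, 30)) is the rectangle
-- centred at the origin spanning x in [-50, 50] and y in [-30, 30], so
-- rectWidth gives 100 and rectHeight gives 60.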
scale :: Coord -> Vector -> Vector
scale s (vx, vy) = (vx*s, vy*s)
negateVector :: Vector -> Vector
negateVector (vx, vy) = (-vx, -vy)
magnitude :: Vector -> Coord
magnitude (vx, vy) = sqrt (vx^2 + vy^2)
normalize :: Vector -> Vector
normalize v@(vx, vy) = (vx / mag, vy / mag)
where mag = magnitude v
distance :: Point -> Point -> Coord
distance (x0,y0) (x1,y1) = sqrt ((x1-x0)^2 + (y1-y0)^2)
plus :: Point -> Vector -> Point
plus (x0, y0) (x1, y1) = (x0+x1, y0+y1)
minus :: Point -> Point -> Vector
minus (x0, y0) (x1, y1) = (x0 - x1, y0 - y1)
translateRect :: Vector -> Rect -> Rect
translateRect v (orig, size) = (orig `plus` v, size)
scaleRect :: Vector -> Rect -> Rect
scaleRect (sx, sy) (o, (w, h)) = (o, (sx*w, sy*h))
marginRect :: Coord -> Rect -> Rect
marginRect m rect = scaleRect ((w-m)/w, (h-m)/h) rect
where w = rectWidth rect
h = rectHeight rect
rectWidth :: Rect -> Coord
rectWidth (_, (w, _)) = w*2
rectHeight :: Rect -> Coord
rectHeight (_, (_, h)) = h*2
rectAspect :: Rect -> Coord
rectAspect (_, (w, h)) = w/h
rectOrig :: Rect -> Point
rectOrig (orig, _) = orig
rectSize :: Rect -> Vector
rectSize (_, size) = size
-- | Drop the specified amount off the left of the rectangle.
dropLeft :: Coord -> Rect -> Rect
dropLeft chop rect = edgesRect (x0', y0, max x1 x0', y1)
where (x0, y0, x1, y1) = rectEdges rect
x0' = x0 + chop
dropLeftP :: Float -> Rect -> Rect
dropLeftP p rect = dropLeft (p * w * 0.01) rect
where w = rectWidth rect
takeLeft :: Coord -> Rect -> Rect
takeLeft chop rect = edgesRect (x0, y0, x0 + chop, y1)
where (x0, y0, x1, y1) = rectEdges rect
takeLeftP :: Float -> Rect -> Rect
takeLeftP p rect = takeLeft (p * w * 0.01) rect
where w = rectWidth rect
-- | Drop the specified amount off the right of the rectangle.
dropRight :: Coord -> Rect -> Rect
dropRight chop rect = edgesRect (min x0 x1', y0, x1', y1)
where (x0, y0, x1, y1) = rectEdges rect
x1' = x1 - chop
dropRightP :: Float -> Rect -> Rect
dropRightP p rect = dropRight (p * w * 0.01) rect
where w = rectWidth rect
takeRight :: Coord -> Rect -> Rect
takeRight chop rect = edgesRect (x1 - chop, y0, x1, y1)
where (x0, y0, x1, y1) = rectEdges rect
takeRightP :: Float -> Rect -> Rect
takeRightP p rect = takeRight (p * w * 0.01) rect
where w = rectWidth rect
-- | Drop the specified amount off the bottom of the rectangle.
dropBottom :: Coord -> Rect -> Rect
dropBottom chop rect = edgesRect (x0, y0', x1, max y0' y1)
where (x0, y0, x1, y1) = rectEdges rect
y0' = y0 + chop
dropBottomP :: Float -> Rect -> Rect
dropBottomP p rect = dropBottom (p * h * 0.01) rect
where h = rectHeight rect
takeBottom :: Coord -> Rect -> Rect
takeBottom chop rect = edgesRect (x0, y0, x1, y0 + chop)
where (x0, y0, x1, y1) = rectEdges rect
takeBottomP :: Float -> Rect -> Rect
takeBottomP p rect = takeBottom (p * h * 0.01) rect
where h = rectHeight rect
-- | Drop the specified amount off the top of the rectangle.
dropTop :: Coord -> Rect -> Rect
dropTop chop rect = edgesRect (x0, min y0 y1', x1, y1')
where (x0, y0, x1, y1) = rectEdges rect
y1' = y1 - chop
dropTopP :: Float -> Rect -> Rect
dropTopP p rect = dropTop (p * h * 0.01) rect
where h = rectHeight rect
takeTop :: Coord -> Rect -> Rect
takeTop chop rect = edgesRect (x0, y1 - chop, x1, y1)
where (x0, y0, x1, y1) = rectEdges rect
takeTopP :: Float -> Rect -> Rect
takeTopP p rect = takeTop (p * h * 0.01) rect
where h = rectHeight rect
-- | Chop /chop/ off the left of the rectangle, returning the left and the remainder (right).
splitLeft :: Coord -> Rect -> (Rect, Rect)
splitLeft chop rect = (takeLeft chop rect, dropLeft chop rect)
splitLeftP :: Float -> Rect -> (Rect, Rect)
splitLeftP p rect = splitLeft (p * w * 0.01) rect
where w = rectWidth rect
-- | Chop /chop/ off the right of the rectangle, returning right and the remainder (left).
splitRight :: Coord -> Rect -> (Rect, Rect)
splitRight chop rect = (takeRight chop rect, dropRight chop rect)
splitRightP :: Float -> Rect -> (Rect, Rect)
splitRightP p rect = splitRight (p * w * 0.01) rect
where w = rectWidth rect
-- | Chop /chop/ off the bottom of the rectangle, returning the bottom and the remainder (top).
splitBottom :: Coord -> Rect -> (Rect, Rect)
splitBottom chop rect = (takeBottom chop rect, dropBottom chop rect)
splitBottomP :: Float -> Rect -> (Rect, Rect)
splitBottomP p rect = splitBottom (p * h * 0.01) rect
where h = rectHeight rect
-- | Chop /chop/ off the top of the rectangle, returning the top and the remainder (bottom).
splitTop :: Coord -> Rect -> (Rect, Rect)
splitTop chop rect = (takeTop chop rect, dropTop chop rect)
splitTopP :: Float -> Rect -> (Rect, Rect)
splitTopP p rect = splitTop (p * h * 0.01) rect
where h = rectHeight rect
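-- A minimal usage sketch (not part of the original API; the concrete numbers
-- are assumed for illustration). Carving a 40-unit-wide left panel off a
-- screen rectangle centred at the origin:
exampleSplitLeft :: (Rect, Rect)
exampleSplitLeft = splitLeft 40 ((0, 0), (80, 60))
-- = (((-60, 0), (20, 60)), ((20, 0), (60, 60)))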
-- | Split a rectangle vertically into the specified percentages
splitVertical :: [Float] -> Rect -> [Rect]
splitVertical ps rect = go ps rect (sum ps)
where
go [] _ _ = error "splitVertical: empty list"
go [_] rect _ = [rect]
go (p:ps) rect total =
let h = rectHeight rect
(top, bottom) = splitTop (h * p / total) rect
in top : go ps bottom (total - p)
-- | Split a rectangle horizontally into the specified percentages
splitHorizontal :: [Float] -> Rect -> [Rect]
splitHorizontal ps rect = go ps rect (sum ps)
where
    go [] _ _ = error "splitHorizontal: empty list"
    go [_] rect _ = [rect]
    go (p:ps) rect total =
      let w = rectWidth rect
          (left, right) = splitLeft (w * p / total) rect
      in left : go ps right (total - p)
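-- Another sketch (assumed values): a 10% header, 80% body, 10% footer layout
-- of the same screen rectangle used above.
exampleRows :: [Rect]
exampleRows = splitVertical [10, 80, 10] ((0, 0), (80, 60))
-- = [((0, 54), (80, 6)), ((0, 0), (80, 48)), ((0, -54), (80, 6))]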
data Justify = LeftJ | CentreJ | RightJ
-- | The resulting rectangle will have the specified aspect ratio and fit in the
-- specified rectangle.
fitAspect :: Coord -> Justify -> Rect -> Rect
fitAspect aspect' justify ((ox, oy), (w, h)) = ((ox + shift, oy), wh')
where
aspect = w / h
(wh', shift) = if aspect' < aspect
then
let w' = h * aspect'
lat = w - w'
offset = case justify of
LeftJ -> -lat
CentreJ -> 0
RightJ -> lat
in ((w', h), offset)
else ((w, w / aspect'), 0)
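-- A worked example of 'fitAspect' (values follow from the definition above;
-- rectangles are in centre/half-extent form, so ((0,0),(4,2)) is 8 wide by 4 tall):
--
-- > fitAspect 1 CentreJ ((0, 0), (4, 2)) == ((0, 0), (2, 2))   -- centred square
-- > fitAspect 1 LeftJ   ((0, 0), (4, 2)) == ((-2, 0), (2, 2))  -- flush with the left edge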
-- | The resulting rectangle will have the specified aspect ratio and completely
-- cover the specified rectangle.
coverAspect :: Coord -> Rect -> Rect
coverAspect aspect' ((ox, oy), (w, h)) = ((ox, oy), wh')
where
aspect = w / h
wh' = if aspect' > aspect
then (h * aspect', h)
else (w, w / aspect')
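-- The covering counterpart grows rather than shrinks, e.g. (again derived from
-- the definition above):
--
-- > coverAspect 1 ((0, 0), (4, 2)) == ((0, 0), (4, 4))  -- square covering the 8x4 rect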
swap :: (a, a) -> (a, a)
swap (a, b) = (b, a)
-- | True if the point is inside the rectangle
inside :: Point -> Rect -> Bool
inside (x, y) ((ox, oy), (wx, wy)) =
x >= ox - wx && x <= ox + wx &&
y >= oy - wy && y <= oy + wy
-- | True if the two rectangles overlap
overlaps :: Rect -> Rect -> Bool
overlaps r0 r1 =
let (ax0, ay0, ax1, ay1) = rectEdges r0
(bx0, by0, bx1, by1) = rectEdges r1
in ax1 > bx0 &&
ay1 > by0 &&
ax0 < bx1 &&
ay0 < by1
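-- Both predicates treat (w, h) as half-extents, and 'overlaps' uses strict
-- comparisons, so rectangles that merely touch along an edge do not count as
-- overlapping. For example (derived from the definitions above):
--
-- > inside (1, 1) ((0, 0), (2, 2)) == True
-- > inside (3, 0) ((0, 0), (2, 2)) == False
-- > overlaps ((0, 0), (1, 1)) ((2, 0), (1, 1))   == False  -- edges touch at x = 1
-- > overlaps ((0, 0), (1, 1)) ((1.5, 0), (1, 1)) == True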
rectEdges :: Rect -> (Coord, Coord, Coord, Coord)
rectEdges ((ox, oy), (w, h)) = (x0, y0, x1, y1)
where
x0 = ox - w
y0 = oy - h
x1 = ox + w
y1 = oy + h
edgesRect :: (Coord, Coord, Coord, Coord) -> Rect
edgesRect (x0, y0, x1, y1) = ((ox, oy), (w, h))
where
ox = (x0 + x1) * 0.5
oy = (y0 + y1) * 0.5
w = (x1 - x0) * 0.5
h = (y1 - y0) * 0.5
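-- These two conversions are inverses of each other and make the
-- centre/half-extent representation explicit, e.g.:
--
-- > rectEdges ((10, 20), (3, 2)) == (7, 18, 13, 22)
-- > edgesRect (7, 18, 13, 22)    == ((10, 20), (3, 2))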
-- | Clip the second rectangle to the bounds of the first (edge-wise intersection).
clipRect :: Rect -> Rect -> Rect
clipRect clip r =
let (cx0, cy0, cx1, cy1) = rectEdges clip
(x0, y0, x1, y1) = rectEdges r
in edgesRect (cx0 `max` x0, cy0 `max` y0, cx1 `min` x1, cy1 `min` y1)
-- | True if this is a null rectangle
nullRect :: Rect -> Bool
nullRect (_, (0, 0)) = True
nullRect _ = False
-- | The rectangle that contains the two sub-rectangles
appendRect :: Rect -> Rect -> Rect
appendRect r0 r1 =
if nullRect r0 then r1 else
if nullRect r1 then r0 else
edgesRect (appendEdges (rectEdges r0) (rectEdges r1))
appendEdges :: (Coord, Coord, Coord, Coord) -> (Coord, Coord, Coord, Coord) -> (Coord, Coord, Coord, Coord)
appendEdges (ax0, ay0, ax1, ay1) (bx0, by0, bx1, by1) = (x0, y0, x1, y1)
where
x0 = min ax0 bx0
y0 = min ay0 by0
x1 = max ax1 bx1
y1 = max ay1 by1
-- | Rotate the point 90 degrees clockwise.
clockwisePoint :: Point -> Point
clockwisePoint (x, y) = (y, -x)
-- | Rotate the point 90 degrees anti-clockwise.
anticlockwisePoint :: Point -> Point
anticlockwisePoint (x, y) = (-y, x)
clockwiseRect :: Rect -> Rect
clockwiseRect (orig, (w, h)) = (clockwisePoint orig, (h, w))
anticlockwiseRect :: Rect -> Rect
anticlockwiseRect (orig, (w, h)) = (anticlockwisePoint orig, (h, w))
| the-real-blackh/sodium-2d-game-engine | FRP/Sodium/GameEngine2D/Geometry.hs | bsd-3-clause | 9,010 | 0 | 15 | 2,294 | 3,641 | 2,023 | 1,618 | 198 | 4 |
{-|
Module : Language.MDL.Interp.InterpState
Description : Stores state for the interpreter
The 'InterpState' data structure stores all the state of the interpreter as it
executes each 'Expr'.
-}
module Language.MDL.Interp.InterpState (
InterpState(..),
Lighting(..),
PointLight(..),
ShadingType(..),
topTransMat,
initState
) where
import GHC.Prim
import Data.Matrix hiding (empty)
import Data.Picture
import Language.MDL.SymTab
data InterpState = InterpState
{
-- | A function that modifies a blank picture to create the desired picture
picFunc :: Picture RealWorld -> IO ()
, transStack :: ![TransformMatrix] -- ^ The transformation stack
, symtab :: !SymTab -- ^ The symbol table
, lighting :: !Lighting -- ^ The lighting information
, shading :: !ShadingType -- ^ The shading type
}
-- | Get the top of the transformation stack from an 'InterpState'
topTransMat :: InterpState -> TransformMatrix
topTransMat ps = case transStack ps of
(top:_) -> top
[] -> idMatrix
-- | The initial lighting information
initLighting :: Lighting
initLighting = Lighting (pure 0) []
-- | The initial state of an interpreter
initState :: InterpState
initState = InterpState
{ picFunc = const $ return ()
, transStack = []
, symtab = empty
, lighting = initLighting
, shading = Flat }
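-- An illustrative note (not part of the original module): because the initial
-- 'transStack' is empty, 'topTransMat' falls back to the identity transform,
-- so @topTransMat initState@ yields 'idMatrix'; once a matrix @m@ is pushed
-- onto 'transStack', @topTransMat@ returns @m@.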
| jbaum98/graphics | src/Language/MDL/Interp/InterpState.hs | bsd-3-clause | 1,424 | 0 | 11 | 352 | 259 | 156 | 103 | 38 | 2 |
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE RecordWildCards #-}
module Lib.Database ( createUsers
, allUsersQuery
, getUserQuery
, getUser
, createDatabase
, connectDb
, User(..)
) where
import Control.Exception
import Data.Text (Text)
import qualified Data.Text as T
import Data.ByteString (ByteString)
import qualified Data.ByteString as BS
import Text.RawString.QQ
import Data.Typeable
import Database.SQLite.Simple.Types
import Database.SQLite.Simple hiding (close)
import qualified Database.SQLite.Simple as SQLite
data User = User { userId :: Integer
, username :: Text
, shell :: Text
, homeDirectory :: Text
, realName :: Text
, phone :: Text
} deriving (Eq, Show)
{-
-- Exceptions
-}
data DuplicateData = DuplicateData deriving (Eq, Show, Typeable)
instance Exception DuplicateData
createUsers :: Query
createUsers = [r|
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT UNIQUE,
shell TEXT,
homeDirectory TEXT,
realName TEXT,
phone TEXT
)
|]
insertUserQuery :: Query
insertUserQuery = "INSERT INTO users VALUES (?, ?, ?, ?, ?, ?)"
allUsersQuery :: Query
allUsersQuery = "SELECT * from users"
getUserQuery :: Query
getUserQuery = "SELECT * FROM users WHERE username = ?"
getUser :: Connection -> Text -> IO (Maybe User)
getUser conn username = do
results <- query conn getUserQuery (Only username)
case results of
[] -> return Nothing
[user] -> return $ Just user
_ -> throwIO DuplicateData
connectDb :: IO Connection
connectDb = open "finger.db"
createDatabase :: IO ()
createDatabase = do
conn <- connectDb
execute_ conn createUsers
mapM_ (addUser conn) [stampy, rowan, tessa]
rows <- query_ conn allUsersQuery
mapM_ print (rows :: [User])
SQLite.close conn
where stampy = User {
userId = 0
, username = "stampy"
, shell = "/bin/zsh"
, homeDirectory = "/home/stampy"
, realName = "Stampy Longnose"
, phone = "555-123-4567"
}
rowan = User {
userId = 0
, username = "rowan"
, shell = "/bin/zsh"
, homeDirectory = "/home/rowan"
, realName = "Rowan Pascoe"
, phone = "555-123-4568"
}
tessa = User {
userId = 0
, username = "tessa"
, shell = "/bin/zsh"
, homeDirectory = "/home/tessa"
, realName = "Tessa Pascoe"
, phone = "555-123-4569"
}
addUser :: Connection -> User -> IO ()
addUser conn user = do
execute conn insertUserQuery $ newUserRow user
instance FromRow User where
fromRow = User <$> field <*> field <*> field
<*> field <*> field <*> field
instance ToRow User where
toRow (User id_ username shell homeDir realName phone) =
toRow (id_, username, shell, homeDir, realName, phone)
type UserRow = (Null, Text, Text, Text, Text, Text)
newUserRow :: User -> UserRow
newUserRow User{..} = (Null, username, shell, homeDirectory, realName, phone)
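-- A hedged usage sketch (not part of the original module). The @main@ below is
-- hypothetical and assumes the current directory is writable so that
-- @finger.db@ can be created:
--
-- > main :: IO ()
-- > main = do
-- >   createDatabase                  -- create the table and seed three users
-- >   conn <- connectDb
-- >   mUser <- getUser conn "stampy"  -- Just (User ...) when the row exists
-- >   print mUser
-- >   SQLite.close conn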
| stephenpascoe/haskell-fingerd | src/Lib/Database.hs | bsd-3-clause | 3,285 | 0 | 11 | 984 | 776 | 447 | 329 | 87 | 3 |
{-# LANGUAGE CPP,TemplateHaskell,DeriveDataTypeable #-}
module Data.Encoding.ISO88596
(ISO88596(..)) where
import Data.Array ((!),Array)
import Data.Word (Word8)
import Data.Map (Map,lookup,member)
import Data.Encoding.Base
import Prelude hiding (lookup)
import Control.OldException (throwDyn)
import Data.Typeable
data ISO88596 = ISO88596 deriving (Eq,Show,Typeable)
instance Encoding ISO88596 where
encode _ = encodeSinglebyte (\c -> case lookup c encodeMap of
Just v -> v
Nothing -> throwDyn (HasNoRepresentation c))
encodable _ c = member c encodeMap
decode _ = decodeSinglebyte (decodeArr!)
decodeArr :: Array Word8 Char
#ifndef __HADDOCK__
decodeArr = $(decodingArray "8859-6.TXT")
#endif
encodeMap :: Map Char Word8
#ifndef __HADDOCK__
encodeMap = $(encodingMap "8859-6.TXT")
#endif
| abuiles/turbinado-blog | tmp/dependencies/encoding-0.4.1/Data/Encoding/ISO88596.hs | bsd-3-clause | 804 | 2 | 14 | 108 | 250 | 140 | 110 | 21 | 1 |
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE NoImplicitPrelude #-}
-- | Locations of items.
module Penny.Cursor where
import qualified Control.Lens as Lens
import Text.Show.Pretty (PrettyVal)
import qualified Text.Show.Pretty as Pretty
import qualified Pinchot
import Penny.Pretty
import Penny.Prelude
data Cursor = Cursor
{ _collection :: Either Text FilePath
-- ^ Indicates where this item came from; either it is an arbitrary
-- 'Text' or a 'FilePath'.
, _loc :: Pinchot.Loc
} deriving (Eq, Ord, Show, Generic)
instance PrettyVal Cursor where
prettyVal (Cursor col loc) = Pretty.Rec "Cursor"
[ ("_collection", prettyEither prettyText prettyFilePath col)
, ("_loc", Pretty.prettyVal loc)
]
Lens.makeLenses ''Cursor
| massysett/penny | penny/lib/Penny/Cursor.hs | bsd-3-clause | 811 | 0 | 10 | 136 | 173 | 102 | 71 | 20 | 0 |