<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="Intro_To_Hadoop" xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
<title>Soft Introduction to Hadoop</title>
<section>
<title>Hadoop = HDFS + MapReduce</title>
<para>
Hadoop provides two things: storage and compute. If you think of Hadoop as a coin, one side is storage and the other is compute.
</para>
<figure xml:id="hadoop-coin">
<title>Hadoop coin</title>
<inlinegraphic fileref="hadoop_coin.png" format="PNG" width="100%" scalefit="1" contentdepth="100%"/>
</figure>
<para>
In Hadoop speak, storage is provided by <emphasis>Hadoop Distributed File System (HDFS)</emphasis>. Compute is provided by <emphasis>MapReduce</emphasis>.
</para>
<para>
<emphasis>Hadoop</emphasis> is an open-source implementation of Google's distributed computing framework (which is proprietary).
It consists of two parts: Hadoop Distributed File System (HDFS), which is modeled after
Google's GFS, and Hadoop MapReduce, which is modeled after Google's MapReduce.
<!-- , so when Google teaches college students the
ideas of MapReduce programming, they, too, use Hadoop.
To further emphasize the difference
we can note that the Hadoop engineers at Yahoo like to challenge the engineers at
Google to sorting competitions between Hadoop and MapReduce.
--> </para>
<para>
<emphasis>MapReduce</emphasis> is a programming framework.
Its description was <ulink url="http://research.google.com/archive/mapreduce.html">published by Google in 2004</ulink>.
Much like other frameworks, such as Spring, Struts, or MFC, the MapReduce framework
does some things for you, and provides a place for you to fill in the blanks. What
MapReduce does for you is to organize your multiple computers in a cluster in
order to perform the calculations you need. It takes care of distributing the work
between computers and of putting together the results of each computer's
computation. Just as important, it takes care of hardware and network failures, so
that they do not affect the flow of your computation. You, in turn, have to break
your problem into separate pieces which can be processed in parallel by multiple
machines, and you provide the code to do the actual calculation.
</para>
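<para>
To make this division of labor concrete, here is a minimal sketch of the two pieces you would supply to the Hadoop Java API for the classic word-counting job. It is illustrative only: the class names are ours, and the job-submission boilerplate is omitted.
<programlisting language="java">
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// You write the map step: process one input record, emit (word, 1) pairs.
public class WordCountMapper extends Mapper&lt;LongWritable, Text, Text, IntWritable&gt; {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
        }
    }
}

// You write the reduce step: the framework sorts and groups, then hands
// you every count recorded for one word.
class WordCountReducer extends Reducer&lt;Text, IntWritable, Text, IntWritable&gt; {
    @Override
    protected void reduce(Text key, Iterable&lt;IntWritable&gt; values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
</programlisting>
Everything between these two steps - splitting the input, shipping the code to the data, sorting, retrying failed tasks - is the framework's job.
</para>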
</section>
<section>
<title>Why Hadoop?</title>
<para>
We have already mentioned that Hadoop is used at
Yahoo and Facebook. It has seen rapid uptake in finance, retail, telecommunications, and
government. It is making inroads into the life sciences. Why is this?
</para>
<figure>
<title>Will you join the Hadoop dance?</title>
<graphic fileref="hadoop-dance-resized.png"></graphic>
</figure>
<para>
The short answer is that it simplifies dealing with Big Data. This answer
immediately resonates with people; it is clear and succinct, but it is not complete.
The Hadoop framework has the built-in power and flexibility to do what you could not
do before. In fact, Cloudera presentations at a recent O'Reilly Strata conference
mentioned that MapReduce was initially used at Google and Facebook not
primarily for its scalability, but for what it allowed them to do with the data.
</para>
<para>
In 2010, the average size of Cloudera's customers' clusters was 30 machines.
In 2011 it was 70. When people start using Hadoop, they do it for many reasons,
all concentrated around the new ways of dealing with the data. What gives them
the security to go ahead is the knowledge that Hadoop solutions are massively
scalable, as has been proved by Hadoop running in the world's largest computer
centers and at the largest companies.
</para>
<para>
As you will discover, the Hadoop framework organizes the data and the
computations, and then runs your code. At times, it makes sense to run your
solution, expressed in a MapReduce paradigm, even on a single machine.
</para>
<para>
But of course, Hadoop really shines when you have not one, but rather tens,
hundreds, or thousands of computers. If your data or computations are significant
enough (and whose aren't these days?), then you need more than one machine to do
the number crunching. If you try to organize the work yourself, you will soon
discover that you have to coordinate the work of many computers, handle failures and
retries, collect the results, and so on. Enter Hadoop to solve all these
problems for you. Now that you have a hammer, everything becomes a nail: people
will often reformulate their problem in MapReduce terms, rather than create a new
custom computation platform.
</para>
<para>
No less important than Hadoop itself are its many friends. The Hadoop Distributed
File System (HDFS) provides unlimited file space available from any Hadoop
node. HBase is a high-performance unlimited-size database working on top of
Hadoop. If you need the power of familiar SQL over your large data sets, Hive
provides you with an answer. While Hadoop can be used by programmers and
taught to students as an introduction to Big Data, its companion projects (including
ZooKeeper, about which we will hear later on) will make projects possible and
simplify them by providing tried-and-proven frameworks for every aspect of dealing with
large data sets.
</para>
<para>
As you learn the concepts and perfect your skills with the techniques described
in this book, you will discover that there are many cases where Hadoop storage,
Hadoop computation, or Hadoop's friends can help you. Let's look at some of these
situations.
</para>
<itemizedlist>
<listitem>
<para>
Do you find yourself constantly cleaning out the overfull hard drives in your company? Do you
need to transfer data from one drive to another as a backup? Many people are so used to
this necessity that they consider it an unpleasant but unavoidable part of life. The Hadoop
Distributed File System, HDFS, grows by adding servers. To you it looks like one hard
drive. It is self-replicating (you set the replication factor) and thus provides redundancy
as a software alternative to RAID.
</para>
</listitem>
<listitem>
<para>
Do your computations take an unacceptably long time? Are you forced to give up on
projects because you don’t know how to easily distribute the computations between
multiple computers? MapReduce helps you solve these problems. What if you don’t have
the hardware to run the cluster? Amazon EC2 can run MapReduce jobs for you, and you
pay only for the time they run; the cluster is automatically formed for you and then
disbanded.
</para>
</listitem>
<listitem>
<para>
But say you are lucky, and instead of maintaining legacy software, you are charged with
building new, progressive software for your company's workflow. Of course, you want to
have unlimited storage, solving this problem once and for all, so as to concentrate on
what's really important. The answer is: you can mount HDFS as a FUSE file system, and
you have your unlimited storage. In our case studies we look at the successful use of
HDFS as grid storage for the Large Hadron Collider.
</para>
</listitem>
<listitem>
<para>
Imagine you have multiple clients using your online resources, computations, or data.
Each single use is saved in a log, and you need to generate a summary of resource usage
for each client by day or by hour. From this you will generate your invoices, so it <emphasis>is</emphasis> important.
But the data set is large. You can write a quick MapReduce job for that (a sketch of such a mapper follows this list). Better yet, you
can use Hive, a data warehouse infrastructure built on top of Hadoop, with its ETL
capabilities, to generate your invoices in no time. We'll talk about Hive later, but we hope
that you already see that you can use Hadoop and friends for fun and profit.
</para>
</listitem>
</itemizedlist>
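<para>
As promised, here is a sketch of the map step for the per-customer usage summary from the last list item. It assumes an invented log layout (tab-separated ISO timestamp, customer ID, and cost), so treat it as an illustration of the idea rather than a ready recipe; a reducer that sums the values, just like the word-count reducer shown earlier, completes the job.
<programlisting language="java">
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UsageMapper extends Mapper&lt;LongWritable, Text, Text, DoubleWritable&gt; {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Hypothetical record: 2011-01-05T11:07:00 (tab) Cust89347281 (tab) 0.0035
        String[] fields = value.toString().split("\t");
        String hour = fields[0].substring(0, 13);   // truncate the timestamp to the hour
        String customer = fields[1];
        double cost = Double.parseDouble(fields[2]);
        // Keying by customer and hour makes the framework group exactly
        // the records that belong on one invoice line.
        context.write(new Text(customer + "/" + hour), new DoubleWritable(cost));
    }
}
</programlisting>
</para>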
<para>
Once you start thinking without the usual limitations, you can improve on what
you already do and come up with new and useful projects. In fact, this book
partially came about by asking people how they used Hadoop in their work. You,
the reader, are invited to submit your applications that became possible with
Hadoop, and I will put them into the Case Studies (with attribution, of course :).
</para>
</section>
<section>
<title>Meet the Hadoop Zoo</title>
<para>
QUINCE: Is all our company here?
</para>
<para>
BOTTOM: You were best to call them generally, man by man, according to the script.
</para>
<para>
<ulink url="http://shakespeare.mit.edu/midsummer/full.html">
Shakespeare, "Midsummer Night's Dream"
</ulink>
</para>
<para>
There are a number of animals in the Hadoop zoo, and each deals with a certain
aspect of Big Data. Let us illustrate this with a picture, and then introduce them
one by one.
</para>
<figure>
<title>The Hadoop Zoo</title>
<graphic fileref="chapter-intro-01.png"></graphic>
</figure>
<section>
<title>HDFS - Hadoop Distributed File System</title>
<para>
HDFS, or the Hadoop Distributed File System, gives the programmer unlimited
storage (fulfilling a cherished dream for programmers).
Beyond that, HDFS offers the following advantages.
</para>
<itemizedlist>
<listitem>
<para>
Horizontal scalability. Thousands of servers holding petabytes of data. When you need
even more storage, you don't switch to more expensive solutions, but add servers instead.
</para>
</listitem>
<listitem>
<para>
Commodity hardware. HDFS is designed with relatively cheap commodity hardware
in mind. HDFS is self-healing and self-replicating.
</para>
</listitem>
<listitem>
<para>
Fault tolerance. Every member of the Hadoop zoo knows how to deal with hardware
failures. If you have 10,000 servers, then you will see one server fail every day, on
average. HDFS anticipates this by replicating the data, by default three times, on
different data node servers. Thus, if one data node fails, the other two copies can be used to
restore the third one in a different place (a small code sketch follows this list).
</para>
</listitem>
</itemizedlist>
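<para>
Here is the small sketch promised above: setting a file's replication factor through the standard Hadoop Java client. The path is made up, and the snippet assumes a reachable, conventionally configured cluster.
<programlisting language="java">
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplication {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Ask HDFS to keep three copies of this (hypothetical) file; the
        // NameNode re-replicates automatically if a holder of a copy dies.
        fs.setReplication(new Path("/data/sensor-readings.log"), (short) 3);
    }
}
</programlisting>
</para>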
<para>
The HDFS implementation is modeled after GFS, the Google File System; you can
read the original paper here:
<ulink url="http://labs.google.com/papers/gfs.html">http://labs.google.com/papers/gfs.html</ulink>.
</para>
<para>
More in-depth discussion of HDFS is here: <xref linkend="HDFS_Intro"/>
</para>
</section>
<section>
<title>MapReduce</title>
<para>
MapReduce takes care of distributed computing.
It reads the data, usually from its storage, the Hadoop Distributed File System (HDFS), in an optimal way.
However, it can read the data from other places too, including mounted
local file systems, the web, and databases. It divides the computations between
different computers (servers, or nodes). It is also fault-tolerant.
</para>
<para>
If some of your nodes fail, Hadoop knows how to continue with the
computation, by re-assigning the incomplete work to another node and cleaning up
after the node that could not complete its task. It also knows how to combine the
results of the computation in one place.
</para>
<para>
More in-depth discussion of MapReduce is here: <xref linkend="MapReduce_Intro"/>
</para>
</section>
<section>
<title>HBase, the database for Big Data</title>
<para>"Thirty spokes share the wheel's hub, it is the empty space that make it useful"
- Tao Te Ching (
<ulink url = "http://terebess.hu/english/tao/gia.html">
translated by Gia-Fu Feng and Jane English)
</ulink>
</para>
<para>
Not properly an animal, HBase is nevertheless very powerful. It is currently
denoted by the letter H with a bass clef. If you think this is not so great, you are
right, and the HBase people are thinking of changing the logo. HBase is a database
for Big Data, up to millions of columns and billions of rows.
</para>
<para>
Another feature of HBase is that it is a key-value database, not a relational
database. We will get into the pros and cons of these two approaches to databases
later, but for now let's just note that key-value databases are considered a better
fit for Big Data. Why? Because they don't store nulls! This earns them the description
"sparse," and, as we saw above, the Tao Te Ching says that the empty space is what makes them useful.
</para>
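<para>
For a taste of what talking to HBase looks like, here is a hedged sketch against the modern HBase Java client; the table, column family, and cell contents are invented for the example.
<programlisting language="java">
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("users"))) {
            // Only the cells we actually write are stored; absent cells
            // cost nothing, which is what "sparse" means in practice.
            Put put = new Put(Bytes.toBytes("row-42"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("email"),
                          Bytes.toBytes("alice@example.com"));
            table.put(put);
        }
    }
}
</programlisting>
</para>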
</section>
<section>
<title>ZooKeeper</title>
<para>
Every zoo has a zoo keeper, and the Hadoop zoo is no exception. When all the
Hadoop animals want to do something together, it is the ZooKeeper who helps
them do it. They all know him and listen and obey his commands. Thus, the
ZooKeeper is a centralized service for maintaining configuration information,
naming, providing distributed synchronization, and providing group services.
</para>
<para>
ZooKeeper is also fault-tolerant. In your development environment, you can put
ZooKeeper on a single node, but in production you usually run it on an odd number
of servers, such as 3 or 5. The odd number matters because ZooKeeper requires a
majority (a quorum) of its servers to agree: an ensemble of 2n+1 servers keeps working
through the failure of any n of them.
</para>
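<para>
As a flavor of the API, here is a minimal sketch that registers an ephemeral znode with the standard ZooKeeper Java client; the connection string and path are placeholders.
<programlisting language="java">
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 3000, event -> { });
        // An ephemeral node disappears automatically if this client dies,
        // which is the building block for group membership and leader election.
        zk.create("/workers/worker-1", new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }
}
</programlisting>
</para>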
</section>
<section>
<title>Hive - data warehousing</title>
<para>
Hive: "I am Hive, I let you in and out of the HDFS cages, and you can talk SQL to me!"
</para>
<para>
Hive is a way for you to get all the honey, and to leave all the work to the bees.
You can do a lot of data analysis with Hadoop, but you will also have to write
MapReduce tasks. Hive takes that task upon itself. Hive defines a simple SQL-like query
language, called QL, that enables users familiar with SQL to query the data.
</para>
<para>
At the same time, if your Hive program does almost what you need, but not
quite, you can call on your MapReduce skill. Hive allows you to write custom
mappers and reducers to extend the QL capabilities.
</para>
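<para>
As a flavor of QL, here is a hedged sketch that runs one query through the Hive JDBC driver; it assumes a reachable HiveServer2 instance with the driver on the classpath, and the usage_log table is invented.
<programlisting language="java">
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = con.createStatement();
             // Behind this familiar-looking SQL, Hive generates MapReduce jobs.
             ResultSet rs = stmt.executeQuery(
                 "SELECT customer_id, SUM(cost) FROM usage_log GROUP BY customer_id")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
            }
        }
    }
}
</programlisting>
</para>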
</section>
<section>
<title>
Pig - Big Data manipulation
</title>
<para>
Pig: "I am Pig, I let you move HDFS cages around, and I speak Pig Latin."
</para>
<para>
Pig is called pig not because it eats a lot, although you can imagine a pig
pushing around and consuming big volumes of information. Rather, it is called pig because it speaks Pig Latin. Others who also speak this language are the kids (the programmers) who visit the Hadoop zoo.
</para>
<para>
So what is Pig Latin that Apache Pig speaks? As a rough analogy, if Hive is the
SQL of Big Data, then Pig Latin is the language of the stored procedures of Big
Data. It allows you to manipulate large volumes of information, analyze them, and
create new derivative data sets. Internally it creates a sequence of MapReduce jobs,
and thus you, the programmer-kid, can use this simple language to solve pretty
sophisticated large-scale problems.
</para>
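<para>
Here is a hedged sketch of Pig Latin embedded in Java through the PigServer API; the paths and field layout are invented.
<programlisting language="java">
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class PigSketch {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.MAPREDUCE);
        // Each registerQuery line is Pig Latin; Pig compiles the sequence
        // into MapReduce jobs behind the scenes.
        pig.registerQuery("logs = LOAD '/logs/access' AS (user:chararray, bytes:long);");
        pig.registerQuery("by_user = GROUP logs BY user;");
        pig.registerQuery("totals = FOREACH by_user GENERATE group, SUM(logs.bytes);");
        pig.store("totals", "/reports/bytes_by_user");
    }
}
</programlisting>
</para>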
</section>
</section>
<section>
<title>Hadoop alternatives</title>
<para>
Now that we have met the Hadoop zoo, we are ready to start our excursion. Only
one thing stops us at this point: a gnawing doubt - are we in the right
zoo? Let us look at some alternatives for dealing with Big Data. Granted, our
concentration here is Hadoop, and we may not do justice to all the other
approaches. But we will try.
</para>
<section>
<title>Large data storage alternatives</title>
<para>
HDFS is not the only distributed file system, and in fact neither the earliest nor the
latest. Ceph claims to be more flexible and to remove the limit on the number of
files. HDFS stores all of its file metadata in the memory of the server which is
called the NameNode. This is its strong point - speed - but it is also its Achilles'
heel! Ceph, on the other hand, makes the function of the NameNode completely
distributed.
</para>
<para>
Another possible contender is ZFS, an open-source file system from Sun (now
Oracle). Intended as a complete redesign of file system thinking, ZFS
holds a strong promise of unlimited size, robustness, encryption, and many other
desirable qualities built into the low-level file system. After all, HDFS and its
role model GFS both build on a conventional file system, creating their improvement
on top of it, whereas the premise of ZFS is that the underlying file system should be
redesigned to address the core issues.
</para>
<para>
I have seen production architectures built on ZFS, where the data storage
requirements were very clear and well-defined and where storing data from
multiple field sensors was considered better done with ZFS. The pros for ZFS in
this case were: built-in replication, low overhead, and - given the right structure
of records when written - built-in indexing for searching. Obviously, this was a
very specific, though very fitting, solution.
</para>
<para>
While other file systems start out with the goal of improving on the HDFS/GFS
design, the advantage of HDFS is that it is very widely used. I think that in
evaluating other file systems, the reader can be guided by the same considerations
that led to the design of GFS: its designers analyzed prevalent file usage in the
majority of their applications, and created a file system that optimized reliability
for that particular type of usage. The reader would be well advised to compare the
assumptions of the GFS designers with his or her own case, and decide whether HDFS fits the
purpose, or whether something else should be used in its place.
</para>
<para>
We should also note here that we compared Hadoop to other open-source storage
solutions. There are also proprietary and commercial solutions, but such a comparison
goes beyond the scope of this introduction.
</para>
</section>
<section>
<title>Large database alternatives</title>
<para>
The closest to HBase is Cassandra. While HBase is a near-clone of Google’s
Big Table, Cassandra purports to be a “Big Table/Dynamo hybrid”. It can be said
that while Cassandra’s “writes-never-fail” emphasis has its advantages, HBase is
the more robust database for a majority of use cases. With HBase more prevalent
in use, Cassandra faces an uphill battle - but it may be just what you need.
</para>
<para>
Hypertable is another database close to Google's Big Table in features, and it
claims to run 10 times faster than HBase. There is an ongoing discussion between
HBase and Hypertable proponents, and the authors do not want to take sides in it,
leaving the comparison to the reader. Like Cassandra, Hypertable has fewer users
than HBase, and here too, the reader needs to evaluate the speed of Hypertable for
his application, and weigh it against other factors.
</para>
<para>
MongoDB (from "humongous") is a scalable, high-performance, open source,
document-oriented database. Written in C++, MongoDB features
document-oriented storage, full index on any attribute, replication and high
availability, rich, document-based queries, and it works with MapReduce. If you
are specifically processing documents and not arbitrary data, it is worth a look.
</para>
<para>
Other open-source and commercial databases that may be given consideration
include Vertica with its SQL support and visualization, CloudTran for OLTP, and
Spire.
</para>
<para>
In the end, before embarking on a development project, you will need to
compare the alternatives. Below is an example of such a comparison. Please keep in
mind that this is just one possible point of view, and that the specifics of your project
and of your situation will be different. Therefore, the table below is mainly meant to
encourage the reader to do a similar evaluation for his own needs.
</para>
<table frame='all'>
<title>Comparison of Big Data Databases</title>
<tgroup cols='6' align='left' colsep='1' rowsep='1'>
<colspec colname='c1'/>
<colspec colname='c2'/>
<colspec colname='c3'/>
<colspec colnum='5' colname='c5'/>
<thead>
<row>
<entry>DB Pros/Cons</entry>
<entry>HBase</entry>
<entry>Cassandra</entry>
<entry>Vertica</entry>
<entry>CloudTran</entry>
<entry>HyperTable</entry>
</row>
</thead>
<tbody>
<row>
<entry>Pros</entry>
<entry>Key-based NoSQL, active user community, Cloudera support </entry>
<entry>Key-based NoSQL, active user community, Amazon's Dynamo on EC2</entry>
<entry>Closed-source, SQL-standard, easy to use, visualization tools, complex queries </entry>
<entry>Closed-source, optimized for online transaction processing</entry>
<entry>Drop-in replacement for HBase, open-source, arguably much faster</entry>
</row>
<row>
<entry>Cons</entry>
<entry>Steeper learning curve, fewer tools, simpler queries</entry>
<entry>Steeper learning curve, fewer tools, simpler queries</entry>
<entry>Vendor lock-in, price, RDBMS/BI - may not fit every application</entry>
<entry>Vendor lock-in, price, transaction-optimized, may not fit every application, needs wider adoption</entry>
<entry>New, needs user adoption and more testing</entry>
</row>
<row>
<entry>Notes</entry>
<entry>Good for new, long-term development </entry>
<entry>Easy to set up, no dependence on HDFS, fully distributed architecture</entry>
<entry>Good for existing SQL-based applications that need fast scaling</entry>
<entry>Arguably the best OLTP </entry>
<entry>To be kept in mind as a possible alternative </entry>
</row>
</tbody>
</tgroup>
</table>
</section>
</section>
<section>
<title>Alternatives for distributed massive computations</title>
<para>Here too, depending upon the type of application the reader needs, other
approaches may prove more useful or more fitting to the purpose.
</para>
<para>
The first such example is the JavaSpaces paradigm. JavaSpaces is a giant hash
map container. It provides the framework for building large-scale systems with
multiple cooperating computational nodes. The framework is thread-safe and
fault-tolerant. Many computers working on the same problem can store their data in
a JavaSpaces container. When a node wants to do some work, it finds the data in
the container, checks it out, works on it, and then returns it. The framework
provides for atomicity. While the node is working on the data, other nodes cannot
see it. If the node fails, its lease on the data expires, and the data is returned to the
pool for processing.
</para>
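<para>
To make the check-out/check-in cycle concrete, here is a hedged sketch against the standard JavaSpaces (Jini) API; obtaining the space itself through Jini lookup is omitted, and the entry type is invented. For the crash-safety described above, a real worker would pass a Jini transaction instead of null.
<programlisting language="java">
import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// A unit of work; JavaSpaces entries are plain objects with public fields.
class Task implements Entry {
    public String payload;
    public Task() { }
}

public class WorkerSketch {
    static void processOne(JavaSpace space) throws Exception {
        Task template = new Task();      // null fields act as wildcards
        // take() checks the task out of the space so no other node sees it.
        Task task = (Task) space.take(template, null, Long.MAX_VALUE);
        task.payload = task.payload.toUpperCase();   // do the "work"
        // Write the result back for whoever collects it.
        space.write(task, null, Lease.FOREVER);
    }
}
</programlisting>
</para>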
<para>
The champion of JavaSpaces is a commercial company called GigaSpaces. The
license for a JavaSpaces container from GigaSpaces is free - provided that you can
fit into the memory of one computer. Beyond that, GigaSpaces has implemented an
unlimited JavaSpaces container where multiple servers combine their memories
into a shared pool. GigaSpaces has also created a big set of additional functionality for
building large distributed systems. So again, everything depends on the reader's
particular situation.
</para>
<para>
GridGain is another Hadoop alternative. The proponents of GridGain claim that
while Hadoop is both a compute grid and a data grid, GridGain is just a compute grid,
so if your data requirements are not huge, why bother? They also say that it is
enormously simpler to use. Studying the tools and prototyping with them can
give one a good feel for the most fitting answer.
</para>
<para>
Terracotta is a commercial open-source company, and in the open-source realm
it provides a Java big cache and a number of other components for building large
distributed systems. One of its advantages is that it allows existing
applications to scale without a significant rewrite. By now we have gotten pretty far away
from Hadoop, which proves that we have achieved our goal: to give the reader a quick
overview of the various alternatives for building large distributed systems. Success in
whichever way you choose to go!
</para>
</section>
<section>
<title>Arguments for Hadoop</title>
<para>
We have given the pro arguments for the Hadoop alternatives, but now we can put
in a word for the little elephant and its zoo. It boasts wide adoption, has an active
community, and has been in production use in many large companies. I think that
before embarking on an exciting journey of building large distributed systems, the
reader will do well to view the presentation by Jeff Dean, a Google Fellow, on the
"Design, Lessons, and Advice from Building Large Distributed Systems"
<ulink url = "http://www.slideshare.net/xlight/google-designs-lessons-and-advice-from-building-large-distributed-systems">
found on SlideShare
</ulink>.
</para>
<para>
Google has built multiple applications on GFS, MapReduce, and Big Table, which
are all implemented as open-source projects in the Hadoop zoo. According to Jeff,
the plan is to continue with 1,000,000 to 10,000,000 machines spread across hundreds
to thousands of locations around the world, and as arguments go, that is pretty big.
</para>
</section>
<!--
<section>
<title>Say "Hi!" to Hadoop</title>
<para>Enough words, let’s look at some code!
First, however, let us explain how MapReduce works in human terms.
</para>
<section>
<title>A dialog between you and Hadoop</title>
<para>
Imagine you want to count word frequencies in a text. It may be a book or a
document, and word frequencies may tell you something about its subject. Or it
may be a web access log, and you may be looking for suspicious activity. It may be
a log of any customer activity, and you might be looking for most frequent
customers, and so on.
</para>
<para>
In a straightforward approach, you would create an array or a hash table of
words, and start counting the word's occurrences. You may run out of memory, or
the process may be slow. If you try to use multiple computers all accessing shared
memory, the system immediately gets complicated, with multi-threading, and we
have not even thought of hardware failures. Anyone who has gone through similar
exercises knows how quickly a simple task can become a nightmare.
</para>
<para>
Enter Hadoop which offers its services. The following dialog ensues.
</para>
<para>
<emphasis>Hadoop: </emphasis>How can I help?
</para>
<para>
<emphasis>You: </emphasis>I need to count words in a document.
</para>
<para>
<emphasis>Hadoop: </emphasis>I will read the words and give them to you, one at a time, can you
count that?
</para>
<para>
<emphasis>You: </emphasis>Yes, I will assign a count of 1 to each and give them back to you.
</para>
<para>
<emphasis>Hadoop: </emphasis>Very good. I will sort them, and will give them back to you in groups,
grouping the same words together. Can you count that?
</para>
<para>
<emphasis>You: </emphasis>Yes, I will go through them and give you back each word and its count.
</para>
<para>
<emphasis>Hadoop: </emphasis>I will record each word with its count, and we’ll be done.
</para>
<para>I am not pulling your leg: it is that simple. That is the essence of a MapReduce
job. Hadoop uses the cluster of computers (nodes), where each node reads words in
parallel with all others (Map), then the nodes collect the results (Reduce) and writes
them back. Notice that there is a sort step, which is essential to the solution, and is
provided for you - regardless of the size of the data. It may take place all in
memory, or it may spill to disk. If any of the computers go bad, their tasks are
assigned to the remaining healthy ones.
</para>
<para>
How does this dialog look in the code?
</para>
</section>
<section>
<title>Geek Talk</title>
<para>
<emphasis>Hadoop: </emphasis>How can I help?
</para>
<para>
Hadoop:
<programlisting language="java">
public class WordCount extends Configured implements Tool {
public int run(String[] args) throws Exception {
</programlisting>
</para>
<para>
<emphasis>You: </emphasis>I need to count words in a document.
</para>
<para>
You:
<programlisting language="java">
Job job = new Job(getConf());
job.setJarByClass(WordCount.class);
job.setJobName("wordcount");
</programlisting>
</para>
<para>
<emphasis>Hadoop: </emphasis>I will read the words and give them to you, one at a time, can you
count that?
</para>
<para>
Hadoop
<programlisting language="java">
public static class Map extends Mapper <LongWritable, Text, Text, IntWritable> {
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
String line = value.toString();
</programlisting>
</para>
<para>
<emphasis>You: </emphasis>Yes, I will assign a count of 1 to each and give them back to you.
</para>
<para>
You
<programlisting language="java">
String [] tokens = line.split(" ,");
for (String token: tokens) {
Text word = new Text();
word.set(token);
context.write(word, new IntWritable(1));
}
</programlisting>
</para>
<para>
You have done more than you promised - you can process multiple words on
the same line, if Hadoop chooses to give them to you. This follows the principles of
defensive programming. Then you immediately realize that each input line can be
as long as it wants. In fact, Hadoop is optimized to have the best overall throughput
on large data sets. Therefore, each input can be a complete document, and you are
counting word frequencies in documents. If the documents come from the Web, for
example, you already have the scale of computations needed for such tasks.
</para>
<para>
<emphasis>Hadoop: </emphasis>Very good. I will sort them, and will give them back to you in groups,
grouping the same words together. Can you count that?
</para>
<para>
<programlisting language="java">
public static class Reduce extends Reducer <Text, IntWritable, Text, IntWritable> {
@Override public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
</programlisting>
</para>
<para>
<emphasis>You: </emphasis>Yes, I will go through them and give you back each word and its count.
</para>
<para>
You
<programlisting language="java">
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
context.write(key, new IntWritable(sum));
</programlisting>
</para>
<para>
<emphasis>Hadoop: </emphasis>I will record each word with its count, and we’ll be done.
</para>
<para>
Hadoop:
<programlisting language="java">
// This is invisible - no code
// but you can trust that he does it
}
</programlisting>
</para>
</section>
<section>
<title>Let me see, who is Map and who is Reduce?</title>
<para>
</para>
<para>MapReduce (or MR, if you want to impress your friends) has mappers and
reducers, as can be seen by their names. What are they?
</para>
<para>Mapper is the code that you supply to process one entry. This entry can be a
line from a book or from a log, a temperature or financial record, etc. In our
example it was counting to 1. In a different use case, it may be a name of an
archive file, which the Mapper will unpack and process.
</para>
<para>When the Mapper is done processing, it returns the results to the framework.
The return takes the form of a Map, which consists of a key and a value. In our
case, the key was the word. It can be a hash of a file, or any other important
characteristic of your value. The keys that are the same will be combined, so you
select them in such a way that elements that you want to be processed together will
have the same key.
</para>
<para>Finally, your Mapper code gives the Map to the framework. This is called
emitting the map. It is expressed by the following code line:
</para>
<para>
<programlisting language="java">
context.write(key, value);
</programlisting>
</para>
<para>Now the framework sorts your maps. Sorting is an interesting process and it
occurs a lot in computer processing, so we will talk in more detail about it in the
next chapter. Having sorted the maps, the framework gives them back to you in
groups. You supply the code which tells it how to process each group. This is the
Reducer. The key that you gave to the framework is the same key that it returns to
your Reducer. In our case, it was a word found in a document.
</para>
<para>In the code, this is how we went through a group of values:
</para>
<para>Going through values in the reducer
<programlisting language="java">
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
</programlisting>
</para>
<para>While the key was now the word, the value was count - which, as you may
remember, we have set to 1 for each word. These are being summed up.
</para>
<para>Finally, you return the reduced result to the framework, and it outputs results to
a file.
</para>
<para>Reducer emits the final map
<programlisting language="java">
context.write(key, new IntWritable(sum));
</programlisting>
</para>
</section>
</section>
-->
<!--
<section>
<title>How Hadoop Works Inside</title>
<para>Having successfully gone through the first example, and perhaps even having
compiled and run it, we can take a breather and look at some theory. Early on,
programmers had little use or respect for theory, but that approach can only take
you so far. For example, if a multi-threaded application is written without
knowledge and a good understanding of concurrency issues, the creation will fail at
random, unpredictable times, most often under load and on the verge of success.
</para>
<para>This is why we should welcome theory, and here it is.
</para>
</section>
-->
<!--
<section>
<title>Rolling Up Your Sleeves</title>
<para>
</para>
<section>
<title>For Windows Programmers</title>
<para>We do realize that there are some programmers in the world who run Windows.
You can still learn and use Hadoop. Here are a few possible roads that you can
take: install Ubuntu alongside with Windows, install it as a double-boot, or install
Fedora. Hadoop recommends Linux, and Cloudera provides packages for Debian
and RedHat Linux. If you prefer to stay closer to Windows, use Cygwin, following
the instructions on the Apache Hadoop site.
</para>
</section>
<section>
<title>Running WordCount in Your IDE</title>
<para>Discussion
Which Hadoop distribution should you use? The first two choices are Apache
Hadoop and the Cloudera CDH.
What are the pros and cons of each?
Apache Hadoop is the home of the Hadoop project. You can get any version
there. Of course, if you a real programmer and don't have a production
environment to support, you will choose the latest. That's fine - you can download
and install the latest, usually in your local account, and accept the default settings
and permission. That is the easiest way to get up and running with Apache
Hadoop. If you want to install for more than yourself on that system, you would
have to follow the best installation practices and learn about the preferred groups
and permission.
You always deal with the latest and greatest code, and you can even try to build
it yourself, if you adventurous enough you can make an improvement and suggest
it to the committees, in the form of a HADOOP-PATH-XXX - you can learn about
it here: http://wiki.apache.org/hadoop/HowToContribute
Your other choice is Cloudera CDH distribution. It offers the same code, but
already packaged and tested, for many Linux flavors. If you go this route, you
automatically get introduced to the best deployment practices, everything gets put
into the right directories, and your Hadoop commands work from the command
line out of the box, because they are put on the path by the install process. You
don't even have to set the environmental variable yourself.
Cloudera CDH distribution offers other advantages: it puts in the patches that
Cloudera tests, and it installs other Hadoop-related projects as packages. It may lag
one version behind the current Apache Hadoop, so if you absolutely need the latest
features, you will go with Apache.
Which choice is the better one? It almost does not matter, it will work either
way. Most of my friends consultants tell me that they recommend Cloudera CDH
distribution to the clients, and I guess for two reasons: they will have less
installation questions from the clients, and many commercial companies may
prefer to know that there is commercial support for the open source Hadoop.
Compare this to RedHat, and Cloudera is not shying away from this comparison
either.
This covers the install issues. You need it before you are to go to the next
exercise. Therefore, install Hadoop, download the code from GitHub, create the
project in your IDE, and run it. You cannot do other labs before you do it.
Note that running from the IDE, you essentially ran your job with the following
command:
<programlisting language="java">java -jar dist/Chapter1.jar test-data/moby-dick.txt test-output
</programlisting>
The jar that you give on the command line is the jar that contains your main
function, used to submit your Hadoop job. The other jar, which you set with the
call to , is the one that contains job.setJarByClass() your mapper and
reducer code. It is this this second jar that Hadoop will copy to every node and run
it there, following the rule of "move computations close to the data, not the data
close to the computations." In this lab, one jar contains all the code.
Running inside of the IDE relies on the Hadoop jars being present. The
command java -jar above worked, because the Hadoop jars were copied to
dist/lib, and the Chapter1.jar pointed to them. In a more general case, you will
need to be cognizant of packaging the required jars. This will be covered in a later
chapter dealing with best practices.
</para>
</section>
<section>
<title>Technique 2: Run WordCount Outside of Your IDE</title>
<para>Problem
After you were successful in running a MapReduce job from the IDE, it is time
to to launch it on a Hadoop cluster, even if it is configured only in
pseudo-distributed mode.
Solution
Now that sounds pretty simple. Build the jar in your IDE, then run it from the
command line which will looks something like this:
<programlisting language="java">hadoop jar your-jar parameters</programlisting>
When you were running your WordCountExample from the IDE, the code
picked up the data from the local file system. This is very useful for debugging, but
it will not work when running under hadoop, even in local mode on one machine,
because the data needs to reside in HDFS. Let's do it right from the beginning and
copy the data to where it should be:
<programlisting language="java">
hadoop fs -mkdir /chapter1
hadoop fs -copyFromLocal moby-dick.txt /chapter1
</programlisting>
See it here: http://localhost:50070/
Now we can run it with the following command:
Listing 1.12 run_wordcount_local.sh
<programlisting language="java">
hadoop jar ../dist/Chapter1.jar \
hdfs://localhost/chapter1/test-data/moby-dick.txt \
hdfs://localhost/chapter1/test-output
</programlisting>
After the program runs, it is instructive and pleasant to view the output in the
browser:
Figure 1.4 Output in HDFS viewed in the browser
Discussion
What happened to the Hadoop jars that we needed in Lab 1? Since we are now
running relying on Hadoop, it takes care to provide your code with its library jars,
and we do not need to care about them explicitly. This is true, provided that
configured correctly, and usually this is HADOOP_CLASSPATH true if the install
and run scripts are configured right. If not, you may need to adjust your
installation.
</para>
</section>
<section>
<title>Lab 3: Configure Distributed Hadoop Cluster</title>
<para>Configure a minimal cluster of 2-3 nodes and run the WordCountExample there.
Make sure that the tasks get distributed to different nodes. Verify this with Hadoop
logging.
When you follow the instructions, using Cloudera or Apache Hadoop
distribution, you should be able to see the HFDS in the browser, like in this figure
for a 2-node cluster.
Figure 1.5 Browsing a 2-node HDFS
You would then run it with the following command:
Listing 1.13 run_wordcount_dist.sh
<programlisting language="java">
hadoop jar ../dist/Chapter1.jar \
hdfs://hadoop-master/chapter1/test-data/moby-dick.txt \
hdfs://hadoop-master/chapter1/test-output
</programlisting>
</para>
</section>
<section>
<title>Lab 4: Customer Billing (Advanced)</title>
<para>Each line of your input contains the timestamp for an instance of resource
utilization, then a tab, and customer-related data: customer ID, resource ID, and
resource unit price. Write a MapReduce job that will create, for each customer, a
summary of resource utilization by the hour, and output the result into a text file.
Sample input format:
Wed Jan 5 11:07:00 CST 2011 (Tab) Cust89347281 Res382915 $0.0035
Generate test data for the exercise 4 above. In keeping with the general Hadoop
philosophy, manual testing is not enough. Write an MR task to generate arbitrary
amount of random test data from pre-defined small invoice, then run your answer
to the exercise 4 and see if you get the results you started out with.
</para>
</section>
<section>
<title>Lab 5: Deduplication (Advanced)</title>
<para>Often input files contain records that are the same. This may happen in web
crawling, when individual crawlers may come to the same URL. Write a
MapReduce task that will "dedupe" the records, and output each of the same
records only once. Hint: in the Map stage compute the MD5 or SHA-1 hash of the
record and output this as a key for the Reducer. In the reduce stage (since the
records are sorted by keys) output only one of the records that are given to the
Reducer with the same key value.
</para>
</section>
<section>
<title>Lab 6: Write a Distributed Grep</title>
<para>This lab is taken straight out of Google initial MapReduce paper.
</para>
</section>
</section>
-->
<!--
<section>
<title>Exercises</title>
<para>This chapter, as well as all succeeding ones, contains exercises building on the
material in the chapter. Doing the labs with the help of the instructions in this must
have been useful, but going solo is a special event in the life of every pilot, and we,
the programmers, should imitate the best. Therefore, here are suggested exercises
for your own practice and enjoyment.
</para>
<section>
<title>Exercise 1</title>
<para>Check out the Hadoop and HDFS project code from the Subversion repository on
Apache. The process of doing so is described here:
http://wiki.apache.org/hadoop/HowToContribute. Try to build
the project, read the code, apply a patch, and build again. There are a few benefit in
doing this. You will feel more comfortable with Hadoop by following the famous
advice "Read the code, stupid!" You will also have a feeling of what would be
involve if you want to submit a patch yourself.
</para>
</section>
<section>
<title>Exercise 2</title>
<para>Modify Lab 4 to output results to a relational database. Hint: using RDBMS directly
may be problematic because of the load, and the possibility of node failures, so use
the output to text files instead, and load the text files into the database on a separate
post-processing step.
</para>
</section>
</section>
-->
<!--
<section>
<title>Chapter Summary</title>
<para>In this chapter we were introduced to the MapReduce/Hadoop framework and
wrote our first Hadoop program, which can actually accomplish quite a lot. We got
a first look at when Hadoop can be useful. If you did the labs and exercises, you
can now safely state that you are an intermediate Hadoop programmer, which is no
small thing.
<para></para>
In the next chapter we will go a little deeper into sorting. There are situations
where more control over sorting is required. It will also give you a better feeling
for Hadoop internals, so that after reading it you will feel closer to a seasoned
veteran than to a novice. Still, we will try to make it a breeze, keeping in line with
the motto of this book, "Who said Hadoop is hard?".
</para>
</section>
-->
</chapter>
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <algorithm>
#include <istream>
#include <iterator>
#include <memory>
#include <queue>
#include <sstream>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>
#include "absl/container/node_hash_map.h"
#include "mediapipe/calculators/util/top_k_scores_calculator.pb.h"
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/formats/classification.pb.h"
#include "mediapipe/framework/port/ret_check.h"
#include "mediapipe/framework/port/status.h"
#include "mediapipe/framework/port/statusor.h"
#include "mediapipe/util/resource_util.h"
#if defined(MEDIAPIPE_MOBILE)
#include "mediapipe/util/android/file/base/file.h"
#include "mediapipe/util/android/file/base/helpers.h"
#else
#include "mediapipe/framework/port/file_helpers.h"
#endif
namespace mediapipe {
// A calculator that takes a vector of scores and returns the indexes, scores,
// and labels of the top k elements, classification protos, and a summary string
// (in csv format).
//
// Usage example:
// node {
// calculator: "TopKScoresCalculator"
// input_stream: "SCORES:score_vector"
// output_stream: "TOP_K_INDEXES:top_k_indexes"
// output_stream: "TOP_K_SCORES:top_k_scores"
// output_stream: "TOP_K_LABELS:top_k_labels"
// output_stream: "TOP_K_CLASSIFICATIONS:top_k_classes"
// output_stream: "SUMMARY:summary"
// options: {
// [mediapipe.TopKScoresCalculatorOptions.ext] {
// top_k: 5
// threshold: 0.1
// label_map_path: "/path/to/label/map"
// }
// }
// }
class TopKScoresCalculator : public CalculatorBase {
public:
static ::mediapipe::Status GetContract(CalculatorContract* cc);
::mediapipe::Status Open(CalculatorContext* cc) override;
::mediapipe::Status Process(CalculatorContext* cc) override;
private:
::mediapipe::Status LoadLabelmap(std::string label_map_path);
int top_k_ = -1;
float threshold_ = 0.0;
absl::node_hash_map<int, std::string> label_map_;
bool label_map_loaded_ = false;
};
REGISTER_CALCULATOR(TopKScoresCalculator);
::mediapipe::Status TopKScoresCalculator::GetContract(CalculatorContract* cc) {
RET_CHECK(cc->Inputs().HasTag("SCORES"));
cc->Inputs().Tag("SCORES").Set<std::vector<float>>();
if (cc->Outputs().HasTag("TOP_K_INDEXES")) {
cc->Outputs().Tag("TOP_K_INDEXES").Set<std::vector<int>>();
}
if (cc->Outputs().HasTag("TOP_K_SCORES")) {
cc->Outputs().Tag("TOP_K_SCORES").Set<std::vector<float>>();
}
if (cc->Outputs().HasTag("TOP_K_LABELS")) {
cc->Outputs().Tag("TOP_K_LABELS").Set<std::vector<std::string>>();
}
if (cc->Outputs().HasTag("CLASSIFICATIONS")) {
cc->Outputs().Tag("CLASSIFICATIONS").Set<ClassificationList>();
}
if (cc->Outputs().HasTag("SUMMARY")) {
cc->Outputs().Tag("SUMMARY").Set<std::string>();
}
return ::mediapipe::OkStatus();
}
::mediapipe::Status TopKScoresCalculator::Open(CalculatorContext* cc) {
const auto& options = cc->Options<::mediapipe::TopKScoresCalculatorOptions>();
RET_CHECK(options.has_top_k() || options.has_threshold())
<< "Must specify at least one of the top_k and threshold fields in "
"TopKScoresCalculatorOptions.";
if (options.has_top_k()) {
RET_CHECK(options.top_k() > 0) << "top_k must be greater than zero.";
top_k_ = options.top_k();
}
if (options.has_threshold()) {
threshold_ = options.threshold();
}
if (options.has_label_map_path()) {
MP_RETURN_IF_ERROR(LoadLabelmap(options.label_map_path()));
}
if (cc->Outputs().HasTag("TOP_K_LABELS")) {
RET_CHECK(!label_map_.empty());
}
return ::mediapipe::OkStatus();
}
::mediapipe::Status TopKScoresCalculator::Process(CalculatorContext* cc) {
const std::vector<float>& input_vector =
cc->Inputs().Tag("SCORES").Get<std::vector<float>>();
std::vector<int> top_k_indexes;
std::vector<float> top_k_scores;
std::vector<std::string> top_k_labels;
if (top_k_ > 0) {
top_k_indexes.reserve(top_k_);
top_k_scores.reserve(top_k_);
top_k_labels.reserve(top_k_);
}
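// Min-heap of (score, index) pairs: the smallest retained score stays on
// top, so it can be evicted in O(log k) whenever a larger score arrives.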
std::priority_queue<std::pair<float, int>, std::vector<std::pair<float, int>>,
std::greater<std::pair<float, int>>>
pq;
for (int i = 0; i < input_vector.size(); ++i) {
if (input_vector[i] < threshold_) {
continue;
}
if (top_k_ > 0) {
if (pq.size() < top_k_) {
pq.push(std::pair<float, int>(input_vector[i], i));
} else if (pq.top().first < input_vector[i]) {
pq.pop();
pq.push(std::pair<float, int>(input_vector[i], i));
}
} else {
pq.push(std::pair<float, int>(input_vector[i], i));
}
}
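// Drain the heap; it yields scores in ascending order, so the vectors are
// reversed below to report the results from highest to lowest score.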
while (!pq.empty()) {
top_k_indexes.push_back(pq.top().second);
top_k_scores.push_back(pq.top().first);
pq.pop();
}
std::reverse(top_k_indexes.begin(), top_k_indexes.end());
std::reverse(top_k_scores.begin(), top_k_scores.end());
if (label_map_loaded_) {
for (int index : top_k_indexes) {
top_k_labels.push_back(label_map_[index]);
}
}
if (cc->Outputs().HasTag("TOP_K_INDEXES")) {
cc->Outputs()
.Tag("TOP_K_INDEXES")
.AddPacket(MakePacket<std::vector<int>>(top_k_indexes)
.At(cc->InputTimestamp()));
}
if (cc->Outputs().HasTag("TOP_K_SCORES")) {
cc->Outputs()
.Tag("TOP_K_SCORES")
.AddPacket(MakePacket<std::vector<float>>(top_k_scores)
.At(cc->InputTimestamp()));
}
if (cc->Outputs().HasTag("TOP_K_LABELS")) {
cc->Outputs()
.Tag("TOP_K_LABELS")
.AddPacket(MakePacket<std::vector<std::string>>(top_k_labels)
.At(cc->InputTimestamp()));
}
if (cc->Outputs().HasTag("SUMMARY")) {
std::vector<std::string> results;
for (int index = 0; index < top_k_indexes.size(); ++index) {
if (label_map_loaded_) {
results.push_back(
absl::StrCat(top_k_labels[index], ":", top_k_scores[index]));
} else {
results.push_back(
absl::StrCat(top_k_indexes[index], ":", top_k_scores[index]));
}
}
cc->Outputs().Tag("SUMMARY").AddPacket(
MakePacket<std::string>(absl::StrJoin(results, ","))
.At(cc->InputTimestamp()));
}
if (cc->Outputs().HasTag("TOP_K_CLASSIFICATION")) {
auto classification_list = absl::make_unique<ClassificationList>();
for (int index = 0; index < top_k_indexes.size(); ++index) {
Classification* classification =
classification_list->add_classification();
classification->set_index(top_k_indexes[index]);
classification->set_score(top_k_scores[index]);
if (label_map_loaded_) {
classification->set_label(top_k_labels[index]);
}
}
// Emit the assembled ClassificationList; without this the protos built
// above would be silently dropped.
cc->Outputs()
    .Tag("TOP_K_CLASSIFICATIONS")
    .Add(classification_list.release(), cc->InputTimestamp());
}
return ::mediapipe::OkStatus();
}
::mediapipe::Status TopKScoresCalculator::LoadLabelmap(
std::string label_map_path) {
std::string string_path;
ASSIGN_OR_RETURN(string_path, PathToResourceAsFile(label_map_path));
std::string label_map_string;
MP_RETURN_IF_ERROR(file::GetContents(string_path, &label_map_string));
std::istringstream stream(label_map_string);
std::string line;
int i = 0;
while (std::getline(stream, line)) {
label_map_[i++] = line;
}
label_map_loaded_ = true;
return ::mediapipe::OkStatus();
}
} // namespace mediapipe
{
"policies": {
"WebsiteFilter": {
"Block": [
"*://*.mozilla.org/*",
"invalid_pattern"
],
"Exceptions": [
"*://*.mozilla.org/*about*"
]
}
}
}
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="CompilerConfiguration">
<resourceExtensions />
<wildcardResourcePatterns>
<entry name="!?*.java" />
<entry name="!?*.form" />
<entry name="!?*.class" />
<entry name="!?*.groovy" />
<entry name="!?*.scala" />
<entry name="!?*.flex" />
<entry name="!?*.kt" />
<entry name="!?*.clj" />
<entry name="!?*.aj" />
</wildcardResourcePatterns>
<annotationProcessing>
<profile default="true" name="Default" enabled="false">
<processorPath useClasspath="true" />
</profile>
</annotationProcessing>
</component>
</project>
/*global document window*/
(function($) {
'use strict';
$.fn.extend({
LoaderAnimation: function(customOptions) {
var defaults = {
lineWidth: 20,
/* set preloader's line width */
color: "#ffffff",
/* set preloader color */
glowColor: "#00aeff",
/* set shadow color */
radius: 40,
/* set the preloader radius (JUST FOR CIRCULAR PRELOADER) */
font: "normal 14px Arial",
/* set preloader font (you can embed a font by css and use it here) */
onComplete: null /* on Animation completed */
},
$container = $(this),
// merging the custom options with the default ones
options = $.extend(defaults, customOptions),
self = this;
/*
*
* PUBLIC VAR
* Configuration
*
*/
var lineWidth = options.lineWidth,
color = options.color,
glowColor = options.glowColor,
radius = options.radius,
font = options.font;
this.currentPercentage = 0;
/*
*
* PRIVATE VAR
*
*/
var $window = $(window),
PI = Math.PI,
startAngle = 1.5 * PI,
endAngle = 0,
supportsCanvas = !!document.createElement('canvas').getContext,
canvasWidth = $(window).width(),
canvasHeight = $(window).height(),
$canvas, $fallbackHtml, ctx;
/*
*
* PRIVATE METHODS
*
*/
/*
*
* Used as fallback for the old browsers
*
*
*/
var fallback = function() {
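// "| 0" truncates the animated float to an integer percentage.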
$fallbackHtml.text((self.currentPercentage | 0) + "%");
};
/*
*
* Clear the canvas during each frame of the animation
*
*
*/
var clear = function() {
if (supportsCanvas)
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
return true;
};
/*
*
* Draw on the canvas the animation
*
*
*/
var draw = function() {
var alphaPercentage = (2 / 100) * self.currentPercentage,
positionX = canvasWidth / 2,
positionY = canvasHeight / 2;
// calculating end angle of preloader
endAngle = (alphaPercentage * PI) + startAngle;
clear();
ctx.restore();
//let's start drawning
ctx.beginPath();
//draw percentage text
ctx.font = font;
ctx.fillStyle = color;
ctx.textAlign = "center";
ctx.textBaseline = "middle";
ctx.fillText((self.currentPercentage | 0) + "%", positionX, positionY);
//width of the preloader line
ctx.lineWidth = lineWidth;
//color of preloader line
ctx.strokeStyle = color;
if (glowColor) {
ctx.shadowOffsetX = 0;
ctx.shadowOffsetY = 0;
ctx.shadowBlur = 20;
ctx.shadowColor = glowColor;
}
ctx.arc(positionX, positionY, radius, startAngle, endAngle, false);
ctx.stroke();
ctx.save();
};
/*
*
* Check if the percentage is equal to 100% to remove the preloader
*
*
*/
var onAnimationEnd = function() {
if (self.currentPercentage === 100) {
$container.delay(1000).fadeOut(function() {
$container.remove();
if (typeof options.onComplete === "function")
options.onComplete();
});
$window.off("resize.preloader");
}
};
/*
*
* Center the canvas on window resize
*
*
*/
var centerLoader = function() {
canvasWidth = $(window).width();
canvasHeight = $(window).height();
if (supportsCanvas) {
$canvas[0].width = canvasWidth;
$canvas[0].height = canvasHeight;
}
$container.width(canvasWidth);
$container.height(canvasHeight);
};
/*
*
* PUBLIC METHODS
*
*/
self.init = function() {
if (supportsCanvas) {
$canvas = $("<canvas>");
$container.append($canvas);
ctx = $canvas[0].getContext('2d');
} else {
$fallbackHtml = $("<i class='fallback'></i>");
$container.append($fallbackHtml);
}
centerLoader();
$window.on("resize.preloader", centerLoader);
};
self.update = function(percentage) {
$.Animation(self, {
currentPercentage: percentage
}, {
duration: 3000
})
.stop(true, false)
.progress(function() {
if (supportsCanvas)
draw();
else
fallback();
})
.done(onAnimationEnd);
};
this.init();
return this;
}
});
})(jQuery);
/**
* SPDX-License-Identifier: (MIT OR CECILL-C)
*
* Copyright (C) 2006-2019 INRIA and contributors
*
* Spoon is available either under the terms of the MIT License (see LICENSE-MIT.txt) or the Cecill-C License (see LICENSE-CECILL-C.txt). You as the user are entitled to choose the terms under which to adopt Spoon.
*/
package spoon.support.sniper.internal;
/**
* A default dumb implementation of {@link SourceFragmentPrinter}, which only prints the given PrinterEvent.
*/
public class DefaultSourceFragmentPrinter implements SourceFragmentPrinter {
public static final DefaultSourceFragmentPrinter INSTANCE = new DefaultSourceFragmentPrinter();
private DefaultSourceFragmentPrinter() {
}
@Override
public void onPush() {
}
@Override
public void print(PrinterEvent event) {
event.printSourceFragment(null, ModificationStatus.UNKNOWN);
}
@Override
public int update(PrinterEvent event) {
return -1;
}
@Override
public void onFinished() {
}
@Override
public boolean knowsHowToPrint(PrinterEvent event) {
return true;
}
}
// Copyright (C) Pash Contributors. License GPL/BSD. See https://github.com/Pash-Project/Pash/
using System;
using System.Collections.Generic;
using System.Text;
namespace System.Management.Automation.Runspaces
{
public sealed class SessionStateAssemblyEntry : InitialSessionStateEntry
{
private string fileName;
public string FileName
{
get
{
return this.fileName;
}
}
public SessionStateAssemblyEntry(string name, string fileName)
: base(name)
{
this.fileName = fileName;
}
public SessionStateAssemblyEntry(string name)
: base(name)
{
}
public override InitialSessionStateEntry Clone()
{
throw new NotImplementedException();
}
}
}
/** @file update.c
*
* Client of the update service provider (i.e., UpdateDaemon) for installing/removing apps and
* installing firmware. This client receives an update package via STDIN and sequentially calls
* the update APIs to perform a successful update. It follows the steps described in the
* le_update.api documentation. A callback is implemented here to receive progress reports for
* the ongoing update process.
*
* Copyright (C) Sierra Wireless Inc.
*/
#include "legato.h"
#include "limit.h"
#include "interfaces.h"
//--------------------------------------------------------------------------------------------------
/**
* true = -f or --force was specified on the command-line.
*/
//--------------------------------------------------------------------------------------------------
static bool Force = false;
//--------------------------------------------------------------------------------------------------
/**
* true = -r or --remove was specified on the command-line.
*/
//--------------------------------------------------------------------------------------------------
static bool DoRemove = false;
//--------------------------------------------------------------------------------------------------
/**
* Set to true in an option parsing callback if the option should cause the update or removal work
* to be skipped.
*/
//--------------------------------------------------------------------------------------------------
static bool Done = false;
//--------------------------------------------------------------------------------------------------
/**
* Positional command-line argument.
*/
//--------------------------------------------------------------------------------------------------
static const char* ArgPtr = "-";
//--------------------------------------------------------------------------------------------------
/**
* Prints help to stdout.
*/
//--------------------------------------------------------------------------------------------------
static void PrintHelp
(
void
)
{
puts(
"NAME:\n"
" update - install/remove utility for legato.\n"
"\n"
"SYNOPSIS:\n"
" update --help\n"
" update [FILE_NAME]\n"
" update --remove APP_NAME\n"
" update --mark-good\n"
" update --mark-bad\n"
" update --defer\n"
"\n"
"DESCRIPTION:\n"
" update --help\n"
" Display this help and exit.\n"
"\n"
" update [FILE_NAME]\n"
" Command takes an update file, decodes the manifest, and takes appropriate action.\n"
" If no file name or the file name '-' is given, input is taken from the standard\n"
" input stream (stdin).\n"
"\n"
" update --remove APP_NAME\n"
" update -r APP_NAME\n"
" Removes an app from the device.\n"
"\n"
" update --mark-good\n"
" update -g\n"
" Ends the new system probation period and marks the current system good.\n"
" Ignored if the current system is already marked good."
"\n"
" update --mark-bad\n"
" update -b\n"
" Marks the current system bad and reboots to rollback to the previous good system.\n"
" The command has no effect if the current system has already been marked good.\n"
" The restart waits for any deferral that is in effect.\n"
"\n"
" update --defer\n"
" update -d\n"
" Command causes all updates to be deferred as long as the program is left running.\n"
" To release the deferral use Ctrl-C or kill to exit this command.\n"
" More than one deferral can be in effect at any time. All of them must be cleared\n"
" before an update can take place.\n"
);
exit(EXIT_SUCCESS);
}
//--------------------------------------------------------------------------------------------------
/**
* Function that gets called when --force or -f appear on the command-line.
*/
//--------------------------------------------------------------------------------------------------
static void SetForce
(
void
)
{
Force = true;
}
//--------------------------------------------------------------------------------------------------
/**
* Function that gets called when --remove or -r appear on the command-line.
*/
//--------------------------------------------------------------------------------------------------
static void RemoveSelected
(
void
)
{
if (DoRemove)
{
fprintf(stderr, "--remove or -r specified more than once.\n");
exit(EXIT_FAILURE);
}
DoRemove = true;
}
//--------------------------------------------------------------------------------------------------
/**
* Function that gets called when --mark-good or -g appear on the command-line.
*/
//--------------------------------------------------------------------------------------------------
static void MarkGood
(
void
)
{
le_updateCtrl_ConnectService();
le_result_t result = le_updateCtrl_MarkGood(Force);
switch (result)
{
case LE_OK:
printf("System is now marked 'Good'.\n");
exit(EXIT_SUCCESS);
break;
case LE_BUSY:
fprintf(stderr, "**ERROR: One or more processes are holding probation locks - check logs.\n");
fprintf(stderr, "Use -f (or --force) option to override.\n");
exit(EXIT_FAILURE);
break;
case LE_DUPLICATE:
fprintf(stderr, "**ERROR: The probation period has already ended. Nothing to do.\n");
exit(EXIT_FAILURE);
break;
default:
fprintf(stderr, "**ERROR: Unknown return code from le_updateCtrl_MarkGood().\n");
exit(EXIT_FAILURE);
}
}
//--------------------------------------------------------------------------------------------------
/**
* Function that gets called when --mark-bad or -b appear on the command-line.
*/
//--------------------------------------------------------------------------------------------------
static void MarkBad
(
void
)
{
le_updateCtrl_ConnectService();
le_updateCtrl_FailProbation();
exit(EXIT_SUCCESS);
}
//--------------------------------------------------------------------------------------------------
/**
* Function that gets called when we get SIGINT (generally user hits Ctrl-C) so we can release
* our Defer before we die.
*/
//--------------------------------------------------------------------------------------------------
static void EndDeferral
(
int sigNum
)
{
le_updateCtrl_Allow();
exit(EXIT_SUCCESS);
}
//--------------------------------------------------------------------------------------------------
/**
* Function that gets called when --defer or -d appear on the command-line.
*/
//--------------------------------------------------------------------------------------------------
static void StartDeferral
(
void
)
{
le_updateCtrl_ConnectService();
// Setup the signal event handler before we do Defer. This way, even if we get signalled
// before Defer gets done we won't deal with the signal until the next time round the
// event loop - so our Defer and Allow count will match by the time we exit.
le_sig_Block(SIGINT);
le_sig_SetEventHandler(SIGINT, EndDeferral);
le_sig_Block(SIGTERM);
le_sig_SetEventHandler(SIGTERM, EndDeferral);
le_updateCtrl_Defer();
// Our work is done here. Go wait on the event loop until someone SIGINTs or kills us.
Done = true;
}
//--------------------------------------------------------------------------------------------------
/**
* Gets the file descriptor for the input stream. The input file may be supplied either via STDIN
* or as a command-line parameter. If no parameter is given, this function waits for data on STDIN.
*
* @return
* - File descriptor pointing to update file.
*/
//--------------------------------------------------------------------------------------------------
static int GetUpdateFile
(
const char* filePathPtr ///< The file path, or "-" for stdin.
)
{
int fileDescriptor;
if (strcmp(filePathPtr, "-") == 0)
{
fileDescriptor = 0;
}
else
{
fileDescriptor = open(filePathPtr, O_RDONLY);
if (fileDescriptor == -1)
{
fprintf(stderr,
"Can't open file '%s': errno %d (%m)\n",
filePathPtr,
errno);
exit(EXIT_FAILURE);
}
}
return fileDescriptor;
}
//--------------------------------------------------------------------------------------------------
/**
* Processes a positional argument from the command line.
**/
//--------------------------------------------------------------------------------------------------
static void HandlePositionalArg
(
const char* argPtr
)
{
ArgPtr = argPtr;
}
//--------------------------------------------------------------------------------------------------
/**
* This function is used for printing progress bar.
*/
//--------------------------------------------------------------------------------------------------
static void PrintProgressBar
(
uint percentDone, ///<[IN] Percent done (for current state) of update task underway.
const char* progMsg ///<[IN] Message to print with progress bar
)
{
static const int ProgressStringBytes = 256;
static const int ProgressBarLen = 50;
static uint lastPercentDone = 0;
static const char* lastProgMsg = NULL;
if ((percentDone < lastPercentDone) || ((lastProgMsg != NULL) && (lastProgMsg != progMsg)))
{
fprintf(stdout, "\n");
}
lastPercentDone = percentDone;
lastProgMsg = progMsg;
int progressCharCnt = percentDone/2;
char progressStr[ProgressStringBytes];
char tempStr[ProgressBarLen+1];
if (percentDone > 100)
{
LE_ERROR("Unexpected percentDone value: %d!!", percentDone);
return;
}
// Change last character to zero.
tempStr[ProgressBarLen] = 0;
// Reset all characters to zero.
memset(progressStr, 0, ProgressStringBytes);
// Forming progress string.
memset(tempStr, ' ', ProgressBarLen);
memset(tempStr, '+', progressCharCnt);
snprintf(progressStr, sizeof(progressStr), "%s: %3d%% %s",
progMsg,
percentDone,
tempStr);
// Print the progress string. CR (i.e. \r) is used since the same line is overwritten repeatedly.
fprintf(stdout, "%s\r", progressStr);
fflush(stdout);
}
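// Example output (illustrative, at 42% done; the 50-column bar is redrawn in place):
//
//     Unpacking package:  42% +++++++++++++++++++++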
//--------------------------------------------------------------------------------------------------
/**
* Print message according to the error code.
*/
//--------------------------------------------------------------------------------------------------
static void PrintErrorMsg
(
void
)
{
switch (le_update_GetErrorCode())
{
case LE_UPDATE_ERR_NONE:
fprintf(stderr, "\n***Error: Unexpected error code: NONE\n");
return;
case LE_UPDATE_ERR_BAD_PACKAGE:
fprintf(stderr, "\n***Error: Received bad update package. See log for details.\n");
return;
case LE_UPDATE_ERR_SECURITY_FAILURE:
fprintf(stderr, "\n***Error: Security check failure. See log for details.\n");
return;
case LE_UPDATE_ERR_INTERNAL_ERROR:
fprintf(stderr, "\n***Error: Internal error during update. See log for details.\n");
return;
}
fprintf(stderr, "\n***Error: Unexpected error code: %d.\n", le_update_GetErrorCode());
}
//--------------------------------------------------------------------------------------------------
/**
* Callback function registered with the update service provider (UpdateDaemon) to get status
* information for the ongoing update task.
*/
//--------------------------------------------------------------------------------------------------
static void UpdateProgressHandler
(
le_update_State_t updateState, ///< Current State of ongoing update task in Update State
///< machine.
uint percentDone, ///< Percent done for current state. As example: at state
///< LE_UPDATE_STATE_UNPACKING, percentDone=80 means,
///< 80% of the update file data is already transferred to
///< unpack process.
void* contextPtr ///< Context pointer.
)
{
switch(updateState)
{
case LE_UPDATE_STATE_UNPACKING:
// Print progress bar if there is any noticeable progress.
PrintProgressBar(percentDone, "Unpacking package");
break;
case LE_UPDATE_STATE_DOWNLOAD_SUCCESS:
le_update_Install();
break;
case LE_UPDATE_STATE_APPLYING:
// Print progress bar if there is any noticeable progress.
PrintProgressBar(percentDone, "Applying update");
break;
case LE_UPDATE_STATE_SUCCESS:
//Successful completion.
printf("\nSUCCESS\n");
exit(EXIT_SUCCESS);
case LE_UPDATE_STATE_FAILED:
// Failure in update, exit with failure code.
PrintErrorMsg();
printf("\nFAILED\n");
exit(EXIT_FAILURE);
}
}
//--------------------------------------------------------------------------------------------------
/**
* Process an update pack.
*/
//--------------------------------------------------------------------------------------------------
static void Update
(
const char* filePathPtr ///< The file path, or "-" for stdin.
)
{
int fd = GetUpdateFile(filePathPtr);
le_update_ConnectService();
// Register for progress notifications.
le_update_AddProgressHandler(UpdateProgressHandler, NULL);
// Start the update process (asynchronous). Completion will be notified via the callback function.
le_result_t result = le_update_Start(fd);
switch (result)
{
case LE_BUSY:
fprintf(stderr, "**ERROR: Another update is currently in progress.\n");
break;
case LE_UNAVAILABLE:
fprintf(stderr, "**ERROR: Updates are currently deferred.\n");
break;
case LE_OK:
break;
default:
fprintf(stderr, "**ERROR: Unexpected result code from update server.\n");
break;
}
// Closing fd is unnecessary: the messaging infrastructure underneath the
// le_update_Start API takes ownership of it and will close it.
if (result != LE_OK)
{
exit(EXIT_FAILURE);
}
}
//--------------------------------------------------------------------------------------------------
/**
* Remove an application.
*/
//--------------------------------------------------------------------------------------------------
static void RemoveApp
(
const char* appNamePtr ///< The app name.
)
{
le_appRemove_ConnectService();
le_result_t result = le_appRemove_Remove(appNamePtr);
if (result == LE_OK)
{
exit(EXIT_SUCCESS);
}
else if (result == LE_BUSY)
{
fprintf(stderr, "Failed to remove app '%s'. System busy, check logs.\n", appNamePtr);
}
else if (result == LE_NOT_FOUND)
{
fprintf(stderr, "App '%s' is not installed\n", appNamePtr);
}
else
{
fprintf(stderr, "Failed to remove app '%s' (%s)\n", appNamePtr, LE_RESULT_TXT(result));
}
exit(EXIT_FAILURE);
}
COMPONENT_INIT
{
// update --help
le_arg_SetFlagCallback(PrintHelp, NULL, "help");
// The -f or --force option (used with --mark-good). Must be registered first.
le_arg_SetFlagCallback(SetForce, "f", "force");
// update --remove APP_NAME
le_arg_SetFlagCallback(RemoveSelected, "r", "remove");
// update --mark-good
le_arg_SetFlagCallback(MarkGood, "g", "mark-good");
// update --mark-bad
le_arg_SetFlagCallback(MarkBad, "b", "mark-bad");
// update --defer
le_arg_SetFlagCallback(StartDeferral, "d", "defer");
// update [FILE_NAME]
le_arg_AddPositionalCallback(HandlePositionalArg);
le_arg_AllowLessPositionalArgsThanCallbacks();
le_arg_Scan();
if (!Done)
{
// If --remove (or -r) was specified, then remove the app.
if (DoRemove)
{
if (ArgPtr == NULL)
{
fprintf(stderr, "No app name specified.\n");
exit(EXIT_FAILURE);
}
RemoveApp(ArgPtr);
}
// If --remove (or -r) was NOT specified, then process an update pack.
else
{
// If no file path was provided on the command line, default to "-" for standard in.
if (ArgPtr == NULL)
{
ArgPtr = "-";
}
Update(ArgPtr);
}
}
}
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
import { UserConfigurationStore } from 'background/stores/global/user-configuration-store';
import { Action } from 'common/flux/action';
import { UserConfigurationStoreData } from 'common/types/store-data/user-configuration-store';
import { Rectangle } from 'electron';
import { WindowFrameActionCreator } from 'electron/flux/action-creator/window-frame-action-creator';
import { WindowStateActionCreator } from 'electron/flux/action-creator/window-state-action-creator';
import { RoutePayload } from 'electron/flux/action/route-payloads';
import { WindowStateActions } from 'electron/flux/action/window-state-actions';
import { WindowStatePayload } from 'electron/flux/action/window-state-payload';
import { WindowState } from 'electron/flux/types/window-state';
import { IMock, Mock, MockBehavior, Times } from 'typemoq';
describe(WindowStateActionCreator, () => {
let windowStateActionsMock: IMock<WindowStateActions>;
let windowFrameActionCreatorMock: IMock<WindowFrameActionCreator>;
let userConfigurationStoreMock: IMock<UserConfigurationStore>;
let testSubject: WindowStateActionCreator;
beforeEach(() => {
windowStateActionsMock = Mock.ofType<WindowStateActions>();
windowFrameActionCreatorMock = Mock.ofType<WindowFrameActionCreator>(
WindowFrameActionCreator,
MockBehavior.Strict,
);
userConfigurationStoreMock = Mock.ofType<UserConfigurationStore>();
testSubject = new WindowStateActionCreator(
windowStateActionsMock.object,
windowFrameActionCreatorMock.object,
userConfigurationStoreMock.object,
);
});
it('calling setRoute invokes setRoute action with given payload', () => {
const setRouteActionMock = Mock.ofType<Action<RoutePayload>>();
const testPayload: RoutePayload = {
routeId: 'resultsView',
};
const userConfigStoreDataStub = {
lastWindowState: null,
lastWindowBounds: null,
} as UserConfigurationStoreData;
windowStateActionsMock
.setup(actions => actions.setRoute)
.returns(() => setRouteActionMock.object);
setRouteActionMock.setup(s => s.invoke(testPayload)).verifiable(Times.once());
userConfigurationStoreMock.setup(u => u.getState()).returns(() => userConfigStoreDataStub);
windowFrameActionCreatorMock.setup(w => w.maximize());
testSubject.setRoute(testPayload);
setRouteActionMock.verifyAll();
});
it('calling setRoute with deviceConnectView, invokes setWindowSize', () => {
const setRouteActionMock = Mock.ofType<Action<RoutePayload>>();
const testPayload: RoutePayload = {
routeId: 'deviceConnectView',
};
windowFrameActionCreatorMock
.setup(w => w.setWindowSize({ width: 600, height: 391 }))
.verifiable(Times.once());
windowStateActionsMock
.setup(actions => actions.setRoute)
.returns(() => setRouteActionMock.object);
setRouteActionMock.setup(s => s.invoke(testPayload)).verifiable(Times.once());
testSubject.setRoute(testPayload);
setRouteActionMock.verifyAll();
windowFrameActionCreatorMock.verifyAll();
});
it('calling setWindowState invokes setWindowState action', () => {
const setWindowStatePayload = Mock.ofType<Action<WindowStatePayload>>();
const testPayload: WindowStatePayload = {
currentWindowState: 'maximized',
};
windowStateActionsMock
.setup(actions => actions.setWindowState)
.returns(() => setWindowStatePayload.object);
setWindowStatePayload.setup(s => s.invoke(testPayload)).verifiable(Times.once());
testSubject.setWindowState(testPayload);
setWindowStatePayload.verifyAll();
windowFrameActionCreatorMock.verifyAll();
});
describe('calling setRoute with view other than deviceConnectView', () => {
const testPayload: RoutePayload = {
routeId: 'resultsView',
};
let setRouteActionMock;
beforeEach(() => {
setRouteActionMock = Mock.ofType<Action<RoutePayload>>();
setRouteActionMock.setup(s => s.invoke(testPayload)).verifiable(Times.once());
windowStateActionsMock
.setup(actions => actions.setRoute)
.returns(() => setRouteActionMock.object)
.verifiable(Times.once());
});
it.each(['normal', 'maximized', 'full-screen'])(
'sets window size correctly if lastWindowBounds is specified and windowState is %s',
lastWindowState => {
setRouteNonDeviceViewCore(lastWindowState as WindowState, {
x: 150,
y: 200,
height: 400,
width: 900,
});
},
);
it.each(['normal', 'maximized', 'full-screen'])(
'sets window size correctly if lastWindowBounds is null and windowState is %s',
lastWindowState => {
setRouteNonDeviceViewCore(lastWindowState as WindowState, null);
},
);
function setRouteNonDeviceViewCore(
lastWindowState: WindowState,
lastWindowBounds: Rectangle,
): void {
const userConfigStoreDataStub = ({
lastWindowState: lastWindowState,
lastWindowBounds: lastWindowBounds,
} as unknown) as UserConfigurationStoreData;
const shouldSetBounds: boolean = lastWindowBounds !== null;
const shouldMaximize: boolean = lastWindowState === 'maximized';
const shouldEnterFullScreen: boolean = lastWindowState === 'full-screen';
let callbackCount = 0;
userConfigurationStoreMock
.setup(u => u.getState())
.returns(() => userConfigStoreDataStub)
.verifiable(Times.once());
windowFrameActionCreatorMock
.setup(w => w.setWindowBounds(userConfigStoreDataStub.lastWindowBounds))
.callback(() => {
expect(callbackCount++).toBe(0);
})
.verifiable(shouldSetBounds ? Times.once() : Times.never());
windowFrameActionCreatorMock
.setup(x => x.maximize())
.callback(() => {
expect(callbackCount++).toBe(shouldSetBounds ? 1 : 0);
})
.verifiable(shouldMaximize ? Times.once() : Times.never());
windowFrameActionCreatorMock
.setup(x => x.enterFullScreen())
.callback(() => {
expect(callbackCount++).toBe(shouldSetBounds ? 1 : 0);
})
.verifiable(shouldEnterFullScreen ? Times.once() : Times.never());
testSubject.setRoute(testPayload);
setRouteActionMock.verifyAll();
windowFrameActionCreatorMock.verifyAll();
}
});
});
from django.db.backends.mysql.base import DatabaseOperations
from django.contrib.gis.db.backends.adapter import WKTAdapter
from django.contrib.gis.db.backends.base import BaseSpatialOperations
class MySQLOperations(DatabaseOperations, BaseSpatialOperations):
compiler_module = 'django.contrib.gis.db.backends.mysql.compiler'
mysql = True
name = 'mysql'
select = 'AsText(%s)'
from_wkb = 'GeomFromWKB'
from_text = 'GeomFromText'
Adapter = WKTAdapter
Adaptor = Adapter # Backwards-compatibility alias.
geometry_functions = {
'bbcontains' : 'MBRContains', # For consistency w/PostGIS API
'bboverlaps' : 'MBROverlaps', # .. ..
'contained' : 'MBRWithin', # .. ..
'contains' : 'MBRContains',
'disjoint' : 'MBRDisjoint',
'equals' : 'MBREqual',
'exact' : 'MBREqual',
'intersects' : 'MBRIntersects',
'overlaps' : 'MBROverlaps',
'same_as' : 'MBREqual',
'touches' : 'MBRTouches',
'within' : 'MBRWithin',
}
gis_terms = dict([(term, None) for term in geometry_functions.keys() + ['isnull']])
def geo_db_type(self, f):
return f.geom_type
def get_geom_placeholder(self, value, srid):
"""
The placeholder here has to include MySQL's WKT constructor. Because
MySQL does not support spatial transformations, there is no need to
modify the placeholder based on the contents of the given value.
"""
if hasattr(value, 'expression'):
placeholder = '%s.%s' % tuple(map(self.quote_name, value.cols[value.expression]))
else:
placeholder = '%s(%%s)' % self.from_text
return placeholder
def spatial_lookup_sql(self, lvalue, lookup_type, value, field, qn):
alias, col, db_type = lvalue
geo_col = '%s.%s' % (qn(alias), qn(col))
lookup_info = self.geometry_functions.get(lookup_type, False)
if lookup_info:
return "%s(%s, %s)" % (lookup_info, geo_col,
self.get_geom_placeholder(value, field.srid))
# TODO: Is this really necessary? MySQL can't handle NULL geometries
# in its spatial indexes anyway.
if lookup_type == 'isnull':
return "%s IS %sNULL" % (geo_col, (not value and 'NOT ' or ''))
raise TypeError("Got invalid lookup_type: %s" % repr(lookup_type))
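
# Illustrative example: a `contains` lookup on a geometry column compiles to
#   MBRContains(`app_model`.`geom`, GeomFromText(%s))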
# RUN: yaml2obj %p/Inputs/i386-slice.yaml > %t-i386.o
# RUN: yaml2obj %p/Inputs/CPU14-slice.yaml > %t-CPU14.o
# RUN: yaml2obj %p/Inputs/CPU10-slice.yaml > %t-CPU10.o
# RUN: llvm-lipo %t-i386.o %t-CPU14.o %t-CPU10.o -create -output %t-universal.o
# RUN: llvm-objdump %t-universal.o -m --universal-headers | FileCheck %s
# CHECK: fat_magic FAT_MAGIC
# CHECK: nfat_arch 3
# CHECK: architecture
# CHECK: cputype (10)
# CHECK: offset 72
# CHECK: align 2^3 (8)
# CHECK: architecture
# CHECK: cputype (14)
# CHECK: offset 8544
# CHECK: align 2^4 (16)
# CHECK: architecture i386
# CHECK: offset 12288
# CHECK: align 2^12 (4096)
Sequel.migration do
up{create_table(:sm3333){Integer :smc3}}
down{drop_table(:sm3333)}
end
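# Usage sketch: apply with Sequel's migrator, e.g.
#   sequel -m path/to/migrations postgres://localhost/mydb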
<?xml version="1.0"?>
<ZopeData>
<record id="1" aka="AAAAAAAAAAE=">
<pickle>
<global name="Category" module="erp5.portal_type"/>
</pickle>
<pickle>
<dictionary>
<item>
<key> <string>_count</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAI=</string> </persistent>
</value>
</item>
<item>
<key> <string>_mt_index</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAM=</string> </persistent>
</value>
</item>
<item>
<key> <string>_tree</string> </key>
<value>
<persistent> <string encoding="base64">AAAAAAAAAAQ=</string> </persistent>
</value>
</item>
<item>
<key> <string>categories</string> </key>
<value>
<tuple>
<string>gap/fr/m14/7/78/787/7875</string>
</tuple>
</value>
</item>
<item>
<key> <string>id</string> </key>
<value> <string>7875</string> </value>
</item>
<item>
<key> <string>portal_type</string> </key>
<value> <string>Category</string> </value>
</item>
<item>
<key> <string>title</string> </key>
<value> <string>Reprises sur provisions pour risques et charges exceptionnels</string> </value>
</item>
</dictionary>
</pickle>
</record>
<record id="2" aka="AAAAAAAAAAI=">
<pickle>
<global name="Length" module="BTrees.Length"/>
</pickle>
<pickle> <int>0</int> </pickle>
</record>
<record id="3" aka="AAAAAAAAAAM=">
<pickle>
<global name="OOBTree" module="BTrees.OOBTree"/>
</pickle>
<pickle>
<none/>
</pickle>
</record>
<record id="4" aka="AAAAAAAAAAQ=">
<pickle>
<global name="OOBTree" module="BTrees.OOBTree"/>
</pickle>
<pickle>
<none/>
</pickle>
</record>
</ZopeData>
//
// MagicalRecord+Setup.h
// Magical Record
//
// Created by Saul Mora on 3/7/12.
// Copyright (c) 2012 Magical Panda Software LLC. All rights reserved.
//
#import "MagicalRecordInternal.h"
#import "MagicalRecordXcode7CompatibilityMacros.h"
@interface MagicalRecord (Setup)
+ (void) setupCoreDataStack;
+ (void) setupCoreDataStackWithInMemoryStore;
+ (void) setupAutoMigratingCoreDataStack;
+ (void) setupCoreDataStackWithStoreNamed:(MR_nonnull NSString *)storeName;
+ (void) setupCoreDataStackWithAutoMigratingSqliteStoreNamed:(MR_nonnull NSString *)storeName;
+ (void) setupCoreDataStackWithStoreAtURL:(MR_nonnull NSURL *)storeURL;
+ (void) setupCoreDataStackWithAutoMigratingSqliteStoreAtURL:(MR_nonnull NSURL *)storeURL;
@end
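// Usage sketch (assumes setup happens once at app launch, e.g. in
// -application:didFinishLaunchingWithOptions:):
//
// [MagicalRecord setupCoreDataStackWithStoreNamed:@"MyApp.sqlite"];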
name: ext4_es_cache_extent
ID: 360
format:
field:unsigned short common_type; offset:0; size:2; signed:0;
field:unsigned char common_flags; offset:2; size:1; signed:0;
field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
field:int common_pid; offset:4; size:4; signed:1;
field:dev_t dev; offset:8; size:4; signed:0;
field:ino_t ino; offset:16; size:8; signed:0;
field:ext4_lblk_t lblk; offset:24; size:4; signed:0;
field:ext4_lblk_t len; offset:28; size:4; signed:0;
field:ext4_fsblk_t pblk; offset:32; size:8; signed:0;
field:char status; offset:40; size:1; signed:0;
print fmt: "dev %d,%d ino %lu es [%u/%u) mapped %llu status %s", ((unsigned int) ((REC->dev) >> 20)), ((unsigned int) ((REC->dev) & ((1U << 20) - 1))), (unsigned long) REC->ino, REC->lblk, REC->len, REC->pblk, __print_flags(REC->status, "", { (1 << ES_WRITTEN_B), "W" }, { (1 << ES_UNWRITTEN_B), "U" }, { (1 << ES_DELAYED_B), "D" }, { (1 << ES_HOLE_B), "H" })
//! Define the web service as a set of routes, resources, middlewares, serializers, ...
//!
//! [`ServiceBuilder`] combines all the various components (routes, resources,
//! middlewares, serializers, deserializers, catch handlers, ...) and turns it
//! into an HTTP service.
//!
//! [`ServiceBuilder`]: struct.ServiceBuilder.html
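//!
//! A minimal usage sketch (hypothetical resource type and address; the exact
//! builder methods may differ):
//!
//! ```ignore
//! ServiceBuilder::new()
//!     .resource(MyResource)
//!     .run(&addr)
//!     .unwrap();
//! ```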
mod builder;
mod new_service;
// TODO: Rename this `service`?
mod web;
pub use self::builder::ServiceBuilder;
pub use self::new_service::NewWebService;
pub use self::web::WebService;
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="omari.hamza.storyviewdemo">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
---
-api-id: T:Windows.Security.Authentication.Web.Core.FindAllAccountsResult
-api-type: winrt class
---
<!-- Class syntax.
public class FindAllAccountsResult
-->
# Windows.Security.Authentication.Web.Core.FindAllAccountsResult
## -description
This class represents the result of an account retrieval operation.
## -remarks
## -see-also
## -examples
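A minimal sketch, assuming `provider` is a `WebAccountProvider` previously resolved via `WebAuthenticationCoreManager.FindAccountProviderAsync`:

```csharp
FindAllAccountsResult result =
    await WebAuthenticationCoreManager.FindAllAccountsAsync(provider);

if (result.Status == FindAllWebAccountsStatus.Success)
{
    foreach (WebAccount account in result.Accounts)
    {
        System.Diagnostics.Debug.WriteLine(account.UserName);
    }
}
```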
from __future__ import absolute_import
from sentry.models import UserEmail, UserOption
from sentry.testutils import APITestCase
from django.core.urlresolvers import reverse
class UserNotificationFineTuningTest(APITestCase):
def setUp(self):
self.user = self.create_user(email="[email protected]")
self.org = self.create_organization(name="Org Name", owner=self.user)
self.org2 = self.create_organization(name="Another Org", owner=self.user)
self.team = self.create_team(name="Team Name", organization=self.org, members=[self.user])
self.project = self.create_project(
organization=self.org, teams=[self.team], name="Project Name"
)
self.project2 = self.create_project(
organization=self.org, teams=[self.team], name="Another Name"
)
self.login_as(user=self.user)
def test_returns_correct_defaults(self):
UserOption.objects.create(user=self.user, project=self.project, key="mail:alert", value=1)
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "alerts"},
)
resp = self.client.get(url)
assert resp.data.get(self.project.id) == 1
UserOption.objects.create(
user=self.user, organization=self.org, key="deploy-emails", value=1
)
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "deploy"},
)
resp = self.client.get(url)
assert resp.data.get(self.org.id) == 1
UserOption.objects.create(
user=self.user,
organization=None,
key="reports:disabled-organizations",
value=[self.org.id],
)
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "reports"},
)
resp = self.client.get(url)
assert resp.data.get(self.org.id) == 0
def test_invalid_notification_type(self):
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "invalid"},
)
resp = self.client.get(url)
assert resp.status_code == 404
resp = self.client.put(url)
assert resp.status_code == 404
def test_update_invalid_project(self):
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "alerts"},
)
update = {}
update["123"] = 1
resp = self.client.put(url, data=update)
assert resp.status_code == 403
def test_invalid_id_value(self):
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "alerts"},
)
resp = self.client.put(url, data={"nope": 1})
assert resp.status_code == 400
def test_saves_and_returns_alerts(self):
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "alerts"},
)
update = {}
update[self.project.id] = 1
update[self.project2.id] = 2
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert (
UserOption.objects.get(user=self.user, project=self.project, key="mail:alert").value
== 1
)
assert (
UserOption.objects.get(user=self.user, project=self.project2, key="mail:alert").value
== 2
)
update = {}
update[self.project.id] = -1
# Can return to default
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert not UserOption.objects.filter(
user=self.user, project=self.project, key="mail:alert"
).exists()
assert (
UserOption.objects.get(user=self.user, project=self.project2, key="mail:alert").value
== 2
)
def test_saves_and_returns_workflow(self):
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "workflow"},
)
update = {}
update[self.project.id] = 1
update[self.project2.id] = 2
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert (
UserOption.objects.get(
user=self.user, project=self.project, key="workflow:notifications"
).value
== "1"
)
assert (
UserOption.objects.get(
user=self.user, project=self.project2, key="workflow:notifications"
).value
== "2"
)
update = {}
update[self.project.id] = -1
# Can return to default
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert not UserOption.objects.filter(
user=self.user, project=self.project, key="workflow:notifications"
)
assert (
UserOption.objects.get(
user=self.user, project=self.project2, key="workflow:notifications"
).value
== "2"
)
def test_saves_and_returns_email_routing(self):
UserEmail.objects.create(user=self.user, email="[email protected]", is_verified=True).save()
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "email"},
)
update = {}
update[self.project.id] = "[email protected]"
update[self.project2.id] = "[email protected]"
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert (
UserOption.objects.get(user=self.user, project=self.project, key="mail:email").value
== "[email protected]"
)
assert (
UserOption.objects.get(user=self.user, project=self.project2, key="mail:email").value
== "[email protected]"
)
def test_email_routing_emails_must_be_verified(self):
UserEmail.objects.create(
user=self.user, email="[email protected]", is_verified=False
).save()
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "email"},
)
update = {}
update[self.project.id] = "[email protected]"
resp = self.client.put(url, data=update)
assert resp.status_code == 400
def test_email_routing_emails_must_be_valid(self):
new_user = self.create_user(email="[email protected]")
UserEmail.objects.create(user=new_user, email="[email protected]", is_verified=True).save()
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "email"},
)
update = {}
update[self.project2.id] = "[email protected]"
resp = self.client.put(url, data=update)
assert resp.status_code == 400
def test_saves_and_returns_deploy(self):
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "deploy"},
)
update = {}
update[self.org.id] = 0
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert (
UserOption.objects.get(
user=self.user, organization=self.org.id, key="deploy-emails"
).value
== "0"
)
update = {}
update[self.org.id] = 1
resp = self.client.put(url, data=update)
assert (
UserOption.objects.get(user=self.user, organization=self.org, key="deploy-emails").value
== "1"
)
update = {}
update[self.org.id] = -1
resp = self.client.put(url, data=update)
assert not UserOption.objects.filter(
user=self.user, organization=self.org, key="deploy-emails"
).exists()
def test_saves_and_returns_weekly_reports(self):
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "reports"},
)
update = {}
update[self.org.id] = 0
update[self.org2.id] = "0"
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert set(
UserOption.objects.get(user=self.user, key="reports:disabled-organizations").value
) == set([self.org.id, self.org2.id])
update = {}
update[self.org.id] = 1
resp = self.client.put(url, data=update)
assert set(
UserOption.objects.get(user=self.user, key="reports:disabled-organizations").value
) == set([self.org2.id])
update = {}
update[self.org.id] = 0
resp = self.client.put(url, data=update)
assert set(
UserOption.objects.get(user=self.user, key="reports:disabled-organizations").value
) == set([self.org.id, self.org2.id])
def test_enable_weekly_reports_from_default_setting(self):
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "reports"},
)
update = {}
update[self.org.id] = 1
update[self.org2.id] = "1"
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert set(
UserOption.objects.get(user=self.user, key="reports:disabled-organizations").value
) == set([])
# can disable
update = {}
update[self.org.id] = 0
resp = self.client.put(url, data=update)
assert set(
UserOption.objects.get(user=self.user, key="reports:disabled-organizations").value
) == set([self.org.id])
# re-enable
update = {}
update[self.org.id] = 1
resp = self.client.put(url, data=update)
assert set(
UserOption.objects.get(user=self.user, key="reports:disabled-organizations").value
) == set([])
def test_permissions(self):
new_user = self.create_user(email="[email protected]")
new_org = self.create_organization(name="New Org")
new_team = self.create_team(name="New Team", organization=new_org, members=[new_user])
new_project = self.create_project(
organization=new_org, teams=[new_team], name="New Project"
)
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "reports"},
)
update = {}
update[new_org.id] = 0
resp = self.client.put(url, data=update)
assert resp.status_code == 403
assert not UserOption.objects.filter(
user=self.user, organization=new_org, key="reports"
).exists()
url = reverse(
"sentry-api-0-user-notifications-fine-tuning",
kwargs={"user_id": "me", "notification_type": "alerts"},
)
update = {}
update[new_project.id] = 1
resp = self.client.put(url, data=update)
assert resp.status_code == 403
assert not UserOption.objects.filter(
user=self.user, project=new_project, key="mail:alert"
).exists()
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Copyright © NetworkDLS 2002, All rights reserved
//
// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
// PARTICULAR PURPOSE.
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
#ifndef _CCRC32_H
#define _CCRC32_H
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
void InitializeCRC32();
void PartialCRC(unsigned int *iCRC, const unsigned char *sData, size_t iDataLength);
unsigned long FullCRC(const unsigned char *sData, unsigned long ulDataLength);
unsigned int Reflect(unsigned int iReflect, const char cChar);
static unsigned int iTable[256]; // CRC lookup table array.
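// Usage sketch:
// InitializeCRC32(); // build the lookup table once
// unsigned long crc = FullCRC((const unsigned char *)"data", 4);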
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
#endif
<!DOCTYPE html>
<html>
<head>
<title>Api Documentation</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link type='text/css' rel='stylesheet' href='../../../apidoc/stylesheets/bundled/bootstrap.min.css'/>
<link type='text/css' rel='stylesheet' href='../../../apidoc/stylesheets/bundled/prettify.css'/>
<link type='text/css' rel='stylesheet' href='../../../apidoc/stylesheets/bundled/bootstrap-responsive.min.css'/>
<link type='text/css' rel='stylesheet' href='../../../apidoc/stylesheets/application.css'/>
<!-- IE6-8 support of HTML5 elements -->
<!--[if lt IE 9]>
<script src="//html5shim.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
</head>
<body>
<div class="container-fluid">
<div class="row-fluid">
<div id='container'>
<ul class='breadcrumb'>
<li>
<a href='../../../apidoc/v2.gl.html'>Foreman v2</a>
<span class='divider'>/</span>
</li>
<li>
<a href='../../../apidoc/v2/organizations.gl.html'>
Organizations
</a>
<span class='divider'>/</span>
</li>
<li class='active'>show</li>
<li class='pull-right'>
[ <a href="../../../apidoc/v2/organizations/show.ca.html">ca</a> | <a href="../../../apidoc/v2/organizations/show.cs_CZ.html">cs_CZ</a> | <a href="../../../apidoc/v2/organizations/show.de.html">de</a> | <a href="../../../apidoc/v2/organizations/show.en.html">en</a> | <a href="../../../apidoc/v2/organizations/show.en_GB.html">en_GB</a> | <a href="../../../apidoc/v2/organizations/show.es.html">es</a> | <a href="../../../apidoc/v2/organizations/show.fr.html">fr</a> | <b><a href="../../../apidoc/v2/organizations/show.gl.html">gl</a></b> | <a href="../../../apidoc/v2/organizations/show.it.html">it</a> | <a href="../../../apidoc/v2/organizations/show.ja.html">ja</a> | <a href="../../../apidoc/v2/organizations/show.ko.html">ko</a> | <a href="../../../apidoc/v2/organizations/show.nl_NL.html">nl_NL</a> | <a href="../../../apidoc/v2/organizations/show.pl.html">pl</a> | <a href="../../../apidoc/v2/organizations/show.pt_BR.html">pt_BR</a> | <a href="../../../apidoc/v2/organizations/show.ru.html">ru</a> | <a href="../../../apidoc/v2/organizations/show.sv_SE.html">sv_SE</a> | <a href="../../../apidoc/v2/organizations/show.zh_CN.html">zh_CN</a> | <a href="../../../apidoc/v2/organizations/show.zh_TW.html">zh_TW</a> ]
</li>
</ul>
<div class='page-header'>
<h1>
GET /api/organizations/:id
<br>
<small>Show an organization</small>
</h1>
</div>
<div>
<h2><span class="translation_missing" title="translation missing: gl.apipie.examples">Examples</span></h2>
<pre class="prettyprint">GET /api/organizations/447626445
200
{
"select_all_types": [],
"description": null,
"created_at": "2019-11-07 08:49:08 UTC",
"updated_at": "2019-11-07 08:49:08 UTC",
"ancestry": null,
"parent_id": null,
"parent_name": null,
"id": 447626445,
"name": "org224",
"title": "org224",
"users": [],
"smart_proxies": [],
"subnets": [],
"compute_resources": [],
"media": [],
"config_templates": [],
"ptables": [],
"provisioning_templates": [],
"domains": [],
"realms": [],
"environments": [],
"hostgroups": [],
"locations": [],
"hosts_count": 0,
"parameters": [
{
"priority": 10,
"created_at": "2019-11-07 08:49:08 UTC",
"updated_at": "2019-11-07 08:49:08 UTC",
"id": 767575239,
"name": "foo",
"parameter_type": "string",
"value": "*****"
}
]
}</pre>
<h2><span class="translation_missing" title="translation missing: gl.apipie.params">Params</span></h2>
<table class='table'>
<thead>
<tr>
<th><span class="translation_missing" title="translation missing: gl.apipie.param_name">Param Name</span></th>
<th><span class="translation_missing" title="translation missing: gl.apipie.description">Description</span></th>
</tr>
</thead>
<tbody>
<tr style='background-color:rgb(255,255,255);'>
<td>
<strong>location_id </strong><br>
<small>
<span class="translation_missing" title="translation missing: gl.apipie.optional">Optional</span>
</small>
</td>
<td>
<p>Set the current location context for the request</p>
<p><strong>Validations:</strong></p>
<ul>
<li>
<p>Must be a Integer</p>
</li>
</ul>
</td>
</tr>
<tr style='background-color:rgb(255,255,255);'>
<td>
<strong>organization_id </strong><br>
<small>
<span class="translation_missing" title="translation missing: gl.apipie.optional">Optional</span>
</small>
</td>
<td>
<p>Set the current organization context for the request</p>
<p><strong>Validations:</strong></p>
<ul>
<li>
<p>Must be a Integer</p>
</li>
</ul>
</td>
</tr>
<tr style='background-color:rgb(255,255,255);'>
<td>
<strong>show_hidden_parameters </strong><br>
<small>
<span class="translation_missing" title="translation missing: gl.apipie.optional">Optional</span>
</small>
</td>
<td>
<p>Display hidden parameter values</p>
<p><strong>Validations:</strong></p>
<ul>
<li>
<p>Must be one of: <code>true</code>, <code>false</code>, <code>1</code>, <code>0</code>.</p>
</li>
</ul>
</td>
</tr>
<tr style='background-color:rgb(255,255,255);'>
<td>
<strong>id </strong><br>
<small>
<span class="translation_missing" title="translation missing: gl.apipie.required">Required</span>
</small>
</td>
<td>
<p><strong>Validations:</strong></p>
<ul>
<li>
<p>Must be an identifier, string from 1 to 128 characters containing only alphanumeric characters, space, underscore(_), hypen(-) with no leading or trailing space.</p>
</li>
</ul>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
<hr>
<footer></footer>
</div>
<script type='text/javascript' src='../../../apidoc/javascripts/bundled/jquery.js'></script>
<script type='text/javascript' src='../../../apidoc/javascripts/bundled/bootstrap-collapse.js'></script>
<script type='text/javascript' src='../../../apidoc/javascripts/bundled/prettify.js'></script>
<script type='text/javascript' src='../../../apidoc/javascripts/apipie.js'></script>
</body>
</html>
<?xml version="1.0" encoding="UTF-8"?>
<handler-chains xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee javaee_web_services_1_2.xsd">
<handler-chain>
<handler>
<handler-name>Dummy</handler-name>
<handler-class>org.apache.cxf.systest.ws.policy.handler.DummyHandler</handler-class>
</handler>
</handler-chain>
</handler-chains>
/*
* WebMenuCallback.java
*
* Copyright (C) 2020 by RStudio, PBC
*
* Unless you have received this program directly from RStudio pursuant
* to the terms of a commercial license agreement with RStudio, then
* this program is licensed to you under the terms of version 3 of the
* GNU Affero General Public License. This program is distributed WITHOUT
* ANY EXPRESS OR IMPLIED WARRANTY, INCLUDING THOSE OF NON-INFRINGEMENT,
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Please refer to the
* AGPL (http://www.gnu.org/licenses/agpl-3.0.txt) for more details.
*
*/
package org.rstudio.core.client.command.impl;
import org.rstudio.core.client.StringUtil;
import org.rstudio.core.client.command.AppCommand;
import org.rstudio.core.client.command.AppMenuBar;
import org.rstudio.core.client.command.AppMenuItem;
import org.rstudio.core.client.command.MenuCallback;
import org.rstudio.core.client.dom.DomUtils;
import org.rstudio.core.client.dom.DomUtils.ElementPredicate;
import com.google.gwt.dom.client.Element;
import com.google.gwt.dom.client.Style;
import com.google.gwt.event.logical.shared.AttachEvent;
import java.util.Stack;
public class WebMenuCallback implements MenuCallback
{
public void beginMainMenu()
{
menuStack_.push(new AppMenuBar(false));
}
public void beginMenu(String label)
{
AppMenuBar newMenu = new AppMenuBar(true);
newMenu.setEscClosesAll(false);
// Adjust the z-index of the displayed sub-menu, so that it (and any
// adorning contents) are rendered in front of their parents.
final int depth = menuStack_.size();
newMenu.addAttachHandler(new AttachEvent.Handler()
{
@Override
public void onAttachOrDetach(AttachEvent event)
{
ElementPredicate callback = (Element el) -> {
return el.getParentElement().getTagName().toLowerCase().contentEquals("body");
};
Element popupEl = DomUtils.findParentElement(newMenu.getElement(), callback);
if (popupEl == null)
return;
Style style = DomUtils.getComputedStyles(popupEl);
int oldIndex = StringUtil.parseInt(style.getZIndex(), -1);
if (oldIndex == -1)
return;
int newIndex = oldIndex + depth;
popupEl.getStyle().setZIndex(newIndex);
}
});
label = AppMenuItem.replaceMnemonics(label, "");
head().addItem(label, newMenu);
menuStack_.push(newMenu);
}
public void addCommand(String commandId, AppCommand command)
{
head().addItem(command.createMenuItem(true));
}
public void addSeparator()
{
head().addSeparator();
}
public void endMenu()
{
menuStack_.pop();
}
public void endMainMenu()
{
result_ = menuStack_.pop();
}
public AppMenuBar getMenu()
{
return result_;
}
private AppMenuBar head()
{
return menuStack_.peek();
}
private final Stack<AppMenuBar> menuStack_ = new Stack<AppMenuBar>();
private AppMenuBar result_;
}
RUN: llvm-dwarfdump -debug-abbrev %p/Inputs/implicit-const-test.o | FileCheck %s
CHECK: DW_FORM_implicit_const -9223372036854775808
/*====================================================================*
- Copyright (C) 2001 Leptonica. All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
- 1. Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- 2. Redistributions in binary form must reproduce the above
- copyright notice, this list of conditions and the following
- disclaimer in the documentation and/or other materials
- provided with the distribution.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL ANY
- CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
- NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*====================================================================*/
/*!
* \file pixabasic.c
* <pre>
*
* Pixa creation, destruction, copying
* PIXA *pixaCreate()
* PIXA *pixaCreateFromPix()
* PIXA *pixaCreateFromBoxa()
* PIXA *pixaSplitPix()
* void pixaDestroy()
* PIXA *pixaCopy()
*
* Pixa addition
* l_int32 pixaAddPix()
* l_int32 pixaAddBox()
* static l_int32 pixaExtendArray()
* l_int32 pixaExtendArrayToSize()
*
* Pixa accessors
* l_int32 pixaGetCount()
* l_int32 pixaChangeRefcount()
* PIX *pixaGetPix()
* l_int32 pixaGetPixDimensions()
* BOXA *pixaGetBoxa()
* l_int32 pixaGetBoxaCount()
* BOX *pixaGetBox()
* l_int32 pixaGetBoxGeometry()
* l_int32 pixaSetBoxa()
* PIX **pixaGetPixArray()
* l_int32 pixaVerifyDepth()
* l_int32 pixaVerifyDimensions()
* l_int32 pixaIsFull()
* l_int32 pixaCountText()
* l_int32 pixaSetText()
* void ***pixaGetLinePtrs()
*
* Pixa output info
* l_int32 pixaWriteStreamInfo()
*
* Pixa array modifiers
* l_int32 pixaReplacePix()
* l_int32 pixaInsertPix()
* l_int32 pixaRemovePix()
* l_int32 pixaRemovePixAndSave()
* l_int32 pixaRemoveSelected()
* l_int32 pixaInitFull()
* l_int32 pixaClear()
*
* Pixa and Pixaa combination
* l_int32 pixaJoin()
* PIXA *pixaInterleave()
* l_int32 pixaaJoin()
*
* Pixaa creation, destruction
* PIXAA *pixaaCreate()
* PIXAA *pixaaCreateFromPixa()
* void pixaaDestroy()
*
* Pixaa addition
* l_int32 pixaaAddPixa()
* l_int32 pixaaExtendArray()
* l_int32 pixaaAddPix()
* l_int32 pixaaAddBox()
*
* Pixaa accessors
* l_int32 pixaaGetCount()
* PIXA *pixaaGetPixa()
* BOXA *pixaaGetBoxa()
* PIX *pixaaGetPix()
* l_int32 pixaaVerifyDepth()
* l_int32 pixaaVerifyDimensions()
* l_int32 pixaaIsFull()
*
* Pixaa array modifiers
* l_int32 pixaaInitFull()
* l_int32 pixaaReplacePixa()
* l_int32 pixaaClear()
* l_int32 pixaaTruncate()
*
* Pixa serialized I/O (requires png support)
* PIXA *pixaRead()
* PIXA *pixaReadStream()
* PIXA *pixaReadMem()
* l_int32 pixaWriteDebug()
* l_int32 pixaWrite()
* l_int32 pixaWriteStream()
* l_int32 pixaWriteMem()
* PIXA *pixaReadBoth()
*
* Pixaa serialized I/O (requires png support)
* PIXAA *pixaaReadFromFiles()
* PIXAA *pixaaRead()
* PIXAA *pixaaReadStream()
* PIXAA *pixaaReadMem()
* l_int32 pixaaWrite()
* l_int32 pixaaWriteStream()
* l_int32 pixaaWriteMem()
*
*
* Important note on reference counting:
* Reference counting for the Pixa is analogous to that for the Boxa.
* See pix.h for details. pixaCopy() provides three possible modes
* of copy. The basic rule is that however a Pixa is obtained
* (e.g., from pixaCreate*(), pixaCopy(), or a Pixaa accessor),
* it is necessary to call pixaDestroy() on it.
* </pre>
*/
#ifdef HAVE_CONFIG_H
#include "config_auto.h"
#endif /* HAVE_CONFIG_H */
#include <string.h>
#include "allheaders.h"
static const l_int32 INITIAL_PTR_ARRAYSIZE = 20; /* arbitrary default */
/* Static functions */
static l_int32 pixaExtendArray(PIXA *pixa);
/*---------------------------------------------------------------------*
* Pixa creation, destruction, copy *
*---------------------------------------------------------------------*/
/*!
* \brief pixaCreate()
*
* \param[in] n initial number of ptrs
* \return pixa, or NULL on error
*
* <pre>
* Notes:
* (1) This creates an empty pixa.
* </pre>
*/
PIXA *
pixaCreate(l_int32 n)
{
PIXA *pixa;
PROCNAME("pixaCreate");
if (n <= 0)
n = INITIAL_PTR_ARRAYSIZE;
pixa = (PIXA *)LEPT_CALLOC(1, sizeof(PIXA));
pixa->n = 0;
pixa->nalloc = n;
pixa->refcount = 1;
pixa->pix = (PIX **)LEPT_CALLOC(n, sizeof(PIX *));
pixa->boxa = boxaCreate(n);
if (!pixa->pix || !pixa->boxa) {
pixaDestroy(&pixa);
return (PIXA *)ERROR_PTR("pix or boxa not made", procName, NULL);
}
return pixa;
}
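
/* Usage sketch:
 *     PIXA *pixa = pixaCreate(0);     // 0 selects the default initial capacity
 *     pixaAddPix(pixa, pix, L_COPY);  // add a pix (copied)
 *     pixaDestroy(&pixa);             // frees the pixa and, at refcount 0, its pix
 */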
/*!
* \brief pixaCreateFromPix()
*
* \param[in] pixs with individual components on a lattice
* \param[in] n number of components
* \param[in] cellw width of each cell
* \param[in] cellh height of each cell
* \return pixa, or NULL on error
*
* <pre>
* Notes:
* (1) For bpp = 1, we truncate each retrieved pix to the ON
* pixels, which we assume for now start at (0,0)
* </pre>
*/
PIXA *
pixaCreateFromPix(PIX *pixs,
l_int32 n,
l_int32 cellw,
l_int32 cellh)
{
l_int32 w, h, d, nw, nh, i, j, index;
PIX *pix1, *pix2;
PIXA *pixa;
PROCNAME("pixaCreateFromPix");
if (!pixs)
return (PIXA *)ERROR_PTR("pixs not defined", procName, NULL);
if (n <= 0)
return (PIXA *)ERROR_PTR("n must be > 0", procName, NULL);
if ((pixa = pixaCreate(n)) == NULL)
return (PIXA *)ERROR_PTR("pixa not made", procName, NULL);
pixGetDimensions(pixs, &w, &h, &d);
if ((pix1 = pixCreate(cellw, cellh, d)) == NULL) {
pixaDestroy(&pixa);
return (PIXA *)ERROR_PTR("pix1 not made", procName, NULL);
}
nw = (w + cellw - 1) / cellw;
nh = (h + cellh - 1) / cellh;
for (i = 0, index = 0; i < nh; i++) {
for (j = 0; j < nw && index < n; j++, index++) {
pixRasterop(pix1, 0, 0, cellw, cellh, PIX_SRC, pixs,
j * cellw, i * cellh);
if (d == 1 && !pixClipToForeground(pix1, &pix2, NULL))
pixaAddPix(pixa, pix2, L_INSERT);
else
pixaAddPix(pixa, pix1, L_COPY);
}
}
pixDestroy(&pix1);
return pixa;
}
/*!
* \brief pixaCreateFromBoxa()
*
* \param[in] pixs
* \param[in] boxa
* \param[in] start first box to use
* \param[in] num number of boxes; use 0 to go to the end
* \param[out] pcropwarn [optional] TRUE if the boxa extent
* is larger than pixs.
* \return pixad, or NULL on error
*
* <pre>
* Notes:
* (1) This simply extracts from pixs the region corresponding to each
* box in the boxa. To extract all the regions, set both %start
* and %num to 0.
* (2) The 5th arg is optional. If the extent of the boxa exceeds the
* size of pixs, so that some boxes are either clipped
* or entirely outside the pix, a warning is returned as TRUE.
* (3) pixad will have only the properly clipped elements, and
* the internal boxa will be correct.
* </pre>
*/
PIXA *
pixaCreateFromBoxa(PIX *pixs,
BOXA *boxa,
l_int32 start,
l_int32 num,
l_int32 *pcropwarn)
{
l_int32 i, n, end, w, h, wbox, hbox, cropwarn;
BOX *box, *boxc;
PIX *pixd;
PIXA *pixad;
PROCNAME("pixaCreateFromBoxa");
if (!pixs)
return (PIXA *)ERROR_PTR("pixs not defined", procName, NULL);
if (!boxa)
return (PIXA *)ERROR_PTR("boxa not defined", procName, NULL);
if (num < 0)
return (PIXA *)ERROR_PTR("num must be >= 0", procName, NULL);
n = boxaGetCount(boxa);
end = (num == 0) ? n - 1 : L_MIN(start + num - 1, n - 1);
if ((pixad = pixaCreate(end - start + 1)) == NULL)
return (PIXA *)ERROR_PTR("pixad not made", procName, NULL);
boxaGetExtent(boxa, &wbox, &hbox, NULL);
pixGetDimensions(pixs, &w, &h, NULL);
cropwarn = FALSE;
if (wbox > w || hbox > h)
cropwarn = TRUE;
if (pcropwarn)
*pcropwarn = cropwarn;
for (i = start; i <= end; i++) {
box = boxaGetBox(boxa, i, L_COPY);
if (cropwarn) { /* if box is outside pixs, pixd is NULL */
pixd = pixClipRectangle(pixs, box, &boxc); /* may be NULL */
if (pixd) {
pixaAddPix(pixad, pixd, L_INSERT);
pixaAddBox(pixad, boxc, L_INSERT);
}
boxDestroy(&box);
} else {
pixd = pixClipRectangle(pixs, box, NULL);
pixaAddPix(pixad, pixd, L_INSERT);
pixaAddBox(pixad, box, L_INSERT);
}
}
return pixad;
}
/*!
* \brief pixaSplitPix()
*
* \param[in] pixs with individual components on a lattice
* \param[in] nx number of mosaic cells horizontally
* \param[in] ny number of mosaic cells vertically
* \param[in] borderwidth of added border on all sides
* \param[in] bordercolor in our RGBA format: 0xrrggbbaa
* \return pixa, or NULL on error
*
* <pre>
* Notes:
* (1) This is a variant on pixaCreateFromPix(), where we
* simply divide the image up into (approximately) equal
* subunits. If you want the subimages to have essentially
* the same aspect ratio as the input pix, use nx = ny.
* (2) If borderwidth is 0, we ignore the input bordercolor and
* redefine it to white.
* (3) The bordercolor is always used to initialize each tiled pix,
* so that if the src is clipped, the unblitted part will
* be this color. This avoids 1 pixel wide black stripes at the
* left and lower edges.
* </pre>
*/
PIXA *
pixaSplitPix(PIX *pixs,
l_int32 nx,
l_int32 ny,
l_int32 borderwidth,
l_uint32 bordercolor)
{
l_int32 w, h, d, cellw, cellh, i, j;
PIX *pix1;
PIXA *pixa;
PROCNAME("pixaSplitPix");
if (!pixs)
return (PIXA *)ERROR_PTR("pixs not defined", procName, NULL);
if (nx <= 0 || ny <= 0)
return (PIXA *)ERROR_PTR("nx and ny must be > 0", procName, NULL);
borderwidth = L_MAX(0, borderwidth);
if ((pixa = pixaCreate(nx * ny)) == NULL)
return (PIXA *)ERROR_PTR("pixa not made", procName, NULL);
pixGetDimensions(pixs, &w, &h, &d);
cellw = (w + nx - 1) / nx; /* round up */
cellh = (h + ny - 1) / ny;
for (i = 0; i < ny; i++) {
for (j = 0; j < nx; j++) {
if ((pix1 = pixCreate(cellw + 2 * borderwidth,
cellh + 2 * borderwidth, d)) == NULL) {
pixaDestroy(&pixa);
return (PIXA *)ERROR_PTR("pix1 not made", procName, NULL);
}
pixCopyColormap(pix1, pixs);
if (borderwidth == 0) { /* initialize full image to white */
if (d == 1)
pixClearAll(pix1);
else
pixSetAll(pix1);
} else {
pixSetAllArbitrary(pix1, bordercolor);
}
pixRasterop(pix1, borderwidth, borderwidth, cellw, cellh,
PIX_SRC, pixs, j * cellw, i * cellh);
pixaAddPix(pixa, pix1, L_INSERT);
}
}
return pixa;
}
/*!
* \brief pixaDestroy()
*
* \param[in,out] ppixa use ptr address so it will be nulled
*
* <pre>
* Notes:
* (1) Decrements the ref count and, if 0, destroys the pixa.
* (2) Always nulls the input ptr.
* </pre>
*/
void
pixaDestroy(PIXA **ppixa)
{
l_int32 i;
PIXA *pixa;
PROCNAME("pixaDestroy");
if (ppixa == NULL) {
L_WARNING("ptr address is NULL!\n", procName);
return;
}
if ((pixa = *ppixa) == NULL)
return;
/* Decrement the refcount. If it is 0, destroy the pixa. */
pixaChangeRefcount(pixa, -1);
if (pixa->refcount <= 0) {
for (i = 0; i < pixa->n; i++)
pixDestroy(&pixa->pix[i]);
LEPT_FREE(pixa->pix);
boxaDestroy(&pixa->boxa);
LEPT_FREE(pixa);
}
*ppixa = NULL;
return;
}
/*!
* \brief pixaCopy()
*
* \param[in] pixa
* \param[in] copyflag see pix.h for details:
* L_COPY makes a new pixa and copies each pix and each box;
* L_CLONE gives a new ref-counted handle to the input pixa;
* L_COPY_CLONE makes a new pixa and inserts clones of
* all pix and boxes
* \return new pixa, or NULL on error
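 *
 * <pre>
 * Notes:
 *      (1) Usage sketch (editor's addition): a clone is just a new
 *          handle on the same pixa, so it must still be destroyed:
 *            PIXA *pixa2 = pixaCopy(pixa1, L_CLONE);  // refcount bumped
 *            ... read from pixa2 ...
 *            pixaDestroy(&pixa2);  // pixa1 remains valid
 * </pre>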
*/
PIXA *
pixaCopy(PIXA *pixa,
l_int32 copyflag)
{
l_int32 i, nb;
BOX *boxc;
PIX *pixc;
PIXA *pixac;
PROCNAME("pixaCopy");
if (!pixa)
return (PIXA *)ERROR_PTR("pixa not defined", procName, NULL);
if (copyflag == L_CLONE) {
pixaChangeRefcount(pixa, 1);
return pixa;
}
if (copyflag != L_COPY && copyflag != L_COPY_CLONE)
return (PIXA *)ERROR_PTR("invalid copyflag", procName, NULL);
if ((pixac = pixaCreate(pixa->n)) == NULL)
return (PIXA *)ERROR_PTR("pixac not made", procName, NULL);
nb = pixaGetBoxaCount(pixa);
for (i = 0; i < pixa->n; i++) {
if (copyflag == L_COPY) {
pixc = pixaGetPix(pixa, i, L_COPY);
if (i < nb) boxc = pixaGetBox(pixa, i, L_COPY);
} else { /* copy-clone */
pixc = pixaGetPix(pixa, i, L_CLONE);
if (i < nb) boxc = pixaGetBox(pixa, i, L_CLONE);
}
pixaAddPix(pixac, pixc, L_INSERT);
if (i < nb) pixaAddBox(pixac, boxc, L_INSERT);
}
return pixac;
}
/*---------------------------------------------------------------------*
* Pixa addition *
*---------------------------------------------------------------------*/
/*!
* \brief pixaAddPix()
*
* \param[in] pixa
* \param[in] pix to be added
* \param[in] copyflag L_INSERT, L_COPY, L_CLONE
* \return 0 if OK; 1 on error
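 *
 * <pre>
 * Notes:
 *      (1) Ownership sketch (editor's addition): with L_INSERT the pixa
 *          takes ownership of %pix, so do not destroy it afterwards:
 *            pixaAddPix(pixa, pix, L_INSERT);  // pixa owns pix now
 *          With L_COPY or L_CLONE the caller keeps its handle:
 *            pixaAddPix(pixa, pix, L_CLONE);
 *            pixDestroy(&pix);                 // drop the local handle
 * </pre>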
*/
l_ok
pixaAddPix(PIXA *pixa,
PIX *pix,
l_int32 copyflag)
{
l_int32 n;
PIX *pixc;
PROCNAME("pixaAddPix");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (!pix)
return ERROR_INT("pix not defined", procName, 1);
if (copyflag == L_INSERT)
pixc = pix;
else if (copyflag == L_COPY)
pixc = pixCopy(NULL, pix);
else if (copyflag == L_CLONE)
pixc = pixClone(pix);
else
return ERROR_INT("invalid copyflag", procName, 1);
if (!pixc)
return ERROR_INT("pixc not made", procName, 1);
n = pixaGetCount(pixa);
if (n >= pixa->nalloc)
pixaExtendArray(pixa);
pixa->pix[n] = pixc;
pixa->n++;
return 0;
}
/*!
* \brief pixaAddBox()
*
* \param[in] pixa
* \param[in] box
* \param[in] copyflag L_INSERT, L_COPY, L_CLONE
* \return 0 if OK, 1 on error
*/
l_ok
pixaAddBox(PIXA *pixa,
BOX *box,
l_int32 copyflag)
{
PROCNAME("pixaAddBox");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (!box)
return ERROR_INT("box not defined", procName, 1);
if (copyflag != L_INSERT && copyflag != L_COPY && copyflag != L_CLONE)
return ERROR_INT("invalid copyflag", procName, 1);
boxaAddBox(pixa->boxa, box, copyflag);
return 0;
}
/*!
* \brief pixaExtendArray()
*
* \param[in] pixa
* \return 0 if OK; 1 on error
*
* <pre>
* Notes:
* (1) Doubles the size of the pixa and boxa ptr arrays.
* </pre>
*/
static l_int32
pixaExtendArray(PIXA *pixa)
{
PROCNAME("pixaExtendArray");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
return pixaExtendArrayToSize(pixa, 2 * pixa->nalloc);
}
/*!
* \brief pixaExtendArrayToSize()
*
* \param[in] pixa
* \param[in] size
* \return 0 if OK; 1 on error
*
* <pre>
* Notes:
* (1) If necessary, reallocs new pixa and boxa ptrs arrays to %size.
* The pixa and boxa ptr arrays must always be equal in size.
* </pre>
*/
l_ok
pixaExtendArrayToSize(PIXA *pixa,
l_int32 size)
{
PROCNAME("pixaExtendArrayToSize");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (size > pixa->nalloc) {
if ((pixa->pix = (PIX **)reallocNew((void **)&pixa->pix,
sizeof(PIX *) * pixa->nalloc,
size * sizeof(PIX *))) == NULL)
return ERROR_INT("new ptr array not returned", procName, 1);
pixa->nalloc = size;
}
return boxaExtendArrayToSize(pixa->boxa, size);
}
/*---------------------------------------------------------------------*
* Pixa accessors *
*---------------------------------------------------------------------*/
/*!
* \brief pixaGetCount()
*
* \param[in] pixa
* \return count, or 0 if no pixa
*/
l_int32
pixaGetCount(PIXA *pixa)
{
PROCNAME("pixaGetCount");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 0);
return pixa->n;
}
/*!
* \brief pixaChangeRefcount()
*
* \param[in] pixa
* \param[in] delta
* \return 0 if OK, 1 on error
*/
l_ok
pixaChangeRefcount(PIXA *pixa,
l_int32 delta)
{
PROCNAME("pixaChangeRefcount");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
pixa->refcount += delta;
return 0;
}
/*!
* \brief pixaGetPix()
*
* \param[in] pixa
* \param[in] index to the index-th pix
* \param[in] accesstype L_COPY or L_CLONE
* \return pix, or NULL on error
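 *
 * <pre>
 * Notes:
 *      (1) Usage sketch (editor's addition): the returned pix, whether
 *          a copy or a clone, must be destroyed by the caller:
 *            PIX *pix = pixaGetPix(pixa, i, L_CLONE);
 *            ... inspect pix ...
 *            pixDestroy(&pix);
 * </pre>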
*/
PIX *
pixaGetPix(PIXA *pixa,
l_int32 index,
l_int32 accesstype)
{
PIX *pix;
PROCNAME("pixaGetPix");
if (!pixa)
return (PIX *)ERROR_PTR("pixa not defined", procName, NULL);
if (index < 0 || index >= pixa->n)
return (PIX *)ERROR_PTR("index not valid", procName, NULL);
if ((pix = pixa->pix[index]) == NULL) {
L_ERROR("no pix at pixa[%d]\n", procName, index);
return (PIX *)ERROR_PTR("pix not found!", procName, NULL);
}
if (accesstype == L_COPY)
return pixCopy(NULL, pix);
else if (accesstype == L_CLONE)
return pixClone(pix);
else
return (PIX *)ERROR_PTR("invalid accesstype", procName, NULL);
}
/*!
* \brief pixaGetPixDimensions()
*
* \param[in] pixa
* \param[in] index to the index-th box
* \param[out] pw, ph, pd [optional] each can be null
* \return 0 if OK, 1 on error
*/
l_ok
pixaGetPixDimensions(PIXA *pixa,
l_int32 index,
l_int32 *pw,
l_int32 *ph,
l_int32 *pd)
{
PIX *pix;
PROCNAME("pixaGetPixDimensions");
if (pw) *pw = 0;
if (ph) *ph = 0;
if (pd) *pd = 0;
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (index < 0 || index >= pixa->n)
return ERROR_INT("index not valid", procName, 1);
if ((pix = pixaGetPix(pixa, index, L_CLONE)) == NULL)
return ERROR_INT("pix not found!", procName, 1);
pixGetDimensions(pix, pw, ph, pd);
pixDestroy(&pix);
return 0;
}
/*!
* \brief pixaGetBoxa()
*
* \param[in] pixa
* \param[in] accesstype L_COPY, L_CLONE, L_COPY_CLONE
* \return boxa, or NULL on error
*/
BOXA *
pixaGetBoxa(PIXA *pixa,
l_int32 accesstype)
{
PROCNAME("pixaGetBoxa");
if (!pixa)
return (BOXA *)ERROR_PTR("pixa not defined", procName, NULL);
if (!pixa->boxa)
return (BOXA *)ERROR_PTR("boxa not defined", procName, NULL);
if (accesstype != L_COPY && accesstype != L_CLONE &&
accesstype != L_COPY_CLONE)
return (BOXA *)ERROR_PTR("invalid accesstype", procName, NULL);
return boxaCopy(pixa->boxa, accesstype);
}
/*!
* \brief pixaGetBoxaCount()
*
* \param[in] pixa
* \return count, or 0 on error
*/
l_int32
pixaGetBoxaCount(PIXA *pixa)
{
PROCNAME("pixaGetBoxaCount");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 0);
return boxaGetCount(pixa->boxa);
}
/*!
* \brief pixaGetBox()
*
* \param[in] pixa
* \param[in] index to the index-th pix
* \param[in] accesstype L_COPY or L_CLONE
 * \return  box (if null, not automatically an error), or NULL on error
*
* <pre>
* Notes:
* (1) There is always a boxa with a pixa, and it is initialized so
* that each box ptr is NULL.
* (2) In general, we expect that there is either a box associated
* with each pix, or no boxes at all in the boxa.
* (3) Having no boxes is thus not an automatic error. Whether it
* is an actual error is determined by the calling program.
* If the caller expects to get a box, it is an error; see, e.g.,
* pixaGetBoxGeometry().
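 *      (4) Usage sketch (editor's addition): a NULL return must be
 *          distinguished from an error by the caller:
 *            BOX *box = pixaGetBox(pixa, i, L_CLONE);
 *            if (box) {
 *                ... use box ...
 *                boxDestroy(&box);
 *            }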
* </pre>
*/
BOX *
pixaGetBox(PIXA *pixa,
l_int32 index,
l_int32 accesstype)
{
BOX *box;
PROCNAME("pixaGetBox");
if (!pixa)
return (BOX *)ERROR_PTR("pixa not defined", procName, NULL);
if (!pixa->boxa)
return (BOX *)ERROR_PTR("boxa not defined", procName, NULL);
if (index < 0 || index >= pixa->boxa->n)
return (BOX *)ERROR_PTR("index not valid", procName, NULL);
if (accesstype != L_COPY && accesstype != L_CLONE)
return (BOX *)ERROR_PTR("invalid accesstype", procName, NULL);
box = pixa->boxa->box[index];
if (box) {
if (accesstype == L_COPY)
return boxCopy(box);
else /* accesstype == L_CLONE */
return boxClone(box);
} else {
return NULL;
}
}
/*!
* \brief pixaGetBoxGeometry()
*
* \param[in] pixa
* \param[in] index to the index-th box
* \param[out] px, py, pw, ph [optional] each can be null
* \return 0 if OK, 1 on error
*/
l_ok
pixaGetBoxGeometry(PIXA *pixa,
l_int32 index,
l_int32 *px,
l_int32 *py,
l_int32 *pw,
l_int32 *ph)
{
BOX *box;
PROCNAME("pixaGetBoxGeometry");
if (px) *px = 0;
if (py) *py = 0;
if (pw) *pw = 0;
if (ph) *ph = 0;
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (index < 0 || index >= pixa->n)
return ERROR_INT("index not valid", procName, 1);
if ((box = pixaGetBox(pixa, index, L_CLONE)) == NULL)
return ERROR_INT("box not found!", procName, 1);
boxGetGeometry(box, px, py, pw, ph);
boxDestroy(&box);
return 0;
}
/*!
* \brief pixaSetBoxa()
*
* \param[in] pixa
* \param[in] boxa
* \param[in] accesstype L_INSERT, L_COPY, L_CLONE
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This destroys the existing boxa in the pixa.
* </pre>
*/
l_ok
pixaSetBoxa(PIXA *pixa,
BOXA *boxa,
l_int32 accesstype)
{
PROCNAME("pixaSetBoxa");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (!boxa)
return ERROR_INT("boxa not defined", procName, 1);
if (accesstype != L_INSERT && accesstype != L_COPY &&
accesstype != L_CLONE)
return ERROR_INT("invalid access type", procName, 1);
boxaDestroy(&pixa->boxa);
if (accesstype == L_INSERT)
pixa->boxa = boxa;
else
pixa->boxa = boxaCopy(boxa, accesstype);
return 0;
}
/*!
* \brief pixaGetPixArray()
*
* \param[in] pixa
* \return pix array, or NULL on error
*
* <pre>
* Notes:
* (1) This returns a ptr to the actual array. The array is
* owned by the pixa, so it must not be destroyed.
* (2) The caller should always check if the return value is NULL
* before accessing any of the pix ptrs in this array!
* </pre>
*/
PIX **
pixaGetPixArray(PIXA *pixa)
{
PROCNAME("pixaGetPixArray");
if (!pixa)
return (PIX **)ERROR_PTR("pixa not defined", procName, NULL);
return pixa->pix;
}
/*!
* \brief pixaVerifyDepth()
*
* \param[in] pixa
* \param[out] psame 1 if depth is the same for all pix; 0 otherwise
* \param[out] pmaxd [optional] max depth of all pix
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) It is considered to be an error if there are no pix.
* </pre>
*/
l_ok
pixaVerifyDepth(PIXA *pixa,
l_int32 *psame,
l_int32 *pmaxd)
{
l_int32 i, n, d, maxd, same;
PROCNAME("pixaVerifyDepth");
if (pmaxd) *pmaxd = 0;
    if (!psame)
        return ERROR_INT("psame not defined", procName, 1);
    *psame = 0;
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if ((n = pixaGetCount(pixa)) == 0)
return ERROR_INT("no pix in pixa", procName, 1);
same = 1;
pixaGetPixDimensions(pixa, 0, NULL, NULL, &maxd);
for (i = 1; i < n; i++) {
if (pixaGetPixDimensions(pixa, i, NULL, NULL, &d))
return ERROR_INT("pix depth not found", procName, 1);
        if (d != maxd)  /* compare before updating the running max */
            same = 0;
        maxd = L_MAX(maxd, d);
}
*psame = same;
if (pmaxd) *pmaxd = maxd;
return 0;
}
/*!
* \brief pixaVerifyDimensions()
*
* \param[in] pixa
* \param[out] psame 1 if dimensions are the same for all pix; 0 otherwise
* \param[out] pmaxw [optional] max width of all pix
* \param[out] pmaxh [optional] max height of all pix
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) It is considered to be an error if there are no pix.
* </pre>
*/
l_ok
pixaVerifyDimensions(PIXA *pixa,
l_int32 *psame,
l_int32 *pmaxw,
l_int32 *pmaxh)
{
l_int32 i, n, w, h, maxw, maxh, same;
PROCNAME("pixaVerifyDimensions");
if (pmaxw) *pmaxw = 0;
if (pmaxh) *pmaxh = 0;
if (!psame)
return ERROR_INT("psame not defined", procName, 1);
*psame = 0;
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if ((n = pixaGetCount(pixa)) == 0)
return ERROR_INT("no pix in pixa", procName, 1);
same = 1;
pixaGetPixDimensions(pixa, 0, &maxw, &maxh, NULL);
for (i = 1; i < n; i++) {
if (pixaGetPixDimensions(pixa, i, &w, &h, NULL))
return ERROR_INT("pix dimensions not found", procName, 1);
        if (w != maxw || h != maxh)  /* compare before updating the max */
            same = 0;
        maxw = L_MAX(maxw, w);
        maxh = L_MAX(maxh, h);
}
*psame = same;
if (pmaxw) *pmaxw = maxw;
if (pmaxh) *pmaxh = maxh;
return 0;
}
/*!
* \brief pixaIsFull()
*
* \param[in] pixa
* \param[out] pfullpa [optional] 1 if pixa is full
* \param[out] pfullba [optional] 1 if boxa is full
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) A pixa is "full" if the array of pix is fully
* occupied from index 0 to index (pixa->n - 1).
* </pre>
*/
l_ok
pixaIsFull(PIXA *pixa,
l_int32 *pfullpa,
l_int32 *pfullba)
{
l_int32 i, n, full;
BOXA *boxa;
PIX *pix;
PROCNAME("pixaIsFull");
if (pfullpa) *pfullpa = 0;
if (pfullba) *pfullba = 0;
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
if (pfullpa) {
full = 1;
for (i = 0; i < n; i++) {
if ((pix = pixaGetPix(pixa, i, L_CLONE)) == NULL) {
full = 0;
break;
}
pixDestroy(&pix);
}
*pfullpa = full;
}
if (pfullba) {
boxa = pixaGetBoxa(pixa, L_CLONE);
boxaIsFull(boxa, pfullba);
boxaDestroy(&boxa);
}
return 0;
}
/*!
* \brief pixaCountText()
*
* \param[in] pixa
* \param[out] pntext number of pix with non-empty text strings
* \return 0 if OK, 1 on error.
*
* <pre>
* Notes:
* (1) All pix have non-empty text strings if the returned value %ntext
* equals the pixa count.
* </pre>
*/
l_ok
pixaCountText(PIXA *pixa,
l_int32 *pntext)
{
char *text;
l_int32 i, n;
PIX *pix;
PROCNAME("pixaCountText");
if (!pntext)
return ERROR_INT("&ntext not defined", procName, 1);
*pntext = 0;
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
for (i = 0; i < n; i++) {
if ((pix = pixaGetPix(pixa, i, L_CLONE)) == NULL)
continue;
text = pixGetText(pix);
if (text && strlen(text) > 0)
(*pntext)++;
pixDestroy(&pix);
}
return 0;
}
/*!
* \brief pixaSetText()
*
* \param[in] pixa
* \param[in] text [optional] single text string, to insert in each pix
* \param[in] sa [optional] array of text strings, to insert in each pix
* \return 0 if OK, 1 on error.
*
* <pre>
* Notes:
* (1) To clear all the text fields, use %sa == NULL and %text == NULL.
* (2) To set all the text fields to the same value %text, use %sa = NULL.
* (3) If %sa is defined, we ignore %text and use it; %sa must have
* the same count as %pixa.
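 *      (4) Usage sketch (editor's addition): label each pix from a
 *          string array of the same size:
 *            SARRAY *sa = sarrayCreate(n);
 *            ... add n strings to sa ...
 *            pixaSetText(pixa, NULL, sa);  // %text is ignored here
 *            sarrayDestroy(&sa);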
* </pre>
*/
l_ok
pixaSetText(PIXA *pixa,
const char *text,
SARRAY *sa)
{
char *str;
l_int32 i, n;
PIX *pix;
PROCNAME("pixaSetText");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
if (sa && (sarrayGetCount(sa) != n))
return ERROR_INT("pixa and sa sizes differ", procName, 1);
if (!sa) {
for (i = 0; i < n; i++) {
if ((pix = pixaGetPix(pixa, i, L_CLONE)) == NULL)
continue;
pixSetText(pix, text);
pixDestroy(&pix);
}
return 0;
}
for (i = 0; i < n; i++) {
if ((pix = pixaGetPix(pixa, i, L_CLONE)) == NULL)
continue;
str = sarrayGetString(sa, i, L_NOCOPY);
pixSetText(pix, str);
pixDestroy(&pix);
}
return 0;
}
/*!
* \brief pixaGetLinePtrs()
*
* \param[in] pixa of pix that all have the same depth
* \param[out] psize [optional] number of pix in the pixa
* \return array of array of line ptrs, or NULL on error
*
* <pre>
* Notes:
* (1) See pixGetLinePtrs() for details.
* (2) It is best if all pix in the pixa are the same size.
* The size of each line ptr array is equal to the height
* of the pix that it refers to.
* (3) This is an array of arrays. To destroy it:
* for (i = 0; i < size; i++)
* LEPT_FREE(lineset[i]);
* LEPT_FREE(lineset);
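 *      (4) Access sketch (editor's addition; assumes 8 bpp pix):
 *            void **lines = lineset[i];  // line ptrs for pix i
 *            l_int32 val = GET_DATA_BYTE(lines[y], x);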
* </pre>
*/
void ***
pixaGetLinePtrs(PIXA *pixa,
l_int32 *psize)
{
l_int32 i, n, same;
void **lineptrs;
void ***lineset;
PIX *pix;
PROCNAME("pixaGetLinePtrs");
if (psize) *psize = 0;
if (!pixa)
return (void ***)ERROR_PTR("pixa not defined", procName, NULL);
pixaVerifyDepth(pixa, &same, NULL);
if (!same)
return (void ***)ERROR_PTR("pixa not all same depth", procName, NULL);
n = pixaGetCount(pixa);
if (psize) *psize = n;
if ((lineset = (void ***)LEPT_CALLOC(n, sizeof(void **))) == NULL)
return (void ***)ERROR_PTR("lineset not made", procName, NULL);
for (i = 0; i < n; i++) {
pix = pixaGetPix(pixa, i, L_CLONE);
lineptrs = pixGetLinePtrs(pix, NULL);
lineset[i] = lineptrs;
pixDestroy(&pix);
}
return lineset;
}
/*---------------------------------------------------------------------*
* Pixa output info *
*---------------------------------------------------------------------*/
/*!
* \brief pixaWriteStreamInfo()
*
* \param[in] fp file stream
* \param[in] pixa
* \return 0 if OK, 1 on error.
*
* <pre>
* Notes:
* (1) For each pix in the pixa, write out the pix dimensions, spp,
* text string (if it exists), and cmap info.
* </pre>
*/
l_ok
pixaWriteStreamInfo(FILE *fp,
PIXA *pixa)
{
char *text;
l_int32 i, n, w, h, d, spp, count, hastext;
PIX *pix;
PIXCMAP *cmap;
PROCNAME("pixaWriteStreamInfo");
if (!fp)
return ERROR_INT("stream not defined", procName, 1);
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
for (i = 0; i < n; i++) {
if ((pix = pixaGetPix(pixa, i, L_CLONE)) == NULL) {
fprintf(fp, "%d: no pix at this index\n", i);
continue;
}
pixGetDimensions(pix, &w, &h, &d);
spp = pixGetSpp(pix);
text = pixGetText(pix);
hastext = (text && strlen(text) > 0);
if ((cmap = pixGetColormap(pix)) != NULL)
count = pixcmapGetCount(cmap);
fprintf(fp, "Pix %d: w = %d, h = %d, d = %d, spp = %d",
i, w, h, d, spp);
if (cmap) fprintf(fp, ", cmap(%d colors)", count);
if (hastext) fprintf(fp, ", text = %s", text);
fprintf(fp, "\n");
pixDestroy(&pix);
}
return 0;
}
/*---------------------------------------------------------------------*
* Pixa array modifiers *
*---------------------------------------------------------------------*/
/*!
* \brief pixaReplacePix()
*
* \param[in] pixa
* \param[in] index to the index-th pix
* \param[in] pix insert to replace existing one
* \param[in] box [optional] insert to replace existing
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) In-place replacement of one pix.
* (2) The previous pix at that location is destroyed.
* </pre>
*/
l_ok
pixaReplacePix(PIXA *pixa,
l_int32 index,
PIX *pix,
BOX *box)
{
BOXA *boxa;
PROCNAME("pixaReplacePix");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (index < 0 || index >= pixa->n)
return ERROR_INT("index not valid", procName, 1);
if (!pix)
return ERROR_INT("pix not defined", procName, 1);
pixDestroy(&(pixa->pix[index]));
pixa->pix[index] = pix;
if (box) {
boxa = pixa->boxa;
        if (index >= boxa->n)
return ERROR_INT("boxa index not valid", procName, 1);
boxaReplaceBox(boxa, index, box);
}
return 0;
}
/*!
* \brief pixaInsertPix()
*
* \param[in] pixa
* \param[in] index at which pix is to be inserted
* \param[in] pixs new pix to be inserted
* \param[in] box [optional] new box to be inserted
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This shifts pixa[i] --> pixa[i + 1] for all i >= index,
* and then inserts at pixa[index].
* (2) To insert at the beginning of the array, set index = 0.
* (3) It should not be used repeatedly on large arrays,
* because the function is O(n).
* (4) To append a pix to a pixa, it's easier to use pixaAddPix().
* </pre>
*/
l_ok
pixaInsertPix(PIXA *pixa,
l_int32 index,
PIX *pixs,
BOX *box)
{
l_int32 i, n;
PROCNAME("pixaInsertPix");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
if (index < 0 || index > n)
return ERROR_INT("index not in {0...n}", procName, 1);
if (!pixs)
return ERROR_INT("pixs not defined", procName, 1);
if (n >= pixa->nalloc) { /* extend both ptr arrays */
pixaExtendArray(pixa);
boxaExtendArray(pixa->boxa);
}
pixa->n++;
for (i = n; i > index; i--)
pixa->pix[i] = pixa->pix[i - 1];
pixa->pix[index] = pixs;
/* Optionally, insert the box */
if (box)
boxaInsertBox(pixa->boxa, index, box);
return 0;
}
/*!
* \brief pixaRemovePix()
*
* \param[in] pixa
* \param[in] index of pix to be removed
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This shifts pixa[i] --> pixa[i - 1] for all i > index.
* (2) It should not be used repeatedly on large arrays,
* because the function is O(n).
* (3) The corresponding box is removed as well, if it exists.
* </pre>
*/
l_ok
pixaRemovePix(PIXA *pixa,
l_int32 index)
{
l_int32 i, n, nbox;
BOXA *boxa;
PIX **array;
PROCNAME("pixaRemovePix");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
if (index < 0 || index >= n)
return ERROR_INT("index not in {0...n - 1}", procName, 1);
/* Remove the pix */
array = pixa->pix;
pixDestroy(&array[index]);
for (i = index + 1; i < n; i++)
array[i - 1] = array[i];
array[n - 1] = NULL;
pixa->n--;
/* Remove the box if it exists */
boxa = pixa->boxa;
nbox = boxaGetCount(boxa);
if (index < nbox)
boxaRemoveBox(boxa, index);
return 0;
}
/*!
* \brief pixaRemovePixAndSave()
*
* \param[in] pixa
* \param[in] index of pix to be removed
* \param[out] ppix [optional] removed pix
* \param[out] pbox [optional] removed box
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This shifts pixa[i] --> pixa[i - 1] for all i > index.
* (2) It should not be used repeatedly on large arrays,
* because the function is O(n).
* (3) The corresponding box is removed as well, if it exists.
* (4) The removed pix and box can either be retained or destroyed.
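 *      (5) Usage sketch (editor's addition): pop the pix and box at
 *          %index and take ownership of both:
 *            PIX *pix; BOX *box;
 *            pixaRemovePixAndSave(pixa, index, &pix, &box);
 *            ... use them ...
 *            pixDestroy(&pix);
 *            boxDestroy(&box);  // box may be NULL; this is safe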
* </pre>
*/
l_ok
pixaRemovePixAndSave(PIXA *pixa,
l_int32 index,
PIX **ppix,
BOX **pbox)
{
l_int32 i, n, nbox;
BOXA *boxa;
PIX **array;
PROCNAME("pixaRemovePixAndSave");
if (ppix) *ppix = NULL;
if (pbox) *pbox = NULL;
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
if (index < 0 || index >= n)
return ERROR_INT("index not in {0...n - 1}", procName, 1);
/* Remove the pix */
array = pixa->pix;
if (ppix)
*ppix = pixaGetPix(pixa, index, L_CLONE);
pixDestroy(&array[index]);
for (i = index + 1; i < n; i++)
array[i - 1] = array[i];
array[n - 1] = NULL;
pixa->n--;
/* Remove the box if it exists */
boxa = pixa->boxa;
nbox = boxaGetCount(boxa);
if (index < nbox)
boxaRemoveBoxAndSave(boxa, index, pbox);
return 0;
}
/*!
* \brief pixaRemoveSelected()
*
* \param[in] pixa
* \param[in] naindex numa of indices of pix to be removed
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This gives error messages for invalid indices
* </pre>
*/
l_ok
pixaRemoveSelected(PIXA *pixa,
NUMA *naindex)
{
l_int32 i, n, index;
NUMA *na1;
PROCNAME("pixaRemoveSelected");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (!naindex)
return ERROR_INT("naindex not defined", procName, 1);
if ((n = numaGetCount(naindex)) == 0)
return ERROR_INT("naindex is empty", procName, 1);
/* Remove from highest indices first */
na1 = numaSort(NULL, naindex, L_SORT_DECREASING);
for (i = 0; i < n; i++) {
numaGetIValue(na1, i, &index);
pixaRemovePix(pixa, index);
}
numaDestroy(&na1);
return 0;
}
/*!
* \brief pixaInitFull()
*
* \param[in] pixa typically empty
* \param[in] pix [optional] to be replicated to the entire pixa ptr array
* \param[in] box [optional] to be replicated to the entire boxa ptr array
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This initializes a pixa by filling up the entire pix ptr array
* with copies of %pix. If %pix == NULL, we use a tiny placeholder
* pix (w = h = d = 1). Any existing pix are destroyed.
* It also optionally fills the boxa with copies of %box.
* After this operation, the numbers of pix and (optionally)
* boxes are equal to the number of allocated ptrs.
 *      (2) Note that we use pixaReplacePix() instead of pixaInsertPix();
 *          they both have the same effect when inserting into a NULL ptr
 *          in the pixa ptr array.
* (3) If the boxa is not initialized (i.e., filled with boxes),
* later insertion of boxes will cause an error, because the
* 'n' field is 0.
* (4) Example usage. This function is useful to prepare for a
* random insertion (or replacement) of pix into a pixa.
* To randomly insert pix into a pixa, without boxes, up to
* some index "max":
* Pixa *pixa = pixaCreate(max);
* pixaInitFull(pixa, NULL, NULL);
* An existing pixa with a smaller ptr array can also be reused:
* pixaExtendArrayToSize(pixa, max);
* pixaInitFull(pixa, NULL, NULL);
* The initialization allows the pixa to always be properly
* filled, even if all pix (and boxes) are not later replaced.
* </pre>
*/
l_ok
pixaInitFull(PIXA *pixa,
PIX *pix,
BOX *box)
{
l_int32 i, n;
PIX *pix1;
PROCNAME("pixaInitFull");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixa->nalloc;
pixa->n = n;
for (i = 0; i < n; i++) {
if (pix)
pix1 = pixCopy(NULL, pix);
else
pix1 = pixCreate(1, 1, 1);
pixaReplacePix(pixa, i, pix1, NULL);
}
if (box)
boxaInitFull(pixa->boxa, box);
return 0;
}
/*!
* \brief pixaClear()
*
* \param[in] pixa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This destroys all pix in the pixa, as well as
* all boxes in the boxa. The ptrs in the pix ptr array
* are all null'd. The number of allocated pix, n, is set to 0.
* </pre>
*/
l_ok
pixaClear(PIXA *pixa)
{
l_int32 i, n;
PROCNAME("pixaClear");
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
for (i = 0; i < n; i++)
pixDestroy(&pixa->pix[i]);
pixa->n = 0;
return boxaClear(pixa->boxa);
}
/*---------------------------------------------------------------------*
* Pixa and Pixaa combination *
*---------------------------------------------------------------------*/
/*!
* \brief pixaJoin()
*
* \param[in] pixad dest pixa; add to this one
* \param[in] pixas [optional] source pixa; add from this one
* \param[in] istart starting index in pixas
* \param[in] iend ending index in pixas; use -1 to cat all
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This appends a clone of each indicated pix in pixas to pixad
* (2) istart < 0 is taken to mean 'read from the start' (istart = 0)
* (3) iend < 0 means 'read to the end'
* (4) If pixas is NULL or contains no pix, this is a no-op.
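 *      (5) Usage sketch (editor's addition): append everything in
 *          pixa2 to pixa1:
 *            pixaJoin(pixa1, pixa2, 0, -1);
 *            pixaDestroy(&pixa2);  // if no longer needed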
* </pre>
*/
l_ok
pixaJoin(PIXA *pixad,
PIXA *pixas,
l_int32 istart,
l_int32 iend)
{
l_int32 i, n, nb;
BOXA *boxas, *boxad;
PIX *pix;
PROCNAME("pixaJoin");
if (!pixad)
return ERROR_INT("pixad not defined", procName, 1);
if (!pixas || ((n = pixaGetCount(pixas)) == 0))
return 0;
if (istart < 0)
istart = 0;
if (iend < 0 || iend >= n)
iend = n - 1;
if (istart > iend)
return ERROR_INT("istart > iend; nothing to add", procName, 1);
for (i = istart; i <= iend; i++) {
pix = pixaGetPix(pixas, i, L_CLONE);
pixaAddPix(pixad, pix, L_INSERT);
}
boxas = pixaGetBoxa(pixas, L_CLONE);
boxad = pixaGetBoxa(pixad, L_CLONE);
nb = pixaGetBoxaCount(pixas);
iend = L_MIN(iend, nb - 1);
boxaJoin(boxad, boxas, istart, iend);
boxaDestroy(&boxas); /* just the clones */
boxaDestroy(&boxad);
return 0;
}
/*!
* \brief pixaInterleave()
*
* \param[in] pixa1 first src pixa
* \param[in] pixa2 second src pixa
* \param[in] copyflag L_CLONE, L_COPY
* \return pixa interleaved from sources, or NULL on error.
*
* <pre>
* Notes:
* (1) %copyflag determines if the pix are copied or cloned.
* The boxes, if they exist, are copied.
* (2) If the two pixa have different sizes, a warning is issued,
* and the number of pairs returned is the minimum size.
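 *      (3) Usage sketch (editor's addition): interleave before/after
 *          pairs for side-by-side inspection:
 *            PIXA *pixa3 = pixaInterleave(pixa1, pixa2, L_CLONE);
 *            ... display pixa3 ...
 *            pixaDestroy(&pixa3);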
* </pre>
*/
PIXA *
pixaInterleave(PIXA *pixa1,
PIXA *pixa2,
l_int32 copyflag)
{
l_int32 i, n1, n2, n, nb1, nb2;
BOX *box;
PIX *pix;
PIXA *pixad;
PROCNAME("pixaInterleave");
if (!pixa1)
return (PIXA *)ERROR_PTR("pixa1 not defined", procName, NULL);
if (!pixa2)
return (PIXA *)ERROR_PTR("pixa2 not defined", procName, NULL);
if (copyflag != L_COPY && copyflag != L_CLONE)
return (PIXA *)ERROR_PTR("invalid copyflag", procName, NULL);
n1 = pixaGetCount(pixa1);
n2 = pixaGetCount(pixa2);
n = L_MIN(n1, n2);
if (n == 0)
return (PIXA *)ERROR_PTR("at least one input pixa is empty",
procName, NULL);
if (n1 != n2)
L_WARNING("counts differ: %d != %d\n", procName, n1, n2);
pixad = pixaCreate(2 * n);
nb1 = pixaGetBoxaCount(pixa1);
nb2 = pixaGetBoxaCount(pixa2);
for (i = 0; i < n; i++) {
pix = pixaGetPix(pixa1, i, copyflag);
pixaAddPix(pixad, pix, L_INSERT);
if (i < nb1) {
box = pixaGetBox(pixa1, i, L_COPY);
pixaAddBox(pixad, box, L_INSERT);
}
pix = pixaGetPix(pixa2, i, copyflag);
pixaAddPix(pixad, pix, L_INSERT);
if (i < nb2) {
box = pixaGetBox(pixa2, i, L_COPY);
pixaAddBox(pixad, box, L_INSERT);
}
}
return pixad;
}
/*!
* \brief pixaaJoin()
*
* \param[in] paad dest pixaa; add to this one
* \param[in] paas [optional] source pixaa; add from this one
* \param[in] istart starting index in pixaas
* \param[in] iend ending index in pixaas; use -1 to cat all
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
 *      (1) This appends a clone of each indicated pixa in paas to paad
* (2) istart < 0 is taken to mean 'read from the start' (istart = 0)
* (3) iend < 0 means 'read to the end'
* </pre>
*/
l_ok
pixaaJoin(PIXAA *paad,
PIXAA *paas,
l_int32 istart,
l_int32 iend)
{
l_int32 i, n;
PIXA *pixa;
PROCNAME("pixaaJoin");
if (!paad)
return ERROR_INT("pixaad not defined", procName, 1);
if (!paas)
return 0;
if (istart < 0)
istart = 0;
n = pixaaGetCount(paas, NULL);
if (iend < 0 || iend >= n)
iend = n - 1;
if (istart > iend)
return ERROR_INT("istart > iend; nothing to add", procName, 1);
for (i = istart; i <= iend; i++) {
pixa = pixaaGetPixa(paas, i, L_CLONE);
pixaaAddPixa(paad, pixa, L_INSERT);
}
return 0;
}
/*---------------------------------------------------------------------*
* Pixaa creation and destruction *
*---------------------------------------------------------------------*/
/*!
* \brief pixaaCreate()
*
* \param[in] n initial number of pixa ptrs
* \return paa, or NULL on error
*
* <pre>
* Notes:
* (1) A pixaa provides a 2-level hierarchy of images.
* A common use is for segmentation masks, which are
* inexpensive to store in png format.
* (2) For example, suppose you want a mask for each textline
* in a two-column page. The textline masks for each column
* can be represented by a pixa, of which there are 2 in the pixaa.
* The boxes for the textline mask components within a column
* can have their origin referred to the column rather than the page.
* Then the boxa field can be used to represent the two box (regions)
* for the columns, and the (x,y) components of each box can
* be used to get the absolute position of the textlines on
* the page.
* </pre>
*/
PIXAA *
pixaaCreate(l_int32 n)
{
PIXAA *paa;
PROCNAME("pixaaCreate");
if (n <= 0)
n = INITIAL_PTR_ARRAYSIZE;
if ((paa = (PIXAA *)LEPT_CALLOC(1, sizeof(PIXAA))) == NULL)
return (PIXAA *)ERROR_PTR("paa not made", procName, NULL);
paa->n = 0;
paa->nalloc = n;
if ((paa->pixa = (PIXA **)LEPT_CALLOC(n, sizeof(PIXA *))) == NULL) {
pixaaDestroy(&paa);
return (PIXAA *)ERROR_PTR("pixa ptrs not made", procName, NULL);
}
paa->boxa = boxaCreate(n);
return paa;
}
/*!
* \brief pixaaCreateFromPixa()
*
* \param[in] pixa
* \param[in] n number specifying subdivision of pixa
* \param[in] type L_CHOOSE_CONSECUTIVE, L_CHOOSE_SKIP_BY
* \param[in] copyflag L_CLONE, L_COPY
* \return paa, or NULL on error
*
* <pre>
* Notes:
* (1) This subdivides a pixa into a set of smaller pixa that
* are accumulated into a pixaa.
* (2) If type == L_CHOOSE_CONSECUTIVE, the first 'n' pix are
* put in a pixa and added to pixaa, then the next 'n', etc.
* If type == L_CHOOSE_SKIP_BY, the first pixa is made by
* aggregating pix[0], pix[n], pix[2*n], etc.
* (3) The copyflag specifies if each new pix is a copy or a clone.
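 *      (4) Usage sketch (editor's addition): bundle a pixa of 100 pix
 *          into 10 consecutive groups of 10:
 *            PIXAA *paa = pixaaCreateFromPixa(pixa, 10,
 *                                             L_CHOOSE_CONSECUTIVE, L_CLONE);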
* </pre>
*/
PIXAA *
pixaaCreateFromPixa(PIXA *pixa,
l_int32 n,
l_int32 type,
l_int32 copyflag)
{
l_int32 count, i, j, npixa;
PIX *pix;
PIXA *pixat;
PIXAA *paa;
PROCNAME("pixaaCreateFromPixa");
if (!pixa)
return (PIXAA *)ERROR_PTR("pixa not defined", procName, NULL);
count = pixaGetCount(pixa);
if (count == 0)
return (PIXAA *)ERROR_PTR("no pix in pixa", procName, NULL);
if (n <= 0)
return (PIXAA *)ERROR_PTR("n must be > 0", procName, NULL);
if (type != L_CHOOSE_CONSECUTIVE && type != L_CHOOSE_SKIP_BY)
return (PIXAA *)ERROR_PTR("invalid type", procName, NULL);
if (copyflag != L_CLONE && copyflag != L_COPY)
return (PIXAA *)ERROR_PTR("invalid copyflag", procName, NULL);
if (type == L_CHOOSE_CONSECUTIVE)
npixa = (count + n - 1) / n;
else /* L_CHOOSE_SKIP_BY */
npixa = L_MIN(n, count);
paa = pixaaCreate(npixa);
if (type == L_CHOOSE_CONSECUTIVE) {
for (i = 0; i < count; i++) {
if (i % n == 0)
pixat = pixaCreate(n);
pix = pixaGetPix(pixa, i, copyflag);
pixaAddPix(pixat, pix, L_INSERT);
if (i % n == n - 1)
pixaaAddPixa(paa, pixat, L_INSERT);
}
if (i % n != 0)
pixaaAddPixa(paa, pixat, L_INSERT);
} else { /* L_CHOOSE_SKIP_BY */
for (i = 0; i < npixa; i++) {
pixat = pixaCreate(count / npixa + 1);
for (j = i; j < count; j += n) {
pix = pixaGetPix(pixa, j, copyflag);
pixaAddPix(pixat, pix, L_INSERT);
}
pixaaAddPixa(paa, pixat, L_INSERT);
}
}
return paa;
}
/*!
* \brief pixaaDestroy()
*
* \param[in,out] ppaa use ptr address so it will be nulled
* \return void
*/
void
pixaaDestroy(PIXAA **ppaa)
{
l_int32 i;
PIXAA *paa;
PROCNAME("pixaaDestroy");
if (ppaa == NULL) {
L_WARNING("ptr address is NULL!\n", procName);
return;
}
if ((paa = *ppaa) == NULL)
return;
for (i = 0; i < paa->n; i++)
pixaDestroy(&paa->pixa[i]);
LEPT_FREE(paa->pixa);
boxaDestroy(&paa->boxa);
LEPT_FREE(paa);
*ppaa = NULL;
return;
}
/*---------------------------------------------------------------------*
* Pixaa addition *
*---------------------------------------------------------------------*/
/*!
* \brief pixaaAddPixa()
*
* \param[in] paa
* \param[in] pixa to be added
* \param[in] copyflag:
* L_INSERT inserts the pixa directly;
* L_COPY makes a new pixa and copies each pix and each box;
* L_CLONE gives a new handle to the input pixa;
* L_COPY_CLONE makes a new pixa and inserts clones of
* all pix and boxes
* \return 0 if OK; 1 on error
*/
l_ok
pixaaAddPixa(PIXAA *paa,
PIXA *pixa,
l_int32 copyflag)
{
l_int32 n;
PIXA *pixac;
PROCNAME("pixaaAddPixa");
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if (copyflag != L_INSERT && copyflag != L_COPY &&
copyflag != L_CLONE && copyflag != L_COPY_CLONE)
return ERROR_INT("invalid copyflag", procName, 1);
if (copyflag == L_INSERT) {
pixac = pixa;
} else {
if ((pixac = pixaCopy(pixa, copyflag)) == NULL)
return ERROR_INT("pixac not made", procName, 1);
}
n = pixaaGetCount(paa, NULL);
if (n >= paa->nalloc)
pixaaExtendArray(paa);
paa->pixa[n] = pixac;
paa->n++;
return 0;
}
/*!
* \brief pixaaExtendArray()
*
* \param[in] paa
* \return 0 if OK; 1 on error
*/
l_ok
pixaaExtendArray(PIXAA *paa)
{
PROCNAME("pixaaExtendArray");
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if ((paa->pixa = (PIXA **)reallocNew((void **)&paa->pixa,
sizeof(PIXA *) * paa->nalloc,
2 * sizeof(PIXA *) * paa->nalloc)) == NULL)
return ERROR_INT("new ptr array not returned", procName, 1);
paa->nalloc = 2 * paa->nalloc;
return 0;
}
/*!
* \brief pixaaAddPix()
*
* \param[in] paa input paa
* \param[in] index index of pixa in paa
* \param[in] pix to be added
* \param[in] box [optional] to be added
* \param[in] copyflag L_INSERT, L_COPY, L_CLONE
* \return 0 if OK; 1 on error
*/
l_ok
pixaaAddPix(PIXAA *paa,
l_int32 index,
PIX *pix,
BOX *box,
l_int32 copyflag)
{
PIXA *pixa;
PROCNAME("pixaaAddPix");
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if (!pix)
return ERROR_INT("pix not defined", procName, 1);
if ((pixa = pixaaGetPixa(paa, index, L_CLONE)) == NULL)
return ERROR_INT("pixa not found", procName, 1);
pixaAddPix(pixa, pix, copyflag);
if (box) pixaAddBox(pixa, box, copyflag);
pixaDestroy(&pixa);
return 0;
}
/*!
* \brief pixaaAddBox()
*
* \param[in] paa
* \param[in] box
* \param[in] copyflag L_INSERT, L_COPY, L_CLONE
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) The box can be used, for example, to hold the support region
* of a pixa that is being added to the pixaa.
* </pre>
*/
l_ok
pixaaAddBox(PIXAA *paa,
BOX *box,
l_int32 copyflag)
{
PROCNAME("pixaaAddBox");
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if (!box)
return ERROR_INT("box not defined", procName, 1);
if (copyflag != L_INSERT && copyflag != L_COPY && copyflag != L_CLONE)
return ERROR_INT("invalid copyflag", procName, 1);
boxaAddBox(paa->boxa, box, copyflag);
return 0;
}
/*---------------------------------------------------------------------*
* Pixaa accessors *
*---------------------------------------------------------------------*/
/*!
* \brief pixaaGetCount()
*
* \param[in] paa
* \param[out] pna [optional] number of pix in each pixa
* \return count, or 0 if no pixaa
*
* <pre>
* Notes:
* (1) If paa is empty, a returned na will also be empty.
* </pre>
*/
l_int32
pixaaGetCount(PIXAA *paa,
NUMA **pna)
{
l_int32 i, n;
NUMA *na;
PIXA *pixa;
PROCNAME("pixaaGetCount");
if (pna) *pna = NULL;
if (!paa)
return ERROR_INT("paa not defined", procName, 0);
n = paa->n;
if (pna) {
if ((na = numaCreate(n)) == NULL)
return ERROR_INT("na not made", procName, 0);
*pna = na;
for (i = 0; i < n; i++) {
pixa = pixaaGetPixa(paa, i, L_CLONE);
numaAddNumber(na, pixaGetCount(pixa));
pixaDestroy(&pixa);
}
}
return n;
}
/*!
* \brief pixaaGetPixa()
*
* \param[in] paa
* \param[in] index to the index-th pixa
* \param[in] accesstype L_COPY, L_CLONE, L_COPY_CLONE
* \return pixa, or NULL on error
*
* <pre>
* Notes:
* (1) L_COPY makes a new pixa with a copy of every pix
* (2) L_CLONE just makes a new reference to the pixa,
* and bumps the counter. You would use this, for example,
* when you need to extract some data from a pix within a
* pixa within a pixaa.
* (3) L_COPY_CLONE makes a new pixa with a clone of every pix
* and box
* (4) In all cases, you must invoke pixaDestroy() on the returned pixa
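 *      (5) Usage sketch (editor's addition): peek at one pix inside a
 *          pixaa without copying image data:
 *            PIXA *pixa = pixaaGetPixa(paa, i, L_CLONE);
 *            PIX *pix = pixaGetPix(pixa, j, L_CLONE);
 *            ... read from pix ...
 *            pixDestroy(&pix);
 *            pixaDestroy(&pixa);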
* </pre>
*/
PIXA *
pixaaGetPixa(PIXAA *paa,
l_int32 index,
l_int32 accesstype)
{
PIXA *pixa;
PROCNAME("pixaaGetPixa");
if (!paa)
return (PIXA *)ERROR_PTR("paa not defined", procName, NULL);
if (index < 0 || index >= paa->n)
return (PIXA *)ERROR_PTR("index not valid", procName, NULL);
if (accesstype != L_COPY && accesstype != L_CLONE &&
accesstype != L_COPY_CLONE)
return (PIXA *)ERROR_PTR("invalid accesstype", procName, NULL);
if ((pixa = paa->pixa[index]) == NULL) { /* shouldn't happen! */
L_ERROR("missing pixa[%d]\n", procName, index);
return (PIXA *)ERROR_PTR("pixa not found at index", procName, NULL);
}
return pixaCopy(pixa, accesstype);
}
/*!
* \brief pixaaGetBoxa()
*
* \param[in] paa
* \param[in] accesstype L_COPY, L_CLONE
* \return boxa, or NULL on error
*
* <pre>
* Notes:
* (1) L_COPY returns a copy; L_CLONE returns a new reference to the boxa.
* (2) In both cases, invoke boxaDestroy() on the returned boxa.
* </pre>
*/
BOXA *
pixaaGetBoxa(PIXAA *paa,
l_int32 accesstype)
{
PROCNAME("pixaaGetBoxa");
if (!paa)
return (BOXA *)ERROR_PTR("paa not defined", procName, NULL);
if (accesstype != L_COPY && accesstype != L_CLONE)
return (BOXA *)ERROR_PTR("invalid access type", procName, NULL);
return boxaCopy(paa->boxa, accesstype);
}
/*!
* \brief pixaaGetPix()
*
* \param[in] paa
* \param[in] index index into the pixa array in the pixaa
* \param[in] ipix index into the pix array in the pixa
* \param[in] accessflag L_COPY or L_CLONE
* \return pix, or NULL on error
*/
PIX *
pixaaGetPix(PIXAA *paa,
l_int32 index,
l_int32 ipix,
l_int32 accessflag)
{
PIX *pix;
PIXA *pixa;
PROCNAME("pixaaGetPix");
if ((pixa = pixaaGetPixa(paa, index, L_CLONE)) == NULL)
return (PIX *)ERROR_PTR("pixa not retrieved", procName, NULL);
if ((pix = pixaGetPix(pixa, ipix, accessflag)) == NULL)
L_ERROR("pix not retrieved\n", procName);
pixaDestroy(&pixa);
return pix;
}
/*!
* \brief pixaaVerifyDepth()
*
* \param[in] paa
* \param[out] psame 1 if all pix have the same depth; 0 otherwise
* \param[out] pmaxd [optional] max depth of all pix in pixaa
* \return 0 if OK; 1 on error
*
* <pre>
* Notes:
* (1) It is considered to be an error if any pixa have no pix.
* </pre>
*/
l_ok
pixaaVerifyDepth(PIXAA *paa,
l_int32 *psame,
l_int32 *pmaxd)
{
l_int32 i, n, d, maxd, same, samed;
PIXA *pixa;
PROCNAME("pixaaVerifyDepth");
if (pmaxd) *pmaxd = 0;
if (!psame)
return ERROR_INT("psame not defined", procName, 1);
*psame = 0;
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if ((n = pixaaGetCount(paa, NULL)) == 0)
return ERROR_INT("no pixa in paa", procName, 1);
pixa = pixaaGetPixa(paa, 0, L_CLONE);
pixaVerifyDepth(pixa, &same, &maxd); /* init same, maxd with first pixa */
pixaDestroy(&pixa);
for (i = 1; i < n; i++) {
pixa = pixaaGetPixa(paa, i, L_CLONE);
pixaVerifyDepth(pixa, &samed, &d);
pixaDestroy(&pixa);
        if (!samed || d != maxd)  /* compare before updating the max */
            same = 0;
        maxd = L_MAX(maxd, d);
}
*psame = same;
if (pmaxd) *pmaxd = maxd;
return 0;
}
/*!
* \brief pixaaVerifyDimensions()
*
* \param[in] paa
 * \param[out]   psame  1 if all pix have the same dimensions; 0 otherwise
* \param[out] pmaxw [optional] max width of all pix in pixaa
* \param[out] pmaxh [optional] max height of all pix in pixaa
* \return 0 if OK; 1 on error
*
* <pre>
* Notes:
* (1) It is considered to be an error if any pixa have no pix.
* </pre>
*/
l_ok
pixaaVerifyDimensions(PIXAA *paa,
l_int32 *psame,
l_int32 *pmaxw,
l_int32 *pmaxh)
{
l_int32 i, n, w, h, maxw, maxh, same, same2;
PIXA *pixa;
PROCNAME("pixaaVerifyDimensions");
if (pmaxw) *pmaxw = 0;
if (pmaxh) *pmaxh = 0;
if (!psame)
return ERROR_INT("psame not defined", procName, 1);
*psame = 0;
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if ((n = pixaaGetCount(paa, NULL)) == 0)
return ERROR_INT("no pixa in paa", procName, 1);
/* Init same; init maxw and maxh from first pixa */
pixa = pixaaGetPixa(paa, 0, L_CLONE);
pixaVerifyDimensions(pixa, &same, &maxw, &maxh);
pixaDestroy(&pixa);
for (i = 1; i < n; i++) {
pixa = pixaaGetPixa(paa, i, L_CLONE);
pixaVerifyDimensions(pixa, &same2, &w, &h);
pixaDestroy(&pixa);
        if (!same2 || w != maxw || h != maxh)  /* compare before update */
            same = 0;
        maxw = L_MAX(maxw, w);
        maxh = L_MAX(maxh, h);
}
*psame = same;
if (pmaxw) *pmaxw = maxw;
if (pmaxh) *pmaxh = maxh;
return 0;
}
/*!
* \brief pixaaIsFull()
*
* \param[in] paa
* \param[out] pfull 1 if all pixa in the paa have full pix arrays
 * \return  0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) Does not require boxa associated with each pixa to be full.
* </pre>
*/
l_int32
pixaaIsFull(PIXAA *paa,
l_int32 *pfull)
{
l_int32 i, n, full;
PIXA *pixa;
PROCNAME("pixaaIsFull");
    if (!pfull)
        return ERROR_INT("&full not defined", procName, 1);
*pfull = 0;
    if (!paa)
        return ERROR_INT("paa not defined", procName, 1);
n = pixaaGetCount(paa, NULL);
full = 1;
for (i = 0; i < n; i++) {
pixa = pixaaGetPixa(paa, i, L_CLONE);
pixaIsFull(pixa, &full, NULL);
pixaDestroy(&pixa);
if (!full) break;
}
*pfull = full;
return 0;
}
/*---------------------------------------------------------------------*
* Pixaa array modifiers *
*---------------------------------------------------------------------*/
/*!
* \brief pixaaInitFull()
*
* \param[in] paa typically empty
* \param[in] pixa to be replicated into the entire pixa ptr array
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This initializes a pixaa by filling up the entire pixa ptr array
* with copies of %pixa. Any existing pixa are destroyed.
* (2) Example usage. This function is useful to prepare for a
* random insertion (or replacement) of pixa into a pixaa.
* To randomly insert pixa into a pixaa, up to some index "max":
* Pixaa *paa = pixaaCreate(max);
* Pixa *pixa = pixaCreate(1); // if you want little memory
* pixaaInitFull(paa, pixa); // copy it to entire array
* pixaDestroy(&pixa); // no longer needed
* The initialization allows the pixaa to always be properly filled.
* </pre>
*/
l_ok
pixaaInitFull(PIXAA *paa,
PIXA *pixa)
{
l_int32 i, n;
PIXA *pixat;
PROCNAME("pixaaInitFull");
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = paa->nalloc;
paa->n = n;
for (i = 0; i < n; i++) {
pixat = pixaCopy(pixa, L_COPY);
pixaaReplacePixa(paa, i, pixat);
}
return 0;
}
/*!
* \brief pixaaReplacePixa()
*
* \param[in] paa
* \param[in] index to the index-th pixa
* \param[in] pixa insert to replace existing one
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This allows random insertion of a pixa into a pixaa, with
* destruction of any existing pixa at that location.
* The input pixa is now owned by the pixaa.
* (2) No other pixa in the array are affected.
* (3) The index must be within the allowed set.
* </pre>
*/
l_ok
pixaaReplacePixa(PIXAA *paa,
l_int32 index,
PIXA *pixa)
{
PROCNAME("pixaaReplacePixa");
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if (index < 0 || index >= paa->n)
return ERROR_INT("index not valid", procName, 1);
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
pixaDestroy(&(paa->pixa[index]));
paa->pixa[index] = pixa;
return 0;
}
/*!
* \brief pixaaClear()
*
* \param[in] paa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This destroys all pixa in the pixaa, and nulls the ptrs
* in the pixa ptr array.
* </pre>
*/
l_ok
pixaaClear(PIXAA *paa)
{
l_int32 i, n;
PROCNAME("pixaClear");
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
n = pixaaGetCount(paa, NULL);
for (i = 0; i < n; i++)
pixaDestroy(&paa->pixa[i]);
paa->n = 0;
return 0;
}
/*!
* \brief pixaaTruncate()
*
* \param[in] paa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) This identifies the largest index containing a pixa that
* has any pix within it, destroys all pixa above that index,
* and resets the count.
* </pre>
*/
l_ok
pixaaTruncate(PIXAA *paa)
{
l_int32 i, n, np;
PIXA *pixa;
PROCNAME("pixaaTruncate");
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
n = pixaaGetCount(paa, NULL);
for (i = n - 1; i >= 0; i--) {
pixa = pixaaGetPixa(paa, i, L_CLONE);
if (!pixa) {
paa->n--;
continue;
}
np = pixaGetCount(pixa);
pixaDestroy(&pixa);
if (np == 0) {
pixaDestroy(&paa->pixa[i]);
paa->n--;
} else {
break;
}
}
return 0;
}
/*---------------------------------------------------------------------*
* Pixa serialized I/O *
*---------------------------------------------------------------------*/
/*!
* \brief pixaRead()
*
* \param[in] filename
* \return pixa, or NULL on error
*
* <pre>
* Notes:
* (1) The pix are stored in the file as png.
* If the png library is not linked, this will fail.
* </pre>
*/
PIXA *
pixaRead(const char *filename)
{
FILE *fp;
PIXA *pixa;
PROCNAME("pixaRead");
#if !HAVE_LIBPNG /* defined in environ.h and config_auto.h */
return (PIXA *)ERROR_PTR("no libpng: can't read data", procName, NULL);
#endif /* !HAVE_LIBPNG */
if (!filename)
return (PIXA *)ERROR_PTR("filename not defined", procName, NULL);
if ((fp = fopenReadStream(filename)) == NULL)
return (PIXA *)ERROR_PTR("stream not opened", procName, NULL);
pixa = pixaReadStream(fp);
fclose(fp);
if (!pixa)
return (PIXA *)ERROR_PTR("pixa not read", procName, NULL);
return pixa;
}
/*!
* \brief pixaReadStream()
*
* \param[in] fp file stream
* \return pixa, or NULL on error
*
* <pre>
* Notes:
* (1) The pix are stored in the file as png.
* If the png library is not linked, this will fail.
* </pre>
*/
PIXA *
pixaReadStream(FILE *fp)
{
l_int32 n, i, xres, yres, version;
l_int32 ignore;
BOXA *boxa;
PIX *pix;
PIXA *pixa;
PROCNAME("pixaReadStream");
#if !HAVE_LIBPNG /* defined in environ.h and config_auto.h */
return (PIXA *)ERROR_PTR("no libpng: can't read data", procName, NULL);
#endif /* !HAVE_LIBPNG */
if (!fp)
return (PIXA *)ERROR_PTR("stream not defined", procName, NULL);
if (fscanf(fp, "\nPixa Version %d\n", &version) != 1)
return (PIXA *)ERROR_PTR("not a pixa file", procName, NULL);
if (version != PIXA_VERSION_NUMBER)
return (PIXA *)ERROR_PTR("invalid pixa version", procName, NULL);
if (fscanf(fp, "Number of pix = %d\n", &n) != 1)
return (PIXA *)ERROR_PTR("not a pixa file", procName, NULL);
if ((boxa = boxaReadStream(fp)) == NULL)
return (PIXA *)ERROR_PTR("boxa not made", procName, NULL);
if ((pixa = pixaCreate(n)) == NULL) {
boxaDestroy(&boxa);
return (PIXA *)ERROR_PTR("pixa not made", procName, NULL);
}
boxaDestroy(&pixa->boxa);
pixa->boxa = boxa;
for (i = 0; i < n; i++) {
if ((fscanf(fp, " pix[%d]: xres = %d, yres = %d\n",
&ignore, &xres, &yres)) != 3) {
pixaDestroy(&pixa);
return (PIXA *)ERROR_PTR("res reading error", procName, NULL);
}
if ((pix = pixReadStreamPng(fp)) == NULL) {
pixaDestroy(&pixa);
return (PIXA *)ERROR_PTR("pix not read", procName, NULL);
}
pixSetXRes(pix, xres);
pixSetYRes(pix, yres);
pixaAddPix(pixa, pix, L_INSERT);
}
return pixa;
}
/*!
* \brief pixaReadMem()
*
* \param[in] data of serialized pixa
* \param[in] size of data in bytes
* \return pixa, or NULL on error
*/
PIXA *
pixaReadMem(const l_uint8 *data,
size_t size)
{
FILE *fp;
PIXA *pixa;
PROCNAME("pixaReadMem");
if (!data)
return (PIXA *)ERROR_PTR("data not defined", procName, NULL);
if ((fp = fopenReadFromMemory(data, size)) == NULL)
return (PIXA *)ERROR_PTR("stream not opened", procName, NULL);
pixa = pixaReadStream(fp);
fclose(fp);
if (!pixa) L_ERROR("pixa not read\n", procName);
return pixa;
}
/*!
* \brief pixaWriteDebug()
*
* \param[in] fname
* \param[in] pixa
* \return 0 if OK; 1 on error
*
* <pre>
* Notes:
* (1) Debug version, intended for use in the library when writing
* to files in a temp directory with names that are compiled in.
* This is used instead of pixaWrite() for all such library calls.
* (2) The global variable LeptDebugOK defaults to 0, and can be set
* or cleared by the function setLeptDebugOK().
* </pre>
*/
l_ok
pixaWriteDebug(const char *fname,
PIXA *pixa)
{
PROCNAME("pixaWriteDebug");
if (LeptDebugOK) {
return pixaWrite(fname, pixa);
} else {
L_INFO("write to named temp file %s is disabled\n", procName, fname);
return 0;
}
}
/*!
* \brief pixaWrite()
*
* \param[in] filename
* \param[in] pixa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) The pix are stored in the file as png.
* If the png library is not linked, this will fail.
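 *      (2) Round-trip sketch (editor's addition; the filename is
 *          illustrative):
 *            pixaWrite("/tmp/junk.pa", pixa1);        // serialize
 *            PIXA *pixa2 = pixaRead("/tmp/junk.pa");  // reconstruct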
* </pre>
*/
l_ok
pixaWrite(const char *filename,
PIXA *pixa)
{
l_int32 ret;
FILE *fp;
PROCNAME("pixaWrite");
#if !HAVE_LIBPNG /* defined in environ.h and config_auto.h */
return ERROR_INT("no libpng: can't write data", procName, 1);
#endif /* !HAVE_LIBPNG */
if (!filename)
return ERROR_INT("filename not defined", procName, 1);
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
if ((fp = fopenWriteStream(filename, "wb")) == NULL)
return ERROR_INT("stream not opened", procName, 1);
ret = pixaWriteStream(fp, pixa);
fclose(fp);
if (ret)
return ERROR_INT("pixa not written to stream", procName, 1);
return 0;
}
/*!
* \brief pixaWriteStream()
*
* \param[in] fp file stream opened for "wb"
* \param[in] pixa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) The pix are stored in the file as png.
* If the png library is not linked, this will fail.
* </pre>
*/
l_ok
pixaWriteStream(FILE *fp,
PIXA *pixa)
{
l_int32 n, i;
PIX *pix;
PROCNAME("pixaWriteStream");
#if !HAVE_LIBPNG /* defined in environ.h and config_auto.h */
return ERROR_INT("no libpng: can't write data", procName, 1);
#endif /* !HAVE_LIBPNG */
if (!fp)
return ERROR_INT("stream not defined", procName, 1);
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
n = pixaGetCount(pixa);
fprintf(fp, "\nPixa Version %d\n", PIXA_VERSION_NUMBER);
fprintf(fp, "Number of pix = %d\n", n);
boxaWriteStream(fp, pixa->boxa);
for (i = 0; i < n; i++) {
if ((pix = pixaGetPix(pixa, i, L_CLONE)) == NULL)
return ERROR_INT("pix not found", procName, 1);
fprintf(fp, " pix[%d]: xres = %d, yres = %d\n",
i, pix->xres, pix->yres);
pixWriteStreamPng(fp, pix, 0.0);
pixDestroy(&pix);
}
return 0;
}
/*!
* \brief pixaWriteMem()
*
* \param[out] pdata data of serialized pixa
* \param[out] psize size of returned data
* \param[in] pixa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) Serializes a pixa in memory and puts the result in a buffer.
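 *      (2) Usage sketch (editor's addition):
 *            l_uint8 *data; size_t size;
 *            pixaWriteMem(&data, &size, pixa1);
 *            PIXA *pixa2 = pixaReadMem(data, size);
 *            LEPT_FREE(data);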
* </pre>
*/
l_ok
pixaWriteMem(l_uint8 **pdata,
size_t *psize,
PIXA *pixa)
{
l_int32 ret;
FILE *fp;
PROCNAME("pixaWriteMem");
if (pdata) *pdata = NULL;
if (psize) *psize = 0;
if (!pdata)
return ERROR_INT("&data not defined", procName, 1);
if (!psize)
return ERROR_INT("&size not defined", procName, 1);
if (!pixa)
return ERROR_INT("pixa not defined", procName, 1);
#if HAVE_FMEMOPEN
if ((fp = open_memstream((char **)pdata, psize)) == NULL)
return ERROR_INT("stream not opened", procName, 1);
ret = pixaWriteStream(fp, pixa);
#else
L_INFO("work-around: writing to a temp file\n", procName);
#ifdef _WIN32
if ((fp = fopenWriteWinTempfile()) == NULL)
return ERROR_INT("tmpfile stream not opened", procName, 1);
#else
if ((fp = tmpfile()) == NULL)
return ERROR_INT("tmpfile stream not opened", procName, 1);
#endif /* _WIN32 */
ret = pixaWriteStream(fp, pixa);
rewind(fp);
*pdata = l_binaryReadStream(fp, psize);
#endif /* HAVE_FMEMOPEN */
fclose(fp);
return ret;
}
/*!
* \brief pixaReadBoth()
*
* \param[in] filename
* \return pixa, or NULL on error
*
* <pre>
* Notes:
* (1) This reads serialized files of either a pixa or a pixacomp,
* and returns a pixa in memory. It requires png and jpeg libraries.
* </pre>
*/
PIXA *
pixaReadBoth(const char *filename)
{
char buf[32];
char *sname;
PIXA *pixa;
PIXAC *pac;
PROCNAME("pixaReadBoth");
if (!filename)
return (PIXA *)ERROR_PTR("filename not defined", procName, NULL);
l_getStructStrFromFile(filename, L_STR_NAME, &sname);
if (!sname)
return (PIXA *)ERROR_PTR("struct name not found", procName, NULL);
snprintf(buf, sizeof(buf), "%s", sname);
LEPT_FREE(sname);
if (strcmp(buf, "Pixacomp") == 0) {
if ((pac = pixacompRead(filename)) == NULL)
return (PIXA *)ERROR_PTR("pac not made", procName, NULL);
pixa = pixaCreateFromPixacomp(pac, L_COPY);
pixacompDestroy(&pac);
} else if (strcmp(buf, "Pixa") == 0) {
if ((pixa = pixaRead(filename)) == NULL)
return (PIXA *)ERROR_PTR("pixa not made", procName, NULL);
} else {
return (PIXA *)ERROR_PTR("invalid file type", procName, NULL);
}
return pixa;
}
/*---------------------------------------------------------------------*
* Pixaa serialized I/O *
*---------------------------------------------------------------------*/
/*!
* \brief pixaaReadFromFiles()
*
* \param[in] dirname directory
* \param[in] substr [optional] substring filter on filenames; can be NULL
* \param[in] first 0-based
* \param[in] nfiles use 0 for everything from %first to the end
* \return paa, or NULL on error or if no pixa files are found.
*
* <pre>
* Notes:
* (1) The files must be serialized pixa files (e.g., *.pa)
* If some files cannot be read, warnings are issued.
* (2) Use %substr to filter filenames in the directory. If
* %substr == NULL, this takes all files.
* (3) After filtering, use %first and %nfiles to select
* a contiguous set of files, that have been lexically
* sorted in increasing order.
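 *      (4) Usage sketch (editor's addition; the directory and suffix
 *          are illustrative): read every "*.pa" file in a directory:
 *            PIXAA *paa = pixaaReadFromFiles("/tmp/padir", ".pa", 0, 0);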
* </pre>
*/
PIXAA *
pixaaReadFromFiles(const char *dirname,
const char *substr,
l_int32 first,
l_int32 nfiles)
{
char *fname;
l_int32 i, n;
PIXA *pixa;
PIXAA *paa;
SARRAY *sa;
PROCNAME("pixaaReadFromFiles");
if (!dirname)
return (PIXAA *)ERROR_PTR("dirname not defined", procName, NULL);
sa = getSortedPathnamesInDirectory(dirname, substr, first, nfiles);
if (!sa || ((n = sarrayGetCount(sa)) == 0)) {
sarrayDestroy(&sa);
return (PIXAA *)ERROR_PTR("no pixa files found", procName, NULL);
}
paa = pixaaCreate(n);
for (i = 0; i < n; i++) {
fname = sarrayGetString(sa, i, L_NOCOPY);
if ((pixa = pixaRead(fname)) == NULL) {
L_ERROR("pixa not read for %d-th file", procName, i);
continue;
}
pixaaAddPixa(paa, pixa, L_INSERT);
}
sarrayDestroy(&sa);
return paa;
}
/*!
* \brief pixaaRead()
*
* \param[in] filename
* \return paa, or NULL on error
*
* <pre>
* Notes:
* (1) The pix are stored in the file as png.
* If the png library is not linked, this will fail.
* </pre>
*/
PIXAA *
pixaaRead(const char *filename)
{
FILE *fp;
PIXAA *paa;
PROCNAME("pixaaRead");
#if !HAVE_LIBPNG /* defined in environ.h and config_auto.h */
return (PIXAA *)ERROR_PTR("no libpng: can't read data", procName, NULL);
#endif /* !HAVE_LIBPNG */
if (!filename)
return (PIXAA *)ERROR_PTR("filename not defined", procName, NULL);
if ((fp = fopenReadStream(filename)) == NULL)
return (PIXAA *)ERROR_PTR("stream not opened", procName, NULL);
paa = pixaaReadStream(fp);
fclose(fp);
if (!paa)
return (PIXAA *)ERROR_PTR("paa not read", procName, NULL);
return paa;
}
/*!
* \brief pixaaReadStream()
*
* \param[in] fp file stream
* \return paa, or NULL on error
*
* <pre>
* Notes:
* (1) The pix are stored in the file as png.
* If the png library is not linked, this will fail.
* </pre>
*/
PIXAA *
pixaaReadStream(FILE *fp)
{
l_int32 n, i, version;
l_int32 ignore;
BOXA *boxa;
PIXA *pixa;
PIXAA *paa;
PROCNAME("pixaaReadStream");
#if !HAVE_LIBPNG /* defined in environ.h and config_auto.h */
return (PIXAA *)ERROR_PTR("no libpng: can't read data", procName, NULL);
#endif /* !HAVE_LIBPNG */
if (!fp)
return (PIXAA *)ERROR_PTR("stream not defined", procName, NULL);
if (fscanf(fp, "\nPixaa Version %d\n", &version) != 1)
return (PIXAA *)ERROR_PTR("not a pixaa file", procName, NULL);
if (version != PIXAA_VERSION_NUMBER)
return (PIXAA *)ERROR_PTR("invalid pixaa version", procName, NULL);
if (fscanf(fp, "Number of pixa = %d\n", &n) != 1)
return (PIXAA *)ERROR_PTR("not a pixaa file", procName, NULL);
if ((paa = pixaaCreate(n)) == NULL)
return (PIXAA *)ERROR_PTR("paa not made", procName, NULL);
if ((boxa = boxaReadStream(fp)) == NULL) {
pixaaDestroy(&paa);
return (PIXAA *)ERROR_PTR("boxa not made", procName, NULL);
}
boxaDestroy(&paa->boxa);
paa->boxa = boxa;
for (i = 0; i < n; i++) {
if ((fscanf(fp, "\n\n --------------- pixa[%d] ---------------\n",
&ignore)) != 1) {
pixaaDestroy(&paa);
return (PIXAA *)ERROR_PTR("text reading", procName, NULL);
}
if ((pixa = pixaReadStream(fp)) == NULL) {
pixaaDestroy(&paa);
return (PIXAA *)ERROR_PTR("pixa not read", procName, NULL);
}
pixaaAddPixa(paa, pixa, L_INSERT);
}
return paa;
}
/*!
* \brief pixaaReadMem()
*
* \param[in] data of serialized pixaa
* \param[in] size of data in bytes
* \return paa, or NULL on error
*/
PIXAA *
pixaaReadMem(const l_uint8 *data,
size_t size)
{
FILE *fp;
PIXAA *paa;
PROCNAME("paaReadMem");
if (!data)
return (PIXAA *)ERROR_PTR("data not defined", procName, NULL);
if ((fp = fopenReadFromMemory(data, size)) == NULL)
return (PIXAA *)ERROR_PTR("stream not opened", procName, NULL);
paa = pixaaReadStream(fp);
fclose(fp);
if (!paa) L_ERROR("paa not read\n", procName);
return paa;
}
/*!
* \brief pixaaWrite()
*
* \param[in] filename
* \param[in] paa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) The pix are stored in the file as png.
* If the png library is not linked, this will fail.
* </pre>
*/
l_ok
pixaaWrite(const char *filename,
PIXAA *paa)
{
l_int32 ret;
FILE *fp;
PROCNAME("pixaaWrite");
#if !HAVE_LIBPNG /* defined in environ.h and config_auto.h */
return ERROR_INT("no libpng: can't read data", procName, 1);
#endif /* !HAVE_LIBPNG */
if (!filename)
return ERROR_INT("filename not defined", procName, 1);
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
if ((fp = fopenWriteStream(filename, "wb")) == NULL)
return ERROR_INT("stream not opened", procName, 1);
ret = pixaaWriteStream(fp, paa);
fclose(fp);
if (ret)
return ERROR_INT("paa not written to stream", procName, 1);
return 0;
}
/*!
* \brief pixaaWriteStream()
*
* \param[in] fp file stream opened for "wb"
* \param[in] paa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) The pix are stored in the file as png.
* If the png library is not linked, this will fail.
* </pre>
*/
l_ok
pixaaWriteStream(FILE *fp,
PIXAA *paa)
{
l_int32 n, i;
PIXA *pixa;
PROCNAME("pixaaWriteStream");
#if !HAVE_LIBPNG /* defined in environ.h and config_auto.h */
return ERROR_INT("no libpng: can't read data", procName, 1);
#endif /* !HAVE_LIBPNG */
if (!fp)
return ERROR_INT("stream not defined", procName, 1);
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
n = pixaaGetCount(paa, NULL);
fprintf(fp, "\nPixaa Version %d\n", PIXAA_VERSION_NUMBER);
fprintf(fp, "Number of pixa = %d\n", n);
boxaWriteStream(fp, paa->boxa);
for (i = 0; i < n; i++) {
if ((pixa = pixaaGetPixa(paa, i, L_CLONE)) == NULL)
return ERROR_INT("pixa not found", procName, 1);
fprintf(fp, "\n\n --------------- pixa[%d] ---------------\n", i);
pixaWriteStream(fp, pixa);
pixaDestroy(&pixa);
}
return 0;
}
/*!
* \brief pixaaWriteMem()
*
* \param[out] pdata data of serialized pixaa
* \param[out] psize size of returned data
* \param[in] paa
* \return 0 if OK, 1 on error
*
* <pre>
* Notes:
* (1) Serializes a pixaa in memory and puts the result in a buffer.
* </pre>
*/
l_ok
pixaaWriteMem(l_uint8 **pdata,
size_t *psize,
PIXAA *paa)
{
l_int32 ret;
FILE *fp;
PROCNAME("pixaaWriteMem");
if (pdata) *pdata = NULL;
if (psize) *psize = 0;
if (!pdata)
return ERROR_INT("&data not defined", procName, 1);
if (!psize)
return ERROR_INT("&size not defined", procName, 1);
if (!paa)
return ERROR_INT("paa not defined", procName, 1);
#if HAVE_FMEMOPEN
if ((fp = open_memstream((char **)pdata, psize)) == NULL)
return ERROR_INT("stream not opened", procName, 1);
ret = pixaaWriteStream(fp, paa);
#else
L_INFO("work-around: writing to a temp file\n", procName);
#ifdef _WIN32
if ((fp = fopenWriteWinTempfile()) == NULL)
return ERROR_INT("tmpfile stream not opened", procName, 1);
#else
if ((fp = tmpfile()) == NULL)
return ERROR_INT("tmpfile stream not opened", procName, 1);
#endif /* _WIN32 */
ret = pixaaWriteStream(fp, paa);
rewind(fp);
*pdata = l_binaryReadStream(fp, psize);
#endif /* HAVE_FMEMOPEN */
fclose(fp);
return ret;
}
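/*---------------------------------------------------------------------*
 *                 Usage sketch (illustrative only)                    *
 *---------------------------------------------------------------------*/
/*
 * The following is an illustrative sketch, not part of the original file:
 * it round-trips a pixaa through its serialized form on disk using the
 * functions above.  The function name and the path are assumptions, and
 * error handling is minimal.
 */
l_ok
example_pixaa_roundtrip(PIXAA  *paa)
{
PIXAA  *paa2;

    if (pixaaWrite("/tmp/sample.paa", paa))  /* pix stored as png */
        return 1;
    if ((paa2 = pixaaRead("/tmp/sample.paa")) == NULL)
        return 1;
    /* ... use paa2 ... */
    pixaaDestroy(&paa2);
    return 0;
}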
package sql
// Copyright (c) Microsoft and contributors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Code generated by Microsoft (R) AutoRest Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
import (
"context"
"github.com/Azure/go-autorest/autorest"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/Azure/go-autorest/tracing"
"net/http"
)
// ElasticPoolActivitiesClient is the client for the Azure SQL Database management API, which provides a
// RESTful set of web services that interact with Azure SQL Database services to manage your databases. The
// API enables you to create, retrieve, update, and delete databases.
type ElasticPoolActivitiesClient struct {
BaseClient
}
// NewElasticPoolActivitiesClient creates an instance of the ElasticPoolActivitiesClient client.
func NewElasticPoolActivitiesClient(subscriptionID string) ElasticPoolActivitiesClient {
return NewElasticPoolActivitiesClientWithBaseURI(DefaultBaseURI, subscriptionID)
}
// NewElasticPoolActivitiesClientWithBaseURI creates an instance of the ElasticPoolActivitiesClient client using a
// custom endpoint. Use this when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds,
// Azure stack).
func NewElasticPoolActivitiesClientWithBaseURI(baseURI string, subscriptionID string) ElasticPoolActivitiesClient {
return ElasticPoolActivitiesClient{NewWithBaseURI(baseURI, subscriptionID)}
}
// ListByElasticPool returns elastic pool activities.
// Parameters:
// resourceGroupName - the name of the resource group that contains the resource. You can obtain this value
// from the Azure Resource Manager API or the portal.
// serverName - the name of the server.
// elasticPoolName - the name of the elastic pool for which to get the current activity.
func (client ElasticPoolActivitiesClient) ListByElasticPool(ctx context.Context, resourceGroupName string, serverName string, elasticPoolName string) (result ElasticPoolActivityListResult, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/ElasticPoolActivitiesClient.ListByElasticPool")
defer func() {
sc := -1
if result.Response.Response != nil {
sc = result.Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
req, err := client.ListByElasticPoolPreparer(ctx, resourceGroupName, serverName, elasticPoolName)
if err != nil {
err = autorest.NewErrorWithError(err, "sql.ElasticPoolActivitiesClient", "ListByElasticPool", nil, "Failure preparing request")
return
}
resp, err := client.ListByElasticPoolSender(req)
if err != nil {
result.Response = autorest.Response{Response: resp}
err = autorest.NewErrorWithError(err, "sql.ElasticPoolActivitiesClient", "ListByElasticPool", resp, "Failure sending request")
return
}
result, err = client.ListByElasticPoolResponder(resp)
if err != nil {
err = autorest.NewErrorWithError(err, "sql.ElasticPoolActivitiesClient", "ListByElasticPool", resp, "Failure responding to request")
}
return
}
// ListByElasticPoolPreparer prepares the ListByElasticPool request.
func (client ElasticPoolActivitiesClient) ListByElasticPoolPreparer(ctx context.Context, resourceGroupName string, serverName string, elasticPoolName string) (*http.Request, error) {
pathParameters := map[string]interface{}{
"elasticPoolName": autorest.Encode("path", elasticPoolName),
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"serverName": autorest.Encode("path", serverName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
const APIVersion = "2014-04-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
preparer := autorest.CreatePreparer(
autorest.AsGet(),
autorest.WithBaseURL(client.BaseURI),
autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/elasticPools/{elasticPoolName}/elasticPoolActivity", pathParameters),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
// ListByElasticPoolSender sends the ListByElasticPool request. The method will close the
// http.Response Body if it receives an error.
func (client ElasticPoolActivitiesClient) ListByElasticPoolSender(req *http.Request) (*http.Response, error) {
return client.Send(req, azure.DoRetryWithRegistration(client.Client))
}
// ListByElasticPoolResponder handles the response to the ListByElasticPool request. The method always
// closes the http.Response Body.
func (client ElasticPoolActivitiesClient) ListByElasticPoolResponder(resp *http.Response) (result ElasticPoolActivityListResult, err error) {
err = autorest.Respond(
resp,
azure.WithErrorUnlessStatusCode(http.StatusOK),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
return
}
/*
* # Semantic UI
* https://github.com/Semantic-Org/Semantic-UI
* http://www.semantic-ui.com/
*
* Copyright 2014 Contributors
* Released under the MIT license
* http://opensource.org/licenses/MIT
*
*/
/*******************************
Activity Feed
*******************************/
.ui.feed {
margin: 1em 0em;
}
.ui.feed:first-child {
margin-top: 0em;
}
.ui.feed:last-child {
  margin-bottom: 0em;
}
/*******************************
Content
*******************************/
/* Event */
.ui.feed > .event {
display: table;
width: 100%;
padding: 0.5rem 0em;
margin: 0em;
background: none;
border-top: none;
}
.ui.feed > .event:first-child {
border-top: 0px;
padding-top: 0em;
}
.ui.feed > .event:last-child {
padding-bottom: 0em;
}
/* Event Label */
.ui.feed > .event > .label {
display: table-cell;
width: 2.5em;
height: 2.5em;
vertical-align: top;
text-align: left;
}
.ui.feed > .event > .label .icon {
opacity: 1;
font-size: 1.5em;
width: 100%;
padding: 0.25em;
background: none;
border: none;
  border-radius: 0em;
color: rgba(0, 0, 0, 0.6);
}
.ui.feed > .event > .label img {
width: 100%;
height: auto;
border-radius: 500rem;
}
.ui.feed > .event > .label + .content {
padding: 0.5em 0em 0.5em 1.25em;
}
/* Content */
.ui.feed > .event > .content {
display: table-cell;
vertical-align: top;
text-align: left;
word-wrap: break-word;
}
.ui.feed > .event:last-child > .content {
padding-bottom: 0em;
}
/* Link */
.ui.feed > .event > .content a {
cursor: pointer;
}
/*--------------
Date
---------------*/
.ui.feed > .event > .content .date {
margin: -0.5rem 0em 0em;
padding: 0em;
font-weight: normal;
font-size: 1em;
font-style: normal;
color: rgba(0, 0, 0, 0.4);
}
/*--------------
Summary
---------------*/
.ui.feed > .event > .content .summary {
margin: 0em;
font-size: 1em;
font-weight: bold;
color: rgba(0, 0, 0, 0.8);
}
/* Summary Image */
.ui.feed > .event > .content .summary img {
display: inline-block;
width: auto;
height: 2em;
margin: -0.25em 0.25em 0em 0em;
border-radius: 0.25em;
vertical-align: middle;
}
/*--------------
User
---------------*/
.ui.feed > .event > .content .user {
display: inline-block;
font-weight: bold;
margin-right: 0em;
vertical-align: baseline;
}
.ui.feed > .event > .content .user img {
margin: -0.25em 0.25em 0em 0em;
width: auto;
height: 2em;
vertical-align: middle;
}
/*--------------
Inline Date
---------------*/
/* Date inside Summary */
.ui.feed > .event > .content .summary > .date {
display: inline-block;
float: none;
font-weight: normal;
font-size: 0.875em;
font-style: normal;
margin: 0em 0em 0em 0.5em;
padding: 0em;
color: rgba(0, 0, 0, 0.4);
}
/*--------------
Extra Summary
---------------*/
.ui.feed > .event > .content .extra {
margin: 0.5em 0em 0em;
background: none;
padding: 0em;
color: rgba(0, 0, 0, 0.8);
}
/* Images */
.ui.feed > .event > .content .extra.images img {
display: inline-block;
margin: 0em 0.25em 0em 0em;
width: 6em;
}
/* Text */
.ui.feed > .event > .content .extra.text {
padding: 0.5em 1em;
border-left: 3px solid rgba(0, 0, 0, 0.2);
font-size: 1em;
max-width: 500px;
line-height: 1.33;
}
/*--------------
Meta
---------------*/
.ui.feed > .event > .content .meta {
display: inline-block;
font-size: 0.875em;
margin: 0.5em 0em 0em;
background: none;
border: none;
border-radius: 0;
box-shadow: none;
padding: 0em;
color: rgba(0, 0, 0, 0.6);
}
.ui.feed > .event > .content .meta > * {
position: relative;
margin-left: 0.75em;
}
.ui.feed > .event > .content .meta > *:after {
content: '';
color: rgba(0, 0, 0, 0.2);
top: 0em;
left: -1em;
opacity: 1;
position: absolute;
vertical-align: top;
}
.ui.feed > .event > .content .meta .like {
-webkit-transition: 0.2s color ease;
transition: 0.2s color ease;
}
.ui.feed > .event > .content .meta .like:hover .icon {
color: #ff2733;
}
.ui.feed > .event > .content .meta .active.like .icon {
color: #ef404a;
}
/* First element */
.ui.feed > .event > .content .meta > :first-child {
margin-left: 0em;
}
.ui.feed > .event > .content .meta > :first-child::after {
display: none;
}
/* Action */
.ui.feed > .event > .content .meta a,
.ui.feed > .event > .content .meta > .icon {
cursor: pointer;
opacity: 1;
color: rgba(0, 0, 0, 0.5);
-webkit-transition: color 0.2s ease;
transition: color 0.2s ease;
}
.ui.feed > .event > .content .meta a:hover,
.ui.feed > .event > .content .meta a:hover .icon,
.ui.feed > .event > .content .meta > .icon:hover {
color: rgba(0, 0, 0, 0.8);
}
/*******************************
Variations
*******************************/
.ui.small.feed {
font-size: 0.9em;
}
.ui.feed {
font-size: 1em;
}
.ui.large.feed {
font-size: 1.1em;
}
/*******************************
Theme Overrides
*******************************/
/*******************************
User Variable Overrides
*******************************/
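/*******************************
         Usage Example
*******************************/

/* Illustrative markup targeted by the rules above (an assumption based on
   the selectors; not part of the original stylesheet):

   <div class="ui feed">
     <div class="event">
       <div class="label"><img src="avatar.png"></div>
       <div class="content">
         <div class="summary">
           <a class="user">Elliot</a> added you as a friend
           <div class="date">1 hour ago</div>
         </div>
         <div class="meta">
           <a class="like"><i class="like icon"></i> 4 Likes</a>
         </div>
       </div>
     </div>
   </div>
*/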
/*
* Copyright Andrey Semashev 2007 - 2015.
* Distributed under the Boost Software License, Version 1.0.
* (See accompanying file LICENSE_1_0.txt or copy at
* http://www.boost.org/LICENSE_1_0.txt)
*/
/*!
* \file thread_specific.hpp
* \author Andrey Semashev
* \date 01.03.2008
*
* \brief This header is the Boost.Log library implementation, see the library documentation
* at http://www.boost.org/doc/libs/release/libs/log/doc/html/index.html.
*/
#ifndef BOOST_LOG_DETAIL_THREAD_SPECIFIC_HPP_INCLUDED_
#define BOOST_LOG_DETAIL_THREAD_SPECIFIC_HPP_INCLUDED_
#include <boost/static_assert.hpp>
#include <boost/type_traits/is_pod.hpp>
#include <boost/log/detail/config.hpp>
#ifdef BOOST_HAS_PRAGMA_ONCE
#pragma once
#endif
#if !defined(BOOST_LOG_NO_THREADS)
#include <boost/log/detail/header.hpp>
namespace boost {
BOOST_LOG_OPEN_NAMESPACE
namespace aux {
//! Base class for TLS to hide platform-specific storage management
class thread_specific_base
{
private:
#if defined(BOOST_THREAD_PLATFORM_WIN32)
typedef unsigned long key_storage;
#else
typedef void* key_storage;
#endif
key_storage m_Key;
protected:
BOOST_LOG_API thread_specific_base();
BOOST_LOG_API ~thread_specific_base();
BOOST_LOG_API void* get_content() const;
BOOST_LOG_API void set_content(void* value) const;
// Copying prohibited
BOOST_DELETED_FUNCTION(thread_specific_base(thread_specific_base const&))
BOOST_DELETED_FUNCTION(thread_specific_base& operator= (thread_specific_base const&))
};
//! A TLS wrapper for small POD types with least possible overhead
template< typename T >
class thread_specific :
public thread_specific_base
{
BOOST_STATIC_ASSERT_MSG(sizeof(T) <= sizeof(void*) && is_pod< T >::value, "Boost.Log: Thread-specific values must be PODs and must not exceed the size of a pointer");
//! Union to perform type casting
union value_storage
{
void* as_pointer;
T as_value;
};
public:
//! Default constructor
BOOST_DEFAULTED_FUNCTION(thread_specific(), {})
//! Initializing constructor
thread_specific(T const& value)
{
set(value);
}
//! Assignment
thread_specific& operator= (T const& value)
{
set(value);
return *this;
}
//! Accessor
T get() const
{
value_storage cast = {};
cast.as_pointer = thread_specific_base::get_content();
return cast.as_value;
}
//! Setter
void set(T const& value)
{
value_storage cast = {};
cast.as_value = value;
thread_specific_base::set_content(cast.as_pointer);
}
};
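/*!
 * Illustrative usage sketch (not part of the original header): one counter
 * value per thread.  The stored type must be a POD no larger than a pointer,
 * as the static assertion above enforces; the setup below is an assumption,
 * not library-provided code.
 *
 * \code
 * static thread_specific< unsigned int > tls_counter;
 *
 * void bump()
 * {
 *     tls_counter.set(tls_counter.get() + 1u); // affects the calling thread only
 * }
 * \endcode
 */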
} // namespace aux
BOOST_LOG_CLOSE_NAMESPACE // namespace log
} // namespace boost
#include <boost/log/detail/footer.hpp>
#endif // !defined(BOOST_LOG_NO_THREADS)
#endif // BOOST_LOG_DETAIL_THREAD_SPECIFIC_HPP_INCLUDED_
# frozen_string_literal: true
class ApplicationRecord < ActiveRecord::Base
include BlueDoc::RichText::Attribute
self.abstract_class = true
def self.t(*args)
title = args.shift
title = "activerecord.errors.messages.#{title}" if title.start_with?(".")
I18n.t(title, *args)
end
def t(*args)
self.class.t(*args)
end
def as_rc_json(options = {})
json = self.as_json(options)
errors = {}
self.errors.keys.each do |key|
errors[key.to_s] = self.errors.full_messages_for(key)&.first
end
json["errors"] = errors
json
end
end
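# Illustrative usage sketch (not part of the original file): as_rc_json
# augments the usual as_json payload with an "errors" hash that maps each
# invalid attribute to its first full error message. The model and attribute
# names below are assumptions.
#
#   doc = Doc.new(title: nil)
#   doc.valid?
#   doc.as_rc_json
#   # => { ..., "errors" => { "title" => "Title can't be blank" } }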
date: 2020-09-25
source:
nom: Ministère des Solidarités et de la Santé
  # Airtable data completed by the DGS
donneesNationales:
casConfirmes: 513034
deces: 20995
decesEhpad: 10666
hospitalises: 6128
reanimation: 1098
gueris: 94891
casConfirmesEhpad: 44471
nouvellesHospitalisations: 661
nouvellesReanimations: 129
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package concurrency
import (
"context"
"math"
v3 "go.etcd.io/etcd/clientv3"
)
// STM is an interface for software transactional memory.
type STM interface {
// Get returns the value for a key and inserts the key in the txn's read set.
// If Get fails, it aborts the transaction with an error, never returning.
Get(key ...string) string
// Put adds a value for a key to the write set.
Put(key, val string, opts ...v3.OpOption)
// Rev returns the revision of a key in the read set.
Rev(key string) int64
// Del deletes a key.
Del(key string)
// commit attempts to apply the txn's changes to the server.
commit() *v3.TxnResponse
reset()
}
// Isolation is an enumeration of transactional isolation levels which
// describes how transactions should interfere and conflict.
type Isolation int
const (
// SerializableSnapshot provides serializable isolation and also checks
// for write conflicts.
SerializableSnapshot Isolation = iota
// Serializable reads within the same transaction attempt return data
	// from the revision of the first read.
Serializable
// RepeatableReads reads within the same transaction attempt always
// return the same data.
RepeatableReads
// ReadCommitted reads keys from any committed revision.
ReadCommitted
)
// stmError safely passes STM errors through panic to the STM error channel.
type stmError struct{ err error }
type stmOptions struct {
iso Isolation
ctx context.Context
prefetch []string
}
type stmOption func(*stmOptions)
// WithIsolation specifies the transaction isolation level.
func WithIsolation(lvl Isolation) stmOption {
return func(so *stmOptions) { so.iso = lvl }
}
// WithAbortContext specifies the context for permanently aborting the transaction.
func WithAbortContext(ctx context.Context) stmOption {
return func(so *stmOptions) { so.ctx = ctx }
}
// WithPrefetch is a hint to prefetch a list of keys before trying to apply.
// If an STM transaction will unconditionally fetch a set of keys, prefetching
// those keys will save the round-trip cost from requesting each key one by one
// with Get().
func WithPrefetch(keys ...string) stmOption {
return func(so *stmOptions) { so.prefetch = append(so.prefetch, keys...) }
}
// NewSTM initiates a new STM instance, using serializable snapshot isolation by default.
func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {
opts := &stmOptions{ctx: c.Ctx()}
for _, f := range so {
f(opts)
}
if len(opts.prefetch) != 0 {
f := apply
apply = func(s STM) error {
s.Get(opts.prefetch...)
return f(s)
}
}
return runSTM(mkSTM(c, opts), apply)
}
func mkSTM(c *v3.Client, opts *stmOptions) STM {
switch opts.iso {
case SerializableSnapshot:
s := &stmSerializable{
stm: stm{client: c, ctx: opts.ctx},
prefetch: make(map[string]*v3.GetResponse),
}
s.conflicts = func() []v3.Cmp {
return append(s.rset.cmps(), s.wset.cmps(s.rset.first()+1)...)
}
return s
case Serializable:
s := &stmSerializable{
stm: stm{client: c, ctx: opts.ctx},
prefetch: make(map[string]*v3.GetResponse),
}
s.conflicts = func() []v3.Cmp { return s.rset.cmps() }
return s
case RepeatableReads:
s := &stm{client: c, ctx: opts.ctx, getOpts: []v3.OpOption{v3.WithSerializable()}}
s.conflicts = func() []v3.Cmp { return s.rset.cmps() }
return s
case ReadCommitted:
s := &stm{client: c, ctx: opts.ctx, getOpts: []v3.OpOption{v3.WithSerializable()}}
s.conflicts = func() []v3.Cmp { return nil }
return s
default:
panic("unsupported stm")
}
}
type stmResponse struct {
resp *v3.TxnResponse
err error
}
func runSTM(s STM, apply func(STM) error) (*v3.TxnResponse, error) {
outc := make(chan stmResponse, 1)
go func() {
defer func() {
if r := recover(); r != nil {
e, ok := r.(stmError)
if !ok {
// client apply panicked
panic(r)
}
outc <- stmResponse{nil, e.err}
}
}()
var out stmResponse
for {
s.reset()
if out.err = apply(s); out.err != nil {
break
}
if out.resp = s.commit(); out.resp != nil {
break
}
}
outc <- out
}()
r := <-outc
return r.resp, r.err
}
// stm implements repeatable-read software transactional memory over etcd
type stm struct {
client *v3.Client
ctx context.Context
// rset holds read key values and revisions
rset readSet
// wset holds overwritten keys and their values
wset writeSet
// getOpts are the opts used for gets
getOpts []v3.OpOption
// conflicts computes the current conflicts on the txn
conflicts func() []v3.Cmp
}
type stmPut struct {
val string
op v3.Op
}
type readSet map[string]*v3.GetResponse
func (rs readSet) add(keys []string, txnresp *v3.TxnResponse) {
for i, resp := range txnresp.Responses {
rs[keys[i]] = (*v3.GetResponse)(resp.GetResponseRange())
}
}
// first returns the store revision from the first fetch
func (rs readSet) first() int64 {
ret := int64(math.MaxInt64 - 1)
for _, resp := range rs {
if rev := resp.Header.Revision; rev < ret {
ret = rev
}
}
return ret
}
// cmps guards the txn from updates to read set
func (rs readSet) cmps() []v3.Cmp {
cmps := make([]v3.Cmp, 0, len(rs))
for k, rk := range rs {
cmps = append(cmps, isKeyCurrent(k, rk))
}
return cmps
}
type writeSet map[string]stmPut
func (ws writeSet) get(keys ...string) *stmPut {
for _, key := range keys {
if wv, ok := ws[key]; ok {
return &wv
}
}
return nil
}
// cmps returns a cmp list testing no writes have happened past rev
func (ws writeSet) cmps(rev int64) []v3.Cmp {
cmps := make([]v3.Cmp, 0, len(ws))
for key := range ws {
cmps = append(cmps, v3.Compare(v3.ModRevision(key), "<", rev))
}
return cmps
}
// puts is the list of ops for all pending writes
func (ws writeSet) puts() []v3.Op {
puts := make([]v3.Op, 0, len(ws))
for _, v := range ws {
puts = append(puts, v.op)
}
return puts
}
func (s *stm) Get(keys ...string) string {
if wv := s.wset.get(keys...); wv != nil {
return wv.val
}
return respToValue(s.fetch(keys...))
}
func (s *stm) Put(key, val string, opts ...v3.OpOption) {
s.wset[key] = stmPut{val, v3.OpPut(key, val, opts...)}
}
func (s *stm) Del(key string) { s.wset[key] = stmPut{"", v3.OpDelete(key)} }
func (s *stm) Rev(key string) int64 {
if resp := s.fetch(key); resp != nil && len(resp.Kvs) != 0 {
return resp.Kvs[0].ModRevision
}
return 0
}
func (s *stm) commit() *v3.TxnResponse {
txnresp, err := s.client.Txn(s.ctx).If(s.conflicts()...).Then(s.wset.puts()...).Commit()
if err != nil {
panic(stmError{err})
}
if txnresp.Succeeded {
return txnresp
}
return nil
}
func (s *stm) fetch(keys ...string) *v3.GetResponse {
if len(keys) == 0 {
return nil
}
ops := make([]v3.Op, len(keys))
for i, key := range keys {
if resp, ok := s.rset[key]; ok {
return resp
}
ops[i] = v3.OpGet(key, s.getOpts...)
}
txnresp, err := s.client.Txn(s.ctx).Then(ops...).Commit()
if err != nil {
panic(stmError{err})
}
s.rset.add(keys, txnresp)
return (*v3.GetResponse)(txnresp.Responses[0].GetResponseRange())
}
func (s *stm) reset() {
s.rset = make(map[string]*v3.GetResponse)
s.wset = make(map[string]stmPut)
}
type stmSerializable struct {
stm
prefetch map[string]*v3.GetResponse
}
func (s *stmSerializable) Get(keys ...string) string {
if wv := s.wset.get(keys...); wv != nil {
return wv.val
}
firstRead := len(s.rset) == 0
for _, key := range keys {
if resp, ok := s.prefetch[key]; ok {
delete(s.prefetch, key)
s.rset[key] = resp
}
}
resp := s.stm.fetch(keys...)
if firstRead {
// txn's base revision is defined by the first read
s.getOpts = []v3.OpOption{
v3.WithRev(resp.Header.Revision),
v3.WithSerializable(),
}
}
return respToValue(resp)
}
func (s *stmSerializable) Rev(key string) int64 {
s.Get(key)
return s.stm.Rev(key)
}
func (s *stmSerializable) gets() ([]string, []v3.Op) {
keys := make([]string, 0, len(s.rset))
ops := make([]v3.Op, 0, len(s.rset))
for k := range s.rset {
keys = append(keys, k)
ops = append(ops, v3.OpGet(k))
}
return keys, ops
}
func (s *stmSerializable) commit() *v3.TxnResponse {
keys, getops := s.gets()
txn := s.client.Txn(s.ctx).If(s.conflicts()...).Then(s.wset.puts()...)
// use Else to prefetch keys in case of conflict to save a round trip
txnresp, err := txn.Else(getops...).Commit()
if err != nil {
panic(stmError{err})
}
if txnresp.Succeeded {
return txnresp
}
// load prefetch with Else data
s.rset.add(keys, txnresp)
s.prefetch = s.rset
s.getOpts = nil
return nil
}
func isKeyCurrent(k string, r *v3.GetResponse) v3.Cmp {
if len(r.Kvs) != 0 {
return v3.Compare(v3.ModRevision(k), "=", r.Kvs[0].ModRevision)
}
return v3.Compare(v3.ModRevision(k), "=", 0)
}
func respToValue(resp *v3.GetResponse) string {
if resp == nil || len(resp.Kvs) == 0 {
return ""
}
return string(resp.Kvs[0].Value)
}
// NewSTMRepeatable is deprecated.
func NewSTMRepeatable(ctx context.Context, c *v3.Client, apply func(STM) error) (*v3.TxnResponse, error) {
return NewSTM(c, apply, WithAbortContext(ctx), WithIsolation(RepeatableReads))
}
// NewSTMSerializable is deprecated.
func NewSTMSerializable(ctx context.Context, c *v3.Client, apply func(STM) error) (*v3.TxnResponse, error) {
return NewSTM(c, apply, WithAbortContext(ctx), WithIsolation(Serializable))
}
// NewSTMReadCommitted is deprecated.
func NewSTMReadCommitted(ctx context.Context, c *v3.Client, apply func(STM) error) (*v3.TxnResponse, error) {
return NewSTM(c, apply, WithAbortContext(ctx), WithIsolation(ReadCommitted))
}
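// Illustrative usage sketch (not part of the original file): atomically move
// the value stored under src to dst, retrying on conflict. The key names and
// the function name are assumptions; NewSTM defaults to serializable snapshot
// isolation as documented above.
func exampleTransfer(cli *v3.Client, src, dst string) error {
	_, err := NewSTM(cli, func(s STM) error {
		v := s.Get(src) // reads src and records it in the read set
		s.Put(dst, v)   // buffered in the write set until commit
		s.Del(src)
		return nil
	}, WithPrefetch(src))
	return err
}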
/*
dxf_load_distinct.c -- implements DXF support
[loding features into the DB - by distinct layers]
version 4.1, 2013 May 14
Author: Sandro Furieri [email protected]
-----------------------------------------------------------------------------
Version: MPL 1.1/GPL 2.0/LGPL 2.1
The contents of this file are subject to the Mozilla Public License Version
1.1 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.mozilla.org/MPL/
Software distributed under the License is distributed on an "AS IS" basis,
WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
for the specific language governing rights and limitations under the
License.
The Original Code is the SpatiaLite library
The Initial Developer of the Original Code is Alessandro Furieri
Portions created by the Initial Developer are Copyright (C) 2008-2013
the Initial Developer. All Rights Reserved.
Contributor(s):
Alternatively, the contents of this file may be used under the terms of
either the GNU General Public License Version 2 or later (the "GPL"), or
the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
in which case the provisions of the GPL or the LGPL are applicable instead
of those above. If you wish to allow use of your version of this file only
under the terms of either the GPL or the LGPL, and not to allow others to
use your version of this file under the terms of the MPL, indicate your
decision by deleting the provisions above and replace them with the notice
and other provisions required by the GPL or the LGPL. If you do not delete
the provisions above, a recipient may use your version of this file under
the terms of any one of the MPL, the GPL or the LGPL.
*/
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#if defined(_WIN32) && !defined(__MINGW32__)
#include "config-msvc.h"
#else
#include "config.h"
#endif
#include <spatialite/sqlite.h>
#include <spatialite/debug.h>
#include <spatialite/gaiageo.h>
#include <spatialite/gaiaaux.h>
#include <spatialite/gg_dxf.h>
#include <spatialite.h>
#include <spatialite_private.h>
#include "dxf_private.h"
#if defined(_WIN32) && !defined(__MINGW32__)
#define strcasecmp _stricmp
#endif /* not WIN32 */
static int
create_layer_text_table (sqlite3 * handle, const char *name, int srid,
int text3D, sqlite3_stmt ** xstmt)
{
/* attempting to create the "Text-layer" table */
char *sql;
int ret;
sqlite3_stmt *stmt;
char *xname;
*xstmt = NULL;
xname = gaiaDoubleQuotedSql (name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" feature_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" filename TEXT NOTT NULL,\n"
" layer TEXT NOT NULL,\n"
" label TEXT NOT NULL,\n"
" rotation DOUBLE NOT NULL)", xname);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql =
sqlite3_mprintf ("SELECT AddGeometryColumn(%Q, 'geometry', "
"%d, 'POINT', %Q)", name, srid, text3D ? "XYZ" : "XY");
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("ADD GEOMETRY %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql = sqlite3_mprintf ("SELECT CreateSpatialIndex(%Q, 'geometry')", name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE SPATIAL INDEX %s error: %s\n",
name, sqlite3_errmsg (handle));
return 0;
}
if (!create_text_stmt (handle, name, &stmt))
return 0;
*xstmt = stmt;
return 1;
}
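/*
 * For reference (illustrative, derived from the format strings above):
 * with name = "mylayer_text_2d" and srid = 4326 this function issues
 *
 *    CREATE TABLE "mylayer_text_2d" (
 *        feature_id INTEGER PRIMARY KEY AUTOINCREMENT,
 *        filename TEXT NOT NULL,
 *        layer TEXT NOT NULL,
 *        label TEXT NOT NULL,
 *        rotation DOUBLE NOT NULL);
 *    SELECT AddGeometryColumn('mylayer_text_2d', 'geometry', 4326, 'POINT', 'XY');
 *    SELECT CreateSpatialIndex('mylayer_text_2d', 'geometry');
 */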
static int
create_layer_point_table (sqlite3 * handle, const char *name, int srid,
int point3D, sqlite3_stmt ** xstmt)
{
/* attempting to create the "Point-layer" table */
char *sql;
int ret;
sqlite3_stmt *stmt;
char *xname;
*xstmt = NULL;
xname = gaiaDoubleQuotedSql (name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" feature_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" filename TEXT NOTT NULL,\n"
" layer TEXT NOT NULL)", xname);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql =
sqlite3_mprintf ("SELECT AddGeometryColumn(%Q, 'geometry', "
"%d, 'POINT', %Q)", name, srid,
point3D ? "XYZ" : "XY");
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("ADD GEOMETRY %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql = sqlite3_mprintf ("SELECT CreateSpatialIndex(%Q, 'geometry')", name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE SPATIAL INDEX %s error: %s\n",
name, sqlite3_errmsg (handle));
return 0;
}
if (!create_point_stmt (handle, name, &stmt))
return 0;
*xstmt = stmt;
return 1;
}
static int
create_layer_line_table (sqlite3 * handle, const char *name, int srid,
int line3D, sqlite3_stmt ** xstmt)
{
/* attempting to create the "Line-layer" table */
char *sql;
int ret;
sqlite3_stmt *stmt;
char *xname;
*xstmt = NULL;
xname = gaiaDoubleQuotedSql (name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" feature_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" filename TEXT NOTT NULL,\n"
" layer TEXT NOT NULL)", xname);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql =
sqlite3_mprintf ("SELECT AddGeometryColumn(%Q, 'geometry', "
"%d, 'LINESTRING', %Q)", name, srid,
line3D ? "XYZ" : "XY");
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("ADD GEOMETRY %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql = sqlite3_mprintf ("SELECT CreateSpatialIndex(%Q, 'geometry')", name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE SPATIAL INDEX %s error: %s\n",
name, sqlite3_errmsg (handle));
return 0;
}
if (!create_line_stmt (handle, name, &stmt))
return 0;
*xstmt = stmt;
return 1;
}
static int
create_layer_polyg_table (sqlite3 * handle, const char *name, int srid,
int polyg3D, sqlite3_stmt ** xstmt)
{
/* attempting to create the "Polyg-layer" table */
char *sql;
int ret;
sqlite3_stmt *stmt;
char *xname;
*xstmt = NULL;
xname = gaiaDoubleQuotedSql (name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" feature_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" filename TEXT NOTT NULL,\n"
" layer TEXT NOT NULL)", xname);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql =
sqlite3_mprintf ("SELECT AddGeometryColumn(%Q, 'geometry', "
"%d, 'POLYGON', %Q)", name, srid,
polyg3D ? "XYZ" : "XY");
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("ADD GEOMETRY %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql = sqlite3_mprintf ("SELECT CreateSpatialIndex(%Q, 'geometry')", name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE SPATIAL INDEX %s error: %s\n",
name, sqlite3_errmsg (handle));
return 0;
}
if (!create_polyg_stmt (handle, name, &stmt))
return 0;
*xstmt = stmt;
return 1;
}
static int
create_layer_hatch_tables (sqlite3 * handle, const char *name, int srid,
sqlite3_stmt ** xstmt, sqlite3_stmt ** xstmt2)
{
/* attempting to create the "Hatch-layer" tables */
char *sql;
int ret;
sqlite3_stmt *stmt;
sqlite3_stmt *stmt2;
char *xname;
char *fk_name;
char *xfk_name;
char *pattern;
char *xpattern;
*xstmt = NULL;
*xstmt2 = NULL;
/* creating the Hatch-Boundary table */
xname = gaiaDoubleQuotedSql (name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" feature_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" filename TEXT NOTT NULL,\n"
" layer TEXT NOT NULL)", xname);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql =
sqlite3_mprintf ("SELECT AddGeometryColumn(%Q, 'geometry', "
"%d, 'MULTIPOLYGON', 'XY')", name, srid);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("ADD GEOMETRY %s error: %s\n", name,
sqlite3_errmsg (handle));
return 0;
}
sql = sqlite3_mprintf ("SELECT CreateSpatialIndex(%Q, 'geometry')", name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE SPATIAL INDEX %s error: %s\n",
name, sqlite3_errmsg (handle));
return 0;
}
/* creating the Hatch-Pattern table */
xname = gaiaDoubleQuotedSql (name);
pattern = sqlite3_mprintf ("%s_pattern", name);
xpattern = gaiaDoubleQuotedSql (pattern);
fk_name = sqlite3_mprintf ("fk_%s_pattern", name);
xfk_name = gaiaDoubleQuotedSql (fk_name);
sqlite3_free (fk_name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" feature_id INTEGER PRIMARY KEY NOT NULL,\n"
" filename TEXT NOTT NULL,\n"
" layer TEXT NOT NULL,\n"
" CONSTRAINT \"%s\" FOREIGN KEY (feature_id) "
" REFERENCES \"%s\" (feature_id))", xpattern,
xfk_name, xname);
free (xname);
free (xfk_name);
free (xpattern);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", pattern,
sqlite3_errmsg (handle));
return 0;
}
sql =
sqlite3_mprintf ("SELECT AddGeometryColumn(%Q, 'geometry', "
"%d, 'MULTILINESTRING', 'XY')", pattern, srid);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("ADD GEOMETRY %s error: %s\n", pattern,
sqlite3_errmsg (handle));
return 0;
}
sql =
sqlite3_mprintf ("SELECT CreateSpatialIndex(%Q, 'geometry')", pattern);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE SPATIAL INDEX %s error: %s\n",
pattern, sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (pattern);
if (!create_hatch_boundary_stmt (handle, name, &stmt))
return 0;
if (!create_hatch_pattern_stmt (handle, name, &stmt2))
return 0;
*xstmt = stmt;
*xstmt2 = stmt2;
return 1;
}
static int
create_layer_text_extra_attr_table (sqlite3 * handle, const char *name,
char *attr_name, sqlite3_stmt ** xstmt_ext)
{
/* attempting to create the "Text-layer-extra-attr" table */
char *sql;
int ret;
sqlite3_stmt *stmt_ext;
char *xname;
char *xattr_name;
char *fk_name;
char *xfk_name;
char *idx_name;
char *xidx_name;
char *view_name;
char *xview_name;
*xstmt_ext = NULL;
fk_name = sqlite3_mprintf ("fk_%s_attr", name);
xfk_name = gaiaDoubleQuotedSql (fk_name);
xattr_name = gaiaDoubleQuotedSql (attr_name);
xname = gaiaDoubleQuotedSql (name);
sqlite3_free (fk_name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" attr_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" feature_id INTEGER NOT NULL,\n"
" attr_key TEXT NOT NULL,\n"
" attr_value TEXT NOT NULL,\n"
" CONSTRAINT \"%s\" FOREIGN KEY (feature_id) "
"REFERENCES \"%s\" (feature_id))",
xattr_name, xfk_name, xname);
free (xfk_name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", attr_name,
sqlite3_errmsg (handle));
return 0;
}
idx_name = sqlite3_mprintf ("idx_%s_attr", name);
xidx_name = gaiaDoubleQuotedSql (idx_name);
sql =
sqlite3_mprintf
("CREATE INDEX \"%s\" ON \"%s\" (feature_id)", xidx_name, xname);
free (xidx_name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE INDEX %s error: %s\n", idx_name,
sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (idx_name);
view_name = sqlite3_mprintf ("%s_view", name);
xview_name = gaiaDoubleQuotedSql (view_name);
sql = sqlite3_mprintf ("CREATE VIEW \"%s\" AS "
"SELECT f.feature_id AS feature_id, f.layer AS layer, f.label AS label, "
"f.rotation AS rotation, f.geometry AS geometry, "
"a.attr_id AS attr_id, a.attr_key AS attr_key, a.attr_value AS attr_value "
"FROM \"%s\" AS f "
"LEFT JOIN \"%s\" AS a ON (f.feature_id = a.feature_id)",
xview_name, xname, xattr_name);
free (xview_name);
free (xattr_name);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE VIEW %s error: %s\n", view_name,
sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (view_name);
if (!create_extra_stmt (handle, attr_name, &stmt_ext))
return 0;
*xstmt_ext = stmt_ext;
return 1;
}
static int
create_layer_point_extra_attr_table (sqlite3 * handle, const char *name,
char *attr_name, sqlite3_stmt ** xstmt_ext)
{
/* attempting to create the "Point-layer-extra-attr" table */
char *sql;
int ret;
sqlite3_stmt *stmt_ext;
char *xname;
char *xattr_name;
char *fk_name;
char *xfk_name;
char *idx_name;
char *xidx_name;
char *view_name;
char *xview_name;
*xstmt_ext = NULL;
fk_name = sqlite3_mprintf ("fk_%s_attr", name);
xfk_name = gaiaDoubleQuotedSql (fk_name);
xattr_name = gaiaDoubleQuotedSql (attr_name);
xname = gaiaDoubleQuotedSql (name);
sqlite3_free (fk_name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" attr_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" feature_id INTEGER NOT NULL,\n"
" attr_key TEXT NOT NULL,\n"
" attr_value TEXT NOT NULL,\n"
" CONSTRAINT \"%s\" FOREIGN KEY (feature_id) "
"REFERENCES \"%s\" (feature_id))",
xattr_name, xfk_name, xname);
free (xfk_name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", attr_name,
sqlite3_errmsg (handle));
return 0;
}
idx_name = sqlite3_mprintf ("idx_%s_attr", name);
xidx_name = gaiaDoubleQuotedSql (idx_name);
sql =
sqlite3_mprintf
("CREATE INDEX \"%s\" ON \"%s\" (feature_id)", xidx_name, xname);
free (xidx_name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE INDEX %s error: %s\n", idx_name,
sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (idx_name);
view_name = sqlite3_mprintf ("%s_view", name);
xview_name = gaiaDoubleQuotedSql (view_name);
sql = sqlite3_mprintf ("CREATE VIEW \"%s\" AS "
"SELECT f.feature_id AS feature_id, f.layer AS layer, f.geometry AS geometry, "
"a.attr_id AS attr_id, a.attr_key AS attr_key, a.attr_value AS attr_value "
"FROM \"%s\" AS f "
"LEFT JOIN \"%s\" AS a ON (f.feature_id = a.feature_id)",
xview_name, xname, xattr_name);
free (xview_name);
free (xattr_name);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE VIEW %s error: %s\n", view_name,
sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (view_name);
if (!create_extra_stmt (handle, attr_name, &stmt_ext))
return 0;
*xstmt_ext = stmt_ext;
return 1;
}
static int
create_layer_line_extra_attr_table (sqlite3 * handle, const char *name,
char *attr_name, sqlite3_stmt ** xstmt_ext)
{
/* attempting to create the "Line-layer-extra-attr" table */
char *sql;
int ret;
sqlite3_stmt *stmt_ext;
char *xname;
char *xattr_name;
char *fk_name;
char *xfk_name;
char *idx_name;
char *xidx_name;
char *view_name;
char *xview_name;
*xstmt_ext = NULL;
fk_name = sqlite3_mprintf ("fk_%s_attr", name);
xfk_name = gaiaDoubleQuotedSql (fk_name);
xattr_name = gaiaDoubleQuotedSql (attr_name);
xname = gaiaDoubleQuotedSql (name);
sqlite3_free (fk_name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" attr_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" feature_id INTEGER NOT NULL,\n"
" attr_key TEXT NOT NULL,\n"
" attr_value TEXT NOT NULL,\n"
" CONSTRAINT \"%s\" FOREIGN KEY (feature_id) "
"REFERENCES \"%s\" (feature_id))",
xattr_name, xfk_name, xname);
free (xfk_name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", attr_name,
sqlite3_errmsg (handle));
return 0;
}
idx_name = sqlite3_mprintf ("idx_%s_attr", name);
xidx_name = gaiaDoubleQuotedSql (idx_name);
sql =
sqlite3_mprintf
("CREATE INDEX \"%s\" ON \"%s\" (feature_id)", xidx_name, xname);
free (xidx_name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE INDEX %s error: %s\n", idx_name,
sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (idx_name);
view_name = sqlite3_mprintf ("%s_view", name);
xview_name = gaiaDoubleQuotedSql (view_name);
sql = sqlite3_mprintf ("CREATE VIEW \"%s\" AS "
"SELECT f.feature_id AS feature_id, f.layer AS layer, f.geometry AS geometry, "
"a.attr_id AS attr_id, a.attr_key AS attr_key, a.attr_value AS attr_value "
"FROM \"%s\" AS f "
"LEFT JOIN \"%s\" AS a ON (f.feature_id = a.feature_id)",
xview_name, xname, xattr_name);
free (xview_name);
free (xattr_name);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE VIEW %s error: %s\n", view_name,
sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (view_name);
if (!create_extra_stmt (handle, attr_name, &stmt_ext))
return 0;
*xstmt_ext = stmt_ext;
return 1;
}
static int
create_layer_polyg_extra_attr_table (sqlite3 * handle, const char *name,
char *attr_name, sqlite3_stmt ** xstmt_ext)
{
/* attempting to create the "Polyg-layer-extra-attr" table */
char *sql;
int ret;
sqlite3_stmt *stmt_ext;
char *xname;
char *xattr_name;
char *fk_name;
char *xfk_name;
char *idx_name;
char *xidx_name;
char *view_name;
char *xview_name;
*xstmt_ext = NULL;
fk_name = sqlite3_mprintf ("fk_%s_attr", name);
xfk_name = gaiaDoubleQuotedSql (fk_name);
xattr_name = gaiaDoubleQuotedSql (attr_name);
xname = gaiaDoubleQuotedSql (name);
sqlite3_free (fk_name);
sql = sqlite3_mprintf ("CREATE TABLE \"%s\" ("
" attr_id INTEGER PRIMARY KEY AUTOINCREMENT,\n"
" feature_id INTEGER NOT NULL,\n"
" attr_key TEXT NOT NULL,\n"
" attr_value TEXT NOT NULL,\n"
" CONSTRAINT \"%s\" FOREIGN KEY (feature_id) "
"REFERENCES \"%s\" (feature_id))",
xattr_name, xfk_name, xname);
free (xfk_name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE TABLE %s error: %s\n", attr_name,
sqlite3_errmsg (handle));
return 0;
}
idx_name = sqlite3_mprintf ("idx_%s_attr", name);
xidx_name = gaiaDoubleQuotedSql (idx_name);
sql =
sqlite3_mprintf
("CREATE INDEX \"%s\" ON \"%s\" (feature_id)", xidx_name, xname);
free (xidx_name);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE INDEX %s error: %s\n", idx_name,
sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (idx_name);
view_name = sqlite3_mprintf ("%s_view", name);
xview_name = gaiaDoubleQuotedSql (view_name);
sql = sqlite3_mprintf ("CREATE VIEW \"%s\" AS "
"SELECT f.feature_id AS feature_id, f.layer AS layer, f.geometry AS geometry, "
"a.attr_id AS attr_id, a.attr_key AS attr_key, a.attr_value AS attr_value "
"FROM \"%s\" AS f "
"LEFT JOIN \"%s\" AS a ON (f.feature_id = a.feature_id)",
xview_name, xname, xattr_name);
free (xview_name);
free (xattr_name);
free (xname);
ret = sqlite3_exec (handle, sql, NULL, NULL, NULL);
sqlite3_free (sql);
if (ret != SQLITE_OK)
{
spatialite_e ("CREATE VIEW %s error: %s\n", view_name,
sqlite3_errmsg (handle));
return 0;
}
sqlite3_free (view_name);
if (!create_extra_stmt (handle, attr_name, &stmt_ext))
return 0;
*xstmt_ext = stmt_ext;
return 1;
}
DXF_PRIVATE int
import_by_layer (sqlite3 * handle, gaiaDxfParserPtr dxf, int append)
{
/* populating the target DB - by distinct layers */
int ret;
sqlite3_stmt *stmt;
sqlite3_stmt *stmt_ext;
sqlite3_stmt *stmt_pattern;
unsigned char *blob;
int blob_size;
gaiaGeomCollPtr geom;
gaiaLinestringPtr p_ln;
gaiaPolygonPtr p_pg;
gaiaRingPtr p_rng;
int iv;
char *name;
char *attr_name;
char *block;
gaiaDxfTextPtr txt;
gaiaDxfPointPtr pt;
gaiaDxfPolylinePtr ln;
gaiaDxfPolylinePtr pg;
gaiaDxfHatchPtr p_hatch;
gaiaDxfInsertPtr ins;
gaiaDxfLayerPtr lyr = dxf->first_layer;
while (lyr != NULL)
{
/* looping on layers */
int text = 0;
int point = 0;
int line = 0;
int polyg = 0;
int hatch = 0;
int ins_text = 0;
int ins_point = 0;
int ins_line = 0;
int ins_polyg = 0;
int ins_hatch = 0;
if (lyr->first_text != NULL)
text = 1;
if (lyr->first_point != NULL)
point = 1;
if (lyr->first_line != NULL)
line = 1;
if (lyr->first_polyg != NULL)
polyg = 1;
if (lyr->first_hatch != NULL)
hatch = 1;
if (lyr->first_ins_text != NULL)
ins_text = 1;
if (lyr->first_ins_point != NULL)
ins_point = 1;
if (lyr->first_ins_line != NULL)
ins_line = 1;
if (lyr->first_ins_polyg != NULL)
ins_polyg = 1;
if (lyr->first_ins_hatch != NULL)
ins_hatch = 1;
if (text)
{
/* creating and populating the TEXT-layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name =
sqlite3_mprintf ("%s_text_%s", lyr->layer_name,
lyr->is3Dtext ? "3d" : "2d");
else
name =
sqlite3_mprintf ("%s%s_text_%s", dxf->prefix,
lyr->layer_name,
lyr->is3Dtext ? "3d" : "2d");
if (append
&& check_text_table (handle, name, dxf->srid,
lyr->is3Dtext))
{
/* appending into the already existing table */
if (!create_text_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (!create_layer_text_table
(handle, name, dxf->srid, lyr->is3Dtext, &stmt))
{
sqlite3_free (name);
return 0;
}
}
if (lyr->hasExtraText)
{
attr_name = create_extra_attr_table_name (name);
if (append && check_extra_attr_table (handle, attr_name))
{
/* appending into the already existing table */
if (!create_extra_stmt
(handle, attr_name, &stmt_ext))
return 0;
}
else
{
/* creating the Extra Attribute table */
if (!create_layer_text_extra_attr_table
(handle, name, attr_name, &stmt_ext))
{
sqlite3_finalize (stmt);
return 0;
}
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
txt = lyr->first_text;
while (txt != NULL)
{
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
sqlite3_bind_text (stmt, 3, txt->label,
strlen (txt->label), SQLITE_STATIC);
sqlite3_bind_double (stmt, 4, txt->angle);
if (lyr->is3Dtext)
geom = gaiaAllocGeomCollXYZ ();
else
geom = gaiaAllocGeomColl ();
geom->Srid = dxf->srid;
if (lyr->is3Dtext)
gaiaAddPointToGeomCollXYZ (geom, txt->x, txt->y,
txt->z);
else
gaiaAddPointToGeomColl (geom, txt->x, txt->y);
gaiaToSpatiaLiteBlobWkb (geom, &blob, &blob_size);
gaiaFreeGeomColl (geom);
sqlite3_bind_blob (stmt, 5, blob, blob_size, free);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
if (stmt_ext != NULL)
{
/* inserting all Extra Attributes */
sqlite3_int64 feature_id =
sqlite3_last_insert_rowid (handle);
gaiaDxfExtraAttrPtr ext = txt->first;
while (ext != NULL)
{
sqlite3_reset (stmt_ext);
sqlite3_clear_bindings (stmt_ext);
sqlite3_bind_int64 (stmt_ext, 1, feature_id);
sqlite3_bind_text (stmt_ext, 2, ext->key,
strlen (ext->key),
SQLITE_STATIC);
sqlite3_bind_text (stmt_ext, 3, ext->value,
strlen (ext->value),
SQLITE_STATIC);
ret = sqlite3_step (stmt_ext);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n",
attr_name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
ret =
sqlite3_exec (handle, "ROLLBACK",
NULL, NULL, NULL);
return 0;
}
ext = ext->next;
}
}
txt = txt->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (point)
{
/* creating and populating the POINT-layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name =
sqlite3_mprintf ("%s_point_%s", lyr->layer_name,
lyr->is3Dpoint ? "3d" : "2d");
else
name =
sqlite3_mprintf ("%s%s_point_%s", dxf->prefix,
lyr->layer_name,
lyr->is3Dpoint ? "3d" : "2d");
if (append
&& check_point_table (handle, name, dxf->srid,
lyr->is3Dpoint))
{
/* appending into the already existing table */
if (!create_point_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (!create_layer_point_table
(handle, name, dxf->srid, lyr->is3Dpoint, &stmt))
{
sqlite3_free (name);
return 0;
}
}
if (lyr->hasExtraPoint)
{
attr_name = create_extra_attr_table_name (name);
if (append && check_extra_attr_table (handle, attr_name))
{
/* appending into the already existing table */
if (!create_extra_stmt
(handle, attr_name, &stmt_ext))
return 0;
}
else
{
/* creating the Extra Attribute table */
if (!create_layer_point_extra_attr_table
(handle, name, attr_name, &stmt_ext))
{
sqlite3_finalize (stmt);
return 0;
}
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
pt = lyr->first_point;
while (pt != NULL)
{
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
if (lyr->is3Dpoint)
geom = gaiaAllocGeomCollXYZ ();
else
geom = gaiaAllocGeomColl ();
geom->Srid = dxf->srid;
if (lyr->is3Dpoint)
gaiaAddPointToGeomCollXYZ (geom, pt->x, pt->y, pt->z);
else
gaiaAddPointToGeomColl (geom, pt->x, pt->y);
gaiaToSpatiaLiteBlobWkb (geom, &blob, &blob_size);
gaiaFreeGeomColl (geom);
sqlite3_bind_blob (stmt, 3, blob, blob_size, free);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
if (stmt_ext != NULL)
{
/* inserting all Extra Attributes */
sqlite3_int64 feature_id =
sqlite3_last_insert_rowid (handle);
gaiaDxfExtraAttrPtr ext = pt->first;
while (ext != NULL)
{
sqlite3_reset (stmt_ext);
sqlite3_clear_bindings (stmt_ext);
sqlite3_bind_int64 (stmt_ext, 1, feature_id);
sqlite3_bind_text (stmt_ext, 2, ext->key,
strlen (ext->key),
SQLITE_STATIC);
sqlite3_bind_text (stmt_ext, 3, ext->value,
strlen (ext->value),
SQLITE_STATIC);
ret = sqlite3_step (stmt_ext);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n",
attr_name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
ret =
sqlite3_exec (handle, "ROLLBACK",
NULL, NULL, NULL);
return 0;
}
ext = ext->next;
}
}
pt = pt->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (line)
{
/* creating and populating the LINE-layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name =
sqlite3_mprintf ("%s_line_%s", lyr->layer_name,
lyr->is3Dline ? "3d" : "2d");
else
name =
sqlite3_mprintf ("%s%s_line_%s", dxf->prefix,
lyr->layer_name,
lyr->is3Dline ? "3d" : "2d");
if (append
&& check_line_table (handle, name, dxf->srid,
lyr->is3Dline))
{
/* appending into the already existing table */
if (!create_line_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (!create_layer_line_table
(handle, name, dxf->srid, lyr->is3Dline, &stmt))
{
sqlite3_free (name);
return 0;
}
}
if (lyr->hasExtraLine)
{
attr_name = create_extra_attr_table_name (name);
if (append && check_extra_attr_table (handle, attr_name))
{
/* appending into the already existing table */
if (!create_extra_stmt
(handle, attr_name, &stmt_ext))
return 0;
}
else
{
/* creating the Extra Attribute table */
if (!create_layer_line_extra_attr_table
(handle, name, attr_name, &stmt_ext))
{
sqlite3_finalize (stmt);
return 0;
}
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
ln = lyr->first_line;
while (ln != NULL)
{
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
if (lyr->is3Dline)
geom = gaiaAllocGeomCollXYZ ();
else
geom = gaiaAllocGeomColl ();
geom->Srid = dxf->srid;
gaiaAddLinestringToGeomColl (geom, ln->points);
p_ln = geom->FirstLinestring;
for (iv = 0; iv < ln->points; iv++)
{
if (lyr->is3Dline)
{
gaiaSetPointXYZ (p_ln->Coords, iv,
*(ln->x + iv), *(ln->y + iv),
*(ln->z + iv));
}
else
{
gaiaSetPoint (p_ln->Coords, iv, *(ln->x + iv),
*(ln->y + iv));
}
}
gaiaToSpatiaLiteBlobWkb (geom, &blob, &blob_size);
gaiaFreeGeomColl (geom);
sqlite3_bind_blob (stmt, 3, blob, blob_size, free);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
if (stmt_ext != NULL)
{
/* inserting all Extra Attributes */
sqlite3_int64 feature_id =
sqlite3_last_insert_rowid (handle);
gaiaDxfExtraAttrPtr ext = ln->first;
while (ext != NULL)
{
sqlite3_reset (stmt_ext);
sqlite3_clear_bindings (stmt_ext);
sqlite3_bind_int64 (stmt_ext, 1, feature_id);
sqlite3_bind_text (stmt_ext, 2, ext->key,
strlen (ext->key),
SQLITE_STATIC);
sqlite3_bind_text (stmt_ext, 3, ext->value,
strlen (ext->value),
SQLITE_STATIC);
ret = sqlite3_step (stmt_ext);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n",
attr_name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
ret =
sqlite3_exec (handle, "ROLLBACK",
NULL, NULL, NULL);
return 0;
}
ext = ext->next;
}
}
ln = ln->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (polyg)
{
/* creating and populating the POLYG-layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name =
sqlite3_mprintf ("%s_polyg_%s", lyr->layer_name,
lyr->is3Dpolyg ? "3d" : "2d");
else
name =
sqlite3_mprintf ("%s%s_polyg_%s", dxf->prefix,
lyr->layer_name,
lyr->is3Dpolyg ? "3d" : "2d");
if (append
&& check_polyg_table (handle, name, dxf->srid,
lyr->is3Dpolyg))
{
/* appending into the already existing table */
if (!create_polyg_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (!create_layer_polyg_table
(handle, name, dxf->srid, lyr->is3Dpolyg, &stmt))
{
sqlite3_free (name);
return 0;
}
}
if (lyr->hasExtraPolyg)
{
attr_name = create_extra_attr_table_name (name);
if (append && check_extra_attr_table (handle, attr_name))
{
/* appending into the already existing table */
if (!create_extra_stmt
(handle, attr_name, &stmt_ext))
return 0;
}
else
{
/* creating the Extra Attribute table */
if (!create_layer_polyg_extra_attr_table
(handle, name, attr_name, &stmt_ext))
{
sqlite3_finalize (stmt);
return 0;
}
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
pg = lyr->first_polyg;
while (pg != NULL)
{
int num_holes;
gaiaDxfHolePtr hole;
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
if (lyr->is3Dpolyg)
geom = gaiaAllocGeomCollXYZ ();
else
geom = gaiaAllocGeomColl ();
geom->Srid = dxf->srid;
num_holes = 0;
hole = pg->first_hole;
while (hole != NULL)
{
num_holes++;
hole = hole->next;
}
gaiaAddPolygonToGeomColl (geom, pg->points, num_holes);
p_pg = geom->FirstPolygon;
p_rng = p_pg->Exterior;
for (iv = 0; iv < pg->points; iv++)
{
if (lyr->is3Dpolyg)
{
gaiaSetPointXYZ (p_rng->Coords, iv,
*(pg->x + iv), *(pg->y + iv),
*(pg->z + iv));
}
else
{
gaiaSetPoint (p_rng->Coords, iv,
*(pg->x + iv), *(pg->y + iv));
}
}
num_holes = 0;
hole = pg->first_hole;
while (hole != NULL)
{
p_rng =
gaiaAddInteriorRing (p_pg, num_holes,
hole->points);
for (iv = 0; iv < hole->points; iv++)
{
if (lyr->is3Dpolyg)
{
gaiaSetPointXYZ (p_rng->Coords, iv,
*(hole->x + iv),
*(hole->y + iv),
*(hole->z + iv));
}
else
{
gaiaSetPoint (p_rng->Coords, iv,
*(hole->x + iv),
*(hole->y + iv));
}
}
num_holes++;
hole = hole->next;
}
gaiaToSpatiaLiteBlobWkb (geom, &blob, &blob_size);
gaiaFreeGeomColl (geom);
sqlite3_bind_blob (stmt, 3, blob, blob_size, free);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
if (stmt_ext != NULL)
{
/* inserting all Extra Attributes */
sqlite3_int64 feature_id =
sqlite3_last_insert_rowid (handle);
gaiaDxfExtraAttrPtr ext = pg->first;
while (ext != NULL)
{
sqlite3_reset (stmt_ext);
sqlite3_clear_bindings (stmt_ext);
sqlite3_bind_int64 (stmt_ext, 1, feature_id);
sqlite3_bind_text (stmt_ext, 2, ext->key,
strlen (ext->key),
SQLITE_STATIC);
sqlite3_bind_text (stmt_ext, 3, ext->value,
strlen (ext->value),
SQLITE_STATIC);
ret = sqlite3_step (stmt_ext);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n",
attr_name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
ret =
sqlite3_exec (handle, "ROLLBACK",
NULL, NULL, NULL);
return 0;
}
ext = ext->next;
}
}
pg = pg->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (hatch)
{
	  /* creating and populating the HATCH-layer */
	  stmt_ext = NULL;
	  attr_name = NULL;
if (dxf->prefix == NULL)
name = sqlite3_mprintf ("%s_hatch_2d", lyr->layer_name);
else
name =
sqlite3_mprintf ("%s%s_hatch_2d", dxf->prefix,
lyr->layer_name);
if (append && check_hatch_tables (handle, name, dxf->srid))
{
/* appending into the already existing table */
if (!create_hatch_boundary_stmt (handle, name, &stmt))
return 0;
if (!create_hatch_pattern_stmt
(handle, name, &stmt_pattern))
return 0;
}
else
{
/* creating a new table */
if (!create_layer_hatch_tables
(handle, name, dxf->srid, &stmt, &stmt_pattern))
{
sqlite3_free (name);
return 0;
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
p_hatch = lyr->first_hatch;
while (p_hatch != NULL)
{
sqlite3_int64 feature_id;
gaiaDxfHatchSegmPtr segm;
/* inserting the Boundary Geometry */
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
		  if (p_hatch->boundary == NULL)
		      /* the Geometry is parameter #3 here; binding NULL at #2
		         would clobber the layer name bound just above */
		      sqlite3_bind_null (stmt, 3);
else
{
gaiaToSpatiaLiteBlobWkb (p_hatch->boundary, &blob,
&blob_size);
sqlite3_bind_blob (stmt, 3, blob, blob_size, free);
}
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_pattern);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
feature_id = sqlite3_last_insert_rowid (handle);
/* inserting the Pattern Geometry */
sqlite3_reset (stmt_pattern);
sqlite3_clear_bindings (stmt_pattern);
sqlite3_bind_int64 (stmt_pattern, 1, feature_id);
sqlite3_bind_text (stmt_pattern, 2, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt_pattern, 3, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
if (p_hatch->first_out == NULL)
sqlite3_bind_null (stmt_pattern, 4);
else
{
geom = gaiaAllocGeomColl ();
geom->Srid = dxf->srid;
geom->DeclaredType = GAIA_MULTILINESTRING;
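		      /* each pattern segment below becomes a two-point
		         Linestring, so the whole hatch pattern is stored
		         as a single MULTILINESTRING geometry */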
segm = p_hatch->first_out;
while (segm != NULL)
{
gaiaLinestringPtr p_ln =
gaiaAddLinestringToGeomColl (geom, 2);
gaiaSetPoint (p_ln->Coords, 0, segm->x0,
segm->y0);
gaiaSetPoint (p_ln->Coords, 1, segm->x1,
segm->y1);
segm = segm->next;
}
gaiaToSpatiaLiteBlobWkb (geom, &blob, &blob_size);
gaiaFreeGeomColl (geom);
sqlite3_bind_blob (stmt_pattern, 4, blob, blob_size,
free);
}
ret = sqlite3_step (stmt_pattern);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_pattern);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
p_hatch = p_hatch->next;
}
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_pattern);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (ins_text)
{
/* creating and populating the INSERT (Text reference) layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name =
sqlite3_mprintf ("%s_instext_%s", lyr->layer_name,
lyr->is3DinsText ? "3d" : "2d");
else
name =
sqlite3_mprintf ("%s%s_instext_%s", dxf->prefix,
lyr->layer_name,
lyr->is3DinsText ? "3d" : "2d");
if (append && check_insert_table (handle, name))
{
/* appending into the already existing table */
if (!create_insert_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (dxf->prefix == NULL)
block =
sqlite3_mprintf ("block_text_%s",
lyr->is3DinsText ? "3d" : "2d");
else
block =
sqlite3_mprintf ("%sblock_text_%s", dxf->prefix,
lyr->is3DinsText ? "3d" : "2d");
if (!create_instext_table
(handle, name, block, lyr->is3Dtext, &stmt))
{
sqlite3_free (name);
return 0;
}
sqlite3_free (block);
}
if (lyr->hasExtraInsText)
{
attr_name = create_extra_attr_table_name (name);
if (append && check_extra_attr_table (handle, attr_name))
{
/* appending into the already existing table */
if (!create_extra_stmt
(handle, attr_name, &stmt_ext))
return 0;
}
else
{
/* creating the Extra Attribute table */
if (!create_insert_extra_attr_table
(handle, name, attr_name, &stmt_ext))
{
sqlite3_finalize (stmt);
return 0;
}
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
ins = lyr->first_ins_text;
while (ins != NULL)
{
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
sqlite3_bind_text (stmt, 3, ins->block_id,
strlen (ins->block_id), SQLITE_STATIC);
sqlite3_bind_double (stmt, 4, ins->x);
sqlite3_bind_double (stmt, 5, ins->y);
sqlite3_bind_double (stmt, 6, ins->z);
sqlite3_bind_double (stmt, 7, ins->scale_x);
sqlite3_bind_double (stmt, 8, ins->scale_y);
sqlite3_bind_double (stmt, 9, ins->scale_z);
sqlite3_bind_double (stmt, 10, ins->angle);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
if (stmt_ext != NULL)
{
/* inserting all Extra Attributes */
sqlite3_int64 feature_id =
sqlite3_last_insert_rowid (handle);
		    /* was "txt->first": the Extra Attributes belong to this
		       Insert (cf. pt->first / ln->first / pg->first above) */
		    gaiaDxfExtraAttrPtr ext = ins->first;
while (ext != NULL)
{
sqlite3_reset (stmt_ext);
sqlite3_clear_bindings (stmt_ext);
sqlite3_bind_int64 (stmt_ext, 1, feature_id);
sqlite3_bind_text (stmt_ext, 2, ext->key,
strlen (ext->key),
SQLITE_STATIC);
sqlite3_bind_text (stmt_ext, 3, ext->value,
strlen (ext->value),
SQLITE_STATIC);
ret = sqlite3_step (stmt_ext);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n",
attr_name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
ret =
sqlite3_exec (handle, "ROLLBACK",
NULL, NULL, NULL);
return 0;
}
ext = ext->next;
}
}
ins = ins->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (ins_point)
{
/* creating and populating the INSERT (Point reference) layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name =
sqlite3_mprintf ("%s_inspoint_%s", lyr->layer_name,
lyr->is3DinsPoint ? "3d" : "2d");
else
name =
sqlite3_mprintf ("%s%s_inspoint_%s", dxf->prefix,
lyr->layer_name,
lyr->is3DinsPoint ? "3d" : "2d");
if (append && check_insert_table (handle, name))
{
/* appending into the already existing table */
if (!create_insert_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (dxf->prefix == NULL)
block =
sqlite3_mprintf ("block_point_%s",
lyr->is3DinsPoint ? "3d" : "2d");
else
block =
sqlite3_mprintf ("%sblock_point_%s", dxf->prefix,
lyr->is3DinsPoint ? "3d" : "2d");
if (!create_inspoint_table
(handle, name, block, lyr->is3Dpoint, &stmt))
{
sqlite3_free (name);
return 0;
}
sqlite3_free (block);
}
if (lyr->hasExtraInsPoint)
{
attr_name = create_extra_attr_table_name (name);
if (append && check_extra_attr_table (handle, attr_name))
{
/* appending into the already existing table */
if (!create_extra_stmt
(handle, attr_name, &stmt_ext))
return 0;
}
else
{
/* creating the Extra Attribute table */
if (!create_insert_extra_attr_table
(handle, name, attr_name, &stmt_ext))
{
sqlite3_finalize (stmt);
return 0;
}
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
ins = lyr->first_ins_point;
while (ins != NULL)
{
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
sqlite3_bind_text (stmt, 3, ins->block_id,
strlen (ins->block_id), SQLITE_STATIC);
sqlite3_bind_double (stmt, 4, ins->x);
sqlite3_bind_double (stmt, 5, ins->y);
sqlite3_bind_double (stmt, 6, ins->z);
sqlite3_bind_double (stmt, 7, ins->scale_x);
sqlite3_bind_double (stmt, 8, ins->scale_y);
sqlite3_bind_double (stmt, 9, ins->scale_z);
sqlite3_bind_double (stmt, 10, ins->angle);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
if (stmt_ext != NULL)
{
/* inserting all Extra Attributes */
sqlite3_int64 feature_id =
sqlite3_last_insert_rowid (handle);
		    /* was "txt->first": copy/paste slip, see note above */
		    gaiaDxfExtraAttrPtr ext = ins->first;
while (ext != NULL)
{
sqlite3_reset (stmt_ext);
sqlite3_clear_bindings (stmt_ext);
sqlite3_bind_int64 (stmt_ext, 1, feature_id);
sqlite3_bind_text (stmt_ext, 2, ext->key,
strlen (ext->key),
SQLITE_STATIC);
sqlite3_bind_text (stmt_ext, 3, ext->value,
strlen (ext->value),
SQLITE_STATIC);
ret = sqlite3_step (stmt_ext);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n",
attr_name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
ret =
sqlite3_exec (handle, "ROLLBACK",
NULL, NULL, NULL);
return 0;
}
ext = ext->next;
}
}
ins = ins->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (ins_line)
{
/* creating and populating the INSERT (Line reference) layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name =
sqlite3_mprintf ("%s_insline_%s", lyr->layer_name,
lyr->is3DinsLine ? "3d" : "2d");
else
name =
sqlite3_mprintf ("%s%s_insline_%s", dxf->prefix,
lyr->layer_name,
lyr->is3DinsLine ? "3d" : "2d");
if (append && check_insert_table (handle, name))
{
/* appending into the already existing table */
if (!create_insert_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (dxf->prefix == NULL)
block =
sqlite3_mprintf ("block_line_%s",
lyr->is3DinsLine ? "3d" : "2d");
else
block =
sqlite3_mprintf ("%sblock_line_%s", dxf->prefix,
lyr->is3DinsLine ? "3d" : "2d");
if (!create_insline_table
(handle, name, block, lyr->is3Dline, &stmt))
{
sqlite3_free (name);
return 0;
}
sqlite3_free (block);
}
if (lyr->hasExtraInsLine)
{
attr_name = create_extra_attr_table_name (name);
if (append && check_extra_attr_table (handle, attr_name))
{
/* appending into the already existing table */
if (!create_extra_stmt
(handle, attr_name, &stmt_ext))
return 0;
}
else
{
/* creating the Extra Attribute table */
if (!create_insert_extra_attr_table
(handle, name, attr_name, &stmt_ext))
{
sqlite3_finalize (stmt);
return 0;
}
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
ins = lyr->first_ins_line;
while (ins != NULL)
{
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
sqlite3_bind_text (stmt, 3, ins->block_id,
strlen (ins->block_id), SQLITE_STATIC);
sqlite3_bind_double (stmt, 4, ins->x);
sqlite3_bind_double (stmt, 5, ins->y);
sqlite3_bind_double (stmt, 6, ins->z);
sqlite3_bind_double (stmt, 7, ins->scale_x);
sqlite3_bind_double (stmt, 8, ins->scale_y);
sqlite3_bind_double (stmt, 9, ins->scale_z);
sqlite3_bind_double (stmt, 10, ins->angle);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
if (stmt_ext != NULL)
{
/* inserting all Extra Attributes */
sqlite3_int64 feature_id =
sqlite3_last_insert_rowid (handle);
		    /* was "txt->first": copy/paste slip, see note above */
		    gaiaDxfExtraAttrPtr ext = ins->first;
while (ext != NULL)
{
sqlite3_reset (stmt_ext);
sqlite3_clear_bindings (stmt_ext);
sqlite3_bind_int64 (stmt_ext, 1, feature_id);
sqlite3_bind_text (stmt_ext, 2, ext->key,
strlen (ext->key),
SQLITE_STATIC);
sqlite3_bind_text (stmt_ext, 3, ext->value,
strlen (ext->value),
SQLITE_STATIC);
ret = sqlite3_step (stmt_ext);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n",
attr_name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
ret =
sqlite3_exec (handle, "ROLLBACK",
NULL, NULL, NULL);
return 0;
}
ext = ext->next;
}
}
ins = ins->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (ins_polyg)
{
/* creating and populating the INSERT (Polygon reference) layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name =
sqlite3_mprintf ("%s_inspolyg_%s", lyr->layer_name,
lyr->is3DinsPolyg ? "3d" : "2d");
else
name =
sqlite3_mprintf ("%s%s_inspolyg_%s", dxf->prefix,
lyr->layer_name,
lyr->is3DinsPolyg ? "3d" : "2d");
if (append && check_insert_table (handle, name))
{
/* appending into the already existing table */
if (!create_insert_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (dxf->prefix == NULL)
block =
sqlite3_mprintf ("block_polyg_%s",
lyr->is3DinsPolyg ? "3d" : "2d");
else
block =
sqlite3_mprintf ("%sblock_polyg_%s", dxf->prefix,
lyr->is3DinsPolyg ? "3d" : "2d");
if (!create_inspolyg_table
(handle, name, block, lyr->is3Dpolyg, &stmt))
{
sqlite3_free (name);
return 0;
}
sqlite3_free (block);
}
if (lyr->hasExtraInsPolyg)
{
attr_name = create_extra_attr_table_name (name);
if (append && check_extra_attr_table (handle, attr_name))
{
/* appending into the already existing table */
if (!create_extra_stmt
(handle, attr_name, &stmt_ext))
return 0;
}
else
{
/* creating the Extra Attribute table */
if (!create_insert_extra_attr_table
(handle, name, attr_name, &stmt_ext))
{
sqlite3_finalize (stmt);
return 0;
}
}
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
ins = lyr->first_ins_polyg;
while (ins != NULL)
{
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
sqlite3_bind_text (stmt, 3, ins->block_id,
strlen (ins->block_id), SQLITE_STATIC);
sqlite3_bind_double (stmt, 4, ins->x);
sqlite3_bind_double (stmt, 5, ins->y);
sqlite3_bind_double (stmt, 6, ins->z);
sqlite3_bind_double (stmt, 7, ins->scale_x);
sqlite3_bind_double (stmt, 8, ins->scale_y);
sqlite3_bind_double (stmt, 9, ins->scale_z);
sqlite3_bind_double (stmt, 10, ins->angle);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
if (stmt_ext != NULL)
{
/* inserting all Extra Attributes */
sqlite3_int64 feature_id =
sqlite3_last_insert_rowid (handle);
		    /* was "txt->first": copy/paste slip, see note above */
		    gaiaDxfExtraAttrPtr ext = ins->first;
while (ext != NULL)
{
sqlite3_reset (stmt_ext);
sqlite3_clear_bindings (stmt_ext);
sqlite3_bind_int64 (stmt_ext, 1, feature_id);
sqlite3_bind_text (stmt_ext, 2, ext->key,
strlen (ext->key),
SQLITE_STATIC);
sqlite3_bind_text (stmt_ext, 3, ext->value,
strlen (ext->value),
SQLITE_STATIC);
ret = sqlite3_step (stmt_ext);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n",
attr_name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_finalize (stmt_ext);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
ret =
sqlite3_exec (handle, "ROLLBACK",
NULL, NULL, NULL);
return 0;
}
ext = ext->next;
}
}
ins = ins->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
if (ins_hatch)
{
/* creating and populating the INSERT (Hatch reference) layer */
stmt_ext = NULL;
attr_name = NULL;
if (dxf->prefix == NULL)
name = sqlite3_mprintf ("%s_inshatch_2d", lyr->layer_name);
else
name =
sqlite3_mprintf ("%s%s_inspolyg_2d", dxf->prefix,
lyr->layer_name);
if (append && check_insert_table (handle, name))
{
/* appending into the already existing table */
if (!create_insert_stmt (handle, name, &stmt))
return 0;
}
else
{
/* creating a new table */
if (dxf->prefix == NULL)
block = sqlite3_mprintf ("block_hatch_2d");
else
block =
sqlite3_mprintf ("%sblock_polyg_2d", dxf->prefix);
if (!create_inshatch_table (handle, name, block, &stmt))
{
sqlite3_free (name);
return 0;
}
sqlite3_free (block);
}
ret = sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("BEGIN %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
	  ins = lyr->first_ins_hatch;	/* was first_ins_polyg: copy/paste slip */
while (ins != NULL)
{
sqlite3_reset (stmt);
sqlite3_clear_bindings (stmt);
sqlite3_bind_text (stmt, 1, dxf->filename,
strlen (dxf->filename), SQLITE_STATIC);
sqlite3_bind_text (stmt, 2, lyr->layer_name,
strlen (lyr->layer_name),
SQLITE_STATIC);
sqlite3_bind_text (stmt, 3, ins->block_id,
strlen (ins->block_id), SQLITE_STATIC);
sqlite3_bind_double (stmt, 4, ins->x);
sqlite3_bind_double (stmt, 5, ins->y);
sqlite3_bind_double (stmt, 6, ins->z);
sqlite3_bind_double (stmt, 7, ins->scale_x);
sqlite3_bind_double (stmt, 8, ins->scale_y);
sqlite3_bind_double (stmt, 9, ins->scale_z);
sqlite3_bind_double (stmt, 10, ins->angle);
ret = sqlite3_step (stmt);
if (ret == SQLITE_DONE || ret == SQLITE_ROW)
;
else
{
spatialite_e ("INSERT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_finalize (stmt);
ret =
sqlite3_exec (handle, "ROLLBACK", NULL, NULL,
NULL);
sqlite3_free (name);
return 0;
}
ins = ins->next;
}
sqlite3_finalize (stmt);
if (stmt_ext != NULL)
sqlite3_finalize (stmt_ext);
ret = sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
if (ret != SQLITE_OK)
{
spatialite_e ("COMMIT %s error: %s\n", name,
sqlite3_errmsg (handle));
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
return 0;
}
sqlite3_free (name);
if (attr_name)
sqlite3_free (attr_name);
}
lyr = lyr->next;
}
return 1;
}
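/*
 * Note: every layer branch above repeats the same skeleton (a sketch only;
 * "my_stmt" and "item" are placeholders, not part of the original source):
 *
 *     sqlite3_exec (handle, "BEGIN", NULL, NULL, NULL);
 *     for (item = first; item != NULL; item = item->next)
 *       {
 *           sqlite3_reset (my_stmt);
 *           sqlite3_clear_bindings (my_stmt);
 *           ... bind filename, layer name and geometry BLOB ...
 *           if (sqlite3_step (my_stmt) != SQLITE_DONE)
 *             {
 *                 sqlite3_exec (handle, "ROLLBACK", NULL, NULL, NULL);
 *                 return 0;
 *             }
 *       }
 *     sqlite3_finalize (my_stmt);
 *     sqlite3_exec (handle, "COMMIT", NULL, NULL, NULL);
 *
 * i.e. each table is populated inside a single transaction, and any failed
 * INSERT rolls the whole table back.
 */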
<?php
/**
* CurrencyTransformer.php
* Copyright (c) 2019 [email protected]
*
* This file is part of Firefly III (https://github.com/firefly-iii).
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <https://www.gnu.org/licenses/>.
*/
declare(strict_types=1);
namespace FireflyIII\Transformers;
use FireflyIII\Models\TransactionCurrency;
use Log;
/**
* Class CurrencyTransformer
*/
class CurrencyTransformer extends AbstractTransformer
{
    /**
     * CurrencyTransformer constructor.
     *
     * @codeCoverageIgnore
     */
    public function __construct()
    {
        if ('testing' === config('app.env')) {
            Log::warning(sprintf('%s should not be instantiated in the TEST environment!', get_class($this)));
        }
    }

    /**
     * Transform the currency.
     *
     * @param TransactionCurrency $currency
     *
     * @return array
     */
    public function transform(TransactionCurrency $currency): array
    {
        $isDefault       = false;
        $defaultCurrency = $this->parameters->get('defaultCurrency');
        if (null !== $defaultCurrency) {
            $isDefault = (int) $defaultCurrency->id === (int) $currency->id;
        }
        $data = [
            'id'             => (int) $currency->id,
            'created_at'     => $currency->created_at->toAtomString(),
            'updated_at'     => $currency->updated_at->toAtomString(),
            'default'        => $isDefault,
            'enabled'        => $currency->enabled,
            'name'           => $currency->name,
            'code'           => $currency->code,
            'symbol'         => $currency->symbol,
            'decimal_places' => (int) $currency->decimal_places,
            'links'          => [
                [
                    'rel' => 'self',
                    'uri' => '/currencies/' . $currency->id,
                ],
            ],
        ];

        return $data;
    }
}
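// A usage sketch (hedged: assumes the parent AbstractTransformer exposes a
// setter for the ParameterBag read via $this->parameters above; the setter
// name below is hypothetical):
//
//   $transformer = new CurrencyTransformer();
//   $transformer->setParameters($parameters); // hypothetical
//   $payload = $transformer->transform($currency);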
; RUN: llc < %s -mtriple=powerpc-unknown-linux-gnu -mcpu=g5 | FileCheck %s
define void @test(float %F, i8* %P) {
%I = fptosi float %F to i32
%X = trunc i32 %I to i8
store i8 %X, i8* %P
ret void
; CHECK: fctiwz 0, 1
; CHECK: stfiwx 0, 0, 4
; CHECK: lwz 4, 12(1)
; CHECK: stb 4, 0(3)
; CHECK: blr
}
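; The CHECK lines assert the expected PowerPC lowering: fctiwz performs the
; float-to-int conversion in an FPR, stfiwx/lwz round-trip the result through
; a stack slot into a GPR, and stb stores the truncated low byte.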
## DESCRIPTION
## Calculus
## ENDDESCRIPTION
## Tagged by tda2d
## DBsubject(Differential equations)
## DBchapter(Higher order differential equations)
## DBsection(Undetermined coefficients)
## Institution(Rochester)
## MLT(undet_04)
## Level(3)
## KEYWORDS('differential equation' 'second order' 'linear' 'nonhomogeneous')
DOCUMENT() ;
loadMacros(
"PGstandard.pl",
"PGchoicemacros.pl",
"PGcourse.pl"
);
do {
$B = random(2,4,1) ;
$C = random(-7,-3,1) ;
$r = random(2,4,1) ;
}
until ($r*$r + $B*$r + $C != 0);
$q0 = random(-9,9,1) ;
$q1 = random(-9,9,1) ;
$q2 = random(-9,9,1) ;
$c = ($q2)/(($r)*($r)+$C+$B*$r) ;
$b = ($q1-$c*(4*$r+2*$B))/(($r)*($r)+$C+$B*$r) ;
$a = ($q0-$b*(2*$r+$B)-2*$c)/(($r)*($r)+$C+$B*$r) ;
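# Derivation (a sketch of why these formulas work): try a particular
# solution y = (c t^2 + b t + a) e^{r t}. Substituting into y'' + B y' + C y
# gives
#   e^{r t} [ (r^2 + B r + C)(c t^2 + b t + a) + (2 r + B)(2 c t + b) + 2 c ].
# Matching coefficients against (q2 t^2 + q1 t + q0) e^{r t}:
#   t^2: c (r^2 + B r + C) = q2
#   t^1: b (r^2 + B r + C) + 2 c (2 r + B) = q1
#   t^0: a (r^2 + B r + C) + b (2 r + B) + 2 c = q0
# which is exactly what the three assignments above solve for; the do/until
# loop guarantees r^2 + B r + C != 0, so the divisions are safe.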
$S = "($c)*t**2 + ($b)*t + $a";
TEXT(beginproblem()) ;
$showPartialCorrectAnswers = 1 ;
BEGIN_TEXT
Use the method of undetermined coefficients to find
one solution of $BR
\( y'' + $B\,y' + $C\,y = ($q2 \,t^2 + $q1 \, t + $q0 )\, e^{$r t} \).
$BR
Note that the method finds a specific solution, not the general one.
$BR
\(y = \) \{ans_rule(80)\}
END_TEXT
$ans = "($S) *exp($r *t) " ;
ANS(fun_cmp($ans, vars=>"t")) ;
ENDDOCUMENT() ;
##################################################
my $XML_INFORMATION = <<'END_OF_XML_TRAILER_INFO';
<?xml version="1.0"?>
<metaPGdata>
<author>David Prill</author>
<course>MTH163</course>
<description>Differential equations
y'' + $B y' + $C y = Q exp($rho t);
where Q is a polynomial of degree at most 2.
x^2 + $B x + $C = 0 has Gaussian integer roots.</description>
<fullPath>setDESOLinear/19.pg</fullPath>
<institution>University of Rochester</institution>
<keywords>Differential Equation, Inhomogeneous,
Undetermined coefficients,
second order linear, constant coefficients,
</keywords>
<libraryPath>setDESOLinear/19.pg</libraryPath>
<libraryURL>http://webhost.math.rochester.edu/mth163lib/discuss/msgReader$408</libraryURL>
<modified><dateTime.iso8601>20000718T13:14:55</dateTime.iso8601></modified>
<msgNum>408</msgNum>
<pgProblem>true</pgProblem>
<preface></preface>
<problemVariants></problemVariants>
<probNum></probNum>
<psvn></psvn>
<revisedVersions></revisedVersions>
<setName>DESOLinear</setName>
<titleRoot>19</titleRoot>
</metaPGdata>
END_OF_XML_TRAILER_INFO
##################################################
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_BYTEORDER_H
#define _ASM_X86_BYTEORDER_H
#include <linux/byteorder/little_endian.h>
#endif /* _ASM_X86_BYTEORDER_H */
/*
* Unsquash a squashfs filesystem. This is a highly compressed read only
* filesystem.
*
* Copyright (c) 2009, 2010
* Phillip Lougher <[email protected]>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2,
* or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* unsquash-3.c
*/
#include "unsquashfs.h"
#include "squashfs_compat.h"
static squashfs_fragment_entry_3 *fragment_table;
int read_fragment_table_3()
{
int res, i, indexes = SQUASHFS_FRAGMENT_INDEXES_3(sBlk.s.fragments);
long long fragment_table_index[indexes];
TRACE("read_fragment_table: %d fragments, reading %d fragment indexes "
"from 0x%llx\n", sBlk.s.fragments, indexes,
sBlk.s.fragment_table_start);
if(sBlk.s.fragments == 0)
return TRUE;
fragment_table = malloc(sBlk.s.fragments *
sizeof(squashfs_fragment_entry_3));
if(fragment_table == NULL)
EXIT_UNSQUASH("read_fragment_table: failed to allocate "
"fragment table\n");
if(swap) {
long long sfragment_table_index[indexes];
res = read_fs_bytes(fd, sBlk.s.fragment_table_start,
SQUASHFS_FRAGMENT_INDEX_BYTES_3(sBlk.s.fragments),
sfragment_table_index);
if(res == FALSE) {
ERROR("read_fragment_table: failed to read fragment "
"table index\n");
return FALSE;
}
SQUASHFS_SWAP_FRAGMENT_INDEXES_3(fragment_table_index,
sfragment_table_index, indexes);
} else {
res = read_fs_bytes(fd, sBlk.s.fragment_table_start,
SQUASHFS_FRAGMENT_INDEX_BYTES_3(sBlk.s.fragments),
fragment_table_index);
if(res == FALSE) {
ERROR("read_fragment_table: failed to read fragment "
"table index\n");
return FALSE;
}
}
for(i = 0; i < indexes; i++) {
int length = read_block(fd, fragment_table_index[i], NULL,
((char *) fragment_table) + (i *
SQUASHFS_METADATA_SIZE));
TRACE("Read fragment table block %d, from 0x%llx, length %d\n",
i, fragment_table_index[i], length);
if(length == FALSE) {
ERROR("read_fragment_table: failed to read fragment "
"table block\n");
return FALSE;
}
}
if(swap) {
squashfs_fragment_entry_3 sfragment;
for(i = 0; i < sBlk.s.fragments; i++) {
SQUASHFS_SWAP_FRAGMENT_ENTRY_3((&sfragment),
(&fragment_table[i]));
memcpy((char *) &fragment_table[i], (char *) &sfragment,
sizeof(squashfs_fragment_entry_3));
}
}
return TRUE;
}
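/*
 * Layout note: the fragment table is two-level. The raw index read above
 * (read_fs_bytes) is an array of block pointers; each pointer locates a
 * compressed metadata block that read_block() expands into up to
 * SQUASHFS_METADATA_SIZE bytes of squashfs_fragment_entry_3 records.
 */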
void read_fragment_3(unsigned int fragment, long long *start_block, int *size)
{
TRACE("read_fragment: reading fragment %d\n", fragment);
squashfs_fragment_entry_3 *fragment_entry = &fragment_table[fragment];
*start_block = fragment_entry->start_block;
*size = fragment_entry->size;
}
struct inode *read_inode_3(unsigned int start_block, unsigned int offset)
{
static union squashfs_inode_header_3 header;
long long start = sBlk.s.inode_table_start + start_block;
int bytes = lookup_entry(inode_table_hash, start);
char *block_ptr = inode_table + bytes + offset;
static struct inode i;
TRACE("read_inode: reading inode [%d:%d]\n", start_block, offset);
if(bytes == -1)
EXIT_UNSQUASH("read_inode: inode table block %lld not found\n",
start);
if(swap) {
squashfs_base_inode_header_3 sinode;
memcpy(&sinode, block_ptr, sizeof(header.base));
SQUASHFS_SWAP_BASE_INODE_HEADER_3(&header.base, &sinode,
sizeof(squashfs_base_inode_header_3));
} else
memcpy(&header.base, block_ptr, sizeof(header.base));
i.xattr = SQUASHFS_INVALID_XATTR;
i.uid = (uid_t) uid_table[header.base.uid];
i.gid = header.base.guid == SQUASHFS_GUIDS ? i.uid :
(uid_t) guid_table[header.base.guid];
i.mode = lookup_type[header.base.inode_type] | header.base.mode;
i.type = header.base.inode_type;
i.time = header.base.mtime;
i.inode_number = header.base.inode_number;
switch(header.base.inode_type) {
case SQUASHFS_DIR_TYPE: {
squashfs_dir_inode_header_3 *inode = &header.dir;
if(swap) {
squashfs_dir_inode_header_3 sinode;
memcpy(&sinode, block_ptr, sizeof(header.dir));
SQUASHFS_SWAP_DIR_INODE_HEADER_3(&header.dir,
&sinode);
} else
memcpy(&header.dir, block_ptr,
sizeof(header.dir));
i.data = inode->file_size;
i.offset = inode->offset;
i.start = inode->start_block;
break;
}
case SQUASHFS_LDIR_TYPE: {
squashfs_ldir_inode_header_3 *inode = &header.ldir;
if(swap) {
squashfs_ldir_inode_header_3 sinode;
memcpy(&sinode, block_ptr, sizeof(header.ldir));
SQUASHFS_SWAP_LDIR_INODE_HEADER_3(&header.ldir,
&sinode);
} else
memcpy(&header.ldir, block_ptr,
sizeof(header.ldir));
i.data = inode->file_size;
i.offset = inode->offset;
i.start = inode->start_block;
break;
}
case SQUASHFS_FILE_TYPE: {
squashfs_reg_inode_header_3 *inode = &header.reg;
if(swap) {
squashfs_reg_inode_header_3 sinode;
memcpy(&sinode, block_ptr, sizeof(sinode));
SQUASHFS_SWAP_REG_INODE_HEADER_3(inode,
&sinode);
} else
memcpy(inode, block_ptr, sizeof(*inode));
i.data = inode->file_size;
i.frag_bytes = inode->fragment == SQUASHFS_INVALID_FRAG
? 0 : inode->file_size % sBlk.s.block_size;
i.fragment = inode->fragment;
i.offset = inode->offset;
i.blocks = inode->fragment == SQUASHFS_INVALID_FRAG ?
(i.data + sBlk.s.block_size - 1) >>
sBlk.s.block_log :
i.data >> sBlk.s.block_log;
i.start = inode->start_block;
i.sparse = 1;
i.block_ptr = block_ptr + sizeof(*inode);
break;
}
case SQUASHFS_LREG_TYPE: {
squashfs_lreg_inode_header_3 *inode = &header.lreg;
if(swap) {
squashfs_lreg_inode_header_3 sinode;
memcpy(&sinode, block_ptr, sizeof(sinode));
SQUASHFS_SWAP_LREG_INODE_HEADER_3(inode,
&sinode);
} else
memcpy(inode, block_ptr, sizeof(*inode));
i.data = inode->file_size;
i.frag_bytes = inode->fragment == SQUASHFS_INVALID_FRAG
? 0 : inode->file_size % sBlk.s.block_size;
i.fragment = inode->fragment;
i.offset = inode->offset;
i.blocks = inode->fragment == SQUASHFS_INVALID_FRAG ?
(inode->file_size + sBlk.s.block_size - 1) >>
sBlk.s.block_log :
inode->file_size >> sBlk.s.block_log;
i.start = inode->start_block;
i.sparse = 1;
i.block_ptr = block_ptr + sizeof(*inode);
break;
}
case SQUASHFS_SYMLINK_TYPE: {
squashfs_symlink_inode_header_3 *inodep =
&header.symlink;
if(swap) {
squashfs_symlink_inode_header_3 sinodep;
memcpy(&sinodep, block_ptr, sizeof(sinodep));
SQUASHFS_SWAP_SYMLINK_INODE_HEADER_3(inodep,
&sinodep);
} else
memcpy(inodep, block_ptr, sizeof(*inodep));
i.symlink = malloc(inodep->symlink_size + 1);
if(i.symlink == NULL)
EXIT_UNSQUASH("read_inode: failed to malloc "
"symlink data\n");
strncpy(i.symlink, block_ptr +
sizeof(squashfs_symlink_inode_header_3),
inodep->symlink_size);
i.symlink[inodep->symlink_size] = '\0';
i.data = inodep->symlink_size;
break;
}
case SQUASHFS_BLKDEV_TYPE:
case SQUASHFS_CHRDEV_TYPE: {
squashfs_dev_inode_header_3 *inodep = &header.dev;
if(swap) {
squashfs_dev_inode_header_3 sinodep;
memcpy(&sinodep, block_ptr, sizeof(sinodep));
SQUASHFS_SWAP_DEV_INODE_HEADER_3(inodep,
&sinodep);
} else
memcpy(inodep, block_ptr, sizeof(*inodep));
i.data = inodep->rdev;
break;
}
case SQUASHFS_FIFO_TYPE:
case SQUASHFS_SOCKET_TYPE:
i.data = 0;
break;
default:
EXIT_UNSQUASH("Unknown inode type %d in read_inode!\n",
header.base.inode_type);
}
return &i;
}
struct dir *squashfs_opendir_3(unsigned int block_start, unsigned int offset,
struct inode **i)
{
squashfs_dir_header_3 dirh;
char buffer[sizeof(squashfs_dir_entry_3) + SQUASHFS_NAME_LEN + 1]
__attribute__((aligned));
squashfs_dir_entry_3 *dire = (squashfs_dir_entry_3 *) buffer;
long long start;
int bytes;
int dir_count, size;
struct dir_ent *new_dir;
struct dir *dir;
TRACE("squashfs_opendir: inode start block %d, offset %d\n",
block_start, offset);
*i = s_ops.read_inode(block_start, offset);
start = sBlk.s.directory_table_start + (*i)->start;
bytes = lookup_entry(directory_table_hash, start);
if(bytes == -1)
EXIT_UNSQUASH("squashfs_opendir: directory block %d not "
"found!\n", block_start);
bytes += (*i)->offset;
size = (*i)->data + bytes - 3;
dir = malloc(sizeof(struct dir));
if(dir == NULL)
EXIT_UNSQUASH("squashfs_opendir: malloc failed!\n");
dir->dir_count = 0;
dir->cur_entry = 0;
dir->mode = (*i)->mode;
dir->uid = (*i)->uid;
dir->guid = (*i)->gid;
dir->mtime = (*i)->time;
dir->xattr = (*i)->xattr;
dir->dirs = NULL;
while(bytes < size) {
if(swap) {
squashfs_dir_header_3 sdirh;
memcpy(&sdirh, directory_table + bytes, sizeof(sdirh));
SQUASHFS_SWAP_DIR_HEADER_3(&dirh, &sdirh);
} else
memcpy(&dirh, directory_table + bytes, sizeof(dirh));
dir_count = dirh.count + 1;
TRACE("squashfs_opendir: Read directory header @ byte position "
"%d, %d directory entries\n", bytes, dir_count);
bytes += sizeof(dirh);
while(dir_count--) {
if(swap) {
squashfs_dir_entry_3 sdire;
memcpy(&sdire, directory_table + bytes,
sizeof(sdire));
SQUASHFS_SWAP_DIR_ENTRY_3(dire, &sdire);
} else
memcpy(dire, directory_table + bytes,
sizeof(*dire));
bytes += sizeof(*dire);
memcpy(dire->name, directory_table + bytes,
dire->size + 1);
dire->name[dire->size + 1] = '\0';
TRACE("squashfs_opendir: directory entry %s, inode "
"%d:%d, type %d\n", dire->name,
dirh.start_block, dire->offset, dire->type);
if((dir->dir_count % DIR_ENT_SIZE) == 0) {
new_dir = realloc(dir->dirs, (dir->dir_count +
DIR_ENT_SIZE) * sizeof(struct dir_ent));
if(new_dir == NULL)
EXIT_UNSQUASH("squashfs_opendir: "
"realloc failed!\n");
dir->dirs = new_dir;
}
strcpy(dir->dirs[dir->dir_count].name, dire->name);
dir->dirs[dir->dir_count].start_block =
dirh.start_block;
dir->dirs[dir->dir_count].offset = dire->offset;
dir->dirs[dir->dir_count].type = dire->type;
dir->dir_count ++;
bytes += dire->size + 1;
}
}
return dir;
}
/*
* Copyright 2017 Dgraph Labs, Inc. and Contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package options
// FileLoadingMode specifies how data in LSM table files and value log files should
// be loaded.
type FileLoadingMode int
const (
	// FileIO indicates that files must be loaded using standard I/O
	FileIO FileLoadingMode = iota
	// LoadToRAM indicates that the file must be loaded into RAM
	LoadToRAM
	// MemoryMap indicates that the file must be memory-mapped
	MemoryMap
)
<?php
/*
* This file is part of the Predis package.
*
* (c) Daniele Alessandri <[email protected]>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
namespace Predis\Command;
/**
* @link http://redis.io/commands/rpop
*
* @author Daniele Alessandri <[email protected]>
*/
class ListPopLast extends Command
{
    /**
     * {@inheritdoc}
     */
    public function getId()
    {
        return 'RPOP';
    }
}
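// Usage sketch (key and values are illustrative; assumes a configured
// Predis\Client, which maps method calls to command classes like this one):
//
//   $client = new Predis\Client();
//   $client->rpush('queue', 'job1');
//   $client->rpush('queue', 'job2');
//   $last = $client->rpop('queue'); // "job2"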
/*
* Copyright 2008 Marc Wick, geonames.org
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package org.geonames;
/**
* a street line segment. Includes house number information for the beginning
* and end of the line as well as right and left hand side of the line.
*
* @author marc@geonames
*
*/
public class StreetSegment extends PostalCode {
    private double[] latArray;
    private double[] lngArray;

    /**
     * census feature class codes see
     * http://www.geonames.org/maps/Census-Feature-Class-Codes.txt
     */
    private String cfcc;

    private String name;

    /**
     * from address left
     */
    private String fraddl;

    /**
     * from address right
     */
    private String fraddr;

    /**
     * to address left
     */
    private String toaddl;

    /**
     * to address right
     */
    private String toaddr;

    /**
     * @return the latArray
     */
    public double[] getLatArray() {
        return latArray;
    }

    /**
     * @param latArray
     *            the latArray to set
     */
    public void setLatArray(double[] latArray) {
        this.latArray = latArray;
    }

    /**
     * @return the lngArray
     */
    public double[] getLngArray() {
        return lngArray;
    }

    /**
     * @param lngArray
     *            the lngArray to set
     */
    public void setLngArray(double[] lngArray) {
        this.lngArray = lngArray;
    }

    /**
     * @return the cfcc
     */
    public String getCfcc() {
        return cfcc;
    }

    /**
     * @param cfcc
     *            the cfcc to set
     */
    public void setCfcc(String cfcc) {
        this.cfcc = cfcc;
    }

    /**
     * @return the name
     */
    public String getName() {
        return name;
    }

    /**
     * @param name
     *            the name to set
     */
    public void setName(String name) {
        this.name = name;
    }

    /**
     * @return the fraddl
     */
    public String getFraddl() {
        return fraddl;
    }

    /**
     * @param fraddl
     *            the fraddl to set
     */
    public void setFraddl(String fraddl) {
        this.fraddl = fraddl;
    }

    /**
     * @return the fraddr
     */
    public String getFraddr() {
        return fraddr;
    }

    /**
     * @param fraddr
     *            the fraddr to set
     */
    public void setFraddr(String fraddr) {
        this.fraddr = fraddr;
    }

    /**
     * @return the toaddl
     */
    public String getToaddl() {
        return toaddl;
    }

    /**
     * @param toaddl
     *            the toaddl to set
     */
    public void setToaddl(String toaddl) {
        this.toaddl = toaddl;
    }

    /**
     * @return the toaddr
     */
    public String getToaddr() {
        return toaddr;
    }

    /**
     * @param toaddr
     *            the toaddr to set
     */
    public void setToaddr(String toaddr) {
        this.toaddr = toaddr;
    }
}
/*!
* jQuery Cookie Plugin v1.4.1
* https://github.com/carhartl/jquery-cookie
*
* Copyright 2013 Klaus Hartl
* Released under the MIT license
*/
(function (factory) {
	if (typeof define === 'function' && define.amd) {
		// AMD
		define(['jquery'], factory);
	} else if (typeof exports === 'object') {
		// CommonJS
		factory(require('jquery'));
	} else {
		// Browser globals
		factory(jQuery);
	}
}(function ($) {

	var pluses = /\+/g;

	function encode(s) {
		return config.raw ? s : encodeURIComponent(s);
	}

	function decode(s) {
		return config.raw ? s : decodeURIComponent(s);
	}

	function stringifyCookieValue(value) {
		return encode(config.json ? JSON.stringify(value) : String(value));
	}

	function parseCookieValue(s) {
		if (s.indexOf('"') === 0) {
			// This is a quoted cookie as according to RFC2068, unescape...
			s = s.slice(1, -1).replace(/\\"/g, '"').replace(/\\\\/g, '\\');
		}

		try {
			// Replace server-side written pluses with spaces.
			// If we can't decode the cookie, ignore it, it's unusable.
			// If we can't parse the cookie, ignore it, it's unusable.
			s = decodeURIComponent(s.replace(pluses, ' '));
			try {
				return config.json ? JSON.parse(s) : s;
			} catch (e) {
				return s;
			}
		} catch (e) {}
	}

	function read(s, converter) {
		var value = config.raw ? s : parseCookieValue(s);
		return $.isFunction(converter) ? converter(value) : value;
	}

	var config = $.cookie = function (key, value, options) {

		// Write

		if (value !== undefined && !$.isFunction(value)) {
			options = $.extend({}, config.defaults, options);

			if (typeof options.expires === 'number') {
				var days = options.expires, t = options.expires = new Date();
				t.setTime(+t + days * 864e+5);
			}

			return (document.cookie = [
				encode(key), '=', stringifyCookieValue(value),
				options.expires ? '; expires=' + options.expires.toUTCString() : '', // use expires attribute, max-age is not supported by IE
				options.path ? '; path=' + options.path : '',
				options.domain ? '; domain=' + options.domain : '',
				options.secure ? '; secure' : ''
			].join(''));
		}

		// Read

		var result = key ? undefined : {};

		// To prevent the for loop in the first place assign an empty array
		// in case there are no cookies at all. Also prevents odd result when
		// calling $.cookie().
		var cookies = document.cookie ? document.cookie.split('; ') : [];

		for (var i = 0, l = cookies.length; i < l; i++) {
			var parts = cookies[i].split('=');
			var name = decode(parts.shift());
			var cookie = parts.join('=');

			if (key && key === name) {
				// If second argument (value) is a function it's a converter...
				result = read(cookie, value);
				break;
			}

			// Prevent storing a cookie that we couldn't decode.
			if (!key && (cookie = read(cookie)) !== undefined) {
				result[name] = cookie;
			}
		}

		return result;
	};

	config.defaults = {};

	$.removeCookie = function (key, options) {
		if ($.cookie(key) === undefined) {
			return false;
		}

		// Must not alter options, thus extending a fresh object...
		$.cookie(key, '', $.extend({}, options, { expires: -1 }));
		return !$.cookie(key);
	};

}));
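// Usage sketch (names and values are illustrative):
//
//   $.cookie('session', 'abc123', { expires: 7, path: '/' }); // write, 7 days
//   $.cookie('session');                                      // read -> "abc123"
//   $.removeCookie('session', { path: '/' });                 // delete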
# -*- coding: utf-8 -*-
import collections
import platform
import sys
def user_agent(name, version, extras=None):
    """Return an internet-friendly user_agent string.

    The majority of this code has been wilfully stolen from the equivalent
    function in Requests.

    :param name: The intended name of the user-agent, e.g. "python-requests".
    :param version: The version of the user-agent, e.g. "0.0.1".
    :param extras: List of two-item tuples that are added to the user-agent
        string.
    :returns: Formatted user-agent string
    :rtype: str
    """
    if extras is None:
        extras = []

    return UserAgentBuilder(
        name, version
    ).include_extras(
        extras
    ).include_implementation(
    ).include_system().build()


class UserAgentBuilder(object):
    """Class to provide a greater level of control than :func:`user_agent`.

    This is used by :func:`user_agent` to build its User-Agent string.

    .. code-block:: python

        user_agent_str = UserAgentBuilder(
            name='requests-toolbelt',
            version='17.4.0',
        ).include_implementation(
        ).include_system(
        ).include_extras([
            ('requests', '2.14.2'),
            ('urllib3', '1.21.2'),
        ]).build()

    """

    format_string = '%s/%s'

    def __init__(self, name, version):
        """Initialize our builder with the name and version of our user agent.

        :param str name:
            Name of our user-agent.
        :param str version:
            The version string for user-agent.
        """
        self._pieces = collections.deque([(name, version)])

    def build(self):
        """Finalize the User-Agent string.

        :returns:
            Formatted User-Agent string.
        :rtype:
            str
        """
        return " ".join([self.format_string % piece for piece in self._pieces])

    def include_extras(self, extras):
        """Include extra portions of the User-Agent.

        :param list extras:
            list of tuples of extra-name and extra-version
        """
        if any(len(extra) != 2 for extra in extras):
            raise ValueError('Extras should be a sequence of two item tuples.')
        self._pieces.extend(extras)
        return self

    def include_implementation(self):
        """Append the implementation string to the user-agent string.

        This adds the information that you're using CPython 2.7.13 to the
        User-Agent.
        """
        self._pieces.append(_implementation_tuple())
        return self

    def include_system(self):
        """Append the information about the Operating System."""
        self._pieces.append(_platform_tuple())
        return self


def _implementation_tuple():
    """Return the tuple of interpreter name and version.

    Returns a string that provides both the name and the version of the Python
    implementation currently running. For example, on CPython 2.7.5 it will
    return "CPython/2.7.5".

    This function works best on CPython and PyPy: in particular, it probably
    doesn't work for Jython or IronPython. Future investigation should be done
    to work out the correct shape of the code for those platforms.
    """
    implementation = platform.python_implementation()

    if implementation == 'CPython':
        implementation_version = platform.python_version()
    elif implementation == 'PyPy':
        implementation_version = '%s.%s.%s' % (sys.pypy_version_info.major,
                                               sys.pypy_version_info.minor,
                                               sys.pypy_version_info.micro)
        if sys.pypy_version_info.releaselevel != 'final':
            implementation_version = ''.join([
                implementation_version, sys.pypy_version_info.releaselevel
            ])
    elif implementation == 'Jython':
        implementation_version = platform.python_version()  # Complete Guess
    elif implementation == 'IronPython':
        implementation_version = platform.python_version()  # Complete Guess
    else:
        implementation_version = 'Unknown'

    return (implementation, implementation_version)


def _implementation_string():
    return "%s/%s" % _implementation_tuple()


def _platform_tuple():
    try:
        p_system = platform.system()
        p_release = platform.release()
    except IOError:
        p_system = 'Unknown'
        p_release = 'Unknown'
    return (p_system, p_release)
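# A minimal usage sketch (the version numbers and the platform portion of
# the resulting string below are illustrative, not pinned):
#
#   >>> user_agent('my-client', '1.0.0', extras=[('requests', '2.14.2')])
#   'my-client/1.0.0 requests/2.14.2 CPython/2.7.13 Linux/4.9.0'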
#!/bin/bash
#
# Copy createdb.sh.example to createdb.sh
# then uncomment then set database name and username to create you need databases
#
# example: .env POSTGRES_USER=appuser and need db name is myshop_db
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER myuser WITH PASSWORD 'mypassword';
# CREATE DATABASE myshop_db;
# GRANT ALL PRIVILEGES ON DATABASE myshop_db TO myuser;
# EOSQL
#
# this sh script will run automatically when the postgres container starts and $DATA_PATH_HOST/postgres is not found.
#
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db1 WITH PASSWORD 'db1';
# CREATE DATABASE db1;
# GRANT ALL PRIVILEGES ON DATABASE db1 TO db1;
# EOSQL
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db2 WITH PASSWORD 'db2';
# CREATE DATABASE db2;
# GRANT ALL PRIVILEGES ON DATABASE db2 TO db2;
# EOSQL
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db3 WITH PASSWORD 'db3';
# CREATE DATABASE db3;
# GRANT ALL PRIVILEGES ON DATABASE db3 TO db3;
# EOSQL
#
### default database and user for jupyterhub ##############################################
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER laradock_jupyterhub WITH PASSWORD 'laradock_jupyterhub';
CREATE DATABASE laradock_jupyterhub;
GRANT ALL PRIVILEGES ON DATABASE laradock_jupyterhub TO laradock_jupyterhub;
ALTER ROLE laradock_jupyterhub CREATEROLE SUPERUSER;
EOSQL
#N canvas 144 215 431 450 10;
#X obj 122 68 inlet bang;
#X obj 122 385 outlet 1==OK;
#X msg 122 243 \$3 \$4 \$5 \$6;
#X obj 122 268 unpack 0 0 0 0;
#X obj 184 330 *;
#X obj 122 330 *;
#X obj 122 356 *;
#X obj 256 200 unpack 0 0 0 0;
#X obj 122 307 ==;
#X obj 153 307 ==;
#X obj 184 307 ==;
#X obj 215 307 ==;
#X msg 122 179 matrix 2 2 \$1 \$2 \$3 \$4;
#X obj 122 156 t l l;
#X obj 122 98 t b;
#X msg 122 132 3 2 1 0;
#X obj 122 221 mtx_bitand 1;
#X obj 256 221 & 1;
#X obj 297 221 & 1;
#X obj 338 220 & 1;
#X obj 379 220 & 1;
#X connect 0 0 14 0;
#X connect 2 0 3 0;
#X connect 3 0 8 0;
#X connect 3 1 9 0;
#X connect 3 2 10 0;
#X connect 3 3 11 0;
#X connect 4 0 6 1;
#X connect 5 0 6 0;
#X connect 6 0 1 0;
#X connect 7 0 17 0;
#X connect 7 1 18 0;
#X connect 7 2 19 0;
#X connect 7 3 20 0;
#X connect 8 0 5 0;
#X connect 9 0 5 1;
#X connect 10 0 4 0;
#X connect 11 0 4 1;
#X connect 12 0 16 0;
#X connect 13 0 12 0;
#X connect 13 1 7 0;
#X connect 14 0 15 0;
#X connect 15 0 13 0;
#X connect 16 0 2 0;
#X connect 17 0 8 1;
#X connect 18 0 9 1;
#X connect 19 0 10 1;
#X connect 20 0 11 1;
package com.github.unidbg.memory;
import com.github.unidbg.pointer.UnicornPointer;
import com.github.unidbg.serialize.Serializable;
public interface StackMemory extends Serializable {

    UnicornPointer writeStackString(String str);

    UnicornPointer writeStackBytes(byte[] data);
}
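// Usage sketch (hedged: assumes an emulator whose Memory implementation
// extends this interface; the setup call is illustrative):
//
//   StackMemory stack = emulator.getMemory();
//   UnicornPointer str = stack.writeStackString("hello");
//   UnicornPointer buf = stack.writeStackBytes(new byte[]{1, 2, 3});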
@model ChangePasswordViewModel
@{
ViewData["Title"] = "Change password";
ViewData.AddActivePage(ManageNavPages.ChangePassword);
}
<h4>@ViewData["Title"]</h4>
@Html.Partial("_StatusMessage", Model.StatusMessage)
<div class="row">
<div class="col-md-6">
<form method="post">
<div asp-validation-summary="All" class="text-danger"></div>
<div class="form-group">
<label asp-for="OldPassword"></label>
<input asp-for="OldPassword" class="form-control" />
<span asp-validation-for="OldPassword" class="text-danger"></span>
</div>
<div class="form-group">
<label asp-for="NewPassword"></label>
<input asp-for="NewPassword" class="form-control" />
<span asp-validation-for="NewPassword" class="text-danger"></span>
</div>
<div class="form-group">
<label asp-for="ConfirmPassword"></label>
<input asp-for="ConfirmPassword" class="form-control" />
<span asp-validation-for="ConfirmPassword" class="text-danger"></span>
</div>
<button type="submit" class="btn btn-default">Update password</button>
</form>
</div>
</div>
@section Scripts {
@await Html.PartialAsync("_ValidationScriptsPartial")
}
# Contains a Student class,
# a sayHello function,
# and a print statement
class Student():
def __init__(self, name="NoName", age=18):
self.name = name
self.age = age
def say(self):
print("My name is {0}".format(self.name))
def sayHello():
print("Hi, 欢迎来到图灵学院!")
print("我是模块p01呀,你特么的叫我干毛")
# $Id$
# Authority: dag
%define _bindir /bin
Summary: Restricted Unix shell
Name: ibsh
Version: 0.3e
Release: 1%{?dist}
License: GPL
Group: System Environment/Shells
URL: http://ibsh.sourceforge.net/
Source: http://dl.sf.net/ibsh/ibsh-%{version}.tar.gz
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root
#BuildRequires:
#Requires:
%description
Iron Bars SHell, or ibsh for short, is a restricted working environment for Unix.
%prep
%setup
%build
%{__make} %{?_smp_mflags}
%install
%{__rm} -rf %{buildroot}
#%{__make} install DESTDIR="%{buildroot}"
%{__install} -Dp -m0755 ibsh %{buildroot}%{_bindir}/ibsh
%{__install} -Dp -m0644 globals.cmds %{buildroot}%{_sysconfdir}/ibsh/globals.cmds
%{__install} -Dp -m0644 globals.xtns %{buildroot}%{_sysconfdir}/ibsh/globals.xtns
%{__install} -dp -m0755 %{buildroot}%{_sysconfdir}/ibsh/{cmds,xtns}/
%clean
%{__rm} -rf %{buildroot}
%files
%defattr(-, root, root, 0755)
%doc AUTHORS BUGS ChangeLog CONTRIBUTORS COPYING COPYRIGHT INSTALL README TODO *.xtns
%config(noreplace) %{_sysconfdir}/ibsh/
%{_bindir}/ibsh
%changelog
* Thu Jun 28 2007 Dag Wieers <[email protected]> -
- Initial package. (using DAR)
import numpy as np
import cv2
import math
# map [0, 255] into 8 sections (bucket width 32)
def bgr_mapping(img_val):
if img_val >= 0 and img_val <= 31: return 0
if img_val >= 32 and img_val <= 63: return 1
if img_val >= 64 and img_val <= 95: return 2
if img_val >= 96 and img_val <= 127: return 3
if img_val >= 128 and img_val <= 159: return 4
if img_val >= 160 and img_val <= 191: return 5
if img_val >= 192 and img_val <= 223: return 6
if img_val >= 224: return 7
# Calculate color histogram
def calc_bgr_hist(image):
if not image.size: return False
hist = {}
    image = cv2.resize(image, (32, 32))  # downscale the image to reduce computation
for bgr_list in image:
for bgr in bgr_list:
maped_b = bgr_mapping(bgr[0])
maped_g = bgr_mapping(bgr[1])
maped_r = bgr_mapping(bgr[2])
index = maped_b * 8 * 8 + maped_g * 8 + maped_r
hist[index] = hist.get(index, 0) + 1
return hist
# Calculate color histogram similarity
def compare_similar_hist(h1, h2):
if not h1 or not h2: return False
sum1, sum2, sum_mixd = 0, 0, 0
for i in range(512):
sum1 = sum1 + (h1.get(i, 0) * h1.get(i, 0))
sum2 = sum2 + (h2.get(i, 0) * h2.get(i, 0))
sum_mixd = sum_mixd + (h1.get(i, 0) * h2.get(i, 0))
# cosine similarity
return sum_mixd / (math.sqrt(sum1) * math.sqrt(sum2))
# Calculate color psnr similarity
def psnr(img1, img2):
img1 = cv2.resize(img1, (10, 10))
img2 = cv2.resize(img2, (10, 10))
mse = np.mean((img1/1.0 - img2/1.0) ** 2)
if mse < 1.0e-10:
return 100
return 10 * math.log10(255.0**2/mse)
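# Usage sketch (an assumption; the image paths are hypothetical):
#   img_a = cv2.imread('a.jpg')
#   img_b = cv2.imread('b.jpg')
#   sim = compare_similar_hist(calc_bgr_hist(img_a), calc_bgr_hist(img_b))
#   print('cosine similarity %.3f, PSNR %.1f dB' % (sim, psnr(img_a, img_b)))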
d = [[-1, 0], [1, 0], [0, 1], [0, -1]]
# search connected region
def search_region(G, pos):
x1, y1, x2, y2 = pos[1], pos[0], pos[1], pos[0]
Q = set()
Q.add(pos)
h, w = G.shape
visited = np.zeros((h, w))
visited[pos] = 1
while Q:
u = Q.pop()
for move in d:
row = u[0] + move[0]
col = u[1] + move[1]
if (row >= 0 and row < h and col >= 0 and col < w and G[row, col] == 1 and visited[row, col] == 0):
visited[row, col] = 1
Q.add((row, col))
x1 = min(x1, col)
x2 = max(x2, col)
y1 = min(y1, row)
y2 = max(y2, row)
return [int(x1), int(y1), int(x2), int(y2)]
def compute_iou(rec1, rec2):
"""
computing IoU
:param rec1: (y0, x0, y1, x1), which reflects
(top, left, bottom, right)
:param rec2: (y0, x0, y1, x1)
:return: scala value of IoU
"""
# computing area of each rectangles
S_rec1 = (rec1[2] - rec1[0]) * (rec1[3] - rec1[1])
S_rec2 = (rec2[2] - rec2[0]) * (rec2[3] - rec2[1])
# computing the sum_area
sum_area = S_rec1 + S_rec2
# find the each edge of intersect rectangle
left_line = max(rec1[1], rec2[1])
right_line = min(rec1[3], rec2[3])
top_line = max(rec1[0], rec2[0])
bottom_line = min(rec1[2], rec2[2])
# judge if there is an intersect
if left_line >= right_line or top_line >= bottom_line:
return 0
else:
intersect = (right_line - left_line) * (bottom_line - top_line)
# print(intersect,sum_area)
return (float(intersect) / float(sum_area - intersect)) * 1.0
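# Worked example (an assumption, not from the original source): two 2x2 boxes
# offset by one pixel in (y0, x0, y1, x1) form share a 1x1 overlap, so
#   compute_iou((0, 0, 2, 2), (1, 1, 3, 3))  # -> 1 / (4 + 4 - 1) ~= 0.143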
/*******************************************************************************
* Copyright 2012-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
* Licensed under the Apache License, Version 2.0 (the "License"). You may not use
* this file except in compliance with the License. A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file.
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
* *****************************************************************************
*
* AWS Tools for Windows (TM) PowerShell (TM)
*
*/
using System;
using System.Collections.Generic;
using System.Linq;
using System.Management.Automation;
using System.Text;
using Amazon.PowerShell.Common;
using Amazon.Runtime;
using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;
namespace Amazon.PowerShell.Cmdlets.CW
{
/// <summary>
/// Deletes the specified alarms. You can delete up to 100 alarms in one operation. However,
/// this total can include no more than one composite alarm. For example, you could delete
    /// 99 metric alarms and one composite alarm with one operation, but you can't delete
/// two composite alarms with one operation.
///
///
/// <para>
/// In the event of an error, no alarms are deleted.
/// </para><note><para>
/// It is possible to create a loop or cycle of composite alarms, where composite alarm
/// A depends on composite alarm B, and composite alarm B also depends on composite alarm
/// A. In this scenario, you can't delete any composite alarm that is part of the cycle
/// because there is always still a composite alarm that depends on that alarm that you
/// want to delete.
/// </para><para>
/// To get out of such a situation, you must break the cycle by changing the rule of one
/// of the composite alarms in the cycle to remove a dependency that creates the cycle.
/// The simplest change to make to break a cycle is to change the <code>AlarmRule</code>
/// of one of the alarms to <code>False</code>.
/// </para><para>
/// Additionally, the evaluation of composite alarms stops if CloudWatch detects a cycle
/// in the evaluation path.
/// </para></note>
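    /// <para>
    /// Usage sketch (an assumption, not generated documentation):
    /// <code>Remove-CWAlarm -AlarmName "my-alarm" -Force</code>
    /// </para>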
/// </summary>
[Cmdlet("Remove", "CWAlarm", SupportsShouldProcess = true, ConfirmImpact = ConfirmImpact.High)]
[OutputType("None")]
[AWSCmdlet("Calls the Amazon CloudWatch DeleteAlarms API operation.", Operation = new[] {"DeleteAlarms"}, SelectReturnType = typeof(Amazon.CloudWatch.Model.DeleteAlarmsResponse))]
[AWSCmdletOutput("None or Amazon.CloudWatch.Model.DeleteAlarmsResponse",
"This cmdlet does not generate any output." +
"The service response (type Amazon.CloudWatch.Model.DeleteAlarmsResponse) can be referenced from properties attached to the cmdlet entry in the $AWSHistory stack."
)]
public partial class RemoveCWAlarmCmdlet : AmazonCloudWatchClientCmdlet, IExecutor
{
#region Parameter AlarmName
/// <summary>
/// <para>
/// <para>The alarms to be deleted.</para>
/// </para>
/// </summary>
#if !MODULAR
[System.Management.Automation.Parameter(Position = 0, ValueFromPipelineByPropertyName = true, ValueFromPipeline = true)]
#else
[System.Management.Automation.Parameter(Position = 0, ValueFromPipelineByPropertyName = true, ValueFromPipeline = true, Mandatory = true)]
[System.Management.Automation.AllowEmptyCollection]
[System.Management.Automation.AllowNull]
#endif
[Amazon.PowerShell.Common.AWSRequiredParameter]
[Alias("AlarmNames")]
public System.String[] AlarmName { get; set; }
#endregion
#region Parameter Select
/// <summary>
/// Use the -Select parameter to control the cmdlet output. The cmdlet doesn't have a return value by default.
/// Specifying -Select '*' will result in the cmdlet returning the whole service response (Amazon.CloudWatch.Model.DeleteAlarmsResponse).
/// Specifying -Select '^ParameterName' will result in the cmdlet returning the selected cmdlet parameter value.
/// </summary>
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
public string Select { get; set; } = "*";
#endregion
#region Parameter PassThru
/// <summary>
/// Changes the cmdlet behavior to return the value passed to the AlarmName parameter.
/// The -PassThru parameter is deprecated, use -Select '^AlarmName' instead. This parameter will be removed in a future version.
/// </summary>
[System.Obsolete("The -PassThru parameter is deprecated, use -Select '^AlarmName' instead. This parameter will be removed in a future version.")]
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
public SwitchParameter PassThru { get; set; }
#endregion
#region Parameter Force
/// <summary>
/// This parameter overrides confirmation prompts to force
/// the cmdlet to continue its operation. This parameter should always
/// be used with caution.
/// </summary>
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
public SwitchParameter Force { get; set; }
#endregion
protected override void ProcessRecord()
{
base.ProcessRecord();
var resourceIdentifiersText = FormatParameterValuesForConfirmationMsg(nameof(this.AlarmName), MyInvocation.BoundParameters);
if (!ConfirmShouldProceed(this.Force.IsPresent, resourceIdentifiersText, "Remove-CWAlarm (DeleteAlarms)"))
{
return;
}
var context = new CmdletContext();
// allow for manipulation of parameters prior to loading into context
PreExecutionContextLoad(context);
#pragma warning disable CS0618, CS0612 //A class member was marked with the Obsolete attribute
if (ParameterWasBound(nameof(this.Select)))
{
context.Select = CreateSelectDelegate<Amazon.CloudWatch.Model.DeleteAlarmsResponse, RemoveCWAlarmCmdlet>(Select) ??
throw new System.ArgumentException("Invalid value for -Select parameter.", nameof(this.Select));
if (this.PassThru.IsPresent)
{
throw new System.ArgumentException("-PassThru cannot be used when -Select is specified.", nameof(this.Select));
}
}
else if (this.PassThru.IsPresent)
{
context.Select = (response, cmdlet) => this.AlarmName;
}
#pragma warning restore CS0618, CS0612 //A class member was marked with the Obsolete attribute
if (this.AlarmName != null)
{
context.AlarmName = new List<System.String>(this.AlarmName);
}
#if MODULAR
if (this.AlarmName == null && ParameterWasBound(nameof(this.AlarmName)))
{
WriteWarning("You are passing $null as a value for parameter AlarmName which is marked as required. In case you believe this parameter was incorrectly marked as required, report this by opening an issue at https://github.com/aws/aws-tools-for-powershell/issues.");
}
#endif
// allow further manipulation of loaded context prior to processing
PostExecutionContextLoad(context);
var output = Execute(context) as CmdletOutput;
ProcessOutput(output);
}
#region IExecutor Members
public object Execute(ExecutorContext context)
{
var cmdletContext = context as CmdletContext;
// create request
var request = new Amazon.CloudWatch.Model.DeleteAlarmsRequest();
if (cmdletContext.AlarmName != null)
{
request.AlarmNames = cmdletContext.AlarmName;
}
CmdletOutput output;
// issue call
var client = Client ?? CreateClient(_CurrentCredentials, _RegionEndpoint);
try
{
var response = CallAWSServiceOperation(client, request);
object pipelineOutput = null;
pipelineOutput = cmdletContext.Select(response, this);
output = new CmdletOutput
{
PipelineOutput = pipelineOutput,
ServiceResponse = response
};
}
catch (Exception e)
{
output = new CmdletOutput { ErrorResponse = e };
}
return output;
}
public ExecutorContext CreateContext()
{
return new CmdletContext();
}
#endregion
#region AWS Service Operation Call
private Amazon.CloudWatch.Model.DeleteAlarmsResponse CallAWSServiceOperation(IAmazonCloudWatch client, Amazon.CloudWatch.Model.DeleteAlarmsRequest request)
{
Utils.Common.WriteVerboseEndpointMessage(this, client.Config, "Amazon CloudWatch", "DeleteAlarms");
try
{
#if DESKTOP
return client.DeleteAlarms(request);
#elif CORECLR
return client.DeleteAlarmsAsync(request).GetAwaiter().GetResult();
#else
#error "Unknown build edition"
#endif
}
catch (AmazonServiceException exc)
{
var webException = exc.InnerException as System.Net.WebException;
if (webException != null)
{
throw new Exception(Utils.Common.FormatNameResolutionFailureMessage(client.Config, webException.Message), webException);
}
throw;
}
}
#endregion
internal partial class CmdletContext : ExecutorContext
{
public List<System.String> AlarmName { get; set; }
public System.Func<Amazon.CloudWatch.Model.DeleteAlarmsResponse, RemoveCWAlarmCmdlet, object> Select { get; set; } =
(response, cmdlet) => null;
}
}
}
// +build !ignore_autogenerated
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// This file was autogenerated by defaulter-gen. Do not edit it manually!
package v1beta1
import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// RegisterDefaults adds defaulters functions to the given scheme.
// Public to allow building arbitrary schemes.
// All generated defaulters are covering - they call all nested defaulters.
func RegisterDefaults(scheme *runtime.Scheme) error {
return nil
}
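// Usage sketch (an assumption, not part of the generated file): a scheme
// builder typically chains this defaulter in, e.g.
//   var SchemeBuilder = runtime.NewSchemeBuilder(RegisterDefaults)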
package com.slack.api.methods.response.pins;
import com.slack.api.methods.SlackApiResponse;
import lombok.Data;
@Data
public class PinsRemoveResponse implements SlackApiResponse {
private boolean ok;
private String warning;
private String error;
private String needed;
private String provided;
}
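// Usage sketch (an assumption; names follow the com.slack.api client conventions):
//   PinsRemoveResponse resp = Slack.getInstance().methods(token)
//       .pinsRemove(r -> r.channel(channelId).timestamp(ts));
//   if (!resp.isOk()) { System.err.println(resp.getError()); }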
{
"action": {
"error": {
"variety": [
"Publishing error"
],
"vector": [
"Carelessness"
]
}
},
"actor": {
"internal": {
"motive": [
"NA"
],
"variety": [
"Unknown"
]
}
},
"asset": {
"assets": [
{
"variety": "M - Documents"
}
],
"cloud": [
"Unknown"
]
},
"attribute": {
"confidentiality": {
"data": [
{
"variety": "Unknown"
}
],
"data_disclosure": "Yes",
"data_total": 1
}
},
"discovery_method": {
"unknown": true
},
"incident_id": "D40EC68F-4A5A-4AF3-94B4-06622ACDCF09",
"plus": {
"analysis_status": "First pass",
"analyst": "swidup",
"created": "2014-05-29T20:01:01Z",
"master_id": "D40EC68F-4A5A-4AF3-94B4-06622ACDCF09",
"modified": "2014-07-30T19:07:19Z"
},
"reference": "http://vcdb.org/pdf/va-security.pdf",
"schema_version": "1.3.4",
"security_incident": "Confirmed",
"source_id": "vcdb",
"summary": "Birmingham VA Medical Center (BVAMC) Research Compliance Officer (RCO) was notified by BVAMC RN, Research Coordinator, that a consult form and imaging \nof a BVAMC patient was released to the University of Alabama at Birmingham (UAB) without proper authorization. A surgical resident released the information to the BVAMC RN, Research Coordinator (pulled from CPRS) who then provided the consult form and imaging to UAB research employee (who is also a WOC with VA Research). The UAB research employee stated she had obtained consent/HIPAA authorization from the patient. However, this had not occurred. The RN, Research Coordinator stated the image (burned to a CD) and consult have been destroyed/shredded as part of UAB's required action. The RN, Research \nCoordinator provided training to the UAB Study team, UAB research employee, and residents involved in this UAB study (which is not a VA study) on the proper \nmanner in which to refer VA patients to UAB for potential study participation and UAB consenting procedures.",
"timeline": {
"incident": {
"day": 3,
"month": 5,
"year": 2012
}
},
"victim": {
"country": [
"US"
],
"employee_count": "Over 100000",
"industry": "923140",
"region": [
"019021"
],
"state": "MI",
"victim_id": "United States Department of Veterans Affairs"
}
}
//////////////////////////////////////////////////////////////////////////
//
// Copyright 2010 Dr D Studios Pty Limited (ACN 127 184 954) (Dr. D Studios),
// its affiliates and/or its licensors.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//
// * Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
// * Neither the name of Image Engine Design nor the names of any
// other contributors to this software may be used to endorse or
// promote products derived from this software without specific prior
// written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
// PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
//////////////////////////////////////////////////////////////////////////
#include <cassert>
#include "IECoreGL/GL.h"
#include "IECoreGL/State.h"
#include "IECoreGL/SpherePrimitive.h"
#include "IECoreGL/ConePrimitive.h"
#include "IECore/PrimitiveVariable.h"
#include "IECoreGL/SkeletonPrimitive.h"
#include "OpenEXR/ImathMatrixAlgo.h"
using namespace IECoreGL;
using namespace Imath;
using namespace std;
IE_CORE_DEFINERUNTIMETYPED( SkeletonPrimitive );
SkeletonPrimitive::SkeletonPrimitive() :
m_parentIds( new IECore::IntVectorData ),
m_globalMatrices( new IECore::M44fVectorData )
{
m_jointsAxis = false;
m_jointsRadius = 1.0;
}
SkeletonPrimitive::SkeletonPrimitive(
IECore::ConstM44fVectorDataPtr globalMatrices, IECore::ConstIntVectorDataPtr parentIds,
bool displayAxis, float jointsSize, const IECore::PrimitiveVariableMap &primVars)
{
m_parentIds = parentIds->copy();
m_globalMatrices = globalMatrices->copy();
IECore::PrimitiveVariableMap primVarsCopy = primVars;
m_jointsAxis = displayAxis;
m_jointsRadius = jointsSize;
synchVectorIds();
}
SkeletonPrimitive::~SkeletonPrimitive()
{
}
void SkeletonPrimitive::addPrimitiveVariable( const std::string &name, const IECore::PrimitiveVariable &primVar )
{
if ( primVar.interpolation==IECore::PrimitiveVariable::Constant )
{
addUniformAttribute( name, primVar.data );
}
if ( primVar.interpolation==IECore::PrimitiveVariable::Uniform )
{
addUniformAttribute( name, primVar.data );
}
if ( primVar.interpolation==IECore::PrimitiveVariable::Vertex )
{
addVertexAttribute( name, primVar.data );
}
if ( primVar.interpolation==IECore::PrimitiveVariable::FaceVarying )
{
addVertexAttribute( name, primVar.data );
}
}
void SkeletonPrimitive::render( const State *state, IECore::TypeId style ) const
{
Imath::V3f from_vec(0.0, 0.0, 1.0), up(0.0, 1.0, 0.0);
JointPrimitive jointPrimitive( m_jointsRadius, 1.0 );
// loop over global transforms
for (unsigned int i=0; i<m_globalMatrices->readable().size(); i++)
{
Imath::M44f child_mtx;
unsigned int numChildren = m_childrenIds[i].size();
if (numChildren > 0)
{
for (unsigned int j=0; j< numChildren; j++ )
{
child_mtx = m_globalMatrices->readable()[ m_childrenIds[i][j] ];
Imath::V3f aim_vec = child_mtx.translation() - m_globalMatrices->readable()[i].translation();
float bone_length = aim_vec.length();
jointPrimitive.setLength( bone_length );
Imath::V3f up_vec = up*m_globalMatrices->readable()[i] - m_globalMatrices->readable()[i].translation();
Imath::M44f bone_mtx = Imath::rotationMatrixWithUpDir( from_vec, aim_vec.normalize(), up_vec );
Imath::M44f bone_offset_mtx;
bone_offset_mtx.translate( m_globalMatrices->readable()[i].translation() );
// draw the jointPrimitive
glPushMatrix();
glMultMatrixf( bone_offset_mtx.getValue() );
glMultMatrixf( bone_mtx.getValue() );
jointPrimitive.render( state, style );
glPopMatrix();
}
}
else
{
// a Null or Locator shape when the joint has no children
glPushMatrix();
Imath::M44f mat = m_globalMatrices->readable()[i];
Imath::removeScaling(mat, false);
glMultMatrixf( mat.getValue() );
glBegin( GL_LINES );
glVertex3f(-m_jointsRadius, 0.0, 0.0);
glVertex3f( m_jointsRadius, 0.0, 0.0);
glEnd();
glBegin( GL_LINES );
glVertex3f(0.0, -m_jointsRadius, 0.0);
glVertex3f(0.0, m_jointsRadius, 0.0);
glEnd();
glBegin( GL_LINES );
glVertex3f(0.0, 0.0, -m_jointsRadius);
glVertex3f(0.0, 0.0, m_jointsRadius);
glEnd();
glPopMatrix();
}
if ( m_jointsAxis == true)
{
float l = m_jointsRadius*3.0;
			//// draw the axis lines for debugging purposes /////
glPushMatrix();
			Imath::M44f matNoScale = m_globalMatrices->readable()[i];
			Imath::removeScaling(matNoScale, false);
			glMultMatrixf( matNoScale.getValue() );
///// store the current color and lighting mode /////
GLboolean light;
float color[4];
glGetBooleanv(GL_LIGHTING, &light);
glGetFloatv(GL_CURRENT_COLOR, color);
glDisable(GL_LIGHTING);
glBegin( GL_LINES );
glColor3ub(255, 0, 0);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(l, 0.0, 0.0);
glEnd();
glBegin( GL_LINES );
glColor3ub(0, 255, 0);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, l, 0.0);
glEnd();
glBegin( GL_LINES );
glColor3ub(0, 0, 255);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 0.0, l);
glEnd();
			//// restore the color and the lighting mode to their initial state /////
glColor4f(color[0], color[1], color[2], color[3]);
if (light==true) { glEnable(GL_LIGHTING); }
glPopMatrix();
}
}
}
Imath::Box3f SkeletonPrimitive::bound() const
{
Imath::Box3f bbox;
for (unsigned int i=0; i<m_globalMatrices->readable().size(); i++)
{
bbox.extendBy( m_globalMatrices->readable()[i].translation() );
}
//std::cerr << bbox.min << ", " << bbox.max << std::endl;
// add a little on for joint radius
bbox.extendBy( bbox.max + Imath::V3f(1,1,1) );
bbox.extendBy( bbox.min - Imath::V3f(1,1,1) );
//std::cerr << bbox.min << ", " << bbox.max << std::endl;
return bbox;
}
void SkeletonPrimitive::synchVectorIds()
{
m_childrenIds.resize( m_parentIds->readable().size() );
for (unsigned int i=0; i<m_parentIds->readable().size(); i++)
{
int thisParentId = m_parentIds->readable()[i];
if ( thisParentId >= 0)
{
m_childrenIds[ thisParentId ].push_back( i );
}
}
}
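// Usage sketch (an assumption, not from the original source): build the
// primitive from per-joint world matrices and parent indices, then render it
// inside a bound GL state:
//   SkeletonPrimitive::Ptr skel = new SkeletonPrimitive( mats, parents, true, 0.5f, primVars );
//   skel->render( state, style );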
/*
* Developed by Justin Mead
* ©2011 MeadMiracle
* www.meadmiracle.com / [email protected]
* Version 1.3
* Testing: IE8/Windows XP
* Firefox/Windows XP
* Chrome/Windows XP
* Licensed under the Creative Commons GPL http://creativecommons.org/licenses/GPL/2.0/
*/
(function ( $ ) {
var settings = new Array();
var group1 = new Array();
var group2 = new Array();
var onSort = new Array();
$.configureBoxes = function ( options ) {
var index = settings.push( { box1View: 'box1View', box1Storage: 'box1Storage', box1Filter: 'box1Filter', box1Clear: 'box1Clear', box1Counter: 'box1Counter', box2View: 'box2View', box2Storage: 'box2Storage', box2Filter: 'box2Filter', box2Clear: 'box2Clear', box2Counter: 'box2Counter', to1: 'to1', allTo1: 'allTo1', to2: 'to2', allTo2: 'allTo2', transferMode: 'move', sortBy: 'text', useFilters: true, useCounters: true, useSorting: true, selectOnSubmit: true } );
index--;
$.extend( settings[index], options );
group1.push( { view: settings[index].box1View, storage: settings[index].box1Storage, filter: settings[index].box1Filter, clear: settings[index].box1Clear, counter: settings[index].box1Counter, index: index } );
group2.push( { view: settings[index].box2View, storage: settings[index].box2Storage, filter: settings[index].box2Filter, clear: settings[index].box2Clear, counter: settings[index].box2Counter, index: index } );
if ( settings[index].sortBy == 'text' ) {
onSort.push( function ( a, b ) {
var aVal = a.text.toLowerCase();
var bVal = b.text.toLowerCase();
if ( aVal < bVal ) {
return -1;
}
if ( aVal > bVal ) {
return 1;
}
return 0;
} );
} else {
onSort.push( function ( a, b ) {
var aVal = a.value.toLowerCase();
var bVal = b.value.toLowerCase();
if ( aVal < bVal ) {
return -1;
}
if ( aVal > bVal ) {
return 1;
}
return 0;
} );
}
if ( settings[index].useFilters ) {
$( '#' + group1[index].filter ).keyup( function () {
Filter( group1[index] );
} );
$( '#' + group2[index].filter ).keyup( function () {
Filter( group2[index] );
} );
$( '#' + group1[index].clear ).click( function () {
ClearFilter( group1[index] );
} );
$( '#' + group2[index].clear ).click( function () {
ClearFilter( group2[index] );
} );
}
if ( IsMoveMode( settings[index] ) ) {
$( '#' + group2[index].view ).dblclick( function () {
MoveSelected( group2[index], group1[index] );
} );
$( '#' + settings[index].to1 ).click( function () {
MoveSelected( group2[index], group1[index] );
} );
$( '#' + settings[index].allTo1 ).click( function () {
MoveAll( group2[index], group1[index] );
} );
} else {
$( '#' + group2[index].view ).dblclick( function () {
RemoveSelected( group2[index], group1[index] );
} );
$( '#' + settings[index].to1 ).click( function () {
RemoveSelected( group2[index], group1[index] );
} );
$( '#' + settings[index].allTo1 ).click( function () {
RemoveAll( group2[index], group1[index] );
} );
}
$( '#' + group1[index].view ).dblclick( function () {
MoveSelected( group1[index], group2[index] );
} );
$( '#' + settings[index].to2 ).click( function () {
MoveSelected( group1[index], group2[index] );
} );
$( '#' + settings[index].allTo2 ).click( function () {
MoveAll( group1[index], group2[index] );
} );
if ( settings[index].useCounters ) {
UpdateLabel( group1[index] );
UpdateLabel( group2[index] );
}
if ( settings[index].useSorting ) {
SortOptions( group1[index] );
SortOptions( group2[index] );
}
$( '#' + group1[index].storage + ',#' + group2[index].storage ).css( 'display', 'none' );
if ( settings[index].selectOnSubmit ) {
$( '#' + settings[index].box2View ).closest( 'form' ).submit( function () {
$( '#' + settings[index].box2View ).children( 'option' ).attr( 'selected', 'selected' );
} );
}
};
function UpdateLabel( group ) {
var showingCount = $( "#" + group.view + " option" ).size();
var hiddenCount = $( "#" + group.storage + " option" ).size();
$( "#" + group.counter ).text( 'Showing ' + showingCount + ' of ' + (showingCount + hiddenCount) );
}
function Filter( group ) {
var index = group.index;
var filterLower;
if ( settings[index].useFilters ) {
filterLower = $( '#' + group.filter ).val().toString().toLowerCase();
} else {
filterLower = '';
}
$( '#' + group.view + ' option' ).filter(function ( i ) {
var toMatch = $( this ).text().toString().toLowerCase();
return toMatch.indexOf( filterLower ) == -1;
} ).appendTo( '#' + group.storage );
$( '#' + group.storage + ' option' ).filter(function ( i ) {
var toMatch = $( this ).text().toString().toLowerCase();
return toMatch.indexOf( filterLower ) != -1;
} ).appendTo( '#' + group.view );
try {
$( '#' + group.view + ' option' ).removeAttr( 'selected' );
} catch ( ex ) {
}
if ( settings[index].useSorting ) {
SortOptions( group );
}
if ( settings[index].useCounters ) {
UpdateLabel( group );
}
}
function SortOptions( group ) {
var $toSortOptions = $( '#' + group.view + ' option' );
$toSortOptions.sort( onSort[group.index] );
$( '#' + group.view ).empty().append( $toSortOptions );
}
function MoveSelected( fromGroup, toGroup ) {
if ( IsMoveMode( settings[fromGroup.index] ) ) {
$( '#' + fromGroup.view + ' option:selected' ).appendTo( '#' + toGroup.view );
} else {
$( '#' + fromGroup.view + ' option:selected:not([class*=copiedOption])' ).clone().appendTo( '#' + toGroup.view ).end().end().addClass( 'copiedOption' );
}
try {
$( '#' + fromGroup.view + ' option,#' + toGroup.view + ' option' ).removeAttr( 'selected' );
} catch ( ex ) {
}
Filter( toGroup );
if ( settings[fromGroup.index].useCounters ) {
UpdateLabel( fromGroup );
}
}
function MoveAll( fromGroup, toGroup ) {
if ( IsMoveMode( settings[fromGroup.index] ) ) {
$( '#' + fromGroup.view + ' option' ).appendTo( '#' + toGroup.view );
} else {
$( '#' + fromGroup.view + ' option:not([class*=copiedOption])' ).clone().appendTo( '#' + toGroup.view ).end().end().addClass( 'copiedOption' );
}
try {
$( '#' + fromGroup.view + ' option,#' + toGroup.view + ' option' ).removeAttr( 'selected' );
} catch ( ex ) {
}
Filter( toGroup );
if ( settings[fromGroup.index].useCounters ) {
UpdateLabel( fromGroup );
}
}
function RemoveSelected( removeGroup, otherGroup ) {
$( '#' + otherGroup.view + ' option.copiedOption' ).add( '#' + otherGroup.storage + ' option.copiedOption' ).remove();
try {
$( '#' + removeGroup.view + ' option:selected' ).appendTo( '#' + otherGroup.view ).removeAttr( 'selected' );
} catch ( ex ) {
}
$( '#' + removeGroup.view + ' option' ).add( '#' + removeGroup.storage + ' option' ).clone().addClass( 'copiedOption' ).appendTo( '#' + otherGroup.view );
Filter( otherGroup );
if ( settings[removeGroup.index].useCounters ) {
UpdateLabel( removeGroup );
}
}
function RemoveAll( removeGroup, otherGroup ) {
$( '#' + otherGroup.view + ' option.copiedOption' ).add( '#' + otherGroup.storage + ' option.copiedOption' ).remove();
try {
$( '#' + removeGroup.storage + ' option' ).clone().addClass( 'copiedOption' ).add( '#' + removeGroup.view + ' option' ).appendTo( '#' + otherGroup.view ).removeAttr( 'selected' );
} catch ( ex ) {
}
Filter( otherGroup );
if ( settings[removeGroup.index].useCounters ) {
UpdateLabel( removeGroup );
}
}
function ClearFilter( group ) {
$( '#' + group.filter ).val( '' );
$( '#' + group.storage + ' option' ).appendTo( '#' + group.view );
try {
$( '#' + group.view + ' option' ).removeAttr( 'selected' );
} catch ( ex ) {
}
if ( settings[group.index].useSorting ) {
SortOptions( group );
}
if ( settings[group.index].useCounters ) {
UpdateLabel( group );
}
}
function IsMoveMode( currSettings ) {
return currSettings.transferMode == 'move';
}
})( jQuery );
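// Usage sketch (an assumption; the ids shown are the plugin's documented
// defaults from the settings object above):
//   $.configureBoxes({ box1View: 'box1View', box2View: 'box2View', transferMode: 'copy' });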
/*
* Copyright 2020 Hewlett Packard Enterprise Development LP
* Copyright 2004-2019 Cray Inc.
* Other additional copyright holders may be indicated within.
*
* The entirety of this work is licensed under the Apache License,
* Version 2.0 (the "License"); you may not use this file except
* in compliance with the License.
*
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef _CYGWIN_CHPLSYS_H_
#define _CYGWIN_CHPLSYS_H_
#include "../chplsys.h"
#define chplGetPageSize() 4096
// Why do we hardcode the pagesize into the cygwin implementation?
//
// From the Cygwin mailing list archives:
//
// On Jun 15 15:09, Dave Korn wrote:
// > On 15 June 2006 14:56, Ehren Jarosek wrote:
// >
// > > I don't know if this is something I am doing wrong or an issue.
// > >
// > > When compiling under cygwin sysconf(_SC_PAGESIZE) returns 65536 (64k)
// > > memory page size. My understanding is that:
// > >
// > > sysconf(_SC_PAGESIZE) * sysconf(_SC_PHYS_PAGES)
// > >
// > > should yield the total physical memory size of the machine. However,
// > > when I do this it yields a very large number (actually overflows my
// > > long). However, if I multiply sysconf(_SC_PHYS_PAGES) * 4096 it yields
// > > the correct size.
// >
// > Alas there is a problem with the definition of sysconf: it is
// > supposed to be the size of the unit of granularity of mmap'ing, but
// > it is also supposed to be the size of a single pageframe of memory.
// > [...]
//
// _SC_PAGESIZE is only for indicating the page size as used in calls to
// mmap(2). POSIX does not demand that _SC_PAGESIZE is actually the
// physical page size.
//
// Two quotes from the Linux man pages:
//
// $ man getpagesize
// [...]
// The function getpagesize() returns the number of bytes in a page,
// where a "page" is the thing used where it says in the description of
// mmap(2) that files are mapped in page-sized units.
//
// The size of the kind of pages that mmap uses, is found using
//
// #include <unistd.h>
// long sz = sysconf(_SC_PAGESIZE);
//
//
// $ man sysconf
// [...]
// These values also exist, but may not be standard.
//
// - _SC_PHYS_PAGES
// The number of pages of physical memory. Note that it is possi-
// ble for the product of this value and the value of
// _SC_PAGE_SIZE to overflow.
//
//
// So, actually Ehren's application works on Linux just coincidentally,
// since it make invalid assumptions.
//
//
// Corinna
//
// --
// Corinna Vinschen Please, send mails regarding Cygwin to
// Cygwin Project Co-Leader cygwin AT cygwin DOT com
// Red Hat
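// Usage sketch (an assumption, not from the original header): with the page
// size fixed at compile time, page arithmetic needs no syscall, e.g.
//   size_t npages = (nbytes + chplGetPageSize() - 1) / chplGetPageSize();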
#endif
/*
* Copyright (C) 2014 Andreas Steffen
* HSR Hochschule fuer Technik Rapperswil
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version. See <http://www.fsf.org/copyleft/gpl.txt>.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
* or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*/
#include "test_suite.h"
#include <imcv.h>
#include <pa_tnc/pa_tnc_attr.h>
#include <seg/seg_env.h>
#include <seg/seg_contract.h>
#include <seg/seg_contract_manager.h>
#include <ietf/ietf_attr_pa_tnc_error.h>
#include <ita/ita_attr.h>
#include <ita/ita_attr_command.h>
#include <ita/ita_attr_dummy.h>
#include <tcg/seg/tcg_seg_attr_seg_env.h>
#include <tncif_pa_subtypes.h>
static struct {
uint32_t max_seg_size, next_segs, last_seg_size;
} seg_env_tests[] = {
{ 0, 0, 0 },
{ 11, 0, 0 },
{ 12, 3, 12 },
{ 13, 3, 9 },
{ 15, 3, 3 },
{ 16, 2, 16 },
{ 17, 2, 14 },
{ 23, 2, 2 },
{ 24, 1, 24 },
{ 25, 1, 23 },
{ 47, 1, 1 },
{ 48, 0, 0 },
};
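/* Interpretation (inferred from the tests below, not original documentation):
 * each row drives one segmentation run; next_segs counts the segments after
 * the first one, last_seg_size is the final segment's size, and rows with
 * next_segs == 0 expect seg_env_create() to return NULL for that max_seg_size. */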
static char command[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
static uint32_t id = 0x123456;
START_TEST(test_imcv_seg_env)
{
pa_tnc_attr_t *attr, *attr1, *base_attr, *base_attr1, *error;
tcg_seg_attr_seg_env_t *seg_env_attr;
ita_attr_command_t *ita_attr;
seg_env_t *seg_env, *seg_env1;
pen_type_t type;
uint32_t base_attr_id, max_seg_size, last_seg_size, seg_size, offset;
uint8_t flags;
bool last, last_seg;
chunk_t value, segment, seg;
int n;
libimcv_init(FALSE);
max_seg_size = seg_env_tests[_i].max_seg_size;
last_seg_size = seg_env_tests[_i].last_seg_size;
base_attr = ita_attr_command_create(command);
base_attr->build(base_attr);
seg_env = seg_env_create(id, base_attr, max_seg_size);
if (seg_env_tests[_i].next_segs == 0)
{
ck_assert(seg_env == NULL);
}
else
{
ck_assert(seg_env->get_base_attr_id(seg_env) == id);
base_attr1 = seg_env->get_base_attr(seg_env);
ck_assert(base_attr == base_attr1);
base_attr1->destroy(base_attr1);
for (n = 0; n <= seg_env_tests[_i].next_segs; n++)
{
last_seg = (n == seg_env_tests[_i].next_segs);
seg_size = (last_seg) ? last_seg_size : max_seg_size;
if (n == 0)
{
/* create first segment */
attr = seg_env->first_segment(seg_env, 0);
seg_env_attr = (tcg_seg_attr_seg_env_t*)attr;
segment = seg_env_attr->get_segment(seg_env_attr, &flags);
if (max_seg_size > 12)
{
seg = chunk_create(command, seg_size - 12);
ck_assert(chunk_equals(seg, chunk_skip(segment, 12)));
}
ck_assert(flags == (SEG_ENV_FLAG_MORE | SEG_ENV_FLAG_START));
}
else
{
/* create next segments */
attr = seg_env->next_segment(seg_env, &last);
ck_assert(last == last_seg);
seg_env_attr = (tcg_seg_attr_seg_env_t*)attr;
segment = seg_env_attr->get_segment(seg_env_attr, &flags);
seg = chunk_create(command + n * max_seg_size - 12, seg_size);
ck_assert(chunk_equals(seg, segment));
ck_assert(flags == (last_seg ? SEG_ENV_FLAG_NONE :
SEG_ENV_FLAG_MORE));
}
/* check built segment envelope attribute */
value = attr->get_value(attr);
ck_assert(value.len == 4 + seg_size);
ck_assert(segment.len == seg_size);
ck_assert(seg_env_attr->get_base_attr_id(seg_env_attr) == id);
/* create parse segment envelope attribute from data */
attr1 = tcg_seg_attr_seg_env_create_from_data(value.len, value);
ck_assert(attr1->process(attr1, &offset) == SUCCESS);
attr->destroy(attr);
seg_env_attr = (tcg_seg_attr_seg_env_t*)attr1;
segment = seg_env_attr->get_segment(seg_env_attr, &flags);
base_attr_id = seg_env_attr->get_base_attr_id(seg_env_attr);
ck_assert(base_attr_id == id);
/* create and update seg_env object on the receiving side */
if (n == 0)
{
ck_assert(flags == (SEG_ENV_FLAG_MORE | SEG_ENV_FLAG_START));
seg_env1 = seg_env_create_from_data(base_attr_id, segment,
max_seg_size, &error);
}
else
{
ck_assert(flags == (last_seg ? SEG_ENV_FLAG_NONE :
SEG_ENV_FLAG_MORE));
seg_env1->add_segment(seg_env1, segment, &error);
}
attr1->destroy(attr1);
}
/* check reconstructed base attribute */
base_attr1 = seg_env1->get_base_attr(seg_env1);
ck_assert(base_attr1);
type = base_attr1->get_type(base_attr1);
ck_assert(type.vendor_id == PEN_ITA);
ck_assert(type.type == ITA_ATTR_COMMAND);
ita_attr = (ita_attr_command_t*)base_attr1;
ck_assert(streq(ita_attr->get_command(ita_attr), command));
seg_env->destroy(seg_env);
seg_env1->destroy(seg_env1);
base_attr1->destroy(base_attr1);
}
libimcv_deinit();
}
END_TEST
START_TEST(test_imcv_seg_env_special)
{
pa_tnc_attr_t *attr, *attr1, *base_attr;
tcg_seg_attr_seg_env_t *seg_env_attr;
pen_type_t type;
seg_env_t *seg_env;
chunk_t segment, value;
uint32_t max_attr_len = 60;
uint32_t max_seg_size = 47;
uint32_t last_seg_size = 4;
uint32_t offset = 12;
base_attr = ita_attr_command_create(command);
base_attr->build(base_attr);
/* set noskip flag in base attribute */
base_attr->set_noskip_flag(base_attr, TRUE);
seg_env = seg_env_create(id, base_attr, max_seg_size);
attr = seg_env->first_segment(seg_env, max_attr_len);
attr->destroy(attr);
/* don't return last segment indicator */
attr = seg_env->next_segment(seg_env, NULL);
/* build attribute */
attr->build(attr);
/* don't return flags */
seg_env_attr = (tcg_seg_attr_seg_env_t*)attr;
segment = seg_env_attr->get_segment(seg_env_attr, NULL);
ck_assert(segment.len == last_seg_size);
/* get segment envelope attribute reference and destroy it */
attr1 = attr->get_ref(attr);
attr1->destroy(attr1);
/* check some standard methods */
type = attr->get_type(attr);
ck_assert(type.vendor_id == PEN_TCG);
ck_assert(type.type == TCG_SEG_ATTR_SEG_ENV);
ck_assert(attr->get_noskip_flag(attr) == FALSE);
attr->set_noskip_flag(attr, TRUE);
ck_assert(attr->get_noskip_flag(attr) == TRUE);
/* request next segment which does not exist */
ck_assert(seg_env->next_segment(seg_env, NULL) == NULL);
/* create and parse a too short segment envelope attribute */
attr1 = tcg_seg_attr_seg_env_create_from_data(0, chunk_empty);
ck_assert(attr1->process(attr1, &offset) == FAILED);
ck_assert(offset == 0);
attr1->destroy(attr1);
/* create and parse correct segment envelope attribute */
value = attr->get_value(attr);
attr1 = tcg_seg_attr_seg_env_create_from_data(value.len, value);
ck_assert(attr1->process(attr1, &offset) == SUCCESS);
type = attr1->get_type(attr1);
ck_assert(type.vendor_id == PEN_TCG);
ck_assert(type.type == TCG_SEG_ATTR_SEG_ENV);
attr1->destroy(attr1);
/* cleanup */
attr->destroy(attr);
seg_env->destroy(seg_env);
}
END_TEST
static struct {
pa_tnc_error_code_t error_code;
chunk_t segment;
} env_invalid_tests[] = {
{ PA_ERROR_INVALID_PARAMETER, { NULL, 0 } },
{ PA_ERROR_INVALID_PARAMETER, chunk_from_chars(
0x00, 0xff, 0xff, 0xf0, 0x01, 0x02, 0x03, 0x04, 0x00, 0x00, 0x00, 0x0a)
},
{ PA_ERROR_INVALID_PARAMETER, chunk_from_chars(
0x00, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x0c)
},
{ PA_ERROR_INVALID_PARAMETER, chunk_from_chars(
0x00, 0x00, 0x90, 0x2a, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x0c)
},
{ PA_ERROR_ATTR_TYPE_NOT_SUPPORTED, chunk_from_chars(
0x80, 0x00, 0x90, 0x2a, 0xff, 0xff, 0xff, 0xfe, 0x00, 0x00, 0x00, 0x0c)
},
{ PA_ERROR_RESERVED, chunk_from_chars(
0x00, 0x00, 0x90, 0x2a, 0xff, 0xff, 0xff, 0xfe, 0x00, 0x00, 0x00, 0x0c)
},
{ PA_ERROR_RESERVED, chunk_from_chars(
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x0c)
},
{ PA_ERROR_INVALID_PARAMETER, chunk_from_chars(
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x0c)
}
};
START_TEST(test_imcv_seg_env_invalid)
{
seg_env_t *seg_env;
pen_type_t error_code;
	pa_tnc_attr_t *error;
ietf_attr_pa_tnc_error_t *error_attr;
libimcv_init(FALSE);
seg_env = seg_env_create_from_data(id, env_invalid_tests[_i].segment, 20,
&error);
ck_assert(seg_env == NULL);
if (env_invalid_tests[_i].error_code == PA_ERROR_RESERVED)
{
ck_assert(error == NULL);
}
else
{
ck_assert(error);
error->build(error);
error_attr = (ietf_attr_pa_tnc_error_t*)error;
error_code = error_attr->get_error_code(error_attr);
ck_assert(error_code.vendor_id == PEN_IETF);
ck_assert(error_code.type == env_invalid_tests[_i].error_code);
error->destroy(error);
}
libimcv_deinit();
}
END_TEST
START_TEST(test_imcv_seg_contract)
{
seg_contract_t *contract_i, *contract_r;
tcg_seg_attr_seg_env_t *seg_env_attr;
ita_attr_command_t *ita_attr;
pa_tnc_attr_t *attr, *base_attr_i, *base_attr_r, *error;
pen_type_t type, msg_type = { PEN_ITA, PA_SUBTYPE_ITA_TEST };
uint32_t max_seg_size, max_attr_size = 1000, issuer_id = 1;
uint32_t base_attr_id;
bool more;
libimcv_init(FALSE);
max_seg_size = seg_env_tests[_i].max_seg_size;
base_attr_r = ita_attr_command_create(command);
base_attr_r->build(base_attr_r);
contract_i = seg_contract_create(msg_type, max_attr_size, max_seg_size,
TRUE, issuer_id, FALSE);
contract_r = seg_contract_create(msg_type, max_attr_size, max_seg_size,
FALSE, issuer_id, TRUE);
attr = contract_r->first_segment(contract_r,
base_attr_r->get_ref(base_attr_r), 0);
if (seg_env_tests[_i].next_segs == 0)
{
ck_assert(attr == NULL);
}
else
{
ck_assert(attr);
seg_env_attr = (tcg_seg_attr_seg_env_t*)attr;
base_attr_id = seg_env_attr->get_base_attr_id(seg_env_attr);
ck_assert(base_attr_id == 1);
base_attr_i = contract_i->add_segment(contract_i, attr, &error, &more);
ck_assert(base_attr_i == NULL);
attr->destroy(attr);
ck_assert(more);
while (more)
{
attr = contract_r->next_segment(contract_r, base_attr_id);
ck_assert(attr);
seg_env_attr = (tcg_seg_attr_seg_env_t*)attr;
base_attr_id = seg_env_attr->get_base_attr_id(seg_env_attr);
ck_assert(base_attr_id == 1);
base_attr_i = contract_i->add_segment(contract_i, attr, &error,
&more);
attr->destroy(attr);
}
ck_assert(base_attr_i);
ck_assert(error == NULL);
type = base_attr_i->get_type(base_attr_i);
ck_assert(pen_type_equals(type, base_attr_r->get_type(base_attr_r)));
ita_attr = (ita_attr_command_t*)base_attr_i;
ck_assert(streq(ita_attr->get_command(ita_attr), command));
base_attr_i->destroy(base_attr_i);
}
contract_i->destroy(contract_i);
contract_r->destroy(contract_r);
base_attr_r->destroy(base_attr_r);
libimcv_deinit();
}
END_TEST
START_TEST(test_imcv_seg_contract_special)
{
seg_contract_t *contract_i, *contract_r;
tcg_seg_attr_seg_env_t *seg_env_attr1, *seg_env_attr2;
ita_attr_command_t *ita_attr;
pa_tnc_attr_t *base_attr1_i, *base_attr2_i, *base_attr1_r, *base_attr2_r;
pa_tnc_attr_t *attr1_f, *attr2_f, *attr1_n, *attr2_n, *attr3, *error;
pen_type_t type, msg_type = { PEN_ITA, PA_SUBTYPE_ITA_TEST };
uint32_t max_seg_size, max_attr_size, issuer_id = 1;
uint32_t base_attr1_id, base_attr2_id;
char info[512];
bool oversize, more;
libimcv_init(FALSE);
/* create two base attributes to be segmented */
base_attr1_r = ita_attr_command_create(command);
base_attr2_r = ita_attr_dummy_create(129);
base_attr1_r->build(base_attr1_r);
base_attr2_r->build(base_attr2_r);
	/* create an issuer contract */
contract_i = seg_contract_create(msg_type, 1000, 47,
TRUE, issuer_id, FALSE);
ck_assert(pen_type_equals(contract_i->get_msg_type(contract_i), msg_type));
ck_assert(contract_i->is_issuer(contract_i));
ck_assert(!contract_i->is_null(contract_i));
/* set null contract */
contract_i->set_max_size(contract_i, SEG_CONTRACT_MAX_SIZE_VALUE,
SEG_CONTRACT_MAX_SIZE_VALUE);
ck_assert(contract_i->is_null(contract_i));
/* set and get maximum attribute and segment sizes */
contract_i->set_max_size(contract_i, 1000, 47);
contract_i->get_max_size(contract_i, NULL, NULL);
contract_i->get_max_size(contract_i, &max_attr_size, &max_seg_size);
contract_i->get_info_string(contract_i, info, sizeof(info), TRUE);
ck_assert(max_attr_size == 1000 && max_seg_size == 47);
ck_assert(!contract_i->is_null(contract_i));
/* create a null responder contract*/
contract_r = seg_contract_create(msg_type, SEG_CONTRACT_MAX_SIZE_VALUE,
SEG_CONTRACT_MAX_SIZE_VALUE,
FALSE, issuer_id, TRUE);
ck_assert(!contract_r->is_issuer(contract_r));
ck_assert(!contract_r->check_size(contract_r, base_attr2_r, &oversize));
ck_assert(!oversize);
/* allow no fragmentation */
contract_r->set_max_size(contract_r, 1000, SEG_CONTRACT_MAX_SIZE_VALUE);
ck_assert(!contract_r->is_null(contract_r));
ck_assert(!contract_r->check_size(contract_r, base_attr2_r, &oversize));
ck_assert(!oversize);
/* no maximum size limit and no fragmentation needed */
contract_r->set_max_size(contract_r, SEG_CONTRACT_MAX_SIZE_VALUE, 141);
ck_assert(!contract_r->is_null(contract_r));
ck_assert(!contract_r->check_size(contract_r, base_attr2_r, &oversize));
ck_assert(!oversize);
/* oversize base attribute */
contract_r->set_max_size(contract_r, 140, 47);
ck_assert(!contract_r->is_null(contract_r));
ck_assert(!contract_r->check_size(contract_r, base_attr2_r, &oversize));
ck_assert(oversize);
/* set final maximum attribute and segment sizes */
contract_r->set_max_size(contract_r, 141, 47);
contract_r->get_info_string(contract_r, info, sizeof(info), TRUE);
ck_assert(contract_r->check_size(contract_r, base_attr2_r, &oversize));
ck_assert(!oversize);
/* get first segment of each base attribute */
attr1_f = contract_r->first_segment(contract_r, base_attr1_r->get_ref(base_attr1_r), 0);
attr2_f = contract_r->first_segment(contract_r, base_attr2_r->get_ref(base_attr2_r), 0);
ck_assert(attr1_f);
ck_assert(attr2_f);
seg_env_attr1 = (tcg_seg_attr_seg_env_t*)attr1_f;
seg_env_attr2 = (tcg_seg_attr_seg_env_t*)attr2_f;
base_attr1_id = seg_env_attr1->get_base_attr_id(seg_env_attr1);
base_attr2_id = seg_env_attr2->get_base_attr_id(seg_env_attr2);
ck_assert(base_attr1_id == 1);
ck_assert(base_attr2_id == 2);
/* get second segment of each base attribute */
attr1_n = contract_r->next_segment(contract_r, 1);
attr2_n = contract_r->next_segment(contract_r, 2);
ck_assert(attr1_n);
ck_assert(attr2_n);
/* process first segment of first base attribute */
base_attr1_i = contract_i->add_segment(contract_i, attr1_f, &error, &more);
ck_assert(base_attr1_i == NULL);
ck_assert(error == NULL);
ck_assert(more);
/* reapply first segment of first base attribute */
base_attr1_i = contract_i->add_segment(contract_i, attr1_f, &error, &more);
ck_assert(base_attr1_i == NULL);
ck_assert(error == NULL);
ck_assert(more);
/* process stray second segment of second attribute */
base_attr2_i = contract_i->add_segment(contract_i, attr2_n, &error, &more);
ck_assert(base_attr2_i == NULL);
ck_assert(error == NULL);
ck_assert(more);
/* process first segment of second base attribute */
base_attr2_i = contract_i->add_segment(contract_i, attr2_f, &error, &more);
ck_assert(base_attr2_i == NULL);
ck_assert(error == NULL);
ck_assert(more);
/* try to get a segment of a non-existing base-attribute */
attr3 = contract_r->next_segment(contract_r, 3);
ck_assert(attr3 == NULL);
/* process second segment of first base attribute */
base_attr1_i = contract_i->add_segment(contract_i, attr1_n, &error, &more);
ck_assert(base_attr1_i);
ck_assert(error == NULL);
ck_assert(!more);
/* process second segment of second base attribute */
base_attr2_i = contract_i->add_segment(contract_i, attr2_n, &error, &more);
ck_assert(base_attr2_i == NULL);
ck_assert(error == NULL);
ck_assert(more);
/* destroy first and second segments */
attr1_f->destroy(attr1_f);
attr2_f->destroy(attr2_f);
attr1_n->destroy(attr1_n);
attr2_n->destroy(attr2_n);
/* request surplus segment of first base attribute */
attr1_n = contract_r->next_segment(contract_r, 1);
ck_assert(attr1_n == NULL);
/* get last segment of second base attribute */
attr2_n = contract_r->next_segment(contract_r, 2);
ck_assert(attr2_n);
/* process last segment of second base attribute */
base_attr2_i = contract_i->add_segment(contract_i, attr2_n, &error, &more);
attr2_n->destroy(attr2_n);
ck_assert(base_attr2_i);
ck_assert(error == NULL);
ck_assert(!more);
/* request surplus segment of second base attribute */
attr2_n = contract_r->next_segment(contract_r, 2);
ck_assert(attr2_n == NULL);
/* compare original with reconstructed base attributes */
type = base_attr1_i->get_type(base_attr1_i);
ck_assert(pen_type_equals(type, base_attr1_r->get_type(base_attr1_r)));
ita_attr = (ita_attr_command_t*)base_attr1_i;
ck_assert(streq(ita_attr->get_command(ita_attr), command));
type = base_attr2_i->get_type(base_attr2_i);
ck_assert(pen_type_equals(type, base_attr2_r->get_type(base_attr2_r)));
ck_assert(chunk_equals(base_attr2_i->get_value(base_attr2_i),
base_attr2_r->get_value(base_attr2_r)));
/* cleanup */
base_attr1_r->destroy(base_attr1_r);
base_attr2_r->destroy(base_attr2_r);
base_attr1_i->destroy(base_attr1_i);
base_attr2_i->destroy(base_attr2_i);
contract_i->destroy(contract_i);
contract_r->destroy(contract_r);
libimcv_deinit();
}
END_TEST
static struct {
bool err_f;
chunk_t frag_f;
bool err_n;
bool base_attr;
chunk_t frag_n;
} contract_invalid_tests[] = {
{ FALSE, chunk_from_chars(
0xc0, 0x00, 0x00, 0x01, 0x00, 0x00, 0x90, 0x2a, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x0d),
FALSE, TRUE, chunk_from_chars(
0x00, 0x00, 0x00, 0x01, 0x01 )
},
{ FALSE, chunk_from_chars(
0xc0, 0x00, 0x00, 0x02, 0x00, 0x00, 0x90, 0x2a, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x0e),
TRUE, FALSE, chunk_from_chars(
0x00, 0x00, 0x00, 0x02, 0x01 )
},
{ TRUE, chunk_from_chars(
0xc0, 0x00, 0x00, 0x03, 0x00, 0x00, 0x55, 0x97, 0x00, 0x00, 0x00, 0x23,
0x00, 0x00, 0x00, 0x0d),
FALSE, FALSE, chunk_from_chars(
0x00, 0x00, 0x00, 0x03, 0x01 )
},
{ FALSE, chunk_from_chars(
0xc0, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08,
0x00, 0x00, 0x00, 0x14),
FALSE, FALSE, chunk_from_chars(
0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 )
},
{ FALSE, chunk_from_chars(
0xc0, 0x00, 0x00, 0x05, 0x00, 0x00, 0x90, 0x2a, 0x00, 0x00, 0x00, 0x03,
0x00, 0x00, 0x00, 0x0f),
TRUE, FALSE, chunk_from_chars(
0x00, 0x00, 0x00, 0x05, 0x00, 0x02, 0x01 )
},
{ FALSE, chunk_from_chars(
0xc0, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x11),
TRUE, FALSE, chunk_from_chars(
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0xff )
}
};
START_TEST(test_imcv_seg_contract_invalid)
{
uint32_t max_seg_size = 12, max_attr_size = 100, issuer_id = 1;
pen_type_t msg_type = { PEN_ITA, PA_SUBTYPE_ITA_TEST };
pa_tnc_attr_t *attr_f, *attr_n, *base_attr, *error;
chunk_t value_f, value_n;
seg_contract_t *contract;
uint32_t offset;
bool more;
libimcv_init(FALSE);
value_f = contract_invalid_tests[_i].frag_f;
value_n = contract_invalid_tests[_i].frag_n;
attr_f = tcg_seg_attr_seg_env_create_from_data(value_f.len, value_f);
attr_n = tcg_seg_attr_seg_env_create_from_data(value_n.len, value_n);
ck_assert(attr_f->process(attr_f, &offset) == SUCCESS);
ck_assert(attr_n->process(attr_n, &offset) == SUCCESS);
contract = seg_contract_create(msg_type, max_attr_size, max_seg_size,
TRUE, issuer_id, FALSE);
base_attr = contract->add_segment(contract, attr_f, &error, &more);
ck_assert(base_attr == NULL);
if (contract_invalid_tests[_i].err_f)
{
ck_assert(error);
error->destroy(error);
}
else
{
ck_assert(error == NULL);
ck_assert(more);
base_attr = contract->add_segment(contract, attr_n, &error, &more);
if (contract_invalid_tests[_i].err_n)
{
ck_assert(error);
error->destroy(error);
}
else
{
ck_assert(error == NULL);
}
if (contract_invalid_tests[_i].base_attr)
{
ck_assert(base_attr);
base_attr->destroy(base_attr);
}
}
/* cleanup */
attr_f->destroy(attr_f);
attr_n->destroy(attr_n);
contract->destroy(contract);
libimcv_deinit();
}
END_TEST
START_TEST(test_imcv_seg_contract_mgr)
{
char buf[BUF_LEN];
uint32_t max_seg_size = 12, max_attr_size = 100;
pen_type_t msg_type1 = { PEN_ITA, PA_SUBTYPE_ITA_TEST };
pen_type_t msg_type2 = { PEN_IETF, PA_SUBTYPE_IETF_OPERATING_SYSTEM };
seg_contract_manager_t *contracts;
seg_contract_t *cx, *c1, *c2, *c3, *c4;
contracts = seg_contract_manager_create();
/* add contract template as issuer */
c1 = seg_contract_create(msg_type1, max_attr_size, max_seg_size,
TRUE, 1, FALSE);
c1->get_info_string(c1, buf, BUF_LEN, TRUE);
contracts->add_contract(contracts, c1);
/* received contract request for msg_type1 as responder */
cx = contracts->get_contract(contracts, msg_type1, FALSE, 2);
ck_assert(cx == NULL);
/* add directed contract as responder */
c2 = seg_contract_create(msg_type1, max_attr_size, max_seg_size,
FALSE, 2, FALSE);
c2->set_responder(c2, 1);
c2->get_info_string(c2, buf, BUF_LEN, TRUE);
contracts->add_contract(contracts, c2);
/* retrieve this contract */
cx = contracts->get_contract(contracts, msg_type1, FALSE, 2);
ck_assert(cx == c2);
/* received directed contract response as issuer */
cx = contracts->get_contract(contracts, msg_type1, TRUE, 3);
ck_assert(cx == NULL);
/* get contract template */
cx = contracts->get_contract(contracts, msg_type1, TRUE, TNC_IMCID_ANY);
ck_assert(cx == c1);
	/* clone the contract template and add it as a directed contract */
c3 = cx->clone(cx);
c3->set_responder(c3, 3);
c3->get_info_string(c3, buf, BUF_LEN, FALSE);
contracts->add_contract(contracts, c3);
/* retrieve this contract */
cx = contracts->get_contract(contracts, msg_type1, TRUE, 3);
ck_assert(cx == c3);
/* received contract request for msg_type2 as responder */
cx = contracts->get_contract(contracts, msg_type2, FALSE, 2);
ck_assert(cx == NULL);
/* add directed contract as responder */
c4 = seg_contract_create(msg_type2, max_attr_size, max_seg_size,
FALSE, 2, FALSE);
c4->set_responder(c4, 1);
contracts->add_contract(contracts, c4);
/* retrieve this contract */
cx = contracts->get_contract(contracts, msg_type2, FALSE, 2);
ck_assert(cx == c4);
contracts->destroy(contracts);
}
END_TEST
Suite *imcv_seg_suite_create()
{
Suite *s;
TCase *tc;
s = suite_create("imcv_seg");
tc = tcase_create("env");
tcase_add_loop_test(tc, test_imcv_seg_env, 0, countof(seg_env_tests));
suite_add_tcase(s, tc);
tc = tcase_create("env_special");
tcase_add_test(tc, test_imcv_seg_env_special);
suite_add_tcase(s, tc);
tc = tcase_create("env_invalid");
tcase_add_loop_test(tc, test_imcv_seg_env_invalid, 0,
countof(env_invalid_tests));
suite_add_tcase(s, tc);
tc = tcase_create("contract");
tcase_add_loop_test(tc, test_imcv_seg_contract, 0, countof(seg_env_tests));
suite_add_tcase(s, tc);
tc = tcase_create("contract_special");
tcase_add_test(tc, test_imcv_seg_contract_special);
suite_add_tcase(s, tc);
tc = tcase_create("contract_invalid");
tcase_add_loop_test(tc, test_imcv_seg_contract_invalid, 0,
countof(contract_invalid_tests));
suite_add_tcase(s, tc);
tc = tcase_create("contract_mgr");
tcase_add_test(tc, test_imcv_seg_contract_mgr);
suite_add_tcase(s, tc);
return s;
}
// Copyright 2006 Nemanja Trifunovic
/*
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:
The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
*/
#ifndef UTF8_FOR_CPP_UNCHECKED_H_2675DCD0_9480_4c0c_B92A_CC14C027B731
#define UTF8_FOR_CPP_UNCHECKED_H_2675DCD0_9480_4c0c_B92A_CC14C027B731
#include "core.h"
namespace utf8
{
namespace unchecked
{
template <typename octet_iterator>
octet_iterator append(uint32_t cp, octet_iterator result)
{
if (cp < 0x80) // one octet
*(result++) = static_cast<uint8_t>(cp);
else if (cp < 0x800) { // two octets
*(result++) = static_cast<uint8_t>((cp >> 6) | 0xc0);
*(result++) = static_cast<uint8_t>((cp & 0x3f) | 0x80);
}
else if (cp < 0x10000) { // three octets
*(result++) = static_cast<uint8_t>((cp >> 12) | 0xe0);
*(result++) = static_cast<uint8_t>(((cp >> 6) & 0x3f) | 0x80);
*(result++) = static_cast<uint8_t>((cp & 0x3f) | 0x80);
}
else { // four octets
*(result++) = static_cast<uint8_t>((cp >> 18) | 0xf0);
*(result++) = static_cast<uint8_t>(((cp >> 12) & 0x3f)| 0x80);
*(result++) = static_cast<uint8_t>(((cp >> 6) & 0x3f) | 0x80);
*(result++) = static_cast<uint8_t>((cp & 0x3f) | 0x80);
}
return result;
}
template <typename octet_iterator, typename output_iterator>
output_iterator replace_invalid(octet_iterator start, octet_iterator end, output_iterator out, uint32_t replacement)
{
while (start != end) {
octet_iterator sequence_start = start;
internal::utf_error err_code = utf8::internal::validate_next(start, end);
switch (err_code) {
case internal::UTF8_OK :
for (octet_iterator it = sequence_start; it != start; ++it)
*out++ = *it;
break;
case internal::NOT_ENOUGH_ROOM:
out = utf8::unchecked::append (replacement, out);
start = end;
break;
case internal::INVALID_LEAD:
out = utf8::unchecked::append (replacement, out);
++start;
break;
case internal::INCOMPLETE_SEQUENCE:
case internal::OVERLONG_SEQUENCE:
case internal::INVALID_CODE_POINT:
out = utf8::unchecked::append (replacement, out);
++start;
// just one replacement mark for the sequence
while (start != end && utf8::internal::is_trail(*start))
++start;
break;
}
}
return out;
}
template <typename octet_iterator, typename output_iterator>
inline output_iterator replace_invalid(octet_iterator start, octet_iterator end, output_iterator out)
{
static const uint32_t replacement_marker = utf8::internal::mask16(0xfffd);
return utf8::unchecked::replace_invalid(start, end, out, replacement_marker);
}
template <typename octet_iterator>
uint32_t next(octet_iterator& it)
{
uint32_t cp = utf8::internal::mask8(*it);
typename std::iterator_traits<octet_iterator>::difference_type length = utf8::internal::sequence_length(it);
switch (length) {
case 1:
break;
case 2:
it++;
cp = ((cp << 6) & 0x7ff) + ((*it) & 0x3f);
break;
case 3:
++it;
cp = ((cp << 12) & 0xffff) + ((utf8::internal::mask8(*it) << 6) & 0xfff);
++it;
cp += (*it) & 0x3f;
break;
case 4:
++it;
cp = ((cp << 18) & 0x1fffff) + ((utf8::internal::mask8(*it) << 12) & 0x3ffff);
++it;
cp += (utf8::internal::mask8(*it) << 6) & 0xfff;
++it;
cp += (*it) & 0x3f;
break;
}
++it;
return cp;
}
template <typename octet_iterator>
uint32_t peek_next(octet_iterator it)
{
return utf8::unchecked::next(it);
}
template <typename octet_iterator>
uint32_t prior(octet_iterator& it)
{
while (utf8::internal::is_trail(*(--it))) ;
octet_iterator temp = it;
return utf8::unchecked::next(temp);
}
template <typename octet_iterator, typename distance_type>
void advance (octet_iterator& it, distance_type n)
{
const distance_type zero(0);
if (n < zero) {
// backward
for (distance_type i = n; i < zero; ++i)
utf8::unchecked::prior(it);
} else {
// forward
for (distance_type i = zero; i < n; ++i)
utf8::unchecked::next(it);
}
}
template <typename octet_iterator>
typename std::iterator_traits<octet_iterator>::difference_type
distance (octet_iterator first, octet_iterator last)
{
typename std::iterator_traits<octet_iterator>::difference_type dist;
for (dist = 0; first < last; ++dist)
utf8::unchecked::next(first);
return dist;
}
template <typename u16bit_iterator, typename octet_iterator>
octet_iterator utf16to8 (u16bit_iterator start, u16bit_iterator end, octet_iterator result)
{
while (start != end) {
uint32_t cp = utf8::internal::mask16(*start++);
// Take care of surrogate pairs first
if (utf8::internal::is_lead_surrogate(cp)) {
uint32_t trail_surrogate = utf8::internal::mask16(*start++);
cp = (cp << 10) + trail_surrogate + internal::SURROGATE_OFFSET;
}
result = utf8::unchecked::append(cp, result);
}
return result;
}
template <typename u16bit_iterator, typename octet_iterator>
u16bit_iterator utf8to16 (octet_iterator start, octet_iterator end, u16bit_iterator result)
{
while (start < end) {
uint32_t cp = utf8::unchecked::next(start);
if (cp > 0xffff) { //make a surrogate pair
*result++ = static_cast<uint16_t>((cp >> 10) + internal::LEAD_OFFSET);
*result++ = static_cast<uint16_t>((cp & 0x3ff) + internal::TRAIL_SURROGATE_MIN);
}
else
*result++ = static_cast<uint16_t>(cp);
}
return result;
}
template <typename octet_iterator, typename u32bit_iterator>
octet_iterator utf32to8 (u32bit_iterator start, u32bit_iterator end, octet_iterator result)
{
while (start != end)
result = utf8::unchecked::append(*(start++), result);
return result;
}
template <typename octet_iterator, typename u32bit_iterator>
u32bit_iterator utf8to32 (octet_iterator start, octet_iterator end, u32bit_iterator result)
{
while (start < end)
(*result++) = utf8::unchecked::next(start);
return result;
}
// The iterator class
template <typename octet_iterator>
class iterator {
octet_iterator it;
public:
typedef uint32_t value_type;
typedef uint32_t* pointer;
typedef uint32_t& reference;
typedef std::ptrdiff_t difference_type;
typedef std::bidirectional_iterator_tag iterator_category;
iterator () {}
explicit iterator (const octet_iterator& octet_it): it(octet_it) {}
// the default "big three" are OK
octet_iterator base () const { return it; }
uint32_t operator * () const
{
octet_iterator temp = it;
return utf8::unchecked::next(temp);
}
bool operator == (const iterator& rhs) const
{
return (it == rhs.it);
}
bool operator != (const iterator& rhs) const
{
return !(operator == (rhs));
}
iterator& operator ++ ()
{
::std::advance(it, utf8::internal::sequence_length(it));
return *this;
}
iterator operator ++ (int)
{
iterator temp = *this;
::std::advance(it, utf8::internal::sequence_length(it));
return temp;
}
iterator& operator -- ()
{
utf8::unchecked::prior(it);
return *this;
}
iterator operator -- (int)
{
iterator temp = *this;
utf8::unchecked::prior(it);
return temp;
}
}; // class iterator
} // namespace utf8::unchecked
} // namespace utf8
#endif // header guard
//
// Generated by class-dump 3.5 (64 bit) (Debug version compiled Oct 15 2018 10:31:50).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2015 by Steve Nygard.
//
#import <IMCore/IMAttachmentMessagePartChatItem.h>
@interface IMExpirableMessageChatItem : IMAttachmentMessagePartChatItem
{
}
@property(readonly, nonatomic) BOOL isSaved;
@property(readonly, nonatomic) BOOL isPlayed;
@end
/*************************************************************
*
* MathJax/jax/output/HTML-CSS/fonts/STIX/IntegralsUpSm/Regular/All.js
*
* Copyright (c) 2009-2018 The MathJax Consortium
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
MathJax.Hub.Insert(
MathJax.OutputJax['HTML-CSS'].FONTDATA.FONTS['STIXIntegralsUpSm'],
{
0x20: [0,0,250,0,0], // SPACE
0xA0: [0,0,250,0,0], // NO-BREAK SPACE
0x222C: [690,189,587,52,605], // DOUBLE INTEGRAL
0x222D: [690,189,817,52,835], // TRIPLE INTEGRAL
0x222F: [690,189,682,52,642], // SURFACE INTEGRAL
0x2230: [690,189,909,52,869], // VOLUME INTEGRAL
0x2231: [690,189,480,52,447], // CLOCKWISE INTEGRAL
0x2232: [690,189,480,52,448], // CLOCKWISE CONTOUR INTEGRAL
0x2233: [690,189,480,52,470], // ANTICLOCKWISE CONTOUR INTEGRAL
0x2A0B: [694,190,556,41,515], // SUMMATION WITH INTEGRAL
0x2A0C: [694,190,1044,68,1081], // QUADRUPLE INTEGRAL OPERATOR
0x2A0D: [694,190,420,68,391], // FINITE PART INTEGRAL
0x2A0E: [694,190,420,68,391], // INTEGRAL WITH DOUBLE STROKE
0x2A0F: [694,190,520,39,482], // INTEGRAL AVERAGE WITH SLASH
0x2A10: [694,190,324,41,380], // CIRCULATION FUNCTION
0x2A11: [694,190,480,52,447], // ANTICLOCKWISE INTEGRATION
0x2A12: [694,190,450,68,410], // LINE INTEGRATION WITH RECTANGULAR PATH AROUND POLE
0x2A13: [690,189,450,68,412], // LINE INTEGRATION WITH SEMICIRCULAR PATH AROUND POLE
0x2A14: [690,189,550,68,512], // LINE INTEGRATION NOT INCLUDING THE POLE
0x2A15: [690,189,450,50,410], // INTEGRAL AROUND A POINT OPERATOR
0x2A16: [694,191,450,50,410], // QUATERNION INTEGRAL OPERATOR
0x2A17: [694,190,611,12,585], // INTEGRAL WITH LEFTWARDS ARROW WITH HOOK
0x2A18: [694,190,450,48,412], // INTEGRAL WITH TIMES SIGN
0x2A19: [694,190,450,59,403], // INTEGRAL WITH INTERSECTION
0x2A1A: [694,190,450,59,403], // INTEGRAL WITH UNION
0x2A1B: [784,189,379,68,416], // INTEGRAL WITH OVERBAR
0x2A1C: [690,283,357,52,400] // INTEGRAL WITH UNDERBAR
}
);
MathJax.Ajax.loadComplete(MathJax.OutputJax["HTML-CSS"].fontDir + "/IntegralsUpSm/Regular/All.js");
/* {[The file is published on the basis of YetiForce Public License 3.0 that can be found in the following directory: licenses/LicenseEN.txt or yetiforce.com]} */
'use strict';
jQuery.Class(
'Settings_Users_Locks_Js',
{},
{
/**
* Add item
* @param {jQuery} content
*/
registerAdd: function (content) {
var thisInstance = this;
content.find('.js-add-item').on('click', function (e) {
var id = parseInt(content.find('#js-lock-count').val()) + 1;
var cloneItem = content.find('.js-clone-item tbody').clone(true, true);
cloneItem
.find('tr')
.attr('data-id', id)
.addClass('row' + id);
content.find('.js-locks-table tbody').append(cloneItem.html());
content.find('#js-lock-count').val(id);
thisInstance.registerDelete(content.find('tr.row' + id));
App.Fields.Picklist.showSelect2ElementView(content.find('tr.row' + id).find('select'));
});
},
/**
* Register events for delete item
* @param {jQuery} content
*/
registerDelete: function (content) {
content.find('.js-delete-item').on('click', function (e) {
var target = $(e.currentTarget);
target.closest('tr').remove();
});
},
/**
* Register events for save
* @param {jQuery} content
*/
registerSave: function (content) {
content.find('.js-save-items').on('click', function (e) {
var data = [];
content.find('.js-locks-table tbody tr').each(function (index) {
data.push({
user: $(this).find('.js-users').val(),
locks: $(this).find('.js-locks').val()
});
});
app.saveAjax('saveLocks', data).done(function (data) {
Settings_Vtiger_Index_Js.showMessage({ type: 'success', text: data.result.message });
});
});
},
/**
* Main function
*/
registerEvents: function () {
var content = $('.contentsDiv');
this.registerAdd(content);
this.registerDelete(content);
this.registerSave(content);
}
}
);
{
"pages": [
"pages/index/index",
"pages/test/test",
"pages/logs/logs"
],
"window": {
"backgroundTextStyle": "light",
"navigationBarBackgroundColor": "#fff",
"navigationBarTitleText": "WeChat",
"navigationBarTextStyle": "black"
},
"sitemapLocation": "sitemap.json"
}
/* inffixed.h -- table for decoding fixed codes
* Generated automatically by makefixed().
*/
/* WARNING: this file should *not* be used by applications.
It is part of the implementation of this library and is
subject to change. Applications should only use zlib.h.
*/
static const code lenfix[512] = {
{96,7,0},{0,8,80},{0,8,16},{20,8,115},{18,7,31},{0,8,112},{0,8,48},
{0,9,192},{16,7,10},{0,8,96},{0,8,32},{0,9,160},{0,8,0},{0,8,128},
{0,8,64},{0,9,224},{16,7,6},{0,8,88},{0,8,24},{0,9,144},{19,7,59},
{0,8,120},{0,8,56},{0,9,208},{17,7,17},{0,8,104},{0,8,40},{0,9,176},
{0,8,8},{0,8,136},{0,8,72},{0,9,240},{16,7,4},{0,8,84},{0,8,20},
{21,8,227},{19,7,43},{0,8,116},{0,8,52},{0,9,200},{17,7,13},{0,8,100},
{0,8,36},{0,9,168},{0,8,4},{0,8,132},{0,8,68},{0,9,232},{16,7,8},
{0,8,92},{0,8,28},{0,9,152},{20,7,83},{0,8,124},{0,8,60},{0,9,216},
{18,7,23},{0,8,108},{0,8,44},{0,9,184},{0,8,12},{0,8,140},{0,8,76},
{0,9,248},{16,7,3},{0,8,82},{0,8,18},{21,8,163},{19,7,35},{0,8,114},
{0,8,50},{0,9,196},{17,7,11},{0,8,98},{0,8,34},{0,9,164},{0,8,2},
{0,8,130},{0,8,66},{0,9,228},{16,7,7},{0,8,90},{0,8,26},{0,9,148},
{20,7,67},{0,8,122},{0,8,58},{0,9,212},{18,7,19},{0,8,106},{0,8,42},
{0,9,180},{0,8,10},{0,8,138},{0,8,74},{0,9,244},{16,7,5},{0,8,86},
{0,8,22},{64,8,0},{19,7,51},{0,8,118},{0,8,54},{0,9,204},{17,7,15},
{0,8,102},{0,8,38},{0,9,172},{0,8,6},{0,8,134},{0,8,70},{0,9,236},
{16,7,9},{0,8,94},{0,8,30},{0,9,156},{20,7,99},{0,8,126},{0,8,62},
{0,9,220},{18,7,27},{0,8,110},{0,8,46},{0,9,188},{0,8,14},{0,8,142},
{0,8,78},{0,9,252},{96,7,0},{0,8,81},{0,8,17},{21,8,131},{18,7,31},
{0,8,113},{0,8,49},{0,9,194},{16,7,10},{0,8,97},{0,8,33},{0,9,162},
{0,8,1},{0,8,129},{0,8,65},{0,9,226},{16,7,6},{0,8,89},{0,8,25},
{0,9,146},{19,7,59},{0,8,121},{0,8,57},{0,9,210},{17,7,17},{0,8,105},
{0,8,41},{0,9,178},{0,8,9},{0,8,137},{0,8,73},{0,9,242},{16,7,4},
{0,8,85},{0,8,21},{16,8,258},{19,7,43},{0,8,117},{0,8,53},{0,9,202},
{17,7,13},{0,8,101},{0,8,37},{0,9,170},{0,8,5},{0,8,133},{0,8,69},
{0,9,234},{16,7,8},{0,8,93},{0,8,29},{0,9,154},{20,7,83},{0,8,125},
{0,8,61},{0,9,218},{18,7,23},{0,8,109},{0,8,45},{0,9,186},{0,8,13},
{0,8,141},{0,8,77},{0,9,250},{16,7,3},{0,8,83},{0,8,19},{21,8,195},
{19,7,35},{0,8,115},{0,8,51},{0,9,198},{17,7,11},{0,8,99},{0,8,35},
{0,9,166},{0,8,3},{0,8,131},{0,8,67},{0,9,230},{16,7,7},{0,8,91},
{0,8,27},{0,9,150},{20,7,67},{0,8,123},{0,8,59},{0,9,214},{18,7,19},
{0,8,107},{0,8,43},{0,9,182},{0,8,11},{0,8,139},{0,8,75},{0,9,246},
{16,7,5},{0,8,87},{0,8,23},{64,8,0},{19,7,51},{0,8,119},{0,8,55},
{0,9,206},{17,7,15},{0,8,103},{0,8,39},{0,9,174},{0,8,7},{0,8,135},
{0,8,71},{0,9,238},{16,7,9},{0,8,95},{0,8,31},{0,9,158},{20,7,99},
{0,8,127},{0,8,63},{0,9,222},{18,7,27},{0,8,111},{0,8,47},{0,9,190},
{0,8,15},{0,8,143},{0,8,79},{0,9,254},{96,7,0},{0,8,80},{0,8,16},
{20,8,115},{18,7,31},{0,8,112},{0,8,48},{0,9,193},{16,7,10},{0,8,96},
{0,8,32},{0,9,161},{0,8,0},{0,8,128},{0,8,64},{0,9,225},{16,7,6},
{0,8,88},{0,8,24},{0,9,145},{19,7,59},{0,8,120},{0,8,56},{0,9,209},
{17,7,17},{0,8,104},{0,8,40},{0,9,177},{0,8,8},{0,8,136},{0,8,72},
{0,9,241},{16,7,4},{0,8,84},{0,8,20},{21,8,227},{19,7,43},{0,8,116},
{0,8,52},{0,9,201},{17,7,13},{0,8,100},{0,8,36},{0,9,169},{0,8,4},
{0,8,132},{0,8,68},{0,9,233},{16,7,8},{0,8,92},{0,8,28},{0,9,153},
{20,7,83},{0,8,124},{0,8,60},{0,9,217},{18,7,23},{0,8,108},{0,8,44},
{0,9,185},{0,8,12},{0,8,140},{0,8,76},{0,9,249},{16,7,3},{0,8,82},
{0,8,18},{21,8,163},{19,7,35},{0,8,114},{0,8,50},{0,9,197},{17,7,11},
{0,8,98},{0,8,34},{0,9,165},{0,8,2},{0,8,130},{0,8,66},{0,9,229},
{16,7,7},{0,8,90},{0,8,26},{0,9,149},{20,7,67},{0,8,122},{0,8,58},
{0,9,213},{18,7,19},{0,8,106},{0,8,42},{0,9,181},{0,8,10},{0,8,138},
{0,8,74},{0,9,245},{16,7,5},{0,8,86},{0,8,22},{64,8,0},{19,7,51},
{0,8,118},{0,8,54},{0,9,205},{17,7,15},{0,8,102},{0,8,38},{0,9,173},
{0,8,6},{0,8,134},{0,8,70},{0,9,237},{16,7,9},{0,8,94},{0,8,30},
{0,9,157},{20,7,99},{0,8,126},{0,8,62},{0,9,221},{18,7,27},{0,8,110},
{0,8,46},{0,9,189},{0,8,14},{0,8,142},{0,8,78},{0,9,253},{96,7,0},
{0,8,81},{0,8,17},{21,8,131},{18,7,31},{0,8,113},{0,8,49},{0,9,195},
{16,7,10},{0,8,97},{0,8,33},{0,9,163},{0,8,1},{0,8,129},{0,8,65},
{0,9,227},{16,7,6},{0,8,89},{0,8,25},{0,9,147},{19,7,59},{0,8,121},
{0,8,57},{0,9,211},{17,7,17},{0,8,105},{0,8,41},{0,9,179},{0,8,9},
{0,8,137},{0,8,73},{0,9,243},{16,7,4},{0,8,85},{0,8,21},{16,8,258},
{19,7,43},{0,8,117},{0,8,53},{0,9,203},{17,7,13},{0,8,101},{0,8,37},
{0,9,171},{0,8,5},{0,8,133},{0,8,69},{0,9,235},{16,7,8},{0,8,93},
{0,8,29},{0,9,155},{20,7,83},{0,8,125},{0,8,61},{0,9,219},{18,7,23},
{0,8,109},{0,8,45},{0,9,187},{0,8,13},{0,8,141},{0,8,77},{0,9,251},
{16,7,3},{0,8,83},{0,8,19},{21,8,195},{19,7,35},{0,8,115},{0,8,51},
{0,9,199},{17,7,11},{0,8,99},{0,8,35},{0,9,167},{0,8,3},{0,8,131},
{0,8,67},{0,9,231},{16,7,7},{0,8,91},{0,8,27},{0,9,151},{20,7,67},
{0,8,123},{0,8,59},{0,9,215},{18,7,19},{0,8,107},{0,8,43},{0,9,183},
{0,8,11},{0,8,139},{0,8,75},{0,9,247},{16,7,5},{0,8,87},{0,8,23},
{64,8,0},{19,7,51},{0,8,119},{0,8,55},{0,9,207},{17,7,15},{0,8,103},
{0,8,39},{0,9,175},{0,8,7},{0,8,135},{0,8,71},{0,9,239},{16,7,9},
{0,8,95},{0,8,31},{0,9,159},{20,7,99},{0,8,127},{0,8,63},{0,9,223},
{18,7,27},{0,8,111},{0,8,47},{0,9,191},{0,8,15},{0,8,143},{0,8,79},
{0,9,255}
};
static const code distfix[32] = {
{16,5,1},{23,5,257},{19,5,17},{27,5,4097},{17,5,5},{25,5,1025},
{21,5,65},{29,5,16385},{16,5,3},{24,5,513},{20,5,33},{28,5,8193},
{18,5,9},{26,5,2049},{22,5,129},{64,5,0},{16,5,2},{23,5,385},
{19,5,25},{27,5,6145},{17,5,7},{25,5,1537},{21,5,97},{29,5,24577},
{16,5,4},{24,5,769},{20,5,49},{28,5,12289},{18,5,13},{26,5,3073},
{22,5,193},{64,5,0}
};
/* Generated by RuntimeBrowser on iPhone OS 3.0
Image: /System/Library/PrivateFrameworks/iWorkImport.framework/iWorkImport
*/
@class GQDBGSlideNumberPlaceholder, GQDBGTitlePlaceholder, GQDSStyle, GQDSStylesheet, GQDBGBodyPlaceholder, GQDBGObjectPlaceholder;
@interface GQDBGAbstractSlide : NSObject
{
GQDSStylesheet *mStylesheet;
GQDSStyle *mSlideStyle;
GQDBGTitlePlaceholder *mTitlePlaceholder;
GQDBGBodyPlaceholder *mBodyPlaceholder;
GQDBGObjectPlaceholder *mObjectPlaceholder;
GQDBGSlideNumberPlaceholder *mSlideNumberPlaceholder;
BOOL mHidden;
char *mID;
BOOL mCallGenerator;
}
- (void)dealloc;
- (id)slideStyle;
- (id)stylesheet;
- (BOOL)isHidden;
- (char *)ID;
- (NSInteger)readAttributesForSlide:(struct _xmlTextReader { }*)arg1;
@end
'use strict';
const BlinkDiff = require('blink-diff');
function diffImage(imageAPath, imageB, threshold, outputPath) {
return new Promise((resolve, reject) => {
var diff = new BlinkDiff({
imageAPath: imageAPath, // Path
imageB: imageB, // Buffer
thresholdType: BlinkDiff.THRESHOLD_PERCENT,
threshold: threshold,
imageOutputPath: outputPath
});
diff.run((err, result) => {
if (err) {
return reject(err);
}
var ifPassed = diff.hasPassed(result.code);
console.log(ifPassed ? 'Image Comparison Passed' : 'Image Comparison Failed');
console.log(`Found ${result.differences} pixel differences between two images.`);
resolve(ifPassed);
});
});
}
module.exports = {
diffImage
};
version https://git-lfs.github.com/spec/v1
oid sha256:679776299d295de31f0480ef4c8cfe0261da39ff735d51d91fc808905f6a3c03
size 8686
/**
* Copyright (c) Nicolas Gallagher.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*
* @noflow
*/
import I18nManager from '../I18nManager';
import multiplyStyleLengthValue from '../../modules/multiplyStyleLengthValue';
const emptyObject = {};
const borderTopLeftRadius = 'borderTopLeftRadius';
const borderTopRightRadius = 'borderTopRightRadius';
const borderBottomLeftRadius = 'borderBottomLeftRadius';
const borderBottomRightRadius = 'borderBottomRightRadius';
const borderLeftColor = 'borderLeftColor';
const borderLeftStyle = 'borderLeftStyle';
const borderLeftWidth = 'borderLeftWidth';
const borderRightColor = 'borderRightColor';
const borderRightStyle = 'borderRightStyle';
const borderRightWidth = 'borderRightWidth';
const right = 'right';
const marginLeft = 'marginLeft';
const marginRight = 'marginRight';
const paddingLeft = 'paddingLeft';
const paddingRight = 'paddingRight';
const left = 'left';
// Map of LTR property names to their BiDi equivalent.
const PROPERTIES_FLIP = {
borderTopLeftRadius: borderTopRightRadius,
borderTopRightRadius: borderTopLeftRadius,
borderBottomLeftRadius: borderBottomRightRadius,
borderBottomRightRadius: borderBottomLeftRadius,
borderLeftColor: borderRightColor,
borderLeftStyle: borderRightStyle,
borderLeftWidth: borderRightWidth,
borderRightColor: borderLeftColor,
borderRightStyle: borderLeftStyle,
borderRightWidth: borderLeftWidth,
left: right,
marginLeft: marginRight,
marginRight: marginLeft,
paddingLeft: paddingRight,
paddingRight: paddingLeft,
right: left
};
// Map of I18N property names to their LTR equivalent.
const PROPERTIES_I18N = {
borderTopStartRadius: borderTopLeftRadius,
borderTopEndRadius: borderTopRightRadius,
borderBottomStartRadius: borderBottomLeftRadius,
borderBottomEndRadius: borderBottomRightRadius,
borderStartColor: borderLeftColor,
borderStartStyle: borderLeftStyle,
borderStartWidth: borderLeftWidth,
borderEndColor: borderRightColor,
borderEndStyle: borderRightStyle,
borderEndWidth: borderRightWidth,
end: right,
marginStart: marginLeft,
marginEnd: marginRight,
paddingStart: paddingLeft,
paddingEnd: paddingRight,
start: left
};
const PROPERTIES_VALUE = {
clear: true,
float: true,
textAlign: true
};
// Invert the sign of a numeric-like value
const additiveInverse = (value: String | Number) => multiplyStyleLengthValue(value, -1);
const i18nStyle = originalStyle => {
const { doLeftAndRightSwapInRTL, isRTL } = I18nManager;
const style = originalStyle || emptyObject;
const frozenProps = {};
const nextStyle = {};
for (const originalProp in style) {
if (!Object.prototype.hasOwnProperty.call(style, originalProp)) {
continue;
}
const originalValue = style[originalProp];
let prop = originalProp;
let value = originalValue;
// BiDi flip properties
if (PROPERTIES_I18N.hasOwnProperty(originalProp)) {
// convert start/end
const convertedProp = PROPERTIES_I18N[originalProp];
prop = isRTL ? PROPERTIES_FLIP[convertedProp] : convertedProp;
} else if (isRTL && doLeftAndRightSwapInRTL && PROPERTIES_FLIP[originalProp]) {
prop = PROPERTIES_FLIP[originalProp];
}
// BiDi flip values
if (PROPERTIES_VALUE.hasOwnProperty(originalProp)) {
if (originalValue === 'start') {
value = isRTL ? 'right' : 'left';
} else if (originalValue === 'end') {
value = isRTL ? 'left' : 'right';
} else if (isRTL && doLeftAndRightSwapInRTL) {
if (originalValue === 'left') {
value = 'right';
} else if (originalValue === 'right') {
value = 'left';
}
}
}
// BiDi flip transitionProperty value
if (prop === 'transitionProperty') {
// BiDi flip properties
if (PROPERTIES_I18N.hasOwnProperty(value)) {
// convert start/end
const convertedValue = PROPERTIES_I18N[originalValue];
value = isRTL ? PROPERTIES_FLIP[convertedValue] : convertedValue;
} else if (isRTL && doLeftAndRightSwapInRTL && PROPERTIES_FLIP[originalValue]) {
value = PROPERTIES_FLIP[originalValue];
}
}
// Create finalized style
if (isRTL && prop === 'textShadowOffset') {
nextStyle[prop] = value;
nextStyle[prop].width = additiveInverse(value.width);
} else if (!frozenProps[prop]) {
nextStyle[prop] = value;
}
if (PROPERTIES_I18N[originalProp]) {
frozenProps[prop] = true;
}
}
return nextStyle;
};
export default i18nStyle;
/*
* ====================================================================
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
* ====================================================================
*
* This software consists of voluntary contributions made by many
* individuals on behalf of the Apache Software Foundation. For more
* information on the Apache Software Foundation, please see
* <http://www.apache.org/>.
*
*/
package org.apache.hc.client5.http.impl;
/**
* Supported elements of request execution pipeline.
*
* @since 5.0
*/
public enum ChainElement {
REDIRECT, COMPRESS, BACK_OFF, RETRY, CACHING, PROTOCOL, CONNECT, MAIN_TRANSPORT
}
Creates a textarea.
```php
FormItem::textarea('text', 'Text')
```
class OpenIdAuthenticationTablesGenerator < Rails::Generator::NamedBase
def initialize(runtime_args, runtime_options = {})
super
end
def manifest
record do |m|
m.migration_template 'migration.rb', 'db/migrate'
end
end
end
//
// LayerTextProvider.swift
// lottie-ios-iOS
//
// Created by Alexandr Goncharov on 07/06/2019.
//
import Foundation
/// Connects a LottieTextProvider to a group of text layers
final class LayerTextProvider {
var textProvider: AnimationTextProvider {
didSet {
reloadTexts()
}
}
fileprivate(set) var textLayers: [TextCompositionLayer]
init(textProvider: AnimationTextProvider) {
self.textProvider = textProvider
self.textLayers = []
reloadTexts()
}
func addTextLayers(_ layers: [TextCompositionLayer]) {
textLayers += layers
}
func reloadTexts() {
textLayers.forEach {
$0.textProvider = textProvider
}
}
}
name=Glyph of Destruction
image=https://magiccards.info/scans/en/lg/148.jpg
value=3.852
rarity=C
type=Instant
cost={R}
effect=Target blocking Wall you control gets +10/+0 until end of combat. Prevent all damage that would be dealt to it this turn. Destroy it at the beginning of the next end step.
timing=removal
oracle=Target blocking Wall you control gets +10/+0 until end of combat. Prevent all damage that would be dealt to it this turn. Destroy it at the beginning of the next end step.
status=needs groovy
<?xml version="1.0"?>
<!DOCTYPE modulesynopsis SYSTEM "../style/modulesynopsis.dtd">
<?xml-stylesheet type="text/xsl" href="../style/manual.es.xsl"?>
<!-- English Revision: 151408:421100 (outdated) -->
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<modulesynopsis metafile="beos.xml.meta">
<name>beos</name>
<description>This Multi-Processing Module is optimized for
BeOS.</description>
<status>MPM</status>
<sourcefile>beos.c</sourcefile>
<identifier>mpm_beos_module</identifier>
<summary>
    <p>This Multi-Processing Module (MPM) is the default for BeOS.
    It uses a single control process which creates threads to handle
    requests.</p>
</summary>
<seealso><a href="../bind.html">Configurar las direcciones y los
puertos que usa Apache</a></seealso>
<directivesynopsis location="mpm_common"><name>User</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>Group</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>Listen</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>ListenBacklog</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>SendBufferSize</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>StartThreads</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>MinSpareThreads</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>MaxSpareThreads</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>MaxClients</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>CoreDumpDirectory</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>MaxMemFree</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>PidFile</name>
</directivesynopsis>
<directivesynopsis location="mpm_common"><name>ScoreBoardFile</name>
</directivesynopsis>
<directivesynopsis>
<name>MaxRequestsPerThread</name>
<description>Limits the number of requests a thread serves during
its life</description>
<syntax>MaxRequestsPerThread <var>number</var></syntax>
<default>MaxRequestsPerThread 0</default>
<contextlist><context>server config</context></contextlist>
<usage>
    <p>The <directive>MaxRequestsPerThread</directive> directive sets
    the maximum number of requests that an individual server thread
    will handle during its life. After handling
    <directive>MaxRequestsPerThread</directive> requests, the thread
    terminates. If the limit set in <directive
    >MaxRequestsPerThread</directive> is <code>0</code>, then the
    thread can serve requests indefinitely.</p>

    <p>Setting <directive>MaxRequestsPerThread</directive> to a
    non-zero limit has two main benefits:</p>

    <ul>
      <li>it limits the amount of memory that a thread can consume
      in case of an (accidental) memory leak;</li>

      <li>by putting a limit on the lifetime of threads, it helps
      reduce the number of threads when the load on the server goes
      down.</li>
    </ul>

    <note><title>Note:</title> <p>For <directive
    module="core">KeepAlive</directive> requests, only the first
    request counts towards this limit. In effect, in that case the
    limit is imposed on the maximum number of <em>connections</em>
    per thread.</p>
    </note>
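
    <p>For example, the following (hypothetical) setting recycles each
    thread after it has served 1000 requests:</p>

    <example><title>Example</title>
      MaxRequestsPerThread 1000
    </example>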
</usage>
</directivesynopsis>
</modulesynopsis>
fileFormatVersion: 2
guid: b7c1f8068c9dd33418edf6275fbc5a4e
timeCreated: 1503738735
licenseType: Free
MonoImporter:
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:
/*
* (C) Copyright 2017 Nuxeo (http://nuxeo.com/) and others.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* Contributors:
* Kevin Leturc <[email protected]>
*/
package org.nuxeo.ftest.server.hotreload;
import static org.junit.Assert.assertEquals;
import static org.nuxeo.functionaltests.AbstractTest.NUXEO_URL;
import javax.ws.rs.core.MultivaluedMap;
import org.junit.Rule;
import org.junit.Test;
import org.nuxeo.jaxrs.test.CloseableClientResponse;
import org.nuxeo.jaxrs.test.HttpClientTestRule;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.sun.jersey.core.util.MultivaluedMapImpl;
/**
* Tests hot reload from CMIS.
*
* @since 10.1
*/
public class ITCMISHotReloadTest {
@Rule
public final HotReloadTestRule hotReloadRule = new HotReloadTestRule();
@Rule
public final HttpClientTestRule httpClientRule = new HttpClientTestRule.Builder().url(NUXEO_URL + "/json/cmis")
.adminCredentials()
.build();
public final ObjectMapper mapper = new ObjectMapper();
@Test
public void testHotReloadDocumentType() throws Exception {
// get root id
String rootId;
try (CloseableClientResponse response = httpClientRule.get("")) {
JsonNode root = mapper.readTree(response.getEntityInputStream());
rootId = root.get("default").get("rootFolderId").asText();
}
// test create a document
MultivaluedMap<String, String> formData = new MultivaluedMapImpl();
formData.add("cmisaction", "createDocument");
formData.add("propertyId[0]", "cmis:objectTypeId");
formData.add("propertyValue[0]", "HotReload");
formData.add("propertyId[1]", "cmis:name");
formData.add("propertyValue[1]", "hot reload");
formData.add("propertyId[2]", "hr:content");
formData.add("propertyValue[2]", "some content");
formData.add("succinct", "true");
try (CloseableClientResponse response = httpClientRule.post("default/root?objectId=" + rootId, formData)) {
assertEquals(201, response.getStatus());
}
}
}
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package org.chromium.chrome.browser.init;
import android.app.Activity;
import android.content.Context;
import android.os.AsyncTask;
import android.os.Build;
import android.os.Handler;
import android.os.Looper;
import android.os.Process;
import android.os.StrictMode;
import com.squareup.leakcanary.LeakCanary;
import org.chromium.base.ActivityState;
import org.chromium.base.ApplicationStatus;
import org.chromium.base.ApplicationStatus.ActivityStateListener;
import org.chromium.base.BaseSwitches;
import org.chromium.base.CommandLine;
import org.chromium.base.ContentUriUtils;
import org.chromium.base.ContextUtils;
import org.chromium.base.Log;
import org.chromium.base.PathUtils;
import org.chromium.base.ResourceExtractor;
import org.chromium.base.ThreadUtils;
import org.chromium.base.TraceEvent;
import org.chromium.base.annotations.RemovableInRelease;
import org.chromium.base.library_loader.LibraryLoader;
import org.chromium.base.library_loader.LibraryProcessType;
import org.chromium.base.library_loader.ProcessInitException;
import org.chromium.chrome.browser.ChromeApplication;
import org.chromium.chrome.browser.ChromeStrictMode;
import org.chromium.chrome.browser.ChromeSwitches;
import org.chromium.chrome.browser.FileProviderHelper;
import org.chromium.chrome.browser.crash.MinidumpDirectoryObserver;
import org.chromium.chrome.browser.device.DeviceClassManager;
import org.chromium.chrome.browser.services.GoogleServicesManager;
import org.chromium.chrome.browser.tabmodel.document.DocumentTabModelImpl;
import org.chromium.chrome.browser.webapps.ActivityAssigner;
import org.chromium.chrome.browser.webapps.ChromeWebApkHost;
import org.chromium.content.app.ContentApplication;
import org.chromium.content.browser.BrowserStartupController;
import org.chromium.content.browser.ChildProcessCreationParams;
import org.chromium.content.browser.DeviceUtils;
import org.chromium.content.browser.SpeechRecognition;
import org.chromium.net.NetworkChangeNotifier;
import org.chromium.policy.CombinedPolicyProvider;
import org.chromium.ui.base.DeviceFormFactor;
import java.util.LinkedList;
import java.util.Locale;
/**
* Application level delegate that handles start up tasks.
* {@link AsyncInitializationActivity} classes should override the {@link BrowserParts}
* interface for any additional initialization tasks for the initialization to work as intended.
*/
public class ChromeBrowserInitializer {
private static final String TAG = "BrowserInitializer";
    private static ChromeBrowserInitializer sChromeBrowserInitializer;
private final Handler mHandler;
private final ChromeApplication mApplication;
private final Locale mInitialLocale = Locale.getDefault();
private boolean mPreInflationStartupComplete;
private boolean mPostInflationStartupComplete;
private boolean mNativeInitializationComplete;
private MinidumpDirectoryObserver mMinidumpDirectoryObserver;
// Public to allow use in ChromeBackupAgent
public static final String PRIVATE_DATA_DIRECTORY_SUFFIX = "chrome";
/**
* A callback to be executed when there is a new version available in Play Store.
*/
public interface OnNewVersionAvailableCallback extends Runnable {
/**
* Set the update url to get the new version available.
* @param updateUrl The url to be used.
*/
void setUpdateUrl(String updateUrl);
}
/**
* This class is an application specific object that orchestrates the app initialization.
* @param context The context to get the application context from.
* @return The singleton instance of {@link ChromeBrowserInitializer}.
*/
public static ChromeBrowserInitializer getInstance(Context context) {
        if (sChromeBrowserInitializer == null) {
            sChromeBrowserInitializer = new ChromeBrowserInitializer(context);
        }
        return sChromeBrowserInitializer;
}
private ChromeBrowserInitializer(Context context) {
mApplication = (ChromeApplication) context.getApplicationContext();
mHandler = new Handler(Looper.getMainLooper());
initLeakCanary();
}
@RemovableInRelease
private void initLeakCanary() {
// Watch that Activity objects are not retained after their onDestroy() has been called.
// This is a no-op in release builds.
LeakCanary.install(mApplication);
}
/**
* Initializes the Chrome browser process synchronously.
*
* @throws ProcessInitException if there is a problem with the native library.
*/
public void handleSynchronousStartup() throws ProcessInitException {
assert ThreadUtils.runningOnUiThread() : "Tried to start the browser on the wrong thread";
BrowserParts parts = new EmptyBrowserParts();
handlePreNativeStartup(parts);
handlePostNativeStartup(false, parts);
}
/**
* Execute startup tasks that can be done without native libraries. See {@link BrowserParts} for
* a list of calls to be implemented.
* @param parts The delegate for the {@link ChromeBrowserInitializer} to communicate
* initialization tasks.
*/
public void handlePreNativeStartup(final BrowserParts parts) {
assert ThreadUtils.runningOnUiThread() : "Tried to start the browser on the wrong thread";
ProcessInitializationHandler.getInstance().initializePreNative();
preInflationStartup();
parts.preInflationStartup();
if (parts.isActivityFinishing()) return;
preInflationStartupDone();
parts.setContentViewAndLoadLibrary();
postInflationStartup();
parts.postInflationStartup();
}
/**
* This is needed for device class manager which depends on commandline args that are
* initialized in preInflationStartup()
*/
private void preInflationStartupDone() {
// Domain reliability uses significant enough memory that we should disable it on low memory
// devices for now.
// TODO(zbowling): remove this after domain reliability is refactored. (crbug.com/495342)
if (DeviceClassManager.disableDomainReliability()) {
CommandLine.getInstance().appendSwitch(ChromeSwitches.DISABLE_DOMAIN_RELIABILITY);
}
}
/**
* Pre-load shared prefs to avoid being blocked on the disk access async task in the future.
* Running in an AsyncTask as pre-loading itself may cause I/O.
*/
private void warmUpSharedPrefs() {
if (Build.VERSION.CODENAME.equals("N") || Build.VERSION.SDK_INT > Build.VERSION_CODES.M) {
new AsyncTask<Void, Void, Void>() {
@Override
protected Void doInBackground(Void... params) {
ContextUtils.getAppSharedPreferences();
DocumentTabModelImpl.warmUpSharedPrefs(mApplication);
ActivityAssigner.warmUpSharedPrefs(mApplication);
return null;
}
}.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
} else {
ContextUtils.getAppSharedPreferences();
DocumentTabModelImpl.warmUpSharedPrefs(mApplication);
ActivityAssigner.warmUpSharedPrefs(mApplication);
}
}
private void preInflationStartup() {
ThreadUtils.assertOnUiThread();
if (mPreInflationStartupComplete) return;
PathUtils.setPrivateDataDirectorySuffix(PRIVATE_DATA_DIRECTORY_SUFFIX);
// Ensure critical files are available, so they aren't blocked on the file-system
// behind long-running accesses in next phase.
// Don't do any large file access here!
ContentApplication.initCommandLine(mApplication);
waitForDebuggerIfNeeded();
ChromeStrictMode.configureStrictMode();
ChromeWebApkHost.init();
warmUpSharedPrefs();
DeviceUtils.addDeviceSpecificUserAgentSwitch(mApplication);
ApplicationStatus.registerStateListenerForAllActivities(
createActivityStateListener());
mPreInflationStartupComplete = true;
}
private void postInflationStartup() {
ThreadUtils.assertOnUiThread();
if (mPostInflationStartupComplete) return;
// Check to see if we need to extract any new resources from the APK. This could
// be on first run when we need to extract all the .pak files we need, or after
// the user has switched locale, in which case we want new locale resources.
ResourceExtractor.get().startExtractingResources();
mPostInflationStartupComplete = true;
}
/**
* Execute startup tasks that require native libraries to be loaded. See {@link BrowserParts}
* for a list of calls to be implemented.
* @param isAsync Whether this call should synchronously wait for the browser process to be
* fully initialized before returning to the caller.
* @param delegate The delegate for the {@link ChromeBrowserInitializer} to communicate
* initialization tasks.
*/
public void handlePostNativeStartup(final boolean isAsync, final BrowserParts delegate)
throws ProcessInitException {
assert ThreadUtils.runningOnUiThread() : "Tried to start the browser on the wrong thread";
final LinkedList<Runnable> initQueue = new LinkedList<>();
abstract class NativeInitTask implements Runnable {
@Override
public final void run() {
// Run the current task then put a request for the next one onto the
// back of the UI message queue. This lets Chrome handle input events
// between tasks.
initFunction();
if (!initQueue.isEmpty()) {
Runnable nextTask = initQueue.pop();
if (isAsync) {
mHandler.post(nextTask);
} else {
nextTask.run();
}
}
}
public abstract void initFunction();
}
initQueue.add(new NativeInitTask() {
@Override
public void initFunction() {
ProcessInitializationHandler.getInstance().initializePostNative();
}
});
initQueue.add(new NativeInitTask() {
@Override
public void initFunction() {
initNetworkChangeNotifier(mApplication.getApplicationContext());
}
});
initQueue.add(new NativeInitTask() {
@Override
public void initFunction() {
// This is not broken down as a separate task, since this:
// 1. Should happen as early as possible
// 2. Only submits asynchronous work
// 3. Is thus very cheap (profiled at 0.18ms on a Nexus 5 with Lollipop)
// It should also be in a separate task (and after) initNetworkChangeNotifier, as
                // this posts a task to the UI thread that would interfere with preconnection
// otherwise. By preconnecting afterwards, we make sure that this task has run.
delegate.maybePreconnect();
onStartNativeInitialization();
}
});
initQueue.add(new NativeInitTask() {
@Override
public void initFunction() {
if (delegate.isActivityDestroyed()) return;
delegate.initializeCompositor();
}
});
initQueue.add(new NativeInitTask() {
@Override
public void initFunction() {
if (delegate.isActivityDestroyed()) return;
delegate.initializeState();
}
});
initQueue.add(new NativeInitTask() {
@Override
public void initFunction() {
onFinishNativeInitialization();
}
});
initQueue.add(new NativeInitTask() {
@Override
public void initFunction() {
if (delegate.isActivityDestroyed()) return;
delegate.finishNativeInitialization();
}
});
// See crbug.com/593250. This can be removed after N SDK is released, crbug.com/592722.
ChildProcessCreationParams creationParams = mApplication.getChildProcessCreationParams();
// WebAPK uses this code path to initialize Chrome's native code, and the
// ChildProcessCreationParams has been set in {@link WebApkActivity}. We have to prevent
// resetting with a wrong parameter here. TODO(hanxi): Remove the entire if block after
// N SDK is released, since it breaks WebAPKs on N+.
if (creationParams != null && ChildProcessCreationParams.get() != null) {
ChildProcessCreationParams.set(creationParams);
}
if (isAsync) {
// We want to start this queue once the C++ startup tasks have run; allow the
            // C++ startup to run asynchronously, and set it up to start the Java queue once
// it has finished.
startChromeBrowserProcessesAsync(
delegate.shouldStartGpuProcess(),
new BrowserStartupController.StartupCallback() {
@Override
public void onFailure() {
delegate.onStartupFailure();
}
@Override
public void onSuccess(boolean success) {
mHandler.post(initQueue.pop());
}
});
} else {
startChromeBrowserProcessesSync();
initQueue.pop().run();
assert initQueue.isEmpty();
}
}
private void startChromeBrowserProcessesAsync(
boolean startGpuProcess,
BrowserStartupController.StartupCallback callback) throws ProcessInitException {
try {
TraceEvent.begin("ChromeBrowserInitializer.startChromeBrowserProcessesAsync");
BrowserStartupController.get(mApplication, LibraryProcessType.PROCESS_BROWSER)
.startBrowserProcessesAsync(startGpuProcess, callback);
} finally {
TraceEvent.end("ChromeBrowserInitializer.startChromeBrowserProcessesAsync");
}
}
private void startChromeBrowserProcessesSync() throws ProcessInitException {
try {
TraceEvent.begin("ChromeBrowserInitializer.startChromeBrowserProcessesSync");
ThreadUtils.assertOnUiThread();
mApplication.initCommandLine();
LibraryLoader libraryLoader = LibraryLoader.get(LibraryProcessType.PROCESS_BROWSER);
StrictMode.ThreadPolicy oldPolicy = StrictMode.allowThreadDiskReads();
libraryLoader.ensureInitialized();
StrictMode.setThreadPolicy(oldPolicy);
libraryLoader.asyncPrefetchLibrariesToMemory();
BrowserStartupController.get(mApplication, LibraryProcessType.PROCESS_BROWSER)
.startBrowserProcessesSync(false);
GoogleServicesManager.get(mApplication);
} finally {
TraceEvent.end("ChromeBrowserInitializer.startChromeBrowserProcessesSync");
}
}
public static void initNetworkChangeNotifier(Context context) {
ThreadUtils.assertOnUiThread();
TraceEvent.begin("NetworkChangeNotifier.init");
// Enable auto-detection of network connectivity state changes.
NetworkChangeNotifier.init(context);
NetworkChangeNotifier.setAutoDetectConnectivityState(true);
TraceEvent.end("NetworkChangeNotifier.init");
}
private void onStartNativeInitialization() {
ThreadUtils.assertOnUiThread();
if (mNativeInitializationComplete) return;
// The policies are used by browser startup, so we need to register the policy providers
// before starting the browser process.
mApplication.registerPolicyProviders(CombinedPolicyProvider.get());
SpeechRecognition.initialize(mApplication);
}
private void onFinishNativeInitialization() {
if (mNativeInitializationComplete) return;
mNativeInitializationComplete = true;
ContentUriUtils.setFileProviderUtil(new FileProviderHelper());
// Start the file observer to watch the minidump directory.
new AsyncTask<Void, Void, MinidumpDirectoryObserver>() {
@Override
protected MinidumpDirectoryObserver doInBackground(Void... params) {
return new MinidumpDirectoryObserver();
}
@Override
protected void onPostExecute(MinidumpDirectoryObserver minidumpDirectoryObserver) {
mMinidumpDirectoryObserver = minidumpDirectoryObserver;
mMinidumpDirectoryObserver.startWatching();
}
}.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
}
private void waitForDebuggerIfNeeded() {
if (CommandLine.getInstance().hasSwitch(BaseSwitches.WAIT_FOR_JAVA_DEBUGGER)) {
Log.e(TAG, "Waiting for Java debugger to connect...");
android.os.Debug.waitForDebugger();
Log.e(TAG, "Java debugger connected. Resuming execution.");
}
}
private ActivityStateListener createActivityStateListener() {
return new ActivityStateListener() {
@Override
public void onActivityStateChange(Activity activity, int newState) {
if (newState == ActivityState.CREATED || newState == ActivityState.DESTROYED) {
// Android destroys Activities at some point after a locale change, but doesn't
// kill the process. This can lead to a bug where Chrome is halfway RTL, where
// stale natively-loaded resources are not reloaded (http://crbug.com/552618).
if (!mInitialLocale.equals(Locale.getDefault())) {
Log.e(TAG, "Killing process because of locale change.");
Process.killProcess(Process.myPid());
}
DeviceFormFactor.resetValuesIfNeeded(mApplication);
}
}
};
}
}
//
// This file was generated by the JavaTM Architecture for XML Binding (JAXB) Reference Implementation, v2.2.5-2
// See <a href="http://java.sun.com/xml/jaxb">http://java.sun.com/xml/jaxb</a>
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2016.02.05 at 06:25:30 PM CET
//
package org.onvif.ver10.schema;
import javax.xml.bind.annotation.XmlEnum;
import javax.xml.bind.annotation.XmlType;
/**
 * <p>Java class for ToneCompensationMode.
 *
 * <p>The following schema fragment specifies the expected content contained within this class.
* <p>
* <pre>
* <simpleType name="ToneCompensationMode">
* <restriction base="{http://www.w3.org/2001/XMLSchema}string">
* <enumeration value="OFF"/>
* <enumeration value="ON"/>
* <enumeration value="AUTO"/>
* </restriction>
* </simpleType>
* </pre>
*
*/
@XmlType(name = "ToneCompensationMode")
@XmlEnum
public enum ToneCompensationMode {
OFF,
ON,
AUTO;
public String value() {
return name();
}
public static ToneCompensationMode fromValue(String v) {
return valueOf(v);
}
}
/*
 * Configuration settings for the Renesas Solutions r0p7734 board
*
* Copyright (C) 2010, 2011 Nobuhiro Iwamatsu <[email protected]>
*
* SPDX-License-Identifier: GPL-2.0+
*/
#ifndef __R0P7734_H
#define __R0P7734_H
#define CONFIG_CPU_SH7734 1
#define CONFIG_R0P7734 1
#define CONFIG_400MHZ_MODE 1
/* #define CONFIG_533MHZ_MODE 1 */
#define CONFIG_SYS_TEXT_BASE 0x8FFC0000
#define CONFIG_DISPLAY_BOARDINFO
#undef CONFIG_SHOW_BOOT_PROGRESS
/* Ether */
#define CONFIG_SH_ETHER 1
#define CONFIG_SH_ETHER_USE_PORT (0)
#define CONFIG_SH_ETHER_PHY_ADDR (0x0)
#define CONFIG_PHY_SMSC 1
#define CONFIG_BITBANGMII
#define CONFIG_BITBANGMII_MULTI
#define CONFIG_SH_ETHER_SH7734_MII (0x00) /* MII */
#define CONFIG_SH_ETHER_PHY_MODE PHY_INTERFACE_MODE_MII
#ifndef CONFIG_SH_ETHER
# define CONFIG_SMC911X
# define CONFIG_SMC911X_16_BIT
# define CONFIG_SMC911X_BASE (0x84000000)
#endif
/* undef to save memory */
#define CONFIG_SYS_LONGHELP
/* List of legal baudrate settings for this board */
#define CONFIG_SYS_BAUDRATE_TABLE { 115200 }
/* SCIF */
#define CONFIG_SCIF 1
#define CONFIG_CONS_SCIF3 1
/* Suppress display of console information at boot */
/* SDRAM */
#define CONFIG_SYS_SDRAM_BASE (0x88000000)
#define CONFIG_SYS_SDRAM_SIZE (128 * 1024 * 1024)
#define CONFIG_SYS_LOAD_ADDR (CONFIG_SYS_SDRAM_BASE + 16 * 1024 * 1024)
#define CONFIG_SYS_MEMTEST_START (CONFIG_SYS_SDRAM_BASE)
#define CONFIG_SYS_MEMTEST_END (CONFIG_SYS_MEMTEST_START + 100 * 1024 * 1024)
/* Enable alternate, more extensive, memory test */
#undef CONFIG_SYS_ALT_MEMTEST
/* Scratch address used by the alternate memory test */
#undef CONFIG_SYS_MEMTEST_SCRATCH
/* Enable temporary baudrate change while serial download */
#undef CONFIG_SYS_LOADS_BAUD_CHANGE
/* FLASH */
#define CONFIG_FLASH_CFI_DRIVER 1
#define CONFIG_SYS_FLASH_CFI
#undef CONFIG_SYS_FLASH_QUIET_TEST
#define CONFIG_SYS_FLASH_EMPTY_INFO
#define CONFIG_SYS_FLASH_BASE (0xA0000000)
#define CONFIG_SYS_MAX_FLASH_SECT 512
/* If you use the entire NOR flash, change the DIP switch accordingly. Please see the manual. */
#define CONFIG_SYS_MAX_FLASH_BANKS 1
#define CONFIG_SYS_FLASH_BANKS_LIST { CONFIG_SYS_FLASH_BASE }
/* Timeout for Flash erase operations (in ms) */
#define CONFIG_SYS_FLASH_ERASE_TOUT (3 * 1000)
/* Timeout for Flash write operations (in ms) */
#define CONFIG_SYS_FLASH_WRITE_TOUT (3 * 1000)
/* Timeout for Flash set sector lock bit operations (in ms) */
#define CONFIG_SYS_FLASH_LOCK_TOUT (3 * 1000)
/* Timeout for Flash clear lock bit operations (in ms) */
#define CONFIG_SYS_FLASH_UNLOCK_TOUT (3 * 1000)
/*
* Use hardware flash sectors protection instead
* of U-Boot software protection
*/
#undef CONFIG_SYS_FLASH_PROTECTION
#undef CONFIG_SYS_DIRECT_FLASH_TFTP
/* Address of u-boot image in Flash (NOT run time address in SDRAM) ?!? */
#define CONFIG_SYS_MONITOR_BASE (CONFIG_SYS_FLASH_BASE)
/* Monitor size */
#define CONFIG_SYS_MONITOR_LEN (256 * 1024)
/* Size of DRAM reserved for malloc() use */
#define CONFIG_SYS_MALLOC_LEN (256 * 1024)
#define CONFIG_SYS_BOOTMAPSZ (8 * 1024 * 1024)
/* ENV setting */
#define CONFIG_ENV_OVERWRITE 1
#define CONFIG_ENV_SECT_SIZE (128 * 1024)
#define CONFIG_ENV_SIZE (CONFIG_ENV_SECT_SIZE)
#define CONFIG_ENV_ADDR (CONFIG_SYS_FLASH_BASE + CONFIG_SYS_MONITOR_LEN)
/* Offset of env Flash sector relative to CONFIG_SYS_FLASH_BASE */
#define CONFIG_ENV_OFFSET (CONFIG_ENV_ADDR - CONFIG_SYS_FLASH_BASE)
#define CONFIG_ENV_SIZE_REDUND (CONFIG_ENV_SECT_SIZE)
/* Board Clock */
#if defined(CONFIG_400MHZ_MODE)
#define CONFIG_SYS_CLK_FREQ 50000000
#else
#define CONFIG_SYS_CLK_FREQ 44444444
#endif
#define CONFIG_SH_TMU_CLK_FREQ CONFIG_SYS_CLK_FREQ
#define CONFIG_SH_SCIF_CLK_FREQ CONFIG_SYS_CLK_FREQ
#define CONFIG_SYS_TMU_CLK_DIV 4
#endif /* __R0P7734_H */
(ns qu.data.source)
(defprotocol DataSource
(get-datasets [source]) ;; returns a list of all datasets
(get-metadata [source dataset]) ;; returns the metadata for a dataset
(get-concept-data [source dataset concept]) ;; returns the data table for a concept
(get-results [source query])
(load-dataset [source definition options]))
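
;; A minimal, purely illustrative sketch of a DataSource implementation,
;; backed by a map held in an atom. The record name, the map shape
;; ({dataset {:metadata ... :concepts ... :rows ...}}), and the keys read
;; from `definition` and `query` are assumptions for the example, not part
;; of the qu codebase.
(defrecord InMemorySource [store]
  DataSource
  (get-datasets [_] (keys @store))
  (get-metadata [_ dataset] (get-in @store [dataset :metadata]))
  (get-concept-data [_ dataset concept] (get-in @store [dataset :concepts concept]))
  (get-results [_ query] (get-in @store [(:dataset query) :rows])) ;; ignores query clauses
  (load-dataset [_ definition options] ;; options ignored in this sketch
    (swap! store assoc (:name definition) {:metadata definition :rows []})))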
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.drill.exec.physical.config;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonTypeName;
import org.apache.drill.shaded.guava.com.google.common.base.Preconditions;
import org.apache.calcite.rel.core.JoinRelType;
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.physical.base.AbstractJoinPop;
import org.apache.drill.exec.physical.base.PhysicalOperator;
import org.apache.drill.exec.physical.base.PhysicalVisitor;
import org.apache.drill.exec.proto.UserBitShared.CoreOperatorType;
import java.util.List;
@JsonTypeName("lateral-join")
public class LateralJoinPOP extends AbstractJoinPop {
static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(LateralJoinPOP.class);
@JsonProperty("excludedColumns")
private List<SchemaPath> excludedColumns;
@JsonProperty("implicitRIDColumn")
private String implicitRIDColumn;
@JsonProperty("unnestForLateralJoin")
private UnnestPOP unnestForLateralJoin;
@JsonCreator
public LateralJoinPOP(
@JsonProperty("left") PhysicalOperator left,
@JsonProperty("right") PhysicalOperator right,
@JsonProperty("joinType") JoinRelType joinType,
@JsonProperty("implicitRIDColumn") String implicitRIDColumn,
@JsonProperty("excludedColumns") List<SchemaPath> excludedColumns) {
super(left, right, joinType, false, null, null);
Preconditions.checkArgument(joinType != JoinRelType.FULL,
"Full outer join is currently not supported with Lateral Join");
Preconditions.checkArgument(joinType != JoinRelType.RIGHT,
"Right join is currently not supported with Lateral Join");
this.excludedColumns = excludedColumns;
this.implicitRIDColumn = implicitRIDColumn;
}
@Override
public PhysicalOperator getNewWithChildren(List<PhysicalOperator> children) {
Preconditions.checkArgument(children.size() == 2,
"Lateral join should have two physical operators");
LateralJoinPOP newPOP = new LateralJoinPOP(children.get(0), children.get(1), joinType, this.implicitRIDColumn, this.excludedColumns);
newPOP.unnestForLateralJoin = this.unnestForLateralJoin;
return newPOP;
}
@JsonProperty("unnestForLateralJoin")
public UnnestPOP getUnnestForLateralJoin() {
return this.unnestForLateralJoin;
}
@JsonProperty("excludedColumns")
public List<SchemaPath> getExcludedColumns() {
return this.excludedColumns;
}
public void setUnnestForLateralJoin(UnnestPOP unnest) {
this.unnestForLateralJoin = unnest;
}
@JsonProperty("implicitRIDColumn")
public String getImplicitRIDColumn() { return this.implicitRIDColumn; }
@Override
public int getOperatorType() {
return CoreOperatorType.LATERAL_JOIN_VALUE;
}
@Override
public <T, X, E extends Throwable> T accept(PhysicalVisitor<T, X, E> physicalVisitor, X value) throws E {
return physicalVisitor.visitLateralJoin(this, value);
}
}
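// Example (sketch, not part of this class): building the operator for a
// hand-assembled plan. scanLeft/unnestRight are assumed PhysicalOperator
// instances and unnestPop an UnnestPOP; the RID column name is hypothetical.
//
//   LateralJoinPOP lateral = new LateralJoinPOP(
//       scanLeft, unnestRight, JoinRelType.INNER, "$rid$", null);
//   lateral.setUnnestForLateralJoin(unnestPop);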
@inline(__always)
private func _race<U: Thenable>(_ thenables: [U]) -> Promise<U.T> {
let rp = Promise<U.T>(.pending)
for thenable in thenables {
thenable.pipe(to: rp.box.seal)
}
return rp
}
/**
Waits for one promise to resolve
race(promise1, promise2, promise3).then { winner in
//…
}
- Returns: The promise that resolves first
- Warning: If the first resolution is a rejection, the returned promise is rejected
*/
public func race<U: Thenable>(_ thenables: U...) -> Promise<U.T> {
return _race(thenables)
}
/**
Waits for one promise to resolve
race(promise1, promise2, promise3).then { winner in
//…
}
- Returns: The promise that resolves first
- Warning: If the first resolution is a rejection, the returned promise is rejected
- Remark: If the provided array is empty the returned promise is rejected with PMKError.badInput
*/
public func race<U: Thenable>(_ thenables: [U]) -> Promise<U.T> {
guard !thenables.isEmpty else {
return Promise(error: PMKError.badInput)
}
return _race(thenables)
}
/**
Waits for one guarantee to resolve
race(promise1, promise2, promise3).then { winner in
//…
}
- Returns: The guarantee that resolves first
*/
public func race<T>(_ guarantees: Guarantee<T>...) -> Guarantee<T> {
let rg = Guarantee<T>(.pending)
for guarantee in guarantees {
guarantee.pipe(to: rg.box.seal)
}
return rg
}
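// Example (sketch): racing two assumed helpers that return Promise<Data>.
// Uses PromiseKit 6-style `done`/`catch`; the first promise to resolve wins,
// and a first *rejection* rejects the whole race, landing in `catch`.
//
//   race(fetchPrimary(), fetchMirror()).done { data in
//       print("winner delivered \(data.count) bytes")
//   }.catch { error in
//       print("race rejected: \(error)")
//   }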
// ***********************************************************************
// Copyright (c) 2014 Charlie Poole, Rob Prouse
//
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so, subject to
// the following conditions:
//
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
// ***********************************************************************
using System.Collections.Generic;
using System.IO;
using NUnit.Engine.Extensibility;
using System.Reflection;
namespace NUnit.Engine.Drivers
{
public abstract class NotRunnableFrameworkDriver : IFrameworkDriver
{
private const string LOAD_RESULT_FORMAT =
"<test-suite type='{0}' id='{1}' name='{2}' fullname='{3}' testcasecount='0' runstate='{4}'>" +
"<properties>" +
"<property name='_SKIPREASON' value='{5}'/>" +
"</properties>" +
"</test-suite>";
private const string RUN_RESULT_FORMAT =
"<test-suite type='{0}' id='{1}' name='{2}' fullname='{3}' testcasecount='0' runstate='{4}' result='{5}' label='{6}'>" +
"<properties>" +
"<property name='_SKIPREASON' value='{7}'/>" +
"</properties>" +
"<reason>" +
"<message>{7}</message>" +
"</reason>" +
"</test-suite>";
private string _name;
private string _fullname;
private string _message;
private string _type;
protected string _runstate;
protected string _result;
protected string _label;
public NotRunnableFrameworkDriver(string assemblyPath, string message)
{
_name = Escape(Path.GetFileName(assemblyPath));
_fullname = Escape(Path.GetFullPath(assemblyPath));
_message = Escape(message);
_type = new List<string> { ".dll", ".exe" }.Contains(Path.GetExtension(assemblyPath)) ? "Assembly" : "Unknown";
}
public string ID { get; set; }
public string Load(string assemblyPath, IDictionary<string, object> settings)
{
return GetLoadResult();
}
public int CountTestCases(string filter)
{
return 0;
}
public string Run(ITestEventListener listener, string filter)
{
return string.Format(RUN_RESULT_FORMAT,
_type, TestID, _name, _fullname, _runstate, _result, _label, _message);
}
public string Explore(string filter)
{
return GetLoadResult();
}
public void StopRun(bool force)
{
}
private static string Escape(string original)
{
return original
.Replace("&", "&")
.Replace("\"", """)
.Replace("'", "'")
.Replace("<", "<")
.Replace(">", ">");
}
private string GetLoadResult()
{
return string.Format(
LOAD_RESULT_FORMAT,
_type, TestID, _name, _fullname, _runstate, _message);
}
private string TestID
{
get
{
return string.IsNullOrEmpty(ID)
? "1"
: ID + "-1";
}
}
}
public class InvalidAssemblyFrameworkDriver : NotRunnableFrameworkDriver
{
public InvalidAssemblyFrameworkDriver(string assemblyPath, string message)
: base(assemblyPath, message)
{
_runstate = "NotRunnable";
_result = "Failed";
_label = "Invalid";
}
}
public class SkippedAssemblyFrameworkDriver : NotRunnableFrameworkDriver
{
public SkippedAssemblyFrameworkDriver(string assemblyPath)
: base(assemblyPath, "Skipping non-test assembly")
{
_runstate = "Runnable";
_result = "Skipped";
_label = "NoTests";
}
}
}
/**
*
* Problem Statement-
* [Arrays - DS](https://www.hackerrank.com/challenges/arrays-ds)
* [Tutorial](https://youtu.be/u_oUMtj7C3k)
*/
package com.javaaid.hackerrank.solutions.datastructures.arrays;
import java.util.Scanner;
/**
*
* @author Kanahaiya Gupta
*
*/
public class ArraysDS {
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
int n = in.nextInt();
int[] arr = new int[n];
for (int arr_i = 0; arr_i < n; arr_i++) {
arr[arr_i] = in.nextInt();
}
for (int arr_i = n - 1; arr_i >= 0; arr_i--) {
System.out.print(arr[arr_i] + " ");
}
in.close();
}
}
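// Sample run: for the input "4" followed by "1 4 3 2", the reverse loop
// above prints "2 3 4 1 " (elements in reverse order, space-separated).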
/* origin: FreeBSD /usr/src/lib/msun/src/e_asinl.c */
/*
* ====================================================
* Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
*
* Developed at SunSoft, a Sun Microsystems, Inc. business.
* Permission to use, copy, modify, and distribute this
* software is freely granted, provided that this notice
* is preserved.
* ====================================================
*/
/*
* See comments in asin.c.
* Converted to long double by David Schultz <[email protected]>.
*/
#include "libm.h"
#if LDBL_MANT_DIG == 53 && LDBL_MAX_EXP == 1024
long double asinl(long double x)
{
return asin(x);
}
#elif (LDBL_MANT_DIG == 64 || LDBL_MANT_DIG == 113) && LDBL_MAX_EXP == 16384
#include "__invtrigl.h"
#if LDBL_MANT_DIG == 64
#define CLOSETO1(u) (u.i.m>>56 >= 0xf7)
#define CLEARBOTTOM(u) (u.i.m &= -1ULL << 32)
#elif LDBL_MANT_DIG == 113
#define CLOSETO1(u) (u.i.top >= 0xee00)
#define CLEARBOTTOM(u) (u.i.lo = 0)
#endif
long double asinl(long double x)
{
union ldshape u = {x};
long double z, r, s;
uint16_t e = u.i.se & 0x7fff;
int sign = u.i.se >> 15;
if (e >= 0x3fff) { /* |x| >= 1 or nan */
/* asin(+-1)=+-pi/2 with inexact */
if (x == 1 || x == -1)
return x*pio2_hi + 0x1p-120f;
return 0/(x-x);
}
if (e < 0x3fff - 1) { /* |x| < 0.5 */
if (e < 0x3fff - (LDBL_MANT_DIG+1)/2) {
/* return x with inexact if x!=0 */
FORCE_EVAL(x + 0x1p120f);
return x;
}
return x + x*__invtrigl_R(x*x);
}
/* 1 > |x| >= 0.5 */
z = (1.0 - fabsl(x))*0.5;
s = sqrtl(z);
r = __invtrigl_R(z);
if (CLOSETO1(u)) {
x = pio2_hi - (2*(s+s*r)-pio2_lo);
} else {
long double f, c;
u.f = s;
CLEARBOTTOM(u);
f = u.f;
c = (z - f*f)/(s + f);
x = 0.5*pio2_hi-(2*s*r - (pio2_lo-2*c) - (0.5*pio2_hi-2*f));
}
return sign ? -x : x;
}
#endif
<!-- Generated by pkgdown: do not edit by hand -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Create an Arrow Dataset from an input stream, inferring output types and
shapes from the given Arrow schema. — from_schema.arrow_stream_dataset • tfio</title>
<!-- jquery -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
<!-- Bootstrap -->
<link href="https://cdnjs.cloudflare.com/ajax/libs/bootswatch/3.3.7/flatly/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha256-U5ZEeKfGNOja007MMD3YBI0A3OSZOQbeG6z2f2Y0hu8=" crossorigin="anonymous"></script>
<!-- Font Awesome icons -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.7.1/css/all.min.css" integrity="sha256-nAmazAk6vS34Xqo0BSrTb+abbtFlgsFK7NKSi6o7Y78=" crossorigin="anonymous" />
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.7.1/css/v4-shims.min.css" integrity="sha256-6qHlizsOWFskGlwVOKuns+D1nB6ssZrHQrNj1wGplHc=" crossorigin="anonymous" />
<!-- clipboard.js -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/clipboard.js/2.0.4/clipboard.min.js" integrity="sha256-FiZwavyI2V6+EXO1U+xzLG3IKldpiTFf3153ea9zikQ=" crossorigin="anonymous"></script>
<!-- headroom.js -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/headroom/0.9.4/headroom.min.js" integrity="sha256-DJFC1kqIhelURkuza0AvYal5RxMtpzLjFhsnVIeuk+U=" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/headroom/0.9.4/jQuery.headroom.min.js" integrity="sha256-ZX/yNShbjqsohH1k95liqY9Gd8uOiE1S4vZc+9KQ1K4=" crossorigin="anonymous"></script>
<!-- pkgdown -->
<link href="../pkgdown.css" rel="stylesheet">
<script src="../pkgdown.js"></script>
<link href="../extra.css" rel="stylesheet">
<script src="../extra.js"></script>
<meta property="og:title" content="Create an Arrow Dataset from an input stream, inferring output types and
shapes from the given Arrow schema. — from_schema.arrow_stream_dataset" />
<meta property="og:description" content="Create an Arrow Dataset from an input stream, inferring output types and
shapes from the given Arrow schema." />
<meta name="twitter:card" content="summary" />
<!-- mathjax -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js" integrity="sha256-nvJJv9wWKEm88qvoQl9ekL2J+k/RWIsaSScxxlsrv8k=" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/config/TeX-AMS-MML_HTMLorMML.js" integrity="sha256-84DKXVJXs0/F8OTMzX4UR909+jtl4G7SPypPavF+GfA=" crossorigin="anonymous"></script>
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<div class="container template-reference-topic">
<header>
<div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<span class="navbar-brand">
<a class="navbar-link" href="../index.html">tfio</a>
<span class="version label label-default" data-toggle="tooltip" data-placement="bottom" title="Released version">0.4.1</span>
</span>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul class="nav navbar-nav">
<li>
<a href="../index.html">Home</a>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">
Tutorials
<span class="caret"></span>
</a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="../articles/introduction.html">TensorFlow IO Basics</a>
</li>
</ul>
</li>
<li>
<a href="../reference/index.html">Reference</a>
</li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li>
<a href="https://github.com/tensorflow/io">
<span class="fa fa-github"></span>
</a>
</li>
</ul>
</div><!--/.nav-collapse -->
</div><!--/.container -->
</div><!--/.navbar -->
</header>
<div class="row">
<div class="col-md-9 contents">
<div class="page-header">
<h1>Create an Arrow Dataset from an input stream, inferring output types and
shapes from the given Arrow schema.</h1>
<small class="dont-index">Source: <a href='https://github.com/tensorflow/io/blob/master/R/arrow_dataset.R'><code>R/arrow_dataset.R</code></a></small>
<div class="hidden name"><code>from_schema.arrow_stream_dataset.Rd</code></div>
</div>
<div class="ref-description">
<p>Create an Arrow Dataset from an input stream, inferring output types and
shapes from the given Arrow schema.</p>
</div>
<pre class="usage"><span class='co'># S3 method for arrow_stream_dataset</span>
<span class='fu'><a href='from_schema.html'>from_schema</a></span>(<span class='no'>object</span>, <span class='no'>schema</span>, <span class='kw'>columns</span> <span class='kw'>=</span> <span class='kw'>NULL</span>, <span class='kw'>host</span> <span class='kw'>=</span> <span class='kw'>NULL</span>, <span class='kw'>filenames</span> <span class='kw'>=</span> <span class='kw'>NULL</span>, <span class='no'>...</span>)</pre>
<h2 class="hasAnchor" id="arguments"><a class="anchor" href="#arguments"></a>Arguments</h2>
<table class="ref-arguments">
<colgroup><col class="name" /><col class="desc" /></colgroup>
<tr>
<th>object</th>
<td><p>An <span style="R">R</span> object.</p></td>
</tr>
<tr>
<th>schema</th>
<td><p>Arrow schema defining the record batch data in the stream.</p></td>
</tr>
<tr>
<th>columns</th>
<td><p>A list of column indices to be used in the Dataset.</p></td>
</tr>
<tr>
<th>host</th>
<td><p>A <code>tf.string</code> tensor or string defining the input stream.
For a socket client, use "&lt;HOST_IP&gt;:&lt;PORT&gt;", for stdin use "STDIN".</p></td>
</tr>
<tr>
<th>filenames</th>
<td><p>Not used.</p></td>
</tr>
<tr>
<th>...</th>
<td><p>Optional arguments passed on to implementing methods.</p></td>
</tr>
</table>
</div>
<div class="col-md-3 hidden-xs hidden-sm" id="sidebar">
<h2>Contents</h2>
<ul class="nav nav-pills nav-stacked">
<li><a href="#arguments">Arguments</a></li>
</ul>
</div>
</div>
<footer>
<div class="copyright">
<p>Developed by TensorFlow IO Contributors, Yuan Tang.</p>
</div>
<div class="pkgdown">
<p>Site built with <a href="https://pkgdown.r-lib.org/">pkgdown</a> 1.4.1.</p>
</div>
</footer>
</div>
</body>
</html>
/*
*
* (C) COPYRIGHT 2011-2016 ARM Limited. All rights reserved.
*
* This program is free software and is provided to you under the terms of the
* GNU General Public License version 2 as published by the Free Software
* Foundation, and any use by you of this program is subject to the terms
* of such GNU licence.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, you can access it online at
* http://www.gnu.org/licenses/gpl-2.0.html.
*
* SPDX-License-Identifier: GPL-2.0
*
*/
#if !defined(_TRACE_MALI_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_MALI_H
#undef TRACE_SYSTEM
#define TRACE_SYSTEM mali
#define TRACE_INCLUDE_FILE mali_linux_trace
#include <linux/tracepoint.h>
#define MALI_JOB_SLOTS_EVENT_CHANGED
/**
* mali_job_slots_event - called from mali_kbase_core_linux.c
* @event_id: ORed together bitfields representing a type of event, made with the GATOR_MAKE_EVENT() macro.
*/
TRACE_EVENT(mali_job_slots_event,
TP_PROTO(unsigned int event_id, unsigned int tgid, unsigned int pid,
unsigned char job_id),
TP_ARGS(event_id, tgid, pid, job_id),
TP_STRUCT__entry(
__field(unsigned int, event_id)
__field(unsigned int, tgid)
__field(unsigned int, pid)
__field(unsigned char, job_id)
),
TP_fast_assign(
__entry->event_id = event_id;
__entry->tgid = tgid;
__entry->pid = pid;
__entry->job_id = job_id;
),
TP_printk("event=%u tgid=%u pid=%u job_id=%u",
__entry->event_id, __entry->tgid, __entry->pid, __entry->job_id)
);
/**
* mali_pm_status - Called by mali_kbase_pm_driver.c
* @event_id: core type (shader, tiler, l2 cache)
* @value: 64bits bitmask reporting either power status of the cores (1-ON, 0-OFF)
*/
TRACE_EVENT(mali_pm_status,
TP_PROTO(unsigned int event_id, unsigned long long value),
TP_ARGS(event_id, value),
TP_STRUCT__entry(
__field(unsigned int, event_id)
__field(unsigned long long, value)
),
TP_fast_assign(
__entry->event_id = event_id;
__entry->value = value;
),
TP_printk("event %u = %llu", __entry->event_id, __entry->value)
);
/**
* mali_pm_power_on - Called by mali_kbase_pm_driver.c
* @event_id: core type (shader, tiler, l2 cache)
* @value: 64bits bitmask reporting the cores to power up
*/
TRACE_EVENT(mali_pm_power_on,
TP_PROTO(unsigned int event_id, unsigned long long value),
TP_ARGS(event_id, value),
TP_STRUCT__entry(
__field(unsigned int, event_id)
__field(unsigned long long, value)
),
TP_fast_assign(
__entry->event_id = event_id;
__entry->value = value;
),
TP_printk("event %u = %llu", __entry->event_id, __entry->value)
);
/**
* mali_pm_power_off - Called by mali_kbase_pm_driver.c
* @event_id: core type (shader, tiler, l2 cache)
* @value: 64bits bitmask reporting the cores to power down
*/
TRACE_EVENT(mali_pm_power_off,
TP_PROTO(unsigned int event_id, unsigned long long value),
TP_ARGS(event_id, value),
TP_STRUCT__entry(
__field(unsigned int, event_id)
__field(unsigned long long, value)
),
TP_fast_assign(
__entry->event_id = event_id;
__entry->value = value;
),
TP_printk("event %u = %llu", __entry->event_id, __entry->value)
);
/**
* mali_page_fault_insert_pages - Called by page_fault_worker()
* it reports an MMU page fault resulting in new pages being mapped.
* @event_id: MMU address space number.
* @value: number of newly allocated pages
*/
TRACE_EVENT(mali_page_fault_insert_pages,
TP_PROTO(int event_id, unsigned long value),
TP_ARGS(event_id, value),
TP_STRUCT__entry(
__field(int, event_id)
__field(unsigned long, value)
),
TP_fast_assign(
__entry->event_id = event_id;
__entry->value = value;
),
TP_printk("event %d = %lu", __entry->event_id, __entry->value)
);
/**
* mali_mmu_as_in_use - Called by assign_and_activate_kctx_addr_space()
* it reports that a certain MMU address space is in use now.
* @event_id: MMU address space number.
*/
TRACE_EVENT(mali_mmu_as_in_use,
TP_PROTO(int event_id),
TP_ARGS(event_id),
TP_STRUCT__entry(
__field(int, event_id)
),
TP_fast_assign(
__entry->event_id = event_id;
),
TP_printk("event=%d", __entry->event_id)
);
/**
* mali_mmu_as_released - Called by kbasep_js_runpool_release_ctx_internal()
* it reports that a certain MMU address space has been released now.
* @event_id: MMU address space number.
*/
TRACE_EVENT(mali_mmu_as_released,
TP_PROTO(int event_id),
TP_ARGS(event_id),
TP_STRUCT__entry(
__field(int, event_id)
),
TP_fast_assign(
__entry->event_id = event_id;
),
TP_printk("event=%d", __entry->event_id)
);
/**
* mali_total_alloc_pages_change - Called by kbase_atomic_add_pages()
* and by kbase_atomic_sub_pages()
* it reports that the total number of allocated pages is changed.
* @event_id: number of pages to be added or subtracted (according to the sign).
*/
TRACE_EVENT(mali_total_alloc_pages_change,
TP_PROTO(long long int event_id),
TP_ARGS(event_id),
TP_STRUCT__entry(
__field(long long int, event_id)
),
TP_fast_assign(
__entry->event_id = event_id;
),
TP_printk("event=%lld", __entry->event_id)
);
#endif /* _TRACE_MALI_H */
#undef TRACE_INCLUDE_PATH
#undef linux
#define TRACE_INCLUDE_PATH .
/* This part must be outside protection */
#include <trace/define_trace.h>
/*
* DateJS Culture String File
* Country Code: fo-FO
* Name: Faroese (Faroe Islands)
* Format: "key" : "value"
* Key is the en-US term, Value is the Key in the current language.
*/
Date.CultureStrings = Date.CultureStrings || {};
Date.CultureStrings["fo-FO"] = {
"name": "fo-FO",
"englishName": "Faroese (Faroe Islands)",
"nativeName": "føroyskt (Føroyar)",
"Sunday": "sunnudagur",
"Monday": "mánadagur",
"Tuesday": "týsdagur",
"Wednesday": "mikudagur",
"Thursday": "hósdagur",
"Friday": "fríggjadagur",
"Saturday": "leygardagur",
"Sun": "sun",
"Mon": "mán",
"Tue": "týs",
"Wed": "mik",
"Thu": "hós",
"Fri": "frí",
"Sat": "leyg",
"Su": "su",
"Mo": "má",
"Tu": "tý",
"We": "mi",
"Th": "hó",
"Fr": "fr",
"Sa": "ley",
"S_Sun_Initial": "s",
"M_Mon_Initial": "m",
"T_Tue_Initial": "t",
"W_Wed_Initial": "m",
"T_Thu_Initial": "h",
"F_Fri_Initial": "f",
"S_Sat_Initial": "l",
"January": "januar",
"February": "februar",
"March": "mars",
"April": "apríl",
"May": "mai",
"June": "juni",
"July": "juli",
"August": "august",
"September": "september",
"October": "oktober",
"November": "november",
"December": "desember",
"Jan_Abbr": "jan",
"Feb_Abbr": "feb",
"Mar_Abbr": "mar",
"Apr_Abbr": "apr",
"May_Abbr": "mai",
"Jun_Abbr": "jun",
"Jul_Abbr": "jul",
"Aug_Abbr": "aug",
"Sep_Abbr": "sep",
"Oct_Abbr": "okt",
"Nov_Abbr": "nov",
"Dec_Abbr": "des",
"AM": "",
"PM": "",
"firstDayOfWeek": 1,
"twoDigitYearMax": 2029,
"mdy": "dmy",
"M/d/yyyy": "dd-MM-yyyy",
"dddd, MMMM dd, yyyy": "d. MMMM yyyy",
"h:mm tt": "HH.mm",
"h:mm:ss tt": "HH.mm.ss",
"dddd, MMMM dd, yyyy h:mm:ss tt": "d. MMMM yyyy HH.mm.ss",
"yyyy-MM-ddTHH:mm:ss": "yyyy-MM-ddTHH:mm:ss",
"yyyy-MM-dd HH:mm:ssZ": "yyyy-MM-dd HH:mm:ssZ",
"ddd, dd MMM yyyy HH:mm:ss": "ddd, dd MMM yyyy HH:mm:ss",
"MMMM dd": "d. MMMM",
"MMMM, yyyy": "MMMM yyyy",
"/jan(uary)?/": "jan(uar)?",
"/feb(ruary)?/": "feb(ruar)?",
"/mar(ch)?/": "mar(s)?",
"/apr(il)?/": "apr(íl)?",
"/may/": "mai",
"/jun(e)?/": "jun(i)?",
"/jul(y)?/": "jul(i)?",
"/aug(ust)?/": "aug(ust)?",
"/sep(t(ember)?)?/": "sep(t(ember)?)?",
"/oct(ober)?/": "okt(ober)?",
"/nov(ember)?/": "nov(ember)?",
"/dec(ember)?/": "des(ember)?",
"/^su(n(day)?)?/": "^su(n(nudagur)?)?",
"/^mo(n(day)?)?/": "^má(n(adagur)?)?",
"/^tu(e(s(day)?)?)?/": "^tý(s(dagur)?)?",
"/^we(d(nesday)?)?/": "^mi(k(udagur)?)?",
"/^th(u(r(s(day)?)?)?)?/": "^hó(s(dagur)?)?",
"/^fr(i(day)?)?/": "^fr(í(ggjadagur)?)?",
"/^sa(t(urday)?)?/": "^ley(g(ardagur)?)?",
"/^next/": "^next",
"/^last|past|prev(ious)?/": "^last|past|prev(ious)?",
"/^(\\+|aft(er)?|from|hence)/": "^(\\+|aft(er)?|from|hence)",
"/^(\\-|bef(ore)?|ago)/": "^(\\-|bef(ore)?|ago)",
"/^yes(terday)?/": "^yes(terday)?",
"/^t(od(ay)?)?/": "^t(od(ay)?)?",
"/^tom(orrow)?/": "^tom(orrow)?",
"/^n(ow)?/": "^n(ow)?",
"/^ms|milli(second)?s?/": "^ms|milli(second)?s?",
"/^sec(ond)?s?/": "^sec(ond)?s?",
"/^mn|min(ute)?s?/": "^mn|min(ute)?s?",
"/^h(our)?s?/": "^h(our)?s?",
"/^w(eek)?s?/": "^w(eek)?s?",
"/^m(onth)?s?/": "^m(onth)?s?",
"/^d(ay)?s?/": "^d(ay)?s?",
"/^y(ear)?s?/": "^y(ear)?s?",
"/^(a|p)/": "^(a|p)",
"/^(a\\.?m?\\.?|p\\.?m?\\.?)/": "^(a\\.?m?\\.?|p\\.?m?\\.?)",
"/^((e(s|d)t|c(s|d)t|m(s|d)t|p(s|d)t)|((gmt)?\\s*(\\+|\\-)\\s*\\d\\d\\d\\d?)|gmt|utc)/": "^((e(s|d)t|c(s|d)t|m(s|d)t|p(s|d)t)|((gmt)?\\s*(\\+|\\-)\\s*\\d\\d\\d\\d?)|gmt|utc)",
"/^\\s*(st|nd|rd|th)/": "^\\s*(st|nd|rd|th)",
"/^\\s*(\\:|a(?!u|p)|p)/": "^\\s*(\\:|a(?!u|p)|p)",
"LINT": "LINT",
"TOT": "TOT",
"CHAST": "CHAST",
"NZST": "NZST",
"NFT": "NFT",
"SBT": "SBT",
"AEST": "AEST",
"ACST": "ACST",
"JST": "JST",
"CWST": "CWST",
"CT": "CT",
"ICT": "ICT",
"MMT": "MMT",
"BIOT": "BST",
"NPT": "NPT",
"IST": "IST",
"PKT": "PKT",
"AFT": "AFT",
"MSK": "MSK",
"IRST": "IRST",
"FET": "FET",
"EET": "EET",
"CET": "CET",
"UTC": "UTC",
"GMT": "GMT",
"CVT": "CVT",
"GST": "GST",
"BRT": "BRT",
"NST": "NST",
"AST": "AST",
"EST": "EST",
"CST": "CST",
"MST": "MST",
"PST": "PST",
"AKST": "AKST",
"MIT": "MIT",
"HST": "HST",
"SST": "SST",
"BIT": "BIT",
"CHADT": "CHADT",
"NZDT": "NZDT",
"AEDT": "AEDT",
"ACDT": "ACDT",
"AZST": "AZST",
"IRDT": "IRDT",
"EEST": "EEST",
"CEST": "CEST",
"BST": "BST",
"PMDT": "PMDT",
"ADT": "ADT",
"NDT": "NDT",
"EDT": "EDT",
"CDT": "CDT",
"MDT": "MDT",
"PDT": "PDT",
"AKDT": "AKDT",
"HADT": "HADT"
};
Date.CultureStrings.lang = "fo-FO";
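// Example (sketch): with a DateJS build that bundles the i18n loader, this
// culture is typically activated like so (API name per newer DateJS builds;
// verify against the core you load):
//
//   Date.i18n.setLanguage("fo-FO");
//   new Date().toString("dddd, MMMM dd, yyyy"); // Faroese day/month names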
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package pointer
import (
"k8s.io/apimachinery/pkg/runtime"
)
func addDefaultingFuncs(scheme *runtime.Scheme) error {
return RegisterDefaults(scheme)
}
func SetDefaults_Tpointer(obj *Tpointer) {
if obj.BoolField == nil {
obj.BoolField = new(bool)
*obj.BoolField = true
}
}
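// Example (sketch): how these hooks are typically exercised, assuming the
// generated RegisterDefaults exists and Tpointer implements runtime.Object:
//
//	scheme := runtime.NewScheme()
//	_ = addDefaultingFuncs(scheme)
//	obj := &Tpointer{}
//	scheme.Default(obj) // obj.BoolField now points at true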
:107E000001C0B7C0112484B790E89093610010922C
:107E10006100882361F0982F9A70923041F081FFC1
:107E200002C097EF94BF282E80E0C6D0E9C085E05D
:107E30008093810082E08093C00088E18093C1003C
:107E400083E08093C40086E08093C2008EE0B4D0CB
:107E5000259A84E028E13EEF91E03093850020935D
:107E6000840096BBB09BFECF1D9AA8954091C000A0
:107E700047FD02C0815089F793D0813479F490D0C6
:107E8000182FA0D0123811F480E004C088E0113817
:107E900009F083E07ED080E17CD0EECF823419F40B
:107EA00084E198D0F8CF853411F485E0FACF853598
:107EB00041F476D0C82F74D0D82FCC0FDD1F82D0DC
:107EC000EACF863519F484E085D0DECF843691F58B
:107ED00067D066D0F82E64D0D82E00E011E05801AB
:107EE0008FEFA81AB80A5CD0F80180838501FA10D8
:107EF000F6CF68D0F5E4DF1201C0FFCF50E040E0DC
:107F000063E0CE0136D08E01E0E0F1E06F0182E067
:107F1000C80ED11C4081518161E0C8012AD00E5F9A
:107F20001F4FF601FC10F2CF50E040E065E0CE01BB
:107F300020D0B1CF843771F433D032D0F82E30D086
:107F400041D08E01F80185918F0123D0FA94F11070
:107F5000F9CFA1CF853739F435D08EE11AD085E934
:107F600018D084E097CF813509F0A9CF88E024D0DC
:107F7000A6CFFC010A0167BFE895112407B600FCF3
:107F8000FDCF667029F0452B19F481E187BFE89594
:107F900008959091C00095FFFCCF8093C60008958E
:107FA0008091C00087FFFCCF8091C00084FD01C09C
:107FB000A8958091C6000895E0E6F0E098E19083EE
:107FC00080830895EDDF803219F088E0F5DFFFCF80
:107FD00084E1DFCFCF93C82FE3DFC150E9F7CF9122
:027FE000F1CFDF
:027FFE00000879
:0400000300007E007B
:00000001FF
// Copyright (c) 2016, the Dart project authors. Please see the AUTHORS file
// for details. All rights reserved. Use of this source code is governed by a
// BSD-style license that can be found in the LICENSE file.
import 'dart:async';
import 'dart:html';
import 'dart:math' show max, min;
import 'package:angular/di.dart';
import 'package:meta/meta.dart';
import 'package:angular_components/utils/async/async.dart';
import 'package:angular_components/utils/disposer/disposable_callback.dart';
// TODO(google): Consolidate this with RenderSync /Angular.
import 'package:angular_components/utils/disposer/disposer.dart';
/// A callback from [DomService.scheduleRead] or [DomService.scheduleWrite].
typedef DomReadWriteFn = void Function();
/// A callback that returns a future that completes in the next animation frame.
typedef RequestAnimationFrame = Future<num> Function();
/// Utility class to synchronize DOM operations across components, e.g. to check
/// changes in the layout after a UI update or application event.
class DomService {
/// Whether to execute functions scheduled within [Zone.current].
///
/// This is the expected behavior and contract of Dart applications, but is
/// not applied automatically to every callback (only to Futures and Streams).
/// Eventually, this flag will be flipped to `true`, and deleted (all code
/// must use this behavior).
///
/// By flipping this to `false`, it means:
/// * [Zone.current] will be not be restored when the callbacks are executed.
/// * AngularDart (or any parent zone) will not know about the change.
@Deprecated('For legacy reasons. DO NOT USE unless you talk to AngularDart.')
static bool maintainZoneOnCallbacks = true;
static const _TURN_DONE_EVENT_TYPE = 'doms-turn';
/// The maximum time the idle scheduler waits between events.
static const int _MAX_IDLE_TIMER_MILLIS = 4000;
/// The minimum time the idle scheduler waits between events.
static const int _MIN_IDLE_TIMER_MILLIS = 400;
/// The time to increment after each layout check.
static const int _IDLE_TIMER_INC_MILLIS = 100;
final _domReadQueue = <DomReadWriteFn>[];
final _domWriteQueue = <DomReadWriteFn>[];
final NgZone _ngZone;
final Window _window;
Zone _rootZone = Zone.root;
bool _insideDigest = false;
Disposable _layoutObserveRead;
bool _scheduledProcessQueue = false;
StreamController<DomService> _onLayoutChangedController;
Stream<DomService> _onLayoutChangedStream;
StreamController<DomService> _onQueuesProcessedController;
Stream<DomService> _onQueuesProcessedStream;
int _nextFrameId = -1;
Completer<num> _nextFrameCompleter;
Future<num> _nextFrameFuture;
DomServiceState _state = DomServiceState.Idle;
bool _crossAppInitialized = false;
StreamController<Null> _onIdleController;
Stream<Null> _onIdleStream;
int _idleTimerMillis = _MAX_IDLE_TIMER_MILLIS;
Timer _idleTimer;
bool _inDispatchTurnDoneEvent = false;
/// Optional callback to check if DOM has been mutated by angular in
/// a zone turn.
///
/// Can be used to reduce layout checks due to Zone turns that don't detect
/// any changes and don't mutate the DOM.
///
/// isDomMutatedPredicate should return true if DOM has been modified since
/// last call to resetIsDomMutated.
IsDomMutatedPredicate isDomMutatedPredicate;
/// Optional callback to reset dom mutation state for predicate.
Function resetIsDomMutated;
bool _writeQueueChangedLayout = false;
/// Creates an instance that automatically runs outside of [ngZone], and
/// uses the browser-supplied ([Window]) for animation frames and resizing
/// checks.
DomService(this._ngZone, this._window);
/// Initializes the DomService to send window events, in order to coordinate
/// layout checks across apps on the same page.
void init() {
if (_crossAppInitialized) return;
_crossAppInitialized = true;
_ngZone.runOutsideAngular(() {
_ngZone.onEventDone.listen((_) {
if (isDomMutatedPredicate == null || isDomMutatedPredicate()) {
// Sending an event to DomService in other apps on the same page.
_inDispatchTurnDoneEvent = true;
_window.dispatchEvent(Event(_TURN_DONE_EVENT_TYPE));
_inDispatchTurnDoneEvent = false;
// If dom has been mutated by angular, mark [_writeQueueChangedLayout]
// to true. So that [_scheduleOnLayoutChanged] will be called normally
// when there is a request to change layout.
if (isDomMutatedPredicate != null && isDomMutatedPredicate()) {
_writeQueueChangedLayout = true;
}
if (resetIsDomMutated != null) {
resetIsDomMutated();
}
}
});
});
}
/// Indicates to users that we are currently processing items in the read
/// queue.
///
/// Client can optimize calls by not adding into the queue but instead
/// is safe to execute the read synchronously.
///
/// Example:
/// if (domService.isReadingDom) {
/// readClientMetrics();
/// } else {
/// domService.scheduleRead(readClientMetrics);
/// }
bool get isReadingDom => (_state == DomServiceState.Reading);
/// Indicates to users that we are currently processing items in the write
/// queue.
///
/// Client can optimize calls by not adding into the queue but instead
/// is safe to execute the write synchronously.
///
/// Example:
/// if (domService.isWritingDom) {
/// writeClientMetrics();
/// } else {
/// domService.scheduleWrite(writeClientMetrics);
/// }
bool get isWritingDom => (_state == DomServiceState.Writing);
/// Advances the animation frame future, without waiting for the window's
/// callback. If there were already an animation frame scheduled, it will
/// cancel it.
///
/// ONLY FOR TESTING!
/// DO NOT CALL THIS METHOD IN PRODUCTION CODE!
@visibleForTesting
void leap({num highResTimer, steps = 1}) {
// Force a angular turn to make sure layout calls are scheduled.
_ngZone.run(() {});
while (steps > 0) {
if (_nextFrameFuture == null) return;
if (highResTimer == null) {
highResTimer = DateTime.now().millisecondsSinceEpoch;
}
assert(_nextFrameCompleter != null);
final completer = _nextFrameCompleter;
_window.cancelAnimationFrame(_nextFrameId);
_nextFrameFuture = null;
_nextFrameCompleter = null;
completer.complete(highResTimer);
steps--;
}
}
/// A future that completes with an animation frame.
///
/// Unlike the browser's animation frame, if there is one already scheduled,
/// it reuses that one, avoiding creating multiple frames across components.
Future<num> get nextFrame {
if (_nextFrameFuture == null) {
assert(_nextFrameCompleter == null);
final completer = Completer<num>.sync();
_nextFrameCompleter = completer;
_ngZone.runOutsideAngular(() {
// Delayed initialization of the cross-app event sending.
// TODO(google): figure out a better way to initialize this earlier
init();
_nextFrameId = _window.requestAnimationFrame((highResTimer) {
// Protect against window implementation that does not
// cancel the frame.
if (completer.isCompleted) return;
if (completer == _nextFrameCompleter) {
_nextFrameFuture = null;
_nextFrameCompleter = null;
}
completer.complete(highResTimer);
});
});
_nextFrameFuture =
ZonedFuture(completer.future, _ngZone.runOutsideAngular);
}
return _nextFrameFuture;
}
/// A stream that fires when the browser seems to be idle.
///
/// **NOTE**:
/// - This is an EXPERIMENTAL feature, and should be used with extreme care.
/// - Subscriptions to the stream should be cancelled as soon as possible.
Stream<Null> get onIdle {
if (_onIdleStream == null) {
_onIdleController = StreamController.broadcast(
sync: true, onListen: _resetIdleTimer, onCancel: _resetIdleTimer);
// TODO(google): consider scoping it to be inside the managed zone:
_onIdleStream =
ZonedStream(_onIdleController.stream, _ngZone.runOutsideAngular);
// TODO(google): integrate with Chrome's new idle detection API
}
return _onIdleStream;
}
/// Schedules a coordinated DOM read. If already [isReadingDom], [fn] is
/// executed *synchronously*.
///
/// Otherwise, it will execute in the next animation frame. It is possible to
/// cancel by calling [Disposable.dispose] on the return (deprecated). It
/// is better to pass in a [DisposableCallback] instead
/// DisposableCallback callback = new DisposableCallback(fn);
/// domService.scheduleRead(callback);
Disposable scheduleRead(DomReadWriteFn fn) {
if (_state == DomServiceState.Reading) {
fn();
return Disposable.Noop;
}
// This is temporary until all the callers are fixed.
DisposableCallback callback = DisposableCallback(fn);
_scheduleInQueue(callback.call, _domReadQueue);
return callback;
}
/// Schedules a coordinated DOM write. If already [isWritingDom], [fn] is
/// executed *synchronously*.
///
/// Otherwise, it will execute in the next animation frame. It is possible to
/// cancel by calling [Disposable.dispose] on the return (deprecated). It
/// is better to pass in a [DisposableCallback] instead
/// DisposableCallback callback = new DisposableCallback(fn);
/// domService.scheduleWrite(callback);
Disposable scheduleWrite(DomReadWriteFn fn) {
if (_state == DomServiceState.Writing) {
fn();
return Disposable.Noop;
}
// This is temporary until all the callers are fixed.
DisposableCallback callback = DisposableCallback(fn);
_scheduleInQueue(callback.call, _domWriteQueue);
return callback;
}
void _scheduleInQueue(DomReadWriteFn fn, List<DomReadWriteFn> queue) {
if (maintainZoneOnCallbacks) {
fn = Zone.current.bindCallback(fn);
}
queue.add(fn);
_scheduleProcessQueue();
}
/// A future-based API version of [scheduleRead].
Future<void> onRead() {
final completer = Completer<Null>.sync();
scheduleRead(completer.complete);
return ZonedFuture(completer.future, _ngZone.runOutsideAngular);
}
/// A future-based API version of [scheduleWrite].
Future<void> onWrite() {
final completer = Completer<Null>.sync();
scheduleWrite(completer.complete);
return ZonedFuture(completer.future, _ngZone.runOutsideAngular);
}
void _processQueues() {
assert(_state == DomServiceState.Idle);
// If all reads and writes were cancelled, prematurely exit.
if (_domReadQueue.isEmpty && _domWriteQueue.isEmpty) {
_scheduledProcessQueue = false;
return;
}
// Execute all DOM reads.
_state = DomServiceState.Reading;
_processQueue(_domReadQueue);
// Execute all DOM writes.
_state = DomServiceState.Writing;
final previousWriteQueueLength = _processQueue(_domWriteQueue);
_writeQueueChangedLayout = previousWriteQueueLength > 0;
// Mention we are now in an 'Idle'. state (neither reading or writing).
_state = DomServiceState.Idle;
// If we have mutated the DOM in this queue, subscribers to
// `onLayoutChanged` will want to be notified, perhaps to recalculate
// dimensions or positioning of their elements.
if (_writeQueueChangedLayout) {
_scheduleOnLayoutChanged();
}
// If there are more outstanding items in the queue, schedule a new frame.
_scheduledProcessQueue = false;
if (_domReadQueue.isNotEmpty || _domWriteQueue.isNotEmpty) {
_scheduleProcessQueue();
} else if (_onQueuesProcessedController != null) {
_onQueuesProcessedController.add(this);
}
}
int _processQueue(List<DomReadWriteFn> queue) {
final int previousLength = queue.length;
for (int i = 0; i < queue.length; i++) {
DomReadWriteFn fn = queue[i];
if (fn == null) continue;
fn();
}
// Because we execute any other dom reads or writes synchronously, we
// should not have scheduled any additional functions.
assert(queue.length == previousLength);
queue.clear();
return previousLength;
}
/// A stream that fires when the queues have been processed and are now empty.
Stream<DomService> get onQueuesProcessed {
if (_onQueuesProcessedStream == null) {
_onQueuesProcessedController = StreamController.broadcast(sync: true);
_onQueuesProcessedStream = ZonedStream(
_onQueuesProcessedController.stream, _ngZone.runOutsideAngular);
}
return _onQueuesProcessedStream;
}
/// A stream that fires when a component should do a layout check.
///
/// **NOTE**:
/// - The stream fires *outside* of a framework managed zone
/// - The stream fires *within* a scheduled DOM read queue, making it safe
/// to openly check elements size or positioning in this callback.
Stream<DomService> get onLayoutChanged {
if (_onLayoutChangedStream == null) {
_onLayoutChangedController = StreamController.broadcast(sync: true);
_onLayoutChangedStream = ZonedStream(
_onLayoutChangedController.stream, _ngZone.runOutsideAngular);
_ngZone.runOutsideAngular(() {
// Capture events from Angular
_ngZone.onTurnStart.listen((_) {
if (_state != DomServiceState.Idle) return;
_insideDigest = true;
});
// Trigger a layout check after the digest.
_ngZone.onEventDone.listen((_) {
if (_state != DomServiceState.Idle) return;
_insideDigest = false;
// Reduce layout checks to only those zone turns that mutated DOM.
if (isDomMutatedPredicate == null ||
isDomMutatedPredicate() ||
_writeQueueChangedLayout) {
_scheduleOnLayoutChanged();
_writeQueueChangedLayout = false;
}
});
_listenOnLayoutEvents(_window.onAnimationEnd);
_listenOnLayoutEvents(_window.onResize);
_listenOnLayoutEvents(_window.onTransitionEnd);
// Listening Angular turn done events coming from other apps.
_window.addEventListener(_TURN_DONE_EVENT_TYPE, (_) {
if (!_inDispatchTurnDoneEvent) {
_scheduleOnLayoutChanged();
}
});
});
}
return _onLayoutChangedStream;
}
void _listenOnLayoutEvents(Stream<Object> events) {
if (events == null) return; // happens only in tests with mocked window
events.listen((_) => _scheduleOnLayoutChanged());
}
/// Tracks a layout change defined by [fn], and calls the [callback] function
/// with the last stable value.
///
/// If [framesToStabilize] is set, the callback will wait for the specified
/// number of animation frames before it considers the value to be stable.
/// If the value changes while waiting for stabilization, the animation frame
/// count restarts. The recommended value depends on the use case:
/// - visibility checks do not need animation frame stabilization,
/// - size-tracking properties (e.g. widths on resize) may wait for 3+ frames
/// before reacting on the new size.
///
/// The [callback] is assumed to do lightweight DOM updates only.
/// If you want to trigger both model changes and async operations, you must
/// set the [runInAngularZone] flag.
///
/// Returns a subscription that allows pausing, resuming and canceling the
/// observer.
StreamSubscription<DomService> trackLayoutChange<T>(
T Function() fn, void Function(T) callback,
{int framesToStabilize = 1, bool runInAngularZone = false}) {
// TODO(google): Move layout checking into ruler service when landed.
var trackerCallback = callback;
if (runInAngularZone) {
trackerCallback = (T value) {
_ngZone.run(() => callback(value));
};
}
var tracker = _ChangeTracker(this, fn, trackerCallback, framesToStabilize);
return onLayoutChanged.listen((_) => tracker._onLayoutObserve());
}
/// Adds a new callback to the layout observer heartbeat.
///
/// The layout observer heartbeat is a consistency check that is run outside
/// of the Angular zone. If a component needs to synchronize its position,
/// size or orientation to the ever-changing layout, it can run its
/// observations in this callback. If there is a need to modify the DOM or
/// trigger a new Angular digest, it can do it through the [updateLayout]
/// method.
///
/// Returns a subscription that allows pausing, resuming and canceling the
/// observer.
@Deprecated("Use onLayoutChanged instead")
StreamSubscription<DomService> addLayoutObserver(void domReadCallback()) =>
onLayoutChanged.listen((_) => domReadCallback());
String describeStability() {
return {
'_insideDigest': _insideDigest,
'_scheduledProcessQueue': _scheduledProcessQueue,
'_layoutObserveRead': _layoutObserveRead != null,
'_nextFrameFuture': _nextFrameFuture != null,
'_domReadQueue': _domReadQueue.length,
'_domWriteQueue': _domWriteQueue.length,
}.toString();
}
/// Whether there is any pending update.
bool get hasPendingUpdate =>
_insideDigest ||
_scheduledProcessQueue ||
(_layoutObserveRead != null) ||
_nextFrameFuture != null ||
_domReadQueue.isNotEmpty ||
_domWriteQueue.isNotEmpty;
/// Whether the view can be considered as stable.
bool get isStable => !hasPendingUpdate;
void _scheduleProcessQueue() {
if (!_scheduledProcessQueue) {
_scheduledProcessQueue = true;
nextFrame.then((_) => _processQueues());
}
}
// TODO(google): Consider deprecating from public API.
void requestLayoutFrame() {
_scheduleOnLayoutChanged();
}
void _scheduleOnLayoutChanged() {
// If we have a previously scheduled layout check, return.
if (_layoutObserveRead != null) return;
// both layout changed and idle listeners can trigger the layout frame
bool hasLayoutListener = _onLayoutChangedController?.hasListener == true;
bool hasIdleListener = _onIdleController?.hasListener == true;
if ((!hasLayoutListener) && (!hasIdleListener)) return;
// Scheduling the layout observe on the next animation frame's DOM read.
if (isReadingDom) {
// We must not join the current DOM read phase, rather force it to be in
// the next animation frame. The DOM write below will be executed after
// the reads, and will schedule a read that will trigger a new animation
// frame.
scheduleWrite(() {});
return;
}
_layoutObserveRead = scheduleRead(() {
_layoutObserveRead = null;
if (_onLayoutChangedController != null) {
_onLayoutChangedController.add(this);
}
_resetIdleTimer();
});
}
/// Returns the current state of the service.
DomServiceState get state => _state;
void _resetIdleTimer() {
if (_onIdleController == null) return;
_idleTimerMillis += _IDLE_TIMER_INC_MILLIS;
_idleTimerMillis = min(_MAX_IDLE_TIMER_MILLIS, _idleTimerMillis);
_cancelIdleTimer();
if (!_onIdleController.hasListener) return;
// running in root zone, in order to go outside of the activity tracking
// TODO(google): implement proper activity tracking integration
_rootZone.run(() {
// TODO(google): consider adding animation frame counting that can be used:
// - to shorten the minimum period
// - to detect CPU activities that we are not aware of
_idleTimerMillis = max(_MIN_IDLE_TIMER_MILLIS, _idleTimerMillis);
_idleTimer = Timer(Duration(milliseconds: _idleTimerMillis), () {
_idleTimer = null;
_idleTimerMillis = _idleTimerMillis ~/ 2;
_onIdleController.add(null);
_scheduleOnLayoutChanged();
});
});
}
void _cancelIdleTimer() {
if (_idleTimer != null) {
_idleTimer.cancel();
_idleTimer = null;
}
}
@visibleForTesting
set rootZone(Zone value) {
_rootZone = value;
}
}
/// State for [DomService] implementations to use.
enum DomServiceState {
/// The DOM service is currently not processing the queue.
Idle,
/// The DOM service is executing all scheduled writes to the DOM.
Writing,
/// The DOM service is executing all scheduled reads to the DOM.
Reading
}
class _ChangeTracker<T> {
final DomService _domService;
final T Function() _fn;
final void Function(T) _callback;
final int _framesToStabilize;
T _lastValue;
int _stableFrameCounter = 0;
_ChangeTracker(
this._domService, this._fn, this._callback, this._framesToStabilize) {
assert(_framesToStabilize > 0);
}
void _onLayoutObserve() {
var value = _fn();
if (value != _lastValue) {
_lastValue = value;
_stableFrameCounter = _framesToStabilize;
}
// zero means we have already sent that value to the callback
if (_stableFrameCounter == 0) return;
_stableFrameCounter--;
if (_stableFrameCounter == 0) {
// just got down to zero, need to invoke callback
_domService.scheduleRead(() {
_callback(_lastValue);
});
} else {
// we need more frames to stabilize the value
_domService.requestLayoutFrame();
}
}
}
typedef IsDomMutatedPredicate = bool Function();
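/// Example (sketch): coordinating a measure/mutate pair through an injected
/// [DomService]; the element names are illustrative. The write is queued
/// while reading and runs in the same frame, after all reads.
void matchWidth(DomService domService, Element host, Element content) {
  domService.scheduleRead(() {
    final width = content.clientWidth;
    domService.scheduleWrite(() {
      host.style.width = '${width}px';
    });
  });
}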
"push"
enum-value
permissions-1
permissions
1
current
https://w3c.github.io/permissions/#dom-permissionname-push
1
1
PermissionName
-
PublicKeyCredential
interface
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#publickeycredential
1
1
-
PublicKeyCredential
interface
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#publickeycredential
1
1
-
PublicKeyCredentialCreationOptions
dictionary
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialcreationoptions
1
1
-
PublicKeyCredentialCreationOptions
dictionary
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dictdef-publickeycredentialcreationoptions
1
1
-
PublicKeyCredentialDescriptor
dictionary
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialdescriptor
1
1
-
PublicKeyCredentialDescriptor
dictionary
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dictdef-publickeycredentialdescriptor
1
1
-
PublicKeyCredentialEntity
dictionary
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialentity
1
1
-
PublicKeyCredentialEntity
dictionary
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dictdef-publickeycredentialentity
1
1
-
PublicKeyCredentialParameters
dictionary
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialparameters
1
1
-
PublicKeyCredentialParameters
dictionary
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dictdef-publickeycredentialparameters
1
1
-
PublicKeyCredentialRequestOptions
dictionary
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialrequestoptions
1
1
-
PublicKeyCredentialRequestOptions
dictionary
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dictdef-publickeycredentialrequestoptions
1
1
-
PublicKeyCredentialRpEntity
dictionary
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialrpentity
1
1
-
PublicKeyCredentialRpEntity
dictionary
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dictdef-publickeycredentialrpentity
1
1
-
PublicKeyCredentialType
enum
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#enumdef-publickeycredentialtype
1
1
-
PublicKeyCredentialType
enum
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#enumdef-publickeycredentialtype
1
1
-
PublicKeyCredentialUserEntity
dictionary
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialuserentity
1
1
-
PublicKeyCredentialUserEntity
dictionary
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dictdef-publickeycredentialuserentity
1
1
-
PushPermissionDescriptor
dictionary
permissions-1
permissions
1
current
https://w3c.github.io/permissions/#dictdef-pushpermissiondescriptor
1
1
-
PutForwards
extended-attribute
webidl
webidl
1
current
https://heycam.github.io/webidl/#PutForwards
1
1
-
[[PullSteps]]
abstract-op
streams
streams
1
current
https://streams.spec.whatwg.org/#abstract-opdef-pullsteps
1
1
-
pubKeyCredParams
dict-member
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dom-publickeycredentialcreationoptions-pubkeycredparams
1
1
PublicKeyCredentialCreationOptions
-
pubKeyCredParams
dict-member
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dom-publickeycredentialcreationoptions-pubkeycredparams
1
1
PublicKeyCredentialCreationOptions
-
public
dfn
webcryptoapi-1
webcryptoapi
1
snapshot
https://www.w3.org/TR/WebCryptoAPI/#dfn-EcdhKeyDeriveParams-public
1
-
public bluetooth address
dfn
web-bluetooth-1
web-bluetooth
1
snapshot
https://webbluetoothcg.github.io/web-bluetooth/#public-bluetooth-address
1
-
public device address
dfn
web-bluetooth-1
web-bluetooth
1
snapshot
https://webbluetoothcg.github.io/web-bluetooth/#public-device-address
1
-
public id
dfn
dom
dom
1
snapshot
https://dom.spec.whatwg.org/#concept-doctype-publicid
1
1
DocumentType
-
public key credential
dfn
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#public-key-credential
1
-
public key credential
dfn
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#public-key-credential
1
-
public key credential source
dfn
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#public-key-credential-source
1
-
public key credential source
dfn
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#public-key-credential-source
1
-
public suffix
dfn
html
html
1
snapshot
https://html.spec.whatwg.org/multipage/infrastructure.html#public-suffix
1
-
public suffix
dfn
url
url
1
snapshot
https://url.spec.whatwg.org/#host-public-suffix
1
1
host
-
public-key
enum-value
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dom-publickeycredentialtype-public-key
1
1
PublicKeyCredentialType
-
public-key
enum-value
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dom-publickeycredentialtype-public-key
1
1
PublicKeyCredentialType
-
publicId
argument
dom
dom
1
snapshot
https://dom.spec.whatwg.org/#dom-domimplementation-createdocumenttype-qualifiedname-publicid-systemid-publicid
1
1
DOMImplementation/createDocumentType(qualifiedName, publicId, systemId)
-
publicId
attribute
dom
dom
1
snapshot
https://dom.spec.whatwg.org/#dom-documenttype-publicid
1
1
DocumentType
-
publicKey
dict-member
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dom-credentialcreationoptions-publickey
1
1
CredentialCreationOptions
-
publicKey
dict-member
webauthn-1
webauthn
1
snapshot
https://www.w3.org/TR/webauthn/#dom-credentialrequestoptions-publickey
1
1
CredentialRequestOptions
-
publicKey
dict-member
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dom-credentialcreationoptions-publickey
1
1
CredentialCreationOptions
-
publicKey
dict-member
webauthn-1
webauthn
1
current
https://w3c.github.io/webauthn/#dom-credentialrequestoptions-publickey
1
1
CredentialRequestOptions
-
publicexponent
dfn
webcryptoapi-1
webcryptoapi
1
snapshot
https://www.w3.org/TR/WebCryptoAPI/#dfn-RsaKeyGenParams-publicExponent
1
-
publicexponent
dfn
webcryptoapi-1
webcryptoapi
1
snapshot
https://www.w3.org/TR/WebCryptoAPI/#dfn-RsaKeyAlgorithm-publicExponent
1
-
publickey
dfn
webcryptoapi-1
webcryptoapi
1
snapshot
https://www.w3.org/TR/WebCryptoAPI/#dfn-CryptoKeyPair-publicKey
1
-
pull source
dfn
streams
streams
1
current
https://streams.spec.whatwg.org/#pull-source
1
-
pull(controller)
method
streams
streams
1
current
https://streams.spec.whatwg.org/#dom-underlying-source-pull
1
1
underlying source
-
punctuation
value
css-text-4
css-text
4
current
https://drafts.csswg.org/css-text-4/#valdef-text-spacing-punctuation
1
1
text-spacing
-
punctuation
dfn
css-text-decor-4
css-text-decor
4
current
https://drafts.csswg.org/css-text-decor-4/#punctuation
1
-
purple
dfn
css-color-3
css-color
3
snapshot
https://www.w3.org/TR/css3-color/#purple
1
-
purple
dfn
css-color-3
css-color
3
snapshot
https://www.w3.org/TR/css3-color/#purple0
1
-
purple
dfn
css-color-3
css-color
3
current
https://drafts.csswg.org/css-color-3/#purple
1
-
purple
dfn
css-color-3
css-color
3
current
https://drafts.csswg.org/css-color-3/#purple0
1
-
purple
value
css-color-4
css-color
4
snapshot
https://www.w3.org/TR/css-color-4/#valdef-color-purple
1
1
<color>
-
purple
value
css-color-4
css-color
4
current
https://drafts.csswg.org/css-color-4/#valdef-color-purple
1
1
<color>
-
purpose
dfn
appmanifest
appmanifest
1
snapshot
https://www.w3.org/TR/appmanifest/#dom-imageresource-purpose
1
imageresource
-
push
dfn
html
html
1
snapshot
https://html.spec.whatwg.org/multipage/infrastructure.html#stack-push
1
-
push
dfn
infra
infra
1
current
https://infra.spec.whatwg.org/#stack-push
1
1
stack
-
push a ruby annotation
dfn
html
html
1
snapshot
https://html.spec.whatwg.org/multipage/text-level-semantics.html#push-a-ruby-annotation
1
-
push a ruby level
dfn
html
html
1
snapshot
https://html.spec.whatwg.org/multipage/text-level-semantics.html#push-a-ruby-level
1
-
push onto the list of active formatting elements
dfn
html
html
1
snapshot
https://html.spec.whatwg.org/multipage/parsing.html#push-onto-the-list-of-active-formatting-elements
1
-
push source
dfn
streams
streams
1
current
https://streams.spec.whatwg.org/#push-source
1
-
push-button
value
css-ui-4
css-ui
4
current
https://drafts.csswg.org/css-ui-4/#valdef-appearance-push-button
1
1
appearance
-
pushState(data, title)
method
html
html
1
snapshot
https://html.spec.whatwg.org/multipage/history.html#dom-history-pushstate
1
1
History
-
pushed
dfn
encoding-1
encoding
1
current
https://encoding.spec.whatwg.org/#concept-stream-push
1
stream
-
put(request, response)
method
service-workers
service-workers
1
snapshot
https://www.w3.org/TR/service-workers-1/#dom-cache-put
1
1
Cache
-
put(request, response)
method
service-workers
service-workers
1
current
https://w3c.github.io/ServiceWorker/#dom-cache-put
1
1
Cache
-
put(value)
method
indexeddb-2
indexeddb
2
current
https://w3c.github.io/IndexedDB/#dom-idbobjectstore-put
1
1
IDBObjectStore
-
put(value, key)
method
indexeddb-2
indexeddb
2
current
https://w3c.github.io/IndexedDB/#dom-idbobjectstore-put
1
1
IDBObjectStore
-
putImageData()
method
html
html
1
snapshot
https://html.spec.whatwg.org/multipage/canvas.html#dom-context-2d-putimagedata
1
1
CanvasImageData
-
<?php
/*
* This file is part of the Symfony package.
*
* (c) Fabien Potencier <[email protected]>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
namespace Symfony\Component\Translation\Writer;
use Symfony\Component\Translation\Dumper\DumperInterface;
use Symfony\Component\Translation\Exception\InvalidArgumentException;
use Symfony\Component\Translation\Exception\RuntimeException;
use Symfony\Component\Translation\MessageCatalogue;
/**
* TranslationWriter writes translation messages.
*
* @author Michel Salib <[email protected]>
*/
class TranslationWriter implements TranslationWriterInterface
{
private $dumpers = [];
/**
* Adds a dumper to the writer.
*
* @param string $format The format of the dumper
* @param DumperInterface $dumper The dumper
*/
public function addDumper($format, DumperInterface $dumper)
{
$this->dumpers[$format] = $dumper;
}
/**
* Disables dumper backup.
*
* @deprecated since Symfony 4.1
*/
public function disableBackup()
{
@trigger_error(
sprintf(
'The "%s()" method is deprecated since Symfony 4.1.',
__METHOD__
),
E_USER_DEPRECATED
);
foreach ($this->dumpers as $dumper) {
if (method_exists($dumper, 'setBackup')) {
$dumper->setBackup(false);
}
}
}
/**
* Obtains the list of supported formats.
*
* @return array
*/
public function getFormats()
{
return array_keys($this->dumpers);
}
/**
* Writes translation from the catalogue according to the selected format.
*
* @param MessageCatalogue $catalogue The message catalogue to write
* @param string $format The format to use to dump the messages
* @param array $options Options that are passed to the dumper
*
* @throws InvalidArgumentException If no dumper is registered for the format
* @throws RuntimeException         If the target directory cannot be created
*/
public function write(MessageCatalogue $catalogue, $format, $options = [])
{
if (!isset($this->dumpers[$format])) {
throw new InvalidArgumentException(
sprintf(
'There is no dumper associated with format "%s".',
$format
)
);
}
// get the right dumper
$dumper = $this->dumpers[$format];
if (
isset($options['path']) &&
!is_dir($options['path']) &&
!@mkdir($options['path'], 0777, true) && !is_dir($options['path'])
) {
throw new RuntimeException(
sprintf(
'Translation Writer was not able to create directory "%s"',
$options['path']
)
);
}
// save
$dumper->dump($catalogue, $options);
}
}
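A minimal usage sketch, assuming the stock YamlFileDumper shipped with symfony/translation (which needs the symfony/yaml component installed); the locale, message ids, and target path are illustrative:

<?php
use Symfony\Component\Translation\Dumper\YamlFileDumper;
use Symfony\Component\Translation\MessageCatalogue;
use Symfony\Component\Translation\Writer\TranslationWriter;

// Register a dumper under a format key, then write a catalogue with it.
$writer = new TranslationWriter();
$writer->addDumper('yaml', new YamlFileDumper());

$catalogue = new MessageCatalogue('fr', [
    'messages' => ['app.greeting' => 'Bonjour'],
]);

// write() creates the target directory on demand (see the mkdir guard above).
$writer->write($catalogue, 'yaml', ['path' => __DIR__.'/translations']);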
# AUTOGENERATED FILE
FROM balenalib/artik10-alpine:3.12-run
# remove several traces of python
RUN apk del python*
# http://bugs.python.org/issue19846
# > At the moment, setting "LANG=C" on a Linux system *fundamentally breaks Python 3*, and that's not OK.
ENV LANG C.UTF-8
# install python dependencies
RUN apk add --no-cache ca-certificates libffi \
&& apk add --no-cache libssl1.0 || apk add --no-cache libssl1.1
# key 63C7CC90: public key "Simon McVittie <[email protected]>" imported
# key 3372DCFA: public key "Donald Stufft (dstufft) <[email protected]>" imported
RUN gpg --keyserver keyring.debian.org --recv-keys 4DE8FF2A63C7CC90 \
&& gpg --keyserver keyserver.ubuntu.com --recv-key 6E3CBCE93372DCFA \
&& gpg --keyserver keyserver.ubuntu.com --recv-keys 0x52a43a1e4b77b059
# point Python at a system-provided certificate database. Otherwise, we might hit CERTIFICATE_VERIFY_FAILED.
# https://www.python.org/dev/peps/pep-0476/#trust-database
ENV SSL_CERT_FILE /etc/ssl/certs/ca-certificates.crt
ENV PYTHON_VERSION 3.8.5
# if this is called "PIP_VERSION", pip explodes with "ValueError: invalid truth value '<VERSION>'"
ENV PYTHON_PIP_VERSION 20.1.1
ENV SETUPTOOLS_VERSION 49.1.0
RUN set -x \
&& buildDeps=' \
curl \
gnupg \
' \
&& apk add --no-cache --virtual .build-deps $buildDeps \
&& curl -SLO "http://resin-packages.s3.amazonaws.com/python/v$PYTHON_VERSION/Python-$PYTHON_VERSION.linux-alpine-armv7hf-openssl1.1.tar.gz" \
&& echo "f24a1b6410efb722d686baec5c4396dd2aeab7f7f8313fd7dd534b61bdf7676d Python-$PYTHON_VERSION.linux-alpine-armv7hf-openssl1.1.tar.gz" | sha256sum -c - \
&& tar -xzf "Python-$PYTHON_VERSION.linux-alpine-armv7hf-openssl1.1.tar.gz" --strip-components=1 \
&& rm -rf "Python-$PYTHON_VERSION.linux-alpine-armv7hf-openssl1.1.tar.gz" \
&& if [ ! -e /usr/local/bin/pip3 ]; then : \
&& curl -SLO "https://raw.githubusercontent.com/pypa/get-pip/430ba37776ae2ad89f794c7a43b90dc23bac334c/get-pip.py" \
&& echo "19dae841a150c86e2a09d475b5eb0602861f2a5b7761ec268049a662dbd2bd0c get-pip.py" | sha256sum -c - \
&& python3 get-pip.py \
&& rm get-pip.py \
; fi \
&& pip3 install --no-cache-dir --upgrade --force-reinstall pip=="$PYTHON_PIP_VERSION" setuptools=="$SETUPTOOLS_VERSION" \
&& find /usr/local \
\( -type d -a -name test -o -name tests \) \
-o \( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
-exec rm -rf '{}' + \
&& cd / \
&& rm -rf /usr/src/python ~/.cache
# make some useful symlinks that are expected to exist
RUN cd /usr/local/bin \
&& ln -sf pip3 pip \
&& { [ -e easy_install ] || ln -s easy_install-* easy_install; } \
&& ln -sf idle3 idle \
&& ln -sf pydoc3 pydoc \
&& ln -sf python3 python \
&& ln -sf python3-config python-config
CMD ["echo","'No CMD command was set in Dockerfile! Details about CMD command could be found in Dockerfile Guide section in our Docs. Here's the link: https://balena.io/docs"]
RUN curl -SLO "https://raw.githubusercontent.com/balena-io-library/base-images/8accad6af708fca7271c5c65f18a86782e19f877/scripts/assets/tests/[email protected]" \
&& echo "Running test-stack@python" \
&& chmod +x [email protected] \
&& bash [email protected] \
&& rm -rf [email protected]
RUN [ ! -d /.balena/messages ] && mkdir -p /.balena/messages; echo $'Here are a few details about this Docker image (For more information please visit https://www.balena.io/docs/reference/base-images/base-images/): \nArchitecture: ARM v7 \nOS: Alpine Linux 3.12 \nVariant: run variant \nDefault variable(s): UDEV=off \nThe following software stack is preinstalled: \nPython v3.8.5, Pip v20.1.1, Setuptools v49.1.0 \nExtra features: \n- Easy way to install packages with `install_packages <package-name>` command \n- Run anywhere with cross-build feature (for ARM only) \n- Keep the container idling with `balena-idle` command \n- Show base image details with `balena-info` command' > /.balena/messages/image-info
RUN echo $'#!/bin/bash\nbalena-info\nbusybox ln -sf /bin/busybox /bin/sh\n/bin/sh "$@"' > /bin/sh-shim \
&& chmod +x /bin/sh-shim \
&& ln -f /bin/sh /bin/sh.real \
&& ln -f /bin/sh-shim /bin/sh | {
"pile_set_name": "Github"
} |
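Although the file is autogenerated, building and running it locally follows the usual Docker workflow (the image tag below is just an example, and since this is an ARMv7 base image it may need QEMU/binfmt emulation on an x86 host):

docker build -t balena-python-test .
docker run --rm -it balena-python-test python3 --version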
{
"name": "multifeed",
"description": "multi-writer hypercore",
"author": "Stephen Whitmore <[email protected]>",
"version": "5.2.4",
"repository": {
"url": "git://github.com/noffle/multifeed.git"
},
"homepage": "https://github.com/noffle/multifeed",
"bugs": "https://github.com/noffle/multifeed/issues",
"main": "index.js",
"scripts": {
"test": "tape test/*.js",
"lint": "standard"
},
"keywords": [],
"dependencies": {
"debug": "^4.1.0",
"hypercore": "^8.3.0",
"hypercore-protocol": "^7.7.1",
"inherits": "^2.0.3",
"mutexify": "^1.2.0",
"once": "^1.4.0",
"random-access-file": "^2.0.1",
"random-access-memory": "^3.1.1",
"through2": "^3.0.0"
},
"devDependencies": {
"hypercore-crypto": "^1.0.0",
"pump": "^3.0.0",
"pumpify": "^1.5.1",
"random-access-latency": "^1.0.0",
"rimraf": "^2.6.3",
"standard": "~10.0.0",
"tape": "~4.6.2",
"tmp": "0.0.33"
},
"license": "ISC"
}
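For orientation, a minimal consumer sketch based on the package's documented API (the feed name and the in-memory storage choice are illustrative):

var multifeed = require('multifeed')
var ram = require('random-access-memory')

// Create a multifeed backed by in-memory storage.
var multi = multifeed(ram, { valueEncoding: 'json' })

// Obtain (or create) a named local writer feed and append an entry.
multi.writer('local', function (err, feed) {
  if (err) throw err
  feed.append({ type: 'note', text: 'hello' }, function (err) {
    if (err) throw err
    console.log('appended to feed', feed.key.toString('hex'))
  })
})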
import { NativeAdapter } from "./adapter/native.adapter.class";
import { Button } from "./button.enum";
import { Mouse } from "./mouse.class";
import { Point } from "./point.class";
import { LineHelper } from "./util/linehelper.class";
jest.mock("./adapter/native.adapter.class");
beforeEach(() => {
jest.resetAllMocks();
});
const linehelper = new LineHelper();
describe("Mouse class", () => {
it("should have a default delay of 500 ms", () => {
// GIVEN
const adapterMock = new NativeAdapter();
const SUT = new Mouse(adapterMock);
// WHEN
// THEN
expect(SUT.config.autoDelayMs).toEqual(100);
});
it("should forward scrollLeft to the native adapter class", async () => {
// GIVEN
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
const scrollAmount = 5;
// WHEN
const result = await SUT.scrollLeft(scrollAmount);
// THEN
expect(nativeAdapterMock.scrollLeft).toBeCalledWith(scrollAmount);
expect(result).toBe(SUT);
});
it("should forward scrollRight to the native adapter class", async () => {
// GIVEN
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
const scrollAmount = 5;
// WHEN
const result = await SUT.scrollRight(scrollAmount);
// THEN
expect(nativeAdapterMock.scrollRight).toBeCalledWith(scrollAmount);
expect(result).toBe(SUT);
});
it("should forward scrollDown to the native adapter class", async () => {
// GIVEN
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
const scrollAmount = 5;
// WHEN
const result = await SUT.scrollDown(scrollAmount);
// THEN
expect(nativeAdapterMock.scrollDown).toBeCalledWith(scrollAmount);
expect(result).toBe(SUT);
});
it("should forward scrollUp to the native adapter class", async () => {
// GIVEN
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
const scrollAmount = 5;
// WHEN
const result = await SUT.scrollUp(scrollAmount);
// THEN
expect(nativeAdapterMock.scrollUp).toBeCalledWith(scrollAmount);
expect(result).toBe(SUT);
});
it("should forward leftClick to the native adapter class", async () => {
// GIVEN
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
// WHEN
const result = await SUT.leftClick();
// THEN
expect(nativeAdapterMock.leftClick).toBeCalled();
expect(result).toBe(SUT);
});
it("should forward rightClick to the native adapter class", async () => {
// GIVEN
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
// WHEN
const result = await SUT.rightClick();
// THEN
expect(nativeAdapterMock.rightClick).toBeCalled();
expect(result).toBe(SUT);
});
it("update mouse position along path on move", async () => {
// GIVEN
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
const path = linehelper.straightLine(new Point(0, 0), new Point(10, 10));
// WHEN
const result = await SUT.move(path);
// THEN
expect(nativeAdapterMock.setMousePosition).toBeCalledTimes(path.length);
expect(result).toBe(SUT);
});
it("should press and hold left mouse button, move and release left mouse button on drag", async () => {
// GIVEN
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
const path = linehelper.straightLine(new Point(0, 0), new Point(10, 10));
// WHEN
const result = await SUT.drag(path);
// THEN
expect(nativeAdapterMock.pressButton).toBeCalledWith(Button.LEFT);
expect(nativeAdapterMock.setMousePosition).toBeCalledTimes(path.length);
expect(nativeAdapterMock.releaseButton).toBeCalledWith(Button.LEFT);
expect(result).toBe(SUT);
});
});
describe("Mousebuttons", () => {
it.each([
[Button.LEFT, Button.LEFT],
[Button.MIDDLE, Button.MIDDLE],
[Button.RIGHT, Button.RIGHT],
] as Array<[Button, Button]>)("should be pressed and released", async (input: Button, expected: Button) => {
const nativeAdapterMock = new NativeAdapter();
const SUT = new Mouse(nativeAdapterMock);
const pressed = await SUT.pressButton(input);
const released = await SUT.releaseButton(input);
expect(nativeAdapterMock.pressButton).toBeCalledWith(expected);
expect(nativeAdapterMock.releaseButton).toBeCalledWith(expected);
expect(pressed).toBe(SUT);
expect(released).toBe(SUT);
});
});
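The tests above mock the adapter; driven against a real NativeAdapter, the same API reads roughly like this (a sketch, not the library's exported entry point):

import { NativeAdapter } from "./adapter/native.adapter.class";
import { Mouse } from "./mouse.class";
import { Point } from "./point.class";
import { LineHelper } from "./util/linehelper.class";

// Move along a straight line, then left-click at the destination.
const mouse = new Mouse(new NativeAdapter());
const path = new LineHelper().straightLine(new Point(0, 0), new Point(100, 100));
mouse.move(path).then((m) => m.leftClick());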
fileFormatVersion: 2
guid: c5b49c719e90e7448a36096819e48dd2
timeCreated: 1471594173
licenseType: Free
TextureImporter:
fileIDToRecycleName: {}
serializedVersion: 4
mipmaps:
mipMapMode: 0
enableMipMap: 0
sRGBTexture: 0
linearTexture: 0
fadeOut: 0
borderMipMap: 0
mipMapFadeDistanceStart: 1
mipMapFadeDistanceEnd: 3
bumpmap:
convertToNormalMap: 0
externalNormalMap: 0
heightScale: 0.25
normalMapFilter: 0
isReadable: 1
grayScaleToAlpha: 0
generateCubemap: 6
cubemapConvolution: 0
seamlessCubemap: 0
textureFormat: 4
maxTextureSize: 2048
textureSettings:
filterMode: 0
aniso: -1
mipBias: -1
wrapMode: 1
nPOTScale: 0
lightmap: 0
compressionQuality: 50
spriteMode: 0
spriteExtrude: 1
spriteMeshType: 1
alignment: 0
spritePivot: {x: 0.5, y: 0.5}
spriteBorder: {x: 0, y: 0, z: 0, w: 0}
spritePixelsToUnits: 100
alphaUsage: 1
alphaIsTransparency: 0
spriteTessellationDetail: -1
textureType: 0
textureShape: 1
maxTextureSizeSet: 0
compressionQualitySet: 0
textureFormatSet: 0
platformSettings:
- buildTarget: DefaultTexturePlatform
maxTextureSize: 2048
textureFormat: 4
textureCompression: 0
compressionQuality: 50
crunchedCompression: 0
allowsAlphaSplitting: 0
overridden: 0
- buildTarget: Standalone
maxTextureSize: 2048
textureFormat: 4
textureCompression: 0
compressionQuality: 50
crunchedCompression: 0
allowsAlphaSplitting: 0
overridden: 1
- buildTarget: Windows Store Apps
maxTextureSize: 2048
textureFormat: 4
textureCompression: 0
compressionQuality: 50
crunchedCompression: 0
allowsAlphaSplitting: 0
overridden: 0
- buildTarget: WebGL
maxTextureSize: 2048
textureFormat: 4
textureCompression: 0
compressionQuality: 50
crunchedCompression: 0
allowsAlphaSplitting: 0
overridden: 0
spriteSheet:
serializedVersion: 2
sprites: []
outline: []
spritePackingTag:
userData:
assetBundleName: themeassets
assetBundleVariant:
class VisibilityToggle extends React.Component {
constructor(props) {
super(props);
this.handleToggleVisibility = this.handleToggleVisibility.bind(this);
this.state = {
visibility: false
};
}
handleToggleVisibility() {
this.setState((prevState) => {
return {
visibility: !prevState.visibility
};
});
}
render() {
return (
<div>
<h1>Visibility Toggle</h1>
<button onClick={this.handleToggleVisibility}>
{this.state.visibility ? 'Hide details' : 'Show details'}
</button>
{this.state.visibility && (
<div>
<p>Hey. These are some details you can now see!</p>
</div>
)}
</div>
);
}
}
ReactDOM.render(<VisibilityToggle />, document.getElementById('app'));
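For comparison, a minimal sketch of the same behavior as a function component with the useState hook (assumes React 16.8+; not part of the original example):

const VisibilityToggleFn = () => {
  const [visibility, setVisibility] = React.useState(false);
  return (
    <div>
      <h1>Visibility Toggle</h1>
      <button onClick={() => setVisibility((prev) => !prev)}>
        {visibility ? 'Hide details' : 'Show details'}
      </button>
      {visibility && <p>Hey. These are some details you can now see!</p>}
    </div>
  );
};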
error: br is a self closing tag. Try "<br>" or "<br />"
--> $DIR/should_be_self_closing_tag.rs:10:15
|
10 | <br></br>
| ^^
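As the diagnostic itself suggests, the fix at the offending call site is to write the void element self-closing:

<br />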
{"type":"comment","line":1,"val":"","buffer":false}
{"type":"comment","line":2,"val":"","buffer":false}
{"type":"eos","line":3} | {
"pile_set_name": "Github"
} |
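A plausible two-line pug source that lexes to exactly this stream — two bare unbuffered comments followed by end-of-stream — would be (a reconstruction; the fixture's actual input is not shown here):

//-
//-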
/*
This file is part of libswirl
original code by flyinghead
*/
#include "license/bsd"
#pragma once
#include <string>
typedef void (*StringCallback)(bool cancelled, std::string selection);
void select_directory_popup(const char *prompt, float scaling, StringCallback callback);
void gui_ShowHelpMarker(const char* desc);
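A hypothetical call site (sketch only; the header's own file name is assumed, and a capture-less lambda converts to the plain function pointer that StringCallback requires):

#include "gui_util.h"  // assumed name of the header declared above

static void pick_data_dir() {
    select_directory_popup("Select data folder", 1.0f,
        [](bool cancelled, std::string selection) {
            if (!cancelled) {
                // persist `selection` as the new data directory
            }
        });
}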