text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
We plan. Furthermore, a number of other new features are coming as well in the Report Viewer controls. Related to this, my esteemed colleague Brian Hartman started blogging recently. His blog is definitely worth keeping an eye on regarding common questions on current and future versions of the Report Viewer. FAQ: What is the current level of RDL support in Visual Studio 2008 Report Viewer controls? If you are using local mode, you probably already noticed that attempting to load RDL 2008 based reports results in the following error: The report definition is not valid. Details: The report definition has an invalid target namespace '' which cannot be upgraded. The report definition is not valid. Details: The report definition has an invalid target namespace '' which cannot be upgraded. You cannot use RDL 2008 features in VS 2005 or VS 2008 report viewer controls in local mode, because the controls are using the same report processing engine that was shipped with SQL Server 2005 (supporting only the RDL 2005 namespace and feature set). As a side-note, VS 2008 shipped almost 6 months before SQL Server 2008 became available. If you want to use RDL 2008 features with the report viewer controls available today, server mode is your only viable option, because report processing is performed remotely on a Reporting Services 2008 server. Please check Brian's blog posting about RS 2008 and the Report Viewer controls for more details. A general overview of the differences in functionality between Report Viewer and RS 2008 is available in the documentation as well.
http://blogs.msdn.com/b/robertbruckner/archive/2009/01/19/better-report-viewing-in-visual-studio-2010.aspx
CC-MAIN-2014-23
en
refinedweb
Mark Lundquist wrote: > > On Oct 13, 2005, at 8:42 AM, Sylvain Wallez wrote: > >> I never used Apples, but it looks like some people (and not only their >> original creators) are using it. > > > I never really did "get" Apples. Can somebody just sort of give a quick > summary of what it's all about, and why I would want to use it? The quickest summary from apples lamer: Apples is: - a flow interpreter - a front controller that allows you to code in java (no scripting!). I think a it can be compared to struts controller a little but. - it has continuations but they do not suspend apple execution. See an example from our codebase: > public class HanoiApple extends AbstractLogEnabled implements AppleController { > // full state of the puzzle is in the following variables > public Stack[] stacks; > public Object floatingDisk = null; > public int moves = 0; > public int puzzleSize = 0; > > > public void process(AppleRequest req, AppleResponse res) throws ProcessingException { > > // processing > if (stacks == null) { > String requestSize = req.getCocoonRequest().getParameter("size"); > if (requestSize != null) { > try { > int size = Integer.parseInt(requestSize); > intializeStacks(size); > } catch (NumberFormatException ignore) { > } > } > } else { > // decide selected column > String requestStack = req.getCocoonRequest().getParameter("stack"); > if (requestStack != null) { > // do something here > } > } > > //view generation > if (stacks == null) { > res.sendPage("hanoi/intro.jx", null); > } else { > Map bizdata = new HashMap(); > bizdata.put("stacks" , this.stacks); > bizdata.put("moves" , "" + this.moves); > bizdata.put("floatingDisk", this.floatingDisk); > bizdata.put("nextMove" , this.floatingDisk==null ? "Lift it!" : "Drop it!"); > bizdata.put("puzzleSize" , "" + this.puzzleSize); > > res.sendPage("hanoi/hanoi.jx", bizdata); > } > > } > } First time the apple is invoked it is created from scratch. Later on if continuation is being called the apple object is retrieved from continuation and apple.process( req, res ) is called again on the same object. You have to maintain view flow yourself. You do not have to store data in session but have it isolated in single apple. -- Leszek Gawron [email protected] IT Manager MobileBox sp. z o.o. +48 (61) 855 06 67 mobile: +48 (501) 720 812 fax: +48 (61) 853 29 65
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200510.mbox/%[email protected]%3E
CC-MAIN-2014-23
en
refinedweb
[ ] Rick Hillegas updated DERBY-4593: --------------------------------- Attachment: derby-4593-05-aa-reportParser.diff Attaching derby-4593-05-aa-reportParser.diff. This patch significantly revamps the ReleaseNotesGenerator. Committed at subversion revision 936472. I have come to the conclusion that I don't know how to write a general tool for building the release notes. That is, a tool which does not need to be tweaked between releases. The best job I can do is write a tool which clearly states what tweaks need to be made for each release. This patch makes several significant changes to the ReleaseNotesGenerator: 1) Gives up on trying to parse the JIRA reports as valid markup. The xml reports produced by JIRA have to be hand-editted to remove references to missing namespaces and to add entity definitions. The html reports produced by JIRA, although digestible by tolerant browsers, are not well-formed markup--I don't know how to parse them without extensively transforming them first. I have introduced a simple document reader which makes no assumptions about how the document is structured: TagReader. 2) Moves all of the release-specific parsing into a new class called ReportParser. 3) Gives up trying to parse the attachment ids for detailed release notes. Actually, a previous submission threw in the towel here. This information used to be available in the xml reports produced by JIRA. But this information has disappeared from the current JIRA reports. Now, lamely, we ask that the release manager hard-code those attachment ids. As a consequence of this decision, we no longer need two JIRA reports, one listing all of the fixed bugs and the second one listing the issues which have detailed release notes attached to them. This is what the release manager has to do to produce release notes now: 1) As before, edit releaseSummary.xml. 2) As before, create a filter modelled on the "10.6.1 Fixed Bugs List" filter. Run the filter, reformat it by selecting the "Full Content" option, then save the result as a file named fixedBugsList.xml. See the header comment on ReportParser$April_2010 for a list of the fields which we expected would be present in the 10.6.1 filter results. 3) Edit ReportParser as follows: a) Implement a new inner subclass which knows how to parse the current format of the JIRA report and which lists the attachment ids for the issues which have detailed release notes. b) Change ReportParser.makeReportParser() to return an instance of this new inner subclass. Then generate the release notes as before. >, derby-4593-02-aa-secondReleaseNotes.diff, derby-4593-03-aa-firstReleaseNotesForReview.diff, derby-4593-04-aa-tweakReleaseNotes, derby-4593-05-aa-reportParser.diff > > > This is an issue which tracks the work needed to produce the 10.6.1 release. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
http://mail-archives.apache.org/mod_mbox/db-derby-dev/201004.mbox/%3C15655736.123291271880351081.JavaMail.jira@thor%3E
CC-MAIN-2014-23
en
refinedweb
Hi, AFAIU ContentSession.getCurrentRoot() will return the most up to date version of the root, which provides access to a snapshot of the tree. In oak-jcr this method is used when a SessionDelegate is created and the root is initialized with getCurrentRoot(). Later on this root is used for operations on the tree. now, there are also a lot of other places where getCurrentRoot() is called. some of these look a bit suspicious to me. the namespace registry and node type manager operate mostly on fresh snapshots of the tree for each method call. doesn't that always result in a call all the way down to the microkernel? I know that both namespace and node type management are considered repository wide and not bound to the current session, but it seems excessive to always get a new snapshot for a namespace uri to prefix mapping that rarely changes. another usage I noticed is in Query.getTree(String). Shouldn't the query rather operate on the same stable tree as seen by the JCR session? Regards Marcel
http://mail-archives.apache.org/mod_mbox/jackrabbit-oak-dev/201209.mbox/%3C9C0FC4C8E9C29945B01766FC7F9D389816FE0B2244@eurmbx01.eur.adobe.com%3E
CC-MAIN-2014-23
en
refinedweb
Videos tagged “Laurie Towner” Creative Destruction: Chapter 3 part 2 Life's Better in Boardshorts short film. Laurie Towner and The Princes Island Apocolypse SHIPSTERNS BLUFF 2013 Laurie Towner Through The Eyes Of Joel Parkinson def tapes: wade goodall and friends Billabong Pro... The Lay Day Of Days Laurie Towner At "The Right" In West Oz Creative Destruction. Chapter 3 - Part one. the Noisey Ocean RIGHTEOUS - The Right 16th Oct 2011 What are Tags? Tags are keywords that describe videos. For example, a video of your Hawaiian vacation might be tagged with "Hawaii," "beach," "surfing," and "sunburn."
http://vimeo.com/tag:Laurie+Towner/sort:plays/format:thumbnail
CC-MAIN-2014-23
en
refinedweb
Wolfgang Jeltsch <g9ks157k at acme.softbase.org> wrote: > The Yampa people and I (the Grapefruit maintainer) already agreed to > introduce a top-level FRP namespace instead of putting FRP under > Control or whatever.. I do understand that hierarchical classification is inherently problematic, and will never quite fit the ways we think of our modules. But the alternative being proposed here is neither more descriptive, nor more logical. In fact, it is an abandonment of description, in favour of arbitrary naming. A package called foo-1.0 containing a module hierarchy rooted at "Foo." tells me precisely nothing about its contents. It it were rooted at Military.Secret.Foo, at least I would have some clue about what it does, even if the category is inadequate or slightly misleading in certain circumstances. You may argue that only novices are disadvantaged by arbitrary names - once you learn the referent of the name, it is no longer confusing. However, I strongly believe that even experienced users are helped by the continuous re-inforcement of visual links between name and concept. After all, if you are collaboratively building a software artifact that depends on large numbers of library packages, it may not be so easy to keep your internal dictionary of the mapping between names and functionality up-to-date, and in sync with your colleagues. Being just a little bit more explicit in the hierarchy is a one-time cost at time of writing, that reaps perceptual benefits long into the future for yourself, and those maintainers who will follow you. Regards, Malcolm
http://www.haskell.org/pipermail/haskell/2009-June/021422.html
CC-MAIN-2014-23
en
refinedweb
31 August 2010 11:31 [Source: ICIS news] LONDON (ICIS)--Sanctions against ?xml:namespace> Anna Robert said this seemed to be true in particular with regard to the EU autonomous sanctions, which contain additional financial sanctions and export restrictions - in particular in the energy sector - and intensive controls of financial transactions to and from Iran. The EU strengthened its sanctions against It said it would be targeting the country’s oil and gas interests to add pressure on EU sanctions focus on preventing sensitive equipment transfers and investment in the energy sector, unlike US sanctions which ban all oil products trade with “The German-Iranian trade is subject to the restrictions imposed by the UN and EU. German companies are aware of these restrictions and respect them,” Robert said. Several major German companies such as Siemens, Allianz and Munchner Ruck have withdrawn from entering into new contracts with Iranian customers, Robert added. A number of German chemicals companies, including BASF, have strong business interests in Iran. (
http://www.icis.com/Articles/2010/08/31/9389180/germanys-trade-with-iran-will-suffer-from-sanctions.html
CC-MAIN-2014-23
en
refinedweb
. Sets or returns the HTTP 'Location'. Sets or returns the HTTP status. $c->response->status(404); $res->code is an alias for this, to match HTTP::Response->code. Writes $data to the output stream. Returns a PSGI $writer object that has two methods, write and close. You can close over this object. Example: package MyApp::Web::Controller::Test; use base 'Catalyst::Controller'; use Plack::App::Directory; my $app = Plack::App::Directory->new({ root => "/path/to/htdocs" }) ->to_app; sub myaction :Local Args { my ($self, $c) = @_; $c->res->from_psgi_response($app->($self->env)); } Please note this does not attempt to map or nest your PSGI application under the Controller and Action namespace or path. Ensures that the response is flushed and closed at the end of the request. Provided by Moose Catalyst Contributors, see Catalyst.pm This library is free software. You can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~jjnapiork/Catalyst-Runtime-5.90051/lib/Catalyst/Response.pm
CC-MAIN-2014-23
en
refinedweb
super ,not bad. in which condition i can use different types of Spring MVC controllers and describe all the types of controllers in details. Post your Comment Spring MVC Controller hierarchy Spring MVC Controllers - Controllers hierarchy in Spring MVC Controllers hierarchy in Spring MVC In this we will will understand the controllers hierarchy in Spring MVC Module mvc mvc I want MVC example using jsp,servlets,pojoclass,jdbc(with mysql... in "Model view controller" format. last time i asked the same question, but i... please give me code in exactly "model view controller" format Model-View-Controller (MVC) Structure Java NotesModel-View-Controller (MVC) Structure Previous - Presentation...-Controller (MVC)pattern. The idea is to separate the user interface... with both the View and Controller as necessary). The literature on MVC leaves Controller Interface example in Spring 2.5 MVC. Learn how to create and run the example. ; Controller Interface implementation example in Spring 2.5 Web MVC... in Spring MVC. The Base portlet Controller interface... will create a Spring 2.5 Web MVC example that used Controller Interface MVC in flex application according to MVC. Thanks Ans: Model?View?Controller (MVC...MVC in flex Hi..... Please tell me What is MVC and how do you... application. The application is devided into three parts in MVC. 1. Model controller ; import com.dao.*; /** * Servlet implementation class Controller */ public class Controller extends HttpServlet { private static final long serialVersionUID...#HttpServlet() */ public Controller() { super(); // TODO Auto MVC Architecture In Java , what is MVC architecture, what is model, what is view, what is controller...-Controller) design pattern. What is MVC Architecture ? Models designs on MVC... into three parts namely Model, View, and Controller. MVC design pattern is used Controller as jsp in mvc2? - Java Beginners Controller as jsp in mvc2? Hi Friend, I am beginner... as a controller in MVC-2 architecture. If so wat's d... information: http MVC in Java MVC in Java Before understanding the use of MVC in Java, let's understand what is MVC? Abbreviated as MVC, Model-View-Controller is a term used... for displaying all or a portion of the data and controller works for handling Struts MVC is used to develop web applications using MVC (Model-View-Controller) design... Struts MVC Struts is open source MVC framework in Java. The Struts framework... applications easily. The application developed in Struts is also maintainable. MVC Spring MVC, Spring MVC Module Tutorial (Model View Controller) design pattern. The features of Spring MVC framework... Spring MVC - Spring MVC Introduction In this example we will give you quick overview of Spring MVC Spring MVC Tutorials which will be very useful for developers. Spring Web MVC (Model-View-Controller...Spring MVC Tutorials and example code In this section I will provide you the list of Spring MVC Tutorials which is very useful for a beginner in Spring MVC How Spring MVC Module Works . In the next section we will see the controller stack in Spring MVC... How Spring MVC Works How Spring MVC Works? In this we will see the request flow for the spring framework MVC Design Pattern stands for the VIEW C in MVC stands for the CONTROLLER MODEL --Model section... to meet the variety of purposes. CONTROLLER -- A Controller Section in the MVC... MVC Design MVC Architecture MVC Architecture The main aim of the MVC architecture is to separate the business... are the reasons why we should use the MVC design pattern. They are resuable : When Understanding MVC design pattern Understanding MVC Design Pattern The MVC (Model View Controller) design... for specific project. The MVC architecture contains the three parts they are Model, View, Controller. Model - This components contains  Jsp using mvc - JSP-Servlet one controller which receives all the request for the application...). Controller: Whenever the user sends a request for something then it always go through the controller. The controller is responsible for intercepting the requests from Java Model View Controller (MVC) Design Pattern ; margin-left: 400px; } .style4 { margin-left: 160px; } Java MVC ( Model View Controller ) Design Pattern Model View controller is a classical design... logic and view who represents data. MVC design pattern isolates tabbar controller tabbar controller how change the tabbar controller and how we maintain the view controller of second tabbar controller Java AWT event hierarchy Java AWT event hierarchy What class is the top of the AWT event hierarchy? The java.awt.AWTEvent class is the highest-level class in the AWT event-class hierarchy... first example in Spring MVC. After completing the tutorial you will be able to start developing small applications using Spring MVC. In this section we Controller in Spring Controller in Spring Hello Sir please help that how to write Controller in Spring web thanks in advance Hi you can write the Spring Controller by extending the abstract class AbstractController or by using Spring 2.5 MVC User Registration Example Spring MVC User Registration example Spring 2.5 MVC User Registration Example This tutorial shows you how to create user registration application in Spring MVC. This application Hierarchy in java - Java Beginners Hierarchy in java Design a vechicle class hierarchy in java? Hi Friend, Try the following code: class Vehicle { void test(){} } class Bus extends Vehicle{ void test() { System.out.println("I am a bus i want elaborate spring mvc control flowRaja September 2, 2012 at 10:12 PM super ,not bad. Controllerssuraj kumar gupta August 13, 2013 at 7:20 PM in which condition i can use different types of Spring MVC controllers and describe all the types of controllers in details. Post your Comment
http://www.roseindia.net/discussion/24588-Spring-MVC-Controller-hierarchy.html
CC-MAIN-2014-23
en
refinedweb
Opened 5 years ago Closed 4 years ago Last modified 3 years ago #11333 closed Uncategorized (wontfix) ORA-01830 date format picture ends before converting entire input Description I've legacy database that uses 'DATE' field and I'm using DateTimeField. Getting error: ORA-01830 date format picture ends before converting entire input if I'm using auto_now_add fields for fields that contain no fractions. If I do following: from models import MyModel e = MyModel.objects.all()[0] e.save() Symptoms are exactly as in #9275, but even I'm using SVN it doesn't work. Attachments (0) Change History (8) comment:1 Changed 5 years ago by anonymous - Component changed from Uncategorized to Database layer (models, ORM) - Needs documentation unset - Needs tests unset - Patch needs improvement unset comment:2 Changed 5 years ago by ikelly - Cc mboersma added - milestone set to 1.2 - Triage Stage changed from Unreviewed to Accepted comment:3 Changed 4 years ago by russellm - milestone 1.2 deleted comment:4 Changed 4 years ago by russellm - milestone set to 1.3 comment:5 Changed 4 years ago by ikelly - Resolution set to wontfix - Status changed from new to closed In order to fix this, we would need to somehow distinguish whether a DateTimeField is implemented in the database as a DATE or TIMESTAMP column. I can see two ways of doing this: - By doing automatic introspection on every DateTimeField at start-up in order to determine the actual column types. This would be quite a bit heavier than the introspection we currently do, and it doesn't seem very practical to me. - By adding a precision keyword argument to DateTimeField and TimeField that denotes the fractional second precision to be used. This might be a worthwhile feature to add, but it's outside the scope of this ticket. Also, there are a full complement of workarounds here. Either use a DateField with a DATE column, or a DateTimeField with a TIMESTAMP column, or if you really need to mix and match, turn off auto_now and use a pre_save signal to manually set the field with the correct precision. So my conclusion is that the proper response to this for now is "don't do that", and I'm closing this as wontfix. comment:6 Changed 3 years ago by anurag.chourasia@… - Easy pickings unset - Severity set to Normal - Type set to Uncategorized - UI/UX unset If the model field is a DateTimeField and the Database field is of DATE Type then you could try this to overcome this error Look for the statement where you are updating the Date field. Over there, replace the microseonds to Zero and Oracle will start treating it gracefully. For example, if you model field is called date_field and if you are updating it as date_field=datetime.datetime.now() then change it to date_field=datetime.datetime.now().replace(microsecond=0) And that will solve the issue Regards Guddu comment:7 Changed 3 years ago by jtiai Thanks for the info. Though it doesn't solve problem when you update non-date fields (Django updates still whole record) problem still exists. I agree with ikelly, it's better to do it right than trying to hack around someway. Though it could be useful to mention that workaround in official docs as well. comment:8 Changed 3 years ago by jacob - milestone 1.3 deleted Milestone 1.3 deleted Not critical for 1.2
https://code.djangoproject.com/ticket/11333
CC-MAIN-2014-23
en
refinedweb
Hi, i have a 6124 and has no api jsr 211 included. I'm trying an application that should use this jsr but there is no way to insert it on the mobile phone. Is there anyone that has any idea of how insert this jsr on the mobile phone? The error i'm catching is a namespace error, because at the time of running the application is not finding the library (the jsr 211) i included on my developing program. I think there is trouble when trying to put on the register of the mobile phone the javax library. I don't know how to change it at all. Thanx in advance.
http://developer.nokia.com/community/discussion/showthread.php/154374-Problems-API-JSR-211-Content-Handler?p=522362&viewfull=1
CC-MAIN-2014-23
en
refinedweb
XML::Rules - parse XML and specify what and how to keep/process for individual tags Version 1.12 use XML::Rules; $xml = <<'*END*'; <doc> <person> <fname>...</fname> <lname>...</lname> <email>...</email> <address> <street>...</street> <city>...</city> <country>...</country> <bogus>...</bogus> </address> <phones> <phone type="home">123-456-7890</phone> <phone type="office">663-486-7890</phone> <phone type="fax">663-486-7000</phone> </phones> </person> <person> <fname>...</fname> <lname>...</lname> <email>...</email> <address> <street>...</street> <city>...</city> <country>...</country> <bogus>...</bogus> </address> <phones> <phone type="office">663-486-7891</phone> </phones> </person> </doc> *END* @rules = ( _default => sub {$_[0] => $_[1]->{_content}}, # by default I'm only interested in the content of the tag, not the attributes bogus => undef, # let's ignore this tag and all inner ones as well address => sub {address => "$_[1]->{street}, $_[1]->{city} ($_[1]->{country})"}, # merge the address into a single string phone => sub {$_[1]->{type} => $_[1]->{_content}}, # let's use the "type" attribute as the key and the content as the value phones => sub {delete $_[1]->{_content}; %{$_[1]}}, # remove the text content and pass along the type => content from the child nodes person => sub { # lets print the values, all the data is readily available in the attributes print "$_[1]->{lname}, $_[1]->{fname} <$_[1]->{email}>\n"; print "Home phone: $_[1]->{home}\n" if $_[1]->{home}; print "Office phone: $_[1]->{office}\n" if $_[1]->{office}; print "Fax: $_[1]->{fax}\n" if $_[1]->{fax}; print "$_[1]->{address}\n\n"; return; # the <person> tag is processed, no need to remember what it contained }, ); $parser = XML::Rules->new(rules => \@rules); $parser->parse( $xml); There are several ways to extract data from XML. One that's often used is to read the whole file and transform it into a huge maze of objects and then write code like foreach my $obj ($XML->forTheLifeOfMyMotherGiveMeTheFirstChildNamed("Peter")->pleaseBeSoKindAndGiveMeAllChildrenNamedSomethingLike("Jane")) { my $obj2 = $obj->sorryToKeepBotheringButINeedTheChildNamed("Theophile"); my $birth = $obj2->whatsTheValueOfAttribute("BirthDate"); print "Theophile was born at $birth\n"; } I'm exagerating of course, but you probably know what I mean. You can of course shorten the path and call just one method ... that is if you spend the time to learn one more "cool" thing starting with X. XPath. You can also use XML::Simple and generate an almost equaly huge maze of hashes and arrays ... which may make the code more or less complex. In either case you need to have enough memory to store all that data, even if you only need a piece here and there. Another way to parse the XML is to create some subroutines that handle the start and end tags and the text and whatever else may appear in the XML. Some modules will let you specify just one for start tag, one for text and one for end tag, others will let you install different handlers for different tags. The catch is that you have to build your data structures yourself, you have to know where you are, what tag is just open and what is the parent and its parent etc. so that you could add the attributes and especially the text to the right place. And the handlers have to do everything as their side effect. Does anyone remember what do they say about side efects? They make the code hard to debug, they tend to change the code into a maze of interdependent snippets of code. So what's the difference in the way XML::Rules works? At the first glance, not much. You can also specify subroutines to be called for the tags encountered while parsing the XML, just like the other even based XML parsers. The difference is that you do not have to rely on side-effects if all you want is to store the value of a tag. You simply return whatever you need from the current tag and the module will add it at the right place in the data structure it builds and will provide it to the handlers for the parent tag. And if the parent tag does return that data again it will be passed to its parent and so forth. Until we get to the level at which it's convenient to handle all the data we accumulated from the twig. Do we want to keep just the content and access it in the parent tag handler under a specific name? foo => sub {return 'foo' => $_[1]->{_content}} Do we want to ornament the content a bit and add it to the parent tag's content? u => sub {return '_' . $_[1]->{_content} . '_'} strong => sub {return '*' . $_[1]->{_content} . '*'} uc => sub {return uc($_[1]->{_content})} Do we want to merge the attributes into a string and access the string from the parent tag under a specified name? address => sub {return 'Address' => "Street: $_[1]->{street} $_[1]->{bldngNo}\nCity: $_[1]->{city}\nCountry: $_[1]->{country}\nPostal code: $_[1]->{zip}"} and in this case the $_[1]->{street} may either be an attribute of the <address> tag or it may be ther result of the handler (rule) street => sub {return 'street' => $_[1]->{_content}} and thus come from a child tag <street>. You may also use the rules to convert codes to values]; } } so that you do not have to care whether there was <address id="147"/> or <address><street>Larry Wall's St.</street><streetno>478</streetno><city>Core</city><country>The Programming Republic of Perl</country></address> And if you do not like to end up with a datastructure of plain old arrays and hashes, you can create application specific objects in the rules address => sub { my $type = lc(delete $_[1]->{type}); $type.'Address' => MyApp::Address->new(%{$_[1]}) }, person => sub { '@person' => MyApp::Person->new( firstname => $_[1]->{fname}, lastname => $_[1]->{lname}, deliveryAddress => $_[1]->{deliveryAddress}, billingAddress => $_[1]->{billingAddress}, phone => $_[1]->{phone}, ) } At each level in the tree structure serialized as XML you can decide what to keep, what to throw away, what to transform and then just return the stuff you care about and it will be available to the handler at the next level. my $parser = XML::Rules->new( rules => \@rules, [ start_rules => \@start_rules, ] [ stripspaces => 0 / 1 / 2 / 3 + 0 / 4 + 0 / 8, ] [ normalisespaces => 0 / 1, ] [ style => 'parser' / 'filter', ] [ ident => ' ', [reformat_all => 0 / 1] ], [ encode => 'encoding specification', ] [ output_encoding => 'encoding specification', ] [ namespaces => \%namespace2alias_mapping, ] [ handlers => \%additional_expat_handlers, ] # and optionaly parameters passed to XML::Parser::Expat ); Options passed to XML::Parser::Expat : ProtocolEncoding Namespaces NoExpand Stream_Delimiter ErrorContext ParseParamEnt Base The "stripspaces" controls the handling of whitespace. Please see the Whitespace handling bellow. The "style" specifies whether you want to build a parser used to extract stuff from the XML or filter/modify the XML. If you specify style => 'filter' then all tags for which you do not specify a subroutine rule or that occure inside such a tag are copied to the output filehandle passed to the ->filter() or ->filterfile() methods. The "ident" specifies what character(s) to use to ident the tags when filtering, by default the tags are not formatted in any way. If the "reformat_all" is not set then this affects only the tags that have a rule and their subtags. And in case of subtags only those that were added into the attribute hash by their rules, not those left in the _content array! The "warnoverwrite" instructs XML::Rules to issue a warning whenever the rule cause a key in a tag's hash to be overwritten by new data produced by the rule of a subtag. This happens eg. if a tag is repeated and its rule doesn't expect it. The "encode" allows you to ask the module to run all data through Encode::encode( 'encoding_specification', ...) before being passed to the rules. Otherwise all data comes as UTF8. The "output_encoding" on the other hand specifies in what encoding is the resulting data going to be, the default is again UTF8. This means that if you specify encode => 'windows-1250', output_encoding => 'utf8', and the XML is in ISO-8859-2 (Latin2) then the filter will 1) convert the content and attributes of the tags you are not interested in from Latin2 directly to utf8 and output and 2) convert the content and attributes of the tags you want to process from Latin2 to Windows-1250, let you mangle the data and then convert the results to utf8 for the output. The encode and output_enconding affects also the $parser-toXML(...)>, if they are different then the data are converted from one encoding to the other. The handlers allow you to set additional handlers for XML::Parser::Expat->setHandlers. Your Start, End, Char and XMLDecl handlers are evaluated before the ones installed by XML::Rules and may modify the values in @_, but you should be very carefull with that. Consider that experimental and if you do make that work the way you needed, please let me know so that I know what was it good for and can make sure it doesn't break in a new version. The rules option may be either an arrayref or a hashref, the module doesn't care, but if you want to use regexps to specify the groups of tags to be handled by the same rule you should use the array ref. The rules array/hash is made of pairs in form tagspecification => action where the tagspecification may be either a name of a tag, a string containing comma or pipe ( "|" ) delimited list of tag names or a string containing a regexp enclosed in // optionaly followed by the regular expression modifiers or a qr// compiled regular expressions. The tag names and tag name lists take precedence to the regexps, the regexps are (in case of arrayrefs only!!!) tested in the order in which they are specified. These rules are evaluated/executed whenever a tag if fully parsed including all the content and child tags and they may access the content and attributes of the specified tag plus the stuff produced by the rules evaluated for the child tags. The action may be either - an undef or empty string = ignore the tag and all its children - a subroutine reference = the subroutine will be called to handle the tag data&contents sub { my ($tagname, $attrHash, $contexArray, $parentDataArray, $parser) = @_; ...} - one of the built in rules below The subroutines in the rules specification receive five parameters: $rule->( $tag_name, \%attrs, \@context, \@parent_data, $parser) It's OK to destroy the first two parameters, but you should treat the other three as read only or at least treat them with care! $tag_name = string containing the tag name \%attrs = hash containing the attributes of the tag plus the _content key containing the text content of the tag. If it's not a leaf tag it may also contain the data returned by the rules invoked for the child tags. \@context = an array containing the names of the tags enclosing the current one. The parent tag name is the last element of the array. (READONLY!) \@parent_data = an array containing the hashes with the attributes and content read&produced for the enclosing tags so far. You may need to access this for example to find out the version of the format specified as an attribute of the root tag. You may safely add, change or delete attributes in the hashes, but all bets are off if you change the number or type of elements of this array! $parser = the parser object you may use $parser->{pad} or $parser->{parameters} to store any data you need. The first is never touched by XML::Rules, the second is set to the last argument of parse() or filter() methods and reset to undef before those methods exit. The subroutine may decide to handle the data and return nothing or tweak the data as necessary and return just the relevant bits. It may also load more information from elsewhere based on the ids found in the XML and provide it to the rules of the ancestor tags as if it was part of the XML. The possible return values of the subroutines are: 1) nothing or undef or "" - nothing gets added to the parent tag's hash 2) a single string - if the parent's _content is a string then the one produced by this rule is appended to the parent's _content. If the parent's _content is an array, then the string is push()ed to the array. 3) a single reference - if the parent's _content is a string then it's changed to an array containing the original string and this reference. If the parent's _content is an array, then the string is push()ed to the array. 4) an even numbered list - it's a list of key & value pairs to be added to the parent's hash. The handling of the attributes may be changed by adding '@', '%', '+', '*' or '.' before the attribute name. Without any "sigil" the key & value is added to the hash overwriting any previous values. The values for the keys starting with '@' are push()ed to the arrays referenced by the key name without the @. If there already is an attribute of the same name then the value will be preserved and will become the first element in the array. The values for the keys starting with '%' have to be either hash or array references. The key&value pairs in the referenced hash or array will be added to the hash referenced by the key. This is nice for rows of tags like this: <field name="foo" value="12"/> <field name="bar" value="24"/> if you specify the rule as field => sub { '%fields' => [$_[1]->{name} => $_[1]->{value}]} then the parent tag's has will contain fields => { foo => 12, bar => 24, } The values for the keys starting with '+' are added to the current value, the ones starting with '.' are appended to the current value and the ones starting with '*' multiply the current value. 5) an odd numbered list - the last element is appended or push()ed to the parent's _content, the rest is handled as in the previous case. 'content' = only the content of the tag is preserved and added to the parent tag's hash as an attribute named after the tag. Equivalent to: sub { $_[0] => $_[1]->{_content}} 'content trim' = only the content of the tag is preserved, trimmed and added to the parent tag's hash as an attribute named after the tag sub { s/^\s+//,s/\s+$// for ($_[1]->{_content}); $_[0] => $_[1]->{_content}} 'content array' = only the content of the tag is preserved and pushed to the array pointed to by the attribute sub { '@' . $_[0] => $_[1]->{_content}} 'as is' = the tag's hash is added to the parent tag's hash as an attribute named after the tag sub { $_[0] => $_[1]} 'as is trim' = the tag's hash is added to the parent tag's hash as an attribute named after the tag, the content is trimmed sub { $_[0] => $_[1]} 'as array' = the tag's hash is pushed to the attribute named after the tag in the parent tag's hash sub { '@'.$_[0] => $_[1]} 'as array trim' = the tag's hash is pushed to the attribute named after the tag in the parent tag's hash, the content is trimmed sub { '@'.$_[0] => $_[1]} 'no content' = the _content is removed from the tag's hash and the hash is added to the parent's hash into the attribute named after the tag sub { delete $_[1]->{_content}; $_[0] => $_[1]} 'no content array' = similar to 'no content' except the hash is pushed into the array referenced by the attribute 'as array no content' = same as 'no content array' 'pass' = the tag's hash is dissolved into the parent's hash, that is all tag's attributes become the parent's attributes. The _content is appended to the parent's _content. sub { %{$_[1]}} 'pass no content' = the _content is removed and the hash is dissolved into the parent's hash. sub { delete $_[1]->{_content}; %{$_[1]}} 'pass without content' = same as 'pass no content' 'raw' = the [tagname => attrs] is pushed to the parent tag's _content. You would use this style if you wanted to be able to print the parent tag as XML preserving the whitespace or other textual content sub { [$_[0] => $_[1]]} 'raw extended' = the [tagname => attrs] is pushed to the parent tag's _content and the attrs are added to the parent's attribute hash with ":$tagname" as the key sub { (':'.$_[0] => $_[1], [$_[0] => $_[1]])}; 'raw extended array' = the [tagname => attrs] is pushed to the parent tag's _content and the attrs are pushed to the parent's attribute hash with ":$tagname" as the key sub { ('@:'.$_[0] => $_[1], [$_[0] => $_[1]])}; 'by <attrname>' = uses the value of the specified attribute as the key when adding the attribute hash into the parent tag's hash. You can specify more names, in that case the first found is used. sub {delete($_[1]->{name}) => $_[1]} 'content by <attrname>' = uses the value of the specified attribute as the key when adding the tags content into the parent tag's hash. You can specify more names, in that case the first found is used. sub {$_[1]->{name} => $_[1]->{_content}} 'no content by <attrname>' = uses the value of the specified attribute as the key when adding the attribute hash into the parent tag's hash. The content is dropped. You can specify more names, in that case the first found is used. sub {delete($_[1]->{_content}); delete($_[1]->{name}) => $_[1]} '==...' = replace the tag by the specified string. That is the string will be added to the parent tag's _content sub { return '...' } '=...' = replace the tag contents by the specified string and forget the attributes. sub { return $_[0] => '...' } '' = forget the tag's contents (after processing the rules for subtags) sub { return }; I include the unnamed subroutines that would be equivalent to the builtin rule in case you need to add some tests and then behave as if one of the builtins was used. You can add these modifiers to most rules, just add them to the string literal, at the end, separated from the base rule by a space. no xmlns = strip the namespace alias from the $_[0] (tag name) remove(list,of,attributes) = remove all specified attributes (or keys produced by child tag rules) from the tag data only(list,of,attributes) = filter the hash of attributes and keys+values produced by child tag rules in the tag data to only include those specified here. In case you need to include the tag content do not forget to include _content in the list! Not all modifiers make sense for all rules. For example if the rule is 'content', it's pointless to filter the attributes, because the only one used will be the content anyway. The behaviour of the combination of the 'raw...' rules and the rule modifiers is UNDEFINED! Since 0.19 it's possible to specify several actions for a tag if you need to do something different based on the path to the tag like this: tagname => [ 'tag/path' => action, '/root/tag/path' => action, '/root/*/path' => action, qr{^root/ns:[^/]+/par$} => action, default_action ], The path is matched against the list of parent tags joined by slashes. If you need to use more complex conditions to select the actions you have to use a single subroutine rule and implement the conditions within that subroutine. You have access both to the list of enclosing tags and their attribute hashes (including the data obtained from the rules of the already closed subtags of the enclosing tags. Apart from the normal rules that get invoked once the tag is fully parsed, including the contents and child tags, you may want to attach some code to the start tag to (optionaly) skip whole branches of XML or set up attributes and variables. You may set up the start rules either in a separate parameter to the constructor or in the rules=> by prepending the tag name(s) by ^. These rules are in form tagspecification => undef / '' / 'skip' --> skip the element, including child tags tagspecification => 1 / 'handle' --> handle the element, may be needed if you specify the _default rule. tagspecification => \&subroutine The subroutines receive the same parameters as for the "end tag" rules except of course the _content, but their return value is treated differently. If the subroutine returns a false value then the whole branch enclosed by the current tag is skipped, no data are stored and no rules are executed. You may modify the hash referenced by $attr. You may even tie() the hash referenced by $attr, for example in case you want to store the parsed data in a DBM::Deep. In such case all the data returned by the immediate subtags of this tag will be stored in the DBM::Deep. Make sure you do not overwrite the data by data from another occurance of the same tag if you return $_[1]/$attr from the rule! YourHugeTag => sub { my %temp = %{$_[1]}; tie %{$_[1]}, 'DBM::Deep', $filename; %{$_[1]} = %temp; 1; } Both types of rules are free to store any data they want in $parser->{pad}. This property is NOT emptied after the parsing! There are two options that affect the whitespace handling: stripspaces and normalisespaces. The normalisespaces is a simple flag that controls whether multiple spaces/tabs/newlines are collapsed into a single space or not. The stripspaces is more complex, it's a bit-mask, an ORed combination of the following options: 0 - don't remove whitespace around tags (around tags means before the opening tag and after the closing tag, not in the tag's content!) 1 - remove whitespace before tags whose rules did not return any text content (the rule specified for the tag caused the data of the tag to be ignored, processed them already or added them as attributes to parent's \%attr) 2 - remove whitespace around tags whose rules did not return any text content 3 - remove whitespace around all tags 0 - remove only whitespace-only content (that is remove the whitespace around <foo/> in this case "<bar> <foo/> </bar>" but not this one "<bar>blah <foo/> blah</bar>") 4 - remove trailing/leading whitespace (remove the whitespace in both cases above) 0 - don't trim content 8 - do trim content (That is for "<foo> blah </foo>" only pass to the rule {_content => 'blah'}) That is if you have a data oriented XML in which each tag contains either text content or subtags, but not both, you want to use stripspaces => 3 or stripspaces => 3|4. This will not only make sure you don't need to bother with the whitespace-only _content of the tags with subtags, but will also make sure you do not keep on wasting memory while parsing a huge XML and processing the "twigs". Without that option the parent tag of the repeated tag would keep on accumulating unneeded whitespace in its _content. $parser->parse( $string [, $parameters]); $parser->parse( $IOhandle [, $parameters]); Parses the XML in the string or reads and parses the XML from the opened IO handle, executes the rules as it encounters the closing tags and returns the resulting structure. The scalar or reference passed as the second parameter to the parse() method is assigned to $parser->{parameters} for the parsing of the file or string. Once the XML is parsed the key is deleted. This means that the $parser does not retain a reference to the $parameters after the parsing. $parser->parsestring( $string [, $parameters]); Just an alias to ->parse(). $parser->parse_string( $string [, $parameters]); Just an alias to ->parse(). $parser->parsefile( $filename [, $parameters]); Opens the specified file and parses the XML and executes the rules as it encounters the closing tags and returns the resulting structure. $parser->parse_file( $filename [, $parameters]); Just an alias to ->parsefile(). while (my $chunk = read_chunk_of_data()) { $parser->parse_chunk($chunk); } my $data = $parser->last_chunk(); This method allows you to process the XML in chunks as you receive them. The chunks do not need to be in any way valid ... it's fine if the chunk ends in the middle of a tag or attribute. If you need to set the $parser->{parameters}, pass it to the first call to parse_chunk() the same way you would to parse(). The first chunk may be empty so if you need to set up the parameters, but read the chunks in a loop or in a callback, you can do this: $parser->parse_chunk('', {foo => 15, bar => "Hello World!"}); while (my $chunk = read_chunk_of_data()) { $parser->parse_chunk($chunk); } my $data = $parser->last_chunk(); or $parser->parse_chunk('', {foo => 15, bar => "Hello World!"}); $ua->get($url, ':content_cb' => sub { my($data, $response, $protocol) = @_; $parser->parse_chunk($data); return 1 }); my $data = $parser->last_chunk(); The parse_chunk() returns 1 or dies, to get the accumulated data, you need to call last_chunk(). You will want to either agressively trim the data remembered or handle parts of the file using custom rules as the XML is being parsed. $parser->filter( $string); $parser->filter( $string, $LexicalOutputIOhandle [, $parameters]); $parser->filter( $LexicalInputIOhandle, $LexicalOutputIOhandle [, $parameters]); $parser->filter( $string, \*OutputIOhandle [, $parameters]); $parser->filter( $LexicalInputIOhandle, \*OutputIOhandle [, $parameters]); $parser->filter( $string, $OutputFilename [, $parameters]); $parser->filter( $LexicalInputIOhandle, $OutputFilename [, $parameters]); $parser->filter( $string, $StringReference [, $parameters]); $parser->filter( $LexicalInputIOhandle, $StringReference [, $parameters]); Parses the XML in the string or reads and parses the XML from the opened IO handle, using the ->ToXML() method.. $parser->filterstring( ...); Just an alias to ->filter(). $parser->filter_string( ...); Just an alias to ->filter(). $parser->filterfile( $filename); $parser->filterfile( $filename, $LexicalOutputIOhandle [, $parameters]); $parser->filterfile( $filename, \*OutputIOhandle [, $parameters]); $parser->filterfile( $filename, $OutputFilename [, $parameters]); Parses the XML in the specified file,. Just an alias to ->filterfile(). while (my $chunk = read_chunk_of_data()) { $parser->filter_chunk($chunk); } $parser->last_chunk(); This method allows you to process the XML in chunks as you receive them. The chunks do not need to be in any way valid ... it's fine if the chunk ends in the middle of a tag or attribute. If you need to set the file to store the result to (default is the select()ed filehandle) or set the $parser->{parameters}, pass it to the first call to filter_chunk() the same way you would to filter(). The first chunk may be empty so if you need to set up the parameters, but read the chunks in a loop or in a callback, you can do this: $parser->filter_chunk('', "the-filtered.xml", {foo => 15, bar => "Hello World!"}); while (my $chunk = read_chunk_of_data()) { $parser->filter_chunk($chunk); } $parser->last_chunk(); or $parser->filter_chunk('', "the_filtered.xml", {foo => 15, bar => "Hello World!"}); $ua->get($url, ':content_cb' => sub { my($data, $response, $protocol) = @_; $parser->filter_chunk($data); return 1 }); filter_chunk$parser->last_chunk(); The filter_chunk() returns 1 or dies, you need to call last_chunk() to sign the end of the data and close the filehandles and clean the parser status. Make sure you do not set a rule for the root tag or other tag containing way too much data. Keep in mind that even if the parser works as a filter, the data for a custom rule must be kept in memory for the rule to execute! my $data = $parser->last_chunk(); my $data = $parser->last_chunk($the_last_chunk_contents); Finishes the processing of a XML fed to the parser in chunks. In case of the parser style, returns the accumulated data. In case of the filter style, flushes and closes the output file. You can pass the last piece of the XML to this method or call it without parameters if all the data was passed to parse_chunk()/filter_chunk(). You HAVE to execute this method after call(s) to parse_chunk() or filter_chunk()! Until you do, the parser will refuse to process full documents and expect another call to parse_chunk()/filter_chunk()! $parser->escape_value( $data [, $numericescape]) This method escapes the $data for inclusion in XML, the $numericescape may be 0, 1 or 2 and controls whether to convert 'high' (non ASCII) characters to XML entities.) You can also specify the default value in the constructor my $parser = XML::Rules->new( ... NumericEscape => 2, ); $xml = $parser->toXML( $tagname, \%attrs[, $do_not_close, $ident, $base]) You may use this method to convert the datastructures created by parsing the XML into the XML format. Not all data structures may be printed! I'll add more docs later, for now please do experiment. The $ident and $base, if defined, turn on and control the pretty-printing. The $ident specifies the character(s) used for one level of identation, the base contains the identation of the current tag. That is if you want to include the data inside of <data> <some> <subtag>$here</subtag> </some> </data> you will call $parser->toXML( $tagname, \%attrs, 0, "\t", "\t\t\t"); The method does NOT validate that the $ident and $base are whitespace only, but of course if it's not you end up with invalid XML. Newlines are added only before the start tag and (if the tag has only child tags and no content) before the closing tag, but not after the closing tag! Newlines are added even if the $ident is an empty string. $xml = $parser->parentsToXML( [$level]) Prints all or only the topmost $level ancestor tags, including the attributes and content (parsed so far), but without the closing tags. You may use this to print the header of the file you are parsing, followed by calling toXML() on a structure you build and then by closeParentsToXML() to close the tags left opened by parentsToXML(). You most likely want to use the style => 'filter' option for the constructor instead. $xml = $parser->closeParentsToXML( [$level]) Prints the closing tags for all or the topmost $level ancestor tags of the one currently processed. my $parser = XML::Rules->new( rules => paths2rules { '/root/subtag/tag' => sub { ...}, '/root/othertag/tag' => sub {...}, 'tag' => sub{ ... the default code for this tag ...}, ... } ); This helper function converts a hash of "somewhat xpath-like" paths and subs/rules into the format required by the module. Due to backwards compatibility and efficiency I can't directly support paths in the rules and the direct syntax for their specification is a bit awkward. So if you need the paths and not the regexps, you may use this helper instead of: my $parser = XML::Rules->new( rules => { 'tag' => [ '/root/subtag' => sub { ...}, '/root/othertag' => sub {...}, sub{ ... the default code for this tag ...}, ], ... } ); Stop parsing the XML, forget any data we already have and return from the $parser->parse(). This is only supposed to be used within rules and may be called both as a method and as an ordinary function (it's not exported). Stop parsing the XML, forget any data we already have and return the attributes passed to this subroutine from the $parser->parse(). This is only supposed to be used within rules and may be called both as a method and as an ordinary function (it's not exported). Stop parsing the XML and return whatever data we already have from the $parser->parse(). The rules for the currently opened tags are evaluated as if the XML contained all the closing tags in the right order. These three work via raising an exception, the exception is caught within the $parser->parse() and does not propagate outside. It's also safe to raise any other exception within the rules, the exception will be caught as well, the internal state of the $parser object will be cleaned and the exception rethrown. Dumper(XML::Rules::inferRulesFromExample( $fileOrXML, $fileOrXML, $fileOrXML, ...) Dumper(XML::Rules->inferRulesFromExample( $fileOrXML, $fileOrXML, $fileOrXML, ...) Dumper($parser->inferRulesFromExample( $fileOrXML, $fileOrXML, $fileOrXML, ...) The subroutine parses the listed files and infers the rules that would produce the minimal, but complete datastructure. It finds out what tags may be repeated, whether they contain text content, attributes etc. You may want to give the subroutine several examples to make sure it knows about all possibilities. You should use this once and store the generated rules in your script or even take this as the basis of a more specific set of rules. Dumper(XML::Rules::inferRulesFromDTD( $DTDorDTDfile, [$enableExtended])) Dumper(XML::Rules->inferRulesFromDTD( $DTDorDTDfile, [$enableExtended])) Dumper($parser->inferRulesFromDTD( $DTDorDTDfile, [$enableExtended])) The subroutine parses the DTD and infers the rules that would produce the minimal, but complete datastructure. It finds out what tags may be repeated, whether they contain text content, attributes etc. You may use this each time you are about to parse the XML or once and store the generated rules in your script or even take this as the basis of a more specific set of rules. With the second parameter set to a true value, the tags included in a mixed content will use the "raw extended" or "raw extended array" types instead of just "raw". This makes sure the tag data both stay at the right place in the content and are accessible easily from the parent tag's atrribute hash. This subroutine requires the XML::DTDParser module! You can pass a parameter (scalar or reference) to the parse...() or filter...() methods, this parameter is later available to the rules as $parser->{parameters}. The module will never use this parameter for any other purpose so you are free to use it for any purposes provided that you expect it to be reset by each call to parse...() or filter...() first to the passed value and then, after the parsing is complete, to undef. The $parser->{pad} key is specificaly reserved by the module as a place where the module users can store their data. The module doesn't and will not use this key in any way, doesn't set or reset it under any circumstances. If you need to share some data between the rules and do not want to use the structure built by applying the rules you are free to use this key. You should refrain from modifying or accessing other properties of the XML::Rules object! By default the module doesn't handle namespaces in any way, it doesn't do anything special with xmlns or xmlns:alias attributes and it doesn't strip or mangle the namespace aliases in tag or attribute names. This means that if you know for sure what namespace aliases will be used you can set up rules for tags including the aliases and unless someone decides to use a different alias or makes use of the default namespace your script will work without turning the namespace support on. If you do specify any namespace to alias mapping in the constructor it does start processing the namespace stuff. The xmlns and xmlns:alias attributes for the known namespaces are stripped from the datastructures and the aliases are transformed from whatever the XML author decided to use to whatever your namespace mapping specifies. Aliases are also added to all tags that belong to a default namespace. Assuming the constructor parameters contain namespaces => { '' => 'foo', '' => 'bar', } and the XML looks like this: <root> <Foo xmlns=""> <subFoo>Hello world</subfoo> </Foo> <other xmlns: <b:pub> <b:name>NaRuzku</b:name> <b:address>at any crossroads</b:address> <b:desc>Fakt <b>desnej</b> pajzl.</b:desc> </b:pub> </other> </root> then the rules wil be called as if the XML looked like this while the namespace support is turned off: <root> <foo:Foo> <foo:subFoo>Hello world</foo:subfoo> </foo:Foo> <other> <bar:pub> <bar:name>NaRuzku</bar:name> <bar:address>at any crossroads</bar:address> <bar:desc>Fakt <b>desnej</b> pajzl.</bar:desc> </bar:pub> </other> </root> This means that the namespace handling will normalize the aliases used so that you can use them in the rules. It is possible to specify an empty alias, so eg. in case you are processing a SOAP XML and know the tags defined by SOAP do not colide with the tags in the enclosed XML you may simplify the parsing by removing all namespace aliases. You can control the behaviour with respect to the namespaces that you did not include in your mapping by setting the "alias" for the special pseudonamespace '*'. The possible values of the "alias"are: "warn" (default), "keep", "strip", "" and "die". warn: whenever an unknown namespace is encountered, XML::Rules prints a warning. The xmlns:XX attributes and the XX: aliases are retained for these namespaces. If the alias clashes with one specified by your mapping it will be changed in all places. The xmlns="..." referencing an unexpected namespace are changed to xmlns:nsN and the alias is added to the tag names included. keep: this works just like the "warn" except for the warning. strip: all attributes and tags in the unknown namespaces are stripped. If a tag in such a namespace contains a tag from a known namespace, then the child tag is retained. "": all the xmlns attributes and the aliases for the unexected namespaces are removed, the tags and normal attributes are retained without any alias. die: as soon as any unexpected namespace is encountered, XML::Rules croak()s. You may view the module either as a XML::Simple on steriods and use it to build a data structure similar to the one produced by XML::Simple with the added benefit of being able to specify what tags or attributes to ignore, when to take just the content, what to store as an array etc. Or you could view it as yet another event based XML parser that differs from all the others only in one thing. It stores the data for you so that you do not have to use globals or closures and wonder where to attach the snippet of data you just received onto the structure you are building. You can use it in a way similar to XML::Twig with simplify(), specify the rules to transform the lower level tags into a XML::Simple like (simplify()ed) structure and then handle the structure in the rule for the tag(s) you'd specify in XML::Twig's twig_roots. If you need to parse a XML file without the root tag (something that each and any sane person would allow, but the XML comitee did not), you can parse <!DOCTYPE doc [<!ENTITY real_doc SYSTEM "$the_file_name">]><doc>&real_doc;</doc> instead. Jan Krynicky, <Jenda at CPAN.org> Please report any bugs or feature requests to bug-xml-rules at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. You can find documentation for this module with the perldoc command. perldoc XML::Rules You can also look for information at: Please see or for discussion. XML::Twig, XML::LibXML, XML::Pastor The escape_value() method is taken with minor changes from XML::Simple. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~jenda/XML-Rules-1.12/lib/XML/Rules.pm
CC-MAIN-2014-23
en
refinedweb
Multiple programs use the same file extension, but the formats are totally different and incompatible. For instance, I have .sch files on my computer that are in at least 5 different formats (TINA, PSpice, PADS, Protel, and Eagle). Is there a way to get Windows to treat them differently, so that double-clicking on such a file opens it in the program it's meant to be opened in? .sch Linux uses magic numbers in the files themselves to differentiate, and only uses file extensions as a fallback plan. (All PNG files start with the bytes 89 50 4E 47 0D 0A 1A 0A, for instance, regardless of what you name them.) It would be nice if Windows could support this, but probably very difficult to implement. Maybe something simpler like a second-level extension, like filename.program1.sch and filename.program2.sch? Maybe some kind of filter that renames files on the fly? 89 50 4E 47 0D 0A 1A 0A filename.program1.sch filename.program2.sch Better idea: Associating the ambiguous extension with a pre-processor (.bat file or dedicated app) that checks for a second-level extension or goes into the file itself and scans for the magic number and then launches the appropriate program? Windows does not launch files based on any information in the file - building a database for that would take an incredible amount of work and programming. The only true way to identify a file is by the binary signatures in the file, if the file even has it, and this is up to the software author to implement. In Windows, files are passed to the program you specify for a particular file extension. Windows determines a file's extension as the substring which follows the last occurrence of a period, so it's not possible with the file names you posted. You have to either re-name the files (and give them unique file extensions), or write a batch file to launch the appropriate application for you. For more details, see this Technet article. I solved it myself: I made a Python script that reads the first few bytes of a file and compares them to a dictionary, then launches the appropriate program based on the magic numbers. import sys import subprocess magic_numbers = { 'OB': r'C:\Program Files (x86)\DesignSoft\Tina 9 - TI\TINA.EXE', # TINA '*v': r'C:\Program Files (x86)\Orcad\Capture\Capture.exe', #PSpice 'DP': r'C:\Program Files (x86)\Design Explorer 99 SE\Client99SE.exe', #Protel '\x00\xFE': r'C:\MentorGraphics\9.0PADS\SDD_HOME\Programs\powerlogic.exe', #PADS Logic '\x10\x80': r'C:\Program Files (x86)\EAGLE-5.11.0\bin\eagle.exe', # Eagle } filename = sys.argv[1] f = open(filename, 'rb') # Read just enough bytes to match the keys magic_n = f.read(len(max(magic_numbers.keys()))) subprocess.call([magic_numbers[magic_n], filename]) Latest version will be here: Launch ambiguous files in the appropriate program I tried to associate the file extension with this script, but Windows 7 didn't let me. It just associated it with Python instead, so I went into the registry and added the script name manually. Room for improvement, but it works. I can double-click on different files with the same .sch extension and they open in different apps. Update: I've converted it to an .exe using cx_freeze, with an external YAML config file, and it's easy to associate. See also this libmagic proposal. Not sure if I should make this into a full-fledged "libmagic launcher for Windows" or if it's better to handle only one file extension with one .exe and a simple YAML file. To start, you can rename one of the types of files to have a new extension, and use the "open with" dialogue to set a default program to open those types of files. This doesn't deal with the renaming problem though. But you simplify things by making a specific folder where you put all of the files from one of the programs. Then you can write a script to automatically rename files in that folder to your new file extension. You may have trouble with an "Open File" dialogue in your program, depending on how it is set up. But if you have a single folder where all of your files are you should be able to just use that. A more complicated, but potentially better way would be to create a proxy program. Keep all file extensions, but have them be opened by the proxy program. Have your program examine the binary and choose which type of file it is and which program to start. This will require you to spend some time writing your program, which may or may not be worth it to you. Microsoft Visual Studio implements your last idea. When you launch a .sln file, a small stub checks the solution version number and launches the correct version of Visual Studio (if you've got multiple versions installed). Of course, coordination here is a bit easier since (A) the file format is designed for this and (B) they're all versions of the same software, from the same manufacturer. By posting your answer, you agree to the privacy policy and terms of service. asked 2 years ago viewed 818 times active
http://superuser.com/questions/317885/get-windows-to-treat-files-with-the-same-extension-differently/317927
CC-MAIN-2014-23
en
refinedweb
There are several use cases for allowing your app to handle custom URIs. The first one I'll cover is simply launching your application. Launching your application from the web A common trend is for companies to include a link to download their rich application on its website. One way you could handle a link like this is to launch the rich application if the user already has it installed, or direct them to the market so they can download it. First, you need to decide what URI you are going to use to achieve this. My natural tendency was to come up with a custom URI that, chances are, only your application will recognize, e.g., appname://get.my.app. However, after doing some research, I found a comment from a Google Engineer (or so the comments seem to indicate) on Stack Overflow that strongly recommended against this approach. He states: "Please DO NOT use your own custom scheme like that!!! URI schemes are a network global namespace. Do you own the "anton:" scheme world-wide? No? Then DON'T use it." Instead, he recommends using a URI of a website that you own, and I think this makes a lot of sense. After all, most legitimate apps on the market have a website counterpart, so why not take advantage of it? Let's assume your application's website is. I recommend using a "/get/" URI from your website. You'll want to make your Android application respond to this URI from the web. This is achieved by assigning an intent-filter in your application's Manifest XML file, as such: <intent-filter> <action android: <category android: <category android: <data android:scheme="http" android:host="yourapp.com" android:</intent-filter> The VIEW action and BROWSABLE category are both needed so your activity can respond to web URIs. The data element is necessary to have the activity respond to the particular URI I mentioned previously, and that element's attributes should be fairly self-explanatory. The only thing left to do is to create a landing page such as "/get/index.html" on your site that will redirect the user to the appropriate app store to download your app. The following simple script could be used: <script> if( navigator.userAgent.match(/android/i) ) { // redirect to the Android Market } else if( navigator.userAgent.match(/iPhone/i) ) { // redirect to iTunes }</script> Now that everything is in place, you can create a link that will launch your app or redirect the user to the market if your app is not installed: <a href="">Download our App!</a> Communicating with your app You could take this technique a step further to communicate data from the web to your application. Let's assume you want to be able to have a user click a link on your website that automatically composes an SMS for them on their phone. This time, instead of using the ‘path' attribute in the data tag of the intent-filter, we can use the pathPrefix attribute. <intent-filter> <action android: <category android: <category android: <data android:</intent-filter> The pathPrefix attribute will allow your app to respond to links such as: <a href="">Compose SMS</a> Next, you can add the following code to the onCreate method of the activity you applied the intent-filter to: List<String> path = getIntent().getData().getPathSegments(); Uri uri = Uri.parse("smsto:" + path.get(1)); Intent intent = new Intent(Intent.ACTION_SENDTO, uri); intent.putExtra("sms_body", path.get(2));startActivity(intent); As you can see from the code above, we start by getting an array of the path segments of the URI that was captured by the intent-filter. In this case, the array would look like this: ["sms", "5554443333", "hello"] Now it's just a matter of crafting the intent using the path segments array and launching the intent. Conclusion The example I provide in this post is fairly basic, but I hope it will help you apply the same logic to solve your own problem. With rich apps and the web becoming more of a blend every day, I'm sure you will eventually find a use case for this technique..
http://www.techrepublic.com/blog/software-engineer/web-to-app-interoperability-launch-your-android-app-from-the-web/?count=50&view=expanded
CC-MAIN-2014-23
en
refinedweb
module Test.IOSpec.STM ( -- * The specification of STM STMS -- * Atomically , atomically -- * The STM monad , STM , TVar , newTVar , readTVar , writeTVar , retry , orElse , check ) where import Test.IOSpec.VirtualMachine import Test.IOSpec.Types import Data.Dynamic import Data.Maybe (fromJust) import Control.Monad.State -- The 'STMS' data type and its instances. -- -- | An expression of type @IOSpec 'STMS' a@ corresponds to an 'IO' -- computation that may use 'atomically' and returns a value of type -- @a@. -- -- By itself, 'STMS' is not terribly useful. You will probably want -- to use @IOSpec (ForkS :+: STMS)@. data STMS a = forall b . Atomically (STM b) (b -> a) instance Functor STMS where fmap f (Atomically s io) = Atomically s (f . io) -- | The 'atomically' function atomically executes an 'STM' action. atomically :: (STMS :<: f) => STM a -> IOSpec f a atomically stm = inject $ Atomically stm (return) instance Executable STMS where step (Atomically stm b) = do state <- get case runStateT (executeSTM stm) state of Done (Nothing,_) -> return Block Done (Just x,finalState) -> put finalState >> return (Step (b x)) _ -> internalError "Unsafe usage of STM" -- The 'STM' data type and its instances. data STM a = STMReturn a | NewTVar Data (Loc -> STM a) | ReadTVar Loc (Data -> STM a) | WriteTVar Loc Data (STM a) | Retry | OrElse (STM a) (STM a) instance Functor STM where fmap f (STMReturn x) = STMReturn (f x) fmap f (NewTVar d io) = NewTVar d (fmap f . io) fmap f (ReadTVar l io) = ReadTVar l (fmap f . io) fmap f (WriteTVar l d io) = WriteTVar l d (fmap f io) fmap _ Retry = Retry fmap f (OrElse io1 io2) = OrElse (fmap f io1) (fmap f io2) instance Monad STM where return = STMReturn STMReturn a >>= f = f a NewTVar d g >>= f = NewTVar d (\l -> g l >>= f) ReadTVar l g >>= f = ReadTVar l (\d -> g d >>= f) WriteTVar l d p >>= f = WriteTVar l d (p >>= f) Retry >>= _ = Retry OrElse p q >>= f = OrElse (p >>= f) (q >>= f) -- | A 'TVar' is a shared, mutable variable used by STM. newtype TVar a = TVar Loc -- | The 'newTVar' function creates a new transactional variable. newTVar :: Typeable a => a -> STM (TVar a) newTVar d = NewTVar (toDyn d) (STMReturn . TVar) -- | The 'readTVar' function reads the value stored in a -- transactional variable. readTVar :: Typeable a => TVar a -> STM a readTVar (TVar l) = ReadTVar l (STMReturn . fromJust . fromDynamic) -- | The 'writeTVar' function overwrites the value stored in a -- transactional variable. writeTVar :: Typeable a => TVar a -> a -> STM () writeTVar (TVar l) d = WriteTVar l (toDyn d) (STMReturn ()) -- | The 'retry' function abandons a transaction and retries at some -- later time. retry :: STM a retry = Retry -- | The 'check' function checks if its boolean argument holds. If -- the boolean is true, it returns (); otherwise it calls 'retry'. check :: Bool -> STM () check True = return () check False = retry -- | The 'orElse' function takes two 'STM' actions @stm1@ and @stm2@ and -- performs @stm1@. If @stm1@ calls 'retry' it performs @stm2@. If @stm1@ -- succeeds, on the other hand, @stm2@ is not executed. orElse :: STM a -> STM a -> STM a orElse p q = OrElse p q executeSTM :: STM a -> VM (Maybe a) executeSTM (STMReturn x) = return (return x) executeSTM (NewTVar d io) = do loc <- alloc updateHeap loc d executeSTM (io loc) executeSTM (ReadTVar l io) = do (Just d) <- lookupHeap l executeSTM (io d) executeSTM (WriteTVar l d io) = do updateHeap l d executeSTM io executeSTM Retry = return Nothing executeSTM (OrElse p q) = do state <- get case runStateT (executeSTM p) state of Done (Nothing,_) -> executeSTM q Done (Just x,s) -> put s >> return (Just x) _ -> internalError "Unsafe usage of STM" internalError :: String -> a internalError msg = error ("IOSpec.STM: " ++ msg)
http://hackage.haskell.org/package/IOSpec-0.2/docs/src/Test-IOSpec-STM.html
CC-MAIN-2014-23
en
refinedweb
Database Showing page 1 of 1. KeePass Password Safe KeePass - A free open source password manager.229,721 weekly downloads rFunc UDF Library User Defined Library for InterBase / Firebird / Yaffil for Windows and Unix. Written on C++.27 weekly downloads eBay-tools Watching, bidding, selling and searching tools for Ebay auctions. All data and pictures are stored in a local SQL database for offline searching and viewing. No installation required! Comfortable listing with thumbnails. See for s MS SQL scriptor .NET utility capable to automatically generate XML database schemas and then translating those to SQL scripts (both data import/export and stored procedures for data access) and C# wrapper classes.1 weekly downloads Digimech Server Object-oriented database server1 weekly downloads Home CD Archive HCDA consists of an CD data storage, software package organizer and debtors control utility.1 weekly downloads The Rsdn.Framework.Data namespace The Rsdn.Framework.Data is a namespace that represents a higher-level wrapper for ADO.NET with high performance object-relational mapping Nuclos "Nuclos" is an object-oriented framework which incorporates basic functionality to build "business" software.0 weekly downloads Qt Berkeley DB plugin Qt library database plugin, which allows usage of Berkeley Databases from Qt0 weekly downloads
http://sourceforge.net/directory/database/database/natlanguage:russian/natlanguage:english/os:win95/
CC-MAIN-2014-23
en
refinedweb
Hi > Congratulations to the release! I have played with it a > little with the intention of using it for RAD prototyping > of a site I will be working on and it looks promising. > > I wonder about one thing though. My need would be to call > a certain servlet with an additional path that would be > used as a parameter, I have done this with java servlets, > and it can be a nice way of hiding the application structure > from the users. > > For example > /cgi-bin/WebKit.cgi/Bands/Metallica/LovesNapster > > is sent to the Bands.py servlet with the rest as a parameter. > Using mod_rewrite you could even hide the cgi-bin and WebKit.cgi > parts. I didn't get this to work, did I miss something or shouldn't > it be possible in WebKit. > > Marcus Ahlfors > mahlfors@... > > > > > _______________________________________________ > Webware-discuss mailing list > Webware-discuss@... > > --------------------------------------------------- WorldPilot - Get Synced - The Open Source Personal Information Manager Server At 02:07 PM 6/7/00 +0000, Jay Love wrote: >Hi Sounds fine to me. For increased efficiency, you could do it the left to right instead. This might tie into our Context/Application discussions, so that an entry in our "directory mapping" dictionary would specify that everything that started with /Bands went to a particular servlet. I think this is how Java does it in fact. -Chuck Ok, I have investigated a little. In the Java Servlet Specification page 29, a request is made up of: requestURI = contextPath+servletPath+pathInfo Where pathInfo is the extra path after the servlet name. There is some examples in the specs. I don't know but I suspect that java servlets all must reside in one directory. That would of course simplify the parsing. If played around a little along Jay's advices, this is a works-for-me solution. But nothing that should be used in a final version. I didn´t put that much effort in it since I don't know all details on what the function is supposed to do. def serverSidePathForRequest(self, request, urlPath): ''' Returns what it says. This is a 'private' service method for use by HTTPRequest. ''' + # Create context path + tail = urlPath + pathInfo = "" + try: + prepath = self._Contexts['contextName'] + except KeyError: + prepath = self._Contexts['default'] + if not os.path.isabs(prepath): + prepath = os.path.join(self.serverDir(), prepath) + + # Look for servlets from right to left + while tail != '': + ssPath = os.path.join(prepath, tail + ".py") + if os.path.exists(ssPath): + request._fields['pathInfo'] = pathInfo + return ssPath + tail,head = os.path.split(tail) + pathInfo = os.path.join(head,pathInfo) ssPath = self._serverSidePathCacheByPath.get(urlPath, None) if ssPath is None:
http://sourceforge.net/p/webware/mailman/webware-discuss/thread/[email protected]/
CC-MAIN-2014-23
en
refinedweb
Automating the world one-liner at a time… Both PowerShell and Windows Management Instrumentation (WMI) are pretty incredible technologies that can do a lot of amazing things, but we're all human, and keeping an encyclopedic mental reference of all of these amazing things would give Good Will Hunting a headache. For years, when I used WMI I almost always had a browser open to MSDN to help me figure out and remember the little details of a class in WMI, and, for all of the desktop clutter, it never bothered me... until I started using Powershell and had Get-Help and Get-Command. These two cmdlets rocked my world. Get-Command could tell me all of the commands in a system, and the properties they accepted. Get-Help told me what the cmdlet did, and how to use it. All of a sudden I had the kinds of information I was always looking up online right at my fingertips... but not for WMI. Luckily, it turns out that WMI actually has a way to provide help information. If you run Get-WmiObject, you can see what I mean: Get-WmiObject __Namespace | Format-Table Name This lists the namespaces underneath root\cimv2. You should have at least one namespace named something like ms_### . This namespace contains localized class help for the WMI classes in the parent namespace. This means that no matter where in the world you are, you can get help from WMI in your language without connecting to the internet. I wrote a few functions to make this feature of WMI a lot easier to access. Get-WmiHelp: Gets you a dictionary with the help for a specific WMI classSearch-WmiHelp: Searches through the namespace for classes that match certain filters Here are some examples: Get-WmiHelp Win32_Process Name Value---- -----Description The Win32_Process class represents a sequence...Properties {QuotaPeakPagedPoolUsage, CSCreationClassName...Methods {Terminate, AttachDebugger, Create, GetOwner...} (Get-WmiHelp Win32_Bios).Description The Win32_BIOS class represents the attributes of the computer system's basic input/output services (BIOS) that are installed on the computer. (Get-WmiHelp Win32_Process).Methods.Create The Create method creates a new process.The method returns an integer value that can be interpretted as follows:0 - Successful completion.2 - The user does not have access to the requested information.3 - The user does not have sufficient privilge.8 - Unknown failure.9 - The path specified does not exist.21 - The specified parameter is invalid.Other - For integer values other than those listed above, refer to Win32 error code documentation. (Get-WmiHelp Win32_Volume).Properties.StatusInfo StatusInfo is a string indicating whether the logical device is in an enabled (value = 3), disabled (value = 4) or some other (1) or unknown (2) state. If this property does not apply to the logical device, the value, 5 ("Not Applicable"), should be used. Search-WmiHelp {$_ -like "*Disk*"} # Finds all classes whose description contains "Disk" Name Value---- -----Win32_OperatingSystemAutoch... {Description, Properties, Methods}Win32_Volume {Description, Properties, Methods}.... Search-WmiHelp -Method {$_.Key -like "*Create*} # Finds all classes that contain a method named create Name Value---- -----Win32_Service {Description, Properties, Methods}Win32_BaseService {Description, Properties, Methods}Win32_TerminalService {Description, Properties, Methods}Win32_Share {Description, Properties, Methods} ... Search-WmiHelp -Property {$_.Value -like "*Quota*} # Finds all classes that contain a property whose description contains "Quota" Name Value---- -----Win32_QuotaSetting {Description, Properties, Methods}Win32_LogicalDisk {Description, Properties, Methods}Win32_VolumeQuota {Description, Properties, Methods}Win32_VolumeUserQuota {Description, Properties, Methods}Win32_Process {Description, Properties, Methods}Win32_MappedLogicalDisk {Description, Properties, Methods}Win32_Volume {Description, Properties, Methods}Win32_VolumeQuotaSetting {Description, Properties, Methods}Win32_DiskQuota {Description, Properties, Methods} Hope this helps, James Brundage [MSFT] --- Update: I have added a -Culture option to Get-WmiHelp and Search-WmiHelp. If Search-WmiHelp is giving you an "Invalid Namespace" error, then run: Get-WmiObject "__Namespace" | Format-List Name and try passing the value of any ms_### namespace to Get-WmiHelp/Search-WmiHelp e.g. Get-WmiHelp Win32_Process -culture 0xc0a: Name Value---- -----Description La clase Win32_Process representa una secuenc...Properties {QuotaPeakPagedPoolUsage, CSCreationClassName...Methods {Terminate, AttachDebugger, Create, GetOwner...} PingBack from I get the following error when I run Search-WmiHelp: Get-WmiObject : Invalid namespace At D:\Documents\WindowsPowerShell\Add-WmiHelpFunctions.ps1:120 char:42 + $localizedClasses = Get-WmiObject <<<< -NameSpace $localizedNamespace -Query "select * f rom meta_class" Attempted to divide by zero. At D:\Documents\WindowsPowerShell\Add-WmiHelpFunctions.ps1:125 char:109 + Write-Progress "Searching Wmi Classes" "$count of $($localizedClasses.Count)" -Perc ( $count*100/$ <<<< localizedClasses.Count) And when I run Get-WmiHelp it just produces nothing. I'm running Vista, but I don't suppose that should make any difference. Yup -- this stuff dont work :( If you are having issues, please post the command line you ran Search-WmiHelp or Get-WmiHelp with, and the results of the following script: Get-WmiObject -query "SELECT * FROM __Namespace" -namepsace (value of $namespace you passed to Get-WmiHelp (or blank)) Get-LocalalizedNamespace (value of $namespace you passed to Get-WmiHelp (or "root\Cimv2")) This script mines the information from WMI, which is in a very standardized format (the description qualifier on each method/property/class in the localized namespace contains the help). One possibility is that the namespace does not exist on your box. Another possibility is that the namespace that one of the functions Get-WmiHelp and Search-WmiHelp both call (Get-LocalizedNamespace) is returning a bad namespace name. I hope I can help identify why this script is not working for you. James Brundage [MSFT] Doesn't work here, either. I have an english language OS with German regional settings. (Get-culture).LCID returns 1031 (0x407). WMI only has namespace ms_0409. get-localizednamespace returns "ms_7f" in that case. (warning, I'm a newbie). I get an error just trying to dot-source the script: Unrecognized token in source text (my path)\Add-WmiHelpFunctions.ps1:3 char:1 + @ <<<< (Running Vista Business, 32-bit English) Here's a result - and what fails. Platform is WinXP SP2 English / no localization. ------------------------------------ PS C:\> Get-WmiHelp Win32_Process PS C:\> (Get-WmiHelp Win32_Bios).Description PS C:\> Search-WmiHelp {$_ -like "*Disk*"} Get-WmiObject : Invalid namespace At C:\Documents and Settings\jcn\Desktop\Add-WmiHelpFunctions.ps1:135 char:42 + $localizedClasses = Get-WmiObject <<<< -NameSpace $localizedNamespace -Query "select * from meta_class" At C:\Documents and Settings\jcn\Desktop\Add-WmiHelpFunctions.ps1:140 char:109 + Write-Progress "Searching Wmi Classes" "$count of $($localizedClasses.Count)" -Perc ($count*100/$ <<<< lo calizedClasses.Count) PS C:\> In order to allow the individuals who are having localization issues to still get their wmi help, I have made culture an argument to Get-WmiHelp and Search-WmiHelp. You can specify the culture ID code in hex format. If you run: Get-WmiObject "__Namespace" | ? { $_.Name -like "ms_*"} | Format-Table Name You should be able to see all of the localized namespaces. can help you identify which one you would like to use. ms_409 is en-us. Pick one of these to use, and then run Get-WmiHelp or Search-WmiHelp with -cultureID 0x409 (or 0x + whatever 3 digit number was after ms_ in the namespace name for the locale you want). Please post a comment if you had an issue, and this resolved it, or if you are still having an issue. This is a XP Sp2 MUI + German Language Pack 30# $localizedNamespaces = Get-WmiObject -NameSpace "root\cimv2" -Class "__Namespace" | where {$_.Name -like "ms_*"} 31# $localizedNamespaces __GENUS : 2 __CLASS : __NAMESPACE __SUPERCLASS : __SystemClass __DYNASTY : __SystemClass __RELPATH : __NAMESPACE.Name="ms_409" __PROPERTY_COUNT : 1 __DERIVATION : {__SystemClass} __SERVER : WKS-VIE-301 __NAMESPACE : ROOT\cimv2 __PATH : \\WKS-VIE-301\ROOT\cimv2:__NAMESPACE.Name="ms_409" Name : ms_409 __RELPATH : __NAMESPACE.Name="ms_407" __PATH : \\WKS-VIE-301\ROOT\cimv2:__NAMESPACE.Name="ms_407" Name : ms_407 32# $localizedNamespaces | where {$_.Name -eq "ms_{0:x}" -f (Get-Culture).LCID } 33# Get-Culture LCID Name DisplayName ---- ---- ----------- 3079 de-AT German (Austria) 34# Get-WmiObject -query "SELECT * FROM __Namespace" -namespace "root\cimv2" __RELPATH : __NAMESPACE.Name="sms" __PATH : \\WKS-VIE-301\ROOT\cimv2:__NAMESPACE.Name="sms" Name : sms __RELPATH : __NAMESPACE.Name="Applications" __PATH : \\WKS-VIE-301\ROOT\cimv2:__NAMESPACE.Name="Applications" Name : Applications 35# Get-LocalizedNamespace \ms_7f 36# Hello Powershell Team, I don't know if this is the right place but Jeffrey did ask us to complain when we saw something stupid or when things didn't work as expected. I am trying to use the WMISearcher object and when I pass a query that has a join, it fails. So basically, even though you can pass a standard "Select * from ..." query, you can't use a properly written WQL statement that uses joins or aliases. Or am I missing something? Just an FYI... The latest .PS1 file is missing the '@' for the initial here string. --Greg .cmdletname { font-size:large } .cmdletsynopsis { font-size:medium } .cmdletdescription { font-size:medium
http://blogs.msdn.com/b/powershell/archive/2007/09/24/get-wmihelp-search-wmihelp.aspx
CC-MAIN-2014-23
en
refinedweb
Code. Collaborate. Organize. No Limits. Try it Today. One of the great things about WPF is that it separates the functionality of a control from the way it looks, this has become known as “lookless controls”. Which is great, but how can we ensure that our custom controls behave and also have a default look in the first place. This mini article will try and show you what happens with lookless controls. Ensuring Control Does What We Want It To Do I will firstly talk a bit about how you can create a split designer/developer project that should work correctly. The first things that a developer should do is create a TemplatePartAttribute such that this is captured in the metadata which can be used by an documentation tool. By using this TemplatePartAttribute the developer is able to tell the designer what was intended for a correct control operation. TemplatePartAttribute Here is an example for a small control that I have made 1: [TemplatePart(Name = “PART_DropDown”, 2: Type = typeof(ComboBox))] 3: public class DemoControl : Control 4: { 5: } This should be an alert to the designer that they need to create a part of the control Template that should be a ComboBox and should be called “PART_DropDown”. So that is part1, next the developer should override the OnApplyTemplate and look for any expected parts that are required to make the control work properly and wire up the events required. Here is an example. OnApplyTemplate 1: public override void OnApplyTemplate() 2: { 3: base.OnApplyTemplate(); 4: 5: //Obtain the dropdown and create the items 6: dropDown = 7: base.GetTemplateChild( 8: “PART_DropDown”) as ComboBox; 9: if (dropDown != null) 10: dropDown.SelectionChanged += 11: new SelectionChangedEventHandler( 12: dropDown_SelectionChanged); 13: 14: 15: } 16: 17: void dropDown_SelectionChanged(object sender, 18: SelectionChangedEventArgs e) 19: { 20: 21: 22: } Another method is to rely on RoutedCommands that should be used by the designer in the XAML control Template. These can then used as follows: 1: 2: // Make sure the command is bound, so that it will work when called to 3: CommandBindings.Add(new 4: CommandBinding(DemoCommands.SayHello, 5: //The actual command handler code 6: (s, e) => { 7: MessageBox.Show(“Hello”); 8: })); Lookless Controls In order to create a truly lookless control, we should do the following: Override the default Style associated with a control, this is done by changing the metadata. An example of which is as follows: 1: static DemoControl() 2: { 3: //Provide a default set of visuals for a custom control 4: DefaultStyleKeyProperty.OverrideMetadata( 5: typeof(DemoControl), 6: new FrameworkPropertyMetadata( 7: typeof(DemoControl))); 8: } Next we need to understand a few things about how Themes work in WPF. There is an assembly level attribute that is called ThemeInfoAttribute, which is typically created as follows: ThemeInfoAttribute 1: [assembly: ThemeInfo( 2: ResourceDictionaryLocation.None, 3: //where theme specific resource dictionaries are located 4: //(used if a resource is not found in the page, 5: // or application resource dictionaries) 6: ResourceDictionaryLocation.SourceAssembly 7: //where the generic resource dictionary is located 8: //(used if a resource is not found in the page, 9: // app, or any theme specific resource dictionaries) 10: )] This could be used to indicate a location for a Style for a control. More often than not this is created as I have just shown. If you do not specify an external Dll to look in, the next place that is examined is Themes\generic.xaml, so this is where you should put your default Style/Template for your custom control. So typically you would create a generic.xaml file that held the default control Style/Template. For the attached demo project my generic.xaml simply contains a bunch of merged resource dictionary objects as follows: 1: <ResourceDictionary xmlns=”” 2: xmlns:x=””> 3: 4: <!– Merge in all the available themes –> 5: <ResourceDictionary.MergedDictionaries> 6: <ResourceDictionary 7: Source=”/CustomControls;component/Themes/Default.xaml” /> 8: <ResourceDictionary 9: Source=”/CustomControls;component/Themes/Blue.xaml” /> 10: <ResourceDictionary 11: Source=”/CustomControls;component/Themes/Red.xaml” /> 12: </ResourceDictionary.MergedDictionaries> 13: 14: 15: 16: </ResourceDictionary> If we study one of these, a little more closely, say the “Blue” one, we can see that is also uses a ComponentResourceKey markup extension. 1: <ResourceDictionary xmlns=”” 2: xmlns:x=”” 3: xmlns:local=”clr-namespace:CustomControls”> 4: 5: <Style x:Key=”{ComponentResourceKey {x:Type local:DemoControl}, Blue }” 6: TargetType=”{x:Type local:DemoControl}”> 7: <Setter Property=”Background” Value=”Blue”/> 8: <Setter Property=”Margin” Value=”10″/> 9: <Setter Property=”Template”> 10: <Setter.Value> 11: <ControlTemplate TargetType=”{x:Type local:DemoControl}” > 12: <Border Background=”{TemplateBinding Background}” 13: CornerRadius=”5″ BorderBrush=”Cyan” 14: BorderThickness=”2″> 15: <StackPanel Orientation=”Vertical” 16: Margin=”{TemplateBinding Margin}”> 17: <Button x:Name=”btnSayHello” 18: Margin=”{TemplateBinding Margin}” 19: Background=”LightBlue” 20: Foreground=”Black” 21: Command=”{x:Static 22: local:DemoCommands.SayHello}” 23: Content=”Say Hello” Height=”Auto” 24: Width=”Auto” /> 25: <ComboBox x:Name=”PART_DropDown” 26: Margin=”{TemplateBinding Margin}” 27: Background=”LightBlue” 28: Foreground=”Black”> 29: <ComboBoxItem Content=”Blue”/> 30: <ComboBoxItem Content=”Red”/> 31: </ComboBox> 32: </StackPanel> 33: <Border.LayoutTransform> 34: <ScaleTransform CenterX=”0.5″ 35: CenterY=”0.5″ 36: ScaleX=”3.0″ 37: ScaleY=”3.0″/> 38: </Border.LayoutTransform> 39: </Border> 40: </ControlTemplate> 41: </Setter.Value> 42: </Setter> 43: </Style> 44: 45: 46: </ResourceDictionary> So lets get to the bottom of that. What does that do for us. Well quite simply it allows us another way to select a resource, by using a Type/Id to lookup the resource. Here is an example 1: Style style = (Style)TryFindResource( 2: new ComponentResourceKey( 3: typeof(DemoControl), 4: styleToUseName)); 5: 6: if (style != null) 7: this.Style = style; The working app simply allows users to toggle between 3 different Styles for the lookless control. You can download it and play with using the demo project, theming.zip - 92.64 KB Enjoy. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) I am lucky enough to have won a few awards for Zany Crazy code articles over the years public static ComponentResourceKey PrimaryBrushKey { get { return new ComponentResourceKey(typeof(Resources), "PrimaryBrush"); } } <SolidColorBrush x: General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/37326/Lookless-Controls-Themes
CC-MAIN-2014-23
en
refinedweb
InfoWorld: With your latest VMStor offering, you seem to be betting on what you call scalable NAS. Where do you see NAS fitting into the virtualization and cloud landscape? Gluster: We think the future of virtual and cloud storage is NAS. NAS scales to petabytes, enables shared data access across many clients, and is easy to use, cost effective, and high performing -- everything that is required to effectively manage data in VMs. Unstructured data is exploding and it's predicted that by the end of 2010, 1,200 exabytes of data will be created. Scale-out NAS solutions can scale seamlessly and pool disk and memory resources under a global namespace virtualizing the underlying hardware whether you require a thousand virtual disk images, a petabyte of data, or both. This provides an ideal virtual storage environment to complement virtual server deployments. Additionally, NAS volumes can be mounted simultaneously across thousands of servers, providing both the hypervisor and cloud applications to share the same storage, allowing VM migration across a large pool of servers without concerns for storage access. InfoWorld: For years now, organizations with virtualized environments have been told to leverage the power of SAN, but you guys are talking NAS vs. SAN. Why is that? Gluster: SAN solutions have some inherent limitations for scalability, manageability, and ease of sharing data. They can also be expensive, even with the availability of iSCSI-based solutions, and require fairly sophisticated administrative expertise. A cloud environment where hundreds of VMs can be provisioned in minutes is extremely dynamic. SAN administration requires significant expertise such as the need to use multiple extents to create a VMFS volume larger than 2TB or implementing raw device mappings (RDM) for performance. Alternatively, NAS solutions are easily shared and a single mount point can be addressed by hundreds or thousands of clients. VM disk images are just files and NAS is optimized for large-scale storage of files making it an ideal match for virtual machine storage. Key features such as snapshots, thin provisioning, and backup are all supported, and previous questions about performance are no longer relevant with NAS supporting 10 Gigabit Ethernet and InfiniBand. InfoWorld: At what point do you see cloud storage becoming mainstream? Gluster: We think that it's interesting that even though the hype surrounding cloud computing is probably at an all-time high, there are a lot of use cases that we are seeing. We definitely see both private and public cloud storage in production. The industry knows that mainstream adoption is inevitable, real proof points are supporting the momentum and this is a case where the industry will figure things out quickly. Special thanks to AB Periasamy, CTO of Gluster, for speaking to me about the topic of open source NAS storage in a virtualized and cloud enabled data center. This story, "Gluster brings clustered NAS storage to VMware," was originally published at InfoWorld.com. Follow the latest developments in virtualization and cloud computing at InfoWorld.com.
http://www.infoworld.com/d/virtualization/gluster-brings-clustered-nas-storage-vmware-427?page=0,2
CC-MAIN-2014-23
en
refinedweb
Struts2 ajax validation example. Struts2 ajax validation example. In this example, you will see how to validate login through Ajax in struts2. 1-index.jsp <html> <...="1"><tr><td> <s:form action=" Ajax validation in struts2. Ajax validation in struts2. In this section, you will see how to validate fields of form in struts2. 1-index.jsp <html> <head>..."><tr><td> <s:form action=" Ajax form element - Ajax Ajax form element I have problem in passing a value into form feild using ajax. I want to get the value is as a parameter. Can some one help me plz..."/> </form-beans> <action-mappings> <action path Struts 2.1.8 Login Form Struts 2.1.8 Login Form  ... to validate the login form using Struts 2 validator framework. About the example... in the struts.xml file: <!-- Login Form Configuration --> <action name Struts 2 Ajax and complete implementation of login form using the Ajax (DOJO). Lets... Struts 2 Ajax In this section, we explain you Ajax based development in Struts 2. Struts 2 provides built getting null value in action from ajax call getting null value in action from ajax call Getting null value from ajax call in action (FirstList.java)... first list is loading correctly. Need..." name="form1" action="StudentRegister" method="post" enctype="multipart/form-data Ajax Ajax how to include ajax in jsp page? Hi, Please read Ajax First Example - Print Date and Time example. Instead of using PHP you can write your code in JSP. Thanks Struts2 ; Why Struts 2 The new version Struts 2.0 is a combination of the Sturts action...Struts2 Apache Struts: A brief Introduction Apache Struts is an open with BIRT - Ajax or not. ------------------------------------------------------------------ Read for more information about Ajax...Ajax with BIRT Hai i am currently working on BIRT in a company... is my code.. JSP and Servlet using AJAX function getXMLObject Ajax in a tabular form. Same for >33kv We can use ajax where instead of a link a radio button can fetch the data and can populate a table.If ajax can be used Struts2 Training Day--2 Struts2 Actions Simple Action Redirect Action Struts2 Validator Framework Day--3... Tag Submit Tag Reset Tag Day--5 A Simple Struts2 Login Application JSP Ajax Form Ajax Form Example This example explains you how to develop a form that populates dynamically from the database though Ajax call. When user selects employee... in the box automatically. Read more at Login Action Class - Struts Login Action Class Hi Any one can you please give me example of Struts How Login Action Class Communicate with i-bat PHP - Ajax " For read more information on Ajax visit to : How can ajax is used in php,plz expalin with example. Hi friend, Code to solve the problem : Aj tags in struts2 ;Login</title> </head> <body> <s:form action="login...tags in struts2 Hello I am using simple tags in struts2. Like..." pageEncoding="ISO-8859-1"%> <%@ taglib prefix="s" uri="/struts DOJO Tree - Ajax DOJO Tree Hi I am able to generate a Tree structure using DOJO toolkit dynamically which is comming from my Struts (using 1.2) action classes..., read for more information. Login - Struts Struts Login page I want to page in which user must login to see...;<table border="1" ><form method="post" action... type that url... Login Page after logging that page must return to my page.please AJAX with AJAX with Ajax resources, in a digg style, allows the user to register and addd his/her own links Read full Description AJAX Search ;/style> </head> <body> <form id="form" action="#">...AJAX Search I have to create a project where the user enters... using PHP and MYSQL as the database. Can somebody please suggest me the AJAX struts2 struts2 how to read properties file in jsp of struts2 Hi, You can use the Properties class of Java in your action class. Properties...(in); String p = pro.getProperty("key"); System.out.println(key + " : " + p); Read ajax doubdt ajax doubdt function getentityshortname(val) { alert("entered...); req.setRequestHeader('Content-Type','application/x-www-form-urlencoded...); //document.forms[0].action=url; //document.forms[0].submit Struts 1.2 and ajax example Struts 1.2 and ajax example Struts 1.2 and ajax example with data from database Registration - Ajax ;hi friend, registration form in jsp function checkEmail(email...; } User registration form...: -------------------------- read AJAX REGISTRATION FORM AJAX REGISTRATION FORM I have implemented user name, check and state selection using ajax, in the html registration form. I am loading two XML... either username or state. How to implement both in same registration form could any Struts2.2.1 Ajax div tag example. ; <action name="...Struts2.2.1 Ajax div tag example. In this section, we will introduce you to about the Ajax div tag. The div tag when used with Ajax refreshes the content Ajax validation - Ajax Ajax validation how to validate a form using Ajax and php  ... you. Please visit for more informaton: Thanks. Amarde example Ajax example Hi, Where can I find Ajax example program? Thanks Hi, Get it at: Ajax Tutorials First Ajax Example Ajax Login Example Thanks Struts 2 Login Form Example you can create Login form in Struts 2 and validate the login action...Struts 2 Login Form Example tutorial - Learn how to develop Login form.... Let's start developing the Struts 2 Login Form Example Step 1... the ajax wll be called and retrieve me the blocks under the selected district What is Ajax? JavaScript and other technologies such as CSS and XML. Read more at What is Ajax...What is Ajax? Hi, What is Ajax and what is use of Ajax in web programming? Thanks Hi, Ajax stands for AJAX stands for Asynchronous Struts 1.2.9 (NB6.1) ? Problems with depend <html:select> and AJAX - Struts select value as a parameter How I do it with AJAX?? Maybe JQuery? Sorry...Struts 1.2.9 (NB6.1) ? Problems with depend and AJAX Hi I have 2 and one is depend to the other The 1st select I fill it of the DB Struts 2 Session Scope ; In this section, you will learn to create an AJAX application in Struts2... is created with AJAX in Struts2 Framework. Before we start the things, we need..." namespace="/roseindia" extends="struts-default"> <action name AJAX Magazine AJAX Magazine AJAX blog focusing on new AJAX developments. Read full Description Lessons Ajax Lessons AjaxLessons.com is a resource for ajax tutorials as well as information surrounding Ajax and web 2.0. Read full Description Ajax Linki Ajax Linki Links - Ajax Contents, Books, Tutorials and everything about Ajax Read full Description Ajax Ajax send the example to fetch the data from the server by using ajax in java. for ex:-if there are states which is used to display in frontend we use ajax. send it to me The AJAX JSP Tag Library JavaScript to implement an AJAX-capable web form. Read full Description... The AJAX JSP Tag Library The AJAX JSP Tag Library is a set of JSP tags that simplify the use Ajax in jQuery Ajax in jQuery How ajax can be use in JQuery? Given below the code for ajax in jquery: $(document).ready(function() { $('#form').submit(function(){ var number = $('#number').val(); $.ajax({type:"post",url Struts 2.2.1 - Struts 2.2.1 Tutorial Configuring Actions in Struts application Login Form Application... Validators Login form validation example Struts 2.2.1 Tags Type... design pattern in Java technology. Struts 2.2.1 provides Ajax support Main function parameter in C language - Ajax Main function parameter in C language Please let me know how this Main(int argc,char*argv[]) works and detailed example on their roles in command line arguement.Also how is fgetpos() and fsetpos() used in file operation  Ajax Tutorial Ajax Tutorial Many Ajax Tutorials in a blog style Read full Description Aspects of AJAX Aspects of AJAX AJAX blog with loads of resources Read full Description AJAX Blog AJAX Blog Daily AJAX content with sreenshots and commentary. Read full Description AJAX Goals AJAX Goals AJAX site with forums, code samples, news and articles Read full Description AJAX Guru AJAX Guru AJAX blog by Kishore Read full Description AJAX Impact AJAX Impact Very nice AJAX Community with great references Read full Description AJAX Line AJAX Line AJAX Community with a blog, forum and tutorials Read full Description Ajax Links Ajax Links AJAX links, code samples and news Read full Description ajax ajax how to connect ajax with mysql without using php,asp or any other scripting language. please answer soon form submit in ajax ajax register form ajax form Register Ajax Camp Ajax Camp Ajax Camp is a community for learning, interacting, and asking questions about web-based development using Javascript and Ajax Read full Description struts imagelink - Ajax struts imagelink i need one dropdown menu in that i have to select image AND i need one button when i clicks on that image will be open AJAX World AJAX World [Google Group]AJAX discussion group with over a thousand members! Read full Description
http://www.roseindia.net/tutorialhelp/comment/99615
CC-MAIN-2014-23
en
refinedweb
After installing Snow Leopard, Pages will not open a saved document used in Leopard, due to the above error notice. Anyone else have this problem? (with hopefully a fix.) Thanks Intel IMac Core 2 Duo, Mac OS X (10.6) 1. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 3, 2009 6:34 AM in response to randyhatchAs I already asked in some other threads but it seems that every user want to have its own thread, this behavior resemble to a wrong install process. How have you installed the package? (1) using the installer (2) copying files from an old install or from a backup If you copied the files, I guess that you copied the iWork '09 folder of the Applications folder but not the iWork '09 folder containing shared items which must be available as: <startupVolume>:Library:Application Support:iWork '09: It is supposed to contain the folder "Frameworks" containing these folders: SFAnimation.framework SFArchiving.framework SFApplication.framework SFRendering.framework SFWebView.framework SFUtility.framework NumbersExtractorHeaders.framework SFLicense.framework SFInspectors.framework Inventor.framework SFDrawables.framework SFWordProcessing.framework SFTabular.framework SFCompatibility.framework SFCharts.framework SFProofReader.framework SFStyles.framework SFControls.framework MobileMe.framework Yvan KOENIG (VALLAURIS, France) jeudi 3 septembre 2009 15:34:19 2. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 3, 2009 6:58 AM in response to randyhatchThanks for the rebuff on starting a new thread, since I couldn't find a thread that addressed my specific problem. The problem only began after upgrading to Snow Leopard. Pages was working fine under Leopard. The error message points specifically to the SFWordProcessing plugin. All the files you list are in their appropriate places. Still not resolved. 3. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 3, 2009 7:30 AM in response to randyhatchYour problem under Snow Leopard: The same before Snow Leopard: Yvan KOENIG (VALLAURIS, France) jeudi 3 septembre 2009 16:30:10 4. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 3, 2009 8:03 AM in response to randyhatchYvan, Problem still not addressed adequately. It has been a long time since I have done a clean install of the OS to my hard drive. It looks as if it is time. 5. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 3, 2009 8:38 AM in response to randyhatchBefore doing a new system install, try to run Pages from an other user account. If it works, a system install is not required. If it doesn't, it is more complex. As I wrote in other threads (1) problems where reported with cache files (2) problems where reported with a few fonts (3) problems where reported with permissions Case (1) may require a re-install because at this time, tools able to rebuild caches are not updated for Snow. Cases (2) and (3) may be cured without a re-install. Of course I assume that you start checking if the problem is specific to Pages or if it strikes other apps like TextEdit for instance. Yvan KOENIG (VALLAURIS, France) jeudi 3 septembre 2009 17:38:03 6. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 3, 2009 9:45 AM in response to randyhatchYvan, Thanks for your responses. Pages and other programs are working fine. The problem seems to be with a large document - my personal journal - with imbedded photos. Other smaller documents, and starting a new document, work fine. Other user account doesn't solve the problem. 7. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 3, 2009 10:06 AM in response to randyhatchTry to send it to my mailbox. Click my blue name to get my address. Yvan KOENIG (VALLAURIS, France) jeudi 3 septembre 2009 19:06:14 8. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiMerged Content 1 Sep 7, 2009 9:33 PM in response to KOENIG YvanI am having precisely the same problem as RandyHatch, and I too find the other links provided by Yvan to be useless. In particular, I am using Pages 3.0.3 (part of iLife '08), which had been installed from the installation CD. After upgrading from 10.5.8 to 10.6, the my large 200-page file would no longer open in Pages. I have tried reinstalling iWork '08 to see if that would help fix the problem. No luck. I tried copying the file to my wife's computer, which has iLife '09 installed, and the document yielded the same error (albeit in a much shorter timespan---instantly rather than within 60s on 3.0.3). I would most appreciate a helpful response. This only seems to affect my 274kb text file, not my slightly smaller 250kb text file, leaving me to believe this is not an install issue or other user-caused issue like Yvan seems to imply. 9. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 8, 2009 3:28 AM in response to Merged Content 1It seems that you are not uptodate in reading my responses. I received Randy's file and for this unique file the problem is clearly an oddity linked to 10.6. If your document is readable, at this time, on a 10.5.8 machine and not on a 10.6 one, it's the 2nd example of the problem. It would be useful to get the two states of the document. May you attach them to a mail and send them to my mailbox. Click my bluename to get my address. Comparing the two documents, I would be able to insert some chunks of text allowing me to isolate what is the wrongdoer. Randy's doc is so huge that I can't do this kind of experiments on it. Switching it from one machine to the other one is too boring. With 274Kb it would be acceptable. Yvan KOENIG (VALLAURIS, France) mardi 8 septembre 2009 12:25:55 10. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiMerged Content 1 Sep 8, 2009 9:54 PM in response to KOENIG YvanWell, since this forum's admins apparently think an actual solution to a problem is less important than how the message is worded, I'll repost my solution to this problem... It turns out the problem is a font issue. Hoefler Text and large files apparently don't mix well, at least in this case. Perhaps the font became corrupted in the 10.6 installation? Changing the document to Times in the few seconds before the crash occurred seemed to fix the issue. Hopefully others having the same problem can use this tip to avoid stress, rather than be lectured in incoherent ramblings which only exacerbate aggravation. 11. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 9, 2009 5:17 AM in response to Merged Content 1Unfortunately, the document that I'm trying to use will not open in Snow Leopard at all. Therefore, I am unable to manipulate text or anything else. It is good to know that it is a font issue, but I can't solve my particular problem. I will probably need to return my OS to Leopard and wait for updates to Snow Leopard to fix the bug. 12. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugifruhulda Sep 9, 2009 7:48 AM in response to randyhatchHave you looked at your fonts in the Fontbook? In the File menu you can Validate fonts. If they are OK keep them, the other should be trashed. 13. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugirandyhatch Sep 9, 2009 9:10 AM in response to fruhuldaYes, to no avail. 14. Re: Why does Pages quit unexpectedly while using the SFWordProcessing plugiKOENIG Yvan Sep 9, 2009 10:14 AM in response to fruhulda fruhulda wrote: Have you looked at your fonts in the Fontbook? In the File menu you can Validate fonts. If they are OK keep them, the other should be trashed. The problem is linked to the official Hoefler font. At this time I am working on Randy's document. When it will be done, I will make systematic tests to determine the limit of document size pushing Pages to crash when the doc uses Hoefler on 10.6. During my work upon Randy's doc, I wrote a script allowing me to grab the name of embedded pictures with their native size and their displayed size. The grabbed infos may be used to resize the pictures out of Pages reducing drastically the document's size. In the given file, replacing huge jpegs by png files of the used size reduce seriously the doc's size with no loss on screen. Here is the script. -- -- [SCRIPT listOfPicturesWithSize] (* Save the script as Application (Application Bundle if you want to use it on a MacIntel) Store it on the Desktop. Drag and drop the icon of a Pages document on the script icon. You may also double click the script's icon. The script will build a list of embedded picture files. The list is available on the desktop in "listof_embeddedpictures.txt" Open it then copy its contents at end of the Pages document so that the page numbering will not be modified. If you want to put the list at the beginning, insert a page break to create a target blank page before running the script. Yvan KOENIG (Vallauris, FRANCE) 2009/09/09 *) property permitted4 : {"com.apple.iWork.pages.sffpages", "com.apple.iWork.pages.sfftemplate"} property permitted5 : {"com.apple.iwork.pages.pages", "com.apple.iwork.pages.template"} property permitted : permitted5 & permitted4 property nomDuRapport : "listof_embeddedpictures.txt" property typeID : missing value property wasFlat : missing value property listePages : {} property listeImages : {} property liste1 : {} --===== on run (* lignes exécutées si on double clique sur l'icône du script application • lines executed if one double click the application script's icon *) tell application "System Events" if my parleAnglais() then set myPrompt to "Choose an iWork’s document" else set myPrompt to "Choisir un document iWork" end if -- parleAnglais if 5 > (system attribute "sys2") then (* if Mac Os X 10.4.x *) set allowed to my permitted else (* it's Mac OS X 10.5.x with a bug with Choose File *) set allowed to {} end if -- 5 > (system… my commun(choose file with prompt myPrompt of type allowed without invisibles) (* un alias *) end tell end run --===== on open (sel) (* sel contient une liste d'alias des élémentsqu'on a déposés sur l'icône du script (la sélection) • sel contains a list of aliases of the items dropped on the script's icon (the selection) *) my commun(item 1 of sel) (* an alias *) end open --===== on commun(thePack) (* • thePack is an alias *) local pFold, pName, path2Index, texteXML my nettoie() set theTruePack to thePack as text tell application "System Events" to tell disk item theTruePack set typeID to (type identifier) as text set pFold to package folder set pName to name end tell if typeID is not in my permitted then if my parleAnglais() then error "“" & (docIwork as text) & "” is not a Pages document!" else error "«" & (docIwork as text) & "» n’est pas un document Pages !" end if -- my parleAnglais() end if -- typeID if pFold then set wasFlat to false else set wasFlat to true set thePack to my expandFlat(thePack) (* an alias *) log thePack end if try set path2Index to (thePack as text) & "Index.xml" set texteXML to my lis_Index(path2Index) (* here code to scan the text *) set line_feed to ASCII character (10) if texteXML contains line_feed then set texteXML to my remplace(texteXML, line_feed, return) if texteXML contains return then set texteXML to my remplace(texteXML, return & return, return) set texteXML to my supprime(texteXML, tab) set balise1 to "<sl:page-group sl:page=" & quote set balise2 to "<sf:media" set balise3 to "<sf:naturalSize sfa:w=" & quote set balise4 to "<sf:data sf:path=" & quote set balise5 to quote & "/>" set balise7 to "sf:size=" & quote if texteXML does not contain balise4 then if my parleAnglais() then error "The document “" & theTruePack & "” doesn’t embed picture !" number 8001 else error "Le document « " & theTruePack & " » ne contient pas d’image !" number 8001 end if else if texteXML contains balise1 then copy "nomImage" & tab & "page #" & tab & "naturalWidth" & tab & "naturalHeight" & tab & "trueWidth" & tab & "trueHeight" & tab & "fileSize (kBytes)" to end of my listeImages set my listePages to my decoupe(texteXML, balise1) repeat with i from 2 to count of my listePages set itmi to item i of my listePages set pageNum to item 1 of my decoupe(itmi, quote) set my liste1 to my decoupe(itmi, balise2) repeat with k from 2 to count of my liste1 set itmk to item k of my liste1 if itmk contains balise4 then -- on a une image set liste3 to my decoupe(itmk, balise3) set liste3 to my decoupe(item 2 of liste3, quote) set wNatural to item 1 of liste3 set hNatural to item 3 of liste3 set w to item 5 of liste3 set h to item 7 of liste3 set liste3 to my decoupe(itmk, balise4) set itmk to item 1 of my decoupe(item 2 of liste3, balise5) set nomImage to item 1 of my decoupe(itmk, quote) set fSize to item -1 of my decoupe(itmk, balise7) copy nomImage & tab & pageNum & tab & wNatural & tab & hNatural & tab & w & tab & h & tab & (fSize div 1024) to end of my listeImages end if set liste3 to {} end repeat end repeat end if -- balise4 set rapport to (my recolle(my listeImages, return)) as text set p2d to path to desktop set p2r to (p2d as Unicode text) & nomDuRapport tell application "System Events" if exists (file p2r) then delete (file p2r) make new file at end of p2d with properties {name:nomDuRapport} end tell -- "System Events" write rapport to (p2r as alias) if wasFlat then tell application "System Events" to delete file (thePack as text) -- delete temporary package tell application "Numbers" to open file p2r on error error_message number error_number my nettoie() if error_number is not -128 then my affiche(error_message) end try my nettoie() end commun --===== on nettoie() set typeID to missing value set wasFlat to missing value set my listePages to {} set my listeImages to {} set my liste1 to {} end nettoie --===== on affiche(msg) tell application "Finder" activate if my parleAnglais() then display dialog msg buttons {"Cancel"} default button 1 giving up after 120 else display dialog msg buttons {"Annuler"} default button 1 giving up after 120 end if -- parleAnglais() end tell -- application end affiche --===== on lis_Index(cheminXML0) local cheminXML0, cheminXMLgz, txtXML set cheminXMLgz to cheminXML0 & ".gz" tell application "System Events" if exists file cheminXMLgz then if exists file cheminXML0 then delete file cheminXML0 (* un curieux a pu dé-gzipper le fichier • someone may have gunzipped the file *) my expand(cheminXMLgz) set txtXML to my lisIndex_xml(cheminXML0) else if exists file cheminXML0 then set txtXML to my lisIndex_xml(cheminXML0) else if my parleAnglais() then error "Index.xml missing" else error "Il n'y a pas de fichier Index.xml" end if -- parleAnglais() end if -- exists file cheminXMLgz end tell -- to System Events return txtXML end lis_Index --===== on expand(f) do shell script "gunzip " & quoted form of (POSIX path of (f)) end expand --===== on lisIndex_xml(f) local t try set t to "" set t to (read file f) end try return t end lisIndex_xml --===== on expandFlat(f) (* f is an alias *) local zipExt, qf, d, nn, nz, fz, qfz tell application "Finder" to set newF to (duplicate f without replacing) as alias -- create a temporary item which will be changed as package set zipExt to ".zip" set qf to quoted form of POSIX path of newF tell application "System Events" tell disk item (newF as text) set d to path of container set nn to name set nz to nn & zipExt set fz to d & nz if exists disk item fz then set name of disk item fz to nn & my horoDateur(modification date of file fz) & zipExt set name to nz end tell -- disk item make new folder at end of folder d with properties {name:nn} end tell -- System Events set qfz to quoted form of POSIX path of fz do shell script "unzip " & qfz & " -d " & qf & " ; rm " & qfz return newF (* path to the temporary package *) end expandFlat --===== (* • Build a stamp from the modification date_time *) on horoDateur(dt) local annee, mois, jour, lHeure, lesSecondes, lesMinutes set annee to year of dt set mois to month of dt as number (* existe depuis 10.4 *) set jour to day of dt set lHeure to time of dt set lesSecondes to lHeure mod 60 set lHeure to round (lHeure div 60) set lesMinutes to lHeure mod 60 set lHeure to round (lHeure div 60) return "_" & annee & text -2 thru -1 of ("00" & mois) & text -2 thru -1 of ("00" & jour) & "-" & text -2 thru -1 of ("00" & lHeure) & text -2 thru -1 of ("00" & lesMinutes) & text -2 thru -1 of ("00" & lesSecondes) (* • Here, the stamp is "_YYYYMMDD-hhmmss" *) end horoDateur --===== on decoupe(t, d) local l set AppleScript's text item delimiters to d set l to text items of t set AppleScript's text item delimiters to "" return l end decoupe --===== on recolle(l, d) local t set AppleScript's text item delimiters to d set t to l as text set AppleScript's text item delimiters to "" return t end recolle --===== (* replaces every occurences of d1 by d2 in the text t *) on remplace(t, d1, d2) local l set AppleScript's text item delimiters to d1 set l to text items of t set AppleScript's text item delimiters to d2 set t to l as text set AppleScript's text item delimiters to "" return t end remplace --===== (* removes every occurences of d in text t *) on supprime(t, d) local l set AppleScript's text item delimiters to d set l to text items of t set AppleScript's text item delimiters to "" return (l as text) end supprime --===== on parleAnglais() local z try tell application "Pages" to set z to localized string "Cancel" on error set z to "Cancel" end try return (z is not "Annuler") end parleAnglais --===== -- [/SCRIPT] -- Yvan KOENIG (VALLAURIS, France) mercredi 9 septembre 2009 19:14:25
https://discussions.apple.com/thread/2141021?start=0&tstart=0
CC-MAIN-2014-23
en
refinedweb
Config::Fast - extremely fast configuration file parser # default config format is a space-separated file company "Supercool, Inc." support [email protected] # and then in Perl use Config::Fast; %cf = fastconfig; print "Thanks for visiting $cf{company}!\n"; print "Please contact $cf{support} for support.\n"; This module is designed to provide an extremely lightweight way to parse moderately complex configuration files. As such, it exports a single function - fastconfig() - and does not provide any OO access methods. Still, it is fairly full-featured. Here's how it works: %cf = fastconfig($file, $delim); Basically, the fastconfig() function returns a hash of keys and values based on the directives in your configuration file. By default, directives and values are separated by whitespace in the config file, but this can be easily changed with the delimiter argument (see below). When the configuration file is read, its modification time is first checked and the results cached. On each call to fastconfig(), if the config file has been changed, then the file is reread. Otherwise, the cached results are returned automatically. This makes this module great for mod_perl modules and scripts, one of the primary reasons I wrote it. Simply include this at the top of your script or inside of your constructor function: my %cf = fastconfig('/path/to/config/file.conf'); If the file argument is omitted, then fastconfig() looks for a file named $0.conf in the ../etc directory relative to the executable. For example, if you ran: /usr/local/bin/myapp Then fastconfig() will automatically look for: /usr/local/etc/myapp.conf This is great if you're really lazy and always in a hurry, like I am. If this doesn't work for you, simply supply a filename manually. Note that filename generation does not work in mod_perl, so you'll need to supply a filename manually. By default, your configuration file is split up on the first white space it finds. Subsequent whitespace is preserved intact - quotes are not needed (but you can include them if you wish). For example, this: company Hardwood Flooring Supplies, Inc. Would result in: $cf{company} = 'Hardwood Flooring Supplies, Inc.'; Of course, you can use the delimiter argument to change the delimiter to anything you want. To read Bourne shell style files, you would use: %cf = fastconfig($file, '='); This would let you read a file of the format: system=Windows kernel=sortof In all formats, any space around the value is stripped. This is one situation where you must include quotes: greeting=" Some leading and trailing space " Each configuration directive is read sequentially and placed in the hash. If the same directive is present multiple times, the last one will override any earlier ones. In addition, you can reuse previously-defined variables by preceding them with a $ sign. Hopefully this seems logical to you. owner Bill Johnson company $owner and Company, Ltd. website products $website/newproducts.html Of course, you can include literal characters by escaping them: price \$5.00 streetname "Guido \"The Enforcer\" Scorcese" verbatim 'Single "quotes" are $$ money @ night' fileregex '(\.exe|\.bat)$' Basically, this modules attempts to mimic, as closely as possible, Perl's own single and double quoting conventions. Variable names are case-insensitive by default (see KEEPCASE). In this example, the last setting of ORACLE_HOME will win: oracle_home /oracle Oracle_Home /oracle/orahome1 ORACLE_HOME /oracle/OraHome2 In addition, variables are converted to lowercase before being returned from fastconfig(), meaning you would access the above as: print $cf{oracle_home}; # /oracle/OraHome2 Speaking of which, an extra nicety is that this module will setup environment variables for any ALLCAPS variables you define. So, the above ORACLE_HOME variable will automatically be stuck into %ENV. But you would still access it in your program as oracle_home. This may seem confusing at first, but once you use it, I think you'll find it makes sense. Finally, if called in a scalar context, then variables will be imported directly into the main:: namespace, just like if you had defined them yourself: use Config::Fast; fastconfig('web.conf'); print "The web address is: $website\n"; # website from conf Generally, this is regarded as dangerous and bad form, so I would strongly advise using this form only in throwaway scripts, or not at all. There are several global variables that can be set which affect how fastconfig() works. These can be set in the following way: use Config::Fast; $Config::Fast::Variable = 'value'; %cf = fastconfig; The recognized variables are: The config file delimiter to use. This can also be specified as the second argument to fastconfig(). This defaults to \s+. If set to 1, then MixedCaseVariables are maintained intact. By default, all variables are converted to lowercase. If set to 1 (the default), then any ALLCAPS variables are set as environment variables. They are still returned in lowercase from fastconfig(). If set to 1, then settings that look like shell arrays are converted into a Perl array. For example, this config block: MATRIX[0]="a b c" MATRIX[1]="d e f" MATRIX[2]="g h i" Would be returned as: $conf{matrix} = [ 'a b c', 'd e f', 'g h i' ]; Instead of the default: $conf{matrix[0]} = 'a b c'; $conf{matrix[1]} = 'd e f'; $conf{matrix[2]} = 'g h i'; This allows you to pre-define var=val pairs that are set before the parsing of the config file. I introduced this feature to solve a specific problem: Executable relocation. In my config files, I put definitions such as: # Parsed by Config::Fast and sourced by shell scripts BIN="$ROOT/bin" SBIN="$ROOT/sbin" LIB="$ROOT/lib" ETC="$ROOT/etc" With the goal that this file would be equally usable by both Perl and shell scripts. When parsed by Config::Fast, I pre-define ROOT to pwd before calling fastconfig(): use Cwd; my $pwd = cwd; @Config::Fast::Define = ([ROOT => $pwd]); my %conf = fastconfig("$pwd/conf/core.conf"); Each element of This is a hash of regex patterns specifying values that should be converted before being returned. By default, values that look like true|on|yes will be converted to 1, and values that match false|off|no will be converted to 0. You could set your own conversions with: $Config::Fast::CONVERT{'fluffy|chewy'} = 'taffy'; This would convert any settings of "fluffy" or "chewy" to "taffy". Variables starting with a leading underscore are considered reserved and should not be used in your config file, unless you enjoy painfully mysterious behavior. For a much more full-featured config module, check out Config::ApacheFormat. It can handle Apache style blocks, array values, etc, etc. This one is supposed to be fast and easy. $Id: Fast.pm,v 1.7 2006/03/06 22:18:41 nwiger Exp $ This module is free software; you may copy this under the terms of the GNU General Public License, or the Artistic License, copies of which should have accompanied your Perl kit.
http://search.cpan.org/dist/Config-Fast/lib/Config/Fast.pm
CC-MAIN-2014-23
en
refinedweb
Data::PrettyPrintObjects - a pretty printing module with better support for objects This module is a fairly powerful pretty printer useful for printing out perl data structure in a readable fashion. The difference between this module and other data dumpers and pretty printers is that it can be configured to handle different types of references (or objects) in different ways, including using object methods to supply the printable value. If you have simple data structures without any blessed objects embedded in them, this module behaves similar to any other pretty printers. However, if you have objects embedded in them, this module is very useful for describing the data. Although modules such as Data::Dumper are often used for this purpose, this module is NOT a replacement for Data::Dumper, or any other similar module. Data::Dumper examines raw data (including printing out the full representation of an embedded object). A pretty printer, such as this one, is designed to print the data in a readable form, which may or may not mean displaying the raw data. As an example, if you have data structure which includes an Archive::Zip object, you may want the printable value of that object to be a list of all files in the archive, rather than a description of the Archive::Zip object. If you have a Date::Manip::Date object, you probably want the printable value to be a date contained in the object. For displaying a data structure, the structure is examined recursively, and turned into a string. The format of the string depends on the type of data and the options described below. Most of the time, a scalar is displayed exactly as it exists. If the scalar includes embedded quotes, commas, spaces, or newlines, it will be quoted. Embedded newlines will be emphasized by including '\n' in the string. This is not true perl quoting since embedded quotes will not be escaped. Embedded newlines will cause the output to be quoted, and an extra space added at the start of each line. For example: print PPO("a\nb\nc") => 'a b c' Note the leading extra space on the second and third lines. This is so printing out a multi-line scalar will correctly line up after quotes have been added. A list will be displayed as square brackets enclosing list elements. In other words: [ ELE1, ELE2, ... ELEN ] A has will be displayed as: { KEY1 => VAL1, KEY2 => VAL2, ... KEYN => VALN } Objects will typically be displayed using their scalar representation (i.e. what you get with the function scalar($object)), but this can be overridden using the options described below. Options may be set in one of two ways. They may be set in a file specified by the PPO_OptionsFile function, or they may be set by passing them to PPO_Options. The argument to PPO_Options is a hash containing option/value key pairs. The argument to PPO_OptionsFile is a file containing a YAML hash. The following keys are known: Each level of a data structure is indented a certain number of spaces relative per level. This defaults to 2, but this option can be used to change that. When displaying a list, the list_format option defines how it will be formatted. Possible values include: standard By default, a list is printed in a one per line format. In other words: [ a, b, c ] indexed This is one item per line with an index. In other words: [ 0: a, 1: b, 2: c ] In a nested data structure, the depth of a piece of data refers to how many levels deep it is nested. If max_depth is 0 (which is the default), all levels will be printed). For example, one data structure might be printed as: [ a, b, [ c, [ d ] ] ] (if max_depth were 0). In this example, 'a' and 'b' are both at depth 1, 'c' is at depth 2, and 'd' is at depth 3. Sometimes, you may only want to print out the top levels. By setting a max_depth to N, every scalar value (or object who's printable value is a scalar) who's depth is N or smaller will be printed out. It will not recurse into more deeply nested data structures, but instead will print them out using the max_depth_method described next. In this example, setting max_depth to 2 might result in the following output: [ a, b, [ c, ARRAY(0x111111) ] ] The format used to display the structures more deeply nested depend on the max_depth_method. When max_depth is set, structure that is more deeply nested than that depth are displayed in some method to indicated that the structure is there, but it is not recursed into to display the actual data contained there. The possible values for max_depth_method are: ref This is the default, and means to display the memory reference of the structure. For example, an array reference would be displayed: ARRAY(0x111111) and an object with a non-scalar printable value would include the class, so an Archive::Zip object (who's printable value might be defined to be a list of files contained in the archive) might be: Archive::Zip=HASH(0x15c8e50) If the printable value of an object is a scalar, it will be printed using the methods defined for that object. type This is a simpler version when you are only interested in seeing the type of structure/object but not the memory reference. They might be displayed as: ARRAY Archive::Zip If a data structure has circular references, or structure/objects embedded in it multiple times, there are different ways to display it. For example, if you have the code: $a = [1]; $d1 = [$a,$a] $d2 = []; push(@$d2,2,$d2); the structures '$d1' and '$d2' will be displayed depending on the value of the duplicates option. The value may be one of the following: link This is the default. In this case, the first occurence of a data structure is displayed normally, and the second (or higher) occurence is listed as a link to the first one. '$d1' would be printed as: [ [ 1 ], $VAR->[0] ] and '$d2' would be printed as: [ 2, $VAR ] reflink This adds memory references to all duplicates. So the '$d1' and '$d2' would be displayed as: [ ARRAY(0x111111) [ 1 ], ARRAY(0x111111) $VAR->[0] ] and ARRAY(0x111111) [ 2, ARRAY(0x111111) $VAR ] ref This simply prints second (or higher) occurrences as memory references (but doesn't indicate what it duplicates): [ [ 1 ], ARRAY(0x111111) ] and [ 2, ARRAY(0x111111) ] The objs option is used to set the options for each type of object. The value of this is a hash described in the OBJECT OPTIONS section below. The value of the objs option is a hash. The keys in this hash are the full names for various objects. The value for each entry is a hash containing the options for that object. For example, to set options for displaying an Archive::Zip object, you would need to pass in the following to the PPO_Options function: %obj_opts = ( 'Archive::Zip' => { OPT => VAL, OPT => VAL, ... } ); PPO_Options(..., objs => \%obj_opts ); The object options include the following: This tells how the printable value of an object should be obtained. Values can be: ref The object will be printed out as a reference: Archive::Zip(0x111111) This is the default method. method If this is passed in, the value is a string which is a method name that can be used to return the printable value. In other words, if $obj is an object, the printable value is obtained by calling: $obj->METHOD(ARGS) where METHOD is the value of the B<func> option, and ARGS is the value of the B<args> option. The arguments are passed unmodified. func This can either be the name of a function, or a function reference. The printable value for the object is obtained by calling: &FUNC(ARGS) where FUNC is the value of the B<func> option and ARGS is the value of the B<args> option. Exactly one of the ARGS should be the literal string '$OBJ' which will be replaced with the actual object. FUNC is looked for in the namespace of the caller, the namespace of the object, and the main namespace (in that order). data This treats the object as a data structure and displays it. This is the name of the method or function used to get the printable value of an object. It must be defined if print is 'method' or 'func'. There is no default value. This is a list of arguments to pass to the method or function. This is only used if the value of the print option is method or func. The output from the method/function will be treated as a scalar by default, but if this is set to any of the following, the output will be treated as that type of structure: scalar list hash If the return value is a scalar that is a reference, it will be displayed using the rules for that type of data. If this option is set to a non-zero value, the reference will be output along with the printable value. For example, if the object is an Archive::Zip object, and (using the method or func method) the printable value is defined to be the list of files, the printable version will be either: [ file1, file2 ] or Archive::Zip(0x111111) [ file1, file2 ] The second will be used if this is non-zero. This option is ignored if the print method is 'ref'. use Data::PrettyPrintObjects; Options(%options); OptionsFile($file); This sets any of the options described above. Any options already set which are not included in the %options argument are left unmodified. This does not hold true for the object options. If you set the object options for a type of object, it overrides completely all options previously set for that type of object. Any file passed in to OptionsFile must be a valid YAML file containing an %options hash. $string = PPO($var); This formats $var (which can be any type of data structure) into a printable string. None known. Please send bug reports to the author. This script is free software; you can redistribute it and/or modify it under the same terms as Perl itself. Sullivan Beck ([email protected])
https://metacpan.org/pod/Data::PrettyPrintObjects
CC-MAIN-2014-23
en
refinedweb
Encode::MAB2 - Das Maschinelle Austauschformat fuer Bibliotheken This module and all the accompanying modules are to be regarded as ALPHA quality software. That means all interfaces, namespaces, functions, etc. are still subject to change. use Encode::MAB2; my $mab2 = 'Some string in MAB2 encoding'; my $unicode = Encode::decode('MAB2',$mab2); The Encode::MAB2 module works on the string level abstraction of MAB2 records. You can feed it a string in the encoding used by MAB2 and get a Unicode string back. The module only works in one direction, it does not provide a way to convert a Unicode string back into MAB2 encoding. MAB2 is a German library data format and an encoding almost completely based on ASCII and ISO 5426:1983. On 2003-09-08 Die Deutsche Bibliothek published, the first official document that maps the MAB2 encoding to Unicode 4.0. The mapping provided by this module follows this publication. See below for small additional convenience tricks that are also implemented by the module to avert common errors. ALERT: USE AT YOUR OWN RISK You are responsible to determine applicability of information provided. Besides the above mentioned mab_unic.pdf, the following documents provided invaluable help in developing the mapping presented in this module: This module uses the module Unicode::Normalize to deliver the combining characters in the MAB2 record in normalization form C. We have taken precautions against common errors in MAB records: This module comes with 6 modules that alleviate the parsing of MAB records. The modules are the following: MAB2::Record::Base MAB2::Record::gkd MAB2::Record::lokal MAB2::Record::pnd MAB2::Record::swd MAB2::Record::titel where Base is based on the file segm000.txt and each of the others is based on the according textfile in the directory on the server of Die Deutsche Bibliothek. More documentation can be found in MAB2::Record::Base. In addition to that, there are two tie interfaces available: Tie::MAB2::Recno and Tie::MAB2::Id. These are the high-level access classes for MAB2 files that use all other modules presented in the package.
http://search.cpan.org/dist/MAB2/lib/Encode/MAB2.pm
CC-MAIN-2014-23
en
refinedweb
Journal: Not much *Phew* Just got a SSL site up on charon so I can check mail. Its a bit harder when you have to write your own Apache config but you do end up knowing more Well here I am typing to you via the brand new charon (actually just a new HD). Debian is no more and Gentoo is here to stay! Its testament to the power of Linux that I can get the server up and running to mostly its original specs within a day. So now I'm a totally Debian-free shop.. *sigh* Charon (my server box) was up and down last night. What I initially put down to hardware failure actually appears to be driver/kernel related. Under heavy load the wifi driver managed to hang the networking stack requiring (since the box is headless) a hard-restart That aside, Charon seems happier now with the new drivers/kernel + a bit of manual fscking/restoring the disks but I've decided I've had enough of Debian. Although wonderful when it Just Works, you have to live with some parts of unstable just to get things working Aside from that, I also posted a request for help to the mono list wrt Perl#. Reproduced below for posterity Hi, I was wondering if anyone on this list had a Windows box + Mono + Perl that could try out my Perl interpreter bindings for C#[1]. Basically I wanted access to all the useful utilities out there on CPAN[2] that aren't yet available in managed code or in the standard class libraries. The distribution gives a sample IMAP client using Mail::IMAPClient. It works on Linux (Gentoo & SuSE 9) and seems to work OK on OS X with a little munging of the library names and modulo mono-wierdness wrt exceptions (yes I know about mint...). I'd ideally like it to work on Windows as well, then it could be a truly cross-platform way of getting lots of functionality into C# for 'free'. I've based the build system on Gtk#. For those that just want to look at how easy it is to embed Perl using the powerful features of C# take a look at the IMAP example[3]. C# really does make this nice, a lot nicer than the Perl C API! For example Scalars are auto-boxed to the approriate fundamental types (int, double, string). [1] [2] [3] -- Rich More wodges of code now. This nicely shows the autoboxing of Scalars and actually does something useful that you namespace PerlEmbedExamples { using System; using Perl; class IMAPExample { public static void Main(string[] args) { Console.WriteLine("Embedding Perl within Mono - IMAP Example"); Console.WriteLine("(C) 2004 Rich Wareham\n"); Interpreter interpreter = new Perl.Interpreter(); interpreter.Embed(); try { interpreter.Require("Mail::IMAPClient"); Scalar client = interpreter.CallClassMethod("Mail::IMAPClient", "new", "Server", "charon", "User", "rjw57", "Password", "XXXXXXXXX"); if(!client) { Console.WriteLine("Could not log into server."); return; } Scalar[] folders = interpreter.CallMethod(client, "subscribed", Interpreter.CallFlags.ArrayContext); Console.WriteLine("Subscribed to {0} folders:", folders.Length); foreach(string folder in folders) { Console.WriteLine(" -> {0}", folder); } } catch ( PerlException e ) { Console.WriteLine("Error was thrown: {0}", e.Message); } } } } Well I've been playing with embedding Perl within C# and have managed to write the first 'useful' program. The following is a re-write of the LWP simple test program in C#. namespace PerlEmbedExamples { using System; using Perl; class LWPExample { public static void Main(string[] args) { Console.WriteLine("Embedding Perl within Mono - LWP Example"); Console.WriteLine("(C) 2004 Rich Wareham <[email protected]>\n"); Interpreter interpreter = new Perl.Interpreter(); interpreter.Embed(); try { interpreter.Require("LWP::UserAgent"); Scalar ua = interpreter.CallClassMethod("LWP::UserAgent", "new"); interpreter.CallMethod(ua, "agent", "Mozilla/6.0"); Scalar req = interpreter.CallClassMethod("HTTP::Request", "new", "GET", ""); Scalar res = interpreter.CallMethod(ua, "request", req); if(interpreter.CallMethod(res, "is_success")) { Scalar content = interpreter.CallMethod(res, "content"); Console.WriteLine("Success!"); Console.WriteLine("Content:\n{0}", content); } else { Console.WriteLine("Request failed."); } } catch ( Exception e ) { Console.WriteLine("Error was thrown: {0}", e.Message); } } } } You may well ask "Why bother?". Well actually I've been wanting to write some code in C# for a while but there just aren't enough utility libraries/classes about under sufficiently good licenses. I could do a load of P/Invoke magic to make use of C-libraries but that invoves a) recompiling on each platform and b) only using C-libraries available on multiple platforms. Using Perl gives C# immediate access to the vast wealth of modules on CPAN. This makes wrapper classes around Perl modules pretty easy to implement....... Gig last naight at Homerton went OK. It was odd working with people we haven't worked with before - they are so unreliable. Right in the middle of '3 words' there was the biggest block I have ever come accross The audience seemed to like it and it all managed to come off OK in the end. Jennie and I commented to each other how snobby we have become, I guess we assume everyone is going to be as professional as Rich and Alex. Sarah was amazingly thespy at the end and gave everyone cards. Good god! Marisa, Ashleigh, Jenni and James were there and seemed to enjoy themselves. James said he actually preferred it to Improwar. Of course I keep going over all the mistakes in my head but I reckon its fine really. Two supervisions today which I'm doing the work for now. Should be OK. Jennie didn't get the ADC job either.... Got the proposed running order for tomorrow's improv show through. Its a wee bit odd, both in pace and assignments, but then I reckon that because Sarah hasn't planned, or appeared in, an improv show with us before. There appear to be some stalling points on odd choices of games but seems OK. Will be playing with some people we haven't worked with much before which should be exciting... "Those who will be able to conquer software will be able to conquer the world." -- Tadahiro Sekimoto, president, NEC Corp.
http://slashdot.org/~rjw57/journal/
CC-MAIN-2014-23
en
refinedweb
Source south / docs / customfields.rst Custom Fields The Problem South stores field definitions by storing both their class and the arguments that need to be passed to the field's constructor, so it can recreate the field instance simply by calling the class with the stored arguments. However,. This isn't the case for custom fields [1], however; South has never seen them before, and it can't guess at which variables mean what arguments, or what arguments are even needed; it only knows the rules for Django's internal fields and those of common third-party apps (those which are either South-aware, or which South ships with a rules module for, such as django-tagging). The Solution There are two ways to tell South how to work with a custom field; if it's similar in form to other fields (in that it has a set type and a few options) you'll probably want to :ref:`extend South's introspection rules <extending-introspection>`. However, if it's particularly odd - such as a field which takes fields as arguments, or dynamically changes based on other factors - you'll probably find it easier to :ref:`add a south_field_triple method <south-field-triple>`. Extending Introspection (Note: This is also featured in the tutorial in :ref:`tutorial-part-4`) South does the majority of its field introspection using a set of simple rules; South works out what class a field is, and then runs all rules which have been defined for either that class or a parent class of it. This way, all of the common options (such as null=) are defined against the main Field class (which all fields inherit from), while specific options (such as max_length) are defined on the specific fields they apply to (in this case, CharField). If your custom field inherits from a core Django field, and doesn't add any new attributes, then you probably won't have to add any rules for it, as it will inherit all those from its parents. However, South first checks that it has explicitly been told a class is introspectable first; even though it will probably have rules defined (since it inherits from Field, at least), there's no way to guarantee that it knows about all of the possible rules until it has been told so. Thus, there are two stages to adding support for your custom field to South; firstly, adding some rules for the new arguments it introduces (or possibly not adding any), and secondly, adding its field name to the list of patterns South knows are safe to introspect. Rules Rules are what make up the core logic of the introspector; you'll need to pass South a (possibly empty) list of them. They consist of a tuple, containing: - A class or tuple of classes to which the rules apply (remember, the rules apply to the specified classes and all subclasses of them). - Rules for recovering positional arguments, in order of the arguments (you are strongly advised not to use this feature, and use keyword argument instead). - A dictionary of keyword argument rules, with the key being the name of the keyword argument, and the value being the rule. Each rule is itself a list or tuple with two elements: - The first element is the name of the attribute the value is taken from - if a field stored its max_length argument as self.max_length, say, this would be "max_length". - The second element is a (possibly empty) dictionary of options describing the various different variations on handling of the value. An example (this is the South rule for the many-to-one relationships in core Django): rules = [ ( (models.ForeignKey, models.OneToOneField), [], { "to": ["rel.to", {}], "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}], "related_name": ["rel.related_name", {"default": None}], "db_index": ["db_index", {"default": True}], }, ) ] You'll notice that you're allowed to have dots in the attribute name; ForeignKeys, for example, store their destination model as self.rel.to, so the attribute name is "rel.to". The various options are detailed below; most of them allow you to specify the default value for a parameter, so arguments can be omitted for clarity where they're not necessary. The one special case is the is_value keyword; if this is present and True, then the first item in the list will be interpreted as the actual value, rather than the attribute path to it on the field. For example: "frozen_by_south": [True, {"is_value": True}], Parameters - default: The default value of this field (directly as a Python object). If the value retrieved ends up being this, the keyword will be omitted from the frozen result. For example, the base Field class' "null" attribute has {'default':False}, so it's usually omitted, much like in the models. - default_attr: Similar to default, but the value given is another attribute to compare to for the default. This is used in to_field above, as this attribute's default value is the other model's pk name. - default_attr_concat: For when your default value is even more complex, default_attr_concat is a list where the first element is a format string, and the rest is a list of attribute names whose values should be formatted into the string. - ignore_if: Specifies an attribute that, if it coerces to true, causes this keyword to be omitted. Useful for db_index, which has {'ignore_if': 'primary_key'}, since it's always True in that case. - ignore_dynamics: If this is True, any value that is "dynamic" - such as model instances - will cause the field to be omitted instead. Used internally for the default keyword. - is_value: If present, the 'attribute name' is instead used directly as the value. See :ref:`above <is-value-keyword>` for more info. Field name patterns The second of the two steps is to tell South that your field is now safe to introspect (as you've made sure you've added all the rules it needs). Internally, South just has a long list of regular expressions it checks fields' classes against; all you need to do is provide extra arguments to this list. Example (this is in the GeoDjango module South ships with, and presumes rules is the rules triple you defined previously): from south.modelsinspector import add_introspection_rules add_introspection_rules(rules, ["^django\.contrib\.gis"]) Additionally, you can ignore some fields completely if you know they're not needed. For example, django-taggit has a manager that actually shows up as a fake field (this makes the API for using it much nicer, but confuses South to no end). The django-taggit module we ship with contains this rule to ignore it: from south.modelsinspector import add_ignored_fields add_ignored_fields(["^taggit\.managers"]) Where to put the code You need to put the call to add_introspection_rules somewhere where it will get called before South runs; it's probably a good choice to have it either in your models.py file or the module the custom fields are defined in. General Caveats If you have a custom field which adds other fields to the model dynamically (i.e. it overrides contribute_to_class and adds more fields onto the model), you'll need to write your introspection rules appropriately, to make South ignore the extra fields at migration-freezing time, or to add a flag to your field which tells it not to make the new fields again. An example can be found here. south_field_triple There are some cases where introspection of fields just isn't enough; for example, field classes which dynamically change their database column type based on options, or other odd things. Note: :ref:`Extending the introspector <extending-introspection>` is often far cleaner and easier than this method. The method to implement for these fields is south_field_triple(). It should return the standard triple of: ('full.path.to.SomeFieldClass', ['positionalArg1', '"positionalArg2"'], {'kwarg':'"value"'}) (this is the same format used by the :ref:`ORM Freezer <orm-freezing>`; South will just use your output verbatim). Note that the strings are ones that will be passed into eval, so for this reason, a variable reference would be 'foo' while a string would be '"foo"'. Example Here's an example of this method for django-modeltranslation's TranslationField. This custom field stores the type it's wrapping in an attribute of itself, so we'll just use that: def south_field_triple(self): "Returns a suitable description of this field for South." # We'll just introspect the _actual_ field. from south.modelsinspector import introspector field_class = self.translated_field.__class__.__module__ + "." + self.translated_field.__class__.__name__ args, kwargs = introspector(self.translated_field) # That's our definition! return (field_class, args, kwargs)
https://bitbucket.org/spookylukey/south/src/69b4986003d6/docs/customfields.rst?at=0.7
CC-MAIN-2014-23
en
refinedweb
Control.Concurrent.Broadcast Description A Broadcast is a mechanism for communication between threads. Multiple wait until a broadcaster listeners a value. The listeners block until the value is received. When the broadcaster broadcasts a value all listeners are woken. broadcasts All functions are exception safe. Throwing asynchronous exceptions will not compromise the internal state of a broadcast. This module is designed to be imported qualified. We suggest importing it like: import Control.Concurrent.Broadcast ( Broadcast ) import qualified Control.Concurrent.Broadcast as Broadcast ( ... ) Synopsis - data Broadcast α - new :: IO (Broadcast α) - newBroadcasting :: α -> IO (Broadcast α) - listen :: Broadcast α -> IO α - tryListen :: Broadcast α -> IO (Maybe α) - listenTimeout :: Broadcast α -> Integer -> IO (Maybe α) - broadcast :: Broadcast α -> α -> IO () - signal :: Broadcast α -> α -> IO () - silence :: Broadcast α -> IO () Documentation A broadcast is in one of two possible states: Instances Creating broadcasts newBroadcasting :: α -> IO (Broadcast α)Source newBroadcasting x Creates a broadcast in the "broadcasting x" state. Listening to broadcasts listen :: Broadcast α -> IO αSource tryListen :: Broadcast α -> IO (Maybe α)Source listenTimeout :: Broadcast α -> Integer -> IO (Maybe α)Source Listen to a broadcast if it is available within a given amount of time. Like listen, but with a timeout. A return value of Nothing indicates a timeout occurred. The timeout is specified in microseconds. If the broadcast is "silent" and a timeout of 0 μs is specified the function returns Nothing without blocking. Negative timeouts are treated the same as a timeout of 0 μs. Broadcasting broadcast :: Broadcast α -> α -> IO ()Source signal :: Broadcast α -> α -> IO ()Source Broadcast a value before becoming "silent". The state of the broadcast is changed to "silent" after all threads that are to the broadcast are woken and resume with the signalled value. listening The semantics of signal are equivalent to the following definition: signal b x = block$ broadcastb x >> silenceb
http://hackage.haskell.org/package/concurrent-extra-0.5/docs/Control-Concurrent-Broadcast.html
CC-MAIN-2014-23
en
refinedweb
Hi, At 12:29 19/1/01 +0100, Paulo Gaspar wrote: >> -----Original Message----- >> From: Craig R. McClanahan [mailto:[email protected]] >> Sent: Thursday, January 18, 2001 23:04 >> >> One appropriate question to ask yourself, when comparing, is >> "what does having 15 >> entry points give me that I cannot get with a single entry point >> approach"? If >> there is nothing significant, then it would seem cleaner to rely on a >> simpler-to-understand approach. > >That is a problem also found in several parts of business application >frameworks - those things that help you building a big User Interface >to manipulate and extract information from a big Database. > >You also find the same questions over GUI Frameworks - like Delphi's >VCL or Java's Swing. And in database interface libraries... > >In all of these you find events (Hooks) named "onThis", "beforeThat" >and "afterSomethingElse". And all this frameworks are built using >Object Oriented Programming Techniques. I think you are mixing concerns here. I have worked with multi entry point interceptors (what you call hooks) before and I *believe* that there are different levels of organisation that have to be examined. You mention GUIs here so I will address them here. For instance consider javas GUI awt. Under windows it gets it's events from a multitude of different sources (some are grabbed from eventqueue, others from win32 hooks and others are application created) then routes them through a central message queue and central dispatching model. In many ways the way it works is similar to hardwired valve style implementation. It is only at the upper layers where it is transformed into the preReleaseMouseButtonEventHookLudicrouselySizedMethod() and equivelent postRelease*(). I agree that it is sooooooo much easier on the developer to do things this way in the common approach. It becomes difficult in *certain* circumstances but in most cases it is easier. You will see that moderately complex hand-crafted GUIs also move in this direction by routing them via a central bus and dispatching them in a more friendly way higher up in the framework. The advantage of hooks is also the speed of development - ie it takes about 1/5 of the time to develope a hook based solution rather than a general solution. The general solution first has to be general and then it has to layer specificity on top of that (see below) which is even more work. >The advantage of Hooks is that the programmer is only exposed to the >very narrow complexity of a very specific event. The framework takes >care of the rest. Right but generally the good frameworks go specificity -> general -> specificty So your GUI events functionality is provided by classes the extend the base functionality in a OO way. In the case of Tomcat you would do it differently - for hook "foo()" you would instead create a base valve AbstractFooValve and repeat this for all valves. (Yes thats a damn lot of work!!!) >When you build your own thing that you put in a logic/data pipe, >sometimes you have to understand a lot more about the inner working >of the framework in order not to screw anything. agreed. Thats why AbstractFooValve is near essential ;) The only reason to choose Valves is for performance and flexability. You pay for it in container dev time and initial dev complexity. But if you see Tomcat4.x still being operational 2 years from release within a wide variety of purposes then it is well worth it. I am not saying that Catalinas concept of a valve is completely correct (it uses the Anti-Pattern Subvertion of Control - yuck !!) but it is definetly a step in the right direction. Personally if I was doing it then I would implement Inversion of Control and your valve interface would be as simple as below but I think Valves as they exist are a step forward thou YMMV. public interface Valve { public void invoke(ValveContext context, Request request, Response response) throws IOException, ServletException; } public class MyValve implements Valve { public void invoke(ValveContext context, Request request, Response response) throws IOException, ServletException { ...do stuff with request... context.invokeNext( request, response ); ...dostuff with response... } } Cheers, Pete *------------------------------------------------------* | "Computers are useless. They can only give you | | answers." - Pablo Picasso | *------------------------------------------------------*
http://mail-archives.apache.org/mod_mbox/tomcat-dev/200101.mbox/%[email protected]%3E
CC-MAIN-2014-23
en
refinedweb
Exceptions and Stack Unwinding in C++ In the C++ exception mechanism, control moves from the throw statement to the first catch statement that can handle the thrown type. When the catch statement is reached, all of the automatic variables that are in scope between the throw and catch statements are destroyed in a process that is known as stack unwinding. In stack unwinding, execution proceeds as follows: Control reaches the try statement by normal sequential execution. The guarded section in the try block is executed. If no exception is thrown during execution of the guarded section, the catch clauses that follow the try block are not executed. Execution continues at the statement after the last catch clause that follows the associated try block. If an exception is thrown during execution of the guarded section or in any routine that the guarded section calls either directly or indirectly, an exception object is created from the object that is created by the throw operand. (This implies that a copy constructor may be involved.) At this point, the compiler looks for a catch clause in a higher execution context that can handle an exception of the type that is thrown, or for a catch handler that can handle any type of exception. The catch handlers are examined in order of their appearance after the try block. If no appropriate handler is found, the next dynamically enclosing try block is examined. This process continues until the outermost enclosing try block is examined. If a matching handler is still not found, or if an exception occurs during the unwinding process but before the handler gets control, the predefined run-time function terminate is called. If an exception occurs after the exception is thrown fully constructed—but not yet destructed—between the beginning of the try block that is associated with the catch handler and the throw site of the exception. Destruction occurs in reverse order of construction. The catch handler is executed and the program resumes execution after the last handler—that is, at the first statement or construct that is not a catch handler. Control can only enter a catch handler through a thrown exception, never through a goto statement or a case label in a switch statement. The following example demonstrates how the stack is unwound when an exception is thrown. Execution on the thread jumps from the throw statement in C to the catch statement in main, and unwinds each function along the way. Notice the order in which the Dummy objects are created and then destroyed as they go out of scope. Also notice that no function completes except main, which contains the catch statement. Function A never returns from its call to B(), and B never returns from its call to C(). If you uncomment the definition of the Dummy pointer and the corresponding delete statement, and then run the program, notice that the pointer is never deleted. This shows what can happen when functions do not provide an exception guarantee. For more information, see How to: Design for Exceptions. If you comment out the catch statement, you can observe what happens when a program terminates because of an unhandled exception. #include <string> #include <iostream> using namespace std; class MyException{}; class Dummy { public: Dummy(string s) : MyName(s) { PrintMsg("Created Dummy:"); } Dummy(const Dummy& other) : MyName(other.MyName){ PrintMsg("Copy created Dummy:"); } ~Dummy(){ PrintMsg("Destroyed Dummy:"); } void PrintMsg(string s) { cout << s << MyName << endl; } string MyName; int level; }; void C(Dummy d, int i) { cout << "Entering FunctionC" << endl; d.MyName = " C"; throw MyException(); cout << "Exiting FunctionC" << endl; } void B(Dummy d, int i) { cout << "Entering FunctionB" << endl; d.MyName = "B"; C(d, i + 1); cout << "Exiting FunctionB" << endl; } void A(Dummy d, int i) { cout << "Entering FunctionA" << endl; d.MyName = " A" ; // Dummy* pd = new Dummy("new Dummy"); //Not exception safe!!! B(d, i + 1); // delete pd; cout << "Exiting FunctionA" << endl; } int main() { cout << "Entering main" << endl; try { Dummy d(" M"); A(d,1); } catch (MyException& e) { cout << "Caught an exception of type: " << typeid(e).name() << endl; } cout << "Exiting main." << endl; char c; cin >> c; } /* Output: Entering main Created Dummy: M Copy created Dummy: M Entering FunctionA Copy created Dummy: A Entering FunctionB Copy created Dummy: B Entering FunctionC Destroyed Dummy: C Destroyed Dummy: B Destroyed Dummy: A Destroyed Dummy: M Caught an exception of type: class MyException Exiting main. */
http://msdn.microsoft.com/en-us/library/hh254939(v=vs.120).aspx
CC-MAIN-2014-23
en
refinedweb
PrintCapabilities Class Defines the capabilities of a printer. System.Printing.PrintCapabilities Namespace: System.PrintingNamespace: System.Printing Assembly: ReachFramework (in ReachFramework.dll) The PrintCapabilities type exposes the following members. null. The PrintCapabilities document must conform to the Print Schema. The PrintCapabilities class enables your application to obtain a printer's capabilities without having to engage in any direct reading of XML Stream objects. All of the most popular features of file and photo printers, for both home and business, are encapsulated by the PrintCapabilities.. If you want to print from a Windows Forms application, see the System.Drawing.Printing namespace. The following example shows how to determine the capabilities of a specific printer and how.
http://msdn.microsoft.com/en-us/library/ms586425.aspx
CC-MAIN-2014-23
en
refinedweb
14 February 2013 19:16 [Source: ICIS news] TORONTO (ICIS)--Rio Tinto is taking its upgraded slag (UGS) plant at Sorel-Tracy, northeast of Montreal, offline as it responds to weak titanium dioxide (TiO2) demand, the Anglo-Australian miner said on Thursday. Rio Tinto did not disclose for how long the plant would be offline and did not comment on the unit's current capacity. However, Rio Tinto said that with improving economic conditions in ?xml:namespace> The UGS plant produces raw material for the titanium pigment industry that uses the chloride process, according to information on Rio Tinto’s website. Last week, Rio Tinto’s TiO2 business unit, Rio Tinto Iron & Titanium, said it was halting plans for a new facility to produce TiO2 feedstocks
http://www.icis.com/Articles/2013/02/14/9641213/rio-tinto-takes-quebec-slag-plant-offline-on-weak-tio2.html
CC-MAIN-2014-23
en
refinedweb
Here’s a disclaimer: I avoid Linux and am no Linux expert by any means. I shudder at the thought of black screens with a flashing cursor. I find myself moving my mouse around trying to find an icon to click or a menu to select. It’s from such a perspective that this article will demonstrate how anyone (even I!) can get Mono up and running on a fresh, clean Linux box. I’ll walk through my experience of installing the package, and discuss all the components needed to run Mono. What Is. What’s Usable in Mono? Currently, Mono is in development, but the project has reached some significant milestones that make it suitable and stable enough for deployment today. The C# compiler is the only fully featured compiler for Mono. Yes, that’s right: VB.NET and J# developers will need to switch to C# to fully use Mono. Watch the progress of the VB.NET compiler here. The compiler itself is written in C#. You can download all 1.7 million lines of C# code and compile this yourself, or, as we’ll see shortly, you can use one of the many distributions to ease the installation process. ASP.NET is fully-featured and supported within Mono. In fact, it’s the great strength of Mono at present. You can build and deploy both Web forms and Web services using either the built-in Web server that ships with Mono (XSP) or through an Apache modification, Mod Mono. For those who are uncomfortable using Windows and IIS to host applications, Mono provides a viable alternative. Windows Forms is currently under development, but functionality is progressing. Though complex WinForm applications, such as Web Matrix, are currently unavailable, there are alternatives to WinForms in Mono that build GUI applications. Gtk# and #WT are wrappers to the popular GUI tools on Linux. WinForms itself is being built as a library extension to Wine, the Win32 implementation on Linux. The project’s progress is documented here. ADO.NET and the System.Data classes are also fairly mature; however, they aren’t at production level at the time of writing. This is one of the largest projects within Mono, and at present you can connect to a wide variety of databases. Native support is provided for many of the databases usually associated with Linux, such as PostgreSQL. Mono has successfully been used in commercial and heavy-use applications. Novell used the tool to build two of its products, iFolder and ZenWorks. Also, SourceGear has used Web services deployed in Mono within its Vault application. Getting Started: Download Mono Mono is available in many packages. You can download the latest source code build and a daily snapshot source code build, through CVS, an RPM package, or a Red Carpet install for those with Xiamian Desktop. By far the easiest to use is the Red Carpet system, which, while similar to RPM, offers good versioning control, allowing you to upgrade Mono on your machine very easily. The Mono download page details the various packages and how they can be downloaded, as well as specific packages for specific varieties of Linux. I downloaded the Mono Runtime tarball and used the following command to unpack the distribution: tar -zxvf mono-1.0.tar.gz Once the tarball was extracted, I could start the installation process using: ./configure make make install It was at this point I realised I needed to upgrade pkg-config on the system as the installation spat out some errors. I found the RPM distribution for this, and installed it using the following command: rpm -Uvh pkgconfig-0.15.0-4mdk.i586.rpm The make process now worked without a hitch. Hello World in C# Running on Linux It's always a good thing to use the cliched "Hello World" example to test an installation! Open your favourite Linux text editor (I used vi) and enter the following simple C# application code: public class HelloWorld { static void Main() { System.Console.WriteLine("Hello World!"); } } Save this file as HelloWorld.cs and compile the class with the Mono C# compiler: mcs HelloWorld.cs In the directory to which you saved HelloWorld.cs, you should now see a HelloWorld.exe file. This is standard MSIL code that can be executed on any computer on which the .NET framework is installed, including Windows. There are two ways to run Mono applications. One method is to use "mint," which is an interpreter for the MSIL bytecode. Mint does not compile and run your applications into native machine code, hence it is not a JIT runtime. "Mono" however is a JIT runtime, which compiles bytecode when first requested into machine code native for the platform and processor for which it was designed. This means that, for performance, the Mono application is much faster than mint, though it's not as portable as it is tied to the particular operating system. Mint, on the other hand, is far more portable and, as it's written in ANSI C, may be used on a multitude of deployment platforms. To run our Hello World application, we can use the following command, which invokes Mono: mono hello.exe We use the following command to invoke mint: mint hello.exe Dishing Up ASP.NET Mono comes with its own Web server, ready to dish out your ASP.NET applications. Mono. XSP is very easy to use and makes for a simple ASP.NET Web server that is almost akin to the Cassini server that ships with Web Matrix on Windows. For more dominance, you can download a module called mod_mono that allows Apache to support ASP.NET. In this article, however, we'll examine the formation of a simple Web application and host it using XSP. I must admit that I cheated when I created the code for the Web application: I used Visual Studio to create the Web application and its associated files. Then, I copied the code over to the Linux box that was ready to host. For our example, we'll create a Web page with a simple button and a label. When the button is clicked, the label will show that the user clicked the button. <%@ Page Language="C#" %> <script runat="server"> void Button1_Click(object sender, EventArgs e) { titleTag.InnerText = "You clicked the button!"; } void Page_Load(object sender, EventArgs e) { if (!IsPostBack) titleTag.InnerText = "Page has loaded"; } </script> <html> <head> <title runat="server" id="titleTag"/> </head> <body> <form runat="server"> <asp:Button </form> </body> </html> You can save this file to a directory that will act as your root for the Web application. If you are using codebehinds for your application, you will also need to compile the source files using the Mono compiler, in order to obtain the compiled assembly for the application. Just as with ASP.NET on Windows, you'll need to drop this into a /bin directory on the root. XSP, by default, listens on the 8080 port so as to not interfere with Apache; however, you can set up the server, via the command line, to listen at a different port. You can also specify the root directory of the application you wish to host: xsp.exe [--root rootdir] [--port N] With the server running, you can access your page through any standard Web browser. And, there we have it: ASP.NET served over Linux! Third Party Tools There are numerous tools available to aid your developments in Mono so that, unlike me, you don't need to resort back to Visual Studio. - MonoDevelop: With GUI features still lacking in Mono, MonoDevelop is really pushing the current implementations to show what can be achieved in Mono. Resembling #develop for Windows, MonoDevelop supports syntax highlighting and the compilation of projects from an easy-to-use interface. However, this tool is still at an early stage of development and presently lacks the features needed to make it a truly useful instrument. - Eclipse: Billed as "a kind of universal tool platform -- an open extensible IDE for anything and nothing in particular," Eclipse is a great solution for developing Mono applications. By downloading and installing the Improve C# plugin for Eclipse you can have a fully-featured and free Java based IDE for your Mono developments. - MonoDoc: This is a browser for the Mono documentation. It isn't installed by default through the standard Mono packages, but it is a life saver for those needing to check whether certain APIs and parts of the framework are available in Mono. Summary From my study of Mono, I've come to understand that this is a very important project for .NET. The release of .NET from the confines of Microsoft operating systems will allow it to expand within communities that are normally off-limits to Microsoft technologies. Mono can only grow stronger, and perhaps in the near future, .NET developers will be able to develop for Linux as easy as we develop for Windows today. No Reader comments
http://www.sitepoint.com/get-started-mono/
CC-MAIN-2014-23
en
refinedweb
public class YamlPropertiesFactoryBean extends YamlProcessor implements FactoryBean<java.util.comProperties.util.Properties> java.util.Properties() Invoked lazily the first time getObject() is invoked in case of a shared singleton; else, on each getObject() call. ()
https://docs.spring.io/spring/docs/5.1.0.BUILD-SNAPSHOT/javadoc-api/org/springframework/beans/factory/config/YamlPropertiesFactoryBean.html
CC-MAIN-2020-34
en
refinedweb
fourth post in the Little Pitfalls series where I explore these issues; the previous Little Pitfall post can be found here. Today we are going to look at a potential pitfall that can bite developers who expect the default behavior of declaring the same method (with same signature) in a subclass to perform an override. In particular, if the developer came from the C++ world, this may run counter to their expectations. While the C# compiler does a good job of warning you of this event, it is not an error that will break your build, so it’s worth noting and watching out for. Note: even though I’m just covering methods in this post, properties can also be overridden and hidden with the same potential pitfall. When you have a base-class you are going to inherit from, there are two basic choices for “replacing” the functionality of a base-class method in the sub-class: hiding and overriding. Let’s look at overriding first because it is the behavior many people tend to expect. To override behavior from a base-class method, the method must be marked virtual or abstract in the base-class. The virtual keyword indicates the method may be overridden in the subclass and allows you to define a default implementation of the method, and the abstract keyword says that the method must be overridden in a concrete subclass and that there is no default implementation of the method.. For example, let’s take the classes below: 1: public class A { 2: // Must be marked virtual (if has body) or abstract (if no body) 3: public virtual void WhoAmI() { 4: Console.WriteLine("I am an A."); 5: } 6: } 7: 8: public class B : A { 9: // must be marked override to override original behavior 10: public override void WhoAmI() { 11: Console.WriteLine("I am a B."); 12: } 13: } In this example, B.WhoAmI() overrides the implementation of A.WhoAmI(). When overriding, the decision of what method to call is made at runtime, so if you have an object of type B held in a reference of type A, the B.WhoAmI() will still get called because it looks up the actual type of the object the reference refers to at runtime, and not the type of the reference itself, to determine which version of the method to call: 1: public static void Main() 2: { 3: B myB = new B(); 4: A myBasA = myB; 5: 6: myB.WhoAmI(); // I am a B 7: myBasA.WhoAmI(); // I am a B 8: } Note that in the code above, even though reference myBasA is typed as A, the actual object being referred to is of type B, thus B.WhoAmI() will be called at runtime. Hiding, however, takes a different approach. In hiding what we do is create a new method with the exact same name and signature, but we (should) mark it as new. In this case B’s implementation hides A’s implementation: 2: // Method to hide can be virtual or non-virtual (but not abstract) 3: public void WhoAmI() { 9: // SHOULD use new to explicitly state intention to hide original. 10: public new void WhoAmI() { So now, with hiding, the method replaces the definition for class B, which sounds the same on the surface, but in the case of hiding, the method to call is determined at compile-time based on the type of the reference itself. This means that the results of main from before are now: 7: myBasA.WhoAmI(); // I am an A Notice that even though the object in both cases being referred to is type B, the version of WhoAmI() that is called depends solely on the type of the reference, not on the type of the object being referred to. Thus it will be A.WhoAmI() that will be called here. Both hiding and overriding are valid and useful ways to replace base class functionality, and when you’d use each depends on your design and situation. So all that discussion was mainly academic, right? If so, where does the problem lie? Well, the main thing to watch out for is that you aren’t required to use the override or new keyword when “replacing” a method in a subclass. Both of those keywords are purely optional, even if the original method was marked as virtual or abstract. Consider this code example: 2: // Method is marked virtual, which signals intent to be overridden 9: // Person who designed sub-class didn't use 'new' or 'override' explicitly... 10: public void WhoAmI() { So the question is, does B.WhoAmI() hide, or override A.WhoAmI()? The base class implementation was clearly marked virtual. In C++, if the base-class method is marked as virtual, then the sub-class method will be an override and you need not repeat the virtual keyword. This is not true in C#, however, the default behavior is to implicitly hide (not override) if no keyword explicitly says otherwise. This gives us the following results: This can trip up C++ developers who don’t know about this default behavior difference between the two languages. To be fair, C# does give a compiler warning if you are not explicit, asking you politely to explicitly specify either override or new, but it doesn’t require you to do so: 'B.WhoAmI()' hides inherited member 'A.WhoAmI()'. Use the new keyword if hiding was intended. 'B.WhoAmI()' hides inherited member 'A.WhoAmI()'. Use the new keyword if hiding was intended. Note: Remember, don’t ignore your warnings! As they say, a warning is an error waiting to happen. If you really want motivation to clean up warnings in your code, go into your project settings and enable “Treat warnings as errors” on the Build tab. If your class implements an interface, and you want that interface behavior to be overridable, make sure you mark the base class that implements the interface’s methods as virtual or abstract. implement the IDisposable interface. Because it’s highly possible if we’re using a factory pattern that we’ll be referring to concrete implementations of a message consumer as an AbstractMessageConsumer, we’d want to implement IDisposable in such a way that the Dispose() will call the appropriate method based on the concrete class and not just the one defined in AbstractMessageConsumer. So we may do something like this: 1: public abstract class AbstractMessageConsumer : IDisposable 3: public virtual void Dispose() 4: { 5: // dispose of any resources in the abstract base here... 6: } 7: } 8: 9: public sealed class TopicMessageConsumer : AbstractMessageConsumer 10: { 11: public override void Dispose() 12: { 13: // dispose of any resources just in the topic message consumer here... 14: 15: // then dispose of the base by invoking the base class definition. 16: base.Dispose(); 17: } 18: } By defining Dispose() as virtual in AbstractMessageConsumer, we allow the actual definition of Dispose() to be used to be resolved at run-time, thus we will be assured that the correct version will be called. So, what if we hadn’t done this, and would have instead defined our classes like this: 3: // note not virtual... 4: public void Dispose() { ... } 5: } 6: 7: public sealed class TopicMessageConsumer : AbstractMessageConsumer 8: { 9: public void Dispose() { ... } 10: } Then if we would have tried to call Dispose() on an AbstractMessageConsumer reference to a TopicMessageConsumer, we would have gotten the wrong Dispose()! 1: AbstractMessageConsumer mc = new TopicMessageConsumer(); 2: 3: // Because TopicMessageConsumer only hides Dispose(), AbstractMessageConsumer's 4: // Dipose() method is the one called here. 5: mc.Dispose(); So remember, if you implement an interface and want that implemented behavior to be overridable in a sub-class, make sure you mark your interface implementation methods (and/or properties) as virtual or abstract and then correctly override them in a sub-class. Note: If you have no intention of allowing your class to be inherited from, consider making the class sealed to prevent accidental hiding problems from occurring. Hiding has some very handy uses and is a valuable part of the C# toolbox. That said, it can be confusing for a developer who doesn’t know that the implicit behavior in C# is to hide and not to override. To help make sure you always get the correct behavior you want, you can: Print | posted on Thursday, July 21, 2011 6:10 PM | Filed Under [ My Blog C# Software .NET Little Pitfalls ]
http://geekswithblogs.net/BlackRabbitCoder/archive/2011/07/21/c.net-little-pitfalls-the-default-is-to-hide-not-override.aspx
CC-MAIN-2020-34
en
refinedweb
Internet Engineering Task Force (IETF) A. Allen, Ed. Request for Comments: 7255 Blackberry Category: Informational May 2014 ISSN: 2070-1721 Using the International Mobile station Equipment Identity (IMEI) Uniform Resource Name (URN) as an Instance ID Abstract This specification defines how the Uniform Resource Name (URN) reserved for the Global System for Mobile Communications Association (GSMA) identities and its sub-namespace for the International Mobile station Equipment Identity (IMEI) can be used as an instance-id. Its purpose is to fulfill the requirements for defining how a specific URN needs to be constructed and used in the '+sip.instance' Contact header field parameter for outbound. 3GPP Use Cases ..................................................5 5. User Agent Client Procedures ....................................5 6. User Agent Server Procedures ....................................6 7. 3GPP SIP Registrar Procedures ...................................6 8. Security Considerations .........................................7 9. Acknowledgements ................................................7 10. References .....................................................8 10.1. Normative References ......................................8 10.2. Informative References ....................................8 1. Introduction This specification defines how the Uniform Resource Name (URN) reserved for the Global System for Mobile Communications Association (GSMA) identities and its sub-namespace for the International Mobile station Equipment Identity (IMEI) as specified in RFC 7254 [1] can be used as an instance-id as specified in RFC 5626 [2] and also as used by RFC 5627 [3]. RFC 5626 [2] specifies the '+sip.instance' Contact header field parameter that contains a URN as specified in RFC 2141 [4]. The instance-id uniquely identifies a specific User Agent (UA) instance. This instance-id is used as specified in RFC 5626 [2] so that the Session Initiation Protocol (SIP) registrar (as specified in RFC 3261 [9]) can recognize that the contacts from multiple registrations correspond to the same UA. The instance-id is also used as specified by RFC 5627 [3] to create Globally Routable User Agent URIs (GRUUs) that can be used to uniquely address a UA when multiple UAs are registered with the same Address of Record (AoR). RFC 5626 [2] requires that a UA SHOULD create a Universally Unique Identifier (UUID) URN as specified in RFC 4122 [6] as its instance-id but allows for the possibility to use other URN schemes. Per RFC 5626, ". This specification meets this requirement by specifying how the GSMA IMEI URN is used in the '+sip.instance' Contact header field parameter for outbound behavior, and RFC 7254 [1] specifies how the GSMA IMEI URN is constructed. The GSMA IMEI is a URN for the IMEI -- a globally unique identifier that identifies mobile devices used in the GSM, Universal Mobile Telecommunications System (UMTS), and 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) networks. The IMEI allocation is managed by the GSMA to ensure that the IMEI values are globally unique. Details of the formatting of the IMEI as a URN are specified in RFC 7254 [1], and the definition of the IMEI is contained in 3GPP TS 23.003 [10]. Further details about the GSMA's role in allocating the IMEI, and the IMEI allocation guidelines, can be found in GSMA PRD TS.06 [11]. 2. Terminology The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [7]. 3. Background GSM, UMTS, and LTE capable mobile devices represent 90% of the mobile devices in use worldwide. Every manufactured GSM, UMTS, or LTE mobile device has an allocated IMEI that uniquely identifies this specific mobile device. Among other things, in some regulatory jurisdictions the IMEI is used to identify that a stolen mobile device is being used, to help to identify the subscription that is using it, and to prevent use of the mobile device. While GSM was originally a circuit switched system, enhancements such as the General Packet Radio Service (GPRS) and UMTS have added IP data capabilities that, along with the definition of the IP Multimedia Subsystem (IMS), have made SIP-based calls and IP multimedia sessions from mobile devices possible. The latest enhancement, known as LTE, introduces even higher data rates and dispenses with the circuit switched infrastructure completely. This means that with LTE networks, voice calls will need to be conducted using IP and IMS. However, the transition to all IP SIP-based IMS networks worldwide will take a great many years, and mobile devices, being mobile, will need to operate in both IP/SIP/IMS mode and circuit switched mode. This means that calls and sessions will need to be handed over between IP/SIP/IMS mode and circuit switched mode mid-call or mid-session. Also, since many existing GSM and UMTS radio access networks are unable to support IP/SIP/IMS-based voice services in a commercially acceptable manner, some sessions could have some media types delivered via IP/IMS simultaneously with voice media delivered via the circuit switched domain to the same mobile device. To achieve this, the mobile device needs to be simultaneously attached via both the IP/SIP/IMS domain and the circuit switched domain. To meet this need, the 3GPP has specified how to maintain session continuity between the IP/SIP/IMS domain and the circuit switched domain in 3GPP TS 24.237 [12], and in 3GPP TS 24.292 [13] has specified how to access IMS hosted services via both the IP/SIP/IMS domain and the circuit switched domain. In order for the mobile device to access SIP/IMS services via the circuit switched domain, the 3GPP has specified a Mobile Switching Center (MSC) server enhanced for IMS Centralized Services (ICS) and a MSC server enhanced for Single Radio Voice Call Continuity (SR-VCC) that control mobile voice call setup over the circuit switched radio access while establishing the corresponding voice session in the core network using SIP/IMS. To enable this, the MSC server enhanced for ICS or the MSC server enhanced for SR-VCC performs SIP registration on behalf of the mobile device, which is also simultaneously directly registered with the IP/SIP/IMS domain. The only mobile device identifier that is transportable using GSM/UMTS/LTE signaling is the IMEI; therefore, the instance-id included by the MSC server enhanced for ICS or the MSC server enhanced for SR-VCC when acting on behalf of the mobile device, and the instance-id directly included by the mobile device, both need to be based on the IMEI. Additionally, in order to meet the above requirements, the same IMEI that is obtained from the circuit switched signaling by the MSC server needs to be obtainable from SIP signaling so that it can be determined that both the SIP signaling and circuit switched signaling originate from the same mobile device. For these reasons, 3GPP TS 24.237 [12] and 3GPP TS 24.292 [13] already specify the use of the URN namespace for the GSMA IMEI URN as specified in RFC 7254 [1] as the instance-id used by GSM/UMTS/LTE mobile devices, the MSC server enhanced for SR-VCC, and the MSC server enhanced for ICS, for SIP/IMS registrations and emergency- related SIP requests. 4. 3GPP Use Cases 1. The mobile device includes its IMEI in the SIP REGISTER request so that the SIP registrar can perform a check of the Equipment Identity Register (EIR) to verify whether this mobile device is allowed to access the network for non-emergency services or is barred from doing so (e.g., because the device has been stolen). If the mobile device is not allowed to access the network for non-emergency services, the SIP registrar can reject the registration and thus prevent a barred mobile device from accessing the network for non-emergency services. 2. The mobile device includes its IMEI in SIP INVITE requests used to establish emergency sessions. This is so that the Public Safety Answering Point (PSAP) can obtain the IMEI of the mobile device for identification purposes if required by regulations. 3. The IMEI that is included in SIP INVITE requests by the mobile device and used to establish emergency sessions is also used in cases of unauthenticated emergency sessions to enable the network to identify the mobile device. This is especially important if the unauthenticated emergency session is handed over from the packet switched domain to the circuit switched domain. In this scenario, the IMEI is the only identifier that is common to both domains, so the Emergency Access Transfer Function (EATF) in the network, which in such cases coordinates the transfer between domains, can use the IMEI to determine that the circuit switched call is from the same mobile device that was in the emergency session in the packet switched domain. 5. User Agent Client Procedures A User Agent Client (UAC) that has an IMEI as specified in 3GPP TS 23.003 [10] and that is registering with a 3GPP IMS network MUST include in the "sip.instance" media feature tag the GSMA IMEI URN according to the syntax specified in RFC 7254 [1] when performing the registration procedures specified in RFC 5626 [2] or RFC 5627 [3], or any other procedure requiring the inclusion of the "sip.instance" media feature tag. The UAC SHOULD NOT include the optional 'svn' parameter in the GSMA IMEI URN in the "sip.instance" media feature tag, since the software version can change as a result of upgrades to the device firmware that would create a new instance-id. Any future non-zero values of the 'vers' parameter, or the future definition of additional parameters for the GSMA IMEI URN that are intended to be used as part of an instance-id, will require that an update be made to this RFC. The UAC MUST provide character-by-character identical URNs in each registration according to RFC 5626 [2]. Hence, any optional or variable components of the URN (e.g., the 'vers' parameter) MUST be presented with the same values and in the same order in every registration as in the first registration. A UAC MUST NOT use the GSMA IMEI URN as an instance-id, except when registering with a 3GPP IMS network. When a UAC is operating in IMS mode, it will obtain from the Universal Integrated Circuit Card (UICC) (commonly known as the SIM card) the domain of the network with which to register. This is a carrier's IMS network domain. The UAC will also obtain the address of the IMS edge proxy to send the REGISTER request containing the IMEI using information elements in the Attach response when it attempts to connect to the carrier's packet data network. When registering with a non-3GPP IMS network, a UAC SHOULD use a UUID as an instance-id as specified in RFC 5626 [2]. A UAC MUST NOT include the "sip.instance" media feature tag containing the GSMA IMEI URN in the Contact header field of non- REGISTER requests, except when the request is related to an emergency session. Regulatory requirements can require that the IMEI be provided to the PSAP. Any future exceptions to this prohibition will require the publication of an RFC that addresses how privacy is not violated by such usage. 6. User Agent Server Procedures A User Agent Server (UAS) MUST NOT include its "sip.instance" media feature tag containing the GSMA IMEI URN in the Contact header field of responses, except when the response is related to an emergency session. Regulatory requirements can require that the IMEI be provided to the PSAP. Any future exceptions to this prohibition will require the publication of an RFC that addresses how privacy is not violated by such usage. 7. 3GPP SIP Registrar Procedures In 3GPP IMS, when the SIP registrar receives in the Contact header field a "sip.instance" media feature tag containing the GSMA IMEI URN according to the syntax specified in RFC 7254 [1] the SIP registrar follows the procedures specified in RFC 5626 [2]. The IMEI URN MAY be validated as described in RFC 7254 [1]. If the UA indicates that it supports the extension in RFC 5627 [3] and the SIP registrar allocates a public GRUU according to the procedures specified in RFC 5627 [3], the instance-id MUST be obfuscated when creating the 'gr' parameter in order not to reveal the IMEI to other UAs when the public GRUU is included in non-REGISTER requests and responses. 3GPP TS 24.229 [8] subclause 5.4.7A.2 specifies the mechanism for obfuscating the IMEI when creating the 'gr' parameter. 8. Security Considerations Because IMEIs, like other formats of instance-ids, can be correlated to a user, they are personally identifiable information and therefore MUST be treated in the same way as any other personally identifiable information. In particular, the "sip.instance" media feature tag containing the GSMA IMEI URN MUST NOT be included in requests or responses intended to convey any level of anonymity, as this could violate the user's privacy. RFC 5626 [2] states that "One case where a UA could prefer to omit the "sip.instance" media feature tag is when it is making an anonymous request or some other privacy concern requires that the UA not reveal its identity". The same concerns apply when using the GSMA IMEI URN as an instance-id. Publication of the GSMA IMEI URN to networks to which the UA is not attached, or with which the UA does not have a service relationship, is a security breach, and the "sip.instance" media feature tag MUST NOT be forwarded by the service provider's network elements when forwarding requests or responses towards the destination UA. Additionally, an instance-id containing the GSMA IMEI URN identifies a mobile device and not a user. The instance-id containing the GSMA IMEI URN MUST NOT be used alone as an address for a user or as an identification credential for a user. The GRUU mechanism specified in RFC 5627 [3] provides a means to create URIs that address the user at a specific device or User Agent. Entities that log the instance-id need to protect them as personally identifiable information. Regulatory requirements can require that carriers log SIP IMEIs. In order to protect the "sip.instance" media feature tag containing the GSMA IMEI URN from being tampered with, those REGISTER requests containing the GSMA IMEI URN MUST be sent using a security mechanism such as Transport Layer Security (TLS) (RFC 5246 [5]) or another security mechanism that provides equivalent levels of protection such as hop-by-hop security based upon IPsec. 9. Acknowledgements The author would like to thank Paul Kyzivat, Dale Worley, Cullen Jennings, Adam Roach, Keith Drage, Mary Barnes, Peter Leis, James Yu, S. Moonesamy, Roni Even, and Tim Bray for reviewing this document and providing their comments. 10. References 10.1. Normative References [1] Montemurro, M., Ed., Allen, A., McDonald, D., and P. Gosden, "A Uniform Resource Name Namespace for the Global System for Mobile Communications Association (GSMA) and the International Mobile station Equipment Identity (IMEI)", RFC 7254, May 2014. [2] Jennings, C., Mahy, R., and F. Audet, "Managing Client- Initiated Connections in the Session Initiation Protocol (SIP)", RFC 5626, October 2009. [3] Rosenberg, J., "Obtaining and Using Globally Routable User Agent URIs (GRUUs) in the Session Initiation Protocol (SIP)", RFC 5627, October 2009. [4] Moats, R., "URN Syntax", RFC 2141, May 1997. [5] Dierks, T. and E. Rescorla, "The Transport Layer Security (TLS) Protocol Version 1.2", RFC 5246, August 2008. [6] Leach, P., Mealling, M., and R. Salz, "A Universally Unique IDentifier (UUID) URN Namespace", RFC 4122, July 2005. [7] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997. [8] 3GPP, "IP multimedia call control protocol based on Session Initiation Protocol (SIP) and Session Description Protocol (SDP); Stage 3", 3GPP TS 24.229 (Release 8), March 2014, < 24.229/>. [9] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., and E. Schooler, "SIP: Session Initiation Protocol", RFC 3261, June 2002. 10.2. Informative References [10] 3GPP, "Numbering, addressing and identification", 3GPP TS 23.003 (Release 8), March 2014, < archive/23_series/23.003/>. [11] GSM Association, "IMEI Allocation and Approval Guidelines", PRD TS.06 (DG06) Version 6.0, July 2011, < ts0660tacallocationprocessapproved.pdf>. [12] 3GPP, "Mobile radio interface Layer 3 specification; Core network protocols; Stage 3", 3GPP TS 24.237 (Release 8), September 2013, < 24_series/24.237/>. [13] 3GPP, "IP Multimedia (IM) Core Network (CN) subsystem Centralized Services (ICS); Stage 3", 3GPP TS 24.292 (Release 8), December 2013, < archive/24_series/24.292/>. Author's Address Andrew Allen (editor) Blackberry 1200 Sawgrass Corporate Parkway Sunrise, Florida 33323 USA Previous: RFC 7254 - A Uniform Resource Name Namespace for the Global System for Mobile Communications Association (GSMA) and the International Mobile station Equipment Identity (IMEI) Next: RFC 7256 - Multicast Control Extensions for the Access Node Control Protocol (ANCP)
http://www.faqs.org/rfcs/rfc7255.html
CC-MAIN-2020-34
en
refinedweb
Chymyst — declarative concurrency in Scala This repository hosts Chymyst Core — a library that provides a Scala domain-specific language for purely functional, declarative concurrency. Chymyst is a framework-in-planning that will build upon Chymyst Core to enable creating concurrent applications declaratively. Chymyst implements the chemical machine paradigm, known in the academic world as Join Calculus (JC). JC has the same expressive power as CSP (Communicating Sequential Processes) and the Actor model, but is easier to use. (See also Conceptual overview of concurrency.) The initial code of Chymyst Core was based on previous work by He Jiansen (2011) and Philipp Haller (2008), as well as on Join Calculus prototypes in Objective-C/iOS and Java/Android (2012). The current implementation is tested under Oracle JDK 8 with Scala 2.11.8, 2.11.11, and 2.12.2- 2.12.4. Version history and roadmap Overview of Chymyst and the chemical machine programming paradigm Concurrency in Reactions: Get started with this extensive tutorial book Download the tutorial book in PDF Download the tutorial book in EPUB format Manage book details (requires login) Project documentation at Github Pages From actors to reactions: a guide for those familiar with the Actor model Show me the code: An ultra-short guide to Chymyst A "Hello, world" project Presentations on Chymyst and on the chemical machine programming paradigm Oct. 16, 2017: Declarative concurrent programming with Join Calculus. Talk given at the Scala Bay meetup: - Talk slides with audio - See also the talk slides (PDF) and the code examples for the talk. July 2017: Industry-Strength Join Calculus: Declarative concurrent programming with Chymyst: Draft of an academic paper describing Chymyst and its approach to join calculus Nov. 11, 2016: Concurrent Join Calculus in Scala. Talk given at Scalæ by the Bay 2016: - Video presentation of early version of Chymyst, then called JoinRun - See also the talk slides revised for the current syntax. Main features of the chemical machine Comparison of the chemical machine vs. academic Join Calculus Comparison of the chemical machine vs. the Actor model Comparison of the chemical machine vs. the coroutines / channels approach (CSP) Developer documentation for Chymyst Core Example: the "dining philosophers" problemExample: the "dining philosophers" problem The following code snippet is a complete runnable example that implements the logic of "dining philosophers" in a fully declarative and straightforward way. import io.chymyst.jc._ object Main extends App { /** Print message and wait for a random time interval. */ def wait(message: String): Unit = { println(message) Thread.sleep(scala.util.Random.nextInt(20)) } val hungry1 = m[Unit] val hungry2 = m[Unit] val hungry3 = m[Unit] val hungry4 = m[Unit] val hungry5 = m[Unit] val thinking1 = m[Unit] val thinking2 = m[Unit] val thinking3 = m[Unit] val thinking4 = m[Unit] val thinking5 = m[Unit] val fork12 = m[Unit] val fork23 = m[Unit] val fork34 = m[Unit] val fork45 = m[Unit] val fork51 = m[Unit] site ( go { case thinking1(_) => wait("Socrates is thinking"); hungry1() }, go { case thinking2(_) => wait("Confucius is thinking"); hungry2() }, go { case thinking3(_) => wait("Plato is thinking"); hungry3() }, go { case thinking4(_) => wait("Descartes is thinking"); hungry4() }, go { case thinking5(_) => wait("Voltaire is thinking"); hungry5() }, go { case hungry1(_) + fork12(_) + fork51(_) => wait("Socrates is eating"); thinking1() + fork12() + fork51() }, go { case hungry2(_) + fork23(_) + fork12(_) => wait("Confucius is eating"); thinking2() + fork23() + fork12() }, go { case hungry3(_) + fork34(_) + fork23(_) => wait("Plato is eating"); thinking3() + fork34() + fork23() }, go { case hungry4(_) + fork45(_) + fork34(_) => wait("Descartes is eating"); thinking4() + fork45() + fork34() }, go { case hungry5(_) + fork51(_) + fork45(_) => wait("Voltaire is eating"); thinking5() + fork51() + fork45() } ) // Emit molecules representing the initial state: thinking1() + thinking2() + thinking3() + thinking4() + thinking5() fork12() + fork23() + fork34() + fork45() + fork51() // Now reactions will start and print messages to the console. } StatusStatus). Run unit testsRun unit tests sbt test The tests will print some error messages and exception stack traces - this is normal, as long as all tests pass. Some tests are timed and will fail on a slow machine. Run the benchmark applicationRun the benchmark application sbt benchmark/run will run the benchmarks. To build the benchmark application as a self-contained JAR, run sbt clean benchmark/assembly Then run it as java -jar benchmark/target/scala-2.11/benchmark-assembly-*.jar To run with FlightRecorder: sbt benchmark/runFR This will create the file benchmark.jfr in the current directory. Open that file with jmc (Oracle's "Java Mission Control" tool) to inspect Code and then the "Hot Methods" (places where most time is spent). Use Chymyst Core in your programs Chymyst Core is published to Maven Central. To pull the dependency, add this to your build.sbt at the appropriate place: libraryDependencies += "io.chymyst" %% "chymyst-core" % "latest.integration" To use the chemical machine DSL, add import io.chymyst.jc._ in your Scala sources. See the "hello, world" project for an example. Build the library JARsBuild the library JARs To build the library JARs: sbt core/package core/package-doc This will prepare JAR assemblies as well as their Scaladoc documentation packages. The main library is in the core JAR assembly ( core/target/scala-2.11/core-*.jar). User code should depend on that JAR only. Prepare new releasePrepare new release - Edit the version string at the top of build.sbt - Make sure there is a description of changes for this release at the top of docs/roadmap.md - Commit everything to master and add tag with release version - Push everything (with tag) to master; build must pass on CI Publish to SonatypePublish to Sonatype $ sbt > project core > +publishSigned > sonatypeRelease If sonatypeRelease fails due to any problems with POM files while +publishSigned succeeded, it is possible to release manually on the Sonatype web site (requires login). Go to "Staging Repositories" and execute the actions to "promote" the release. TriviaTrivia This drawing is by Robert Boyle, who was one of the founders of the science of chemistry. In 1661 he published a treatise titled “The Sceptical Chymist”, from which the Chymyst framework borrows its name.
https://index.scala-lang.org/chymyst/chymyst-core/chymyst-core/0.2.0?target=_2.12
CC-MAIN-2020-34
en
refinedweb
last session of the series, GitOps Tool Sets on Kubernetes with CircleCI and Argo CD. Warning: The procedures in this tutorial are meant for demonstration purposes only. As a result, they don’t follow the best practices and security measures necessary for a production-ready deployment. Introduction Using Kubernetes to deploy your application can provide significant infrastructural advantages, such as flexible scaling, management of distributed components, and control over different versions of your application. However, with the increased control comes an increased complexity that can make CI/CD systems of cooperative code development, version control, change logging, and automated deployment and rollback particularly difficult to manage manually. To account for these difficulties, DevOps engineers have developed several methods of Kubernetes CI/CD automation, including the system of tooling and best practices called GitOps. GitOps, as proposed by Weaveworks in a 2017 blog post, uses Git as a “single source of truth” for CI/CD processes, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment. There are many tools that use Git as a focal point for DevOps processes on Kubernetes, including Gitkube developed by Hasura, Flux by Weaveworks, and Jenkins X, the topic of the second webinar in this series. In this tutorial, you will run through a demonstration of two additional tools that you can use to set up your own cloud-based GitOps CI/CD system: The Continuous Integration tool CircleCI and Argo CD, a declarative Continuous Delivery tool. CircleCI uses GitHub or Bitbucket repositories to organize application development and to automate building and testing on Kubernetes. By integrating with the Git repository, CircleCI projects can detect when a change is made to the application code and automatically test it, sending notifications of the change and the results of testing over email or other communication tools like Slack. CircleCI keeps logs of all these changes and test results, and the browser-based interface allows users to monitor the testing in real time, so that a team always knows the status of their project. As a sub-project of the Argo workflow management engine for Kubernetes, Argo CD provides Continuous Delivery tooling that automatically synchronizes and deploys your application whenever a change is made in your GitHub repository. By managing the deployment and lifecycle of an application, it provides solutions for version control, configurations, and application definitions in Kubernetes environments, organizing complex data with an easy-to-understand user interface. It can handle several types of Kubernetes manifests, including ksonnet applications, Kustomize applications, Helm charts, and YAML/json files, and supports webhook notifications from GitHub, GitLab, and Bitbucket. In this last article of the CI/CD with Kubernetes series, you will try out these GitOps tools by: By the end of this tutorial, you will have a basic understanding of how to construct a CI/CD pipeline on Kubernetes with a GitOps tool set. Docker Hub Account. For an overview on getting started with Docker Hub, please see these instructions. A GitHub account and basic knowledge of GitHub. For a primer on how to use GitHub, check out our How To Create a Pull Request on GitHub tutorial. Familiarity with Kubernetes concepts. Please refer to the article An Introduction to Kubernetes for more details. A Kubernetes cluster with the kubectl command line tool. This tutorial has been tested on a simulated Kubernetes cluster, set up in a local environment with Minikube, a program that allows you to try out Kubernetes tools on your own machine without having to set up a true Kubernetes cluster. To create a Minikube cluster, follow Step 1 of the second webinar in this series, Kubernetes Package Management with Helm and CI/CD with Jenkins X. Step 1 — Setting Up your CircleCI Workflow In this step, you will put together a standard CircleCI workflow that involves three jobs: testing code, building an image, and pushing that image to Docker Hub. In the testing phase, CircleCI will use pytest to test the code for a sample RSVP application. Then, it will build the image of the application code and push the image to DockerHub. First, give CircleCI access to your GitHub account. To do this, navigate to in your favorite web browser: In the top right of the page, you will find a Sign Up button. Click this button, then click Sign Up with GitHub on the following page. The CircleCI website will prompt you for your GitHub credentials: Entering your username and password here gives CircleCI the permission to read your GitHub email address, deploy keys and add service hooks to your repository, create a list of your repositories, and add an SSH key to your GitHub account. These permissions are necessary for CircleCI to monitor and react to changes in your Git repository. If you would like to read more about the requested permissions before giving CircleCI your account information, see the CircleCI documentation. Once you have reviewed these permissions, enter your GitHub credentials and click Sign In. CircleCI will then integrate with your GitHub account and redirect your browser to the CircleCI welcome page: Now that you have access to your CircleCI dashboard, open up another browser window and navigate to the GitHub repository for this webinar,. If prompted to sign in to GitHub, enter your username and password. In this repository, you will find a sample RSVP application created by the CloudYuga team. For the purposes of this tutorial, you will use this application to demonstrate a GitOps workflow. Fork this repository to your GitHub account by clicking the Fork button at the top right of the screen. When you’ve forked the repository, GitHub will redirect you to. On the left side of the screen, you will see a Branch: master button. Click this button to reveal the list of branches for this project. Here, the master branch refers to the current official version of the application. On the other hand, the dev branch is a development sandbox, where you can test changes before promoting them to the official version in the master branch. Select the dev branch. Now that you are in the development section of this demonstration repository, you can start setting up a pipeline. CircleCI requires a YAML configuration file in the repository that describes the steps it needs to take to test your application. The repository you forked already has this file at .circleci/config.yml; in order to practice setting up CircleCI, delete this file and make your own. To create this configuration file, click the Create new file button and make a file named .circleci/config.yml: Once you have this file open in GitHub, you can configure the workflow for CircleCI. To learn about this file’s contents, you will add the sections piece by piece. First, add the following: .circleci/config.yml version: 2 jobs: test: machine: image: circleci/classic:201808-01 docker_layer_caching: true working_directory: ~/repo . . . In the preceding code, version refers to the version of CircleCI that you will use. jobs:test: means that you are setting up a test for your application, and machine:image: indicates where CircleCI will do the testing, in this case a virtual machine based on the circleci/classic:201808-01 image. Next, add the steps you would like CircleCI to take during the test: .circleci/config.yml . . . . . . The steps of the test are listed out after steps:, starting with - checkout, which will checkout your project’s source code and copy it into the job’s space. Next, the - run: name: install dependencies step runs the listed commands to install the dependencies required for the test. In this case, you will be using the Django Web framework’s built-in test-runner and the testing tool pytest. After CircleCI downloads these dependencies, the -run: name: run tests step will instruct CircleCI to run the tests on your application. With the test job completed, add in the following contents to describe the build job: .circleci/config.yml . . . . . . As before, machine:image: means that CircleCI will build the application in a virtual machine based on the specified image. Under steps:, you will find - run: name: build image. This means that CircleCi will build a Docker container from the rsvpapp image in your Docker Hub repository. You will set the $DOCKERHUB_USERNAME environment variable in the CircleCI interface, which the tutorial will cover after this YAML file is complete. After the build job is done, the push job will push the resulting image to your Docker Hub account. Finally, add the following lines to determine the workflows that coordinate the jobs you defined earlier: .circleci/config.yml . . . workflows: version: 2 build-deploy: jobs: - test: context: DOCKERHUB filters: branches: only: dev - build: context: DOCKERHUB requires: - test filters: branches: only: dev - push: context: DOCKERHUB requires: - build filters: branches: only: dev These lines ensure that CircleCI executes the test, build, and push jobs in the correct order. context: DOCKERHUB refers to the context in which the test will take place. You will create this context after finalizing this YAML file. The only: dev line restrains the workflow to trigger only when there is a change to the dev branch of your repository, and ensures that CircleCI will build and test the code from dev. Now that you have added all the code for the .circleci/config.yml file, its contents should be as follows: .circleci/config.yml version: 2 jobs: test: machine: image: circleci/classic:201808-01 docker_layer_caching: true working_directory: ~/repo workflows: version: 2 build-deploy: jobs: - test: context: DOCKERHUB filters: branches: only: dev - build: context: DOCKERHUB requires: - test filters: branches: only: dev - push: context: DOCKERHUB requires: - build filters: branches: only: dev Once you have added this file to the dev branch of your repository, return to the CircleCI dashboard. Next, you will create a CircleCI context to house the environment variables needed for the workflow that you outlined in the preceding YAML file. On the left side of the screen, you will find a SETTINGS button. Click this, then select Contexts under the ORGANIZATION heading. Finally, click the Create Context button on the right side of the screen: CircleCI will then ask you for the name of this context. Enter DOCKERHUB, then click Create. Once you have created the context, select the DOCKERHUB context and click the Add Environment Variable button. For the first, type in the name DOCKERHUB_USERNAME, and in the Value enter your Docker Hub username. Then add another environment variable, but this time, name it DOCKERHUB_PASSWORD and fill in the Value field with your Docker Hub password. When you’ve create the two environment variables for your DOCKERHUB context, create a CircleCI project for the test RSVP application. To do this, select the ADD PROJECTS button from the left-hand side menu. This will yield a list of GitHub projects tied to your account. Select rsvpapp-webinar4 from the list and click the Set Up Project button. Note: If rsvpapp-webinar4 does not show up in the list, reload the CircleCI page. Sometimes it can take a moment for the GitHub projects to show up in the CircleCI interface. You will now find yourself on the Set Up Project page: At the top of the screen, CircleCI instructs you to create a config.yml file. Since you have already done this, scroll down to find the Start Building button on the right side of the page. By selecting this, you will tell CircleCI to start monitoring your application for changes. Click on the Start Building button. CircleCI will redirect you to a build progress/status page, which as yet has no build. To test the pipeline trigger, go to the recently forked repository at and make some changes in the dev branch only. Since you have added the branch filter only: dev to your .circleci/config file, CI will build only when there is change in the dev branch. Make a change to the dev branch code, and you will find that CircleCI has triggered a new workflow in the user interface. Click on the running workflow and you will find the details of what CircleCI is doing: With your CircleCI workflow taking care of the Continuous Integration aspect of your GitOps CI/CD system, you can install and configure Argo CD on top of your Kubernetes cluster to address Continuous Deployment. Step 2 — Installing and Configuring Argo CD on your Kubernetes Cluster Just as CircleCI uses GitHub to trigger automated testing on changes to source code, Argo CD connects your Kubernetes cluster into your GitHub repository to listen for changes and to automatically deploy the updated application. To set this up, you must first install Argo CD into your cluster. First, create a namespace named argocd: - kubectl create namespace argocd Within this namespace, Argo CD will run all the services and resources it needs to create its Continuous Deployment workflow. Next, download the Argo CD manifest from the official GitHub respository for Argo: - kubectl apply -n argocd -f In this command, the -n flag directs kubectl to apply the manifest to the namespace argocd, and -f specifies the file name for the manifest that it will apply, in this case the one downloaded from the Argo repository. By using the kubectl get command, you can find the pods that are now running in the argocd namespace: - kubectl get pod -n argocd Using this command will yield output similar to the following: NAME READY STATUS RESTARTS AGE application-controller-6d68475cd4-j4jtj 1/1 Running 0 1m argocd-repo-server-78f556f55b-tmkvj 1/1 Running 0 1m argocd-server-78f47bf789-trrbw 1/1 Running 0 1m dex-server-74dc6c5ff4-fbr5g 1/1 Running 0 1m Now that Argo CD is running on your cluster, download the Argo CD CLI tool so that you can control the program from your command line: - curl -sSL -o /usr/local/bin/argocd Once you’ve downloaded the file, use chmod to make it executable: - chmod +x /usr/local/bin/argocd To find the Argo CD service, run the kubectl get command in the namespace argocd: - kubectl get svc -n argocd argocd-server You will get output similar to the following: OutputNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE argocd-server ClusterIP 10.109.189.243 <none> 80/TCP,443/TCP 8m Now, access the Argo CD API server. This server does not automatically have an external IP, so you must first expose the API so that you can access it from your browser at your local workstation. To do this, use kubectl port-forward to forward port 8080 on your local workstation to the 80 TCP port of the argocd-server service from the preceding output: - kubectl port-forward svc/argocd-server -n argocd 8080:80 The output will be: OutputForwarding from 127.0.0.1:8080 -> 8080 Forwarding from [::1]:8080 -> 8080 Once you run the port-forward command, your command prompt will disappear from your terminal. To enter more commands for your Kubernetes cluster, open a new terminal window and log onto your remote server. To complete the connection, use ssh to forward the 8080 port from your local machine. First, open up an additional terminal window and, from your local workstation, enter the following command, with remote_server_IP_address replaced by the IP address of the remote server on which you are running your Kubernetes cluster: - ssh -L 8080:localhost:8080 root@remote_server_IP_address To make sure that the Argo CD server is exposed to your local workstation, open up a browser and navigate to the URL localhost:8080. You will see the Argo CD landing page: Now that you have installed Argo CD and exposed its server to your local workstation, you can continue to the next step, in which you will connect GitHub into your Argo CD service. Step 3 — Connecting Argo CD to GitHub To allow Argo CD to listen to GitHub and synchronize deployments to your repository, you first have to connect Argo CD into GitHub. To do this, log into Argo. By default, the password for your Argo CD account is the name of the pod for the Argo CD API server. Switch back to the terminal window that is logged into your remote server but is not handling the port forwarding. Retrieve the password with the following command: - kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2 You will get the name of the pod running the Argo API server: Outputargocd-server-b686c584b-6ktwf Enter the following command to log in from the CLI: - argocd login localhost:8080 You will receive the following prompt: OutputWARNING: server certificate had error: x509: certificate signed by unknown authority. Proceed insecurely (y/n)? For the purposes of this demonstration, type y to proceed without a secure connection. Argo CD will then prompt you for your username and password. Enter admin for username and the complete argocd-server pod name for your password. Once you put in your credentials, you’ll receive the following message: Output'admin' logged in successfully Context 'localhost:8080' updated Now that you have logged in, use the following command to change your password: - argocd account update-password Argo CD will ask you for your current password and the password you would like to change it to. Choose a secure password and enter it at the prompts. Once you have done this, use your new password to relogin: Enter your password again, and you will get: OutputContext 'localhost:8080' updated If you were deploying an application on a cluster external to the Argo CD cluster, you would need to register the application cluster's credentials with Argo CD. If, as is the case with this tutorial, Argo CD and your application are on the same cluster, then you will use as the Kubernetes API server when connecting Argo CD to your application. To demonstrate how one might register an external cluster, first get a list of your Kubernetes contexts: - kubectl config get-contexts You'll get: OutputCURRENT NAME CLUSTER AUTHINFO NAMESPACE * minikube minikube minikube To add a cluster, enter the following command, with the name of your cluster in place of the highlighted name: - argocd cluster add minikube In this case, the preceding command would yield: OutputINFO[0000] ServiceAccount "argocd-manager" created INFO[0000] ClusterRole "argocd-manager-role" created INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" created, bound "argocd-manager" to "argocd-manager-role" Cluster 'minikube' added Now that you have set up your log in credentials for Argo CD and tested how to add an external cluster, move over to the Argo CD landing page and log in from your local workstation. Argo CD will direct you to the Argo CD applications page: From here, click the Settings icon from the left-side tool bar, click Repositories, then click CONNECT REPO. Argo CD will present you with three fields for your GitHub information: In the field for Repository URL, enter, then enter your GitHub username and password. Once you've entered your credentials, click the CONNECT button at the top of the screen. Once you've connected your repository containing the demo RSVP app to Argo CD, choose the Apps icon from the left-side tool bar, click the + button in the top right corner of the screen, and select New Application. From the Select Repository page, select your GitHub repository for the RSVP app and click next. Then choose CREATE APP FROM DIRECTORY to go to a page that asks you to review your application parameters: The Path field designates where the YAML file for your application resides in your GitHub repository. For this project, type k8s. For Application Name, type rsvpapp, and for Cluster URL, select from the dropdown menu, since Argo CD and your application are on the same Kubernetes cluster. Finally, enter default for Namespace. Once you have filled out your application parameters, click on CREATE at the top of the screen. A box will appear, representing your application: After Status:, you will see that your application is OutOfSync with your GitHub repository. To deploy your application as it is on GitHub, click ACTIONS and choose Sync. After a few moments, your application status will change to Synced, meaning that Argo CD has deployed your application. Once your application has been deployed, click your application box to find a detailed diagram of your application: To find this deployment on your Kubernetes cluster, switch back to the terminal window for your remote server and enter: You will receive output with the pods that are running your app: OutputNAME READY STATUS RESTARTS AGE rsvp-755d87f66b-hgfb5 1/1 Running 0 12m rsvp-755d87f66b-p2bsh 1/1 Running 0 12m rsvp-db-54996bf89-gljjz 1/1 Running 0 12m Next, check the services: You'll find a service for the RSVP app and your MongoDB database, in addition to the number of the port from which your app is running, highlighted in the following: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h mongodb ClusterIP 10.102.150.54 <none> 27017/TCP 25m rsvp NodePort 10.106.91.108 <none> 80:31350/TCP 25m You can find your deployed RSVP app by navigating to your_remote_server_IP_address:app_port_number in your browser, using the preceding highlighted number for app_port_number: Now that you have deployed your application using Argo CD, you can test your Continuous Deployment system and adjust it to automatically sync with GitHub. Step 4 — Testing your Continuous Deployment Setup With Argo CD set up, test out your Continuous Deployment system by making a change in your project and triggering a new build of your application. In your browser, navigate to, click into the master branch, and update the k8s/rsvp.yaml file to deploy your app using the image built by CircleCI as a base. Add dev after image: nkhare/rsvpapp:, as shown in the following: rsvpapp-webinar2/k8s/rsvp.yaml apiVersion: apps/v1 kind: Deployment metadata: name: rsvp spec: replicas: 2 selector: matchLabels: app: rsvp template: metadata: labels: app: rsvp spec: containers: - name: rsvp-app image: nkhare/rsvpapp: dev imagePullPolicy: Always livenessProbe: httpGet: path: / port: 5000 periodSeconds: 30 timeoutSeconds: 1 initialDelaySeconds: 50 env: - name: MONGODB_HOST value: mongodb ports: - containerPort: 5000 name: web-port . . . Instead of pulling the original image from Docker Hub, Argo CD will now use the dev image created in the Continuous Integration system to build the application. Commit the change, then return to the ArgoCD UI. You will notice that nothing has changed yet; this is because you have not activated automatic synchronization and must sync the application manually. To manually sync the application, click the blue circle in the top right of the screen, and click Sync. A new menu will appear, with a field to name your new revision and a checkbox labeled PRUNE: Clicking this checkbox will ensure that, once Argo CD spins up your new application, it will destroy the outdated version. Click the PRUNE box, then click SYNCHRONIZE at the top of the screen. You will see the old elements of your application spinning down, and the new ones spinning up with your CircleCI-made image. If the new image included any changes, you would find these new changes reflected in your application at the URL your_remote_server_IP_address:app_port_number. As mentioned before, Argo CD also has an auto-sync option that will incorporate changes into your application as you make them. To enable this, open up your terminal for your remote server and use the following command: - argocd app set rsvpapp --sync-policy automated To make sure that revisions are not accidentally deleted, the default for automated sync has prune turned off. To turn automated pruning on, simply add the --auto-prune flag at the end of the preceding command. Now that you have added Continuous Deployment capabilities to your Kubernetes cluster, you have completed the demonstration GitOps CI/CD system with CircleCI and Argo CD. Conclusion In this tutorial, you created a pipeline with CircleCI that triggers tests and builds updated images when you change code in your GitHub repository. You also used Argo CD to deploy an application, automatically incorporating the changes integrated by CircleCI. You can now use these tools to create your own GitOps CI/CD system that uses Git as its organizing theme. If you'd like to learn more about Git, check out our An Introduction to Open Source series of tutorials. To explore more DevOps tools that integrate with Git repositories, take a look at How To Install and Configure GitLab on Ubuntu 18.04.
https://www.xpresservers.com/tag/webinar/
CC-MAIN-2020-34
en
refinedweb
import "google.golang.org/grpc/balancer" Package balancer defines APIs for load balancing in gRPC. All APIs in this package are experimental.") ) ErrBadResolverState may be returned by UpdateClientConnState to indicate a problem with the provided name resolver data.. TransientFailureError returns e. It exists for backward compatibility and will be deleted soon. Deprecated: no longer necessary, picker errors are treated this way by default. // CredsBundle is the credentials bundle that the Balancer can use. CredsBundle credentials.Bundle // Dialer is the custom dialer the Balancer implementation can use to dial // to a remote load balancer server. The Balancer implementations // can ignore this if it doesn't need to talk to remote balancer. Dialer func(context.Context, string) (net.Conn, error) // ChannelzParentID is the entity parent's channelz unique identification number. ChannelzParentID int64 //. Get returns the resolver builder registered with the given name. Note that the compare is done in a case-insensitive fashion. If no builder is register with the name, nil will be returned.. ConnectivityStateEvaluator takes the connectivity states of multiple SubConns and returns one aggregated connectivity state. It's not thread safe. the aggregated state is TransientFailure. Idle and Shutdown are not considered.. Package balancer imports 11 packages (graph) and is imported by 115 packages. Updated 2020-07-31. Refresh now. Tools for package owners.
https://godoc.org/google.golang.org/grpc/balancer
CC-MAIN-2020-34
en
refinedweb
The bitwise shift operators are the right-shift operator (>>), which moves the bits of shift_expression to the right, and the left-shift operator (<<), which moves the bits of shift_expression to the left. The left-shift operator causes the bits in shift-expression to be shifted to the left by the number of positions specified by additive-expression. The bit positions that have been vacated by the shift operation are zero-filled. A left shift is a logical shift (the bits that are shifted off the end are discarded, including the sign bit). The right-shift operator causes the bit pattern in shift-expression to be shifted to the right by the number of positions specified by additive-expression. For unsigned numbers, the bit positions that have been vacated by the shift operation are zero-filled. For signed numbers, the sign bit is used to fill the vacated bit positions. In other words, if the number is positive, 0 is used, and if the number is negative, 1 is used. #include<iostream> using namespace std; int main() { int a = 1, b = 3; // a right now is 00000001 // Left shifting it by 3 will make it 00001000, ie, 8 a = a << 3; cout << a << endl; // Right shifting a by 2 will make it 00000010, ie, 2 a = a >> 2; cout << a << endl; return 0; } This will give the output − 8 2 Note that these operators behave very differently with negative numbers. The result of a right-shift of a signed negative number is implementation-dependent. If you left-shift a signed number so that the sign bit is affected, the result is undefined. There are also 2 complex operators that can be used to assign the value directly to the value on left. These are the <<= operator and the >>= operator. Refer to for a much detailed inspection of the shift operators.
https://www.tutorialspoint.com/What-are-shift-operators-in-Cplusplus
CC-MAIN-2020-34
en
refinedweb
[ ] Simone Tripodi commented on SLING-8528: --------------------------------------- added the ability to detect namespaces and register them in [e2a849c|] > ACLs for Serviceusers on nodes with nodetypes registered via content-package may break startup in repoinit. > ----------------------------------------------------------------------------------------------------------- > > Key: SLING-8528 > URL: > Project: Sling > Issue Type: Bug > Components: Feature Model > Reporter: Dominik Süß > Assignee: Simone Tripodi > Priority: Major > > If a content-package contains a CND with a new nodetype these nodetypes are processed and registered before the content is being installed. The CP to featuremodel converter creates paths for nodes on which ACLS for serviceusers are registered. These nodes may be created based on nodetypes defined in the own or another content-package it depends on. > As repoinit is executed ahead of content-package installation the execution of repoinit may fail with {{javax.jcr.nodetype.NoSuchNodeTypeException: Node type my:NodeType does not exist}} > To eliminate this problem altogether the converter should extract all node type definitions found in content-packages and registere via repoinit (see register nodetype section in) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
http://mail-archives.apache.org/mod_mbox/sling-dev/201906.mbox/%3CJIRA.13240879.1561127044000.537217.1561470960053@Atlassian.JIRA%3E
CC-MAIN-2020-34
en
refinedweb
Save up to 92% on file storage Maximizing operational efficiency and reducing total cost of ownership are common drivers for moving business applications to the cloud. Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic, NFS file system for use with AWS Cloud services and on-premises resources. Amazon EFS Infrequent Access (EFS IA) is a storage class that provides price/performance that is cost-optimized for files not accessed everyday, with storage prices up to 92% lower compared to Amazon EFS Standard. The EFS IA storage class costs only $0.025/GB-month*. Using the industry accepted estimate that 20% of data is actively used and 80% is infrequently accessed, you can store your files on EFS at an effective price as low as $0.08/GB-month* (20% x $0.30/GB-month for files stored on EFS Standard + 80% x $0.025/GB-month for files stored on EFS IA = $0.08/GB-month). To get started with EFS IA, simply enable EFS Lifecycle Management for your file system by selecting a lifecycle policy that matches your needs. EFS will automatically move your files to the lower cost EFS IA storage class based on the last time they were accessed. EFS transparently serves files from both storage classes - EFS Standard and EFS IA - from a common file system namespace, so you don't have to worry about which of your files are actively used and which are infrequently accessed. *pricing in US East (N. Virginia) region How it works - Create file system - Choose Lifecycle Management file access policy (7, 14, 30, 60, or 90 days) using the AWS CLI, API, or EFS Management Console - Files not accessed according to the age-off policy are moved to EFS IA Use cases for Amazon EFS IA Backup & Recovery EFS IA cost effectively stores file data, with industry-leading availability, security and durability enabling you to meet regulatory retention and compliance needs so your backups are protected and available when needed. Analytics Managing the massive volume of big data analytics can be costly and complex for workloads like genomics and geospatial applications. EFS IA cost-effectively stores large amounts of data while providing the same throughput to scale applications with files stored in EFS Standard. Document Management Many reporting systems for business applications rely on accessible shared file storage in support of business functions. With EFS IA, customers can now easily and cost-effectively store and access their shared file system, eliminating the need to manage data to control costs. Resources Scalable, cloud-native file storage at pennies per GB-month with Amazon EFS Tens of thousands of customers including T-Mobile, MicroStrategy, HERE, and LoanLogics are storing up to petabytes of data in Amazon Elastic File System (Amazon EFS), to power use cases such as lift-and-shift of enterprise applications, large scale analytics, and persistent file storage for containers, at a blended price point of just $0.096/GB-month. Webinar Reduce file storage TCO In this tech talk, we introduce the Infrequent Access storage class for Amazon EFS. We will also introduce EFS Lifecycle Management, which automatically transitions infrequently accessed files into the new storage class. Whitepaper Load, Store, and Protect Linux-based NFS Workloads in AWS A cloud migration guide for moving Linux-based applications that require Network File System (NFS) storage to AWS. Learn how to use Amazon EFS Infrequent Access (EFS IA). Pay only for the resources that you use.
https://aws.amazon.com/tw/efs/features/infrequent-access/
CC-MAIN-2020-34
en
refinedweb
Hello RealSense community, I need a very-accurate depth-color mapping between cameras, but the factory setting of my camera is not enough, unfortunately . To look around the SDK, I found the following method in "RealSense/Projection.h" in the official SDK (2016-R3). static __inline Projection* CreateInstance(rs_intrinsics *colorIntrinsics, rs_intrinsics *depthIntrinsics, rs_extrinsics *extrinsics) { return (Projection*)RS_Projection_CreateInstanceFromIntrinsicsExtrinsics(colorIntrinsics, depthIntrinsics, extrinsics); } However, I am not sure it is supported or not, and I cannot activate this block because it is wrapped with # ifndef _WIN32 block which is not activated in Windows environment. So My question is like this; Is there anyway to instantiate PXCProjection (Intel::RealSense::Projection) module without using PXCDevice::Device::CreateProjection(void) ? Especially, I would like to instantiate it with my own calibration parameters. To me, it is quite natural to support this kind of activity in the SDK, but I cannot find it... I also found that librealsense ( GitHub - IntelRealSense/librealsense: Cross-platform API for Intel® RealSense™ devices ) supports similar functionality, but my code is largely based on official SDK, so it is hard to transit my code base to librealsense. Please do not tell me to replicate the functionality in librealsense . It is too obvious and I can do it, but I just would like to find more elegant solution. Thank you in advance. A couple of years ago, RealSense expert Samontab created a utility for customizing the laser parameters and has improved it over a series of versions. It was originally designed for the F200 camera, the SR300's direct predecessor, so I'm not certain if it will function properly with the SR300. An SR300 user did report earlier this year that they were able to use the utility though. Here's the forum entry that documents the changes across the versions as time goes by. Utility for changing laser camera parameters (IVCAM v0.5) The download link, which launches the zip file download automatically in the browser, is: MartyG, Actually I also found that IVCAM program (yes, it means I can run it in SR300), but I think it aims to calibrate "laser" parameters, not "camera" parameters. You can certificate that it only provides UI knob in the side of laser projector and there is no adjustment tool in the side IR Camera focal length / principal point, etc. Even if it may supports *depth* camera parameter calibration, I still cannot find the COLOR-DEPTH extrinsic calibration from that program. I also mention that I found the official camera calibrator app (- Download Intel® RealSense™ Camera Calibrator for Windows* ), but both x86 and x64 program only generate "DSAPI.dll" error, like this thread ( ). Unfortunately, there is no further information to fix this error... The camera calibrator app is for the R200 model of camera. The SR300 does not have a calibrator program, unfortunately. The new RealSense SDK 2.0 was launched yesterday. It can run on Windows 10 and is SR300-compatible. Its introductory information says that it can provide intrinsic and extrinsic calibration information. I know you were understandably reluctant to use Librealsense because of the effort you had put into your Windows app. Using the new 2.0 SDK in Windows 10 may be an easier transition though. GitHub - IntelRealSense/librealsense at development MartyG, So, is there no way to achieve what I mentioned in the first question ? By the way, the SDK 2.0 what you mentioned is librealsense, right? Before transition, I would like to know one ; Can I expect that all of the further releases will be updated on librealsense and there is no more release of Windows SDK ? Thank you in advance. I apologize for not providing a further suggestion for solving your problem in the current Windows SDK. This topic is somewhat outside of my knowledge base. I did find a post by a person who had a problem with Projection and CreateColorMappedtoDepth though.... c++ - RealSense Projection::CreateColorImageMappedToDepth returns Null if one of the parameters internals were accessed … I have posted a documentation list for SDK 2.0, which I have established this morning is indeed a 'development' branch of Librealsense. The Windows SDKs as we known them have been discontinued. The SDK 2.0 will now be the means of providing Windows support to RealSense. MartyG, The problem what I have is quite similar to this thread: If I had enough time, I just transit my code to librealsense, but in this moment it is not an option (may be later). So what I wanted to do was quick-and-elegant to fix this problem just utilizing the Windows SDK functionalities. Anyway, it is likely that there is no such a solution, so I just determine to fix in quick-and-dirty way ... Thank you in anyway You are welcome. If the R200 camera calibrator tool cannot be used, the only other means of calibration I know of is a C# calibration script that was provided by an Intel support staff member. I do not know if it works with SR300 though, as it was published on a discussion about the R200 camera model.
https://community.intel.com/t5/Items-with-no-label/Using-my-own-calibration-parameters-for-SR300-in-mapping-depth/td-p/351276
CC-MAIN-2020-34
en
refinedweb
How to start this process instance?clandestino_bgd Feb 5, 2007 4:48 PM Dear JBPM people, I am wondering what it would be the best way to start process instance of process definition, which have in start-state some system action (it is not executed by human). On node-enter event some action should be triggered. Please look at the example below: <?xml version="1.0" encoding="UTF-8"?> <process-definition <swimlane name="admin" /> <action name="messageActionHandler" class="org.springmodules.workflow.jbpm31.JbpmHandlerProxy" config- <targetBean>messageActionHandler</targetBean> </action> <start-state <event type="node-enter"> <action ref- </event> <transition name="to_first" to="first"/> </start-state> <task-node <task swimlane="admin"> <controller> <variable name="font" /> </controller> </task> <transition name="to_end" to="end"/> </task-node> <end-state</end-state> </process-definition> As you can see, here the "start-state" does not contain task and after process instance is created, it cannot be started with: taskInstance = processInstance.getTaskMgmtInstance().createStartTaskInstance(); like in JBPM example webapp, because exception in createStartTaskInstance() occurs, since startTask is NULL. If I fetch the rootToken and call signall() on it, my Action on node-enter event is not invoked. Token rootToken = processInstance.getRootToken(); rootToken.signal(); Just to mention, everything works fine when the start-state is executed by human, and node-enter event with my action is defined in the next state. Any hint? Thank you and regards Milan 1. Re: How to start this process instance?estaub Feb 5, 2007 5:42 PM (in response to clandestino_bgd) Why do you want to do this? Start-nodes in workflow systems often have behavior different than other nodes - they're mostly just a marker for where to start. This is carried forward from general state machines. In general, trying to pin a lot of other behavior (e.g., a task) into the start node is probably asking for trouble. Even if it works in a particular scenario in a particular release, it's unlikely to be tested very well, and hence is likely to get inadvertently broken. But maybe you really need it - I don't know. -Ed Staub (newbie) 2. Re: How to start this process instance?estaub Feb 5, 2007 6:09 PM (in response to clandestino_bgd) Oops - 3. Re: How to start this process instance?clandestino_bgd Feb 6, 2007 2:15 PM (in response to clandestino_bgd) Oops - please ignore my earlier message - I didn't read carefully. I mean the manual of course, so I will answer here to my questions maybe somebody will find it useful, but I do not believe so :) 1. In the JBPM manual stands: Start state: supported event types: {node-leave} So, there is no miracle, why I could not invoke my action on "node-enter" event :) 2. The second question related how to start process instance in generic way and to cover the situations, when your start-state is either Task-node or some wait state, can be solved with additional method in JBPMTemplate (I am using springmodules-0.7) /** * Create and start process instance * * @author agaton * @param definitionId. * The definitionId to be started. * @return ProcessInstance that is created */ public ProcessInstance createStartProcessInstance(final long definitionId) { return (ProcessInstance)execute(new JbpmCallback() { public Object doInJbpm(JbpmContext context) { ProcessInstance processInstance = null; ProcessDefinition processDef = context.getGraphSession() .getProcessDefinition(definitionId); if(processDef != null) { processInstance = (ProcessInstance)processDef.createInstance(); if(processInstance != null) { if(processInstance.getTaskMgmtInstance() .getTaskMgmtDefinition().getStartTask()!=null){ processInstance.getTaskMgmtInstance() .createStartTaskInstance(); } else{ processInstance.getRootToken().signal(); } context.save(processInstance); } } return processInstance; } }); } So, if start-state has task (startTask!=null) createStartTaskInstance() is invoked. If opposite, rootToken is signaled. Thank you for ignoring my questions, apart for Mr Staub, whom I am thankful as a polite and nice person, since it helped me to realize how stupid my question was. Cheers Milan
https://developer.jboss.org/message/383867
CC-MAIN-2020-34
en
refinedweb
. You can generate C++ code that uses a limited subset of C++ language features. By generating code with a namespace, you can more easily integrate your code with other source code that might have identical function or data type names. When you use a namespace, the code generator packages all the generated functions and type definitions into the namespace, except for the generic type definitions contained in tmwtypes.h and the hardware-specific definitions in rtwtypes.h. The example main file and function are not packaged into the namespace. Specify the namespace by using the CppNamespace configuration object option. For example, to generate C++ code in a namespace called generated, enter: cfg = coder.config('dll'); cfg.TargetLang = 'C++'; cfg.CppNamespace = 'generated'; codegen -config cfg foo To specify a namespace from the app, at the Generate Code step, select More Settings > All Settings, and then modify the C++ Namespace field. For an example that uses namespaces, see Integrate Multiple Generated C++ Code Projects. To attain more object-oriented code, you can generate C++ code with a class interface. The entry-point function or functions are produced as methods in a C++ class. Specify the class interface by using the CppInterfaceStyle property. Designate the name of the generated class with CppInterfaceClassName. For example: cfg = coder.config('lib'); cfg.GenCodeOnly = true; cfg.TargetLang = 'C++'; cfg.CppInterfaceStyle = 'Methods'; cfg.CppInterfaceClassName = 'myClass'; codegen foog -config cfg -report -d withClass For more information, see Generate C++ Code with Class Interface. The code generator does not support the generation of a C++ class directly from a MATLAB class. If you separately generate C and C++ code for the same MATLAB function, and inspect the generated source code, then there are implementation differences. These are some notable differences:.
https://www.mathworks.com/help/coder/ug/cpp-code-generation.html
CC-MAIN-2020-34
en
refinedweb
Introduction to PLINQ Parallel LINQ (PLINQ) is a parallel implementation of the Language-Integrated Query (LINQ) pattern. PLINQ implements the full set of LINQ standard query operators as extension methods for the System.Linq namespace and has additional operators for parallel operations. PLINQ combines the simplicity and readability of LINQ syntax with the power of parallel programming. Tip If you're not familiar with LINQ, it features a unified model for querying any enumerable data source in a type-safe manner. LINQ to Objects is the name for LINQ queries that are run against in-memory collections such as List<T> and arrays. This article assumes that you have a basic understanding of LINQ. For more information, see Language-Integrated Query (LINQ). What is a Parallel query?. For more information, see Understanding Speedup in PLINQ. Note This documentation uses lambda expressions to define delegates in PLINQ. If you are not familiar with lambda expressions in C# or Visual Basic, see Lambda Expressions in PLINQ and TPL.. If you are not familiar with LINQ, see Introduction to LINQ (C#) and Introduction to LINQ (Visual Basic). In addition to the standard query operators, the ParallelEnumerable class contains a set of methods that enable behaviors specific to parallel execution. These PLINQ-specific methods are listed in the following table. num % 2 == 0 select num; Console.WriteLine("{0} even numbers out of {1} total", evenNums.Count(), source.Count()); // The example displays the following output: // 5000 even numbers out of 10000 total Dim source = Enumerable.Range(1, 10000) ' Opt in to PLINQ with AsParallel Dim evenNums = From num In source.AsParallel() Where num Mod 2 = 0 Select num Console.WriteLine("{0} even numbers out of {1} total", evenNums.Count(), source.Count()) ' The example displays the following output: ' 5000 even numbers out of 10000 total method and the System.Linq.ParallelExecutionMode enumeration to instruct PLINQ to select the parallel algorithm. This is useful when you know by testing and measurement that a particular query executes faster in parallel. For more information, see How to: Specify the Execution Mode in PLINQ. Degree of Parallelism By default, PLINQ uses all of the processors on the host computer. You can instruct PLINQ to use no more than a specified number of processors by using the WithDegreeOfParallelism method. This is useful when you want to make sure that other processes running on the computer receive a certain amount of CPU time. The following snippet limits the query to utilizing a maximum of two processors. var query = from item in source.AsParallel().WithDegreeOfParallelism(2) where Compute(item) > 42 select item; Dim. An AsOrdered sequence is still processed in parallel, but its results are buffered and sorted. Because order preservation typically involves extra work, an AsOrdered sequence might be processed more slowly than the default AsUnordered sequence. Whether a particular ordered parallel operation is faster than a sequential version of the operation depends on many factors. The following code example shows how to opt in to order preservation. var evenNums = from num in numbers.AsParallel().AsOrdered() where num % 2 == 0 select num; Dim evenNums = From num In numbers.AsParallel().AsOrdered() Where num Mod 2 = 0 Select num For more information, see Order Preservation in PLINQ. method. When you use AsSequential, all subsequent operators in the query are executed sequentially until AsParallel is called again. For more information, see How to: Combine Parallel and Sequential LINQ Queries. method, and the ParallelMergeOptions enumeration. For more information, see Merge Options in PLINQ. The ForAll Operator In sequential LINQ queries, execution is deferred until the query is enumerated either in a foreach ( For Each in Visual Basic) loop or by invoking a method such as ToList , ToArray , or ToDictionary. In PLINQ, you can also use foreach to execute the query and iterate through the results. However, foreach itself does not run in parallel, and therefore, it requires that the output from all parallel tasks be merged back into the thread on which the loop is running. In PLINQ, you can use foreach when you must preserve the final ordering of the query results, and also whenever you are processing the results in a serial manner, for example when you are calling Console.WriteLine for each element. For faster query execution when order preservation is not required and when the processing of the results can itself be parallelized, use the ForAll method to execute a PLINQ query. ForAll does not perform this final merge step. The following code example shows how to use the ForAll method. System.Collections.Concurrent.ConcurrentBag<T> is used here because it is optimized for multiple threads adding concurrently without attempting to remove any items.))); Dim nums = Enumerable.Range(10, 10000) Dim query = From num In nums.AsParallel() Where num Mod 10 = 0 Select num ' Process the results as each thread completes ' and add them to a System.Collections.Concurrent.ConcurrentBag(Of Int) ' which can safely accept concurrent add operations query.ForAll(Sub(e) concurrentBag.Add(Compute(e))) The following illustration shows the difference between foreach and ForAll with regard to query execution. Cancellation PLINQ is integrated with the cancellation types in .NET Framework 4. (For more information, see Cancellation in Managed Threads.) Therefore, unlike sequential LINQ to Objects queries, PLINQ queries can be canceled. To create a cancelable PLINQ query, use the WithCancellation operator on the query and provide a CancellationToken instance as the argument. When the IsCancellationRequested property on the token is set to true, PLINQ will notice it, stop processing on all threads, and throw an OperationCanceledException. It is possible that a PLINQ query might continue to process some elements after the cancellation token is set. For greater responsiveness, you can also respond to cancellation requests in long-running user delegates. For more information, see How to: Cancel a PLINQ Query.. For more information, see How to: Handle Exceptions in a PLINQ Query. Custom Partitioners In some cases, you can improve query performance by writing a custom partitioner that takes advantage of some characteristic of the source data. In the query, the custom partitioner itself is the enumerable object that is queried. int[] arr = new int[9999]; Partitioner<int> partitioner = new MyArrayPartitioner<int>(arr); var query = partitioner.AsParallel().Select(x => SomeFunction(x)); Dim arr(10000) As Integer Dim partitioner As Partitioner(Of Integer) = New MyArrayPartitioner(Of Integer)(arr) Dim query = partitioner.AsParallel().Select(Function(x) SomeFunction(x)) PLINQ supports a fixed number of partitions (although data may be dynamically reassigned to those partitions during run time for load balancing.). For and ForEach support only dynamic partitioning, which means that the number of partitions changes at run time. For more information, see Custom Partitioners for PLINQ and TPL.. For more information, see Concurrency Visualizer and How to: Measure PLINQ Query Performance.
https://docs.microsoft.com/en-us/dotnet/standard/parallel-programming/introduction-to-plinq
CC-MAIN-2020-34
en
refinedweb
Flattery Flattery is a library for building HTML elements using Widgets. Widgets are just Dart objects whose purpose is to represent some user interface element. They are implemented using the awesome dart:html package, and do not try to hide it - so you can use your HTML/CSS knowledge to enhance existing widgets, and to create your own, type safely. Usage A simple usage example: import 'dart:html' hide Text; import 'package:flattery/flattery_widgets.dart'; /// Simple Counter Model. class Counter { int value = 0; } /// A Counter Widget. /// /// By implementing [ShadowWidget], we make our widget use a shadow root /// to isolate its contents. class CounterView extends Counter with Widget, ShadowWidget { final text = Text('')..style.textAlign = 'left'; /// Build the component's Element. /// Uses a 2x2 grid to place the child Widgets Element build() => Grid(columnGap: '10px', classes: [ 'main-grid' ], children: [ [text, text], // row 1 (repeat the element so it takes up both columns) [ // row 2 contains 2 buttons to inc/decrement the counter Button(text: 'Increment', onClick: (e) => value++), Button(text: 'Decrement', onClick: (e) => value--), ] // row 2 ]).root; CounterView() { stylesheet = '* { font-family: sans-serif; }' 'button { height: 4em; }' '.main-grid { width: 25em; }'; _update(); } /// All model's setters that affect the state of the view need to be /// overridden in the Widget extending it, so that they update the view. @override set value(int value) { super.value = value; _update(); } _update() => text.text = 'The current count is $value'; } Building pub get Run the example webdev serve example Running the tests Unit tests: pub run test Use option -r jsonor r -expandedto see details. Browser tests: pub run test -p chrome Features and bugs Please file feature requests and bugs at the issue tracker. Links Dart Test Documentation: Libraries - flattery - Flattery is a library for building HTML elements using Widgets. [...] - flattery_widgets - Extension of main flattery library that exposes several useful widgets to make it easier to implement web applications.
https://pub.dev/documentation/flattery/latest/
CC-MAIN-2020-34
en
refinedweb
Cognito identity pools provide an easy way to enable your users to have limited access to your AWS backend. With developer authenticated identities, you can integrate Cognito into your existing authentication process. 1. Create an Identity Pool Go to the Cognito developers console and click “Manage Identity Pools,” then “Create new identity pool.” Name your app, and decide if you want to enable unauthenticated identities. Next, expand the “Authentication providers” dropdown and select the “custom” tab. Provide a developer authenticated name for your backend. This can be any string, but remember, you cannot change it after setting it. This will be used by your backend to identify itself. Clicking “Create Pool” prompts you to set up IAM roles for your users. Make sure to do this, or your users won’t have access to any AWS resources. 2. Backend Setup In order to create a Cognito identity, you will need credentials from your own identity provider. After creating your credentials, you can create a Cognito identity using the GetOpenIdTokenForDeveloperIdentity api call. Here’s an example in TypeScript: ... const identityClient = new CognitoIdentity(); const params = ( credentials: string ): CognitoIdentity.GetOpenIdTokenForDeveloperIdentityInput => ({ IdentityPoolId: "<YOU-COGNITO-IDENTITY-POOL-ID>", Logins: { "<YOUR-DEVELOPER-AUTHENTICATED-NAME>": credentials } }); const openIdRequest = await identityClient .getOpenIdTokenForDeveloperIdentity(params(deviceHashCode)) .promise(); ... Make sure to return the cognitoToken and the identityId from the GetOpenIdTokenForDeveloperIdentity call to your client. These are used to grant your client access to AWS resources. 3. Frontend Setup Now that you have an identity id and a token from Cognito, you can set up your credentials on the frontend. Here’s an example in TypeScript: import * as AWS from "aws-sdk"; AWS.config.region = "<YOUR-AWS-REGION>"; AWS.config.credentials = new AWS.CognitoIdentityCredentials({ IdentityPoolId: "<IDENTITY-POOL-ID>", IdentityId: "<IDENTITY-ID-RETURNED-FROM-YOUR-BACKEND>", Logins: { "cognito-identity.amazonaws.com": "<COGNITO-TOKEN-RETURNED-BY-SERVER>" } }); That’s it! You should now be able to make authorized requests to AWS resources.
https://spin.atomicobject.com/2020/02/26/authenticated-identities-cognito-identity-pools/
CC-MAIN-2020-34
en
refinedweb
Varnish is an Http accelerator designed for content-heavy websites and highly consumable APIs. You can easily spin up a Varnish server on top of your Azure Web Apps to boost your website's performance. Varnish can cache web pages and provide content to your website users blazing fast. This blog post shows you how to install and configure Varnish with sample configuration files. Step 1: Create a cloud service using Linux virtual machine on Azure First, you need to setup a cloud service with a Linux virtual machine, click here for details. For most web apps a single VM is sufficient. However, if you need a failure resilient front end cache, I recommend using at least two virtual machines on your cloud service. For the purpose of this blog post, I will be using Ubuntu LTS. Step 2: Install Varnish on all VMs It is recommended to use Varnish packages provided by varnish-cache.org. The only supported architecture is amd64 for Ubuntu LTS. For other Linux distributions, please see install instructions here. Connect to each virtual machine using PuTTY and do the following as root user: - Add the security key [Debian and Ubuntu]. wget apt-key add GPG-key.txt - Add the package URL to apt-get repository sources list. echo "deb precise varnish-3.0" | sudo tee -a /etc/apt/sources.list - Update the package manager and download/install Varnish Cache apt-get update apt-get install varnish Step 3: Varnish configuration The default settings are not set to run on front-facing port of 80(HTTP) or 443 (for HTTPS) and hence this needs to modified to use port you need for your web app. Port 80 is the default TCP port for HTTP traffic. If you plan on using SSL with your website, you will also need to open port 443 which is the default port for HTTPS traffic. Login to Azure Preview portal and select your virtual machine to add the endpoint for port 80 (HTTP) or 443 (HTTPS). This needs to be done for every virtual machine. The configuration file on Ubuntu is at /etc/default/varnish. Using your favorite editor to edit the file, in this blog post I’m using nano editor. nano /etc/default/varnish The file will have a few default settings. If you scroll down, you will see a block of text defining the Varnish daemon options starting with the text DAEMON_OPTS, similar to: DAEMON_OPTS="-a :6081 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s malloc,256m" Change the port from 6081 to 80 (HTTP) or 443 (HTTPS) : DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s malloc,256m" By default the port 80 or 443 is blocked by the firewall , and hence you need to explicitly open the port by using the iptables command Using iptables: By running the following commands a root can open port 80 allowing regular Web browsing from websites that communicate via port 80. iptables -A INPUT -p tcp -m tcp --sport 80 -j ACCEPT iptables -A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT To allow access to secure websites you must open port 443 as well. iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT Step 4: Modifying the default VCL file under /etc/varnish/ Varnish uses a .vcl file (default located at /etc/varnish/ as default.vcl) containing instructions written in VCL Language in order to run its program. This is used to define how Varnish should handle the requests and how the document caching system should work. Open the editor once again to modify the contents of default.vcl (located under /etc/varnish/) by using the following command. nano /etc/varnish/default.vcl Create a default backend with .host and .port referring to your Azure web app. Here is a sample of basic VCL configuration file (replace my-azure-webapp.azurewebsites.net with your actual web application custom domain or azurewebsite.net domain URL). Note, if you are using Varnish 4.0 and above you need to include vcl 4.0 at the beginning of the file. To learn more about Varnish 4.0 VCL documentation click here. vcl 4.0; backend default { .host = "my-azure-webapp.azurewebsites.net"; .port = "80"; .connect_timeout = 600s; .first_byte_timeout = 600s; .between_bytes_timeout = 600s; } sub vcl_recv { set req.http.host = "my-azure-webapp.azurewebsites.net"; set req.backend = default; return (lookup); } Troubleshooting If you run into any issues with Varnish server, you can view the logs by running the following command. varnishlog cmd Browse your site again and look at the log in the your VM. For more information, click here. Sample VCL configuration files - WordPress If you are using a WordPress web app, click here to download a sample Varnish configuration for WordPress. - Drupal If you are using a Drupal web app, click here to download a sample Varnish configuration for Drupal.
https://azure.microsoft.com/hu-hu/blog/using-varnish-as-front-end-cache-for-azure-web-apps/
CC-MAIN-2018-05
en
refinedweb
In this beginner to intermediate tutorial I'm going to show you how to play HD video without the inevitable blurring that occurs when the video is enlarged. The reason for this is that I'm getting a bit tired of visiting YouTube or other sites that present HD video with a full screen option only to discover, when I click the Full Screen button, that I suddenly need the prescription for my glasses changed. The issue is not the video but how the Flash Player handles the process of going full screen. Let's find out how to do things properly.. Introduction When you play video in the Flash Player, the video, for all intents and purposes, is laid into the stage. Click the full screen button and the stage gets bigger. When the stage gets bigger it brings the video along with it. Enlarge a 720 by 480 video to 1280 by 720 and is it any wonder that the video gets fuzzy? Adobe wrestled with this issue when they were introducing the ability to play full HD video through the Flash Player. Their solution, introduced in Flash Player 9.0.115.0 , was extremely elegant. Instead of enlarging the stage, why not "hover" the vid in a rectangle "above" the stage and have the designer or developer decide whether to enlarge the stage or just a piece of it. This is accomplished through another piece of clever engineering on Adobe’s part: Hardware acceleration and scaling. Hardware acceleration is applied through the Flash Player. If you right-click (PC) or ctrl-click (Mac) on a swf playing in a web page you'll open the Flash Player context menu. Select Settings and you'll be presented with the the settings window shown in Image 1. If you select Enable hardware acceleration you are able to view full screen HD video. If you leave it deselected, clicking a full screen button results in the Player using the Scaling API used when an FLV file is taken out to full screen. The neat thing about this is even though you have selected hardware acceleration, it is only used when needed. Thus, when a Full Screen button is clicked only the rectangle and it contents - a video in this instance - are scaled to full screen and hardware acceleration takes over to play the video. Having given you the briefing on how we got you reading this tutorial, follow these steps to create a full screen HD video experience: Step 1: Download the the Exercise files Included with the download is an .mp4 file- Vultures.mp4. It is a clip from a TV series produced by my College, Humber institute of Technology and Advanced Learning. We'll be using this file for the project though mov, f4v and physically large FLV files can also be used. You may have heard a lot of "buzz" around HD video and the .mp4 format over the past couple of years and wondered what the chatter is all about. Here’s a brief "elevator pitch": The key to the .mp4 format is the AVC/H.264 video standard introduced to the Flash Player in August 2007. The .mp4 standard, to be precise, is known as MPEG-4 which is an international standard developed by the Motion Pictures Expert Group (MPEG) and the format also has ISO recognition. What makes these files so attractive to Flash designers and developers is that MPEG-4 files aren’t device dependent. They can just as easily be played on an HD TV, iPod or Playstation as they can be played in a browser. Also, thanks to hardware acceleration and multithreading support built into the Flash Player, you can play video at any resolution and bit depth up to, and including the full HD 1080p resolution you watch on HD TV’s. The one aspect of the MPEG-4 standard that I find rather intriguing is that, like the XFL format just coming into use throughout the CS4 suite, it is a "container" format. What's meant by this is .mp4 files can store several types of data on a number of tracks in the file. What it does is synchronize and interleave the data meaning an .mp4 file can also include metadata, artwork, subtitles and so on that can potentially be accessed by Flash. That’s the good news. The bad news is even though the MPEG-4 container can contain multiple audio and video tracks, the Flash Player currently only plays one of each and ignores the rest. The other bit of bad news is this format does not support transparency meaning, if you want to add an alpha channel, you are back to the FLV format. Finally, H.264 .mp4 files require heavy duty processing power. Adobe has been quite clear in letting us know this content is best viewed on dual core PC’s and Macs. The shift to these processors has been underway for a couple of years but it will still be a couple of years before all computers will be able to manage the processor demands this format requires. I have barely skimmed the surface of this format. If you want to take a "deep dive" into this format, check out H.264 For The Rest Of Us written by Kush Amerasinghe at Adobe. It's a tremendous primer for those of you new to this technology. Step 2: Big It Up! Open the BigItUp.fla file located in the download. If this is your first time working with an H264 file or going full screen, you may find the Flash stage dimensions - 1050 by 500 - to be rather massive. We need the stage space to accommodate the video which has a physical size of 854 x 480 and to leave room for the button in the upper left corner of the stage. Step 3: Geometry Add the following ActionScript to the actions layer: import flash.geom.*; import flash.display.Stage; var mySound:SoundTransform; var myVideo:Video; var nc:NetConnection = new NetConnection(); nc.connect(null); var ns:NetStream = new NetStream(nc); ns.client = this; btnBig.buttonMode = true; We start by bringing in the geometry package and the Stage class in order to take the "hovering" video to full screen. The next two variables - mySound and myVideo - are going to be used to set the volume level of the audio and to create a Video Object. With that housekeeping out of the way we set up the NetConnection and NetStream objects that will allow the video to play. The final line puts the movieclip used to get the video to full screen into buttonMode. Step 4: Functions Add the following ActionScript: ns.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler); function netStatusHandler(evt:NetStatusEvent):void { if(evt.info.code == "NetStream.FileStructureInvalid") { trace("The MP4's file structure is invalid."); } else if(evt.info.code == "NetStream.NoSupportedTrackFound") { trace("The MP4 doesn't contain any supported tracks"); } } function onMetaData(md:Object):void { myVideo.width = md.width; myVideo.height = md.height; } The first function lets us do some error checking. Not all mp4 files are created alike and if the video doesn’t play it would be nice to know what the problem might be. In this case we are going to listen for a couple of error messages from the NetStream class that are germane to mp4 files. The first one is a check to make sure the file is not corrupt or is a format that is not supported. Just because a file will play in the Quicktime player does not mean it will play in Flash. The next one makes sure the audio and video tracks are supported. For example if the H.264 encoding is not used on the video track or AAC encoding is not applied to the audio track, you'll have issues. The next function goes into the video file’s metadata to obtain the width and height values for the Video Object. Step 5: goFullScreen Enter the following ActionScript: function goFullScreen(evt:Object):void { var scalingRect:Rectangle = new Rectangle(myVideo.x, myVideo.y, myVideo.width, myVideo.height); stage["fullScreenSourceRect"] = scalingRect; if(stage.displayState == StageDisplayState.NORMAL) { stage.displayState = StageDisplayState.FULL_SCREEN; } else { stage.displayState = StageDisplayState.NORMAL; } }; btnBig.addEventListener(MouseEvent.CLICK, goFullScreen);< This is where the "magic" happens. This function creates the rectangle used to hold the video and its size is set to match those of the Video Object’s dimensions pulled out of the second function in the previous code block. The next line sets the fullScreenSourceRect property of the stage to the dimensions of the rectangle just created. The conditional statement making up the remainder of the code block checks the current state of the stage size from normal to full screen or vice versa. This is how the video goes full screen. The Video Object is laid into this source rect, not the stage, which means it can expand or contract without the stage doing likewise and "fuzzing" the video. The last line uses the button on the stage to go full screen. Step 6: myVideo Enter the following ActionScript: myVideo = new Video(); myVideo.x = 185; myVideo.y = 5; addChild(myVideo); myVideo.attachNetStream(ns); ns.play("Vultures.mp4"); mySound = ns.soundTransform; mySound.volume = .8; ns.soundTransform = mySound; The first code block tells Flash the variable "myVideo" is the name for a Video Object which is located 185 pixels fro the left edge of the enormous stage and is 5 pixels down from the top. The addChild() method puts the Video Object on the stage and the remaining two lines connect the video object to the NetStream and start the video playing. The final code block looks into the video’s audio track which is being fed into the project through the NetStream and lowers the audio volume to 80%. Step 7: Save Save the file to the same folder as the video. Normally, at this stage of the tutorial I would also tell you to test the swf. You can, but the button won’t work. The best you can expect is to see the video play in the swf. The Full Screen feature is driven by the HTML wrapper of your swf, not Flash. Let’s deal with that one. Step 8: Publish Settings Select File > Publish Settings. When the Publish Settings dialog box opens, select the SWF and HTML options. Step 9: Player Version Click the Flash tab. Select Flash Player 9 or Flash Player 10 in the Player pop down. Remember HD video can only be played in Flash Player 9 or later. Step 10: HTML Click the HTML tab. In the Template pop down menu select Flash Only-Allow Full Screen. Click the Publish button to create the SWF and the HTML file. Step 11: Test Save the file, quit Flash and open the HTML page in a browser. Go ahead, click the "Big it up!" button. What about the Component? What about it? Real Flash designers and developers don’t use no "steenking" components. In December of 2007, Adobe quietly released Update 3 for the Flash Player 9. I use the word "quietly" because mixed in with the usual bug fixes and tweaks, they slipped in an updated version of the FLVPlayback component that allowed it to play HD video. Here’s how: Step 12: New Document Open a new Flash ActionScript 3.0 document and save it to the same folder as the Vultures video. Step 13: FLVPlayback Component Select Window>Components and in the Video components, drag a copy of the FLVPlayback component to the stage. Step 14: Component Inspector Open the Component Inspector. You need to do two things here. Select the SkinUnderAllNoCaption.swf in the skin area, in the source area navigate to the Vultures.mp4 file and add it to the Content Path dialog box. Click the match source dimensions check box and click OK. Flash will go into the video and grab the metadata. When that finishes, the dialog box will close and the component will grow to the dimensions of the video. Close the Component Inspector. Step 15: Modify > Document Select Modify > Document and click the Contents button to resize the stage to the size of the component .... sort of. When the stage is set to the size of the component it only resizes to the size of the video. The skin will be left hanging off the bottom of the stage which means it isn’t going to be visible in a web page. Change the height value to 525 pixels to accomodate the skin. Click OK to accept the change. Of course, now that you have changed the stage dimensions the component is hanging off the stage. Select the component and in the Properties Panel set the X and Y coordinates to 0. Step 16: Publish Settings Select File >Publish Settings and choose the SWF and HTML file types. Step 17: Player Version Click the Flash tab and select Flash Player 9. Step 18: HTML Click the HTML tab and select Flash Only- Allow Full Screen in the Templates pop down. Step 19: Publish Click the Publish button. When the SWF and the HTML file are published click OK. Save the file and quit Flash. Step 20: Test Open the HTML file in a browser. Click the Full Screen button to launch into Full Screen mode. Conclusion In this tutorial I've showed you two ways of smoothly going into full screen mode with Flash. The first method used ActionScript to make this possible and the key was creating a rectangle that "hovered" over the stage and was used to hold the video. The second example showed you how to use the FLVPlayback component to go full screen. As you've discovered, the key for both projects was not the ActionScript but the HTML wrapper that enabled the full screen playback. These tutorials always work locallly but I'm sure you're wondering if they would actually work online. I've posted both to prove that "Yes indeed, it can be done." The code approach in the first example can be found here. The video is kindly provided by Adobe and Red Bull and is a full 1080p production. The Vultures appear in an example that uses the component here. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/treat-your-viewers-to-a-full-screen-hd-video-experience--active-2497
CC-MAIN-2018-05
en
refinedweb
cc [ flag ... ] file ... -lxfn [ library ... ] #include <xfn/xfn.h> An attribute set is a set of attribute objects with distinct identifiers. The fn_attr_multi_get(3XFN) operation takes an attribute set as parameter and returns an attribute set. The fn_attr_get_ids(3XFN) operation returns an attribute set containing the identifiers of the attributes. Attribute sets are represented by the type FN_attrset_t. The following operations are defined for manipulating attribute sets. fn_attrset_create() creates an empty attribute set. fn_attrset_destroy() releases the storage associated with the attribute set aset. fn_attrset_copy() returns a copy of the attribute set aset. fn_attrset_assign() makes a copy of the attribute set src and assigns it to dst, releasing any old contents of dst. A pointer to the same object as dst is returned. fn_attrset_get() returns the attribute with the given identifier attr_id from aset. fn_attrset_count() returns the number attributes found in the attribute set aset. fn_attrset_first() and fn_attrset_next() are functions that can be used to return an enumeration of all the attributes in an attribute set. The attributes are not ordered in any way. There is no guaranteed relation between the order in which items are added to an attribute set and the order of the enumeration. The specification does guarantee that any two enumerations will return the members in the same order, provided that no fn_attrset_add() or fn_attrset_remove() operation was performed on the object in between or during the two enumerations. fn_attrset_first() returns the first attribute from the set and sets iter_pos after the first attribute. fn_attrset_next () returns the attribute following iter_pos and advances iter_pos. fn_attrset_add() adds the attribute attr to the attribute set aset, replacing the attribute's values if the identifier of attr is not distinct in aset and exclusive is 0. If exclusive is non-zero and the identifier of attr is not distinct in aset, the operation fails. fn_attrset_remove() removes the attribute with the identifier attr_id from aset. The operation succeeds even if no such attribute occurs in aset. fn_attrset_first() returns 0 if the attribute set is empty. fn_attrset_next() returns 0 if there are no more attributes in the set. fn_attrset_add() and fn_attrset_remove() return 1 if the operation succeeds, and 0 if the operation fails. Manipulation of attributes using the operations described in this manual page does not affect their representation in the underlying naming system. Changes to attributes in the underlying naming system can only be effected through the use of the interfaces described in xfn_attributes(3XFN). See attributes(5) for descriptions of the following attributes: FN_attribute_t(3XFN), FN_attrvalue_t(3XFN), FN_identifier_t(3XFN), fn_attr_get_ids(3XFN), fn_attr_multi_get(3XFN), xfn(3XFN), xfn_attributes.
http://www.shrubbery.net/solaris9ab/SUNWaman/hman3xfn/FN_attrset_t.3xfn.html
CC-MAIN-2018-05
en
refinedweb
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. VAT validation or removal of CC validation Hi , Can someone help me write in base_vat.psy a def check_vat_ro(self, vat) for Romania. I don't need a verification of lenght for the VAT but to be able to write the VAT without the country code. Here there are 2 types of companies: ones that have RO in the VAT id to pay the taxes and startup ones that don't have RO in the country code. I've tried to update the _ref_vat = but no luck. Can someone help me? Another solution would be to remove this def simple_vat_check(self, cr, uid, country_code, vat_number, context=None): ! what to comment out? Thank you. You don't need to rewrite, in fact Odoo (Openerp) is managing the VAT with country code..you can leave the RO in front, instead you have the vat_subjected field which is giving you the VAT payer. The validation is working OK but must be changed in an additional module because is checking on vies site, where you have only the registered companies for Export VAT Number...in shorten future will be post to checking directly on mfinante.ro (openapi.ro) with checks regarding VAT on Payment on anaf.ro sites. Hi Mihai, I understand that, and I saw that the validation is ok. The problem is that in the company profile you can't set a VAT ID (CUI) without a RO in front. The company is registered as a non payer of VAT so no RO in front of the VAT ID (CUI). How do I remove the CC validation? I checked the vatnumber.py in the python package but it looks ok and don't think the CC validation is done there. Regards, Andrei You have to input with RO in front and vat_subjected field gives you the payer of VAT, on reports you can set like this to print 'RO' only if it's vat payer: [[ o.partner_id.vat_subjected and o.partner_id.vat or o.partner_id.vat[2:] ]]. rewrite method in addon module : base_vat/base_vat.py def check_vat(self, cr, uid, ids, context=None): user_company = self.pool.get('res.users').browse(cr, uid, uid).company_id if user_company.vat_check_vies: check_func = self.vies_vat_check else: check_func = self.simple_vat_check for partner in self.browse(cr, uid, ids, context=context): if not partner.vat: continue #ADD THIS if partner.country_id.code and partner.vat.startswith(partner.country_id.code): vat_country, vat_number = self._split_vat(partner.vat) elif partner.country_id.code: vat_number = partner.vat vat_country = partner.country_id.code else: #if no country code -> # just raise error that country is required? pass if not check_func(cr, uid, vat_country, vat_number, context=context): return False return True If you want to put in in separate module, make it dependant on base_vat, and include (simply rewrite in new module) constraint definition in order to call this function _constraints = [(check_vat, _construct_constraint_msg, ["vat"])] hope it helps About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/vat-validation-or-removal-of-cc-validation-61185
CC-MAIN-2018-05
en
refinedweb
Creating and deleting event logs with C# .NET The Windows event viewer contains a lot of useful information on what happens on the system: Windows will by default write a lot of information here at differing levels: information, warning, failure, success and error. You can also write to the event log, create new logs and delete them if the code has the EventLogPermission permission. However, bear in mind that it’s quite resource intensive to write to the event logs. So don’t use it for general logging purposes to record what’s happening in your application. Use it to record major but infrequent events like shutdown, severe failure, start-up or any out-of-the-ordinary cases. There are some predefined Windows logs in the event log: Application, Security and System are the usual examples. However, you can create your own custom log if you wish. The key is the EventLog object located in the System.Diagnostics namespace: string source = "DemoTestApplication"; string log = "DemoEventLog"; EventLog demoLog = new EventLog(log); demoLog.Source = source; demoLog.WriteEntry("This is the first message to the log", EventLogEntryType.Information); The new log type was saved under the Applications and Services log category: You’ll probably have to restart the Event Viewer to find the new log type. You can write to one of the existing Windows logs as well by specifying the name of the log. So we can create a log source within the Application log as follows: string source = "DemoSourceWithinApplicationLog"; string log = "Application"; if (!EventLog.SourceExists(source)) { EventLog.CreateEventSource(source, log); } EventLog.WriteEntry(source, "First message from the demo log within Application", EventLogEntryType.Information); The log entry is visible within the Application log: It’s very easy to delete your custom log: string log = "DemoEventLog"; EventLog.Delete(log); The log will be deleted. Again, you’ll have to restart the event viewer to see that changes. You can view all posts related to Diagnostics here. hello, I want to write event logs but i don’t understand how to use EventLogPermission. please, could you give an example? thx
https://dotnetcodr.com/2014/11/04/creating-and-deleting-event-logs-with-c-net/
CC-MAIN-2018-05
en
refinedweb
Jig by itself is useless. You need “plugins” to do the work. The following documentation outlines what it takes to build plugins. One of the primary reasons Jig exists is to enable you to write your own. Contents The most basic plugin has just two files: (Wait, don’t run off and try to create these yet. The Jig command line tool will do this for you. Read on.) The pre-commit is your script and config.cfg contains info about your plugin. It will receive JSON data through the stdin stream. It’s expected to write to stdout if it has anything to say (or stderr if it runs into problems). Although a plugin doesn’t have to write anything. The config.cfg file contains the plugin name and bundle. It can also contain default settings but they aren’t required. Here’s an example: [plugin] bundle = mybundle name = myplugin [settings] Note If you plan on making more than one plugin and you’d like to keep them grouped together, keep the bundle identifier the same. If you want to add settings to your plugin which can be read by your pre-commit script, you can do that like this: [plugin] bundle = mybundle name = myplugin [settings] verbose = no foo = bar [help] verbose = Shoe more stuff Here’s a very simple pre-commit script written with Node.js. You can use any scripting language that you wish as long as it’s installed on the system that runs the plugin. #!/usr/bin/env node process.stdout.write('Always look on the bright side of life'); process.exit(0); Running this plugin with Jig will give you output like this: ▾ myplugin ✓ Always look on the bright side of life Before we get too deep into data formats, it’s a good time to mention testing. While it’s true that your plugins are probably short and simple, tests can provide you with a lot of benefit. Jig provides a framework for writing and running your tests. Let’s see it in action. Tests are ran using Jig’s command line tool. $ jig plugin test -h usage: jig plugin test [-h] [-r RANGE] PLUGIN positional arguments: plugin Path to the plugin directory optional arguments: -h, --help show this help message and exit --verbose, -v Print the input and output (stdin and stdout) --range RANGE, -r RANGE Run a subset of the tests, specified like [s]..[e]. Example -r 3..5 to run tests that have expectations for those changes. By using templates, Jig can get you going quickly. Let’s rewrite that Monty Python lyric plugin in...well Python. We’ll call the plugin bright-side and tell Jig the bundle name is pythonlyrics. (After all we’ll probably be creating more of these, might as well bundle them together.) $ jig plugin create bright-side pythonlyrics Created plugin as ./bright-side The default template is in Python, if we take a look at the pre-commit we can see that it starts with this: #!/usr/bin/env python The pre-commit Jig created is too verbose for this example. Remove everything in there and replace it with this: #!/usr/bin/env python import sys sys.stdout.write('Always look on the bright side of life') sys.exit(0) OK, let’s run the tests. $ jig plugin tests bright-side Could not find any tests: bright-side/tests. No tests. We can fix that. If you were writing these plugins without using Jig’s testing framework it would be a pain to test them. You’d either be creating the input data yourself by hand or using a carefully crafted Git repository. Jig has a way of making this dead simple. It takes a set of numbered directories and creates a Git repository for you that your tests can make assertions against. Warning This is a strange concept to understand at first. Look at some of the tests in Jig’s own common plugins if some real examples would help. To create your fixture we need to start a tests directory: $ mkdir bright-side/tests The next step is to represent the Git repository’s root commit. Just as the name implies, this is the very first commit in a repository (it’s special in Git terms because it’s the only commit that doesn’t have a parent). Numbering starts at 01. We’ll create an empty README file because we need something of substance for that first commit. $ mkdir bright-side/tests/01 $ touch bright-side/tests/01/README The second commit will be based off the first one, copy the directory to 02. $ cp -R bright-side/tests/01 bright-side/tests/02 We need something to change between 01 and 02 for there to be a commit. $ echo "The Life of Brian" > bright-side/tests/02/title.txt With these two directories, Jig has enough information to create an empty repository with the root commit represented by the contents of the 01 directory. The next commit, commit #2, will be based on the contents of the 02 directory. You don’t have to interact with Git at all to make this happen. It’s a feature of Jig’s testing framework and it comes for free. Now that we have a test fixture as a Git repository, run the tests. $ jig plugin test bright-side Missing expectation file: bright-side/tests/expect.rst. Still doesn’t work. But we’re getting closer. Jig’s testing file expect.rst is a bit unique. Instead of a script that runs, you document your plugin to test it using reStructuredText. Create bright-side/tests/expect.rst and edit it to read: Monty Python lyrics =================== The bright-side plugin simply reminds you to look on the bright side of life. .. expectation:: :from: 01 :to: 02 reStructuredText is a plain text markup language. It’s similar to Markdown or a Wiki markup language. Let’s run this test and we can see how this document serves as the description of the behavior we expect from the plugin. $ jig plugin test bright-side Finally we got something. The key to this is in the .. expectations:: directive you saw in the expect.rst file. This tells Jig to run the plugin sending it the difference between the first commit (01) and the second commit (02) in JSON format. If we update our expect.rst file one we can get a passing test. Warning Yes, that’s Unicode. It’s best that you copy and paste instead of trying to type this out. Monty Python lyrics =================== The bright-side plugin simply reminds you to look on the bright side of life. .. expectation:: :from: 01 :to: 02 ▾ bright-side ✓ Always look on the bright side of life Run the tests again: $ jig plugin test bright-side 01 – 02 Pass Pass 1, Fail 0 You’ve just written automated tests for your new plugin. While this is a great first step, it was really simple and not very useful. The next sections will explore the input and output format (in JSON) and how you can work with this data to make something that actually helps. For plugins to operate in Jig’s arena, they have to understand the data coming in and the data going out. It’s JSON both ways. The following outlines what you can expect. The input format is organized by filename. If we turn on verbose output when we run the tests we can see exactly what Jig is feeding our bright-side plugin. $ jig plugin test --verbose bright-side 01 – 02 Pass stdin (sent to the plugin) { "files": [ { "diff": [ [ 1, "+", "The Life of Brian" ] ], "type": "added", "name": "title.txt", "filename": "/Users/ericidle/bright-side/tests/02/title.txt" } ], "config": {} } stdout (received from the plugin) Always look on the bright side of life ················································································ Pass 1, Fail 0 The JSON object has two members, files and config. { "files": [ ... ], "config": { ... } } The files object contains data about which files changed and what changed within them. If we take a look at the first element in the files array, we can see it contains an object with diff, type, name, and filename member. The filename value is the absolute path of the file. { "diff": [ ... ], "type": "added", "name": "title.txt", "filename": "/Users/ericidle/bright-side/tests/02/title.txt" } The name value is the name of the filename relative to the Git repository. { "diff": [ ... ], "type": "added", "name": "title.txt", "filename": "/Users/ericidle/bright-side/tests/02/title.txt" } The type value is the overall action that has occurred to the file. This can be one of 3 values. { "diff": [ ... ], "type": "added", "name": "title.txt", "filename": "/Users/ericidle/bright-side/tests/02/title.txt" } The diff is an an array. Each member in the array is also an array and always contains three values. { "diff": [ [ 1, "+", "The Life of Brian" ] ], "type": "added", "name": "title.txt", "filename": "/Users/ericidle/bright-side/tests/02/title.txt" } Along with information about the files, Jig will also pass configuration settings for a plugin. It will use the default settings found in the [settings] section of $PLUGIN/config.cfg and those settings can be overridden by $GIT_REPO/.jig/plugins.cfg. Our bright-side plugin doesn’t currently have any default settings so let’s add some and see how it affects the JSON input data. Edit bright-side/config.cfg: [plugin] bundle = pythonlyrics name = bright-side [settings] sing_also = no second_chorus_line = no Run the tests again: $ jig plugin test --verbose bright-side 01 – 02 Pass stdin (sent to the plugin) { "files": [ ... ], "config": { "second_chorus_line": "no", "sing_also": "no" } } ... The settings are parsed and made available as string values only. If you want other data types you’ll need to convert them yourself. Note Why string values instead of integers or booleans? The INI format doesn’t support data types. As opposed to trying to guess the data type and take the chance of getting it incorrect, the conversion is left to the plugin author. While testing, Jig provides a directive that allows us to test our plugin based on different settings. Edit bright-side/tests/expect.rst and add another section and test to our expectations. Monty Python lyrics =================== The bright-side plugin simply reminds you to look on the bright side of life. .. expectation:: :from: 01 :to: 02 ▾ bright-side ✓ Always look on the bright side of life Sing to me ~~~~~~~~~~ It will sing to you. Change the ``sing_also`` to ``yes`` to get some additional output. .. plugin-settings:: sing_also = yes second_chorus_line = no .. expectation:: :from: 01 :to: 02 ▾ bright-side ✓ Always look on the bright side of life Our pre-commit script hasn’t been altered to use this new setting so running the test again will show that this passes. $ jig plugin test bright-side 01 – 02 Pass 01 – 02 Pass Pass 2, Fail 0 Change the :file:bright-side/pre-commit script to this: #!/usr/bin/env python # coding=utf-8 import sys import json data = json.loads(sys.stdin.read()) if data['config']['sing_also'] == 'yes': message = '♫ Always look on the bright side of life ♫' else: message = 'Always look on the bright side of life' sys.stdout.write(message) sys.exit(0) The next test result will show a failure because of our altered setting. 01 – 02 Pass ♫ Always look on the bright side of life ♫ + ✓ ♫ Always look on the bright side of life ♫ Pass 1, Fail 1 Change the expectation to look for our singing version of the chorus. .. plugin-settings:: sing_also = yes second_chorus_line = no .. expectation:: :from: 01 :to: 02 ▾ bright-side ✓ ♫ Always look on the bright side of life ♫ With that change it should bring our tests back to a passing state. $ jig plugin test bright-side 01 – 02 Pass 01 – 02 Pass Pass 2, Fail 0 Warning The .. plugin-settings:: directive is sticky to a section. It doesn’t apply just once for the next .. expectation:: directive but will continue to apply until a section change. Sections in our example are separated by ~~~~~~~~~~~~~~~. Settings are useful to control behavior but they need to communicate their intent well to the user. Documentation is good, right? You can provide help messages about your plugin to compensate for this. Edit bright-side/config.cfg: [plugin] bundle = pythonlyrics name = bright-side [settings] sing_also = no second_chorus_line = no [help] sing_also = Sing the chorus to me, either yes or no second_chorus_line = Also display or sing the second chorus line with the first, either yes or no Now these messages will be displayed if the user runs jig config about. Now that we are familiar with the input format, it’s time to improve our pre-commit script and give it a little more whizbang by specifying output. Jig supports three basic types of messages. The default type is ``info`` They are displayed to the user with differently and tallied individually at the end of Jig’s execution. ▾ Plugin 1 ✓ info ⚠ warn ✕ stop A simple message is not specific to a file or a line in a file. It’s used to communicate something to the user that is more general. Examples: [ 'Your commit looks really good, excellent job' ] More than one message: [ 'Your commit looks really good, excellent job', 'Give yourself a pat on the back' ] This will produce output similar to this: ▾ My-Plugin ✓ Your commit looks really good, excellent job ✓ Give yourself a pat on the back The default message type is info but you can change it by providing an array of [TYPE, MESSAGE]. [ ['w', 'Your commit looks a little janky'], ['s', 'On second thought, this is a horrible commit'] ] The output will look like this: ▾ My-Plugin ⚠ Your commit looks a little janky ✕ On second thought, this is a horrible commit File messages are specific to files but not to a particular line. Examples: { 'myMainFile.javascript': [ 'The extension should probably just be .js', 'You should not camelCase your JavaScript filenames' ] } The output will include the filename: ▾ My-Plugin ✓ myMainFile.javascript The extension should probably just be .js ✓ myMainFile.javascript You should not camelCase your JavaScript filenames You can specify the type of message: { 'myMainFile.javascript': [ ['i', 'The extension should probably just be .js'], ['w', 'You should not camelCase your JavaScript filenames'], ['s', 'Really? Putting "File" in the name of your file?'] ] } The output is: ▾ My-Plugin ✓ myMainFile.javascript The extension should probably just be .js ⚠ myMainFile.javascript You should not camelCase your JavaScript filenames ✕ myMainFile.javascript Really? Putting "File" in the name of your file? These are very similar to file messages but include the line number. Examples: { 'utils.sh': [ [1, 's', 'You don't have a hashbang (#!) as the first line'], ] } This will include the line number in the output: ▾ My-Plugin ✕ line 1: utils.sh You don't have a hashbang (#!) as the first line Multiple messages for the file can be specified: { 'utils.sh': [ [1, 's', 'You don't have a hashbang (#!) as the first line'], [5, 'i', 'This is a bash style if statement and will fail with sh'], [500, 'w', "Getting a bit long is it not? You could use Python instead...'] ] } The output: ▾ My-Plugin ✕ line 1: utils.sh You don't have a hashbang (#!) as the first line ✓ line 1: utils.sh This is a bash style if statement and will fail with sh ⚠ line 1: utils.sh Getting a bit long is it not? You could use Python instead... In our examples for the input formatting, our pre-commit script simply printed the messages directly to standard out. They were not in JSON format. Jig is forgiving of this and will not reject messages that come in this way. The output will be treated as simple messages but you’ll have to format newlines yourself. The following examples are equivalent: # As a string with a newline sys.stdout.write('Simple message one') sys.exit(0) # As JSON sys.stdout.write(json.dumps( ['Simple message one'])) sys.exit(0) The output for both of these would be ▾ My-Plugin ✓ Simple message one Jig pays attention to both the standard out and the standard error streams. If your plugins exits with an exit code of 1, any data that is written to stderr will be displayed to the user. ▾ jslint ✕ You need the jslint command line tool installed before running this plugin When you are writing tests for you plugin, these are formatted in a friendly way to aid in debugging. Actual ················································································ Exit code: 1 Std out: (none) Std err: You need the jslint command line tool installed before running this plugin Plugins should always exit with 0 or 1. An exit code of 0 means the plugin functioned normally. Even if it generated warnings or stop messages. If your plugin fails to function as expected, it should exit with 1. This indicates to Jig that a problem exists and the output, if any, from the plugin is not a normal collection of messages that Jig will understand. A common reason for exiting with 1 would be a missing dependency. import sys from subprocess import call, PIPE # which exits with 1 if it can't find the command if call(['which', 'jslint'], stdout=PIPE) == 1: # Write to stderr, not stdout sys.stderr.write('Could not find JSlint, do you need to install it?') sys.exit(1) Jig does not currently support binary files. It doesn’t ignore them, but you will not get any data back in the diff section. For example, if an image was added you’ll see something like this: { "files": [ { "diff": [], "type": "added", "name": "some-image.png", "filename": "/Users/ericidle/bright-side/tests/02/some-image.png" }, ] } Git supports symlinks but Jig will ignore them. This may change in the future, but since they cannot be treated the same as normal files a lot of plugin authors would not perform the additional error handling needed. If you have a valid case for needing to know about symlinks, submit a feature request. Jig currently comes with one template. When you run the following command: $ jig plugin create my-plugin my-bundle The templates can be found at: At the moment the only template is Python. More are planned in the future.
https://pythonhosted.org/jig/pluginapi.html
CC-MAIN-2018-05
en
refinedweb
io_waitread man page io_waitread — read from a descriptor Syntax #include <io.h> int io_waitread(int64 fd,char* buf,int64 len); Description io_waitread tries to read len bytes of data from descriptor fd into buf[0], buf[1], ..., buf[len-1]. (The effects are undefined if len is 0 or smaller.) There are several possible results: - o_waitread returns an integer between 1 and len: This number of bytes was available for immediate reading; the bytes were read into the beginning of buf. Note that this number can be, and often is, smaller than len; you must not assume that io_waitread always succeeds in reading exactly len bytes. - io_waitread returns 0: No bytes were read, because the descriptor is at end of file. For example, this descriptor has reached the end of a disk file, or is reading an empty pipe that has been closed by all writers. - io_waitread returns -3, setting errno to something other than EAGAIN: No bytes were read, because the read attempt encountered a persistent error, such as a serious disk failure (EIO), an unreachable network (ENETUNREACH), or an invalid descriptor number (EBADF). See Also io_nonblock(3), io_waitread(3), io_waitreadtimeout(3) Referenced By io_tryread(3), io_tryreadtimeout(3), io_trywrite(3).
https://www.mankier.com/3/io_waitread
CC-MAIN-2018-05
en
refinedweb
This documentation is archived and is not being maintained. ImageAnimator Class .NET Framework 1.1 Animates an image that has time-based frames. For a list of all members of this type, see ImageAnimator Members. System.Object System.Drawing.ImageAnimator [Visual Basic] NotInheritable Public Class ImageAnimator [C#] public sealed class ImageAnimator [C++] public __gc __sealed class ImageAnimator [JScript] public class ImageAnimator ImageAnimator Members | System.Drawing Namespace Show:
https://msdn.microsoft.com/en-us/library/system.drawing.imageanimator(v=vs.71).aspx
CC-MAIN-2018-05
en
refinedweb
D - Contract Programming Contract programming in D programming is focused on providing a simple and understandable means of error handling. Contract programming in D are implemented by three types of code blocks − - body block - in block - out block Body Block in D Body block contains the actual functionality code of execution. The in and out blocks are optional while the body block is mandatory. A simple syntax is shown below. return_type function_name(function_params) in { // in block } out (result) { // in block } body { // actual function block } In Block for Pre Conditions in D In block is for simple pre conditions that verify whether the input parameters are acceptable and in range that can be handled by the code. A benefit of an in block is that all of the entry conditions can be kept together and separate from the actual body of the function. A simple precondition for validating password for its minimum length is shown below. import std.stdio; import std.string; bool isValid(string password) in { assert(password.length>=5); } body { // other conditions return true; } void main() { writeln(isValid("password")); } When the above code is compiled and executed, it reads the file created in previous section and produces the following result − true Out Blocks for Post Conditions in D The out block takes care of the return values from the function. It validates the return value is in expected range. A simple example containing both in and out is shown below that converts months, year to a combined decimal age form. import std.stdio; import std.string; double getAge(double months,double years) in { assert(months >= 0); assert(months <= 12); } out (result) { assert(result>=years); } body { return years + months/12; } void main () { writeln(getAge(10,12)); } When the above code is compiled and executed, it reads the file created in previous section and produces the following result − 12.8333
https://www.tutorialspoint.com/d_programming/d_programming_contract_programming.htm
CC-MAIN-2018-05
en
refinedweb
The check-pvp package Check whether the version ranges used in the Build-Depends field matches the style of module imports according to the Package Versioning Policy (PVP). See. The tool essentially looks for any dependency like containers >=0.5 && <0.6 that allows the addition of identifiers to modules within the version range. Then it checks whether all module imports from containers are protected against name clashes that could be caused by addition of identifiers. You must run the tool in a directory containing a Cabal package. $ check-pvp This requires that the package is configured, since only then the association of packages to modules is known. If you want to run the tool on a non-configured package you may just check all imports for addition-proof style. $ check-pvp --include-all It follows a detailed description of the procedure and the rationale behind it. First the program classifies all dependencies in the Cabal file of the package. You can show all classifications with the --classify-dependencies option, otherwise only problematic dependencies are shown. A dependency like containers >=0.5.0.3 && <0.5.1 does not allow changes of the API of containers and thus the program does not check its imports. Clashing import abbreviations are an exception. The dependency containers >=0.5.1 && <0.6 requires more care when importing modules from containers and this is what the program is going to check next. This is the main purpose of the program! I warmly recommend this kind of dependency range since it greatly reduces the work to keep your package going together with its imported packages. Dependencies like containers >=0.5 or containers >=0.5 && <1 are always problematic, since within the specified version ranges identifier can disappear. There is no import style that protects against removed identifiers. An inclusive upper bound as in containers >=0.5 && <=0.6 will also cause a warning, because it is unnecessarily strict. If you know that containers-0.6 works for you, then containers-0.6.0.1 or containers-0.6.1 will also work, depending on your import style. A special case of inclusive upper bounds are specific versions like in containers ==0.6. The argument for the warning remains the same. Please note that the check of ranges is performed entirely on the package description. The program will not inspect the imported module contents. E.g. if you depend on containers >=0.5 && <0.6 but import in a way that risks name clashes, then you may just extend the dependency to containers >=0.5 && <0.6.1 in order to let the checker fall silent. If you use the dependency containers >=0.5 && <0.6.1 then the checker expects that you have verified that your package works with all versions of kind 0.5.x and the version 0.6.0. Other versions would then work, too, due to the constraints imposed by package versioning policy. Let us now look at imports that must be protected against identifier additions. The program may complain about a lax import. This means you have imported like import Data.Map as Map Additions to Data.Map may clash with other identifiers, thus you must import either import qualified Data.Map as Map or import Data.Map (Map) The program emits an error on clashing module abbreviations like import qualified Data.Map.Lazy as Map import qualified Data.Map.Strict as Map This error is raised whenever multiple modules are imported with the same abbreviation, where at least one module is open for additions. Our test is overly strict in the sense that it also blames import qualified Data.Map as Map import qualified Data.Map as Map but I think it is good idea to avoid redundant imports anyway. Additionally there are warnings on imports that are consistent with large version ranges, but complicate API changing updates of your dependencies. You can disable these warnings with --disable-warnings. The program warns about an open list of constructors as in import Data.Sequence (ViewL(..)) Additions of constructors to ViewL may also conflict with other identifiers, but additions of constructors are considered API changes since they may turn a complete case analysis into an incomplete one. Similarly additionally class methods can turn a complete class instance into a partial one. Thus addition of constructors and class methods require a version bump from x.y.z to x.y+1. Nonetheless it is a good idea to import either import Data.Sequence (ViewL(EmptyL, (:<))) or import qualified Data.Sequence as Seq because you document the origin of identifiers this way. This is especially important when the imported identifiers are moved or removed in the future. If you use constructors only for constructions and not for pattern matches or if you only call class methods but do not define instances, then with explicit imports or qualified imports your modules survive such additions in your dependent packages without modifications. More warnings are issued for hiding imports. The import import Data.Map hiding (insert) is not bad in the sense of the PVP, but this way you depend on the existence of the identifier insert although you do not need it. If it is removed in a later version of containers, then your import breaks although you did not use the identifier. Finally you can control what items are checked. First of all you can select the imports that are checked. Normally the imports are checked that belong to lax dependencies like containers >=0.5 && <0.6. However this requires the package to be configured in order to know which import belongs to which dependency. E.g. Data.Map belongs to containers. You can just check all imports for being addition-proof using the --include-all option. Following you can write the options --include-import, --exclude-import, --include-dependency, --exclude-dependency that allow to additionally check or ignore imports from certain modules or packages. These modifiers are applied from left to right. E.g. --exclude-import=Prelude will accept any import style for Prelude and --exclude-dependency=foobar will ignore the package foobar, say, because it does not conform to the PVP. Secondly, you may ignore certain modules or components of the package using the options --exclude-module, --exclude-library, --exclude-executables, --exclude-testsuites, --exclude-benchmarks. E.g. --exclude-module=Paths_PKG will exclude the Paths module that is generated by Cabal. I assume that it will always be free of name clashes. Known problems: The program cannot automatically filter out the Pathsmodule. The program cannot find and check preprocessed modules. The program may yield wrong results in the presence of Cabal conditions. If this program proves to be useful it might eventually be integrated in the check command of cabal-install. See. Alternative: If you want to allow exclusively large version ranges, i.e. >=x.y && <x.y+1, then you may also add the option -fwarn-missing-import-lists to the GHC-Options fields of your Cabal file. See. Unfortunately there is no GHC warning on clashing module abbreviations. See. Related: There are programs that check PVP compliance of exports: Properties Downloads - check-pvp-0.0.1.tar.gz [browse] (Cabal source package) - Package description (as included in the package) Maintainer's Corner For package maintainers and hackage trustees
http://hackage.haskell.org/package/check-pvp
CC-MAIN-2018-05
en
refinedweb
1 import java.io.*;2 import com.memoire.vainstall.*;3 import Acme.Crypto.*;4 /* 5 * I suggest to license this file under LGPL license so anyone 6 * could extend this class with own license key validators w/o need 7 * to release source code of validator. I suggest to no to license 8 * this file under GPL2 and stick with LGPL as otherwise it will put 9 * very much of burden on the users of vainstall w/o obvious value10 * to other user as different company polices could require different11 * fields to be supplied and this will be likely only difference in different12 * validators.13 *14 * copyrights are completly transfered to VAInstall team without any 15 * restriction.16 */17 /** 18 * this class is default implementation of license key support, that does nothing.19 */20 public class HelloLicenseKeySupport extends LicenseKeySupport21 {22 String key = "HW0123-4567-89AB-CDEF-8B58-85B7";23 String codekey; 24 /** @return true if license key query step need to be performed 25 */26 public boolean needsLicenseKey() {27 return true;28 }29 /** @return uri of page that contains registration page,30 * if such uri is not null, then it will be shown 31 * to user and launch browser button will be displayed32 * depending on ui and platform.33 */34 35 public String getRegistrationPage() {36 return "";37 }38 39 /** get field info for this installer 40 * @return array of field info.41 */42 public FieldInfo[] getFieldInfo() {43 return new FieldInfo[]{new FieldInfo("serial key", 20, key)};44 }45 /** set field values, this method coudl be called any number of times.46 * implementation of this class should keep last passed values.47 * @param values array of strings where each element correspond field 48 * info returned from get field info.49 */50 public void setFieldValues(String values[]) {51 key = values[0];52 }53 /** @return true, if license key is valid, this method should be called only54 * after set field values were called. 55 */56 public boolean isLicenseKeyValid()57 {58 StringBuffer tmp = new StringBuffer (key.toUpperCase());59 for(int n = tmp.length(),i=0;i<n;i++) {60 if(tmp.charAt(i) == '-'){61 tmp.deleteCharAt(i);62 i--;63 n--;64 }65 }66 String normalized = tmp.toString();67 if(normalized.length() < 26) {68 return false;69 }70 codekey = normalized.substring(0,normalized.length()-8);71 if(codekey.hashCode() != (int)Long.parseLong(normalized.substring(codekey.length(),normalized.length()),16) ){72 return false;73 }74 return true;75 }76 /** encode archive.zip stream with key supplied as string in 77 * configuration file, this fucntion is called during building install 78 * package 79 * @param is input steam to encode80 * @param key key supplied in configuration file81 * @return encrypted stream 82 */83 public OutputStream encodeStream(OutputStream os, String key) throws IOException84 {85 EncryptedOutputStream rc = new EncryptedOutputStream(new AesCipher(key), os);86 return rc;87 }88 /** decode archive.zip stream using infromation supplied in fieldValues 89 * @param is input steam to decode90 * @return decrypted stream 91 */92 public InputStream decodeStream(InputStream is) throws IOException93 {94 EncryptedInputStream rc = new EncryptedInputStream(new AesCipher(codekey), is);95 return rc;96 }97 98 } Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/HelloLicenseKeySupport.java.htm
CC-MAIN-2018-05
en
refinedweb
public class SanderRossel : Lazy<Person> { public void DoWork() { throw new NotSupportedException(); } } DaveAuld wrote:too muooooch Lazy<Dog> Audery Hepburn told:I’ve been lucky. Opportunities don’t often come along. So, when they do, you have to grab them. W∴ Balboos wrote:"Audery Hepburn's 85th Birthday" is the special. The report of my death was an exaggeration - Mark Twain Simply Elegant Designs JimmyRopes Designs I'm on-line therefore I am. JimmyRopes JimmyRopes wrote:I want to see Audery Hepburn at 85 Kornfeld Eliyahu Peter wrote:No it's not. I hate when someone makes something cheap 'on the back' of someone who worked so hard... With all the respect to Peter Jackson, he is not up to write new stories in the name of Tolkien! And it's event worst - it's a bad action movie! General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Lounge.aspx?msg=4813008
CC-MAIN-2015-32
en
refinedweb
add web reference - access denied Discussion in 'ASP .Net' started by Steve Rich shared assembly in the add referencebabu dhayal via .NET 247, Aug 5, 2004, in forum: ASP .Net - Replies: - 2 - Views: - 6,422 - Nelson Xu - Aug 17, 2004 Access Denied ... <add assembly="*">Randy Paez, Nov 20, 2003, in forum: ASP .Net - Replies: - 8 - Views: - 3,357 - fokko - Mar 10, 2010 Access Denied to Reference dllbyrd48, Oct 30, 2005, in forum: ASP .Net - Replies: - 3 - Views: - 4,193 - Patrick.O.Ige - Oct 31, 2005 Web Service + Anon Access, but getting 401 Access Denied ErrorAlex Washtell via .NET 247, Apr 5, 2005, in forum: ASP .Net Web Services - Replies: - 1 - Views: - 304 - Mauro Ottaviani [MSFT] - Apr 5, 2005 procedure to add web reference which will not create new namespace just add class in existing namespDeep Mehta via .NET 247, May 28, 2005, in forum: ASP .Net Web Services - Replies: - 2 - Views: - 567 - Dave A - May 31, 2005
http://www.thecodingforums.com/threads/add-web-reference-access-denied.519206/
CC-MAIN-2015-32
en
refinedweb
WebKit Bugzilla HTML5 specifies a getElementsByClassName method on Document and Element. Firefox 3 supports this with a slightly modified API, which Hixie says he plans to update the HTML5 spec to match. Firefox has gotten large (77x over js implementations, 8x over XPath) speed improvements for this functionality from doing it natively. Created attachment 15948 [details] Preliminary patch to implement getElementsByClassName This isn't done (no changelog, no testcases and very limited manual testing, one known wart), but it does work, and is compatible with the Firefox implementation as far as I know. Feedback is welcome, particularly on the part that handles parsing the class string. Darin suggested generating a dummy Element, setting its class, then getting its classList, but I wasn't able to get that working (and I'm not convinced that it's less nasty than the code duplication). The current method is copy-pasted from NamedMappedAttrMap::parseClassAttribute, which is definitely less than ideal. Created attachment 15991 [details] Updated to not have nasty code duplication The updated version of this has moved parseClassAttribute into AtomicStringList (which is renamed to ClassNameList, since that's all it's used for), which allows getting rid of the duplicated code without weird hacks. Still no changelog or tests, but the code should be ready for review. Comment on attachment 15991 [details] Updated to not have nasty code duplication I'm not sure the renaming of AtomicStringList is desirable. Perhaps ClassNameList should be a subclass of AtomicStringList that just adds the parseClassAttribute method? I could be convinced otherwise, though. It also seems like parseClassAttribute could be a static method that functions as a named constructor. (In reply to comment #3) > (From update of attachment 15991 [details] [edit]) > I'm not sure the renaming of AtomicStringList is desirable. Perhaps > ClassNameList should be a subclass of AtomicStringList that just adds the > parseClassAttribute method? I could be convinced otherwise, though. AtomicStringList effectively *is* already ClassNameList; it's used nowhere in the code except to store lists of class names. Maciej has pointed out that it's not even a particularly good structure to use for that, but fixing it is beyond the scope of this patch. > > It also seems like parseClassAttribute could be a static method that functions > as a named constructor. > Seems plausible. I'll try it out and see how it feels. I did a bit of performance testing on this just for fun; Native vs. prototype.js's xpath version vs. prototype.js's DOM traversal version native: 303ms xpath: 5,706ms js/dom: 26,660ms :) I also tried out aroben's second suggestion, but it seemed like a bit of a wash. Made the getElementsByClassName callsite a little nicer, and the NamedMappedAttrMap one a little less nice. What about making a regular constructor that takes a class attribute string and calls through to parseClassAttribute? Is that sort of minor convenience method generally considered worthwhile in webkit? Created attachment 16071 [details] Now with changelog and test Have you tested the behavior of passing js null and js undefined to getElementsByClassName and getElementsByClassNameNS? Created attachment 16176 [details] Without getElementsByClassNameNS, and with null/undefined tests getElementsByClassNameNS doesn't make any sense, so I've removed it (as discussed on irc). This also tests null/undefined as suggested. Some test cases for getElementsByClassName: (The Opera alpha now supports it). Anne van Kesteren points out that this implementation doesn't treat \f as whitespace so it would fail his tests. I didn't double-check that the spec requires this treatment. FWIW: the specification currently doesn't define the string argument. However, it seems likely that it resuses the definitions for getting tokens out of string that is used for class= currently which includes the aforementioned character. (In reply to comment #11) > FWIW: the specification currently doesn't define the string argument. However, > it seems likely that it resuses the definitions for getting tokens out of > string that is used for class= currently which includes the aforementioned > character. > Interesting. That would imply that WebKit is mis-handling class= as well, since I reused the whitespace logic from it for getElementsByClassName. Created attachment 16239 [details] Adds form feed to the whitespace list, and copyright headers This just adds form feed to the whitespace list (for *all* class name parsing, please let me know if that should be split out into another patch. It's about 6 characters on a line that's already being modified, so I figured it would be fine here), and adds my copyright header to the files I made more than tiny changes on (thanks for pointing that out olliej!). Created attachment 16240 [details] Actually include the header Somehow either I hadn't been svn adding ClassNameList.h this whole time, or (based on what filemerge says), svn-apply didn't pick up on the rename. This is the same as the previous one, except it should actually work :) Comment on attachment 16240 [details] Actually include the header Ok, a few comments: + for(const ClassNameList *t = this; t; t = t->next()) + { + if(t->string() == str) + return true; Those don't agree with the style guidelines. { should be on the same line as for, and if( should be if ( No need for this-> here: +void ClassNameList::parseClassAttribute(const String& classStr, const bool inCompatMode) +{ + this->clear(); + String classAttr = inCompatMode ? + (classStr.impl()->isLower() ? classStr : String(classStr.impl()->lower())) : + classStr; doesnt' need to call all the impl() stuff, just call .lower() on string. To me "start" and "end" are more readable than "ePos" and "sPos", but that's not a huge issue. Style guidelines again: +ClassNodeList::ClassNodeList(PassRefPtr<Node> rootNode, const AtomicString& className) +: NodeList(rootNode), m_namespaceURI(nullAtom) Why? + m_classList = *(new ClassNameList()); That shouldn't be needed. Style again: + const ClassNameList* t = static_cast<Element*>(testNode)->getClassList(); + + for (const ClassNameList* c = &m_classList; c; c = c->next()) + { + if(t->contains(c->string())) + continue; also, "t" is not a very readable/useful variable name This is also more easily written as: + for (const ClassNameList* c = &m_classList; c; c = c->next()) + { + if(t->contains(c->string())) + continue; + + return false; if (!contains) return false; IMO. Shouldn't this be "isEmpty()" ? +PassRefPtr<NodeList> Element::getElementsByClassName(const String& className) +{ + if (className.isNull()) + return 0; Otherwise looks OK. I think the style stuff should be corrected and a new patch posted before landing though. Thanks for the patch! Created attachment 16496 [details] Updated to address review comments I ended up removing the empty args optimization entirely, since it was buggy, and probably not useful. Created attachment 16497 [details] More review fixage Addressing stuff I missed before, and stuff mentioned on IRC. Comment on attachment 16497 [details] More review fixage I am surprised. I had figured class names were always case insensitive. It does not appear to be the case with your patch. Does contains() need to lower() the incoming string when in compatMode? This is not needed, please remove: + m_classList = ClassNameList(); Constructor initializers are generally one on each line, with the comma (or colon) leading the line (see the style guidelines). This allows easy use of #if ENABLE(FOO) around certain initializers as well as easy removal/addition with the smallest possible diff. Maybe it's the need to return "false" when m_impl == 0, that has caused isLower() not to be implemented before. One could argue that isLower should return true for "" or null. Not sure it matters. I'm OK having a funny isLower as such. It's also OK to call string->impl()->toLower() (and NOT wrap it in String(), the compiler will do that for you.) I know I encouraged you to add isLower. Doesn't matter either way, your call. Need to fix the one small style issue, the extra initialization and answer regarding case insensitive lookups. I also think we need a compat-mode test and some case-sensitivity and white-space sensitivity tests before this can land. Thanks again for taking the time to work on this. FYI: class names are always case-_sensitive_ as far as getElementsByClassName() is concerned. No need to introduce limited quirks mode behavior into a new API. I'd love to see the last few issues fixed up to land this on trunk. (In reply to comment #20) > I'd love to see the last few issues fixed up to land this on trunk. > I haven't forgotten, just haven't had time to work on it. I'll try to set aside some time this weekend to finish it up. I do have one question regarding case-sensitivity though: now that code is shared between regular class parsing and getElementsByClassName parsing it becomes more difficult to only have a quirks mode for regular class parsing. Should I leave it as is, or add another argument to the function to let the caller control whether quirks mode is allowed (or a third option if I'm missing some better way of doing it)? . This doesn't seem to work with a lot of Anne's tests. It is failing on tests that set a class on the documentElement (<html>) and I think it is failing to find this element because we only search below the root node. I am curious if the the element that getElementsByClassName is called on is supposed to include it self in the list, or only it's descendants. Actually, the spec is pretty specific about this. So it is a bug for implementation of document.getElementsByClassName to just call documentElement.getElementsByClassName because that won't include itself. (In reply to comment #22) > . > I've spoken with Mozilla and Opera developers about this, and brought it up with Hixie and #whatwg; so far four people (counting the two of us) are in favor of maintaining the case insensitivity quirk, and everyone else doesn't care at all. I'll try to remember to send an email to the list about changing the spec to reflect this. Created attachment 17870 [details] updated patch This updates the patch to allow document as the root node, cleaned up things a bit and adds Anne's test suite to the layouttests. Only test 014.html as it originally was failed, but I have updated it to match our current quirksmode behavior and added a comment. Think we should land this with the current functionality. Objections. I also kept David's name in the ChangeLog. Comment on attachment 17870 [details] updated patch There's an extra change log entry here, for bug 15313. I really wish we didn't use a list for ClassNameList. A Vector<AtomicString>* would work better, I think. It's really bad that deleting a ClassNameList uses O(n) stack in the destructor. Is it really good to indent ChildNodeList.h as part of this patch? There's also an unneeded forward declaration added to that file. Please land that separately if at all. We should use foldCase() instead of lower() for contexts like this one. The check of isLower() in parseClassAttribute is unnecessary. String::lower() should already do that. 41 , m_namespaceURI(nullAtom) Isn't nullAtom the default anyway? Otherwise looks good to me. I'll say review- for now because I want you to consider the comments above. I have already filed on the O(n) stack issue (it's a crasher). I think that bug would be a better place to address changing from a linked list to a vector. The current implementation is as it is merely because it's a rename of AtomicStringList rather than a new class. Created attachment 17882 [details] updated patch (now with more Vector) Comment on attachment 17882 [details] updated patch (now with more Vector) 122 bool isLower() const { return m_impl ? m_impl->isLower() : false; } The null string is all-lowercase (and all-uppercase), so this should say true. Comment on attachment 17882 [details] updated patch (now with more Vector) r=me Fix landed in r28722.
https://bugs.webkit.org/show_bug.cgi?id=14955
CC-MAIN-2015-32
en
refinedweb
Opened 6 years ago Closed 6 years ago #12082 closed (duplicate) Inclusion of new auth tests force Sites framework requirement Description After upgrading to 1.1.1 I noticed that the inclusion of auth tests (specifically django.contrib.auth.tests.views.LoginTest imported in source:django/trunk/django/contrib/auth/tests/__init__.py) cause the test framework to throw an exception if you are not using the Sites framework. Here is the exception raised, which includes the specific references. ====================================================================== ERROR: test_current_site_in_context_after_login (django.contrib.auth.tests.views.LoginTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sto/imp/dev/django/Django-1.1.1/django/contrib/auth/tests/views.py", line 191, in test_current_site_in_context_after_login site = Site.objects.get_current() File "/sto/imp/dev/django/Django-1.1.1/django/contrib/sites/models.py", line 18, in get_current raise ImproperlyConfigured("You're using the Django \"sites framework\" without having set the SITE_ID setting. Create a site in your database and set the SITE_ID setting to fix this error.") ImproperlyConfigured: You're using the Django "sites framework" without having set the SITE_ID setting. Create a site in your database and set the SITE_ID setting to fix this error. ---------------------------------------------------------------------- I realize that I can specify the individual applications in the test command to prevent this, but I think this should be fixed in the base code. The problem is in source:django/trunk/django/contrib/auth/tests/views.py@#L186 LoginTest.test_current_site_in_context_after_login where it doesn't check whether the Sites framwork is installed. def test_current_site_in_context_after_login(self): response = self.client.get(reverse('django.contrib.auth.views.login')) self.assertEquals(response.status_code, 200) site = Site.objects.get_current() I am attaching a diff file with my proposed fix. Attachments (1) Change History (2) Changed 6 years ago by thebitguru comment:1 Changed 6 years ago by Alex - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Resolution set to duplicate - Status changed from new to closed Dupe of #10608.
https://code.djangoproject.com/ticket/12082
CC-MAIN-2015-32
en
refinedweb
I was told to compile the following to see what errors the compiler gives when you try to compile a program that goes past the upper bound of an array. However codeblocks did not give me any errors. Now my question is if it is possible for codeblocks to do this by changing something in it's setting? Code: #include <iostream> using namespace std; int main() { long TargetArray[25]; // Array to fill int i; for (i = 0; i < 25; i++) TargetArray[i] = 10; cout << "Test 1: \n"; // test current values, should be 0 cout << "TargetArray[0]: " << TargetArray[0] << endl; //lower bound cout << "TargetArray[24]: " << TargetArray[24] << endl << endl; // upper bound cout << "\nAttempting at assigning values beyond the upper bound..."; for (i = 0; i <= 25; i++) // going a little to far TargetArray[i] = 20; // asigning my fail for element 25 cout << "Test 2: \n"; cout << "TargetArray[0]: " << TargetArray[0] << endl; cout << "TargetArray[24]: " << TargetArray[24] << endl; cout << "TargetArray[25]: " << TargetArray[25] << endl; //out of bounds return 0; }
http://cboard.cprogramming.com/cplusplus-programming/126935-codeblocks-printable-thread.html
CC-MAIN-2015-32
en
refinedweb
#include <pixel.hpp> List of all members. A pixel is a set of channels defining the color at a given point in an image. Conceptually, a pixel is little more than a color base whose elements model ChannelConcept. The class pixel defines a simple, homogeneous pixel value. It is used to store the value of a color. The built-in C++ references to pixel, pixel& and const pixel& are used to represent a reference to a pixel inside an interleaved image view (a view in which all channels are together in memory). Similarly, built-in pointer types pixel* and const pixel* are used as the standard iterator over a row of interleaved homogeneous pixels. ChannelConcept pixel pixel& const pixel* Since pixel inherits the properties of color base, assigning, equality comparison and copy-construcion are allowed between compatible pixels. This means that an 8-bit RGB pixel may be assigned to an 8-bit BGR pixel, or to an 8-bit planar reference. The channels are properly paired semantically. The single-channel (grayscale) instantiation of the class pixel, (i.e. pixel<T,gray_layout_t>) is also convertible to/from a channel value. This allows grayscale pixels to be used in simpler expressions like *gray_pix1 = *gray_pix2 instead of more complicated at_c<0>(gray_pix1) = at_c<0>(gray_pix2) or get_color<gray_color_t>(gray_pix1) = get_color<gray_color_t>(gray_pix2) pixel<T,gray_layout_t>
http://www.boost.org/doc/libs/1_45_0/libs/gil/doc/html/g_i_l_0599.html
CC-MAIN-2015-32
en
refinedweb
Typically, (1) Server disk (2) Server OS (3) Client OS For (2), if the server is implemented correctly, there should be no namespace caching. And the write caching can be controlled by a flag in the client's WRITE request. For (3), one can usually control that using various mount(8) options.
http://fixunix.com/nfs/61901-caching-levels-nfs.html
CC-MAIN-2015-32
en
refinedweb
Ticket #371 (closed defect: fixed) New --database option for migrate command doesn't work Description I attempted using the --database option when migrating and noticed it always used the default database. settings.py: DATABASES = { 'default': { 'NAME': 'test', 'ENGINE': 'django.db.backends.mysql', 'USER': 'test', }, 'test': { 'NAME': 'test1', 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'USER': 'test1', }, } Using the command: python manage.py migrate appname --database=test The default database is used instead of 'test'. It appears the correct database is set at However, uses the import: from south.db import db. When it does that the last line of that file always sets the db to the default. So even though we passed in a different database at the command line it is always overwritten during the import. Fixed in [2273d7101099]. It was actually because "from x import y" imports copy the namespace, so when we reassign in south.db.db it doesn't reassign in the module.
http://south.aeracode.org/ticket/371
CC-MAIN-2015-32
en
refinedweb
[UNIX] Music Daemon DoS and File Disclosure Vulnerabilities From: SecuriTeam (support_at_securiteam.com) Date: 08/26/04 - Previous message: SecuriTeam: "[NT] NtRegmon Local Denial of Service" - Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ] To: [email protected] Date: 26 Aug 2004 14:08:29 +0200 The following security advisory is sent to the securiteam mailing list, and can be found at the SecuriTeam web site: - - promotion The SecuriTeam alerts list - Free, Accurate, Independent. Get your security news from a reliable source. - - - - - - - - - Music Daemon DoS and File Disclosure Vulnerabilities ------------------------------------------------------------------------ SUMMARY <> Music daemon (musicd) is a "music player designed to run as a independent server where different front-end can connect to control the play or get information about what is playing etc". Two remotely exploitable vulnerabilities have been found in the product, one allows attackers to cause the program to no longer respond to legitimate users, the other allows reading of sensitive files, such as the /etc/shadow file. DETAILS Vulnerable Systems: * MusicDaemon version 0.0.3 and prior Exploit: /* MusicDaemon <= 0.0.3 v2 Remote /etc/shadow Stealer / DoS * Vulnerability discovered by: Tal0n 05-22-04 * Exploit code by: Tal0n 05-22-04 * * Greets to: atomix, vile, ttl, foxtrot, uberuser, d4rkgr3y, blinded, wsxz, * serinth, phreaked, h3x4gr4m, xaxisx, hex, phawnky, brotroxer, xires, * bsdaemon, r4t, mal0, drug5t0r3, skilar, lostbyte, peanuter, and over_g * * MusicDaemon MUST be running as root, which it does by default anyways. * Tested on Slackware 9 and Redhat 9, but should work generically since the * nature of this vulnerability doesn't require * shellcode or return addresses. * * * Client Side View: * * root@vortex:~/test# ./md-xplv2 127.0.0.1 1234 shadow * * MusicDaemon <= 0.0.3 Remote /etc/shadow Stealer * * Connected to 127.0.0.1:1234... * Sending exploit data... * * <*** /etc/shadow file from 127.0.0.1 ***> * * Hello * <snipped for privacy> * ...... * bin:*:9797:0::::: * ftp:*:9797:0::::: * sshd:*:9797:0::::: * ...... * </snipped for privacy> * * <*** End /etc/shadow file ***> * * root@vortex:~/test# * * Server Side View: * * root@vortex:~/test/musicdaemon-0.0.3/src# ./musicd -c ../musicd.conf -p 1234 * Using configuration: ../musicd.conf * [Mon May 17 05:26:07 2004] cmd_set() called * Binding to port 5555. * [Mon May 17 05:26:07 2004] Message for nobody: VALUE: LISTEN-PORT=5555 * [Mon May 17 05:26:07 2004] cmd_modulescandir() called * [Mon May 17 05:26:07 2004] cmd_modulescandir() called Binding to port 1234. * [Mon May 17 05:26:11 2004] New connection! * [Mon May 17 05:26:11 2004] cmd_load() called * [Mon May 17 05:26:13 2004] cmd_show() called * [Mon May 17 05:26:20 2004] Client lost. * * * As you can see, it simply makes a connection, sends the commands, and * leaves. MusicDaemon doesn't even log that new connection's IPs that I * know of. Works very well, eh? :) * * The vulnerability is in where the is no authenciation for 1. For 2, it * will let you "LOAD" any file on the box if you have the correct privledges, * and by default, as I said before, it runs as root, unless you change the * configuration file to make it run as a different user. * * After we "LOAD" the /etc/shadow file, we do a "SHOWLIST" so we can grab * the contents of the actual file. You can subtitute any file you want in * for /etc/shadow, I just coded it to grab it because it being such an * important system file if you know what I mean ;). * * As for the DoS, if you "LOAD" any binary on the system, then use "SHOWLIST", * it will crash music daemon. * * */ #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> int main(int argc, char *argv[]) { char buffer[16384]; char *xpldata1 = "LOAD /etc/shadow\r\n"; char *xpldata2 = "SHOWLIST\r\n"; char *xpldata3 = "CLEAR\r\n"; char *dosdata1 = "LOAD /bin/cat\r\n"; char *dosdata2 = "SHOWLIST\r\n"; char *dosdata3 = "CLEAR\r\n"; int len1 = strlen(xpldata1); int len2 = strlen(xpldata2); int len3 = strlen(xpldata3); int len4 = strlen(dosdata1); int len5 = strlen(dosdata2); int len6 = strlen(dosdata3); if(argc != 4) { printf("\nMusicDaemon <= 0.0.3 Remote /etc/shadow Stealer / DoS"); printf("\nDiscovered and Coded by: Tal0n 05-22-04\n"); printf("\nUsage: %s <host> <port> <option>\n", argv[0]); printf("\nOptions:"); printf("\n\t\tshadow - Steal /etc/shadow file"); printf("\n\t\tdos - DoS Music Daemon\n\n"); return 0; } printf("\nMusicDaemon <= 0.0.3 Remote /etc/shadow Stealer / DoS\n\n"); int sock; struct sockaddr_in remote; remote.sin_family = AF_INET; remote.sin_port = htons(atoi(argv[2])); remote.sin_addr.s_addr = inet_addr(argv[1]); if((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) { printf("\nError: Can't create socket!\n\n"); return -1; } if(connect(sock,(struct sockaddr *)&remote, sizeof(struct sockaddr)) < 0) { printf("\nError: Can't connect to %s:%s!\n\n", argv[1], argv[2]); return -1; } printf("Connected to %s:%s...\n", argv[1], argv[2]); if(strcmp(argv[3], "dos") == 0) { printf("Sending DoS data...\n"); send(sock, dosdata1, len4, 0); sleep(2); send(sock, dosdata2, len5, 0); sleep(2); send(sock, dosdata3, len6, 0); printf("\nTarget %s DoS'd!\n\n", argv[1]); return 0; } if(strcmp(argv[3], "shadow") == 0) { printf("Sending exploit data...\n"); send(sock, xpldata1, len1, 0); sleep(2); send(sock, xpldata2, len2, 0); sleep(5); printf("Done! Grabbing /etc/shadow...\n"); memset(buffer, 0, sizeof(buffer)); read(sock, buffer, sizeof(buffer)); sleep(2); printf("\n<*** /etc/shadow file from %s ***>\n\n", argv[1]); printf("%s", buffer); printf("\n<*** End /etc/shadow file ***>\n\n"); send(sock, xpldata3, len3, 0); sleep(1); close(sock); return 0; } return 0; } ADDITIONAL INFORMATION The information has been provided by Tal0] NtRegmon Local Denial of Service" - Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ]
http://www.derkeiler.com/Mailing-Lists/Securiteam/2004-08/0088.html
CC-MAIN-2015-32
en
refinedweb
When I tried the following code in VC++ SP4 #include <iostream> void main(int argc, char* argv[]) { // printf("Hello World!\n"); cout << "hello world!" << endl ; } I got the following error, c:\temp\del\del.cpp(13) : fatal error C1010: unexpected end of file while looking for precompiled header directive What did I do wrong? Please help! Thanks!
http://forums.devx.com/printthread.php?t=90718&pp=15&page=1
CC-MAIN-2015-32
en
refinedweb
IntPtr int (IntPtr)(((int)byteArray.bitPtr) + startOffset) (IntPtr)(byteArray.bitPtr + startOffset) //*; FloodFill2\Form1.Designer.cs(381,28): error CS0234: The type or namespace name 'PicturePanel' does not exist in the namespace 'FloodFill2' (are you missing an assembly reference?) FloodFill2\AbstractFloodFiller.cs(23,19): error CS0246: The type or namespace name 'EditableBitmap' could not be found (are you missing a using directive or an assembly reference?) FloodFill2\AbstractFloodFiller.cs(88,16): error CS0246: The type or namespace name 'EditableBitmap' could not be found (are you missing a using directive or an assembly reference?) / Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/16405/Queue-Linear-Flood-Fill-A-Fast-Flood-Fill-Algorith?msg=2206525
CC-MAIN-2015-32
en
refinedweb
^ (Handle to Object on Managed Heap) Declares a handle to an object on the managed heap. A handle to an object on the managed heap points to the "whole" object, and not to a member of the object. See gcnew for information on how to create an object on the managed heap. In Visual C++ 2002 and Visual C++ 2003, __gc * was used to declare an object on the managed heap. The ^ replaces __gc * in the new syntax. The common language runtime maintains a separate heap on which it implements a precise, asynchronous, compacting garbage collection scheme. To work correctly, it must track all storage locations that can point into this heap at runtime. ^ provides a handle through which the garbage collector can track a reference to an object on the managed heap, thereby being able to update it whenever that object is moved. Because native C++ pointers (*) and references (&) cannot be tracked precisely, a handle-to object declarator is used. Member selection through a handle (^) uses the pointer-to-member operator (->). For more information, see How to: Declare Handles in Native Types. This sample shows how to create an instance of reference type on the managed heap. This sample also shows that you can initialize one handle with another, resulting in two references to same object on managed, garbage-collected heap. Notice that assigning nullptr to one handle does not mark the object for garbage collection. // mcppv2_handle.cpp // compile with: /clr ref class MyClass { public: MyClass() : i(){} int i; void Test() { i++; System::Console::WriteLine(i); } }; int main() { MyClass ^ p_MyClass = gcnew MyClass; p_MyClass->Test(); MyClass ^ p_MyClass2; p_MyClass2 = p_MyClass; p_MyClass = nullptr; p_MyClass2->Test(); } The following sample shows how to declare a handle to an object on the managed heap, where the type of object is a boxed value type. The sample also shows how to get the value type from the boxed object. This sample shows that the common C++ idiom of using a void* pointer to point to an arbitrary object is replaced by Object^, which can hold a handle to any reference class. It also shows that all types, such as arrays and delegates, can be converted to an object handle. // mcppv2_handle_3.cpp // compile with: /clr using namespace System; using namespace System::Collections; public delegate void MyDel(); ref class MyClass { public: void Test() {} }; void Test(Object ^ x) { Console::WriteLine("Type is {0}", x->GetType()); } int main() { // handle to Object can hold any ref type Object ^ h_MyClass = gcnew MyClass; ArrayList ^ arr = gcnew ArrayList(); arr->Add(gcnew MyClass); h_MyClass = dynamic_cast<MyClass ^>(arr[0]); Test(arr); Int32 ^ bi = 1; Test(bi); MyClass ^ h_MyClass2 = gcnew MyClass; MyDel^ DelInst = gcnew MyDel(h_MyClass2, &MyClass::Test); Test(DelInst); } This sample shows that a handle can be dereferenced and that a member can be accessed via a dereferenced handle. // mcppv2_handle_4.cpp // compile with: /clr using namespace System; value struct DataCollection { private: int Size; array<String^>^ x; public: DataCollection(int i) : Size(i) { x = gcnew array<String^>(Size); for (int i = 0 ; i < Size ; i++) x[i] = i.ToString(); } void f(int Item) { if (Item >= Size) { System::Console::WriteLine("Cannot access array element {0}, size is {1}", Item, Size); return; } else System::Console::WriteLine("Array value: {0}", x[Item]); } }; void f(DataCollection y, int Item) { y.f(Item); } int main() { DataCollection ^ a = gcnew DataCollection(10); f(*a, 7); // dereference a handle, return handle's object (*a).f(11); // access member via dereferenced handle } This sample shows that a native reference (&) can’t bind to an int member of a managed type, .as the int may be stored in the garbage collected heap, and native references don’t track object movement in the managed heap. The fix is to use a local variable, or to change & to %, making it a tracking reference.
https://msdn.microsoft.com/en-US/library/yk97tc08(v=vs.90).aspx
CC-MAIN-2015-32
en
refinedweb
#include "etherShield.h"#include "NUClient.h"/*******&tempf=50&softwaretype=xAP&action=updateraw******&dateutc=2011-03-13%2017:39:00&tempf=50&softwaretype=xAP&action=updateraw*/#define WU_STATION_ID "IESUSSEX2" #define WU_PASSWORD "******" byte mac[] = { 0x54,0x55,0x58,0x10,0x00,0x26 };byte ip[] = { 192, 168, 1, 15 };byte gateway[] = { 192, 168, 1, 1 };byte server[] = { 38, 102, 136, 104 }; //wunderground.com 38.102.136.104 , 125int port = 80;//Create a clientNUClient client(mac,ip,gateway,server,port);int content_length;void setup () { Serial.begin(9600); client.init(); //Useful for debugging //Serial.print outputs of what the client is doing. client.debug(0); //If your having trouble with timeouts //try different timeout times it may sort it. client.timeout(20,30);} void loop(){ //Do things that need to be done even when //the ethernet shield is connecting etc here. //If the client has finished sending. if (client.done) { //Increment values so that we can see something happening! // valA += 1.2; //Do things that only need to be done when //the client has done sending. delay(10000); //Tell the client that it is not done and it needs to start sending again. client.done=0; Serial.println("start"); } //client.process opens and closes the client connection //It returns a one when the connection is at the data //sending stage. //client process will also check for pings if(client.process()!=0) { Serial.println("data process"); //If your using a shared server you may need to include //as below. If your server has a static ip its not needed. client.print("PUT******&dateutc=2011-03-14 19:05:00&tempf=55&softwaretype=Arduino&action=updateraw"); client.print(" HTTP/1.0\r\n"); client.print("Host: 38.102.136.104"); client.print(""); client.send(); Serial.println("data sent"); }}
http://forum.arduino.cc/index.php?topic=118049.0;prev_next=next
CC-MAIN-2015-32
en
refinedweb
RPLCD 0.3.0 A Raspberry Pi LCD library for the widely used Hitachi HD44780 controller. A Python 2/3 Raspberry PI Character LCD library for the Hitachi HD44780 controller. Tested with the 20x4 LCD that is sold for example by adafruit.com or mikroshop.ch. Also tested with a 16x2 LCD from mikroshop.ch. This library is inspired by Adafruit Industries’ CharLCD library as well as by Arduino’s LiquidCrystal library. No external dependencies (except the RPi.GPIO library, which comes preinstalled on Raspbian) are needed to use this library. If you like this library, I’m happy for support via Flattr or Gittip! Features Implemented - Simple to use API - Support for both 4 bit and 8 bit modes - Support for custom characters - Python 2/3 compatible - Caching: Only write characters if they changed - No external dependencies Wishlist These things may get implemented in the future, depending on my free time and motivation: - I²C support Examples Writing To Display Basic text output with multiline control. >>> from RPLCD import CharLCD >>> lcd = CharLCD() >>> lcd.write_string(u'Raspberry Pi HD44780') >>> lcd.cursor_pos = (2, 0) >>> lcd.write_string(u'\n\rdbrgn/RPLCD') Context Managers Unlike other uses of context managers, these implementations prepare the configuration before writing to the display, but don’t reset it after the block ends. >>> from RPLCD import CharLCD, cleared, cursor >>> lcd = CharLCD() >>> >>> with cleared(lcd): >>> lcd.write_string(u'LCD is cleared.') >>> >>> with cursor(lcd, 2, 0): >>> lcd.write_string(u'This is he 3rd line.') Custom Characters The HD44780 supports up to 8 user created characters. A character is defined by a 8x5 bitmap. The bitmap should be a tuple of 8 numbers, each representing a 5 pixel row. Each character is written to a specific location in CGRAM (numbers 0-7). To actually show a stored character on the display, use unichr() function in combination with the location number you specified previously (e.g. write_string(unichr(2)). >>> from RPLCD import CharLCD, cleared, cursor >>> lcd = CharLCD() >>> >>> smiley = ( ... 0b00000, ... 0b01010, ... 0b01010, ... 0b00000, ... 0b10001, ... 0b10001, ... 0b01110, ... 0b00000, ... ) >>> lcd.create_char(0, smiley) >>> lcd.write_string(unichr(0)) The following tool can help you to create your custom characters: Scrolling Text I wrote a blogpost on how to implement scrolling text: To see the result, go to. Installing Manual Installation You can also install the library manually without pip. Either just copy the scripts to your working directory and import them, or download the repository and run python setup.py install to install it into your Python package directory. API Init, Setup, Teardown import RPi.GPIO as GPIO from RPLCD import CharLCD # Initialize display. All values have default values and are therefore # optional. lcd = CharLCD(pin_rs=15, pin_rw=18, pin_e=16, pins_data=[21, 22, 23, 24], numbering_mode=GP unicode string to the display. You can use newline (\n) and carriage return (\r) characters to control line breaks. - clear(): Overwrite display with blank characters and reset cursor position. - home(): Set cursor to initial position and reset any shifting. - shift_display(amount): Shift the display. Use negative amounts to shift left and positive amounts to shift right. - create_char(location, bitmap): Write a new character into the CGRAM at the specified location (0-7). See the examples section for more information. Mid Level Functions - command(value): Send a raw command to the LCD. - write(value): Write a raw byte to the LCD. Context Managers - cursor(lcd, row, col): Control the cursor position before entering the block. - cleared(lcd): Clear the display before entering the block. Writing Special Characters You might find that some characters like umlauts aren’t written correctly to the display. This is because the LCDs usually don’t use ASCII, ISO-8859-1 or any other standard encoding. There is a script in this project though that writes the entire character map between 0 and 255 to the display. Simply run it as root (so you have permissions to access /dev/mem) and pass it the number of rows and cols in your LCD: $ sudo python show_charmap.py 2 16 Confirm each page with the enter key. Try to find the position of your desired character using the console output. On my display for example, the “ü” character is at position 129 (in contrast to ISO-8859-1 or UTF-8, which use 252). Now you can simply create a unicode character from the bit value and write it to the LCD. On Python 2: >>> u'Z%srich is a city in Switzerland.' % unichr(129) u'Z\x81rich is a city in Switzerland.' And on Python 3, where strings are unicode by default: >>> 'Z%srich is a city in Switzerland.' % chr(129) 'Z\x81rich is a city in Switzerland.' In case you need a character that is not included in the default device character map, there is a possibility to create custom characters and write them into the HD44780 CGRAM. For more information, see the “Custom Characters” section in the “Examples” chapter. Testing To test your 20x4 display, please run the test_20x4.py script and confirm/verify each step with the enter key. If you don’t use the standard wiring, make sure to add your pin numbers to the CharLCD constructor in test_20x4.py. To test a 16x2 display, procede as explained above, but use the test_16x2.py script instead. Resources - TC2004A-01 Data Sheet: - HD44780U Data Sheet: License This code is licensed under the MIT license, see the LICENSE file or tldrlegal for more information. The module RPLCD/enum.py is (c) 2004-2013 by Barry Warsaw. It was distributed as part of the flufl.enum package under the LGPL License version 3 or later. - Downloads (All Versions): - 22 downloads in the last day - 205 downloads in the last week - 653 downloads in the last month - Author: Danilo Bargen - Keywords: raspberry,raspberry pi,lcd,liquid crystal,hitachi,hd44780 - License: MIT - Platform: any - Categories - Development Status :: 4 - Beta - Environment :: Other Environment - Intended Audience :: Developers - License :: OSI Approved :: MIT License - Operating System :: POSIX - Programming Language :: Python :: 2 - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3 - Programming Language :: Python :: 3.2 - Programming Language :: Python :: 3.3 - Programming Language :: Python :: 3.4 - Topic :: Software Development :: Libraries :: Python Modules - Topic :: System :: Hardware :: Hardware Drivers - Package Index Owner: gwrtheyrn - DOAP record: RPLCD-0.3.0.xml
https://pypi.python.org/pypi/RPLCD/0.3.0
CC-MAIN-2015-32
en
refinedweb
Motivation In March 2009 at a Cebit presentation I held on DB2, I burned out a 110V computer loudspeaker that I had thoughtlessly hooked up to Germany's 230V power grid without using a transformer. On that same trip, I also destroyed the charger for my electric toothbrush and my beard clipper in similar incidents. Given my inability to learn from mistakes, it comes as no surprise that one of my favorite sayings (origin unknown) is this: "The problem with standards is that there are so many of them." In the world of relational database management systems (RDBMS), we are blessed with at least three major standards and countless variations on those: - ANSI SQL and ANSI SQL/PSM - Oracle SQL and PL/SQL - Sybase and Microsoft® TSQL Figure 1 illustrates with a Venn diagram how the SQL dialects overlap. Figure 1. Babylonian confusion of SQL Whenever you write an application, you have to make the choice of which RDBMS vendor to utilize. Once you have made that choice, you are essentially committed. Any attempt to switch vendors to take advantage of lower prices, better technology, or a better partnership is thwarted by legacy code that requires extensive rewrite before it can be used with another RDBMS. In addition, your skill set cannot be transferred from one product to another as easily as you would expect. IBM® DB2® 10 for Linux®, UNIX®, and Windows® (DB2) dramatically lowers the barriers for applications written for Oracle when enabling them to DB2. This provides customers and vendors the ability to choose a DBMS based on its merits rather than application history. DB2 10 includes Oracle compatibility features To allow an application written for one RDBMS to run on another virtually unchanged, many pieces have to fall into place. Different locking mechanisms, data types, SQL, procedural language residing on the server, and even the client interfaces used by the application itself need to be aligned not only in syntax but also in semantics. All of these steps have been taken in DB2. Changes are the exception, not the rule (you can rapidly assess the application changes needed). Table 1 provides a quick overview of commonly used features. Table 1. Commonly used features With DB2, you do not need to port an application. You merely enable the application. In the case of a packaged application, it is even possible to share one source for DB2 and Oracle. In other words, enabling an Oracle application to DB2 becomes no more complex than enabling a C program written for HP-UX to run on AIX. Concurrency control In the past, one of the most prominent differences between Oracle and DB2 has been the approach to concurrency control. The catchy phrase is "Readers don't block writers, and writers don't block readers." Table 2 shows the concurrency behavior for Oracle. Table 2. Oracle concurrency behavior Without going into detail on isolation levels, suffice it to say that the vast majority of applications that use the Oracle default Statement Level Isolation will work just fine using the DB2 default of Cursor Stability (CS). Traditionally, CS has been implemented so that writers block readers and, in some cases, readers can block writers. The reason for that is that, traditionally, a transaction under CS isolation will "wait for the outcome" of a pending concurrent transaction's changes. Table 3 shows the concurrency behavior with CS. Table 3. Traditional DB2 concurrency behavior with CS It turns out that there is no semantic reason why a transaction running under CS isolation waits for outcome when encountering a changed row. An equally satisfactory behavior is to read the currently committed version of the changed row. This behavior has been implemented since DB2 9.7. What happens is that DB2 simply retrieves the currently committed version of a locked row from the log. In most common cases, the row is still in the log buffer because the change has not been committed yet. But even if the row has been written out and has also been overwritten in the log buffer DB2 knows exactly where to find it, so that a single IO will bring the desired version into the bufferpool. As shown in Figure 2, imagine a user updating a name in an employee table. Before that user has committed the change another user scans that table. Traditionally, the second user would have had to wait for the first user to commit or rollback. Thanks to read currently committed data, the scan for the second user will simply retrieve the version of the row from the log buffer which that does not contain the first user's changes. Figure 2. Writers don't block readers It is important to note that this behavior: - Introduces no new objects such as a rollback segment. - Has no performance overhead for the writer since the log needs to be written anyway. - Cannot cause any situation such as a "snapshot too old" because in the extremely unlikely event that the log file needed has been archived (while a transaction was still open), DB2 will simply fall back and wait for the lock to go away. In addition to these changes, additional lock avoidance techniques have been introduced into DB2 to eliminate a reader holding a lock under CS isolation. Table 4 shows the new concurrency behavior DB2 has with CS. Table 4. New DB2 concurrency behavior with CS As you can see, the concurrency behavior is now identical to that of Oracle. In fact, any DB2 database created since DB2 9.7 exhibits this behavior by default. New data types The heart of every database is its data. Mismatched types or mismatched semantics of these types can seriously impact the ability to enable an application to another RDBMS. So to allow Oracle applications to run on DB2, it is crucial to support its non-standard basic types, such as strings, dates, and numerics. Beyond aligning these basic types, there are other more complex types that are commonly used in Oracle's PL/SQL that are available since DB2 9.7, as shown in Table 5. Table 5. New DB2 data types Implicit casting and type resolution "If it walks like a duck, and it talks like a duck, then it must be a duck." This is the mantra of many of the new languages such as PHP and Ruby. Every literal is a string and then gets used as another type based on context. In adherence with the SQL standard and following a philosophy that a type mismatch is likely an indication of a coding mistake, DB2 has traditionally followed strong typing rules, where strings and numerics cannot be compared unless one is explicitly cast to the other. Unfortunately, when an Oracle application uses weak typing in its SQL, that application would have previously failed to compile against DB2. Since DB2 9.7, implicit casting (or weak typing) has been the default. That is, strings and numbers can be compared, assigned, and operated on in a very flexible fashion. In addition, untyped NULLs can be used in many more places, while untyped parameter markers can be used nearly anywhere, thanks to deferred prepare. That is, DB2 will not resolve the type of a parameter marker until it has seen the first actual value. To round out implicit casting, DB2 also supports defaulting of routine parameters as well as the association of arguments to parameters by name. Extended built-in function library in DB2 All RDBMs provide libraries of functions to operate on the data. The problem is that no two use the same names for these functions, even if in the end the functionality is very similar. In addition to its own traditional set of functions, DB2 now supports a library compatible with Oracle. The following list provides a quick overview, but it is by no means an exhaustive list: - Conversion and cast functions TO_DATE TO_CHAR TO_CLOB TO_NUMBER TO_SINGLE_BYTE TO_TIMESTAMP Each of these functions supports a rich set of compatible formatting strings. - Date arithmetic EXTRACT ADD_MONTHS MONTHS_BETWEEN NEXT_DAY - Plus (+) adding fractions of days - String manipulation INITCAP INSTR INSTRB INSTR2 INSTR4 LENGTHB LENGTH2 LENGTH4 LISTAGG LPAD LTRIM RPAD RTRIM SUBSTRB SUBSTR2 SUBSTR4 - Extensions to SUBSTR - Miscellaneous NVL NVL2 HEXTORAW DECODE LEAST GREATEST BITAND RATIO_TO_REPORT Native NUMBER support for MOD The greatly increased overlap in supported functions between the two products implies a greatly improved out-of-the-box success, enabling an Oracle application to DB2. Oracle SQL dialect support This article, so far, has covered concurrency, data types, typing, and functions. But the differences between Oracle and DB2 go deeper than this. The very fabric of the SQL dialects, their keywords and semantics differ in some areas. Also each product supports some features that the other simply does not. When these features are popular, they limit the ability to submit common SQL against both products, which can prompt many small and big language tweaks. Table 6 lists some highlights. Table 6. New SQL support This concludes the overview of the changes made to DB2 so that Oracle applications that submit SQL against the database can run largely unchanged. There are, however, major sections of many applications that are executing at the server itself. The server-side language of choice for Oracle applications is PL/SQL. No claim of compatibility could be seriously made without support for PL/SQL. DB2 support for PL/SQL Commonly, when an application is ported from one product to another, the SQL and procedural language is translated from one SQL dialect to the other. This poses several problems, including: - The resulting translated code tends to be convoluted due to automation and impedance mismatch between the source and target dialect. - The application developers are not familiar with the target SQL language dialect. That makes it hard to debug the ported code. Over time, further maintenance becomes a challenge due to the lack of skills. - In the case of packaged applications, translation needs to be repeated for every new release of the application. - In the end, the result is an emulation, which by definition runs slower than the original. To avoid these issues, DB2 includes native PL/SQL support. What does this mean? As you can see in Figure 3, the DB2 engine includes a PL/SQL compiler side by side with the SQL PL compiler. Both compilers produce virtual machine code for DB2's SQL Unified Runtime Engine. It is important to note that monitoring and development tools such as Optim Development Studio are hooked into DB2 at the runtime engine level. Figure 3. PL/SQL compiler support The integration into DB2 as a first-class procedural language has several implications, including the following. - There is no translation. The source code remains as it is in the schema catalog. - Developers can continue working in the language they are familiar with. There is no need to move logic to DB2's dialect even if new logic is written in SQL PL. Routines using different dialects can call each other. - Packaged application vendors can use one source code against both Oracle and DB2. - Both PL/SQL and SQL PL produce the same virtual machine code for DB2's SQL Unified Runtime Engine. Therefore, by design, both PL/SQL and SQL PL perform at the same speed. - Since the debugger infrastructure hooks directly into the SQL Unified Runtime Engine, PL/SQL is naturally supported by Optim Development Studio. Figure 4 shows a PL/SQL debugging session. The debugger supports standard features such as step into, step over, and breakpoints. In addition, it allows the user to change local PL/SQL variables while the program is running. Figure 4. PL/SQL debugger support in DB2 PL/SQL syntax details So what exactly does PL/SQL support imply? First, there is the core syntax support. DB2 supports all the common constructs of PL/SQL, such as the following. if then else whileloops :=assignments - local variables and constants #PRAGMA EXCEPTIONand exception handling - Various forms of forloops (range, cursor, and query) %TYPEand %ROWTYPEanchoring of variables and parameters to other objects - Local types can be declared within any PL/SQL block to be consumed within that block. - Local procedures can be delared within PL/SQL blcoks and be called from within that same block. SUBTYPEdeclarations are used to restrict variables and parameters to domains within built-in types. #PRAGMA AUTONOMOUStransactions, which allow procedures to execute in a private transaction. CCFLAGSconditional compilation allows different sections of PL/SQL to be compiled based on context. This feature is particularly useful to minimize any DB2 specific code within a shared PL/SQL code base. - Vendors can obfuscate precious intellectual property in PL/SQL by wrapping PL/SQL objects such as package bodies using the DBMS_DDL.WRAPand DBMS_DDL.CREATE_WRAPPEDfunctions. PL/SQL object support PL/SQL can be used in various different objects that allow procedural logic. - Scalar functions including support for the following. INOUTand OUTfunction parameters - Invocation using named parameter association - Parameter defaulting - Runtime "purity" enforcement - Pipelined table functions including support for the following. - Invocation using named parameter association - Parameter defaulting - Before each row or statement multi-action triggers. - After each row or statement multi-action triggers. - Procedures including support for the following. INOUTand OUTprocedure parameters. - Invocation using named parameter association. - Parameter defaulting. - Anonymous blocks. - PL/SQL packages. PL/SQL package support Most PL/SQL in Oracle applications is contained within so-called PACKAGEs. A PL/SQL package — not to be confused with a DB2 package — is a collection of individual objects with the ability to differentiate between externally accessible objects and those that are mere helpers for use within the package. The ANSI SQL equivalent of a package is a MODULE. DB2 now provides support for ANSI SQL modules as well as PL/SQL packages. In particular, the following capabilities are provided. CREATE [OR REPLACE] PACKAGE, which defines prototypes for externally visible routines. It also defines all externally visible, non-procedural objects, such as variables and types. CREATE [OR REPLACE] PACKAGE BODY, which implements all private and public routines as well as all other private objects. - Within a package or package body, the following objects can be defined: - Variables and constants - Data types - Exceptions - Scalar functions - Procedures - Cursors - Package initialization. - Public synonyms on packages. DB2 provides common built-in packages Some Oracle applications utilize packages provided by the RDBMS. In particular, libraries that provide reporting, email, or cross-connection communication can be popular. To facilitate enablement of these applications to DB2, the packages listed in Table 7 are provided by DB2. Table 7. Built-in packages provided by DB2 More packages are available as-is at the Oracle Application Enablement to DB2 for LUW wiki. At this location, you can also find various other hints and tips as well as background information. Oracle-specific JDBC extensions JDBC is a standard Java client interface. There are, however, extensions that have been added to Oracle's JDBC driver in order to support specific non-standard data types. To maximize the level of compatibility for Java technology-based applications, the DB2 JDBC driver provides, among other things, support for calling procedures with reference cursor, VARRAY, and ROW parameters. Support for OCI applications Many older Oracle C applications use the Oracle Client Interface (OCI) to communicate with the DBMS. Since DB2 9.7 provides an OCI-compatible client called DB2CI, which supports in excess of 150 OCI-compatible functions from OCIAttrGet to OCITransRollback. With Fix Pack 2, even OCI7 APIs are available. In many cases, it is possible to simply relink an OCI application to the DB2CI library, allowing the OCI application to work against DB2 virtually unchanged. - Simply create the following include file named oci.h: #include <db2ci.h>. - Ensure that the new oci.h is before Oracle's oci.h in the PATH. - Then replace the -loci.dll link option with -ldb2ci.dll in your makefile and recompile your application. SQL*Plus script support using CLPPlus Oftentimes, DDL scripts and even reports are written using the SQL*Plus command-line processor. To make it easier to transfer these scripts as well as the skills of developers writing them, DB2 provides an SQL*Plus-compatible command-line processor, called CLPPlus, as shown in Figure 5. Figure 5. SQL*Plus-compatible CLPPlus tool The tool provides the following functionality. - SQL*Plus-compatible command options - Variable substitution - Column formatting - Reporting functions - Control variables Oracle Forms enablement Oracle Forms is a legacy software product used to create data entry systems for the database. Some customers and ISVs have hundreds of Oracle Forms screens that constitute or are part of an application. IBM has partnered with realease LLC and WonderWorks Group to provide seamless Oracle Forms to Java conversion capability. Both partners provide tooling that translates Oracle forms to the Java language in a highly automated, efficient, and maintainable way preserving the look and feel of the original Forms GUI without creating any new dependencies on a third-party product. WonderWorks FusionPaaS can convert a classic Oracle Fusion Forms application, such as the one shown in Figure 6, into a J2EE application, as shown in Figure 7, that is 100-percent web-enabled, running on an iPad for example or other smart device, and be integrated into Web 2.0 applications, such as Google Maps. Figure 6. Original Oracle Fusion Forms Figure 7. Converted form using FusionPaaS For more information about WonderWorks FusionPaaS, see the white paper on the Oracle Application Enablement to DB2 for LUW wiki. Continuous refinement As more and more customers and partners enable to DB2, feature requests and refinements are being added into the DB2 Fix Pack streams to continually improve the level of compatibility. DB2 9.7 Fix Pack 1 Fix Pack 1 introduces the following capabilities. CCFLAGSsupport to maximize the amount of shared code between Oracle and DB2 for vendor applications. FORALLand BULK COLLECTsupport. - The SUBSTRBfunction and refinements to SUBSTR. - Improvements to the handling of Boolean expressions. - The OCI application-compatible DB2CI client. CREATE OR REPLACEfor types. - Extended support for PL/SQL functions, such as INOUTparameters, the ability to write to tables from a function, and more. - The partnership with realease to enable Oracle Forms applications. DB2 9.7 Fix Pack 2 Fix Pack 2 introduces the following capabilities. NCHAR, NVARCHAR2, and NCLOBsupport, along with various NCHARfunctions, such as TO_NCHARand TO_NCLOB. - PL/SQL code obfuscation using the DBMS_DDL.WRAPand DBMS_DDL.CREATE_WRAPPEDfunctions. - Named parameter invocation and DEFAULTs in functions. DEFAULTs in procedures that are not at the end. BULK COLLECTperformance improvements. DB2 9.7 Fix Pack 3 Fix Pack 3 introduces the following capabilities. - Ability to compare small LOBs to overcome page size limitations. NUMBERperformance enhancements. RATIO_TO_REPORTfunction. - Improvements to RAISE_APPLICATION_ERROR. - Runtime "purity level" enforcement. DB2 9.7 Fix Pack 4 Fix Pack 4 introduces the following capabilities. - JDBC support for ROWand ARRAY OF ROW. - Multi-action trigger support. - Support for BEFOREtriggers that update the database. - Support for LIKEwith non-constant patterns. LISTAGGstring aggregation support. - Improvements to autonomous transaction support. DB2 9.7 Fix Pack 5 Fix Pack 5 introduces the following capabilities. - JDBC support for nested ROWand ARRAY. - Nested ROWand ARRAYsupport for PL/SQL. - New NVL2, SUBSTR2, and HEXTORAWfunctions. - Support for BOOLEANin ROWand ARRAYtypes. - Improvements to anonymous blocks support. - Pro*C support. - Performance improvements for SQL comparing CHARcolumns with VARCHAR2values. DB2 10.1 DB2 10.1 introduces the following capabilities. - Up to a magnitude better performance for logic heavy SQL PL and PL/SQL. - Support for type declarations local to a PL/SQL block. - Support for procedure declarations local to a PL/SQL block. - Statement level PL/SQL triggers - Row and Column level Access Control (RCAC) which makes it easy to map FGAC. DB2 10.1 Fix Pack 2 Fix Pack 2 introduces the following capabilities. - Better performance for NOTFOUND exception handling in PL/SQL. - Support for SUBTYPE in PL/SQL. - Support for PL/SQL pipelined table functions. - Allow specification of the trim character for LTRIM and RTRIM functions. - Native NUMBER support for the MOD function. - Greatly improved performance for the Data Studio PL/SQL and SQL PL debugger. DB2 10.5 DB2 10.5 introduces the following new capabilities. - Support for indexing of expressions. - Support for Oracle compatible unique index semantics. - Allow table definitions with rows exceeding 32KB. - Support for the INSTRB, INSTR2, INSTR4, LENGTHB, LENGTH2, LENGTH4, and SUBSTR4 functions. - Support for @dblink syntax when referencing remote tables. Subsequent Fix Packs and upcoming major releases of DB2 are expected to provide an even higher level of compatibility. Enabling to DB2 is as easy as drag and drop Given the close alignment of DB2 with PL/SQL and Oracle SQL, there is now no need for a complex migration toolkit. Instead, you can use the IBM Data Movement Tool used to simply drag and drop tables, packages, or entire schemas from Oracle onto DB2. Only minimal adjustments are needed on an exception basis to either move an application to DB2, or modify an application so the same source can operate against both DB2 and Oracle. The steps can be as simple as follows. - Setting the necessary registry variables: db2set DB2_COMPATIBILITY_VECTOR=ORA db2set DB2_DEFERRED_PREPARE_SEMANTICS=YES - Restart the database manager: db2stop db2start - Create an Oracle-compatible database: db2 create database mydb pagesize 32 K db2 update db cfg for mydb using auto_reval deferred_force - Start up the Data Extract Tool and connect to the Oracle and DB2 databases as shown in Figure 8. Figure 8. Drag and drop Oracle schemata onto DB2 using IBM Data Extract Tool Once you are connected, you can choose to extract the DDL only or both DDL and data. Finally, you have two choices: deploy directly by executing the generated scripts, or continue with the interactive deploy pane. The later is recommended for most non-trivial applications. - Move desired schemas from Oracle to DB2 using interactive deploy as shown in Figure 9. Figure 9. Drag and drop Oracle schemata onto DB2 using IBM Data Extract Tool In interactive deploy mode, you see a navigation tree that displays all objects extracted from the Oracle database. Select all objects and execute the deploy menu option. As a result, the tool will copy the objects over to DB2 and record its progress. Some objects may fail to deploy successfully, and the tool gives you the option to work with those. When selecting an object, you will see the DDL along with the error DB2 encountered. Now you can fix the definition as required and redeploy. The goal is to interactively move all the objects to DB2 on an exception basis. Sizing up the enablement to DB2 So how easy will it really be to enable your application to DB2 10? The answer is, of course, that it depends. IBM has a tool for download called MEET DB2. This tool can analyze all the objects in your Oracle database and score them. It produces a report of what will work out of the box and where adjustments need to be made. To get a quick idea, you can download MEET DB2 and try it for yourself. If you prefer, your IBM team, reachable at [email protected], can help to quickly provide a compatibility assessment of your current Oracle database. Figure 10 shows an example of output from the MEET DB2 tool. Figure 10. MEET DB2 report tool for assessment In the example report, the tool indicates that 98% of the PL/SQL statements are immediately transferable to DB2. It also breaks down the statements by object type and shows how many of each object type would require attention during a migration. Since DB2 9.7, hundreds of applications totaling millions of lines of PL/SQL code have been analyzed in detail with an average out-of-the-box transfer rate of 90-100 percent. Figure 11 shows statistics that were run against DB2 9.7 Fix Pack 5. Figure 11. Median of over 98 percent of statements supported Th2 biggest 74 applications of 171 yielded over 2.5 million lines of code. Between 90.1 percent and 99.9 percent of the lines of code were immediately transferable to DB2. Architecture DB2's approach to application conversion is fundamentally different from that of classic conversion. Most vendors provide tooling that translates from the source language to the target language offline and with manual assistance. The outcome is very convoluted code that is very hard to maintain. Few vendors add intercept layers that emulate the source language in the target language dynamically. Of course, emulations are slow by design, especially when any attempt is made to achieve correct semantics. For DB2, compatibility is built deep into all layers of its engine, which deals with semantic and syntax. Instead of working around limitations, DB2 has been extended natively to support necessary features. The architecture diagram in Figure 12 highlights the areas of DB2 that underwent significant change to support the Oracle SQL and PL/SQL dialect. Figure 12. Architecture of DB2 Compiler and Runtime engine The sections for SQL parser and rewrite in the SQL compiler, PL/SQL parser in the PL compiler, the PL runtime, and the SQL runtime were significantly extended or added to accommodate the Oracle dialects. Frequently asked questions Is this technology ready for critical applications? One of the biggest US banks has entrusted its online banking system to this technology. If you checked your US bank account today you have likely driven execution of PL/SQL Packages on DB2. Which version of Oracle does DB2 10 support? The coverage provided for the SQL and PL/SQL dialects is strictly based on what is being used by applications. There are features that have been introduced in releases as recent as Oracle 11g that are supported, while some constructs available in Oracle 8i are not supported. In one study of 71 applications totaling over 2.5 million lines, between 90.1 percent and 99.9 percent of the code moved to DB2 9.7.5 without change with a median of over 98%. Many of the remaining adjustments can be automated, have since been addressed in DB2 10, or are otherwise repetitive. How fast will my Oracle application run on DB2? That is the million-dollar question! Unfortunately, Oracle licensing terms prohibit anyone from publishing benchmarking results without prior written consent. By its very design, and confirmed by quality assurance benchmarking, an application written against PL/SQL on DB2 is as fast as one written against SQL PL on DB2. Vendors who have gone through the enablement process already are generally pleasantly surprised. How much work was it to provide these features? Not nearly as much as one might think. Some initial work, such as CONNECT BY and NUMBER, was done for DB2 9.5 in a tactical fashion. In earnest, the effort in DB2 9.7 was completed in less than 18 months. Since then, we continue to incrementally improve the level of compatibility as customers require. What are common complications in enabling from Oracle to DB2? The compatibility of DB2 is obviously not 100 percent, so there will likely be some hiccups when you first convert to DB2. Many of these challenges are trivial and easy to fix. For example, DB2 does not support locally declared functions. So you would need to pull declared functions out into the package or replace them with locally declared procedures. Which editions of DB2 support Oracle SQL compatibility features? All features described above are supported across all platforms supported by DB2 for Linux, UNIX, and Windows. This includes Windows, Linux (on Intel, Power, and the mainframe), HP, and Sun Solaris. All editions of DB2 are supported. There are some technical restrictions when using PL/SQL functions and triggers in DPF. Does DB2 Express-C support PL/SQL? Yes! You can break out of the database size restrictions of Oracle XE by moving to DB2 10 Express C. And best of all: you can try it out on the cloud (see the Resources section). What was the hardest feature to implement? By far, VARCHAR2 semantics are the hardest to implement. The "trailing blanks matter" semantics and "NULL is empty string" continue to challenge. Conclusion Thanks to its native multi-dialect SQL support, the new DB2 allows for easy enablement of Oracle applications to DB2. Vendors of packaged applications can offer their applications on Oracle and DB2 at minimal additional cost. Customers can freely choose the vendor that offers the technology they need without being limited by past choices. To test these features, download a trial version of DB2 10 (see the Resources section). Resources Learn - Take the free, self-paced DB2 Workshop for Oracle Professionals to learn more on this topic. - Read "What's new in DB2 10 for Linux, UNIX and Windows". - "A Matter of Time: Temporal Data Management in DB2 10" offers more information. - Learn more by reading "DB2 V10.1 Multi-Temperature Data Management Recommendations". - Visit IDUG DB2 Tech Channel to listen to and watch DB2 Tech Talk videos on topics such as a DB2 10 technical deep dive on the key features of the new product release. - See the DB2 10 launchpad to get an overview on the value of DB2. - The Oracle Application Enablement to DB2 for LUW wiki serves as the primary resource supporting customers and IBM business partners who enable their Oracle applications to DB2 for Linux, UNIX, and Windows. - Check out MEET DB2 and analyze your Oracle database to estimate the level of compatibility with DB2. - Watch the Moving to DB2 is Easy video for a demonstration giving an overview of application enablement to DB2. - DB2 CLPPlus demonstrates the usage of the SQL*Plus-compatible CLPPlus shell. - Watch DB2: Native PL/SQL Support which showcases the many PL/SQL features supported by DB2. - Check out DB2 pureScale. With DB2 pureScale, you can scale out and provide continuous availability for your OLTP applications. - Check out IBM DB2 e-kit for Database Professionals, register for this e-kit, and learn how easy it is to get trained and certified for DB2 for Linux, UNIX, and Windows. Expand your skills portfolio, or extend your DBMS vendor support to include DB2. -. - Learn more about Information Management at the developerWorks Information Management zone. Find technical documentation, how-to articles, education, downloads, product information, and more. Get products and technologies - Try IBM DB2 Express-C 10 in compatibility mode and in the cloud. Using the cloud, you can easily test the DB2 compatibility with Oracle without the need to locally install DB2. - Download a free trial version of DB2 10 for Linux, UNIX, and Windows. - Download DB2 Express-C 10, a no-charge version of DB2 Express database server for the community. - DB2 Database Professionals Community. Connect with other users while exploring the developer-driven blogs, forums, groups, and wikis. - Request an assessment of your Oracle-to-DB2 enablement project from [email protected].
https://www.ibm.com/developerworks/data/library/techarticle/dm-0907oracleappsondb2/
CC-MAIN-2015-32
en
refinedweb
Its a little gap in between my third and fourth part, anyways I am going to write 4th part of the series. First you can read my earlier parts from the link below. As if you have read my earlier posts in the series, you can visualize that in my first post I discussed about the Idea about Claim based authentication, basics and various components of Claim based authentication. In part 2, we discussed about Windows Identity Foundation (WIF) and also created a step by step simple example using WIF. In part 3, we discussed the problem of the approach discussed in part 2 and discussed that these problems can be avoided if we use a Federated Provider. In this post we’ll implement the federated provider in a simple solution. To continue reading this post, it’ll be better to go through at least part 3 post in the series. Just to brief, In federated scenario, two Identity Providers comes into the picture : one is Identity Provider (where user puts its credentials) and another is Federated Provider which understands all the Identity Providers and accordingly creates a new token that is understood by the relying party. Relying party does not care about Identity Providers, rather it just trust the Federated Provider and can understand the token issued by it. It can be pictorially depicted as As here in above image, we can see that every Identity Provider send different type of token. Access control service process these and create a new token and sends to Relying Party. We can have our own custom Identity Provider and Federated provider as well. In Microsoft stack, Access Control Service is one of the most popular service which serves the purpose of a federated provider. It is Pay-As-You-Go service on Azure and provides all the basic infrastructure and requires less or no amount of coding. Access Control Service is now called Windows Azure Active Directory. In today’s post, I’ll create a step by step example to implement the Federated provider using Windows Azure Active Directory. First let me discuss the various components while using federated provider . So there would be at least four components As we have already discussed all the above components in my last post. we ‘ll create a step by step example of Federated Provider. Here Identity provider could be any third party Identity Provider or our own Identity Provider. Windows Azure Active Directory works as Federated Provider and a application hosted on my local machine will be relying party. Let’s go step by step. First we need to create a namespace at azure Access control. For this first one need to be registered on. If you have account here then you can continue else you’ll require to register here before proceed. Here we’ll login azure portal () and create a Active Directory Access Control namespace for our Federated Provider that will be used by our application (Relying Party). After login it redirects to home page and provides many options in left pane of the Home page as Here to quickly create the Active Directory Access Control namespace click on the encircled new button in left bottom as in above screen shot. It opens up a window. Select App Service ->Access Control -> Quick Create. It asks for namespace and region. The namespace should be globally unique, else it will not be accepted. After filling it click on Create I have given here namespace CBAPart4 and selected region Southeast Asia. Here you can see encircled green tick mark which means the namespace available. After creating, it will be enlisted in your account with status active few minutes as: On this page, click on Manage button at bottom middle of the page which takes us at different page in a new tab/window with left pane as Now click on Identity Providers which allows to add Identity providers for this Federated Provider. It takes us at below screen It has added by default added Windows Live Id as an Identity Provider. We can add other providers by click on Add link encircled in the above screen shot. Here we have some already pre-configured Identity Providers like Google, Yahoo and custom Identity Providers like facebook, ADFS 2.0 (Active Directory Federation Services). Let’s add Google here It asks the that display and any image for that if we want to show that at login link. I provided an image and Clicked on save. Similarly we can add Yahoo as an Identity provider. Now we have three Identity providers for our federation provider: Google, Yahoo and Windows Live Id. Next we require to add the Relying Party. Means our application that will be authenticated via this Federated Provider. To add the Relying Party applications, click on Relying Party applications link, it takes us Here I provided the name to my Application as MyRelyingPartyApp . There are two ways to add settings. Select the first one (Enter settings manually) as encircled in mode section. We need to provide two URLs. Here one is for which the token is issued (also called Realm) and another where the federation provider returns the access token. Leave the rest of the settings of section Relying Party Application Settings section and move to next section Authentication settings on the same page as Here it lists all the Identity Providers that we added , we need to select all which we want to use. I selected all. Now we need to create the rules. So first require to create a Rule Group then rules under that rule group. We either can use any existing rule group (if exists which can serve the purpose) or can create a new one. I’ll be telling why do we require the rules in coming section. Token is required to be signed to check the integrity at the target application. Here I have selected the default option for this demo. We can provide our own certificate that can be used for token signing. Rule – Rules are very important here. Identity Providers sends token with claims to Federated Provider. As Federated provider trust Identity Provider so it knows the format and details about token and can read it. Federated provider first checks the integrity of token then read the required data and creates a new token with claims. Then Federated provider sends this token to Relying party. Here this rule is to convert the claim provided by Federated Provider to some other claim type that can be understood by Relying party. We can easily pass through the same just creating a rule with pass through. Rules also are very helpful because we not want to sends all the claims to Relying party or want to combine some claims. These scenarios is handled by these rules. To add the rules, click on the Rule Groups from left pane It adds by default a rule group for added Relying party. We can create new one . We need to add the rules in the rule group assigned to Relying party. For that click on Default rule group on the above screen shot and it redirects us to Here we can see the details of the rule group. It is for the Relying party that we added as second encircled . Now in this group there is no rule. Either we can add some rules or we can simply check on Generate button, which redirects us a page which list down all the Identity Providers for this Federated Provider namespace and again click on generate button which just add the default claim rules that just pass the claims returned by Federated Provider as is. A rule is created is for each claim which has input claim and output claim as Else we can create rule by our self by clicking on the Add link as encircled in the previous screenshot as: It has three section, first section If first option Identity provider, once we select it accordingly it populates all the claims provided. As I selected Google and enlist all the claims by provided by Google in dropdown as marked in second encircled area. We can add here another input claim as last encircled link. Now let’s move to next section that is Then Here either we can pass through as it is or we can send it in some another type that is available in second encircled drop-down. It has a list of standard claims that we can choose off. Also we can put our own custom claim type in the text box. Every claim type is in URI format as per standard so custom type also should follow the standard. I have selected pass through. The last section is Rule Information . here we can put the details about the rules. All the required steps has been completed on windows azure portal, Now need to get the metadata for the Federated provider. Click on the Application Integration link under the Development section in left pane. As this provides many option but we require here WS federation metadata as encircled that will be used in relying party . Copy that encircled URI. As discussed, I have created a ASP.NET web application project using default template and hosted at IIS. Now the next step to build a trust relationship with Relying party and Federated Provider. So as we have already seen in our part 2 post that how to build relationship between Identity Provider and relying party. We require here the federation metadata from the identity provider and add the STS (Securty Token Service) reference. I am adding those steps for the completeness of the post. So let us add the STS reference as It opens a popup as First encircled area contains Relying party configuration settings and another is Application URI. Click on next Here as we are using existing STS so selected the last one. We require to provide the metadata here so to get the metadata URL from Federated Provider. I have the provided the metadata URI that we copied from Application Integration section. Click on next Here I have selected to Disable certificate chain validation as my certificate is not verified. For development purpose it can be disabled but at production, this option should not be selected. Click Next Select No Encryption. Click Next. Here we have two claims Name and Identity Provider. Click Next Now click finish. And we have built a trust relationship between federated provider and our own Application (Relying Party). Now it’s time to run the application. So let us run it It takes us a page and asks us to choose the Identity Providers that we added as Identity Provider while configuring Federated Provider on Azure Portal. The images for Identity Providers, I provided while adding these on Azure Portal. Now I clicked on Google and it took me at Google login page after providing my credentials and successful authentication, I got redirected my own application as an authenticated user. Similarly can get authenticated via Windows Live Id or Yahoo Id. Reading Claims provided by Identity Providers Reading the claims is very easily that are provided by these Identity Providers. I read the claims and displayed it via a Data Control. To read the claims, we can use the following code IDictionary<string, string> claims = new Dictionary<string, string>(); //; gdViewClaims.DataSource = claimsIdentity.Claims; gdViewClaims.DataBind(); Now If I login through Google account then it shows the following claims I have displayed three values here ClaimType, Value, and ValueType. Here Google returned four Claims which has it’s descriptions and my mail-id and name. Similarly if I login through another Identity Providers, they will show different claims. We can use these values in our relying party for different purposes. ClaimType Value ValueType I have written this part is little long (could not make it short). If you face any issue while working on any steps, Do let me know here I’ll be happy to answer. Cheers,Bri.
http://www.codeproject.com/Articles/604723/Claim-based-Authentication-Part-4
CC-MAIN-2015-32
en
refinedweb
XIncludes in the XML Schema-based XML file -- How to? Discussion in 'XML' started by Zbyszek Cybulski, Mar03 - Markus - Nov 23, 2005 - Replies: - 0 - Views: - 632 - Andy - Nov 18, 2004 [XML Schema] Including a schema document with absent target namespace to a schema with specified tarStanimir Stamenkov, Apr 22, 2005, in forum: XML - Replies: - 3 - Views: - 1,444 - Stanimir Stamenkov - Apr 25, 2005 w3c Schema naming patterns and template-based schema generationSteve Jorgensen, Aug 9, 2005, in forum: XML - Replies: - 0 - Views: - 611 - Steve Jorgensen - Aug 9, 2005 - Replies: - 3 - Views: - 3,804
http://www.thecodingforums.com/threads/xincludes-in-the-xml-schema-based-xml-file-how-to.168949/
CC-MAIN-2015-32
en
refinedweb
Mr. Brightside Wait, now Java does too much? The forum section of the front page gets four items today: one more than usual, but two less than I had been considering. There are a bunch of really good discussions going on. Well, there's also a new spam trick -- spammer posts a "hey, does anyone have bulk e-mailing software" message from one account, and then the SEO-targeted link-tacular followup from another -- but I caught those and deleted the messages (the users will be deleted in a few hours once the U.S. West Coast wakes up). Anyways, back to the legitimate discussions, alexj33 posted a provocative message in the JavaTools forum -- probably should have been in JDK feedback, but again I digress -- to ask Has Java lost its way? I used to be a Java developer back in the late 90s. Java has always been a frustrating enigma for me. Allow me to explain. Several times I've browsed in and out of the latest Java books and forums, and I cannot believe how *complex* it has all become. Quite simply, does there have to be a new API and a new 3rd party framework for every little thing a developer wants to do? My understanding is that Java was meant to take the complexity out of previous development platforms, yet in my eyes it has become exactly what it was meant to replace, albeit in different ways. Who can keep up with this dizzying array of acronyms and libraries? Who has the time to? Is it even worth it to do so? Who has time to research the frameworks to find out even what is worthwhile to invest more time in? Is this a valid criticism? We've heard a lot over recent years about Java "using up its complexity budget", but some of that is aimed at language features like generics (and possibly closures). This complaint has to do with the breadth and depth of the libraries available to the Java developer. On the one hand, there are a lot. A quick word-count of the JDK 6 Javadocs shows that Java SE has over 3700 public classes, and that's before you bring in libraries like Java EE. But on the other hand, does anyone really try to keep up with it all? Java is large enough to have specialization, and it's likely the server-side programmer never looks at anything in Swing, just as the desktop developer neither knows nor cares how servlets work. By this argument, Java's no different than any other large platform: do you suppose there are a lot of MSDN developers who excel both at Direct X graphics and .NET web services? alexj33finishes up with some interesting questions that are worth keeping in mind as we approach Java 7 and, no doubt, a new set of APIs: OK, rant over. Now onto the real question. My question to you all is threefold: 1. What do you think the original purpose of Java was? 2. Is Java on the right path today? Can it continue in its current direction? 3. Is Java itself embracing or alienating the developer community? Also in today's Forums, terrencebarr posts a reminder of Java ME's value in Re: Why not get rid of the JVM and JIT for mobile device? "Sure, Java ME fragmentation is a source of extra effort and frustration but one of the key reasons is because the underlying native platforms are so radically different (not really Java's fault). Deploying native apps not only requires developers to deal with the native differences but adds a new layer of complexity with different processors, executable formats, operating systems, libraries, security mechanisms, etc, etc. It grows the problem space by two or three dimensions." Over in the Project Wonderland forum, nicoley posts a New Proposal for Voice Calling in 0.5. "I have just added to the wiki the first draft of a proposal for Voice Calls in Wonderland version 0.5:. The general concept of this design is to unify "voice chat" and telephone integration. The proposal suggests a single user interface for making VoIP calls and PBX calls. It also includes the design of an in-world personal virtual phone. This phone can be used by each Wonderland user to both place and receive voice calls." Finally, ingridy clarifies a timeline question in the followup Re: "Next Generation" Plug-In Support for AMD64?, saying "64bit plug-in will not be ready for 6u10 final release." In Java Today, the ME Framework 1.2.1 release is now available on the project's download page. "This release went through an extensive QA cycle and is ready to be used for test suite development. If you downloaded the ME Framework 1.2.1 development releases, please switch to this final milestone release." The ME Framework is a set of JT harness plugins that supports the Java ME platform. TCK architects use the JT harness and the ME Framework to construct TCK test suites for Java ME technologies. Kirill Grouchnikov has started a new series of blogs in which he talks about the specific tasks involved in taking a UI definition from your designer and turning it into a working application. Step 1 is analyzing the original design or more specifically, identifying the application's decoration areas and functional areas. Step 2 is mapping design to UI toolkit, which means "map[ping] the application functional areas to Swing container hierarchy and the application decoration areas to Substance decoration areas." Sathish K. Palaniappan and Pramod B. Nagaraja's article Efficient data transfer through zero copy." In today's Weblogs, Fabrizio Giudici begins with a story about Remote profiling with NetBeans. "Varun Nischal, one of the fresh new members of the NetBeans Dream Team, is hosting some friends on his blog. I've just published a small story about the NetBeans Profiler used in remote mode." In Add resetValue() to EditableValueHolder?, Ed Burns "polls the community about whether to break backwards compatibility in one small interface in JSF 2.0." Finally, Marina Sum promotes a Join OpenSSO and Single Sign-On Presentation in Second Life. "Registration is now open for the September 30 session." Current and upcoming Java Events : - September 1-24 - O&B's Enterprise Architect's Boot Camp - September 2008 - September 12-14 - New England Software Symposium 2008: Fall Edition - -. Wait, now Java does too much? - Login or register to post comments - Printer-friendly version - editor's blog - 1327 reads by bharathch - 2008-09-11 07:50alexj33, it doesn't matter what you or I believe as developers. The language experts, theorists supreme and the guardians of Java will tell you what's good for you. They'lI tell you that Java will become obsolete if it doesn't "catch up" with other functional programming languages or doesn't get more "expressive and concise" like morse code by biting a silver bullet called BGGA closures. The gods have spoken. Comply or be left behind, mortal. :-) by darted - 2008-09-10 18:27So, the complaint is that due to Java becoming more capable, it is now too complex? Well, I can see that as standards have evolved some items have been kept for 'backward compatibility' that quite honestly add bloat and confusion. However, if you need portal technology and Java didn't offer an API to do it then you would complain. The problem I see is a lack of integration with current API's or total replacement, pick one. I have news for you. As the needs of the business change you will have to continue to learn new frameworks. Pick you language(s) to do that in. Is Java the correct one for all of them. No. But it is the one for a lot of them and will continue to be for me. That said, I think Java needs to be cognizant of the fact that it should not try and be all things to all people and consider what should it provide more capability to plug into vs. implement. by sjoyner - 2008-09-10 14:58The ever increasing complexity of Java has convinced me to move away from it. All one needs to do is look at the Java EE tutorial to get a feeling of hopelessness. Do I seriously need 5-6 different roles to deploy a web app to a production environment? I work in one of those roles at a large company, and I can tell you from experience that it causes more problems than it solves. Java is far too complex. Look at the Portal framework. I need a whole new set of skills to develop a portal application vs. developing in a servlet environment. Why? What's the point? I don't want to have to learn 16 different frameworks and standards just to create a halfway complex portlet. Not only that, why should I have my programming capabilities limited by some ill conceived standard (JSR 168 - you can't access http cookies. Just ran into that yesterday). The Java community has convinced much of the business world that all these standards and frameworks are necessary to maintain compatibility and interoperability, but in the end, not even the big Java vendors can keep applications compatible between versions of their software. Thanks for the ride Java. You've taught me an important lesson. Things can get too big and too complex to be useful. by keithjohnston - 2008-09-10 14:03Java has to keep adding APIs to support new functionality and programming models. I think the complaints about generics are overblown, and I don't think closures will kill the language. What works to kill a language is simply a new generation of programmers. They see Java and how huge it has become, and then they see python, which seems small, and hey - no compiler needed! So they start learning Python and they grow with it over time so they don't mind the additional complexity. Ten years from now, Python will have a gazillion APIs and the next generation will say it is too complicated and start learning "Foobar" or whatever the small, simple language is of the day. And this language will have a compiler! No more bugs in my programs because of typos! Cool!
https://weblogs.java.net/node/240832/atom/feed
CC-MAIN-2015-32
en
refinedweb
This document is a guide to the behaviour of the twisted.internet.defer.Deferred object, and to various ways you can use them when they are returned by functions. This document assumes that you are familiar with the basic principle that the Twisted framework is structured around: asynchronous, callback-based programming, where instead of having blocking code in your program or using threads to run blocking code, you have functions that return immediately and then begin a callback chain when data is available. See these documents for more information: After reading this document, the reader should expect to be able to deal with most simple APIs in Twisted and Twisted-using code that return Deferreds. - what sorts of things you can do when you get a Deferred from a function call; and - how you can write your code to robustly handle errors in Deferred code. Unless you're already very familiar with asynchronous programming, it's strongly recommended you read the Deferreds section of the Asynchronous programming document to get an idea of why Deferreds exist. Callbacks A twisted.internet.defer.Deferred is a promise that a function will at some point have a result. We can attach callback functions to a Deferred, and once it gets a result these callbacks will be called. In addition Deferreds allow the developer to register a callback for an error, with the default behavior of logging the error. The deferred mechanism standardizes the application programmer's interface with all sorts of blocking or delayed operations. from Multiple callbacks can be added to a Deferred. The first callback in the Deferred's callback chain will be called with the result, the second with the result of the first callback, and so on. Why do we need this? Well, consider a Deferred returned by twisted.enterprise.adbapi - the result of a SQL query. A web widget might add a callback that converts this result into HTML, and pass the Deferred onwards, where the callback will be used by twisted to return the result to the HTTP client. The callback chain will be bypassed in case of errors or exceptions. from twisted.internet import reactor, defer class Getter: def gotResults(self, x): """ The Deferred mechanism provides a mechanism to signal error conditions. In this case, odd numbers are bad. This function demonstrates a more complex way of starting the callback chain by checking for expected results and choosing whether to fire the callback or errback chain """ if x % 2 == 0: self.d.callback(x*3) else: self.d.errback(ValueError("You used an odd number!")) def _toHTML(self, r): """ This function converts r to HTML. It is added to the callback chain by getDummyData in order to demonstrate how a callback passes its own result to the next callback """ return "Result: %s" % r def getDummyData(self, x): """ The Deferred mechanism allows for chained callbacks. In this example, the output of gotResults is first passed through _toHTML on its way to printData. Again this function is a dummy, simulating a delayed result using callLater, rather than using a real asynchronous setup. """ self.d = defer.Deferred() # simulate a delayed result by asking the reactor to schedule # gotResults in 2 seconds time reactor.callLater(2, self.gotResults, x) self.d.addCallback(self._toHTML) return self.d def Deferred's error handling is modeled after Python's exception handling. In the case that no errors occur, all the callbacks run, one after the other, as described above. If the errback is called instead of the callback (e.g. because a DB query raised an error), then a twisted.python.failure.Failure is passed into the first errback (you can add multiple errbacks, just like with callbacks). You can think of your errbacks as being like except blocks of ordinary Python code. Unless you explicitly raise an error in except block, the Exception is caught and stops propagating, and normal execution continues. The same thing happens with errbacks: unless you explicitly return a Failure or (re-)raise an exception, the error stops propagating, and normal callbacks continue executing from that point (using the value returned from the errback). If the errback does returns a Failure or raise an exception, then that is passed to the next errback, and so on. Note: If an errback doesn't return anything, then it effectively returns None, meaning that callbacks will continue to be executed after this errback. This may not be what you expect to happen, so be careful. Make sure your errbacks return a Failure (probably the one that was passed to it), or a meaningful return value for the next callback. Also, twisted.python.failure.Failure instances have a useful method called trap, allowing you to effectively do the equivalent of: try: # code that may throw an exception cookSpamAndEggs() except (SpamException, EggException): # Handle SpamExceptions and EggExceptions ... You do this by: def errorHandler(failure): failure.trap(SpamException, EggException) # Handle SpamExceptions and EggExceptions d.addCallback(cookSpamAndEggs) d.addErrback(errorHandler) If none of arguments passed to failure.trap match the error encapsulated in that Failure, then it re-raises the error. There's another potential gotcha here. There's a method twisted.internet.defer.Deferred.addCallbacks which is similar to, but not exactly the same as, addCallback followed by addErrback. In particular, consider these two cases: # Case 1 d = getDeferredFromSomewhere() d.addCallback(callback1) # A d.addErrback(errback1) # B d.addCallback(callback2) d.addErrback(errback2) # Case 2 d = getDeferredFromSomewhere() d.addCallbacks(callback1, errback1) # C d.addCallbacks(callback2, errback2) If an error occurs in callback1, then for Case 1 errback1 will be called with the failure. For Case 2, errback2 will be called. Be careful with your callbacks and errbacks. What this means in a practical sense is in Case 1, If a Deferred is garbage-collected with an unhandled error (i.e. it would call the next errback if there was one), then Twisted will write the error's traceback to the log file. This means that you can typically get away with not adding errbacks and still get errors logged. Be careful though; if you keep a reference to the Deferred around, preventing it from being garbage-collected, then you may never see the error (and your callbacks will mysteriously seem to have never been called). If unsure, you should explicitly add an errback after your callbacks, even if all you do is: # Make sure errors get logged from twisted.python import log d.addErrback(log.err) Handling either synchronous or asynchronous results In some applications, there are functions that might be either asynchronous or synchronous. For example, a user authentication function might be able to check in memory whether a user is authenticated, allowing the authentication function to return an immediate result, or it may need to wait on network data, in which case it should return a Deferred to be fired when that data arrives. However, a function that wants to check if a user is authenticated will then need to accept both immediate results and Deferreds. In this example, the library function authenticateUser uses the application function isValidUser to authenticate a user: def authenticateUser(isValidUser, user): if isValidUser(user): print "User is authenticated" else: print "User is not authenticated" However, it assumes that isValidUser returns immediately, whereas isValidUser may actually authenticate the user asynchronously and return a Deferred. It is possible to adapt this trivial user authentication code to accept either a synchronous isValidUser or an asynchronous isValidUser, allowing the library to handle either type of function. It is, however, also possible to adapt synchronous functions to return Deferreds. This section describes both alternatives: handling functions that might be synchronous or asynchronous in the library function ( authenticateUser) or in the application code. Handling possible Deferreds in the library code def asynchronousIsValidUser(d, user): d = Deferred() reactor.callLater(2, d.callback, user in ["Alice", "Angus", "Agnes"]) return d, even if isValidUser is a synchronous function: from twisted.internet import defer def printResult(result): if result: print "User is authenticated" else: print "User is not authenticated" def authenticateUser(isValidUser, user): d = defer.maybeDeferred(isValidUser, user) d.addCallback(printResult) Now isValidUser could be either synchronousIsValidUser or asynchronousIsValidUser. It is also possible to modify synchronousIsValidUser to return a Deferred, see Generating Deferreds for more information. DeferredList Sometimes you want to be notified after several different events have all happened, rather than waiting for each one individually. For example, you may want to wait for all the connections in a list to close. twisted.internet.defer.DeferredList is the way to do this. To create a DeferredList from multiple Deferreds, you simply pass a list of the Deferreds you want it to wait for: # Creates a DeferredList dl = defer.DeferredList([deferred1, deferred2, deferred3]): def) deferred1.callback('one') deferred2.errback(Exception('bang!')) deferred3.callback('three') # At this point, dl will fire its callback, printing: # Success: one # Failure: bang! # Success: three # (note that defer.SUCCESS == True, and defer.FAILURE == False) A standard DeferredList will never call errback, but failures in Deferreds passed to a DeferredList will still errback unless consumeErrors is passed True. See below for more details about this and other flags which modify the behavior of DeferredList.. def printResult(result): print result def addTen(result): return result + " ten" # Deferred gets callback before DeferredList is created deferred1 = defer.Deferred() deferred2 = defer.Deferred() deferred1.addCallback(addTen) dl = defer.DeferredList([deferred1, deferred2]) dl.addCallback(printResult) deferred1.callback("one") # fires addTen, checks DeferredList, stores "one ten" deferred2.callback("two") # At this point, dl will fire its callback, printing: # [(1, 'one ten'), (1, 'two')] # Deferred gets callback after DeferredList is created deferred1 = defer.Deferred() deferred2 = defer.Deferred() dl = defer.DeferredList([deferred1, deferred2]) deferred1.addCallback(addTen) # will run *after* DeferredList gets its value dl.addCallback(printResult) deferred1.callback("one") # checks DeferredList, stores "one", fires addTen deferred2.callback("two") # At this point, dl will fire its callback, printing: # [(1, 'one), (1, 'two')] Other behaviours DeferredList accepts three keyword arguments that modify its behaviour: fireOnOneCallback, fireOnOneErrback and consumeErrors. If fireOnOneCallback is set, the DeferredList will immediately call its callback as soon as any of its Deferreds call their callback. Similarly, fireOnOneErrback will call errback as soon as any of the Deferreds call their errback. Note that DeferredList is still one-shot, like ordinary Deferreds, so after a callback or errback has been called the DeferredList will do nothing further (it will just silently ignore any other results from its Deferreds). The fireOnOneErrback option is particularly useful when you want to wait for all the results if everything succeeds, but also want to know immediately if something fails. The consumeErrors argument will stop the DeferredList from propagating any errors along the callback chains of any Deferreds it contains (usually creating a DeferredList has no effect on the results passed along the callbacks and errbacks of their Deferreds). Stopping errors at the DeferredList with this option will prevent Unhandled error in Deferred warnings from the Deferreds it contains without needing to add extra errbacks Class Overview This is an overview API reference for Deferred from the point of using a Deferred returned by a function. It is not meant to be a substitute for the docstrings in the Deferred class, but can provide guidelines for its use. There is a parallel overview of functions used by the Deferred's creator in Generating Deferreds. your method. There are various convenience methods that are derivative of addCallbacks. I will not cover them in detail here, but it is important to know about them in order to create concise code. addCallback(callback, *callbackArgs, **callbackKeywords) Adds your callback at the next point in the processing chain, while adding an errback that will re-raise its first argument, not affecting further processing in the error case. Note that, while addCallbacks (plural) requires the arguments to be passed in a tuple, addCallback (singular) takes all its remaining arguments as things to be passed to the callback function. The reason is obvious: addCallbacks (plural) cannot tell whether the arguments are meant for the callback or the errback, so they must be specifically marked by putting them into a tuple. addCallback (singular) knows that everything is destined to go to the callback, so it can use Python's *and **syntax to collect the remaining arguments. addErrback(errback, *errbackArgs, **errbackKeywords) Adds your errback at the next point in the processing chain, while adding a callback that will return its first argument, not affecting further processing in the success case. addBoth(callbackOrErrback, *callbackOrErrbackArgs, **callbackOrErrbackKeywords) This method adds the same callback into both sides of the processing chain at both points. Keep in mind that the type of the first argument is indeterminate if you use this method! Use it for finally:style blocks. Chaining Deferreds If you need one Deferred to wait on another, all you need to do is return a Deferred from a method added to addCallbacks. Specifically, if you return Deferred B from a method added to Deferred A using A.addCallbacks, Deferred A's processing chain will stop until Deferred B's .callback() method is called; at that point, the next callback in A will be passed the result of the last callback in Deferred B's processing chain at the time. If this seems confusing, don't worry about it right now -- when you run into a situation where you need this behavior, you will probably recognize it immediately and realize why this happens. If you want to chain deferreds manually, there is also a convenience method to help you. chainDeferred(otherDeferred) Add otherDeferredto the end of this Deferred's processing chain. When self.callback is called, the result of my processing chain up to this point will be passed to otherDeferred.callback. Further additions to my callback chain do not affect otherDeferred This is the same as self.addCallbacks(otherDeferred.callback, otherDeferred.errback) See also - Generating Deferreds, an introduction to writing asynchronous functions that return Deferreds.
http://twistedmatrix.com/projects/core/documentation/howto/defer.html
crawl-002
en
refinedweb
. def my_function(file_name_or_object): # parameter is string if isinstance(file_name_or_object, (str, unicode)): return open(file_name_or_object) # parameter is an file like object that has at least a read method elif hasattr(file_name_or_object, “read”): return file_name_or_object Comment by me — May 9, 2008 @ 9:37 pm What about this? It would depend of course in how “filelike” the object has to be. I don’t know whether it is considered the most idiomatic, but it’s used for example in xml.etree.Elementree for the parse function. if not hasattr(file_name_or_object, ‘read’): filename_or_object = open(file_name_or_object) return filename_or_object Comment by Steven — May 9, 2008 @ 9:51 pm @me: why do you use ‘hasattr’ with ‘elif’? Why not simply use ‘isinstance(file_name_or_object,file)’? Is it faster? I’m asking because many classes can have ‘read’ method, not necessarily being ‘file-like’. Comment by vArDo — May 9, 2008 @ 10:02 pm I agree that in this circumstance a type check ( isinstance(file_name_or_object, basestring) ) is perfectly acceptable. ConfigObj does this. (Particularly as there is no ’string protocol’ and so any filename will have to be passed in as a real string or at least a subclass.) Michael Comment by Michael Foord — May 9, 2008 @ 10:02 pm Isn’t the try/except to be preferred over isinstance/hasattr? Or will it lead to bad logic? The OP’s case will assume its a file object if it is not open-able; as opposed to assuming it is a file object if it is an instance of basestring. I suspect that in this case it amounts to the same thing if open itself only works if given an instance of basestring, but style-wise I thought it was preferable to limit the use of isinstance/hasattr? Hell, I’m nit-picking. either would work! - Paddy. Comment by Paddy3118 — May 9, 2008 @ 11:55 pm If the code is only going to read the data in the file a line at a time then I would say it is far better to specify that the function takes an iterable sequence of strings, so that the client code is responsible for creating the file object and passing it in. This way the client could equally give the function a urlib object, or a StringIO object, or a list of strings, or a generator, or anything else that the user can come up with. This not only makes the function more flexible and useful, it also makes it much easier to test. I have lots of tests where the production version of the code takes a file object, but the unit tests pass in something like this: testdata = ”’ lines of test data ”’.splitlines() result = myFunction(testdata) assert result == expected This makes the test self-contained, instead of being split across lots of supplementary files. Having a function that takes either a string or a filename is a code smell IMHO - if you must allow either then have two separate functions, the first takes a file (or string iterable), and the second takes a filename, opens the file and calls the first function. e.g. def doStuff(fileObj): # do stuff with the file def openFileAndDoStuff(filename): doStuff(open(filename)) Comment by Dave Kirby — May 10, 2008 @ 12:09 am You could use named arguments def my_function(file_name=None, file_object=None): ….if file_object == None: ……..file_object = open(file_name’, ‘r’) ….# Do stuff with file_object my_function(file_name=’test.txt’) my_function(file_object=open(’test.txt’,'r’) Comment by Snazz — May 10, 2008 @ 12:43 am Another version: def getfile(obj): ….if all(hasattr(obj, attr) for attr in ['read', 'seek', 'write']): ……..return obj ….elif isinstance(obj, basestring) and os.path.isfile(obj): ……..return file(obj) ….else: ……..raise ValueError(”Not a file-like object or valid path.”) Comment by Steve Kryskalla — May 10, 2008 @ 3:12 am I think you could also turn this type of function into a decorator that will automatically coerce an argument of the wrapped function. Comment by Steve Kryskalla — May 10, 2008 @ 3:16 am
http://halfcooked.com/blog/2008/05/09/opening-a-file-in-python/
crawl-002
en
refinedweb
Resin provides a robust and tested connection pool that is used to obtain connections to databases. A basic <database> configuration specifies the following: <database jndi- <driver type="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource"> <url>jdbc:mysql://localhost:3306/test</url> <user></user> <password></password> </driver> </database> This <database> will configure a javax.sql.DataSource and store it in JNDI at java:comp/env/jdbc/test_mysql. To use the data source, follow the database use pattern in the DataSource tutorial. Sample <database> configurations are available in the thirdparty driver page. Although some deployments will specify driver and connection pool parameters, the default values will be fine for most applications. A database connection is used to allow the Java program, running in a JVM, to communicate with a database server. Connection pools are used to reduce the overhead of using a database. Establishing a connection to the database is a costly operation. A connection pool keeps a pool of open connections, each connection can be used for a time as needed, and then released back to the pool. A connection that has been released back to the pool can then be reused. Connection pooling is especially important in server applications. The overhead of opening a new connection for each new client request is too costly. Instead, the database pool allows for a connection to be opened once and then reused for many requests. Resin provides an implementation of DataSource. Resin's implementation of DataSource is a connection pool. DataSource A Driver provides an interface and is responsible for the communication with the database. Every different database (i.e Oracle, MySQL) has their own means of enabling communication from the client (in this case Resin and you applications) and the database. The Driver provides a common interface that hides the details of that communication. Transactions are especially important in server applications where many threads of processing may be interacting with the database at the same time. For a simple example, imagine a set of operations that reads a value, calculates a new value, and then updates the database. read value A=1 calculate A=A+1 update A=2 read value A=2 calculate A=A+1 update A=3 Imagine if one thread is performing this operation, and in the middle of this read/calculate/update, another thread performs an update. The data that the first thread obtained from the read and is using for the calculation and update is no longer valid. Thread 1 Thread 2 -------- -------- read value A=1 read value A=1 calculate A=A+1 calculate A=A+1 update A=2 update A=2 Placing the read/calculate/update operations in a transactions guarantees that only one thread can perform those operations at a time, if a second thread comes along and tries to perform the operation, it will have to wait for the first thread to finish before it can begin. Thread1 Thread 2 ------- -------- read value A=1 calculate A=A+1 (tries to read A, but has to wait for thread 1) update A=2 read value A=2 calculate A=A+1 update A=3 If the guarantees that transactions apply need to apply to operations that occur on two databases within the same transaction, distributed transactions are needed. If A and B in the following example are in two different databases, then a distributed transaction is needed: A B read value db1.A=1 read value db2.B=99 calculate A=A+1 calculate B=B-A update db1.A=2 update db2.B=97 Distributed transactions are rarely needed, and few databases really support them. Configure a database resource, which is a database pool that manages and provides connections to a database. java:comp/env info All times default to seconds, but can use longer time periods: The class that corresponds to <database> is com.caucho.sql.DBPool Configure a database driver. The driver is a class provided by the database vendor, it is responsible for the communication with the database. The jar file with the driver in it can be placed in WEB-INF/lib, although it is often best to place your datbase driver's jar file in $RESIN_HOME/lib/local/, which makes the driver available to all of your web applications. WEB-INF/lib $RESIN_HOME/lib/local/ Examples of common driver configurations are in Third-party Database Configuration. The class that corresponds to <driver> is com.caucho.sql.DriverConfig Database vendors usually provide many different classes that are potential candidates for type. The JDBC api has developed over time, and is now being replaced by the more general JCA architecture. The driver you choose depends on the options the vendor offers, and whether or not you need distributed transactions. JCA is replacing JDBC as the API for database drivers. JCA is a much more flexible approach that defines an API that can be used for any kind of connection, not just a connection to a database. If a database vendor provides a JCA interface, it is the best one to use. A JCA driver implements ManagedConnectionFactory. When you specify such a class for type, Resin will notice that it is a JCA driver and take advantage of the added functionality that the JCA interface provides. ManagedConnectionFactory The same JCA driver is used for both non-distributed and distributed transactions JDBC 2.0 defined the interface ConnectionPoolDataSource. A ConnectionPoolDataSource is not a connection pool, but it does provide some extra information that helps Resin to pool the connection more effectively. ConnectionPoolDataSource A driver that implements ConnectionPoolDataSource is better than a JDBC 1.0 driver that implements Driver. JDBC 2.0 defined the interface XADataSource for connections that can participate in distributed transactions. A distributed transaction is needed when transactions involve multiple connections. For example, with two different database backends, if the guarantees that transactions apply need to apply to operations that occur on both databases within the same transaction, distributed transactions are needed. Distributed transactions are rarely needed, and few databases really support them. Some vendors will provide XADataSource drivers even though the database does not really support distributed transactions. Often, XADataSource drivers are slower than their ConnectionPoolDataSource counterparts. XADataSource XADataSource should only be used if distributed transactions are really needed, and can probably be safely ignored for most applications. Driver is the original JDBC interface, and is the least desirable kind of driver to use. Resin can still pool database connections using these drivers, but it will not be as efficient as the newer drivers. init-param is used to set properties of the database driver that are specific to the driver and are not generic enough for resin to provide a named configuration tag. For example, MySQL drivers accept the useUnicode parameter, if true the driver will use Unicode character encodings when handling strings. useUnicode <database> <jndi-name>jdbc/mysql</jndi-name> <driver> <type>com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource</type> <url>jdbc:mysql://localhost:3306/dbname</url> <user>username</user> <password>password</password> <init-param </driver> ... </database> Pooling configuration controls the behaviour of Resin's pooling of database connections. For most applications and databases the only needed change is to increase the max-connections value to meet high demand. Other pooling parameters have defaults that are based on our years of experience with many different databases in many different applications. Changes from the defaults should only be done in response to specific problems, and with a good understanding of how pooling works. Resin's database pool can test if the pooled database connection is still alive by configuring a ping query. This is typically only necessary if the pooling parameters are changed from their default values. If the pool is configured with a long max-idle-time the database connection may become stale if the database is restarted, or if the database is configured with a shorter connection timeout value than the configuration of the Resin pool. Normally when a database connection is returned to the pool it will wait there until the next request or the idle-time expires. If the database goes down in the meantime or closes the connection, the connection will become stale. The ping configuration can test the database connection. When pinging, Resin's DBPool will test a table specified with the ping-table parameter before returning the connection to the application. If the ping fails, the connection is assumed to be no good and a different connection from the pool is returned. For a ping-table of> The DataSource is a factory that is used to obtain a connection. The DataSource is obtained using the jndi-name specified when configuring the database resource. Ideally, the JNDI lookup of DataSource is done only once, the DataSource obtained from the lookup can be stored in a member variable or other appropriate place. The stored DataSource can then be used each time a connection is needed. If it is not stored, there will be an impact on performance from having to do the lookup each time you want to get a connection. import javax.sql.*; import javax.webbeans.*; public class .... { @Named("jdbc/test") DataSource _pool; ... } A connection is obtained from the DataSource. The connection is used as needed, and then released with a call to close() so that Resin knows it is available for a subsequent request. It is very important that the close() is always called, even if there as an exception. Without the close(), Resin's database pool can loose connections. If you fail to close() a connection, Resin does not know that it is available for reuse, and cannot allocate it for another request. Eventually, Resin may run out of connections.() { Connection conn = null; try { conn = _pool.getConnection(); Statement stmt = conn.createStatement(); ResultSet rs = stmt.executeQuery(" ... "); ... rs.close(); stmt.close(); } catch (SQLException e) { throw new ServletException(e); } finally { try { if (conn != null) conn.close(); } catch (SQLException e) { } } } }.xml.
http://caucho.com/resin/doc/config-database.xtp
crawl-002
en
refinedweb
- Articles - Documentation - Distributions - Forums - Sponsor Solutions. If you are wondering why you see AJAX but JavaScript is missing from the above description, the following quote from the ZK Web site reveals the philosophy of the project -- "Our belief is that the best way to use Ajax is not to know its existence." The idea is that all the client-side code is contained within the ZK framework, and all the code that you develop runs on the server side, so you don't have to worry about AJAX or interpreting XML at all. There are three binary downloads of ZK for standard server installs: Standard, Professional, and Enterprise editions. Each download includes all of the features of the lesser distribution and some additional ones. For example, the Professional Edition includes everything in the Standard one plus CAPTCHA, JFreeChart, and other features. There are also two distributions targeting mobile clients, one using Mobile Interactive Language, and a specific download targeting Google's Android and the Handset Interactive Language. For this article I used a 32-bit Fedora 9 machine and Enterprise Edition version 3.0.6 which is about 21MB as a tar.gz. To use ZK you need a servlet container such as Apache Tomcat on your Web server. There are two ways to install ZK on your server: either as part of your Web application or as a shared component available to all Web applications in your servlet environment. The former method allows you to move your site easily, because ZK is embedded in your Web app, while the second method saves administration if you are using ZK in many sites on the same server. Tomcat is available in the tomcat6 package for Fedora 9 and openSUSE 11, and the tomcat5.5 package for Debian-based distributions. The steps shown below install ZK as a shared component, allowing all Web applications to use it. # yum install tomcat6 tomcat6-webapp # service tomcat6 stop # cd /tmp # tar xzf /.../zk-bin-3.0.6.tar.gz # cd zk-bin-3.0.6/ # vi /etc/tomcat6/catalina.properties ... shared.loader=/usr/share/java/tomcat6/shared/lib/*.jar ... # mkdir -p /usr/share/java/tomcat6/shared/lib # cd ./dist/lib # find . -name "*.jar" -exec \ install -m 644 {} /usr/share/java/tomcat6/shared/lib/ \; # service tomcat6 start To test that your installation of Tomcat and ZK is working, download the zk-demo zip file and put the minimal Web application archive (WAR) file into your Tomcat webapps directory as shown below. The minimal war file does not include ZK, so the demo must be able to find it as a shared install on your Tomcat server. Once you execute the commands below, opening the URL should show you a demo of ZK loaded from your server. # cd /tmp # unzip /.../zk-demo-3.0.6.zip # cd ./zk-demo-3.0.6/ # install -m 644 zkdemo-min.war /var/lib/tomcat6/webapps # service tomcat6 restart The ZK Tutorial gives a mile-high overview of what it's like to use ZK with the nuts and bolts covered in the Developer's Guide. The Web interfaces are defined using an XML file that is similar to the Mozilla XML user interface file format (XUL). These ZK XML files are referred to as ZK User Interface Markup Language (ZUML) or ZUL files. A trap for those unfamiliar with developing applications with Tomcat is that you will need a WEB-INF/web.xml file under the directory of any ZUL files you create for testing. Copying the web.xml file from the Resort example will allow you to start loading raw ZUL files. The ZUL files can also contain some behavior. Things are of course much cleaner if you separate the Java code from the ZUL interface descriptions, but embedding code is still useful for experimentation. The embedded code has to be quoted so that it is legal XML, so double quotes in your Java code must become " for things to work correctly. If you have installed the Enterprise version of ZK you can also embed code written in JavaScript, Ruby, and Groovy directly into your ZUL files. The zscript element can be used to include larger blocks of code in ZUL files, with the added benefit that you do not have to XML escape double quote characters inside zscript elements. A simple ZUL file Web interface is shown below. It uses embedded Java to open a message box when a button is clicked. <window title="Hello" border="normal"> <button label="Say Hello" onClick="Messagebox.show("Hello World!")"/> </window> If you want to keep the embedded code in ZUL files small, the two main features you are likely to use are EL expressions, where you can embed a small snippet of code to access a property of a Java object or perform a simple comparison, and the forEach statement, to populate listboxes and other repetitive ZUL elements. For reacting to events that the user interface creates, you can define your own subclasses of objects and explicitly use those subclasses in your ZUL files. This allows you to change your Java code, and thus how your Web app reacts to events, without having to touch the ZUL interface definition files. To avoid subclassing buttons just to react to them being pressed, you can route their press events to their parent object using the forward attribute. ZK includes support for what is called "live data" models. Using live data, you implement the org.zkoss.zul.ListModel interface, and the data exposed through that interface can be viewed by different controllers on the Web page. In the following example, the Bookmark class is used as the tuple level, storing the URL, page title, and time the page was last visited. Accessor methods are not shown. ... public class Bookmark { private String m_title; private String m_earl; private Date m_lastVisit; public Bookmark() {} public Bookmark( String title, String earl, Date lvisit ) { m_title = title; m_earl = earl; m_lastVisit = lvisit; } ... To render Bookmark objects in a grid you need to provide a Renderer subclass that knows how you want to render these objects. The Renderer shown below renders each interesting piece of data into its own cell. ... public class BookmarkRenderer implements RowRenderer { public void render(Row row, java.lang.Object nastyDataRef) { Bookmark bm = (Bookmark)nastyDataRef; new Label( bm.getTitle() ).setParent(row); new Label( bm.getURL() ).setParent(row); new Label( bm.getLastVisitTime().toString() ).setParent(row); } }; ZK provides a mutable implementation of org.zkoss.zul.ListModel in the ListModelList class. The controller object shown below creates a few example bookmarks in the data model, which could easily be fetched from JDBC or another source. ... public class BookmarkController extends Window { ListModelList m_bookmarks = new org.zkoss.zul.ListModelList( new ArrayList() ); Bookmark m_selected = new Bookmark(); public BookmarkController() { m_bookmarks.add( new Bookmark( "Slashdot: News for nerds, stuff that matters", "", new Date() )); m_bookmarks.add( new Bookmark( "Linux.com :: Feature", "", new Date() )); m_bookmarks.add( new Bookmark( "The libferris VFS!", "", new Date() )); } ... Finally, the following ZUL file creates the Web interface shown in the screenshot below. Notice the use attribute for the window element, which tells ZK to use our custom Java class for this widget. For the grid, the model is specified as the Bookmarks member of our window class; more specifically the model will be m_bookmarks from the BookmarkController class. The renderer has to be specified so that ZK knows how to present our custom Java object in the grid. <?init class="org.zkoss.zkplus.databind.AnnotateDataBinderInit" ?> <window id="win" use="com.linux.zk.example.bookmark.ui.BookmarkController" title="Bookmarks" width="800px" border="normal"> <hbox> <grid model="@{win.Bookmarks}" rowRenderer="com.linux.zk.example.bookmark.ui.BookmarkRenderer" > <columns> <column label="Title" width="70px"/> <column label="URL" width="250px"/> <column label="Last Visited" width="100px" /> </columns> </grid> <timer id="timer" delay="3000" repeats="false" onTimer="win.dataArrived()"/> </hbox> </window> Although ZK is a finalist in the SourceForge.net Community Choice Awards, it is not prepackaged for mainstream distributions. While ZK is not extremely difficult to install, it is always nice to be able to automatically track updates to Web software through your Linux distribution's package manager. Although interaction between the Web browser and the Web server is done using asynchronous calls, things like sorting a data grid displaying a custom Data model which require interaction with the server to complete may make the network latency noticeable to users when they click to sort a grid. Whether this becomes an issue depends upon the latency of the network link between your server and the client. If you are using Java and want to provide a Web interface, using ZK can free you from having to worry about browser-dependent JavaScript code or creating sophisticated widgets to execute in a Web browser. ZK's event-based programming model is also a breath of fresh air compared to having to constantly think about either page loads or issuing and responding to AJAX calls from your own custom JavaScript code. Ben Martin has been working on filesystems for more than 10 years. He completed his Ph.D. and now offers consulting services focused on libferris, filesystems, and search solutions. Note: Comments are owned by the poster. We are not responsible for their content. Designing rich AJAX Web interfaces with ZKPosted by: Anonymous [ip: 220.133.44.37] on July 22, 2008 12:34 PM It's a good introduction. Rob # Designing rich AJAX Web interfaces with ZKPosted by: Anonymous [ip: 88.73.206.91] on July 22, 2008 05:23 PM Wow, what a turn-off. # Designing rich AJAX Web interfaces with ZKPosted by: Anonymous [ip: 77.72.233.48] on July 22, 2008 08:12 PM Do you mind telling us what the Enterprise Edition version costs - it is conspiciously absent from the price list at # Designing rich AJAX Web interfaces with ZKPosted by: Anonymous [ip: 60.198.13.54] on July 23, 2008 05:09 PM This is really a brilliant introduction of ZK. Bravo! This is Robbie from the ZK team. We really appreciate your effort, and we would like to list you on our hero list. Please email me your profile to robbiecheng at zkoss dot org. /robbie # Designing rich AJAX Web interfaces with ZKPosted by: Anonymous [ip: 10.10.0.176] on July 25, 2008 06:47 AM # Re: Designing rich AJAX Web interfaces with ZKPosted by: Anonymous [ip: 24.218.94.46] on August 07, 2008 04:44 PM # Re: Designing rich AJAX Web interfaces with ZKPosted by: Anonymous [ip: 220.133.44.37] on August 08, 2008 07:41 AM # This story has been archived. Comments can no longer be posted.
http://www.linux.com/archive/feature/141601
crawl-002
en
refinedweb
Overview This chapter focuses on how to use PB to pass complex types (specifically class instances) to and from a remote process. The first section is on simply copying the contents of an object to a remote process ( pb.Copyable). The second covers how to copy those contents once, then update them later when they change ( Cacheable). Motivation From the previous chapter, you've seen how to pass basic types to a remote process, by using them in the arguments or return values of a callRemote function. However, if you've experimented with it, you may have discovered problems when trying to pass anything more complicated than a primitive int/list/dict/string type, or another pb.Referenceable object. At some point you want to pass entire objects between processes, instead of having to reduce them down to dictionaries on one end and then re-instantiating them on the other. Passing Objects The most obvious and straightforward way to send an object to a remote process is with something like the following code. It also happens that this code doesn't work, as will be explained below. class LilyPond: def __init__(self, frogs): self.frogs = frogs pond = LilyPond(12) ref.callRemote("sendPond", pond) If you try to run this, you might hope that a suitable remote end which implements the remote_sendPond method would see that method get invoked with an instance from the LilyPond class. But instead, you'll encounter the dreaded InsecureJelly exception. This is Twisted's way of telling you that you've violated a security restriction, and that the receiving end refuses to accept your object. Security Options What's the big deal? What's wrong with just copying a class into another process' namespace? Reversing the question might make it easier to see the issue: what is the problem with accepting a stranger's request to create an arbitrary object in your local namespace? The real question is how much power you are granting them: what actions can they convince you to take on the basis of the bytes they are sending you over that remote connection. Objects generally represent more power than basic types like strings and dictionaries because they also contain (or reference) code, which can modify other data structures when executed. Once previously-trusted data is subverted, the rest of the program is compromised. The built-in Python batteries included classes are relatively tame, but you still wouldn't want to let a foreign program use them to create arbitrary objects in your namespace or on your computer. Imagine a says all User objects in the system are referenced when authorizing a login session. (In this system, User.__init__ would probably add the object to a global list of known users). The simple act of creating an object would give access to somebody. If you could be tricked into creating a bad object, an unauthorized user would get access. So object creation needs to be part of a system's security design. The dotted line between trusted inside and untrusted outside needs to describe what may be done in response to outside events. One of those events is the receipt of an object through a PB remote procedure call, which is a request to create an object in your inside namespace. The question is what to do in response to it. For this reason, you must explicitly specific what remote classes will be accepted, and how their local representatives are to be created. What class to use? Another basic question to answer before we can do anything useful with an incoming serialized object is: what class should we create? The simplistic answer is to create the same kind that was serialized on the sender's end of the wire, but this is not as easy or as straightforward as you might think. Remember that the request is coming from a different program, using a potentially different set of class libraries. In fact, since PB has also been implemented in Java, Emacs-Lisp, and other languages, there's no guarantee that the sender is even running Python! All we know on the receiving end is a list of two things which describe the instance they are trying to send us: the name of the class, and a representation of the contents of the object. PB lets you specify the mapping from remote class names to local classes with the setUnjellyableForClass function InsecureJelly exception. In general you expect both ends to share the same codebase: either you control the program that is running on both ends of the wire, or both programs share some kind of common language that is implemented in code which exists on both ends. You wouldn't expect them to send you an object of of a User object might differ from the recipient's, either through namespace collisions between unrelated packages, version skew between nodes that haven't been updated at the same rate, or a malicious intruder trying to cause your code to fail in some interesting or potentially vulnerable way. pb.Copyable Ok, enough of this theory. How do you send a fully-fledged object from one side to the other? #! /usr/bin/python from twisted.spread import pb, jelly from twisted.python import log from twisted.internet import reactor class LilyPond: def setStuff(self, color, numFrogs): self.color = color self.numFrogs = numFrogs def countFrogs(self): print "%d frogs" % self.numFrogs class CopyPond(LilyPond, pb.Copyable): pass class Sender: def __init__(self, pond): self.pond = pond def got_obj(self, remote): self.remote = remote d = remote(): from copy_sender import CopyPond # so it's not __main__.CopyPond pond = CopyPond() pond.setStuff("green", 7) pond.countFrogs() # class name: print ".".join([pond.__class__.__module__, pond.__class__.__name__]) sender = Sender(pond) factory = pb.PBClientFactory() reactor.connectTCP("localhost", 8800, factory) deferred = factory.getRootObject() deferred.addCallback(sender.got_obj) reactor.run() if __name__ == '__main__': main() """PB copy receiver example. This is a Twisted Application Configuration (tac) file. Run with e.g. twistd -ny copy_receiver.tac See the twistd(1) man page or for details. """ import sys if __name__ == '__main__': print __doc__ sys.exit(1) from twisted.application import service, internet from twisted.internet import reactor from twisted.spread import pb from copy_sender import LilyPond, CopyPond from twisted.python import log #log.startLogging(sys.stdout) class ReceiverPond(pb.RemoteCopy, LilyPond): pass pb.setUnjellyableForClass(CopyPond, ReceiverPond) class Receiver(pb.Root): def remote_takePond(self, pond): print " got pond:", pond pond.countFrogs() return "safe and sound" # positive acknowledgement def remote_shutdown(self): reactor.stop() application = service.Application("copy_receiver") internet.TCPServer(8800, pb.PBServerFactory(Receiver())).setServiceParent( service.IServiceCollection(application)) The sending side has a class called LilyPond. To make this eligble for transport through callRemote (either as an argument, a return value, or something referenced by either of those [like a dictionary value]), it must inherit from one of the four name and state are sent over the wire to the receiver. The receiving end defines a local class named ReceiverPond to represent incoming LilyPond instances. This class derives from the sender's LilyPond class (with a fully-qualified name of copy_sender.LilyPond), which specifies how we expect it to behave. We trust that this is the same LilyPond class as the sender used. (At the very least, we hope ours will be able to accept a state created by theirs). It also inherits from pb.RemoteCopy, which is a requirement for all classes that act in this local-representative role (those which are given to the second argument of setUnjellyableForClass). RemoteCopy provides the methods that tell the Jelly layer how to create the local object from the incoming serialized state. Then setUnjellyableForClass is used to register the two classes. This has two effects: instances of the remote class (the first argument) will be allowed in through the security layer, and instances of the local class (the second argument) will be used to contain the state that is transmitted when the sender serializes the remote object. persisted (across time) differently than they are transmitted (across [memory]space). When this is run, it produces the following output: [-] twisted.spread.pb.PBServerFactory starting on 8800 [-] Starting factory <twisted.spread.pb.PBServerFactory instance at 0x406159cc> [Broker,0,127.0.0.1] got pond: <__builtin__.ReceiverPond instance at 0x406ec5ec> [Broker,0,127.0.0.1] 7 frogs % ./copy_sender.py 7 frogs copy_sender.CopyPond pond arrived safe and sound Main loop terminated. % Controlling the Copied State By overriding getStateToCopy and markers before sending, then upon receipt replace those markers with references to a receiver-side proxy that could perform the same operations against a local cache of data. Another good use for getStateToCopy is to implement local-only attributes: data that is only accessible by the local process, not to any remote users. For example, a attribute could be removed from the object state before sending to a remote system. Combined with the fact that Copyable objects return unchanged from a round trip, this could be used to build a challenge-response system (in fact PB does this with pb.Referenceable objects to implement authorization as described here). Whatever getStateToCopy returns from the sending object will be serialized and sent over the wire; setCopyableState gets whatever comes over the wire and is responsible for setting up the state of the object it lives in. #! /usr/bin/python from twisted.spread import pb class FrogPond: def __init__(self, numFrogs, numToads): self.numFrogs = numFrogs self.numToads = numToads def count(self): return self.numFrogs + self.numToads class SenderPond(FrogPond, pb.Copyable): def getStateToCopy(self): d = self.__dict__.copy() d['frogsAndToads'] = d['numFrogs'] + d['numToads'] del d['numFrogs'] del d['numToads'] return d class ReceiverPond(pb.RemoteCopy): def setCopyableState(self, state): self.__dict__ = state def count(self): return self.frogsAndToads pb.setUnjellyableForClass(SenderPond, ReceiverPond) #! /usr/bin/python from twisted.spread import pb, jelly from twisted.python import log from twisted.internet import reactor from copy2_classes import SenderPond class Sender: def __init__(self, pond): self.pond = pond def got_obj(self, obj): d = obj(): pond = SenderPond(3, 4) print "count %d" % pond.count() sender = Sender(pond) factory = pb.PBClientFactory() reactor.connectTCP("localhost", 8800, factory) deferred = factory.getRootObject() deferred.addCallback(sender.got_obj) reactor.run() if __name__ == '__main__': main() #! /usr/bin/python from twisted.application import service, internet from twisted.internet import reactor from twisted.spread import pb import copy2_classes # needed to get ReceiverPond registered with Jelly class Receiver(pb.Root): def remote_takePond(self, pond): print " got pond:", pond print " count %d" % pond.count() return "safe and sound" # positive acknowledgement def remote_shutdown(self): reactor.stop() application = service.Application("copy_receiver") internet.TCPServer(8800, pb.PBServerFactory(Receiver())).setServiceParent( service.IServiceCollection(application)) In this example, the classes are defined in a separate source file, which also sets up the binding between them. The SenderPond and ReceiverPond are unrelated save for this binding: they happen to implement the same methods, but use different internal instance variables to accomplish them. The recipient of the object doesn't even have to import the class definition into their namespace. It is sufficient that they import the class definition (and thus execute the setUnjellyableForClass statement). The Jelly layer remembers the class definition until a matching object is received. The sender of the object needs the definition, of course, to create the object in the first place. When run, the copy2 example emits the following: % twistd -n -y copy2_receiver.py [-] twisted.spread.pb.PBServerFactory starting on 8800 [-] Starting factory <twisted.spread.pb.PBServerFactory instance at 0x40604b4c> [Broker,0,127.0.0.1] got pond: <copy2_classes.ReceiverPond instance at 0x406eb2ac> [Broker,0,127.0.0.1] count 7 % ./copy2_sender.py count 7 pond arrived safe and sound Main loop terminated. % Things To Watch Out For - The first argument to setUnjellyableForClassmust refer to the class as known by the sender. The sender has no way of knowing about how your local importstatements are set up, and Python's flexible namespace semantics allow you to access the same class through a variety of different names. You must match whatever the sender does. Having both ends import the class from a separate file, using a canonical module name (no sibiling imports), is a good way to get this right, especially when both the sending and the receiving classes are defined together, with the setUnjellyableForClassimmediately following them. (XXX: this works, but does this really get the right names into the table? Or does it only work because both are defined in the same (wrong) place?) - The class that is sent must inherit from pb.Copyable. The class that is registered to receive it must inherit from pb.RemoteCopy 2. - The same class can be used to send and receive. Just have it inherit from both pb.Copyableand pb.RemoteCopy. This will also make it possible to send the same class symmetrically back and forth over the wire. But don't get confused about when it is coming (and using setCopyableState) versus when it is going (using getStateToCopy). InsecureJellyexceptions are raised by the receiving end. They will be delivered asynchronously to an errbackhandler. If you do not add one to the Deferredreturned by callRemote, then you will never receive notification of the problem. - The class that is derived from pb.RemoteCopywill be created using a constructor __init__method that takes no arguments. All setup must be performed in the setCopyableStatemethod. As the docstring on RemoteCopysays, don't implement a constructor that requires arguments in a subclass of RemoteCopy. XXX: check this, the code around jelly._Unjellier.unjelly:489 tries to avoid calling __init__just in case the constructor requires args. More Information pb.Copyableis mostly implemented in twisted.spread.flavors, and the docstrings there are the best source of additional information. Copyableis also used in twisted.web.distribto deliver HTTP requests to other programs for rendering, allowing subtrees of URL space to be delegated to multiple programs (on multiple machines). twisted.manhole.exploreralso uses Copyableto distribute debugging information from the program under test to the debugging tool. pb.Cacheable Sometimes the object you want to send to the remote process is big and slow. big means it takes a lot of data (storage, network bandwidth, processing) to represent its state. slow means that state doesn't change very frequently. It may be more efficient to send the full state only once, the first time it is needed, then afterwards only send the differences or changes in state whenever it is modified. The pb.Cacheable class provides a framework to implement this. pb.Cacheable is derived from pb.Copyable, so it is based upon the idea of an object's state being captured on the sending side, and then turned into a new object on the receiving side. This is extended to have an object publishing on the sending side (derived from pb.Cacheable), matched with one observing on the receiving side (derived from pb.RemoteCache). To effectively use pb.Cacheable, you need to isolate changes to your object into accessor functions (specifically setter functions). Your object needs to get control every single time some attribute is changed You derive your sender-side class from pb.Cacheable, and you add two methods: getStateToCacheAndObserveFor and stoppedObserving. The first is called when a remote caching reference is first created, and retrieves the data with which the cache is first filled. It also provides an object called the observer. The first time a reference to the pb.Cacheable object is sent to any particular recipient, a sender-side Observer will be created for it, and the getStateToCacheAndObserveFor method will be called to get the current state and register the Observer. The state which that returns is sent to the remote end and turned into a local representation using setCopyableState just like pb.RemoteCopy, described above (in fact it inherits from that class). After that, your setter functions on the sender side should call callRemote on the Observer, which causes observe_* methods to run on the receiver, which are then supposed to update the receiver-local (cached) state. When the receiver stops following the cached object and the last reference goes away, the pb.RemoteCache object can be freed. Just before it dies, it tells the sender side it no longer cares about the original object. When that reference count goes to zero, the Observer goes away and the pb.Cacheable object can stop announcing every change that takes place. The stoppedObserving method is used to tell the pb.Cacheable that the Observer has gone away. With the pb.Cacheable and pb.RemoteCache classes in place, bound together by a call slave object in sync with the sender-side master object. Example Here is a complete example, in which the MasterDuckPond is controlled by the sending side, and the SlaveDuckPond is a cache that tracks changes to the master: #! /usr/bin/python from twisted.spread import pb class MasterDuckPond(pb.Cacheable): def __init__(self, ducks): self.observers = [] self.ducks = ducks def count(self): print "I have [%d] ducks" % len(self.ducks) def addDuck(self, duck): self.ducks.append(duck) for o in self.observers: o.callRemote('addDuck', duck) def removeDuck(self, duck): self.ducks.remove(duck) for o in self.observers: o.callRemote('removeDuck', duck) def getStateToCacheAndObserveFor(self, perspective, observer): self.observers.append(observer) # you should ignore pb.Cacheable-specific state, like self.observers return self.ducks # in this case, just a list of ducks def stoppedObserving(self, perspective, observer): self.observers.remove(observer) class SlaveDuckPond(pb.RemoteCache): # This is a cache of a remote MasterDuckPond def count(self): return len(self.cacheducks) def getDucks(self): return self.cacheducks def setCopyableState(self, state): print " cache - sitting, er, setting ducks" self.cacheducks = state def observe_addDuck(self, newDuck): print " cache - addDuck" self.cacheducks.append(newDuck) def observe_removeDuck(self, deadDuck): print " cache - removeDuck" self.cacheducks.remove(deadDuck) pb.setUnjellyableForClass(MasterDuckPond, SlaveDuckPond) #! /usr/bin/python from twisted.spread import pb, jelly from twisted.python import log from twisted.internet import reactor from cache_classes import MasterDuckPond class Sender: def __init__(self, pond): self.pond = pond def phase1(self, remote): self.remote = remote d = remote.callRemote("takePond", self.pond) d.addCallback(self.phase2).addErrback(log.err) def phase2(self, response): self.pond.addDuck("ugly duckling") self.pond.count() reactor.callLater(1, self.phase3) def phase3(self): d = self.remote.callRemote("checkDucks") d.addCallback(self.phase4).addErrback(log.err) def phase4(self, dummy): self.pond.removeDuck("one duck") self.pond.count() self.remote.callRemote("checkDucks") d = self.remote.callRemote("ignorePond") d.addCallback(self.phase5) def phase5(self, dummy): d = self.remote.callRemote("shutdown") d.addCallback(self.phase6) def phase6(self, dummy): reactor.stop() def main(): master = MasterDuckPond(["one duck", "two duck"]) master.count() sender = Sender(master) factory = pb.PBClientFactory() reactor.connectTCP("localhost", 8800, factory) deferred = factory.getRootObject() deferred.addCallback(sender.phase1) reactor.run() if __name__ == '__main__': main() #! /usr/bin/python from twisted.application import service, internet from twisted.internet import reactor from twisted.spread import pb import cache_classes class Receiver(pb.Root): def remote_takePond(self, pond): self.pond = pond print "got pond:", pond # a DuckPondCache self.remote_checkDucks() def remote_checkDucks(self): print "[%d] ducks: " % self.pond.count(), self.pond.getDucks() def remote_ignorePond(self): # stop watching the pond print "dropping pond" # gc causes __del__ causes 'decache' msg causes stoppedObserving self.pond = None def remote_shutdown(self): reactor.stop() application = service.Application("copy_receiver") internet.TCPServer(8800, pb.PBServerFactory(Receiver())).setServiceParent( service.IServiceCollection(application)) When run, this example emits the following: % twistd -n -y cache_receiver.py [-] twisted.spread.pb.PBServerFactory starting on 8800 [-] Starting factory <twisted.spread.pb.PBServerFactory instance at 0x40615acc> [Broker,0,127.0.0.1] cache - sitting, er, setting ducks [Broker,0,127.0.0.1] got pond: <cache_classes.SlaveDuckPond instance at 0x406eb5ec> [Broker,0,127.0.0.1] [2] ducks: ['one duck', 'two duck'] [Broker,0,127.0.0.1] cache - addDuck [Broker,0,127.0.0.1] [3] ducks: ['one duck', 'two duck', 'ugly duckling'] [Broker,0,127.0.0.1] cache - removeDuck [Broker,0,127.0.0.1] [2] ducks: ['two duck', 'ugly duckling'] [Broker,0,127.0.0.1] dropping pond % % ./cache_sender.py I have [2] ducks I have [3] ducks I have [2] ducks Main loop terminated. % Points to notice: - There is one Observerfor each remote program that holds an active reference. Multiple references inside the same program don't matter: the serialization layer notices the duplicates and does the appropriate reference counting 5. - Multiple Observers need to be kept in a list, and all of them need to be updated when something changes. By sending the initial state at the same time as you add the observer to the list, in a single atomic action that cannot be interrupted by a state change, you insure that you can send the same status update to all the observers. - The observer.callRemotecalls can still fail. If the remote side has disconnected very recently and stoppedObservinghas not yet been called, you may get a DeadReferenceError. It is a good idea to add an errback to those callRemotes to throw away such an error. This is a useful idiom: observer.callRemote('foo', arg).addErrback(lambda f: None)(XXX: verify that this is actually a concern) getStateToCacheAndObserverFormust return some object that represents the current state of the object. This may simply be the object's __dict__attribute. It is a good idea to remove the pb.Cacheable-specific members of it before sending it to the remote end. The list of Observers, in particular, should be left out, to avoid dizzying recursive Cacheable references. The mind boggles as to the potential consequences of leaving in such an item. - A perspectiveargument is available to getStateToCacheAndObserveFor, as well as stoppedObserving. I think the purpose of this is to allow viewer-specific changes to the way the cache is updated. If all remote viewers are supposed to see the same data, it can be ignored. XXX: understand, then explain use of varying cached state depending upon perspective. More Information - The best source for information comes from the docstrings in twisted.spread.flavors, where pb.Cacheableis implemented. twisted.manhole.exploreruses Cacheable, and does some fairly interesting things with it. (XXX: I've heard explorer is currently broken, it might not be a good example to recommend) - The spread.publishmodule also uses Cacheable, and might be a source of further information. Footnotes Note that, in this context, unjellyis a verb with the opposite meaning of jelly. The verb to jellymeans to serialize an object or data structure into a sequence of bytes (or other primitive transmittable/storable representation), while to unjellymeans to unserialize the bytestream into a live object in the receiver's memory space. Unjellyableis a noun, (not an adjective), referring to the the class that serves as a destination or recipient of the unjellying process. A is unjellyable into Bmeans that a serialized representation A (of some remote object) can be unserialized into a local object of type B. It is these objects Bthat are the Unjellyablesecond argument of the setUnjellyableForClassfunction. In particular, unjellyabledoes not mean cannot be jellied. Unpersistablemeans not persistable, but unjelly, unserialize, and unpicklemean to reverse the operations of jellying, serializing, and pickling. pb.RemoteCopyis actually defined as flavors.RemoteCopy, but pb.RemoteCopyis the preferred way to access it - of course you could be clever and add a hook to __setattr__, along with magical change-announcing subclasses of the usual builtin types, to detect changes that result from normal =set operations. The semi-magical property attributesthat were introduced in Python-2.2 could be useful too. The result might be hard to maintain or extend, though. - this is actually a RemoteCacheObserver, but it isn't very useful to subclass or modify, so simply treat it as a little demon that sits in your pb.Cacheableclass and helps you distribute change notifications. The only useful thing to do with it is to run its callRemotemethod, which acts just like a normal pb.Referenceable's method of the same name. - this applies to multiple references through the same Broker. If you've managed to make multiple TCP connections to the same program, you deserve whatever you get.
http://twistedmatrix.com/projects/core/documentation/howto/pb-copyable.html
crawl-002
en
refinedweb
Re: [Fink-devel] xml-simple-pm here is what I get after installing storable-pm/1_XMLin...ok [Fink-devel] GNU getopt I ported GNU getopt or gengetopt and I used the getopt.c, getopt.h and getopt1.c to port lftp and fix a few other ports that use GNU getopt, GNU getopt provide getopt_long which libSystem.dylib doesn't have. I propose making gengetopt essential so that we can add a UpdateGNUgetopt:. Like the [Fink-devel] QT? Can i fix this? configure:8472: checking if Qt compiles without flags configure:8532: c++ -o conftest -g -O2 -I -I/sw/include -L/usr/X11R6/lib conftest.C -lqt-mt -lXext -lX11 15 conftest.C:2: qglobal.h: No such file or directory conftest.C:3: qapplication.h: No such file or directory [Fink-devel] pthread_kill anyone know a work around for the undefined symbol _pthread_kill ?? I have -lpthread set. I think I remember Finlay telling my that darwin couldn't do _pthread_kill IIRC. So is there a work aroung to this? ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett Re: [Fink-devel] pthread_kill hmm I might to that...I guess i could just add it to just about any .h in the pkg, or should I make a new file and patch what needs it? [EMAIL PROTECTED] writes: pthread_kill isn't implemented in Darwin, although support went into CVS HEAD a few days back!!! You could do what MySQL does, which [Fink-devel] mplayer Where would you start?? :) cc -no-cpp-precomp -Iloader -Ilibvo -I/sw/include -I/sw/include/gtk-1.2 -I/sw/include/glib-1.2 -I/sw/lib/glib/include -I/usr/X11R6/include -o mplayer mplayer.o mp_msg.o open.o parse_es.o ac3-iec958.o find_sub.o aviprint.o dec_audio.o dec_video.o aviwrite.o Re: [Fink-devel] mplayer I believe aalib [EMAIL PROTECTED] writes: Where does that libaa.dylib come from, I wonder? I.e. which package? ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett CAISnet Inc. 2nd Floor, 11635 - 160 Street Edmonton, Re: [Fink-devel] package validator I't like to request that %n is being used in Souce and SouceDirectory fields. Like ti checks to %v [EMAIL PROTECTED] writes: No worries. I thought about this, too, and plan to add it. Another thing: it should verify the Maintainer field is valid: Full Name [EMAIL PROTECTED] [Fink-devel] I think i broke something What does this mean and how can i fix it? I changed to order in the configure script to try .dylib before .so and this happened. make cd . aclocal cd . automake --gnu --include-deps Makefile cd . autoconf ./aclocal.m4:448: error: m4_defn: undefined macro: _m4_divert_diversion aclang.m4:173: Re: [Fink-devel] Dpkg deleted my /etc directory did you do a dpkg --purge of a pkg that had soemthing in /etc [EMAIL PROTECTED] writes: I know this shouldn't have happened, but dpkg deleted my /etc directory, didn't even warn me that it was full. It told me about /sbin, but not /etc ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., [Fink-devel] gnome panel fish applet ** WARNING **: CORBA plugin load failed: not a Mach-O MH_BUNDLE file type ** CRITICAL **: file goad.c: line 840 (goad_server_activate_shlib): assertion `gmod' failed. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett CAISnet Inc. [Fink-devel] QT3 When i try and run qmake this is what I get. Should we be setting the value for this in the Qt3 install? Or is it a temp value I should be setting in the CompileScript? QMAKESPEC has not been set, so configuration cannot be deduced. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Re: [Fink-devel] ideas on pseudo packages okay so we need to determ the system versions, so make a configure file that will fail if no = a certain version then of course the sucky part we need a Darwin1.3, 1.4 and 5.2 etc etc and the user has to install the pkg...that is off the top of my head i know sucks but it's a starting point so Re: [Fink-devel] ideas on pseudo packages unless we add this in fink it's self with a fink base-upgrade which will run the checks and find the closest pkgs. That could be done, but then we have the dpkg problem. [EMAIL PROTECTED] writes: package darwin52 checks in post-install (by running script) if it is darwin 5.2 which is on the Re: [Fink-devel] db/db3/db4 and shlibs -- request for action I heard the shlibs is your little project, i'd like to know why we decided on -bin, I missed lots od emails when tis was discussed and sorry if i'm rehashing out topics but to me pkg (current -bin + the base dylib and .a and .la) pkg-shlibs (current -shlibs, versioned.dylibs) pkg-dev (includes Re: [Fink-devel] db/db3/db4 and shlibs -- request for action sure I totally understand the -shlibs and agree it's the -bin i have a problem with, I think -bin should be the main pkg and that if need be a -dev package with the headers and stuff (which will be optional install of course), that would help clean up the huge include dir, since we are [Fink-devel] clanlib or c++? Why won't this work, I have to ^C from it or in about one hour it kills Finder and crashes the system...it just sits there. justin@localhost [~/tmp-clan]$ cat conftest.C #include unistd.h justin@localhost [~/tmp-clan]$ justin@localhost [~/tmp-clan]$ cc -E -v -I/sw/include Re: [Fink-devel] Re: possible readline/ncurses problem I'm running the latest of each and my bash is fine. [EMAIL PROTECTED] writes: I just installed bash to verify and it works for me, so do you update readline after having build bash ? Ah and which version-revision of readline are you using ? Re: [Fink-devel] Re: possible readline/ncurses problem is there a bug open for this? [EMAIL PROTECTED] writes: I suppose Dave has libxpg4 installed. It has this effect. libxpg4 sets the environment variable DYLD_FORCE_FLAT_NAMESPACE, which will break any binary that is compiled with twolevel_namespace (the default) and has multiply defined symbols. Re: [Fink-devel] fink install audiofile-shlib has odd results it's not you it's probably me, and now that I'm running the new splitoff code this is gonna be hard to fix...Anyidea as to when the splittoff code will make a release or cvs? Anyhow I'll look into this. [EMAIL PROTECTED] writes: So, first I notice that my audiofile needs updating: $ fink Re: [Fink-devel] fink install audiofile-shlib has odd results found it, turns out it was you :P, it's shlibs not shlib fink install audiofile-shlibs, the pkg name is too long and gets cut off. [EMAIL PROTECTED] writes: $ fink install audiofile-shlib ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Re: [Fink-devel] glib 2.0 me either and i just made a symlink from my cvsroot splitoff dir into fink so i can install all the splitoff pkgs and there not there. [EMAIL PROTECTED] writes: I put their packages into shared-libraries/splitoff in CVS. Hm, don't see them there. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Re: [Fink-devel] glib 2.0 okay just testing it right now. [EMAIL PROTECTED] writes: Oh sorry, I forgot to commit them. Now you can. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Phone: (780)-408-3094 Fax: (780)-454-3200 E-Mail: [EMAIL Re: [Fink-devel] glib 2.0 also what is in the -common and are we gonna use -common and -base, this a general question for splitoff. [EMAIL PROTECTED] writes: Nice. Now one more question: why are the packages named like glib2-0 and not just glib2 ? ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Re: [Fink-devel] glib 2.0 OH we've been using -conf. We need a standard I think. I like -common and -base but those are debians anyhow that is just babling we need a standard. [EMAIL PROTECTED] writes: -common package is a common files for -shlibs packages. fooN-shlibs packages should be installable at the same time, Re: [Fink-devel] glib 2.0 still waiting for Max to comment. All i'm saying is that we all need to use the same convension. Or should I think. [EMAIL PROTECTED] writes: -conf is a good naming, if it contains only config files. but shlibs packages may share these files: - config files - locale files - modules - program Re: [Fink-devel] question about splitoffs Okay once again, I have the arts pkg done... [EMAIL PROTECTED] writes: I'm working on the arts package for the kde3 stuff, and I have a small question about whether I'm doing this right for the splitoff. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Re: [Fink-devel] BuildDependsOnly hmmm I can't see why not but instead of adding more to the build time run add it to the fink check command, which I hope all fauthors are using right? :P [EMAIL PROTECTED] writes: I have another small proposal to make related to the long-term shared libraries project: I suggest that we add a Re: [Fink-devel] BuildDependsOnly the sub heredoc is in the works by Max ATM. for now i think it have to be one long line as far as i know. [EMAIL PROTECTED] writes: Another thing that occurred to me while packaging is, is there a way to do multilines inside a splitoff? While making the kdelibs package, I have a huge line Re: [Fink-devel] BuildDependsOnly okay point taken and I'm game if approved could we add documentation for it on the website. my docs are far behind now with all the new changes :P I'm a paper guy still need to print em :P And BTW it was fink validate that i was referring to with fink check. [EMAIL PROTECTED] writes: Re: [Fink-devel] Planning this mail - comment please looks good to me, thought qt and bind9 should be added to the list IMHO. But I think it's a good email to send like once week :P [EMAIL PROTECTED] writes: I plan to send out the following mail to fink-beginners, fink-user, and fink-announce. Please tell me what you think of it, if should Re: [Fink-devel] shlibs I fully agree with this, it hurts on one to have them but helps us developpers in the mean time. [EMAIL PROTECTED] writes: Someday, later, we will want to introduce Shlibs and start to use it. If we are sure that this will be the name of the field, it would be nice to have fink validate not Re: [Fink-devel] qt-3.0.1-2 build failure (fwd) and it will be released tomorrow since I just got 3.0.2 done :P [EMAIL PROTECTED] writes: Yeah, it's actually on but is not world-readable yet. They're taunting us. =) ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Re: [Fink-devel] qt-3.0.1-2 build failure (fwd) [EMAIL PROTECTED] writes: libqt-mt.dylib.3.0.1 is not prebound this tells me it's an old revision...update from cvs, I beleive Jeff has added my changes... it'll make libqt-mt.3.0.1.dylib now and the links are made properly. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Re: [Fink-devel] ANN: Fink Package Manager 0.9.9 released I think we are gonig to adding a fink.conf switch for till in the near future [EMAIL PROTECTED] writes: Hmm, i like the Fink list width fix, but shouldn't it default to auto? Having to add -w=auto to every list command is kinda silly. -B ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Re: [Fink-devel] fink install failing.... yes I had just got it...I had to dpkg --purge the pkg remove the info and patch files run fink selfudpate and then rebuild the pkg...I don't know what is causing it though. [EMAIL PROTECTED] writes: Pick one: [1] Failed: Internal error: node for automake already exists Re: [Fink-devel] porting minicom two things, ./configure --help and see if you can force a dir I'd use /sw/var/lock or read the config.log in the build dir to see what it's looking for. [EMAIL PROTECTED] writes: I am interested in making a fink package for minicom, so we can use serial consoles, but ./configure is failing Re: [Fink-devel] porting minicom find where #include dirent.h is and put #include sys/types.h right before it. [EMAIL PROTECTED] writes: where are these types defined? i cant find them anywhere. i used ./configure --enable-lock-dir=/sw/var/lock --enable-dfl- port=/dev/tty.modem --includedir=/sw/include/ cc -DHAVE_CONFIG_H -I. Re: [Fink-devel] porting minicom...almost done....i hope _getopt_long is defined by GNUgetopt install my gengetopt package and hopefully it will use the getopt.h in the /sw/include dir if not it's a long change. if doesn't work here are your options #1 if getopt.c, getopt1.c and getopt.h are present in the pkg which i doubt if your getting this error, Re: [Fink-devel] porting minicom...almost done....i hope check to see if this has #include getopt.h or #include getopt.h if it uses then installing gengetopt will fix this if not..then i'd have to see more. [EMAIL PROTECTED] writes: cc -DHAVE_CONFIG_H -I. -I. -I.. -g -O2 -I../intl -c getopt_long.c ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Re: [Fink-devel] qt-3.0.1-2 build failure (fwd) 3.0.1-2 was my first version with a patch to fix a dylib creation error. You'll need to force remove it and then install the new one...the next upgrade should hopefully work seamless. Sorry about this. [EMAIL PROTECTED] writes: I am getting the exact same error now, and this for fink install Re: [Fink-devel] porting minicom...almost done....i hope you'll prolly need to keep ncurses, but you'll have to add getopt_long, either by adding it the way i mentioned before or by patching the getopt file currently in it...I'll look at righ tnow and send you it...send me the info and patch you have righ tnow. [EMAIL PROTECTED] writes: -BEGIN PGP Re: [Fink-devel] diff/patch formats I use diff -ruN but it's up to the user i think. [EMAIL PROTECTED] writes: Is there a 'proper' way to make a diff for fink? I mean the options used: -cr, -ur, whatever. I was wondering if it mattered or it was as long as patch can understand it. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Re: [Fink-devel] minicom...and sf.net will not accept ssl connections I use IE with SF all the time. [EMAIL PROTECTED] writes: You need to use OmniWeb or Mozilla, IE does not work with sourceforge. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Phone: (780)-408-3094 Fax: (780)-454-3200 Re: [Fink-devel] Updates for the nessus package sorry been updating sooo many pkgs lately I must have over looked this one..thanks I'll get it up to date ASAP. [EMAIL PROTECTED] writes: However, Nessus is now at version 1.1.14, with many new features and vulnerability detection scripts that are not porteable to the old 1.0.9 version. Re: [Fink-devel] qt probs add UpdateLibTool: true and add --with-pic --enable-static --enable-shared to the configureparams [EMAIL PROTECTED] writes: nevermind. I got it. but I do get this tidbit: checking if libtool supports shared libraries... no checking whether to build shared libraries... no checking whether to Re: [Fink-devel] qt probs first off make sure your QTDIR env var is set. if not then your qt install isn't complete. then read the config.log from licq and see why it's failing, it might be an other reason. [EMAIL PROTECTED] writes: Install the QT libraries, or if you have them installed, override this check with the Re: [Fink-devel] Package nessus and gtk+ friends gtk+ is a splitoff package which means it has the info for shlibs in the main info file which should be in the gnome dir. If your not running full unstable you'll need to copy over the gtk+ pkg from unstable gnome dir to local. [EMAIL PROTECTED] writes: fink selfupdate-cvs; fink update gtk+ Re: [Fink-devel] pilot-link-shlibs does not compile this is very odd it makes it for me everytime...can you scroll up to the part that is attempts to make the lib i think it's the second phase and paste that to me? [EMAIL PROTECTED] writes: hi, pilot-link-shlibs-0.10.99-1 does not compile. it breaks as follows: mkdir -p Re: [Fink-devel] pilot-link-shlibs does not compile okay thanks I can fix it [EMAIL PROTECTED] writes: after uninstalling pilot-link, the error was reproducible. i attached the complete compile log. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Phone: (780)-408-3094 Fax: Re: [Fink-devel] What is the current state of the OpenOSX Installer SW? I wouldn't mind trying to make it but we would once again be back to needing a logo:P Sorry trying to look at the bright side :P Thought for a fink cd we would just need the binary installer since it's web updatable and since there is no software on the cd execpt the installer ther eis no need Re: [Fink-devel] What is the current state of the OpenOSX Installer SW? or we put a cd iso online and email the link to macworld no cost, execpt time and bandwidth. [EMAIL PROTECTED] writes: Not a bad collection for sixty bucks, but then on the other hand you can get it all for free. But on the gripping hand, they admit this. Sorta: Re: [Fink-devel] What is the current state of the OpenOSX Installer SW? but that costs money to an open source project but we can get free bandwidth hell maybe even off of apple's site. I'd be willing to make the dmg or iso. [EMAIL PROTECTED] writes: Maybe we can co-host with linuxiso.org? I was thinking more of mailing them a burned CD. Re: [Fink-devel] Fink CD but you;'d be using disk space that doesn't need to be so. at least not for the bin dist since apt can get from cd. [EMAIL PROTECTED] writes: Make a package which uses the Apple installer to install a bunch of .deb files into /sw/fink/debs and a bunch of source files into /sw/src, after Re: [Fink-devel] Fink CD agreed :P [EMAIL PROTECTED] writes: 3) Yeah, having a logo would be nice for a CD, and for other stuff, too, but I don't see it as a strict requirement... OK, Justin? 8-) ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Phone: Re: [Fink-devel] Fink CD If FinkCommander or a fink gui of somesort would include configuration GUIs for fink.conf and source.list and was included as a fink pkg so they could get updates with out having to buy an other CD or reinstall the software, I'd be game in make the CD, which would install fink/finkcommander as a Re: [Fink-devel] libtool hacker needed... I'm sorry I don't remember there being an issue, as a matter a fact I thought giftcurs worked fine?? [EMAIL PROTECTED] writes: I have a problem with giFT. I need to disable dlopen, and manually add - -ldl to the LDFLAGS, or else it compiles fine, but says it cannot find symbols at runtime. I Re: [Fink-devel] Fink CD IE is fine. [EMAIL PROTECTED] writes: I take it I first need a Sourceforge account, and that IE can't be used to do this? I'll go sign up for one with Mozilla... ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - Systems Analyst Phone: (780)-408-3094 Re: [Fink-devel] dealing with libtool 1.4a install libtool14 from unstable copy the /sw/share/libtool/ltmain.sh to the build dir and edit the VERSION tag in ltmain.sh to be 1.4a or do a diff on them.. [EMAIL PROTECTED] writes: Anyway, pspell comes with libtool 1.4a (VERSION=1.4a in ltmain.sh). The source package has both ltmain.sh and Re: [Fink-devel] libtool hacker needed... well I got giFT-skt to work and sent it to beren, he informed me that he had just got it to work as well..but I do agree I'd use the curs version first. [EMAIL PROTECTED] writes: GiFTcurs works fine, and so does the giFT daemon, but the GTK+ front-end doesn't (it is obsolete anyway, I believe... Re: [Fink-devel] mozilla help Thanks that fixed it, you were right, I wonder why it was owned by root. 0.9.9 works perfectly now other then the ruff fonts, not mozilla's fault I don't think. [EMAIL PROTECTED] writes: The problem was that root owns ~/.mozilla ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - [Fink-devel] mozilla help I don't even get anything on my display, just a pause the Gdk-WARNING and then my prompt again. Here is the gdb thing thought ther eis no bt cause the program just exits and doesn't give me any msgs. --- Reading symbols for shared libraries .. done (gdb) run Starting program: [Fink-devel] Xfree libs I'm not sure but I think it needs the new dlcompat. I need this libs for a few port and they are always unsable. And for some reason this two libs are the only two that don't have dynamic libs only the a.out. Anyhow keep me posted. configure:8754: checking for XvShmCreateImage in -lXv [Fink-devel] cc errors why am i getting lib errors when it's not linking?? snip terminal.app Missing C++ runtime support for CC (/usr/bin/CC). Compilation of the following test program failed: -- #include iostream.h int main(){ cout Hello World! endl; return Re: [Fink-devel] bug in fink? or bug in me? =) shlibs prolly depends on %N (= %v-%r) I'd remove rrdtool. rrdtool-shlibs then try to instal version specific. [EMAIL PROTECTED] writes: [g4:main/finkinfo/libs] ranger% fink --version Package manager version: 0.9.10 Distribution version: 0.3.2a.cvs [g4:main/finkinfo/libs] ranger% fink install Re: [Fink-devel] cc errors good point thanks i just reinstalled libxpg4 not used to having it installed yet...thanks, I'll try that. [EMAIL PROTECTED] writes: Isn't this the true error message? This one is the well-known DYLD_FORCE_FLAT_NAMESPACE/libxpg4 bug. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett Re: [Fink-devel] elf's dl*() ported... it would be nice if we didn't have to do this. I know in my xine port i need to beable to use both which is a pain, though i'm not sure why i need both yet. [EMAIL PROTECTED] writes: So does ours, if you change the cc line to be cc -Ddlsym=dlsym_prepend_underscore Re: [Fink-devel] elf's dl*() ported... well if you have fink then 10.1.3 does have dlcompat. [EMAIL PROTECTED] writes: Well, yes, however, Mac OS X 10.1.3 does not have dlcompat, and I have noticed many posters seem to be using a Mac OS X system for 'darwin' development. (myself included...) Also, it was a less than easy install [Fink-devel] Re: Nessus 1.2.0 released (was Re: [Fink-users] nessus 1.0.10) it is updated, I also added libnessus-ssl which needs to be compiled first so if you move over to ssl support rebuild libnasl, nessus-common or nessus-common-nox and nessus-plugins and also the plugins now provide the needed nes files. Let me know how it all works, I've had one compile bug Re: [Fink-devel] Move aggressive I've also been using both especially mozilla 0.9.9 and I give them the thumbs up as well. [EMAIL PROTECTED] writes: Also, my Galeon package has a few users (at least I've gotten feedback from around 6 or 7), but it depends on gnome-vfs (= 1.0.3-4) and mozilla (= 0.9.9). I've been using both Re: [Fink-devel] Move aggressive once Max, uses the tree() that I co wrote in fink to add it's functionalty to fink list and fink info it will help :P [EMAIL PROTECTED] writes: Also many others, is there a quick and easy way to get a list of packages installed and not yet in stable? Use FinkCommander :-) (The smiley does Re: [Fink-devel] mozilla to stable (was Re: Move agressive) I know there was also a fix for mozilla 0.9.9, and 0.4 is released I think 0.9.9 should be re looked at now. [EMAIL PROTECTED] writes: I'm one of the people who has not been able to get mozilla-0.9.9 running (on either of my machines). I can use mozilla-0.9.8 just fine. There were other Re: [Fink-devel] mozilla to stable (was Re: Move agressive) simple check ~/.mozilla and make sure it's owned by user not root. I had the same problem and Feanor helped me fix it on the devel list. [EMAIL PROTECTED] writes: OK, we can look at it. I still have it installed. (It compiled and installed just fine.) I unlimit the stacksize (just in case), Re: [Fink-devel] mozilla to stable (was Re: Move agressive) I think it's a problem with the install script...since it's run in sudo mode it's prolly making the ~/.mozilla directly from the install script and not to %i. That would be my guess. Maybe run a check on ~/.mozilla if owned by root nuke it?? I don't know haven't thought of that part or hack Re: [Fink-devel] mozilla to stable (was Re: Move agressive) hmm I was just a suggestion...I don't know very much about hte mozilla port haven't looked at it. Maybe add a check to the mozilla script? I dont' know how it's being made or why it's wrong when being made. But i don't know that is the problem. I'm glad that you figured out the issue with rm [Fink-devel] pingus port since I'm very new to gdb, where would I start my search to fix this. I've checked the fonts.src (datafile) and it's okay and the refering font file is present and where it says it should be. any ideas?? Program received signal SIGABRT, Aborted. 0x7001a70c in kill () (gdb) bt #0 0x7001a70c in Re: [Fink-devel] mozilla to stable (was Re: Move agressive) sounds good to me though i believe it was 0.9.9-1 that introduced this bug. and i know it was fixed in -4 the inbetween version s I'm not certain of so only users that install 0.9.9-1 need worry about this issue. [EMAIL PROTECTED] writes: OK, I can confirm that simply upgrading to Re: [Fink-devel] ld option to supress multiple definitions (from apple's list) as far as I'm concerned if you can compile it, and run it why worry about the warnings. I think it would be better to try and keep as close as the author intended it to be. [EMAIL PROTECTED] writes: Would it be better to use the two-level namespace support of ld instead of fighting against it? Re: [Fink-devel] Problems getting a configure script to recognise libiconv now there has to be a reason why, but it's missing -liconv [EMAIL PROTECTED] writes: configure:7877: cc -o conftest -g -O2 -Wall -Wno-unknown-pragmas -I/sw/include - L/sw/lib conftest.c -lm 15 /usr/bin/ld: Undefined symbols: _iconv ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett Re: [Fink-devel] Problems getting a configure script to recognise libiconv I think the configure script should be fixed, since it might not need to link libiconv to every link. [EMAIL PROTECTED] writes: You're right... Hrmmm... Weird indeed! I'll see if adding '-liconv' to the LDFLAGS helps... Thanks! ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin F. Hallett - [Fink-devel] link problems how can a build link the same two libs in two different link cmds and one fail but the other doesn't?? cc -DHAVE_CONFIG_H -I. -I. -I.. -I.. -I../build -I../lib -I../rpmdb -I../rpmio -I../popt -I../misc -I/sw/include -I../misc -I/sw/include -no-cpp-precomp -D_GNU_SOURCE -D_REENTRANT -Wall Re: [Fink-devel] fink info format docs yes they are in the porting notes. [EMAIL PROTECTED] writes: Are the % variables/hashes/whatever documented anywhere? It seems every once in a while I stumble on another. I tried searching the fink sources, but I could not find them anywhere. I was thinking of working on a subsection of the Re: [Fink-devel] FYI: qt 3.0.4 upgrade will break old libraries updating all the pkgs will also force a rebuild on all of them to avoid any issues as RangerRick (Ben) had previously mentioned. [EMAIL PROTECTED] writes: Just a note, I've just committed a new version of the QT info file that updates it to 3.0.4, and also fixes some bugs. The most important Re: [Fink-devel] QT2 package qt2-shlibs should in theory be able to co exist if someone makes the split. [EMAIL PROTECTED] writes: Hi everyone! I was wondering if it would be possible for a qt2 package to be created, that would conflict with and replace the qt3 package. I currently have some packages (qcad, bbkeysconf) Re: [Fink-devel] Re: [Fink-users] qt3-shlibs linking error shoot your right the only one that I didn't think of and the only one that couldn't be avoided. See the problem is that when qt pkg was made it should have started with qt2 and then when the first qt3 pkg was made it should not have followed the same error as qt2 and been called qt3. So we had Re: [Fink-devel] new QT packages the problem is that qt was made to live in seperate directories, Not that I want to say it but I think we should see how debian handled this to avoid more problems. The problem with this I think will be with the bin portion. I think that the qt2-shlibs will need qt2-bin I'm sure the qt2-bin and Re: [Fink-devel] FYI: Porting fakeroot it doesn't appear to, do you know what headers normally provide this on linux? I appears that kde and mozilla both define there own, it might be possible to copy theirs? [EMAIL PROTECTED] writes: PS: Anyone know if Darwin has stat64 and friends? ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Re: [Fink-devel] FYI: Porting fakeroot you could use -Dstat64=__USE_FILE_OFFSET64 is they are equal, but I didn't really understand your question. [EMAIL PROTECTED] writes: sys/stat.h There is some magic in there to make stat64 be called stat if __USE_FILE_OFFSET64 is defined, too. ¸.·´^`·.,][JFH][`·.,¸¸.·´][JFH][¸.·´^`·., Justin Re: [Fink-devel] LINGUAS ? that or maybe fink should be unsetting before the make process? [EMAIL PROTECTED] writes: I do get a number of build failures, where to correct things I have first to unsetenv LINGUAS _ twice this morning (I sent a note to the maintainers), and again this evening (gal19). Question: is it Re: Solved: [Fink-devel] perl warning in fink 0.9.12-1 right but in order to do a apt-get update and not have error for the first time since the local dir won't have the packages.gz file in them you need to run fink scanpackages so it should be run before apt-get update to update the local apt packages.gz [EMAIL PROTECTED] writes: FWIW, Re: [Fink-devel] system-passwd? since we are on the topic, can i get a user list and group list added to the passwd file for my up comming mailman port? [EMAIL PROTECTED] writes: What remains to mention is that the user *IDs* must be fixed if I am mistaken. This makes a system-passwd package basically useless. It doesn't [Fink-devel] lftp or port? Since I suck at gdb still, maybe someone could make sense of this for me. Since 2.5.1 I can no longer use local tab completion in lftp I get a bus error, I want to make sure it's an lftp problem and not in my port. Remote tab completion works fine, it's only local, like put, lcd, etc... lftp Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem Xinerama and Xv, in the shared lib format are *NOT* needed. They are only used if present. Since Xinerama and Xv are built statics in the Xfree build by default, and we turned them on, since I needed them for an other port, and Ben build the KDE binaries from fink and the fink verion of Xfree, Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem no there isn't. Plus why would you want to revert the change? It doesn't hurt anything unless you try and mix to systems. hmmm...I think this might end up being a problem with other pkgs for the bin dist as well. shared libs and system pkgs are not gonna mix well in the bin dist. [EMAIL Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem This is true, how ever the system-xfree86 pkg is flawed in other ways as well. Since some pkgs may depend on a certain version of xfree, and since there is no way of knowing which is install with system-xfree86. The make a pkg management system as fink all pkgs almost need to be controlled by Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem Firstly i asked about the shared libs to the Xfree team, and there was no good reason for them being disabled, according to my replies from the list. Secondly this would be a concern if we were making .pkg OS X style installed that aren't managed. (i.e. kinda like rpm thought I'll give rpms Re: [KDE-Darwin] Re: [Fink-devel] KDE-Fink library problem of course i can't remember which pkg it was now, though it might be mplayer, and it's cause Xinerama and Xv statics are flawed on darwin that this happens mostly, and there are instructions on how to get xfree to make the shared versions in the install notes (accutally I think it was xine now Re: [Fink-devel] - verry thanks ! - yes and no. precompiled version may or may not work, and compiling these programs for source if available might need some patches. Since IIRC most of these are commercial programs and will never be in fink and are probably distributed via binaries only form. You can try them but since they are [Fink-devel] lesstif and kdebase3-ssl is it just me me or are these two choices odd, first lesstif, and with out doing a fink list to see that the revision on -dev is -4 and the revision on lesstif is -6 I'd never known to use lesstif. Then I told it to install kde-ssl why ask me if i want kdebase3 to have ssl?? Anyhow i don't know Re: [Fink-devel] lesstif and kdebase3-ssl thanks for the quick kleen explination Dave...I understand now... [EMAIL PROTECTED] writes: Hi Justin. The lesstif choice is a temporary thing... lesstif-dev is going away, and once the new lesstif has been tested and I can move it to stable, lesstif-dev will go away completely. The other one,
https://www.mail-archive.com/[email protected]&q=from:%22Justin+Hallett%22
CC-MAIN-2019-43
en
refinedweb
Components and supplies Apps and online services About this project Short Introduction This four wheeled robot is able to move wirelessly via Bluetooth connection. It is controllable by a retrolink Nintendo 64 USB controller using Johnny-Five library. A Nerf gun is mounted to the chassis and users are able to shoot with it. There is also a flashlight attached to lighten dark areas. You can see the circuit of the main parts in figure 1. Powering Due to the amount of electronic components (4 DC motors, 3 servos, etc) the best option is the usage of an USB power bank with high capacity. I have used a 10000 mAh Proda bank for the project. There is only one problem with most power banks. If there is not enough current drained from the batteries, the bank will shut down within a few seconds. Thankfully there are enough active power consumers in the circuit so this feature will not cause any trouble. I had to break the original USB cable and create another endpoint for powering the motor shield as well. So Arduino gets 5V from the power bank directly through it's USB slot and the motor shield gets another 5V from parallel connection. If you remove the jumper from the motor shield, you can power both boards separately. This is more optimal than powering motors and servos directly from the Arduino board. You can find more info at Adafruit. Bluetooth Connection I have used an HC-06 Bluetooth module to establish connection between the car and my computer. I wanted to use as few cables as possible, so I decided to build a custom shield by soldering the Bluetooth module to an empty prototype shield and attach to the Arduino. Movement I have used a Nintendo 64 controller with USB connection. You can see it in figure 2. To be able to use this controller in Javascript context, you need to install some third party libraries which support it. I have used node-gamepad which can offer a simple API and understandable documentation. The robot is able to do the following movements/actions: - Joystick - move forward, backward and can rotate the robot - C Up, C Down, C Right, C Left - Move the Nerf mount to the preferred direction - A - Move the trigger servo to shoot with the Nerf gun - B - Move back the trigger servo Nerf Gun The idea of the Nerf automation is to mount it onto a Pan/Tilt camera platform. There are tons of these on the internet, usually without servos, so make sure you have a pair if you ordered just the simple mount. I've used two TowerPro SG90 servos for positioning and one continuous servo for triggering fire. I have used a short fishing line attached to the servo and the gun trigger. The servo has the power to make enough force to pull the trigger when activated. See it in Action You can see the Nerf gun in action in the following video. The project was in an early state when I captured this, so some parts are missing and the power sources are 4 rechargeable AA batteries plus a 9V battery for the Arduino instead of a power bank (they drained really fast). The Code I have used NodeJS with Johnny-Five to make this project. You can find the full code in my GitHub repository linked below. There are three main components of the code: - Board Module - Handle the Arduino board - Motor Module - Handle the motor movements and control bindings - Servo Module - Handle the servo movements and control bindings In app.js you can see them together: import { BoardModule } from './src/boardModule'; import { MotorModule } from './src/motorModule'; import { ServoModule } from './src/servoModule'; const port = 'COM26'; const speed = 255; let boardModule = new BoardModule(port); let motorModule = new MotorModule(speed); let servoModule = new ServoModule(); boardModule.initBoard().then(() => { motorModule.initMotors(); servoModule.initServos(); }); After initialization of motors and servos, the gamepad should be instantiated and we are free to set the bindings for controlling each part via our Nintendo 64 controller. See this snippet for example: initGamePad() { let gamePad = new GamePad('n64/retrolink'); gamePad.connect(); gamePad.on('cUp:press', () => this.platformUp()); gamePad.on('cDown:press', () => this.platformDown()); gamePad.on('cRight:press', () => this.platformRight()); gamePad.on('cLeft:press', () => this.platformLeft()); gamePad.on('cUp:release', () => this.servoStop()); gamePad.on('cDown:release', ()=> this.servoStop()); gamePad.on('cRight:release', () => this.servoStop()); gamePad.on('cLeft:release', () => this.servoStop()); gamePad.on('a:press', () => this.triggerServoCW()); gamePad.on('b:press', () => this.triggerServoCCW()); gamePad.on('a:release', () => this.triggerServoStop()); gamePad.on('b:release', () => this.triggerServoStop()); } As you can see, each key press triggers two events: press and release. Press callback will be executed after the button pressed and release will be executed when we release it. In this way, we have the opportunity to stop our moving parts. The joystick of the gamepad have a bit more different configuration. It's about coordinates based on the direction where we move the stick. You can see the setup in the following code snippet: gamePad.on('center:move', (coords) => { if (coords.x == 127 && coords.y == 0) this.motorsForward(); if (coords.x == 127 && coords.y == 255) this.motorsBackward(); if (coords.x == 0 && coords.y == 127) this.motorsLeft(); if (coords.x == 255 && coords.y == 127) this.motorsRight(); if (coords.x == 127 && coords.y == 127) this.motorsStop(); }); Last Words Hope you like this little project which is another proof of concept about proving the power of Johnny-Five and Javascript robotics in general. The flashlight is just an extra, sitting on a phone mount. It is not controllable at the moment but can be with relays. Maybe in my next project. Code Github Schematics Author Dominik Filkus - 2 projects - 5 followers Published onApril 18, 2017 Members who respect this project you might like
https://create.arduino.cc/projecthub/dominikfilkus/arduino-node-tank-a46e86
CC-MAIN-2019-43
en
refinedweb
Saturday, May 28, 2011 stackoverflow - 20k rep Four months after crossing the 10k milestone, I've now achieved a reputation of 20k on stackoverflow! The following table shows some interesting stats on my time at SO: As I mentioned in my previous post,. It's hard juggling work and stackoverflow. The only times I get to use it are during the weekends and in the evenings after work. Most of the questions have already been answered by then! However, I still like browsing the questions and checking out the answers. 30k, here I come! Posted by Fahd Shariff at 12:29 PM 0 comments Links to this post Labels: stackoverflow Saturday, May 14, 2011 Lazily Instantiate a Final Field Java only allows you to instantiate finalfields in your constructor, like this: public class Test{ private final Connection conn; public Test(){ conn = new Connection(); } public Connection getConnection(){ return conn; } }Now, let's say that this field is expensive to create and so you would like to instantiate it lazily. We would like to be able to do something like the following (which won't compile because the finalfield has not been initialised in the constructor): //does not compile! public class Test{ private final Connection conn; public Test(){ } public Connection getConnection(){ if(conn == null) conn = new Connection(); return conn; } }So, there is no way to lazily instantiate a finalfield. However, with a bit of work, you can do it using Memoisation (with Callables). Simply wrap your field in a final Memoas shown below: public class Memo<T> { private T result; private final Callable<T> callable; private boolean established; public Memo(final Callable<T> callable) { this.callable = callable; } public T get() { if (!established) { try { result = callable.call(); established = true; } catch (Exception e) { throw new RuntimeException("Failed to get value of memo", e); } } return result; } } public class Test { private final Memo<Connection> conn; public Test() { conn = new Memo<Connection>(new Callable<Connection>() { public Connection call() throws Exception { return new Connection(); } }); } public Connection getConnection() { return conn.get(); } } Posted by Fahd Shariff at 3:42 PM 0 comments Links to this post Labels: Java, programming
https://fahdshariff.blogspot.com/2011/05/
CC-MAIN-2019-43
en
refinedweb
java.lang.Object akka.io.UdpMessageakka.io.UdpMessage public class UdpMessage Java API: factory methods for the message types used when communicating with the Udp service. public UdpMessage() public static Udp.NoAck noAck(java.lang.Object token) Udp.Sendcan optionally request a positive acknowledgment to be sent to the commanding actor. If such notification is not desired the Udp.Send.ack()must be set to an instance of this class. The token contained within can be used to recognize which write failed when receiving a Udp.CommandFailedmessage. token- (undocumented) public static Udp.NoAck noAck() Udp.NoAckinstance which is used when no acknowledgment information is explicitly provided. Its “token” is null. public static Udp.Command send(ByteString payload, java.net.InetSocketAddress target, Udp.Event ack) Udp.SimpleSenderquery to the UdpExt.manager()as well as by the listener actors which are created in response to Udp.Bind. It will send the given payload data as one UDP datagram to the given target address. The UDP actor will respond with Udp.CommandFailedif the send could not be enqueued to the O/S kernel because the send buffer was full. If the given ackis not of type Udp.NoAckthe UDP actor will reply with the given object as soon as the datagram has been successfully enqueued to the O/S kernel. The sending UDP socket’s address belongs to the “simple sender” which does not handle inbound datagrams and sends from an ephemeral port; therefore sending using this mechanism is not suitable if replies are expected, use Udp.Bind in that case. payload- (undocumented) target- (undocumented) ack- (undocumented) public static Udp.Command send(ByteString payload, java.net.InetSocketAddress target) send(payload, target, noAck()). payload- (undocumented) target- (undocumented) public static Udp.Command bind(ActorRef handler, java.net.InetSocketAddress endpoint, java.lang.Iterable<Inet.SocketOption> options) UdpExt.manager()in order to bind to the given local port (or an automatically assigned one if the port number is zero). The listener actor for the newly bound port will reply with a Udp.Boundmessage, or the manager will reply with a Udp.CommandFailedmessage. handler- (undocumented) endpoint- (undocumented) options- (undocumented) public static Udp.Command bind(ActorRef handler, java.net.InetSocketAddress endpoint) handler- (undocumented) endpoint- (undocumented) public static Udp.Command unbind() Udp.Boundmessage in order to close the listening socket. The recipient will reply with an Udp.Unboundmessage. public static Udp.Command simpleSender(java.lang.Iterable<Inet.SocketOption> options) Udp.SimpleSenderReadynotification. The “simple sender” is a convenient service for being able to send datagrams when the originating address is meaningless, i.e. when no reply is expected. The “simple sender” will not stop itself, you will have to send it a PoisonPill when you want to close the socket. options- (undocumented) public static Udp.Command simpleSender() public static Udp.Command suspendReading() Udp.Boundmessage) to have it stop reading datagrams from the network. If the O/S kernel’s receive buffer runs full then subsequent datagrams will be silently discarded. Re-enable reading from the socket using the Udp.ResumeReadingcommand. public static Udp.Command resumeReading() Udp.SuspendReadingcommand.
https://doc.akka.io/japi/akka/2.3/akka/io/UdpMessage.html
CC-MAIN-2019-43
en
refinedweb
On Feb 15, 2008, at 12:04 PM, David Bateman wrote: Ben Abbott wrote:On Feb 15, 2008, at 11:31 AM, David Bateman wrote:Ben Abbott wrote:How does that help you if we can't convince mathworks to do the same?How does that help you if we can't convince mathworks to do the same?On Feb 15, 2008, at 5:02 AM, David Bateman wrote:Use Octave 3.0.0 and use the matlab syntax everywhere, in most cases it should then just work.. If there are any other differences that preventit working then they should be reported as bugs. A function that doeswhat you want is function ret = isoctave () persistent isoct if (isempty (isoct)) isoct = exist('OCTAVE_VERSION') ~= 0; end ret = isoct; end Regards DavidMight this be added to the core functions? BenD.Good point <blushing> Perhaps an octave version of an existing Matlab function ("ver", "version", "verLessThan", ?) could do the job?Under matlab R2007b I seea = ver('matlab')a = Name: 'MATLAB' Version: '7.5' Release: '(R2007b)' Date: '02-Aug-2007'a = ver('octave')a = 0x0 struct array with fields: Name Version Release Date The octave "ver" function doesn't take an argument. To get thefunctionality you want this way you could modify Octave's ver so that it assumes the argument is 'octave' if it is missing and then something likeif (strcmpi (pack, "octave")) ## Do what is already done, plus set ret if needed else lst = pkg("list"); ret = []; for i = 1 : length (lst) if (strcmpi (pack, lst{i}.name)) ret = struct ("Name", lst{i}.name, "Version", lst{i}.version, "Release", [], "Date", lst{i}.date); break; endif endfor if (isempty (ret)) ## How do you create an empty structure?ret = struct ("Name", [], "Version", [], "Release", [], "Date", []);ret(1) = []; endif endif would get the type of behavior you want as long as someone doesn't create an "octave" toolbox in matlab or a "matlab" package in octave. D. That's a good start.I'm busy for the next several hours, but will look more closely at how Matlab's version works and take a shot at this later today. Ben
https://lists.gnu.org/archive/html/help-octave/2008-02/msg00196.html
CC-MAIN-2019-43
en
refinedweb
In today’s article, we will explore how to unit test Vue.js single file components using the ava test framework and the vue-test-utils package. I’ve decided to write this article because ava is my favorite test framework, if you’re more into the Mocha test framework, I highly recommend you to watch the Testing Vue series on Laracasts. Testing Vue.js components is different from testing regular JavaScript modules or classes in two ways. First of all Vue.js components depend on Vue.js, its global state and oftentimes on plugins like Vuex or the vue-router. Second, Vue.js single file components usually are compiled with webpack, the regular workflow of using Babel to compile JavaScript code before testing it, is not sufficient in this case. Let’s find out how to deal with those challenges. Setting up the test environment In preparation for this article, I’ve set up a simple demo project, using the official Vue.js CLI PWA template. You may view the complete code used in this article on GitHub. After creating a new project with the Vue.js CLI (already including the vue-router package), we can now start by installing all the necessary dependencies that we need to build and test our app. npm install vuex npm install --save-dev ava babel-plugin-transform-object-rest-spread jsdom jsdom-global require-extension-hooks require-extension-hooks-babel require-extension-hooks-vue sinon vue-test-utils Let’s take a closer look at this long list of dependencies. The only non development dependency which we’re adding is vuex, we’ll use Vuex to manage the state of our demo component which we’re going to build using TDD. ava is the test framework which we’re going to use. The babel-plugin-transform-object-rest-spread makes it possible to test components which are using the new JavaScript spread operator, we’re going to use the spread operator in combination with Vuex’ mapActions() function. We’re going to use jsdom and jsdom-global to simulate a browser environment in our tests. The require-extension-hooks-* packages are required in order to being able to test single file Vue.js components without having to compile them with webpack first. sinon is a mocking library which makes it possible to create spies and stubs of objects. Last but not least comes the vue-test-utils package, which is the official helper package for testing Vue.js components. Configuring ava Because Vue.js single file components can’t be compiled by ava on the fly, we have to create a setup.js file in a newly created test directory, which runs before the test and compiles the tested single file component into pure JavaScript code which can be interpreted by ava. // test/setup.js const hooks = require('require-extension-hooks'); // Set up a virtual browser environment. require('jsdom-global')(); // Setup `.vue` files to be processed by `require-extension-hooks-vue`. hooks('vue').plugin('vue').push(); // Setup `.vue` and `.js` files to be processed by `require-extension-hooks-babel`. hooks(['vue', 'js']).plugin('babel', { plugins: ['transform-object-rest-spread'] }).push(); In the code above we’re using jsdom-global to set up a virtual browser environment, this makes it possible to access browser specific APIs although we’re running our tests in a Node.js environment. Using jsdom instead of a real browser environment or PhantomJS, helps us to keep our tests as fast as possible. In the next step, we have to tell ava about the setup.js file. To do so we can add the following snippet to our package.json file. "ava": { "require": [ "./test/setup.js" ] } The last thing we have to do before we can get started with writing our first test, is to add a test npm script to our package.json file, to make it possible to quickly trigger an ava test run. "scripts": { "dev": "node build/dev-server.js", "start": "node build/dev-server.js", "build": "node build/build.js", "test": "ava test/**/*.spec.js" } Using TDD to build a component Now that we’ve set up our testing environment, let’s build a to-do app using the TDD approach. <template> <div class="to-do"></div> </template> <script> export default { name: 'ToDo', }; </script> Because we’re using TDD, we’re starting with an empty ToDo.vue component in src/components, just so that we can import something. // test/components/ToDo.spec.js import { shallow } from 'vue-test-utils'; import test from 'ava'; import ToDo from '../../src/components/ToDo'; test('It should render an `<div>`.', (t) => { const wrapper = shallow(ToDo); t.true(wrapper.is('div')); }); In the test code above you can see, that we’re importing a function named shallow from vue-test-utils. This function makes it possible to initialize a Vue.js component, but instead of also initializing all its child components, it automatically stubs them. If you want to initialize a component including all its child components, you have to use the mount function. The first test case you can see in the code snippet above, tests if the wrapper element, rendered by the component, is a <div> tag. You might wonder what this test is good for: with this very simple test, we test not primarily the functionality of the component but whether the setup works in principle. If this test fails and we’ve made sure that the component in fact should render a <div>, we know that something is wrong with the setup, but not necessarily with the component. Implementing the functionality When following the TDD approach, the test is written before the implementation. The first thing we want to implement is a list of to-do items. test('It should show a list of to-do items if there are any.', (t) => { const wrapper = shallow(ToDo, { data() { return { items: [ 'Hello World', 'This is a test', ], }; }, }); t.true(wrapper.contains('.qa-to-do-item')); }); In the test above we’re initializing a new instance of our ToDo component with some data. We specify that our component should render a list of items and check if this is true by asserting that the wrapper contains an element with the CSS selector .qa-to-do-item (read more about why qa prefixes are awesome). If we run this test with npm run test the test should fail, because we don’t have implemented the functionality yet. <template> <div class="to-do"> <ul class="to-do__list qa-to-do-list"> <li class="to-do__item qa-to-do-item" v- {{ item }} </li> </ul> </div> </template> <script> export default { name: 'ToDo', data() { return { items: [], }; }, }; </script> In the example above you can see the implementation for displaying a list of to-do items. If we run our test again, this time it should pass. In the next step we want to specify what happens, if there are no to-do items. test('It shouldn\'t render a list if there are no items.', (t) => { const wrapper = shallow(ToDo); t.false(wrapper.contains('.qa-to-do-list')); }); If there are no items, we don’t want to display anything at all. If we run our test again we can see that it fails. Let’s change that. <template> <div class="to-do"> <ul class="to-do__list qa-to-do-list" v- <li class="to-do__item qa-to-do-item" v- {{ item }} </li> </ul> </div> </template> By adding a v-if binding, which is checking the length of the items array, on the to-do list item, we make sure that this element is only rendered if there are any items to be displayed. If we run our tests again, we can see that now all of them are passing again. So far so good. The only thing that’s missing from our little to-do app, is the possibility to add new items. test('It can add new to-do items.', (t) => { const wrapper = shallow(ToDo);.is(wrapper.find('.qa-to-do-item').text().trim(), 'New to-do item'); }); In the test code above, we specify, that there should be an input and a button element. If text is entered into the input field and the button is clicked, a new to-do item containing the text should be added to the list. We’re checking this by comparing the text of the element with the selector .qa-to-do-item with the text which we’ve previously entered into the input element. ="items.push(newItem)">Add</button> </div> </template> <script> export default { name: 'ToDo', data() { return { items: [], newItem: '', }; }, }; </script> In the code above, you can see that we’ve added a new input and a button element. By using v-model on the input element we’re binding its value to the newItem data key. The click event listener on the button element pushes the value of newItem into the items array when activated. Testing Vuex powered components We now have a working to-do app. But this is a rather simple example of how to build a Vue.js component, in a real world application you’ll most likely use a global state to store your data. This is the right time to bring Vuex into the equation. To get Vuex up and running we need to add the following directories and files. . └── src └── store ├── index.js └── modules └── todo.js // src/store/index.js import Vue from 'vue'; import Vuex from 'vuex'; import todo from './modules/todo'; Vue.use(Vuex); export default new Vuex.Store({ modules: { todo, }, }); // src/store/modules/todo.js const getters = { items: state => state.items, }; const mutations = { ADD(state, { item }) { state.items.push(item); }, }; const state = { items: [], }; export default { namespaced: true, getters, mutations, state, } One thing to mention is, that we’re using the Vuex namespace feature. This prevents naming collisions from happening. Additionally we have to register our newly created Vuex powered store in our Vue instance which is created in the src/main.js file. import Vue from 'vue'; import App from './App'; import router from './router'; import store from './store'; new Vue({ el: '#app', router, store, render: h => h(App), }); If you’re not quite sure whats happening in the files above, please head over to the official Vuex documentation – explaining how Vuex works is out of the scope of this article. After creating and registering our Vuex store, we have to update our to-do app component to make use of the global store instead of relying on its own local state. > </div> </template> <script> import { createNamespacedHelpers } from 'vuex'; const { mapGetters, mapMutations } = createNamespacedHelpers('todo'); export default { name: 'ToDo', data() { return { newItem: '', }; }, computed: { ...mapGetters(['items']), }, methods: { ...mapMutations({ add: 'ADD', }), }, }; </script> In the code above you can see, that we’ve changed the click handler in the template. The click handler now calls a new add method. We’re using Vuex map functions to map getter and mutation functions. If we’d run our tests again, we’d see them fail. In order to make them pass again, we have to mock the store and pass the mocked store instance to the instance of the component under test. // test/components/ToDo.spec.js import Vuex from 'vuex'; import sinon from 'sinon'; import { createLocalVue, shallow } from 'vue-test-utils'; import test from 'ava'; import ToDo from '../../src/components/ToDo'; const localVue = createLocalVue(); localVue.use(Vuex); // Mock the `ADD` mutation to make it // possible to check if it was called. const mutations = { ADD: sinon.spy(), }; // This function creates a new Vuex store // instance for every new test case. function createStore(items = []) { const modules = { todo: { namespaced: true, getters: { items: () => items, }, mutations, }, }; return new Vuex.Store({ modules, }); } test('It should render an `<div>`.', (t) => { const wrapper = shallow(ToDo, { localVue, store: createStore() }); t.true(wrapper.is('div')); }); test('It should show a list of to-do items if there are any.', (t) => { const wrapper = shallow(ToDo, { localVue, store: createStore([ 'Hello World', 'This is a test', ]), }); t.true(wrapper.contains('.qa-to-do-item')); }); test('It shouldn\'t render a list if there are no items.', (t) => { const wrapper = shallow(ToDo, { localVue, store: createStore() }); t.false(wrapper.contains('.qa-to-do-list')); }); test('It can add new to-do items.', (t) => { const wrapper = shallow(ToDo, { localVue, store: createStore() });.true(mutations.ADD.calledWith({}, { item: 'New to-do item' })); }); Let’s walk through the changes we’ve made to make the test work with Vuex. First of all, we’re importing three new dependencies: Vuex, sinon and createLocalValue. createLocalValue is a helper function which makes it possible to pass globals into the Vue instance of our component – we need this functionality to pass our mock store to the component with localVue.use(Vuex) later we use localVue and the store instance to create a new component instance with shallow(ToDo, { localVue, store: createStore() }). In the last test case, we’ve changed the assertion from checking if the list of to-do items was updated, to making sure, that the ADD mutation was called. In unit tests, we assume that everything outside of the scope of the current test works as expected. By applying this logic, we can safely assume that the ADD mutation does its job correctly, and it will indeed add a new to-do item to the store. In a previous test we’ve already tested if items in the store render correctly, therefore in this test it is sufficient to check if the mutation function was called with the correct parameters. Testing vue-router powered components Now that we’ve built a Vuex powered to-do app, let’s take a look at how to test Vue.js components, which are using the vue-router package. In this example we’ll assume that we want to link to a statistics page and we want to handle a click event on the router link. Usually, if you’re using the shallow function, the vue-test-utils will stub all child components of the component under test, but this makes it impossible to handle a click event on a child component. Vue.js requires you to use @click.native if you want to handle (click) events on child components, but native events are not fired if the component is not initialized. Because of this, we have to use the mount function instead of shallow whenever we want to test if an event bound to a child component was emitted correctly. // src/router/index.js import Vue from 'vue'; import Router from 'vue-router'; import ToDo from '@/components/ToDo'; import ToDoStats from '@/components/ToDoStats'; Vue.use(Router); export default new Router({ routes: [ { path: '/', name: 'To-Do', component: ToDo, }, { path: '/stats', name: 'Stats', component: ToDoStats, }, ], }); <template> <div class="to-do-stats"> <h1>Stats</h1> </div> </template> <script> export default { name: 'ToDoStats', }; </script> In the code snippets above, you can see that we’ve added a new route and a new component ( src/components/ToDoStats.vue) to render at this route. The ToDoStats component has no other functionality than to make it possible to add the new route. > <router-link Go to the stats </router-link> </div> </template> The code you can see above is the modified template of our ToDo component. The only thing which has changed is that we’ve added a <router-link> and bound a click handler to it. Now we wan’t to test if the event is emitted correctly. import Vuex from 'vuex'; import Router from 'vue-router'; import sinon from 'sinon'; import { createLocalVue, shallow, mount } from 'vue-test-utils'; import test from 'ava'; import ToDo from '../../src/components/ToDo'; const localVue = createLocalVue(); localVue.use(Vuex); localVue.use(Router); // ... // Initialize a new router with // the route data needed for the test. const router = new Router({ routes: [ { path: '/stats', }, ], }); // ... test('It should emit an event when clicking the stats link.', (t) => { const wrapper = mount(ToDo, { localVue, store: createStore(), router }); wrapper.find('.qa-to-do-stats-link').trigger('click'); t.truthy(wrapper.emitted().clickStatsLink); }); In order to mount our ToDo component with the <router-link> handled by the vue-router, we have to import the vue-router and register it with our Vue.js instance. In the test case we trigger a click event on the <router-link> element and we check if a clickStatsLink event was emitted. If we’ve done everything correctly our test should pass. Wrapping it up Thanks to the vue-test-utils package, using a TDD approach for building Vue.js components has become a breeze. However, things can become tricky when external plugins and dependencies are being used. I hope this article answers some questions about how to test Vuex and vue-router powered Vue.js single file components.
https://markus.oberlehner.net/blog/unit-testing-vue-single-file-components-with-ava/
CC-MAIN-2019-43
en
refinedweb
In codelab MDC-101, you used two Material Components to build a login page: text fields and buttons with ink ripples. Now let's expand upon this foundation by adding navigation, structure, and data. In this codelab, you'll build a home screen for an app called Shrine, an e-commerce app that sells clothing and home goods. It will contain: To start developing mobile apps with Flutter you need: Flutter's IDE tools are available for Android Studio, IntelliJ IDEA Community (free), and IntelliJ IDEA Ultimate. To build and run Flutter apps on iOS: To build and run Flutter apps on Android: Get detailed Flutter setup information Before proceeding with this codelab, make sure that your SDK is in the right state. If the flutter SDK was installed previously, then use flutter upgrade to ensure that the SDK is at the latest state. flutter upgrade Running flutter upgrade will automatically run flutter doctor. If this a fresh flutter install and no upgrade was necessary, then run flutter doctor manually. See that all the check marks are showing; this will download any missing SDK files you need and ensure that your codelab machine is set up correctly for Flutter development. flutter doctor If you completed MDC-101, your code should be prepared for this codelab. Skip to step: Add a top app bar. The starter app is located in the material-components-flutter-codelabs-102-starter_and_101-complete/mdc_100_series directory. To clone this codelab from GitHub, run the following commands: git clone cd material-components-flutter-codelabs git checkout 102-starter_and_101-complete The following instructions assume you're using Android Studio (IntelliJ). The following instructions assume you're testing on an Android emulator or device but you can also test on an iOS Simulator or device if you have Xcode installed. Success! You should see the Shrine login page from the MDC-101 codelab in the simulator or emulator. Now that the login screen looks good, let's populate the app with some products. Right now, if you click the "Next" button you will be able to see the home screen that says "You did it!". That's great! But now our user has no actions to take, or any sense of where they are in the app. To help with that, it's time to add navigation. Material Design offers navigation patterns that ensure a high degree of usability. One of the most visible components is a top app bar. To provide navigation and give users quick access to other actions, let's add a top app bar. In home.dart, add an AppBar to the Scaffold: return Scaffold( // TODO: Add app bar (102) appBar: AppBar( // TODO: Add buttons and title (102) ), Adding the AppBar to the Scaffold's appBar: field, gives us perfect layout for free, keeping the AppBar at the top of the page and the body underneath. Save the project. When the Shrine app updates, click Next to see the home screen. AppBar looks great but it needs a title. In home.dart, add a title to the AppBar: // TODO: Add app bar (102) appBar: AppBar( // TODO: Add buttons and title (102) title: Text('SHRINE'), // TODO: Add trailing buttons (102) Save the project. Many app bars have a button next to the title. Let's add a menu icon in our app. While still in home.dart, set an IconButton for the AppBar's leading: field. (Put it before the title: field to mimic the leading-to-trailing order): return Scaffold( appBar: AppBar( // TODO: Add buttons and title (102) leading: IconButton( icon: Icon( Icons.menu, semanticLabel: 'menu', ), onPressed: () { print('Menu button'); }, ), Save the project. The menu icon (also known as the "hamburger") shows up right where you'd expect it. You can also add buttons to the trailing side of the title. In Flutter, these are called "actions". There's room for two more IconButtons. Add them to the AppBar instance after the title: // TODO: Add trailing buttons (102) actions: <Widget>[ IconButton( icon: Icon( Icons.search, semanticLabel: 'search', ), onPressed: () { print('Search button'); }, ), IconButton( icon: Icon( Icons.tune, semanticLabel: 'filter', ), onPressed: () { print('Filter button'); }, ), ], Save your project. Your home screen should look like this: Now the app has a leading button, a title, and two actions on the right side. The app bar also displays elevation using a subtle shadow that shows it's on a different layer than the content. Now that our app has some structure, let's organize the content by placing it into cards. Let's start by adding one card underneath the top app bar. The Card widget alone doesn't have enough information to lay itself out where we could see it, so we'll want to encapsulate it in a GridView widget. Replace the Center in the body of the Scaffold with a GridView: // TODO: Add a grid view (102) body: GridView.count( crossAxisCount: 2, padding: EdgeInsets.all(16.0), childAspectRatio: 8.0 / 9.0, // TODO: Build a grid of cards (102) children: <Widget>[Card()], ), Let's unpack that code. The GridView invokes the count() constructor since the number of items it displays is countable and not infinite. But it needs some information to define its layout. The crossAxisCount: specifies how many items across. We want 2 columns. The padding: field provides space on all 4 sides of the GridView. Of course you can't see the padding on the trailing or bottom sides because there's no GridView children next to them yet. The childAspectRatio: field identifies the size of the items based on an aspect ratio (width over height). By default, GridView makes tiles that are all the same size. Adding that all together, the GridView calculates each child's width as follows: ([width of the entire grid] - [left padding] - [right padding]) / number of columns. Using the values we have: ([width of the entire grid] - 16 - 16) / 2. The height is calculated from the width, by applying the aspect ratio:: ([width of the entire grid] - 16 - 16) / 2 * 9 / 8. We flipped 8 and 9 because we are starting with the width and calculating the height and not the other way around. We have one card but it's empty. Let's add child widgets to our card. Cards should have regions for an image, a title, and secondary text. Update the children of the GridView: // TODO: Build a grid of cards (102) children: <Widget>['), ], ), ), ], ), ) ], This code adds a Column widget used to lay out the child widgets vertically. The crossAxisAlignment: field specifies CrossAxisAlignment.start, which means "align the text to the leading edge." The AspectRatio widget decides what shape the image takes no matter what kind of image is supplied. The Padding brings the text in from the side a little. The two Text widgets are stacked vertically with 8 points of empty space between them (SizedBox). We make another Column to house them inside the Padding. Save your project: In this preview, you can see the card is inset from the edge, with rounded corners, and a shadow (that expresses the card's elevation). The entire shape is called the "container" in Material. (Not to be confused with the actual widget class called Container.) Cards are usually shown in a collection with other cards. Let's lay them out as a collection in a grid. Whenever multiple cards are present in a screen, they are grouped together into one or more collections. Cards in a collection are coplanar, meaning cards share the same resting elevation as one another (unless the cards are picked up or dragged, but we won't be doing that here). Right now our Card is constructed inline of the children: field of the GridView. That's a lot of nested code that can be hard to read. Let's extract it into a function that can generate as many empty cards as we want, and returns a list of Cards. Make a new private function above the build() function (remember that functions starting with an underscore are private API): // TODO: Make a collection of cards (102) List<Card> _buildGridCards(int count) { List<Card> cards = List.generate( count, (int index) =>'), ], ), ), ], ), ), ); return cards; } Assign the generated cards to GridView's children field. Remember to replace everything contained in the GridView with this new code: // TODO: Add a grid view (102) body: GridView.count( crossAxisCount: 2, padding: EdgeInsets.all(16.0), childAspectRatio: 8.0 / 9.0, children: _buildGridCards(10) // Replace ), Save the project: The cards are there, but they don't show anything yet. Now's the time to add some product data. The app has some products with images, names, and prices. Let's add that to the widgets we have in the card already Then, in home.dart, import a new package and some files we supplied for a data model: import 'package:flutter/material.dart'; import 'package:intl/intl.dart'; import 'model/products_repository.dart'; import 'model/product.dart'; Finally, change _buildGridCards() to fetch the product info, and use that data in the cards: // TODO: Make a collection of cards (102) // Replace this entire method List<Card> _buildGridCards(BuildContext context) { List<Product> products = ProductsRepository.loadProducts(Category.all); if (products == null || products.isEmpty) { return const <Card>[]; } final ThemeData theme = Theme.of(context); final NumberFormat formatter = NumberFormat.simpleCurrency( locale: Localizations.localeOf(context).toString()); return products.map((product) { return Card( clipBehavior: Clip.antiAlias, // TODO: Adjust card heights (103) child: Column( // TODO: Center items on the card (103) crossAxisAlignment: CrossAxisAlignment.start, children: <Widget>[ AspectRatio( aspectRatio: 18 / 11, child: Image.asset( product.assetName, package: product.assetPackage, // TODO: Adjust the box size (102) ), ), Expanded( child: Padding( padding: EdgeInsets.fromLTRB(16.0, 12.0, 16.0, 8.0), child: Column( // TODO: Align labels to the bottom and center (103) crossAxisAlignment: CrossAxisAlignment.start, // TODO: Change innermost Column (103) children: <Widget>[ // TODO: Handle overflowing labels (103) Text( product.name, style: theme.textTheme.title, maxLines: 1, ), SizedBox(height: 8.0), Text( formatter.format(product.price), style: theme.textTheme.body2, ), ], ), ), ), ], ), ); }).toList(); } NOTE: Won't compile and run yet. We have one more change. Also, change the build() function to pass the BuildContext to _buildGridCards() before you try to compile: // TODO: Add a grid view (102) body: GridView.count( crossAxisCount: 2, padding: EdgeInsets.all(16.0), childAspectRatio: 8.0 / 9.0, children: _buildGridCards(context) // Changed code ), You may notice we don't add any vertical space between the cards. That's because they have, by default, 4 points of padding on their top and bottom. Save your project: The product data shows up, but the images have extra space around them. The images are drawn with a BoxFit of .scaleDown by default (in this case). Let's change that to .fitWidth so they zoom in a little and remove the extra whitespace. Change the image's fit: field: // TODO: Adjust the box size (102) fit: BoxFit.fitWidth, Our products are now showing up in the app perfectly! Our app has a basic flow that takes the user from the login screen to a home screen, where products can be viewed. In just a few lines of code, we added a top app bar (with a title and three buttons) and cards (to present our app's content). Our home screen is now simple and functional, with a basic structure and actionable content. With the top app bar, card, text field, and button, we've now used four core components from the MDC-Flutter library! You can explore even more components by visiting the Flutter Widgets Catalog. While it's fully functioning, our app doesn't yet express any particular brand or point of view. In MDC-103: Material Design Theming with Color, Shape, Elevation and Type, we'll customize the style of these components to express a vibrant, modern brand.
https://codelabs.developers.google.com/codelabs/mdc-102-flutter/
CC-MAIN-2019-43
en
refinedweb
(such as I2C and UART) to connect to hardware peripherals. To bootstrap the getting started process, Google provides the Peripheral Driver Library on GitHub. This library is an open-source repository of pre-written drivers for common peripherals. In this codelab, you will be using drivers from the Peripheral Driver Library to quickly build a fully-functional application that interacts with multiple peripherals simultaneously. You will learn how to open Peripheral I/O connections and transfer data between the devices. Before you begin building apps for Things, you must: If you have not already installed Android Things on your development board, follow the official image flashing instructions for your board: Install the Rainbow HAT on top of your developer peripherals on the Rainbow HAT used in this codelab are connected to the following signals. These are also listed on the back of the Rainbow HAT: Click the following link to download the starter project for this codelab: ...or you can clone the GitHub repository from the command line: $ git clone weatherstation-startsubdirectory. Verify that the project has successfully launched on the device by verifying the startup message in the log: $ adb logcat WeatherStationActivity:V *:S ... D WeatherStationActivity: Started Weather Station The starter project contains the following source files: RainbowUtil: Helper class to compute the Rainbow colors displayed on the Rainbow HAT RGB LED strip. Your code will reference this class to convert sensor readings into the proper display colors. WeatherStationActivity: Main activity of the application. The code you write in this codelab will be placed here. The app-level build.gradle file includes a dependency for Android Things support library to enable access to the Peripheral I/O API: dependencies { compileOnly 'com.google.android.things:androidthings:1.0' } The Rainbow HAT includes an HT16K33 segment display driver connected over the I2C serial bus and a strip of seven APA102 RGB LEDs connected over the SPI serial bus. You will be accessing these devices using drivers provided for the Rainbow HAT. Add a dependency for the RainbowHat driver to your app-level build.gradle file: dependencies { ... implementation 'com.google.android.things.contrib:driver-rainbowhat:1.0' } Open the WeatherStationActivity class, and declare new fields for an AlphanumericDisplay and Apa102 LED strip: import com.google.android.things.contrib.driver.apa102.Apa102; import com.google.android.things.contrib.driver.ht16k33.AlphanumericDisplay; import com.google.android.things.contrib.driver.rainbowhat.RainbowHat; public class WeatherStationActivity extends Activity { ... private AlphanumericDisplay mDisplay; private Apa102 mLedstrip; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Log.d(TAG, "Weather Station Started"); } } Open a connection to both peripherals in onCreate()using the RainbowHat driver. Set the AlphanumericDisplay to show the value 1234, and the Apa102 to display Color.RED across all the LEDs in the strip: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Log.d(TAG, "Weather Station Started"); // Initialize 14-segment display try { mDisplay = RainbowHat.openDisplay(); mDisplay.setEnabled(true); mDisplay.display("1234"); Log.d(TAG, "Initialized I2C Display"); } catch (IOException e) { throw new RuntimeException("Error initializing display", e); } // Initialize LED strip try { mLedstrip = RainbowHat.openLedStrip(); mLedstrip.setBrightness(LEDSTRIP_BRIGHTNESS); int[] colors = new int[7]; Arrays.fill(colors, Color.RED); mLedstrip.write(colors); // Because of a known APA102 issue, write the initial value twice. mLedstrip.write(colors); Log.d(TAG, "Initialized SPI LED strip"); } catch (IOException e) { throw new RuntimeException("Error initializing LED strip", e); } } Add the following code to onDestroy() to turn off both output devices and close the connections once they are no longer needed: @Override protected void onDestroy() { super.onDestroy(); if (mDisplay != null) { try { mDisplay.clear(); mDisplay.setEnabled(false); mDisplay.close(); } catch (IOException e) { Log.e(TAG, "Error closing display", e); } finally { mDisplay = null; } } if (mLedstrip != null) { try { mLedstrip.setBrightness(0); mLedstrip.write(new int[7]); mLedstrip.close(); } catch (IOException e) { Log.e(TAG, "Error closing LED strip", e); } finally { mLedstrip = null; } } } Deploy the app to the device by selecting Run → Run 'app' from the menu, or click the Run icon in the toolbar. Verify that the display shows "1234" and all the LEDs on the Rainbow HAT are glowing red. The Rainbow HAT includes a BMP280 temperature and pressure sensor that is connected over the I2C serial bus. This driver handles the low-level I2C communication and binds the new sensor to the Android framework as a user-space driver. Once the driver is registered, your code will retrieve the sensor data using the standard SensorManager system service. Add the MANAGE_SENSOR_DRIVERS permission to your app's manifest file. This permission is required to register any new sensor as a user-space driver. <manifest xmlns: <uses-permission android: ... </manifest> Declare a new Bmx280SensorDriver and initialize it in onCreate(). Register the user-space drivers for each individual sensor on the BMP280 with the sensor framework: import com.google.android.things.contrib.driver.bmx280.Bmx280SensorDriver; public class WeatherStationActivity extends Activity { ... private Bmx280SensorDriver mEnvironmentalSensorDriver; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Log.d(TAG, "Weather Station Started"); // Initialize temperature/pressure sensors try { mEnvironmentalSensorDriver = RainbowHat.createSensorDriver(); // Register the drivers with the framework mEnvironmentalSensorDriver.registerTemperatureSensor(); mEnvironmentalSensorDriver.registerPressureSensor(); Log.d(TAG, "Initialized I2C BMP280"); } catch (IOException e) { throw new RuntimeException("Error initializing BMP280", e); } ... } } Obtain a reference to the SensorManager system service in onCreate(): private SensorManager mSensorManager; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Log.d(TAG, "Weather Station Started"); mSensorManager = getSystemService(SensorManager.class); ... } Implement the updateTemperatureDisplay() method to write the temperature sensor value out to the segment display: private void updateTemperatureDisplay(float temperature) { if (mDisplay != null) { try { mDisplay.display(temperature); } catch (IOException e) { Log.e(TAG, "Error updating display", e); } } } Implement the updateBarometerDisplay() method to update the RGB LED strip based on the pressure sensor value: private void updateBarometerDisplay(float pressure) { if (mLedstrip != null) { try { int[] colors = RainbowUtil.getWeatherStripColors(pressure); mLedstrip.write(colors); } catch (IOException e) { Log.e(TAG, "Error updating ledstrip", e); } } } Declare a new SensorEventListener to receive events generated for new temperature and pressure values and call the two update methods you just implemented: // Callback when SensorManager delivers new data. private SensorEventListener mSensorEventListener = new SensorEventListener() { @Override public void onSensorChanged(SensorEvent event) { final float value = event.values[0]; if (event.sensor.getType() == Sensor.TYPE_AMBIENT_TEMPERATURE) { updateTemperatureDisplay(value); } if (event.sensor.getType() == Sensor.TYPE_PRESSURE) { updateBarometerDisplay(value); } } @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { Log.d(TAG, "accuracy changed: " + accuracy); } }; Register the listener for each sensor on the BMP280. User-space drivers register the sensors with the framework as dynamic sensors, so you can access them from the getDynamicSensorList() method: @Override protected void onStart() { super.onStart(); // Register the BMP280 temperature sensor Sensor temperature = mSensorManager .getDynamicSensorList(Sensor.TYPE_AMBIENT_TEMPERATURE).get(0); mSensorManager.registerListener(mSensorEventListener, temperature, SensorManager.SENSOR_DELAY_NORMAL); // Register the BMP280 pressure sensor Sensor pressure = mSensorManager .getDynamicSensorList(Sensor.TYPE_PRESSURE).get(0); mSensorManager.registerListener(mSensorEventListener, pressure, SensorManager.SENSOR_DELAY_NORMAL); } @Override protected void onStop() { super.onStop(); mSensorManager.unregisterListener(mSensorEventListener); } Add the following code to onDestroy() to close the driver connection: @Override protected void onDestroy() { super.onDestroy(); if (mEnvironmentalSensorDriver != null) { try { mEnvironmentalSensorDriver.close(); } catch (IOException e) { Log.e(TAG, "Error closing sensors", e); } finally { mEnvironmentalSensorDriver = null; } } ... } Deploy the app to the device by selecting Run → Run 'app' from the menu, or click the Run icon in the toolbar. Verify that the Rainbow HAT display now shows the current temperature in ℃ and the LEDs show a rainbow gauge that corresponds to the pressure reading. Congratulations! You've successfully built a weather station using the Rainbow HAT! Here are some things you can do to go deeper. Review the documentation for Google Cloud IoT. This service makes it easy for users to ingest data from connected devices and integrate them with the Google Cloud Platform. Can you enable your Weatherstation to publish the environmental sensor data to a Pub/Sub topic using MQTT? Read through the Peripheral I/O API Guides in the Android Things documentation to learn more about all the industry-standard interfaces that you can use to connect hardware to your application project. Experiment with the other peripherals included on the Rainbow HAT, like the piezo speaker and capacitive buttons. Explore the PWM speaker and other driver samples to get some experience with these elements.
https://codelabs.developers.google.com/codelabs/androidthings-weatherstation/index.html?index=..%2F..%2Findex
CC-MAIN-2019-43
en
refinedweb
Hi. Can anyone provide me with a minimum working example for mkl_ddnscsr? I have tried this so far #include <stdio.h> #include <stdlib.h> #include <mkl.h> int main(int argc, char *argv[]) { MKL_INT info; MKL_INT m = 3; //Number of rows of A MKL_INT n = 4; //Number of columns of A MKL_INT nnz = 6; //Number of non zero elements MKL_INT job[6] = {0,0,1,2,nnz,1}; double *Acsr = (double *) calloc(nnz, sizeof(double) ); MKL_INT *Aj = (MKL_INT *) calloc(nnz, sizeof(MKL_INT) ); MKL_INT *Ai = (MKL_INT *) calloc(m+1, sizeof(MKL_INT) ); double A[3][4] = {{1.,3.,0.,0.},{0.,0.,4.,0.},{2.,5.,0.,6.}}; mkl_ddnscsr ( job, &m, &n, A[0], &m, Acsr, Aj, Ai, &info); for (int i=0; i< nnz; i++) { if (Acsr[i] != 0) { printf( "column = %i, A = %fn", Aj[i], Acsr[i] ); } } for (int i=0; i< m+1; i++) { printf("Ai[%i] = %in", i, Ai[i]); } return 0; } But it returns these results column = 1, A = 1.000000 column = 2, A = 3.000000 column = 4, A = 4.000000 column = 1, A = 4.000000 column = 3, A = 2.000000 column = 4, A = 5.000000 Ai[0] = 1 Ai[1] = 3 Ai[2] = 4 Ai[3] = 7 If I play with the value for the lda I can almost get the correct result, however I believe this is as the manual suggests. I am on using Ubuntu 12.04 and Composer 2013.3.163 if that makes difference. Thanks Chris
https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/392856
CC-MAIN-2017-30
en
refinedweb
! My brother made it to the Borland conference this year and has provided me with some good information from Borland and Microsoft folks there. This is what I see as a beautiful future for scripting... :) First, the Microsoft.CSharp.Compiler class that allows you to compile C# code at runtime is NOT a good fit because of the way it is compartmentalized away from the compiling application. The best fit is "Reflection". At the bottom are some links I collected about it for your viewing pleasure. The short of it is that with Reflection we can discover types (ie classes, properties, methods, method parameter names and data types) all dynamically. This is *perfect* for code completion, etc. Also, you can call any method and pass parameters at runtime without ever having known about the type before. This is perfect for wrapped execution. The real kicker is that part of Reflection is IL (Intermediate Language) code generation on-the-fly. This means your running app can generate ILcode (like assembler for .NET platform) and have the .NET framework execute it! Which (in plain speech) means your compiled program can dynamically generate compiled code and execute it. So far I haven't seen any actual scripting projects to take advantage of this. But there are quite a few objects available in the Reflection namespace to ease the generation of IL code. You can generate complex types like classes dynamically too. Isn't that just so cool? It is *perfect* for a scripting engine and it runs at native .NET speeds! If you have any interest in pursuing this then please let me know. I have many ideas for ways of doing this and wonderful uses. Just start to imagine what this can mean for your apps. The first link gives some vision for what could be possible outside of scripting. Neat stuff. The project would probably need to be a whole new thing. I thought perhaps some of you guys would like to participate. -Mark E. PS> It would appear that this should theoretically work on mono as well. I saw some discussion and documentation about their implementation of things. ======================================================= Googled on "emit IL code" (without the quotes). Willibald Krenn wrote: [...] > Do you plan to use Delphi 8 (D.NET) or C# for implementing such a scripting > engine? I plan to use C#. Maximize the audience and contributors. Improve my marketable skills. Still be usable to D.NET developers. > And what syntax will the scripting language have? (DWS like, C# like?) I have a different idea. ;) I want to build it similar to the design of DWS. An object to represent a class, a method, a parameter, an expression, etc. Then you tell the object to "CompileYourself" and it will generate the IL similar to the high level way the expressions work. They will emit IL code to implement what they represent. Then, to address your question, I want it to be designed where the real stuff is done by the objects. Then, you can create a Delphi-syntax language and create the objects from that or a C#-syntax language or a VB or a C++, etc. To really reach the widest audience let people use the interface they want. If someone wants to build a Python based syntax then let them. If the parser is the front end to creating the objects then it doesn't matter. The objects do the work. Interested? PS> I'm sending a bit more to you off the list. Let me know if you don't get it. -Mark E. I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/dws/mailman/dws-developer/thread/[email protected]/
CC-MAIN-2017-30
en
refinedweb
from pylab import * from numpy import * from numpy.random import normal from scipy.optimize import fmin # parametric function, x is the independent variable # and c are the parameters. # it's a polynomial of degree 2 fp = lambda c, x: c[0]+c[1]*x+c[2]*x*x real_p = rand(3) # error function to minimize e = lambda p, x, y: (abs((fp(p,x)-y))).sum() # generating data with noise n = 30 x = linspace(0,1,n) y = fp(real_p,x) + normal(0,0.05,n) # fitting the data with fmin p0 = rand(3) # initial parameter value p = fmin(e, p0, args=(x,y)) print 'estimater parameters: ', p print 'real parameters: ', real_p xx = linspace(0,1,n*3) plot(x,y,'bo', xx,fp(real_p,xx),'g', xx, fp(p,xx),'r') show()The following figure will be showed, in green the original curve used to generate the noisy data, in blue the noisy data and in red the curve found in the minimization process: The parameters will be printed also: Optimization terminated successfully. Current function value: 0.861885 Iterations: 77 Function evaluations: 146 estimater parameters: [ 0.92504602 0.87328979 0.64051926] real parameters: [ 0.86284356 0.95994753 0.67643758] Thanks this was helpful. I've seen some pretty bad tutorials on how to use fmin so this makes me pretty happy. Hi this is very helpful and I have put some variation of it to good use, but I wanted to ask, is it possible to modify it so that it works when we deal with 2 or more independent variables, or should I stick to the 1-variable version? Thanks in advance. Of course, check out this post: Here I used the fmin to minimize a function of 4 variables.
http://glowingpython.blogspot.it/2011/05/curve-fitting-using-fmin.html
CC-MAIN-2017-30
en
refinedweb
Overview Atlassian SourceTree is a free Git and Mercurial client for Windows. Atlassian SourceTree is a free Git and Mercurial client for Mac. django-vicar - live vicariously django-vicar is a middleware class and a view for allowing superusers to impersonate other users. It's not even right to call this thing an app, it's one file and <100 lines. This is an itch-scratch thing, I'm tired of going to /stop/ with django-impersonate, and I have no interest in making the functionality I want work with it. Deliberately excluding: - any kind of search or listing - any authz beyond superuser - putting a User object in session - any kind of filtering in the middleware beyond the session data There is no urlconf. There is only one view. Integrate like so: from django_vicar import vicar # ...url(r'^vicar/(.+)$', vicar), # ... And by putting 'django_vicar.VicarMiddleware' somewhere in your MIDDLEWARE_CLASSES. It probably needs to go either before or after session middleware and/or authentication middleware, but I didn't really check, just put it wherever you'd put ImpersonateMiddleware. Similar Projects - django-impersonate -
https://bitbucket.org/adamkg/django-vicar/overview
CC-MAIN-2017-30
en
refinedweb
I want to fetch product data from Flipkart for which Flipkart provide Product Feed API. I wanted to know how to use the json or xml provided to fetch data for my site preferably in Django. Please explain explicitly as I have no idea on how to do this. Here is the link for flipkart API: Hope this helps. import urllib2 def your_function(): response = urllib2.urlopen("") json_data = response.read() return json_data # use this in any of your views to read json with product details
https://codedump.io/share/KupqJsuL1sMG/1/use-flipkart-api-in-django
CC-MAIN-2017-30
en
refinedweb
csEventHandlerRegistry Class ReferenceThe csEventHandlerRegistry maintains a global one-to-one mapping from strings to csHandlerIDs, and a one-to-(zero or one) mapping from csHandlerIDs to iEventHandler pointers. More... [Event handling] #include <csutil/eventhandlers.h> Detailed DescriptionThe. Implements iEventHandlerRegistry. Get a csHandlerID based upon some string. This should only ever be done to reference generic (non-instantiated) handler names or single-instance handlers. Implements iEventHandlerRegistry. Returns the handler registered for a csHandlerID (will be 0 if csHandlerID is a generic name, i.e., if (!csEventHandlerRegistry->IsInstance(id)). Implements iEventHandlerRegistry. Get the ID for a given event handler name. This should usually not be used, since it does not handle magic creation of ":pre" and ":post" signpost handlers or any other such bookkeeping magic. Implements iEventHandlerRegistry. Get the csHandlerID for a specified event handler, which provides its own name via the iEventHandler::GetInstanceName() method. Implements iEventHandlerRegistry. Returns the string name for a csHandlerID. Implements iEventHandlerRegistry. returns true if id is a handler instance (i.e., not a generic name). Implements iEventHandlerRegistry. returns true if instanceid is a handler instance, genericid is a generic instance, and instanceid is an instance of genericid in particular. Implements iEventHandlerRegistry. Register an event handler to obtain a handler ID. - Remarks: - Every call must be balanced with a call to ReleaseID() to ensure proper housekeeping. Otherwise, event handler instances may be leaking. Implements iEventHandlerRegistry. Used when an iEventHandler is destroyed to remove our reference. Implements iEventHandlerRegistry. Used when an iEventHandler is desroyed to remove our reference. Implements iEventHandlerRegistry. The documentation for this class was generated from the following file: - csutil/eventhandlers.h Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/classcsEventHandlerRegistry.html
CC-MAIN-2017-30
en
refinedweb
19 results of 19 Hi, I'm using scigraphica since quite a while (0.6 or so) for simple data visualisation - which works great :) Now I tried to do simple math on the data und ran into serious problems. Scigraphica dies each time I try to save my projects and dies when I try to import ASCII data (which it produced). It displays a memory allocation problem, then dies. If you would like the data.sg file or the *.dat file please let me know. A point for further development would be saving the formulas for a given colum, because otherwise the next time you open it all calculated values are lost. I use scigraphica-0.8.0-1mdk8_1 on a "nearly" standard Mandrake 8.1 (free download version) Bye the way keep going on the project please - It'll be a good alternative to origin :) best regards Marcus Menzel Hi, I have just tried installing scigraphica on a mandrake 8.2 bets system. The RPm installation went fine but on starting i get this message in the terminal; scigraphica: Symbol `PyFloat_Type' has different size in shared object, consider re-linking scigraphica: Symbol `PyString_Type' has different size in shared object, consider re-linking scigraphica: Symbol `PyLong_Type' has different size in shared object, consider re-linking scigraphica: Symbol `PyCObject_Type' has different size in shared object, consider re-linking scigraphica: Symbol `PyInt_Type' has different size in shared object, consider re-linking I then get the splash screen and a segmentaion fault windw. I have installed python 2.2, python-numeric all the gtk stuff and Scientific python. The only thing i can think of is thst scigraphica asks for libpython2.1.so.0.0 when it installs but as i have python 2.2 libpython2.1.so.0.0 is linked to libpython2.2.so.0.0 . The only other problem that i can see is that Scientific python installs into /usr/lib/python2.1 and not python2.2. I do nat want to uninstall python 2.2 if that is the problem so is there another way round it. Thanks Ryan I sent a message about our difficulties regarding libtermcap on SuSE distros compared to others and here's what they said: ---------- Forwarded message ---------- Date: Thu, 21 Feb 2002 17:10:00 +0100 From: Feedback <feedback@...> To: John Bland <shrike@...> Subject: Ticket [20020215990000273] Dear Mr Bland, thank you for your contribution. We always try to use the libncurses. Consequently we have to avoid mixing the includes or the libs of termcap and ncurses. Please, next time try while compiling: gcc -I /usr/include/termcap and while linking gcc -L /usr/lib/termcap to use the termcap libraries and includes. Best regards, Your SuSE Feedback Team ------------------------------------------------------------ SuSE GmbH, E-Mail: feedback@... Deutschherrnstr. 15-19 Web : 90429 Nuernberg ------------------------------------------------------------ On Thu, 21 Feb 2002, John Bland wrote: > > Hi, > > I'm a developer on the SciGraphica project and we consistently have a > problem with SuSE with regards to the termcap libraries. > > SuSE has the necessary libraries but places them in a seemingly unorthodox > place. In /usr/lib/termcap/ there is a symlink called libtermcap.so > pointing to ../libtermcap.so.2.0.8 whilst in /usr/lib/ there are > libtermcap.so.2 and libtermcap.so.2.0.8 but no libtermcap.so. > > Placing /usr/lib/termcap/ in /etc/ld.so.config doesn't appear to macke > much difference. Simply doing > > ln -s /usr/lib/libtermcap.so.2.0.8 /usr/lib/libtermcap.so > > fixes any problems we have and seems to solve a few problems with other > programs as well. > > Is /usr/lib/termcap/ part of LSB or something? It seems to be a SuSE > specific problem and is easily fixed by this one symlink. > > Cheers, > JB > > -- > John Bland M.Phys (Hons) AMInstP / \ PhD Student & Sys Admin > / \ Liverpool University > "We're not at home to Mr Cockup." -- Baldrick > > Jordi Jaen Pallares Development and documentation SuSE GmbH, Email: jordi@... Deutschherrnstr. 15-19, Phone: +49-911-74053-294 D-90429 Nuernberg, Germany Web: Hello all! I like to implement the interface for fileformat plugins. I have had a look at sg_plugin.c and sg_file_dialog.c and I think that it shouldn't be a big problem. If no one wants to do that task himself, I am going to start with the work. After that is finished, I can also release the first (and very limited) version of my Origin plugin. Bye, Thorsten To start off, SG seems to be quite a useable program, especially given the Python-scriptability. I've found graph-plotting programs to be uniformly a pain in the neck, so this faint praise is unusual from me :) That out of the way, on to my first gripe. It seems nobody is able to think up an autoscale algorithm that works. None of the interactive graph plotting software I've used to any great extent on unix -- Grace, Qwt, and now Scigraphica -- (can't remember what the windows one I used did) has a sane autoscale algorithm. IIRC even gnuplot, which is usually reasonably capable despite its nasty interface, gets this wrong. Is it really so hard?? OTOH, I can see it isn't as easy as it might at first appear. I know, I should really stop complaining and write it myself, but here are some ideas for anybody who feels like having a go at it. One thing that all the packages I've used seem to miss completely is that the border width (ie. (axis_max-axis_min)/2) and tick label positioning are two separate things. As a result, they only get one of them right. Scigraphica doesn't really get either of them right -- you can't autoscale with zero border as far as I can see, and the tick spacing is not rounded. Other packages (Grace) frequently give good tick positioning (tick spacing reasonably rounded, around the right number / spacing of ticks, and not looking too asymmetrical) but get the borders so hopelessly wrong as to be useless. Actually, Qwt (a widget set) doesn't too *too* bad a job, though the borders seem to be constrained to the tick marks, which is often too big. Some requirements: 1. Rounded scale -- an axis with a range of (-190293, 2932) may have perfect borders, but be unreadable 2. Choice between: 'zero border' 'no constraint', 'snap to ticks', 'snap to major ticks' for border spacing 3. Border width setting (in paper or graph units?) 4. Adjustable number of ticks 5. Adjustable tick spacing 6. Number of ticks on an axis, by default, should reflect the aspect ratio of the graph 7. Borders shouldn't be too asymmetrical -- eg. one tick right up against the left hand edge, and a big gap at the right hand edge of the axis Obviously, some of these things have to be variable for X and Y axes, or top / bottom / left / right. More obviously, not all of these choices are independent. That doesn't mean there shouldn't be control of them! I suppose what is needed is a merit function with adjustable parameters and constraints to suit individual taste, and then some means of optimising for the best overall scaling and tick positioning. Sounds like a linear programming problem ('linear programming' in the mathematical sense is a class of optimisation algorithms, and is not a kind of computer programming or linear algebra; in fact to be more correct I mean 'integer programming', I suppose). I expect there must be libaries out there to solve this kind of thing, but somebody has to figure out exactly what the problem is first, and what algorithm is needed to solve it. Maybe there's a simpler way to solve it, I don't know. SG is already reasonably user-friendly -- a hard thing to achieve -- and an autoscale that works is an important part of that. example integer programming application: SAL page on optimisation: (see lp_solve, for example) John Hi Alfonso, thank you for your nice words. I'm glad that you like the program, and specially, want to contribute :-) I think that we can use your skills in gtk for moving gtkextra to gtk-2.0. First we need to do some research about the new changes in gtk-2.0: Font handling using pango, the GObject library, the changes in the widget internals. Once we have an idea, we are going to start the move. In principle it should be easy, since we only need to touch a few lines of code in the widgets, except a few that could be problematic because of Pango. Let me know if you like the idea, and you can subscribe to the gtkextra mailing list to discuss a roadmap. If you have other things in mind, let me know. Thanks again!, Saludos!, <ADRIAN> On Thu, 14 Feb 2002, alba wrote: >. > > > _______________________________________________ > Scigraphica-devel mailing list > Scigraphica-devel@... > >. That's fine, take it easy. I fixed about 10 bugs, and I added a new feature: You can reflect the axes in 2d plots, i.e. put the origin or coordinates in the right or top of the plot. You need the new gtkextra to compile. I have to work a lilttle bit on the delaunay triangulization, before freezing gtkextra and switch to gtk-2.0 Saludos, <ADRIAN> Dude, download the new gtkextra-cvs ;-) Saludos, <ADRIAN> On Thu, 14 Feb 2002, R. Lahaye wrote: > >. > > _______________________________________________ > Scigraphica-devel mailing list > Scigraphica-devel@... > >. Adrian Feiguin wrote: > > It would be also good if you could take a look at the compilation problems > that some people have experienced. This is if you have time, of course ;-) Hi, I remember a couple of complaints about the previous release; I've tried to patch a few in present CVS. I can't really remember which one (sorry, I did not enter this into the ChangLog). Moreover, there has not been response whether my fixes actually dealt with the complaints; you may trace the threads in the mailinglist archive. Hmmm, so some autogen/make/compile bugs may still be around, some others may be fixed. We need more CVS users that test the current state of SG in CVS. I have to apologize for keeping a low profile a few more weeks. Life & work is rather busy and that may last until end of March. Regards, Rob. Conrad, I've fixed a number of bugs, and I assigned three to you. They are related to a tab delimited import problem and a crash setting formulas out of the visible range: bugs #497356,#497362,#498952 It would be also good if you could take a look at the compilation problems that some people have experienced. This is if you have time, of course ;-) I'm going to work on the others now. Thanks a lot, Saludos, <ADRIAN> Hi Laurent, I'm glad you like the program >. 1) Select a column 2) click on the right mouse button to open the popup menu. Click on column->set column value 2') You can save the previous steps just clicking on the "set values" button in the toolbar. 3) Enter a formula like col("C") = (col("A")-6)/2. Click "OK". Voila! Saludos, <ADRIAN> Wouaw ! I just discovered Scigraphica two days ago and it is exactly what I need. Thank you all for the work done. I am sorry to post this on a devel list but I don't know where to post for such a question.. Laurent. Debian 2.2r0 doesn't seem to have libart.h *even with libart-dev installed*. I hacked configure.in to deliberately break detection of libart so it would compile without it, but for some reason src/Makefile still had a -DWITH_LIBART, which I had to remove by hand. After that it seemed to work OK, but it eventually fails with gcc -g -O2 -I -I -DREADLINE_4 -I/usr/lib/glib/include -I/usr/X11R6/include= -I/usr/X11R6/include -DWITH_GDK_IMLIB -I/usr/lib/glib/include -I/usr/X11R6= /include -I/usr/lib/glib/include -I/usr/X11R6/include -I/usr/include -DNEED= _GNOMESUPPORT_H -I/usr/lib/gnome-libs/include -I/usr/lib/glib/include -I/us= r/X11R6/include -DWITH_GNOME -DWITH_GNOME_PRINT -I../zvt -I../../zvt -o s= cigraphica gtkpixmapmenuitem.o gtkplotart.o gtkplotgnome.o sg.o sg_arrange= _dialog.o sg_axis_dialog.o sg_clipboard.o sg_column_dialog.o sg_config.o sg= _convert_dialog.o sg_dataset.o sg_dataset_dialog.o sg_dialogs.o sg_edit_fun= ction_dialog.o sg_edit_data_dialog.o sg_edit_columns_dialog.o sg_edit_exp_d= ialog.o sg_edit_3d_dialog.o sg_ellipse_dialog.o sg_entry.o sg_file.o sg_fil= e_dialog.o sg_formula_dialog.o sg_frame_dialog.o sg_function_dialog.o sg_gr= ids_dialog.o sg_import_dialog.o sg_labels_dialog.o sg_layer.o sg_layer_cont= rol.o sg_layer_dialog.o sg_legends_dialog.o sg_line_dialog.o sg_logo_dialog= s.o sg_matrix_menu.o sg_matrix_convert.o sg_matrix_dialog.o sg_menu.o sg_mi= sc.o sg_misc_dialogs.o sg_new_data_dialog.o sg_page_dialog.o sg_planes_dial= og.o sg_plot.o sg_plot_file.o sg_plot_file_xml.o sg_plot_menu.o sg_plot_too= ls.o sg_plugin.o sg_preferences_dialog.o sg_project.o sg_project_menu.o sg_= project_file_xml.o sg_project_file_sax.o sg_project_rescue.o sg_project_aut= osave.o sg_rectangle_dialog.o sg_stock.o sg_style_dialog.o sg_text_dialog.o= sg_title_dialog.o sg_toggle_combos.o sg_toolbox.o sg_worksheet.o sg_worksh= eet_file.o sg_worksheet_file_ascii.o sg_worksheet_file_html.o sg_worksheet_= file_tex.o sg_worksheet_file_xml.o sg_worksheet_menu.o sg_worksheet_tools.o= sg_wrap.o sg_xy_formula_dialog.o -L./python python/libpint.a -L/usr/li= b -L/usr/X11R6/lib -lgtk -lgdk -rdynamic -lgmodule -lglib -ldl -lXi -lXext = -lX11 -lm -L/usr/local/lib -lgtk -lgdk -lgtkextra -lglib -lm -L/usr/lib -lx= ml -lz -L/usr/local/lib/python2.1//config -lpython2.1 -lreadlin= e -ltermcap -lcrypt -L/usr/lib -ltk8.2 -ltcl8.2 = -L/usr/X11R6/lib -lX11 -lncurses -ltermcap -lpanel -lncurses -L/usr/l= ocal/lib -lgdbm -L/usr/local/lib -lz -lpthread -ldl -l= util -lutil -L/usr/lib -lart_lgpl -lm -rdynamic -L/usr/lib -L/= usr/X11R6/lib -lgnomeprint -lgnomeui -lart_lgpl -lgdk_imlib -lSM -lICE -lgt= k -lgdk -lgmodule -lXi -lXext -lX11 -lgnome -lgnomesupport -lesd -laudiofil= e -lm -ldb -lglib -ldl -lxml -lz -L -L -lreadline -ltermcap -lncurses -L/us= r/lib -lgdk_imlib -L/usr/lib -L/usr/X11R6/lib -lgtk -lgdk -rdynamic -lgmodu= le -lglib -ldl -lXi -lXext -lX11 -lm -L/usr/lib -L/usr/X11R6/lib -lgtk -lgd= k -rdynamic -lgmodule -lglib -ldl -lXi -lXext -lX11 -lm -L/usr/local/lib -l= gtk -lgdk -lgtkextra -lglib -lm -rdynamic -L/usr/lib -L/usr/X11R6/lib -lgno= meui -lart_lgpl -lgdk_imlib -lSM -lICE -lgtk -lgdk -lgmodule -lXi -lXext -l= X11 -lgnome -lgnomesupport -lesd -laudiofile -lm -ldb -lglib -ldl -L/usr/li= b -lart_lgpl -lm -L../zvt -lsgvt gtkplotgnome.o: In function `init': /usr/local/src/scigraphica-0.8.0/src/gtkplotgnome.c:453: undefined referenc= e to `gnome_print_beginpage' collect2: ld returned 1 exit status make[3]: *** [scigraphica] Error 1 make[3]: Leaving directory `/home/bucket/src/scigraphica-0.8.0/src' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/home/bucket/src/scigraphica-0.8.0/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/bucket/src/scigraphica-0.8.0' I have gnome-print 0.10, libgnomeprint-2.0.0. The headers I have in /usr/include/libgnomeprint (or anywhere else, as far as I can see) don't define gnome_print_beginpage. I suppose I need a newer gnome-print. If so, is it easy to just compile a new gnome-print without recompiling large chunks of the rest of gnome? John Am Mittwoch, 6. Februar 2002 01:07 schrieb Jay L. de Halas: Hi Jay, one of my coworkers, Tim Schwebel, is dealing with a win32 port of Scigraphica. For details, please contact Tim.Schwebel@... At the moment, Tim has to work hard for one of our project so that the work on sga for win32 has to be handled with rather low priority. However, I am confident that Tim will pursue the work as soon as there will be time available. Regards Randolf > Is there anyone developing a Win32/98 version out there ? If so, could > you point me in the right direction. So far I have had no luck in > finding a win32/98 version. > > Best Regards > > Jay de Halas > > > _______________________________________________ > Scigraphica-devel mailing list > Scigraphica-devel@... > -- +--------------------------------------------------------+ | Dr. Randolf Mock | | Siemens AG | | Dept. ZT MS 2 | | Otto-Hahn-Ring 6 | | 81730 Muenchen | | F.R.G. | | | | Phone: (+49 89)636-40052 | | | +--------------------------------------------------------+ ...brought to you by KMail () Adrian Feiguin <feiguin@...> writes: | Is anybody using SG on Solaris? I look into it every second month or so when I have the time... | Have you found any trouble installing it? yes, a few, but I suspect this is mostly due to the local setup. We have a mix of old gtk, gnome, and other GNU tools on the system, so this confuses autoconf a lot. Also I have Numeric python only in my home directory, which causes problems. This is a peek into my «log» -- what I need to do to get it up and running from CVS. (in December 2001 that is...) (I use recent versions of gcc, 3.01 probably. I think I tried cc as well along time ago, but got into problems and abandoned it after I succeeded with gcc) 1) build gtkextra. this is normally problem free. 2) autogen.sh comment out the check for FreeBSD as this does not work in Solaris. to circumvent the fucked up local system I gather all needed macros in my home directory and then say ACLOCAL="aclocal --acdir=/opt/gnome/share/aclocal -I${HOME}/share/aclocal-temp -I macros" #AUTOCONF=autoconf AUTOCONF="autoconf --macrodir=$(HOME)/share/autoconf" by doing this there is no more complaints about macros from autogen.sh, so things might be ok... 3) I then go directly into the configure script and comment out the check for readline, I have readline, but it is never found no matter what options I give, I dont have the patience to debug it... 4) now I run configure configure --prefix=$HOME --disable-gtktest --disable-gtkextratest --disable-imlibtest --with-readline-path=$HOME/lib 5) enter config.status and add -R/opt/gnome/lib -R$(HOME)/lib everywhere where -L... is used. This compiles in the paths to the dynamic libraries into the executable. Setting LD_LIBRARY_PATH can lead to a very messy system. the local setup also cannot find the headers for Numeric python in my home directory, but it actually finds an old version located at a really weird placce. I replace all these includes with the correct -I$(HOME)/include/python1.5/Numeric... then rerun config.status after this, gmake, gmake install and everything seems to work.... Helge Is there anyone developing a Win32/98 version out there ? If so, could you point me in the right direction. So far I have had no luck in finding a win32/98 version. Best Regards Jay de Halas Is anybody using SG on Solaris? Have you found any trouble installing it? We need your feedback! Thanks! <ADRIAN> I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/scigraphica/mailman/scigraphica-devel/?viewmonth=200202
CC-MAIN-2017-30
en
refinedweb
omanMembers Content count1 Joined Last visited Community Reputation104 Neutral About Aeroman - RankNewbie Aeroman posted a topic in For BeginnersHey, I'm not really new to game development in general- mostly the programming behind it. Anyway I'm having trouble loading more than 1 image and using getmousestate-but mostly just displaying multiple images. Here I'll post my code: [source lang="cpp"]#ifdef __cplusplus #include <cstdlib> #else #include <stdlib.h> #endif #ifdef __APPLE__ #include <SDL/SDL.h> #else #include <SDL.h> #endif int main ( int argc, char** argv ) { if ( SDL_Init( SDL_INIT_VIDEO ) < 0 ) { printf( "Unable to init SDL: %s\n", SDL_GetError() ); return 1; } atexit(SDL_Quit); SDL_Surface* screen = SDL_SetVideoMode(1300, 630, 16, SDL_HWSURFACE|SDL_DOUBLEBUF); if ( !screen ) { printf("Unable to set 640x480 video: %s\n", SDL_GetError()); return 1; } SDL_WM_SetCaption("Z Soft", NULL); // load an image SDL_Surface* bmp = SDL_LoadBMP("TaskBar.bmp"); if (!bmp) { printf("Unable to load bitmap: %s\n", SDL_GetError()); return 1; } SDL_Rect dstrect; dstrect.x = (screen->w - bmp->w) / 2; dstrect.y = (screen->h - bmp->h) / 14; //This does; if (event.key.keysym.sym == SDLK_UP); break; } } // end switch } // end of message processing // clear screen SDL_FillRect(screen, 0, SDL_MapRGB(screen->format, 0, 0, 0)); // draw bitmap SDL_BlitSurface(bmp, 0, screen, &dstrect); // DRAWING ENDS HERE SDL_Flip(screen); } // end main loop // free loaded bitmap SDL_FreeSurface(bmp); printf("Finally you closed it!\n"); return 0; } [/source] So can anyone help to pinpoint the problem?
https://www.gamedev.net/profile/200292-aeroman/?tab=classifieds
CC-MAIN-2017-30
en
refinedweb
- The The DefaultMutableTreeNode Class The first thing we're going to do with this class is to show you how to use it to build a simple tree from scratch. DefaultMutableTreeNode has three constructors: public DefaultMutableTreeNode() public DefaultMutableTreeNode(Object userObject) public DefaultMutableTreeNode(Object userObject, boolean allowsChildren) The first constructor creates a node with no associated user object; you can associate one with the node later using the setUserObject method. The other two connect the node to the user object that you supply. The second constructor creates a node to which you can attach children, while the third can be used to specify that child nodes cannot be attached by supplying the third argument as false. Using DefaultMutableTreeNode, you can create nodes for the root and for all of the data you want to represent in the tree, but how do you link them together? You could use the insert method that we saw above, but it is simpler to use the DefaultMutableTreeNode add method: public void add(MutableTreeNode child); This method adds the given node as a child of the node against which it is invoked and at the end of the parent's list of children. By using this method, you avoid having to keep track of how many children the parent has. This method, together with the constructors, gives us all you need to create a workable tree. To begin to create a tree, you need a root node: DefaultMutableTreeNode rootNode = new DefaultMutableTreeNode(); Below the root node, two more nodes are going to be added, one to hold details of the Apollo lunar flights, the other with information on the manned Skylab missions. These two nodes will be given meaningful text labels: DefaultMutableTreeNode apolloNode = new DefaultMutableTreeNode("Apollo"); DefaultMutableTreeNode skylabNode = new DefaultMutableTreeNode("Skylab"); The nodes are then added directly beneath the root node: rootNode.add(apolloNode); rootNode.add(skylabNode); Under each of these nodes, a further node will be added for each mission and beneath each of these a leaf node for each crew member. There's an implementation of this in the example programs that you can run using the command: java JFCBook.Chapter10.TreeExample1 The result of running this example is shown in Figure 108. Figure 108 A tree built using DefaultMutableTreeNodes. This program shows a root folder with no associated label and nodes labeled Apollo and Skylab. Clicking on the expansion icons of either of these opens it to show the numbered missions, and clicking on any of these shows the crew for that flight. Let's look at an extract from the source of this example: import javax.swing.*; import javax.swing.tree.*; public class TreeExample1 extends JTree { public TreeExample1() { DefaultMutableTreeNode rootNode = new DefaultMutableTreeNode(); DefaultMutableTreeNode apolloNode = new DefaultMutableTreeNode("Apollo"); rootNode.add(apolloNode); DefaultMutableTreeNode skylabNode = new DefaultMutableTreeNode("Skylab"); rootNode.add(skylabNode); // CODE OMITTED this.setModel(new DefaultTreeModel(rootNode)); } public static void main(String[] args) { JFrame f = new JFrame("Tree Example 1"); TreeExample1 t = new TreeExample1(); t.putClientProperty("JTree.lineStyle", "Angled"); t.expandRow(0); f.getContentPane().add(new JScrollPane(t)); f.setSize(300, 300); f.setVisible(true); } } This class is defined as an extension of JTree, which allows the creation of its data to be encapsulated within it. The root node and all of the child nodes are created and a tree structure is built from the nodes as described earlier. The JTree needs a data model in order to display anything, so the last step of the constructor is to install a model that contains the structure that has just been created: this.setModel(new DefaultTreeModel(rootNode)); This creates a new DefaultTreeModel and initializes it with our root node, then uses the JTree setModel method to associate the data model with the tree. Since our class is derived from JTree, its default constructor will have been invoked at the start of our constructor. As noted earlier, this creates a tree with a model containing dummy data. When setModel is called at the end of the constructor, this data is overwritten with the real data. Another way to create a JTree is to directly pass it the root node. If you use this method, it creates a DefaultTreeModel of its own and wraps it around the node that you pass to its constructor. Here's a short example of that: DefaultMutableTreeNode rootNode = new DefaultMutableTreeNode(); DefaultMutableTreeNode apolloNode = new DefaultMutableTreeNode("Apollo"); DefaultMutableTreeNode skylabNode = new DefaultMutableTreeNode("Skylab"); rootNode.add(apolloNode); rootnode.add(skylabNode); JTree t = new JTree(rootNode); If you look at the main method in the code extract shown above, you'll notice the following line after the tree was created: t.expandRow(0); This line uses the expandRow method to ensure that row 0 of the tree is expanded to display the children that the node on that row contains. In fact, this line is redundant in this example because the root node is expanded by default. You can force a node to be shown in an unexpanded state by calling the collapseRow method, which also requires the index of the row within the tree. We'll say more about the methods that can be used to expand and collapse parts of the tree later in this chapter. Apart from when you create it, the JTree control doesn't deal with nodes directly. Instead, you can address items in the tree and obtain information about them using either their TreePath or their row number. Let's look at the row number first. The row number refers to the number of the row on the screen at which the node in question appears. There is only one node ever on any row, so specifying the row identifies a node without any ambiguity. Furthermore, provided it's actually displayed, row 0 is always occupied by the root node. The problem with using row numbers is that the row numbers for all of the nodes apart from the root node change as nodes are opened or closed. When you start TreeExample1, the root node is on row 0, the "Apollo" node on row 1, and the "Skylab" node occupies row 2. However, if you click on the expansion icon for "Apollo," the "Skylab" node moves downward and, in this case, becomes row number 9, because the "Apollo" node opens to show seven child nodes, which will occupy rows 2 through 8. Because keeping track of row numbers is not very convenient, it is more usual to address the content of a tree using TreePath objects.
http://www.informit.com/articles/article.aspx?p=26327&seqNum=6
CC-MAIN-2017-30
en
refinedweb
I’m trying to replicate the methodology from this article, 538 Post about Most Repetitive Phrases, in which the author mined US presidential debate transcripts to determine the most repetitive phrases for each candidate. I'm trying to implement this methodology with another dataset in R with the tm prune_substrings() def prune_substrings(tfidf_dicts, prune_thru=1000): pruned = tfidf_dicts for candidate in range(len(candidates)): # growing list of n-grams in list form so_far = [] ngrams_sorted = sorted(tfidf_dicts[candidate].items(), key=operator.itemgetter(1), reverse=True)[:prune_thru] for ngram in ngrams_sorted: # contained in a previous aka 'better' phrase for better_ngram in so_far: if overlap(list(better_ngram), list(ngram[0])): #print "PRUNING!! " #print list(better_ngram) #print list(ngram[0]) pruned[candidate][ngram[0]] = 0 # not contained, so add to so_far to prevent future subphrases else: so_far += [list(ngram[0])] return pruned tfidf_dicts trump.tfidf.dict = {'we don't win': 83.2, 'you have to': 72.8, ... } tfidf_dicts = {trump.tfidf.dict, rubio.tfidf.dict, etc } prune_substrings else if A. create list : pruned as tfidf_dicts; a list of tfidf dicts for each candidate B loop through each candidate: - so_far = start an empty list of ngrams gone through so so_far - ngrams_sorted = sorted member's tf-idf dict from smallest to biggest - loop through each ngram in sorted - loop through each better_ngram in so_far - IF overlap b/w (below) == TRUE: - better_ngram (from so_far) and - ngram (from ngrams_sorted) - THEN zero out tf-idf for ngram - ELSE if (WHAT?!?) - add ngram to list, so_far C. return pruned, i.e. list of unique ngrams sorted in order 4 months later but here's my solution. I'm sure there is a more efficient solution, but for my purposes, it worked. The pythonic for-else doesn't translate to R. So the steps are different. nngrams. t, where each element of the list is a logical vector of length nthat says whether ngram in question overlaps all other ngrams (but fix 1:x to be false automatically) tinto a table, t2 t2row sum is zero set elements 1:n to FALSE (i.e. no overlap) Ouala! #' GetPrunedList #' #' takes a word freq df with columns Words and LenNorm, returns df of nonoverlapping strings GetPrunedList <- function(wordfreqdf, prune_thru = 100) { #take only first n items in list tmp <- head(wordfreqdf, n = prune_thru) %>% select(ngrams = Words, tfidfXlength = LenNorm) #for each ngram in list: t <- (lapply(1:nrow(tmp), function(x) { #find overlap between ngram and all items in list (overlap = TRUE) idx <- overlap(tmp[x, "ngrams"], tmp$ngrams) #set overlap as false for itself and higher-scoring ngrams idx[1:x] <- FALSE idx })) #bind each ngram's overlap vector together to make a matrix t2 <- do.call(cbind, t) #find rows(i.e. ngrams) that do not overlap with those below idx <- rowSums(t2) == 0 pruned <- tmp[idx,] rownames(pruned) <- NULL pruned } #' overlap #' OBJ: takes two ngrams (as strings) and to see if they overlap #' INPUT: a,b ngrams as strings #' OUTPUT: TRUE if overlap overlap <- function(a, b) { max_overlap <- min(3, CountWords(a), CountWords(b)) a.beg <- word(a, start = 1L, end = max_overlap) a.end <- word(a, start = -max_overlap, end = -1L) b.beg <- word(b, start = 1L, end = max_overlap) b.end <- word(b, start = -max_overlap, end = -1L) # b contains a's beginning w <- str_detect(b, coll(a.beg, TRUE)) # b contains a's end x <- str_detect(b, coll(a.end, TRUE)) # a contains b's beginning y <- str_detect(a, coll(b.beg, TRUE)) # a contains b's end z <- str_detect(a, coll(b.end, TRUE)) #return TRUE if any of above are true (w | x | y | z) }
https://codedump.io/share/TuRR6ZFmI9mr/1/understanding-another39s-text-mining-function-that-removes-similar-strings
CC-MAIN-2017-30
en
refinedweb