text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Adding UCM as a search source in Windows Explorer By Kyle Hatlestad on Feb 10, 2011 A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches to external sources such as Google, Flickr, YouTube, etc right from within Explorer. While we do have the Desktop Integration Suite which offers searching within Explorer, I thought it would be interesting to look into this method which would not require any client software to implement. Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is a .osdx file which is an OpenSearch Description document. It describes the search provider you are hooking up to along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set. So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found the RSS Feeds works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might looks like: %3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat= Universal&SearchProviders=server& Now you want to create a new text file and start out with this information: <?xml version="1.0" encoding="UTF-8"?><OpenSearchDescription xmlns="" xmlns: <ShortName></ShortName> <Description></Description> <Url type="application/rss+xml" template=""/> <Url type="text/html" template=""/> </OpenSearchDescription> Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing except you want to copy the UCM search results URL from the page of results. That URL will look something like: &ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection= &MiniSearchText=oracle Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension. The completed file should look like: <?xml version="1.0" encoding="UTF-8"?> <OpenSearchDescription xmlns="" xmlns: <ShortName>Universal Content Management</ShortName> <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description> <Url type="application/rss+xml" template="{searchTerms}%3C%2fqsch%3E+%29&SortField=dInDate&SortOrder=Desc&ResultCount=200&SearchQueryFormat=Universal"/> <Url type="text/html" template="{searchTerms}%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText={searchTerms}"/> </OpenSearchDescription> After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add it to the Searches folder in your user folder as well as your Favorites. Now just click on the search icon and in the upper right search box, enter your term. As you are typing, it begins executing searches and the results will come back in Explorer. Now when you double-click on an item, it will try and download the web viewable for viewing. You also have the ability to save the search, just as you would in UCM. And there is a link to Search On Website which will launch your browser and go directly to the search results page there. And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnail of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace 'xmlns:media="' to the rss definition: <rss version="2.0" xmlns:dc="" xmlns:sy="" xmlns: Next, in the rss_channel_item_with_thumb include, below the closing image element, add this element: </images> <media:thumbnail <description> This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started. *Note: This post also applies to Universal Records Management (URM). Posted by Al on February 11, 2011 at 06:30 AM CST # Posted by Jonathan Hult on February 11, 2011 at 01:27 PM CST # Posted by anika on March 14, 2011 at 03:08 AM CDT # Posted by kyle.hatlestad on March 22, 2011 at 11:28 PM CDT # Posted by Ron on April 29, 2011 at 03:39 AM CDT # Posted by kyle.hatlestad on April 29, 2011 at 03:57 AM CDT # *According to Microsoft, federated search supports single sign-on through the desktop (NTLM or Kerberos for example) but not forms based authentication.* Wonder if you changed the URL used to connect to Content Server to include the "_dav" port instead of the straight /cs/ entry? There's a Windows tweak you'd probably have to add to the registry to allow the use of basic authentication, but it may be worth a shot. Posted by William Phelps on March 06, 2012 at 10:12 AM CST #
https://blogs.oracle.com/kyle/entry/ucm_search_source_windows_explorer
CC-MAIN-2015-32
en
refinedweb
DynamicObject.TryConvert Method Provides implementation for type conversion operations. Classes derived from the DynamicObject class can override this method to specify dynamic behavior for operations that convert an object from one type to another. Namespace: System.DynamicNamespace: System.Dynamic Assemblies: System.Dynamic.Runtime (in System.Dynamic.Runtime.dll) System.Core (in System.Core.dll) Parameters - binder - Type: System.Dynamic.ConvertBinder Provides information about the conversion operation. The binder.Type property provides the type to which the object must be converted. For example, for the statement (String)sampleObject in C# (CType(sampleObject, Type) in Visual Basic), where sampleObject is an instance of the class derived from the DynamicObject class, binder.Type returns the String type. The binder.Explicit property provides information about the kind of conversion that occurs. It returns true for explicit conversion and false for implicit conversion. - result - Type: System.Object The result of the type conversion operation. a type conversion should be performed for a dynamic object. When the method is not overridden, the run-time binder of the language determines the behavior. (In most cases, a language-specific run-time exception is thrown.) In C#, if this method is overridden, it is automatically invoked when you have an explicit or implicit conversion, as shown in the code example below. In Visual Basic, only explicit conversion is supported. If you override this method, you call it by using the CTypeDynamic or CTypeDynamic functions. Assume that you need a data structure to store textual and numeric representations of numbers, and you want to define conversions of this data structure to strings and integers. The following code example demonstrates the DynamicNumber class, which is derived from the DynamicObject class. DynamicNumber overrides the TryConvert method to enable type conversion. It also overrides the TrySetMember and TryGetMember methods to enable access to the data elements. In this example, only conversion to strings and integers is supported. If you try to convert an object to any other type, a run-time exception is thrown. // The class derived from DynamicObject. public class DynamicNumber :; } // Converting an object to a specified type. public override bool TryConvert( ConvertBinder binder, out object result) { // Converting to string. if (binder.Type == typeof(String)) { result = dictionary["Textual"]; return true; } // Converting to integer. if (binder.Type == typeof(int)) { result = dictionary["Numeric"]; return true; } // In case of any other type, the binder // attempts to perform the conversion itself. // In most cases, a run-time exception is thrown. return base.TryConvert(binder, out result); } } class Program { static void Test(string[] args) { // Creating the first dynamic number. dynamic number = new DynamicNumber(); // Creating properties and setting their values // for the dynamic number. // The TrySetMember method is called. number.Textual = "One"; number.Numeric = 1; // Implicit conversion to integer. int testImplicit = number; // Explicit conversion to string. string testExplicit = (String)number; Console.WriteLine(testImplicit); Console.WriteLine(testExplicit); // The following statement produces a run-time exception // because the conversion to double is not implemented. // double test = number; } } // This example has the following output: // 1 // One
https://msdn.microsoft.com/en-us/library/system.dynamic.dynamicobject.tryconvert
CC-MAIN-2015-32
en
refinedweb
TAG See also: Agenda, IRC log regrets NDW DO to arrive later. no news re ER <scribe> Scribe: DanC upcoming scribes... <Vincent> Scribe list: NDW, DC, ER, RF, NM, DO, HT VQ: comments on the agenda? [none just now] Date of Next telcon? 7 June conflicts with AC meeting HT not available 7 Jun <timbl> I would not be there next meeting seems to be ftf in Cambridge RESOLVED to cancel 7 Jun telcon; meet next in Cambridge RESOLVED to approve 3 May minutes and 10 May minutes. (item 2 AC prep deferred pending DO's arrival) (weird... pointer gone bad) DanC: I asked if it was OK to drop "DanC to draft comment about splitting fn:escape-uri into separate" from 12 Apr... ... relates to ftf prep; I hope to discuss XQuery namespaces VQ: yes... speaking of which, I'm a bit behind on our ftf agenda; any feedback would be best in the next day or two DanC: I suppose we have enough overlap with XQuery/XPath, with HT and Norm... do they need any heads-up? HT: not really DanC: TimBL, do you still think fn:escape-uri needs splitting? TimBL: well, yes, different task... one of them is invertible, the other is not <timbl> TimBL: Yes, I do - into0 one ifnormation-losinga nd one reversible function. VQ: merits ftf time? HT: yes, but cap at 30min VQ: ok. DanC fails to withdraw his action. it continues. NM: pls make the ftf agenda have good background pointers; danc points out a broken link VQ: will do looking at NDW to work with HT, DO on namespaceState [recorded in] HT: no progress; sorry. looking at Tim to provide a draft of new namespace policy doc () and start discussion on www-tag [recorded in] TBL: I discussed this internally a bit, I think... ... it still has the "note" in it [that shouldn't be there] ... I should follow that up, yes. action continues. looking at NDW to take GRDDL/RDDL discussion to www-tag to solicit feedback on directions for namespaceDocument-8 [recorded in] VQ: I don't see progress there. VQ: I gather NDW has made some progress on this... made a list. looking at Henry to monitor [RFC3023bis wrt fragmentInXML-28] and bring back up when time is appropriate. [recorded in] HT: I've made some progress, talking with various people. ... the process is kinda complicated. (hmm... this relates to ) HT's action is done. looking at: Noah to own draft skeleton of SchemeProtocols-49 finding and send around for comments. [recorded in] NM: things above this on my todo list are done-ish, and I've started on it... ... I see some difference of opinion ... if you have input, now would be a good time to send it to me (via www-tag) ... it might merit ftf time HT: I've talked with NM about this a bit... it's subtle and complex, and yes, it does seem to merit ftf time NM motivates the issue to the point where TBL is tempted to discuss in substance... VQ is convinced it merits ftf time. <Zakim> DanC, you wanted to swap in unanswered mail from HT making progress on httpRange-14 -- yet another suggestion <ht> DanC: dc:title is the URI that's mentioned in the SWBPG message to us <ht> It's a hashless URI for a non-information-resource, i.e. an RDF property <ht> But you don't get a 200 if you try to retrieve it. <ht> you get a redirect. . . They're evidently sensitive to claiming dc:title has representations. So a hashless URI is more trouble when it comes to publishing in that way. If they didn't set up a redirect, a 200 from a hashless URI is a claim that the web page is identical to the RDF property, which causes trouble for some consumers. <ht> DanC: When asked how to choose/publish RDF properties, I say -- pick a part of webspace, divide it up, slap a hash on the end, that's your name, then put something useful at the URI w/o the # <ht> NM: [missed the question] <ht> DanC: leads to confusion about e.g. 'author' assertions about that property vs. 'author' assertions about the document describing it <ht> NM: Indeed my concern was about 200 codes NM: so far we've talked about dividing between InformationResources and others... ... so if I get a 200 response for /noah , that seems kinda fishy, since I didn't really contact Noah, but rather a proxy for [or description of] Noah. ... [missed some...] but consider { ?SOMETHING measures:wieghtInLbs 200 } ... <Zakim> ht, you wanted to ask what you _get_ with your 200 NM: consider an actual computer... ... that responds to HTTP GETs about itself ... in the case of a computer, though it's clearly not an InformationResource, the 200 OK response doesn't seem to introduce ambiguity <ht> 200 for dc:title amounts to identifying the property with the page, which is a realistic confusion <ht> [that was DanC] <ht> DanC: 200 for computer is not confusing, because everything true about the computer is true about [what]???? <Zakim> timbl, you wanted to say that a computer is not an information resource, 200 would be innapropriate. TBL: to me, it's quite clear: the computer is not an information resource, and hence a hashless http URI for it, and a 200 OK response, is inappropriate. NM: ok, so this conversation confirms that there are a couple ways to look at this which are each internally consistent... <ht> Towers of abstraction are a long-standing problem for AI/Knowledge Representation where HT wrote "not confusing" I meant to say "not formally contradictory". I do think it's confusing. [missed some...] <ht> Right, Roy favours the "far context" approach to disambiguation, i.e. information about the RDF property of the triple in which the URI appears NM: what about documents about documents? TimBL: sure... <a> and <b>. <a#foo> might denote <b>. <ht> "far context" is from my initial message DC: as to "OK -- why do we need or want to maintain that notion of identity across the SemWeb/OFWeb boundary?" I think webarch speaks to the value of a global space. I'm somewhat conflicted about this; I wonder if the principle has limitations. TBL: [missed] NM: this is an easy one for me, the traditional Metcalf/economy-of-scale arguments convince me. <Zakim> ht, you wanted to ask about the history HT: in some histories of RDF, RDF statements were metadata, i.e. data about documents. ... nowadays, that's less emphasized, and RDF statements are more about things in the world... biotech and such... ... in the "RDF is for metadata" world, yes, it's nutso not to take the identifier spaces the same... <Vincent> MarkN is Dave <timbl> TBL: We have written about the importance of an unambiguous identifier throughout the OFWweb, and the semantic web depends in it throughout the SemWeb. We could, yes, have an architecture in which the two were separated: the same URI string would identifying different things as a OFURI and as a SWURI. That would mean putting a membrane between the two worlds, never mixing them. [I think this would be a major drawback and very expensive] HT: but it's less obvious when you get to lifesci etc. ... have I got the history right? TBL: in a sense; to me, RDF was always a generic thing, but the initial motivation and funding was metadata. So yes, the "center of gravity" has shifted. <ht> Thanks, that helps <noah> From AWWW: Software developers should expect that sharing URIs across applications will be useful, even if that utility is not initially evident.webarch/#identification <timbl> But remember that pre RDF, there was MCF and various KR things which were more general KR oriented. <noah> I actually believe this. <noah> This suggests that SemWeb and OFWeb should share an identification space DO: what are the logistics of creating AC slide presentations? <noah> I seem to remember that Chris Lilley did this quite regularly? (I have internal mail saying says how to do AC presentation materials) HT: if you can make vanilla HTML, with one h2 per slide, I can help do the rest... we have a CSS+javascript thingy DO: umm... how to set/meet slide review expectations? DC: I'm happy to delegate to DO+VQ <noah> +1 I don't need to review unless someone wants help <timbl> +1 VQ: ok, DO will send a draft to [email protected] and folks can send comments DO: I expect to be able to make a draft toninght or tomorrow... I'm travelling... VQ: so we'll wrap up and get them to Ian by the end of this week DO: I don't have [ ] in front of me... <ht> I like the idea of giving some time to the binary and XRI stuff DO: how much time to spend on external communications e.g. XRI? VQ: let's see... we have 45 minutes, so there seems to be plenty of time <ht> We got good feedback on our binary message [good work Ed and Noah!] <noah> Thanks. yes, talk to the AC about XRI and XBC NM: re XBC, note there's been discussion on member-xml-binary ... I hope folks are happy with what I sent there. <noah> a msg I wrote: In messages in the thread starting at [1], the question is raised as to whether the TAG is asking that the benefits of binary XML be quantified before or after the chartering of a new workgroup. Though this is not an official TAG communication, I think I am accurately conveying the sense of the TAG on this question. Specifically, we believe that the TAG should emphasize technical analysis in its work, and that where possible we should leave process decisions to others. See for example the discussion of Binary XML in the (as yet unapproved) minutes of our meeting of 10 May [2], in which Dan Connolly quotes from the TAG charter [3]... <ht> Noah, I thought your reply was well-judged <noah> Thanks. DO: FYI, I've requested a lightning talk so that I can explicitly put on my BEA hat to speak of the XML binary stuff. ... it's traditional to ask questions to the AC. continue that tradition? TBL: I'm not inclined to ask the AC how the TAG should work... <ht> That reminds me -- DO should say somethign about the education material stuff DanC: let's ask the AC "how have you used the webarch doc? not at all? read it yourself? internal training?" DO: good idea. ... slides on XRI, XBC, questions, educational stuff. something like that. ADJOURN. for 2 weeks, meet next in Cambridge.
http://www.w3.org/2005/05/31-tagmem-minutes
CC-MAIN-2015-32
en
refinedweb
In this program I have to write a program that has to take the inputs of a file and output the average of the sum of numbers. My question is regarding how to average the numbers by adding the total numbers in the file. Would I have to do a count? Code: #include <fstream>#include <iostream> int main() { using namespace std; ifstream in_stream; cout << "I am going to take the numbers from the input file and average them." << endl; in_stream.open ( "input.dat" ); if ( in_stream.fail( ) ) { cout << "Please check if the file is saved properly. It could not open." << endl; } double first_number, next_number, total_numbers; int count = 0; in_stream >> first_number >> next_number >> total_numbers; first_number = 0; next_number = first_number + next_number; cout << "The average of the numbers is "; cout << ( first_number + next_number ) / total_numbers << endl; in_stream.close (); cout << "The program will now close." << endl; }
http://cboard.cprogramming.com/cplusplus-programming/142530-cplusplus-file-question-printable-thread.html
CC-MAIN-2015-32
en
refinedweb
Decision Tree - Description - Advantages - Disadvantages - Training Data Format - Example Code - Code & Resources - Documentation Description This class implements a basic Decision Tree classifier. Decision Trees are conceptually simple classifiers that work well on even complex classification tasks. Decision Trees partition the feature space into a set of rectangular regions, classifying a new datum by finding which region it belongs to. The Decision Tree algorithm is part of the GRT classification modules. Advantages The Decision Tree algorithm is a good algorithm to use for the classification of static postures and non-temporal pattern recognition. The main advantage of a Decision Tree is that the model is particularly fast at classifying new input samples. The GRT Decision Tree algorithm enables you to define your own custom Decision Tree Nodes (the logic/model used to define how a Decision Tree should spilt the data at each node in the tree), giving you a lot of flexibility to which classification tasks the Decision Tree is applied to. Disadvantages The main limitation of the Decision Tree algorithm is that very large models will frequently overfit the training data. To prevent overfitting, you should experiment with the maximum depth of the Decision Tree, or you can use a GRT Random Forest algorithm to combine multiple Decision Trees together into one ensemble classifier. Training Data Format You should use the ClassificationData data structure to train the Decision Tree classifier. Example Code This examples demonstrates how to initialize, train, and use the Decision Tree algorithm for classification. The example loads the data shown in the image below and uses this to train a Decision Tree model. DecisionTree This examples demonstrates how to initialize, train, and use the DecisionTree algorithm for classification. Decision Trees are conceptually simple classifiers that work well on even complex classification tasks. Decision Trees partition the feature space into a set of rectangular regions, classifying a new datum by finding which region it belongs to. In this example we create an instance of a DecisionTree algorithm and then train a model using some pre-recorded training data. The trained DecisionTree model is then used to predict the class label of some test data. This example shows you how to: - Create and initialize the DecisionTree algorithm - Load some ClassificationData from a file and partition the training data into a training dataset and a test dataset - Train a DecisionTree model using the training dataset - Test the DecisionTree model using the test dataset - Manually compute the accuracy of the classifier */ #include "GRT.h" using namespace GRT; int main(int argc, const char * argv[]) { //Create a new DecisionTree instance DecisionTree dTree; //Set the node that the DecisionTree will use - different nodes may result in different decision boundaries //and some nodes may provide better accuracy than others on specific classification tasks //The current node options are: //- DecisionTreeClusterNode //- DecisionTreeThresholdNode dTree.setDecisionTreeNode( DecisionTreeClusterNode() ); //Set the number of steps that will be used to choose the best splitting values //More steps will give you a better model, but will take longer to train dTree.setNumSplittingSteps( 1000 ); //Set the maximum depth of the tree dTree.setMaxDepth( 10 ); //Set the minimum number of samples allowed per node dTree.setMinNumSamplesPerNode( 10 ); //Load some training data to train the classifier ClassificationData trainingData; if( !trainingData.loadDatasetFromFile("DecisionTreeTrainingData.grt") ){ cout << "Failed to load training data!\n"; return EXIT_FAILURE; } //Use 20% of the training dataset to create a test dataset ClassificationData testData = trainingData.partition( 80 ); //Train the classifier if( !dTree.train( trainingData ) ){ cout << "Failed to train classifier!\n"; return EXIT_FAILURE; } //Print the tree dTree.print(); //Save the model to a file if( !dTree.save("DecisionTreeModel.grt") ){ cout << "Failed to save the classifier model!\n"; return EXIT_FAILURE; } //Load the model from a file if( !dTree.load("DecisionTreeModel.grt") ){ cout << "Failed to load the classifier model!\n"; return EXIT_FAILURE; } //Test the accuracy of the model on the test data double accuracy = 0; for(UINT i=0; i<testData.getNumSamples(); i++){ //Get the i'th test sample UINT classLabel = testData[i].getClassLabel(); VectorDouble inputVector = testData[i].getSample(); //Perform a prediction using the classifier bool predictSuccess = dTree.predict( inputVector ); if( !predictSuccess ){ cout << "Failed to perform prediction for test sampel: " << i <<"\n"; return EXIT_FAILURE; } //Get the predicted class label UINT predictedClassLabel = dTree.getPredictedClassLabel(); VectorDouble classLikelihoods = dTree.getClassLikelihoods(); VectorDouble classDistances = dTree.getClassDistances(); //Update the accuracy if( classLabel == predictedClassLabel ) accuracy++; cout << "TestSample: " << i << " ClassLabel: " << classLabel << " PredictedClassLabel: " << predictedClassLabel << endl; } cout << "Test Accuracy: " << accuracy/double(testData.getNumSamples())*100.0 << "%" << endl; return EXIT_SUCCESS; } Code & Resources DecisionTreeExample.cpp DecisionTreeTrainingData.grt Documentation You can find the documentation for this class at DecisionTree documentation.
http://www.nickgillian.com/wiki/pmwiki.php/GRT/DecisionTree
CC-MAIN-2015-32
en
refinedweb
AS7: ReSTEasy EJB InjectionRaj Tiwari Jul 28, 2011 5:45 PM I have a JAX-RS application where I am trying to inject an EJB (@EJB). The injection annotation works for a servlet in teh same app, but not in ReSTEasy application. Is this not supported in AS7? 1. Re: AS7: ReSTEasy EJB Injectionjaikiran pai Jul 29, 2011 10:32 AM (in response to Raj Tiwari) Please post relevant code and configurations. 2. Re: AS7: ReSTEasy EJB InjectionRaj Tiwari Jul 31, 2011 2:39 AM (in response to jaikiran pai) Hi Jai, In the code below, userManagerLocal is the local interface of an EJB. The exact same injection puts the right EJB in a servlet in the same project, but not in the JAX-RS code below. In JAX-RS, it is injected as null. @ApplicationPath( "/api" ) @Path( "/users" ) public class UserManager extends Application { /** * Queries existence of a user. * @param sUserId * @return HTTP OK if user exists and NOT_FOUND if user does not exist * @throws NamingException */ @HEAD @Path( "/{userid}" ) public Response queryUserExists( @PathParam( "userid" )final String sUserId ) throws NamingException { final boolean bUserExists = userManager.queryUserExists( sUserId ); ... } ... @EJB private UserManagerLocal userManager; } 3. Re: AS7: ReSTEasy EJB InjectionRaj Tiwari Aug 2, 2011 12:09 PM (in response to jaikiran pai) Hi Jai, Any updates on this? Thanks. -Raj 4. Re: AS7: ReSTEasy EJB Injectionjaikiran pai Aug 7, 2011 11:20 AM (in response to Raj Tiwari) Is this still an issue against latest AS7 nightly build? 5. Re: AS7: ReSTEasy EJB InjectionAdams Tan Aug 8, 2011 4:04 AM (in response to Raj Tiwari) I had a similar issue before; problem was fixed by adding a "beans.xml" into WEB-INF. Do double check. Ad 6. Re: AS7: ReSTEasy EJB InjectionSamuel Tannous Dec 12, 2011 5:46 PM (in response to Raj Tiwari) I had the same issue and it took a while to find the answer but all I had to do was add the @Stateless annotation to the REST service. Apparently it must also be a session bean to add injection capabilities as mentioned here This is the original post I found which led me to Adam's blog: 7. Re: AS7: ReSTEasy EJB InjectionMaximos Sapranidis Apr 26, 2012 10:39 AM (in response to Raj Tiwari) I know this is older post but I am facing the same issue. I have tried both solutions (adding beans.xml file and setting the @Stateless annotations to rest class) but unfortunatelly none of them worked. Also tried to change the @EJB to @Inject but I keep getting null reference! Have anyone found a solution to this? am I doing something wrong? 8. Re: AS7: ReSTEasy EJB Injectionalvarohenry May 8, 2012 1:08 PM (in response to Maximos Sapranidis) Maximos I had the same problem. the problem was in my AppApplication class that extends of Application class. The resources with @Path annotation have to be in the getClasses() method instead of getSingletons() method. I changed: singletons.add(new MyService()); to: classes.add(MyService.class); I hope it help you. Regards. 9. Re: AS7: ReSTEasy EJB Injectionjack chen Sep 3, 2013 10:57 AM (in response to Raj Tiwari) I had same problem .I try use both ways (add beans.xml and change some codes in the class extends Application)and It works for me.If i only used one method it did not work for me. I need to add beans.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns:xsi="" xsi: </beans> and change the public class YourApplication extends Application public YourApplication () { getClasses().add(yourRESTServices.class); } @Override public Set<Class<?>> getClasses() { HashSet<Class<?>> set = new HashSet<Class<?>>(); return set; } } 10. Re: AS7: ReSTEasy EJB InjectionTiago Rico May 5, 2014 11:50 AM (in response to jack chen) I confirm Jack solution! Thank you all!
https://community.jboss.org/thread/170194
CC-MAIN-2015-32
en
refinedweb
I realize this is not completely an RE activity and want to get your feedback on how/where to communicate about the other parts. This is based on long time request from l10n team. - they need to translate info.xml for uc modules; not sure if they need to do it for all modules - but info.xml is not in l10n kits so they need to get, in the large nbms file, the info.xml of each module they need to translate for then they need to send by mail to RE the translated file REQUEST: 1. have the info.xml in nb cvs so it can be added to l10n.list of each applicable module (or l10n.list.uc) so then it will be in the regular or uc l10n kit. 2. they will putback the info.xml to translatedfiles/src - so that RE can use that info.xml in the building of the ml nbms. QUESTION for #1, is it technically possible for the info.xml to go to nb cvs ? if so, then I think that dev needs to add it to l10n.list or l10n.list.uc BTW, where do the info.xml files live ? is it in a cvs, or are they dynamically created from a template when a given nbm is built ? ALTERNATE IDEA ? there is one large nbm file in the builds - if one had a list of the names of each nbm that needed to have info.xml localized, a script could extract the info.xml for each and to a directory. But there would be a namespace collision since all info.xml are in Info/info.xml. But perhaps the directory name could be the basename of each nbm, so that each info.xml would go in a unique location. then these files could be provided as a special kit, and then putback to translatedfiles in the same structure, and used for building the ml nbms ? Please allow me add a comment. I'm always delivering translated info.xml for UC releases, but I'm not doing the translation. Because all the strings in that xml are contained message resource files Bundle.properties. What I always do is: 1. Pick up appropriate strings from Bundle_ja.properties or Bundle_$locale.properties for other locales 2. replace English string with the above string 3. license information is English in recent releases I don't think we have to use CVS and l10n-kit for info.xml localization. English info.xml is always generated from MakeNBM ant task in the build script, so I would like to request to generate localized info.xml (e.g. info_ja.xml, info_zh_CN.xml) from MakeNBM ant task as same as English xml. Please see another issue for the autoupdate: Since autoupdate has been fixed to support localized info.xml, I guess the following tasks can be automated: - Generate localized info.xml - re-package NBMs with the localized info.xml Would you give your any thoughts? I filed this based on feedback over the years from various members of translation team as well as recent feedback from your project management. Can I ask that translation team discuss among yourselves to see if its still something that is wanted (info,xml in kits and related processes) - its not about anything else. Please let me know in some separate mails since base team should not spend time doing this if its not important to have; it was communicated to me in past that it was important to have, [email protected] As per keiichio, all the information that is required for building an info.xmml are already part of the workspace and also part of l10n kit. Therefore, the repository/kit issue, which used to exist, seems to have been resolved by itself over time. There are still two issues that need to be addressed: - Even though all the individual pieces of data are available, localized info.xml files themlseves are still not prepared by the build script. This forces the l10n teams to prepare them manually and to store the generated info.xml files on their servers for later reference. As keiichio has mentioned, this problem can be solved if MakeNBM is modified to produce localized info.xml automatically. - Once the localized info.xml files are present, the question is how should the nbms be produced? 1. A single nbm can be produced with all the info.xml files. Since au client now supports multiple info.xml files in a single nbm, this should work. 2. Prepare a separate nbm for each locale with code+lang_bundles+lang_info_xml. 3. Prepare a separate nbm for each locale but only with lang_bundles+lang_info_xml. Then we can use the process outlined in. I think we should file a separate RFE to discuss both the above issues. Pl. inform if you agree and i will file a new RFE. I don't recall seeing info.xml in l10n kits and I don't recall seeing them in l10n.lists and thus localized nbms wont be built from using the info.xml translated files put back into translatedfiles -- and all that is the context of this issue. I realize that getting the info.xml into the cvs and into the lists is activitiy of developers but this issue is about initiating that activity to happen, not from filing separate issues on each module but a unified approach with dev team leadership. Can we keep discussion of the above items to this issue as filed ? For the other things mentioned, I do suggest filing a separate issue(s) [email protected] There is alternate idea. First of all I have to say that the info.xml file is *generated file* and thus is not supposed to be part of L10Nkit. Don't get sad, there's a solution. It's generated from some properties file. It's very likely that such a "Localizing-Bundle" is regular part of L10Nkit and thus translated and available somewhere in translatedfiles/src. I've also heard there's possibility to have Info/info_${locale}.xml inside of NBM file. So the alternate idea is to utilize the information described above and generate ML NBMs with multiple translated info.xml files by default. Moving to RE queue for further re-assignments. Changing from DEFECT to ENHANCEMENT. Targeting to Milestone 11 for now. As an alternative to producing ML nbms with multiple jars and info.xml files, have you considered building individual ML nbms each targeting a language, as described in ? Reassigning to me. Checking in nbbuild/antsrc/org/netbeans/nbbuild/MakeNBM.java; /cvs/nbbuild/antsrc/org/netbeans/nbbuild/MakeNBM.java,v <-- MakeNBM.java new revision: 1.78; previous revision: 1.77 done <makenbm> is used in the external build harness. Will this patch have any effect on existing build scripts? It is too large for me to follow what it is doing. Jesse, it should be safe for external build harnesses. I'm sorry if following description is cryptic, feel free to ask for further explanation. The change has added support to allow <makenbm locales="${locales}"/> attribute and the task now reads OpenIDE-Module-Localizing-Bundle manifest value and tries to locate localized versions of that bundle in localized module jarfile(s). For example you can have module jarfile "modules/org-foo-bar.jar" with localizing Bundle "org/foo/bar/Bundle.properties" and some localized jarfiles in "modules/locale/org-foo-bar_${locale}.jar", which contain "org/foo/bar/Bundle_${locale}.properties" files (each localized jarfile can have one localized localizing bundle file. These localized jarfiles are checked for existence of such localized localizing bundle and if found, their values are used for creation of localized info.xml file in Info/locale/info_${locale}.xml file. These localized info.xml files are then added to NBM.
https://netbeans.org/bugzilla/show_bug.cgi?id=98893
CC-MAIN-2015-32
en
refinedweb
fennec:also, shouldn't that be website.renameTo() instead of Website.RenameTo()? Publius:I think the real WTF here is that I'll never have to hear what the real WTF here is again. Censoring FAILS!:Yet another website falls for the censoring way of life. In every country (especially USA) you see this shit happening :S stop bloody censoring things just cos some ppl can't handle vulgare names! Up yours! Jackal von ÖRF:What happened to the "show full articles" checkbox on the front page? Today when I came to the site, there were 3 new articles, which meant that I had to (1) click "Full Article" link on the oldest article and read it, (2) click "Back" button, (3) click "Full Article" link on the second oldest article and read it, (4) click "Back" button, (5) click "Full Article" link on the latest article and read it. So a total of 5 clicks compared to 0 clicks if the full articles had been visible on the front page. Censoring FAILS!:Basically, I am completly against censoring, and as this is censoring I am against this website, and as I am against this website I will stop reading this website as regulary as I did before. fennec:-- also, shouldn't that be website.renameTo() instead of Website.RenameTo()? GeneWitch:I don't care what the site is named, that's so petty. WORSE and FAILURE: GeneWitch:I don't care what the site is named, that's so petty. Then why change it, changing it is petty and truly a bad marketing decision. I know programmers dont' do marketing but branding works. The whole site was spawned from one thread titled the Daily WTF, why be petty and change it. Jochen Ritzel:"IBM stands for Industrial Buisness Machines". :/ swapo:I recently had an similar situation about what WTF stands for. I like the new Backronym, keep up the good work (and I should send in some WTFs myself...). fennec:also, shouldn't that be website.renameTo() instead of Website.RenameTo()? Not if you're from the .NET-World as Alex obviously is, they use some really WTFish naming conventions over there ;-) But a lot of people in the “real world” don’t quite share that sentiment. Especially my relatives: Soro: Admit it, having "fuck" in the name was not business friendly. So when you sold out you had to change it. Otto: I will also continue to give thedailywtf.com out as the domain to people when it comes up (and it has, many times). 10 read from $s.mag = 8*"TheDailyWTF" print; next i; john q public:Stupid to change the name. dumb move. That said, there were a few reasons that I wanted to rename the site, perhaps the biggest being that “Daily” and “What The F*” don’t quite describe it anymore. PetPeeve:Isn't it easier just to keep everything the same and just change only the official name to "The Daily "Worse Than Failure". <script type="text/javascript"> function setDisp(disp) { var exp = new Date(); exp.setDate(exp.getDate()+365); document.cookie = "HPDISPALL=" + (disp?"Y":"N") + ";expires=" + exp.toGMTString(); window.location = '/'; } </script> ... <script type="text/javascript"> if (document.cookie) { document.write('Display: '); if (true) { document.write('<b>Full Article</b> | <a href="JavaScript:setDisp(false)">Summary</a>'); } else { document.write('<a href="JavaScript:setDisp(true)">Full Article</a> | <b>Summary</b>'); } } else { document.write("Tip: Enable Cookies to toggle full article text"); } </script> <noscript> Tip: Enable JavaScript & Cookies to toggle full article text </noscript> EnterUserNameHere:The REAL WTF is that the intent of renaming the site has had the opposite effect and actually resulted in MORE of the F-word! wgh:Man, this is a sad day. I _really_ like the old name so much better. J:First, this is not censorship. Censorship is when an official body such as the government comes in and declares that you aren't allowed to say or write certain things. This is a voluntary move away from what the site owner considers to be a vulgarity. whiny wee bastard:I also wanted to add that BOFH would be ashamed. unlimitedbullshit:as the owner of the unlimitedbullshit domain thedailywtfisanawesomenameandbrand:Ads - OK Occasional self promotion - OK Bean bag girl - Totally hot Name change - Fucking lame Nik:I've been following these for about a year now, and I've recruited a readers.. DaveE1:Changing the name is a weak move. oooooo oooooo oooo oooo . `888. `888. .8' `888 .o8 `888. .8888. .8' 888 .oo. .oooo. .o888oo `888 .8'`888. .8' 888P"Y88b `P )88b 888 `888.8' `888.8' 888 888 .oP"888 888 `888' `888' 888 888 d8( 888 888 . `8' `8' o888o o888o `Y888""8o "888" . oooo .o8 `888 .o888oo 888 .oo. .ooooo. 888 888P"Y88b d88' `88b 888 888 888 888ooo888 888 . 888 888 888 .o "888" o888o o888o `Y8bod8P' oooooooooooo oooo .oooooo. .o. `888' `8 `888 dP' `Y8b 888 888 oooo oooo .ooooo. 888 oooo 88o .d8P 888 888oooo8 `888 `888 d88' `"Y8 888 .8P' `"' .d8P' Y8P 888 " 888 888 888 888888. `88' `8' 888 888 888 888 .o8 888 `88b. .o. .o. o888o `V88V"V8P' `Y8bod8P' o888o o888o Y8P Y8P Quicksilver:oh please please name it back... whats so bad about wtf? your granny has brought some cildren to the world in her life ... she knows what fuck means ... pinkduck:Don't underestimate the power of TheDailyWTF brand, and remember who your core audience is (or were). wtf:When I used to tell a friend about this site, they'd get a kick out of the name and that alone would make them want to check it out. "Worse Than Failure" sounds like a shitty local emo band or something. Very disappointing. Granny/Boss/Priest: The Daily WTF? What does WTF stand for? You + Balls: "Worse Than Failure".. Ahnfelt: J:First, this is not censorship. Censorship is when an official body such as the government comes in and declares that you aren't allowed to say or write certain things. This is a voluntary move away from what the site owner considers to be a vulgarity.. bullseye:I miss the days when the average IQ of posters here was higher. At any rate, I suspect the complaining was to be expected. You should have posted the article on a school night... michael:Is it just me but is this a bike shed argument to rival the fbsd sleep discussion. We have 4 pages of replies, and really in the grand scheme of things, who cares? do people really come here to read the url? how about the banner? personally i come to read the stories and they will still be here. Ryan:I think "The Daily WTF" was part of this site's success and an excellent brand. We'll see if Worse Than Failure captures new audience as well, I doubt it. Ryan:When do people make business plans based on how their Grandma likes the name? I thought that the target market determined the name. Martin: ... Or maybe you think the name hurts you in your professional life as too many people judge it negatively before you have a chance to explain why the name is excellent to address a specfic demography of technical people. David Cameron: ...Most people seemed to grow out of that by the age of 12. <snip> I'll give it a week or so to see if the name changes back. If not I'll be deleting this from my bookmarks and I won't be visiting again. Icarium:YOU RE A SISSY! D:Can't you just change it back? *Sob* I like considerate people:. fennec: -- also, shouldn't that be website.renameTo() instead of Website.RenameTo()? baddognotbarking:The new name is a crude reinterpretation of the WTF acronym. It may be politically correct but it reflects just a fraction of the site's content.! Anonymous:I like the new one too, only shortening it to just WTF could cause some confusion... WoTF perhaps? Any better ideas? fennec:I've done basic updates to the Wikipedia article. bstorer:Jeez, people, fear change much? You all complain about the site's name while simultaniously searching the web on Google, and doing your online shopping on websites called eBay and Amazon. The name doesn't matter, the content does. meh:Yeah I love the renaming. Its like renaming Terminator II to The Robot from The Future. The name is lame. yeah, but i bet none of them would dilute their brand by renaming. bstorer:yeah, but i bet none of them would dilute their brand by renaming. Dilute the brand? Are you completely insane? It's a blog! Oh, and tell that to Verizon, Mandriva, Wireshark, Ask.com, CBS Radio, and USAirways... I could keep going. I like considerate people: Censoring FAILS!:Basically, I am completly against censoring, and as this is censoring I am against this website, and as I am against this website I will stop reading this website as regulary as I did before.. Someone peeved at the twats:Jeeze, it's like a little bunch of children arguing over which pokemon is better. public class Website { public static void RenameTo(string name) { if (name.Equals("Worse Than Failure")) { Website.Flush(); Website.Close(); } } } JTK:The Daily SNAFU (can use Situation Normal - All Fouled Up as the tag, while the script kiddies snicker in the back of the room) The Daily FOOBAR (as above, dates back to K&R C) yoz:well it's not the name but the content which matters way to go alex doof:It's really sad how folks can't comprehend the difference between "censorship" and "tact / discretion / decorum". Edowyth: void clear(int* x, int* y) { x = 0; y = 0; } ??? Cthulhu:[this] reminds me of the BFG (10K) in quake2. ID software could have gone all PC or worried about 'inherent meaning' and called it the "What's going on gun" instead. They didn't. Paul:WHAT THE FUCK Your grandma can handle the FUCK word. Seriously. To think otherwise is rather condescending. Raina:While I have nothing against the 'wtf' in the original name, I know what you mean about explaining to grandma .. I had the same problem with my multimedia programming class last year. me> you should check out this site to know what not to do... [writes url on board] fairly clueless female student > what does wtf stand for ? me> err ... err ... 'what the fudge' :) class> *laughs* Cthulhu:The real WTF are comments like this:. CAPTCHA: Quake - this relevant captcha reminds me of the BFG (10K) in quake2. ID software could have gone all PC or worried about 'inherent meaning' and called it the "What's going on gun" instead. They didn't. It's better this way. Thiago Berti:you culd stick with "Daily WTF" and just say that WTF stands for Worse than failure to your relatives. kettch:50 years in the future, what will our grandkids be doing that we now find unspeakably foul? I'm not saying that certain things should be banned, but they should probably not become a common part of "polite society". Thiago Berti:Well, in my humble opinion (sorry if someone had already said that, i didn't read all the comments) you culd stick with "Daily WTF" and just say that WTF stands for Worse than failure to your relatives. This way you'd have both names and people could choose between the one they liked more. Plasmab:Just change it back. As soon as the urls stops working i stop reading. Roy:How about "WTF Technology Folklore"? Bot2:10 "Worse Than Failure is a lame name". gc: <sn <snip> Sell out!:Another person bows to public pressure. I say fuck at least 60 times a day if not more. Just because it makes people uncomfortable. JS:I for one welcome the name change. Mostly because it has fewer syllables. I hate saying "doubleyou". Name:This is censorship at its finest. F in WTF means "fuck" not fudge, failure, fridge or fool. This web site is not dedicated to our granmas, we can say "fuck" here and that's why I liked it here. I don't know if I'll stay any longer, the new name is really so stupid. Ned Flanders:You could always just say it stands for "what the fiddilydangdiggily" <b style="color:black;background-color:#ffff66">John Robo</b>:FUCK YOU! You didn't mention the tweaked looked -- I like it!! I like the new one too, only shortening it to just WTF could cause some confusion... WoTF perhaps? Any better ideas? Addendum (2007-02-24 16:21): -- also, shouldn't that be website.renameTo() instead of Website.RenameTo()? wtf Not if you're from the .NET-World as Alex obviously is, they use some really WTFish naming conventions over there ;-) The Real WTF? RSS link!(“Worse_Than_Failure”).aspx clicked in RSS newsreader opens up url: ... Bad Request (Invalid URL) Up yours! Addendum (2007-02-25 04:09): mario, the hyperlink was not visible when I wrote that message (using Firefox + NoScript). Now it at least says "Tip: Enable JavaScript & Cookies to toggle full article text" so I will know to enable JavaScript to get the link visible. So the WTF is only that the site requires JavaScript enabled. Uh, I think you need to look at the new name a little more carefully guys... Capcha: Darwin Or, you could have cliked on hyperlink right where the checkbox used to be that says "Display: Full Article. Sure, it would have required 1 click more than 0 clicks, but you would have saved at least 500 keystrokes, and an additional 2 clicks to post the comment. Heck, you would have even saved me a bunch of keystrokes and clicks as well. Just here to defend my statement, and wanna see people defending the renaming :). How is this change at all censorship? Censorship would be deleting your comment because you're whining. Censoring would be not printing the real names of WTF companies involved in the stories. Oh wait. That's already done ... so ... wait, how are you reading the WTF still if Alex already does that? unfuckingbelievable. - daily was in the name (it just sounds more frequent, visit daily) - wtf is the real wtf (and an internet lingo - worse than failure is just a cop out to keep WTF acronym but changes the meaning) - mentally seeing FAILURE at the top of your page in red is bad for the irrational marketing side. - longer URL (new name) - dailywtf was a name that caught attention, new name does not have the same ring - your relatives disliked the name, usually means you are onto something good. At least grandma is happy. come on saying fuck to your grandma can't be worth it :) let's hope this site doesn't turn into something that sounds similar eh :) I'm shocked that anyone who would have used the original name in the first place would later cave to this sort of pressure. Definitely a sad day and I will close by saying that if you stick with the neutered name, then the terrorists have truly won. Looking forward to the programming contests though! Fuck-fuckity-fuck-fuck-fuck. Jesus Christ - it doens't hurt anybody! Captcha: smile (I wrote to them suggesting that 'eWeak' might be a better monicker given their submission to industry peer pressure, but I guess they didn't think it was funny.) CAPTCHA: Xevious. Back in the day. No, it should be: :1,$s/The Daily WTF/Worse Than Failure/ The REAL WTF is that apparently, according to the title, no one at the site's end bothered to edit the configuration files or templates or update the database table or whatever the heck this monstrous piece of software uses. Instead they just dived in the CMS logic, added a line of code to change the site title, and recompiled. So, um, the title is obviously in line with the contents of the site. I'll still visit and be an asshole sometimes and be nice othertimes and appreciate all the help you guys give me. and advice! One thing though Alex... Obfuscate the real reason next time... say that the TLD .com people made you change your domain name because they were scared they'd be featured on your main page or something. And don't sell off the old domain, forever forward it here! Weeeeee! EDIT: Do you guys write letters when vivendi buys companies and makes them change their names? How about Time? Warner? Bell labs/ Ma Bell? The lot of you that are being cantankerous about this really should look at what you are doing... you get this site for FREE... it's funny and entertaining... and you guys are giving the webmaster shit because he changed something. Get a grip. Would it be possible: * For the old domain not to redirect to "worsethanfailure.com" * For the old domain to keep the old DWTF branding ? It's pretty trivial to do so the tech part shouldn't be much of an issue That way, those who really prefer their WTF with more dailys and more fucks would get to keep it, and those who prefer failures could fail as hard as they can? TenToTheTenToTheTen.com - World's fastest search engine. I'm sorry about you talking to your grandma but so what? Just tell her it's an acronym that means something doesn't make sense. Changing the whole site because the F stands for a swear is silly. Changing the format of the site is up to you, Want to have a "worse than failure" section that's fine, but changing the name is going to lose you readers. And when I say WTF I'm being polite here, I mean the What the **** not the letters. Then why change it, changing it is petty and truly a bad marketing decision. I know programmers dont' do marketing but branding works. The whole site was spawned from one thread titled the Daily WTF, why be petty and change it. Yeah shut up! that's more like it bitches (captcha: slashbot, pwn3d) Makes me think of Monthy Python: The Usage Of Fuck. captcha: fuck (just kidding, it's kungfu) New name = shit. Bring back tdwtf! The name just does not fit. If you want to go all PC, pick a name that really reflects what the site contains. The new name is now the site's "Curious Perversion". Sheesh. CAPTCHA: stinky (my thoughts exactly) That makes me curious. Where and when was this thread posted and can it still be found somewhere? The name really set the tone for this website. I too think differently of WTF and the F-word - WTF seems so much more innocuous for us web users. I have to agree that the best solution would be keeping both domains and trying to set it up so the header and all references to the acronym are replaced with family-friendly versions. Also, I agree with whoever said that the word "daily" isn't a problem - check the site daily! Not having daily in the name tempts less frequent updating and traffic. If I ever have to explain to people WTF, I just tell them straight up that it's an acronym that stems from a vulgar expression, but its meaning and seriousness have changed throughout frequent usage on the internet. If anything, tell people that it stands for "Worse than Failure," which I have to admit is a pretty good reinterpretation of the acronym. WTF? Actually, it's "INTERNATIONAL Business Machines". ;) CAPTCHA: scooter. Captha: what the fook! As far as I'm aware, though, the curly quotes will still probably cause compilation problems. :P Hope it works.. Alex, if your grandma' is really the reason for changing the name of one of TheDailyWTF...I don't exactly know what to say. The new name is a WTF as per both the old and new definitions. I also had an urgent need to bring out my feelings about this new name thing. The daily WTF really is a brand and a concept. People who find wtf vulgar should maybe loose it up a bit. The new name is literally worse than failure. Sorry, it just is... I don't know what the real reason for the change was, but the grandma story sounds extremely lame -- even too lame to be true? One more vote for restoring the old name! It's not? Well...then fuck. I don't actually dislike the new name, but I think the old name was better for what this site is about, and the branding is well worth keeping, and, honestly, you can just tell your grandmother it stands for "worse than failure" if "fuck" is not a word you can say in her presence. Tell you what: I'm going to keep going to thedailywtf.com. I'm going to keep calling it the daily wtf. The day that url quits working is the day I quit reading. You don't care about my readership, since I'm only one random person, but that's the only vote I've got. Cheers! ... } What the fuck is "worse than failure" about a stupid piece of code? Fuck nothing, that's what. The new name is totally idiotic. What a heap of shit. This software sucks the sweat from my taint too. Sad. I'll still continue to read the site, but I'll continue thinking of it as the dailyWTF and it's definitely moved down a notch or two in my list of must read sites. Don't let the perpetually juvenile whiners get you down. I think you are on the right track re; vulgarity, but I would suggest that there are better expansions of the acronym that do not require profanity. I actually had already thought a few up since I started reading about a week ago. They all fit the pattern What T* Folly. T*: Terrible Tremendous Ticklesome Total ... The advantage I see is that Folly is just more lyrical than failure. It has classical roots, and is relatively non-judgemental. As to the ranting and cursing about the word "fuck", the best I can do is quote my late father: "Profanity is the sound of an inarticualte person seekng to express himself forcefully." The truly creative, and articulate, person can almost always find a better, more expressive word to use than the guttueral and troglodytic expression to which so many posters on this thread seem so attached. One practical issue that I do have is that I have had some trouble replying on pre-existing threads. Nothing whatsoever happens when I click the 'Submit" button. I am now wondering if there is a relationship between this problem and the change in domain name. I'd appreciate it if you could look into this. But if its a _must_ to change the name. At least come up with something better. I understand that you want to keep the 'wtf' acronym but it doesn't make sense and has a confusing meaning. ..and I'm sure that we can all agree that RTFM abbreviates the word "Fine" as well. Seriously though... four letter words and "vulgarities" are part of our language. I've always insisted those who would excise them from my speech or those who are unwilling to hear them are, at some fundamental level, immature and hypocritical. All you have to do is look at the comments on any code over 10,000 lines in size and you *WILL* find this type of language in use. I'm particularly curious about how honest people are in their assessment of Open Source code through the use of four (and more) letter words. I suspect that there's maybe 10x more usage when your audience accepts the language without judgement and when there's an absence of "fear" (personally, I'm not sure which is more vulgar: using fear to limit another person's speech or being intellectually dishonest enough to deny the distinct possibility that a four letter word just might be the most apropos lexicon to describe what is happening in a particular piece of code) ...but anyway, congratulations on finding your new crutch and caving to mediocre (yes, every person of excellence I have ever known in my life uses "vulgar" language... some even in their public writing). Watch out SNAFU and FUBAR, you're next. i guess you'd have issue when steven colbert talks about "balls" because of the inappropriateness? part of the appeal was that this was slightly risqué. i dont know- you still have "Submit Your WTF" so are you trying to make that stand for worse than failure? thats a wtf in and of itself. very lame. as if grandma didnt know what fuck was. cmon now. since when have the standards for the site been tailored to grandmothers?! ps- people voting against the new name arent all whiny kids. some of us actually liked the fact that this was an off color area to share our stories in IT. i suppose its a good name- the change is definitely worse than failure See ya Daily WTF. First, this is not censorship. Censorship is when an official body such as the government comes in and declares that you aren't allowed to say or write certain things. This is a voluntary move away from what the site owner considers to be a vulgarity. Second, why the crap would you quit reading the site? The content is the same, so unless you spent most of your time on the website laughing at the name, it won't make a darn bit of difference. It's like firing an employee because he decided to break his swearing habit. At the risk of sounding like a "whiney kid," this is why people are up in arms about the name change. At TheDailyWTF, you have built a wonderful community of people who fight the good fight against every day against clueless managers, stupid users, and both coworker and self-inflicted wounds. The site is not only good for a laugh, it is good for a morale boost for any IT worker who has had to deal with this on a daily basis. Seeing another person's "curious perversion" experience makes our own seem better. Yes, in a sense, you have done a valuable therapy service to the IT community. And with this name change, much of that therapy will be gone. WTF is a perfect shorthand for the moment we all see every day. It is an expression of the community, for the community, and in use every day by the community. It is ours. "Worse Than Failure" is theirs -- it is for others, an attempt to impose a name from the outside on a decidedly insider phenomenon. Instead of MySpace, it is Walmart's MySpace clone. At the end of the day, it is about the content, and if Alex continues his outstanding work serving up good "curious perversions" under any name, I will continue to read and to be a fan. But I hope you can at least see why this name change has IT people leery of the future direction. And "Worse than failure" is *horrible*. It sounds like something dreamt up by a committee. If you have to drop "WTF" (and I don't see why you do), please, please pick something else. Perhaps this is to appeal to a wider audience than just coders. It probably has increased readership. Nevertheless it's sad. This is not to say the new series' (or new writers) are all bad. The quality drop is across the board.. Jesus Fucking Christ you people are whiny fucking children. The dumbfucks around you that didn't get Alex didn't change it because of his grandmother are even dumber, and should really be brought out back and put down. I hope the majority of you rot in hell. I am yet another long time reader and first time poster who feels the need to make a comment on this event. But for me, the event is not the so much the change of name (I care only slightly), but something more significant. Once, this board was a obviously done for the sole purpose of ammusment. There was no intent to capitalize on it, and the aesthetic was based on the aribitray personal choices of one person. This gave it Quality. Quality comes from a clear vision, and bold choices from the right person. Then the move was made to capitalize on the site. The introduction of the advertising, the non-WTF jobs, etc. Now unlike many people, I don't believe that all attempts to make money are evil. I myself am a capitalist and entrepreneur. I welcomed these changes for the most part. I was slightly concerned that sponsers might begin to impact the content of the site, etc, but things remained mostly as they were. There was some shift in the aesthetic, and it had a hint of "caring too much about other people's opinions", but I tried to write it off as a simple and natural change on Alex's part. However, this latest change, being significantly incompatiable with the prior aesthetic, seems to strongly indicate that Alex has lost his nerve. He has lost the ability to make bold personal choices, instead, he considers first how others will react to them. He brings up the example of his grandmother, but I think the story is made up, Grandma is just a stand in for 'the other', 'everyman', 'the majority', name them what you will. And Alex has now put them above his personal aesthetic, which really is "worse than failure". It is the descent into mediocrity. It is the end of the "great", and the enshrining of the "not terrible". The way of the committee. It didn't have to go this way. Many individuals can make the transition from hobby to occupation and keep their faith in the original goals. Take a look at Penny Arcade for a clear example. But it has gone this way. Even if Alex changes the name back, it will most likely be due to the pressure of his readers. Perhaps with great force of will, Alex could regain his nerve, but it's unlikely. So much so that I probably wont bother to check this site again. It's certainly leaving my bookmarks. Not really because the name changed, but because it's been getting more lame for a while now. Until this point, I thought it might be temporary, transitional, just someone trying to find their center. But now I'm pretty sure it's getting lame because Alex is not trying to find his center, but the center of everyone else. And I've seen the middle. It's boring and I hate it. Someday in a few weeks I'll try the old URL because I'm bored, and the because the name has always been easy to remember. Maybe it'll be good, if so, I might keep coming back. Or maybe it will be lame, in which case I'll engage in some exponential backoff, and try again in a month or two. Eventually, I'll forget. Thanks for the good times while they lasted. Occasional self promotion - OK Bean bag girl - Totally hot Name change - Fucking lame At any rate, I suspect the complaining was to be expected. You should have posted the article on a school night... I will not be changing my bookmarks to the new domain. I will also continue to give thedailywtf.com out as the domain to people when it comes up (and it has, many times). And if the old URL stops redirecting, I will stop reading. Why? Because there's no way I'm going to remember a crap name like this. Bad decision with the new name. This site will only ever appeal to software developers. 95% of them will not be offended by 'WTF' or what it really stands for. DailyWTF is a brand that you've spent time building, destroying that and starting again is madness. If you really wont go back to the old name, someone posted a good idea... Have a 'clean' site and a 'DailyWTF' site. I often IM or email my developer friend and say "There's a good daily WTF today". Now I have to say "Look at 'worsethanfailure.com'" Lame lame lame. Can't you hear the site caling? "Don't try to fix me, I'm not broken... Hello?" WTFs are not business-friendly either and yet... I'll second that: I can't speak for the future, but for now it seems it will always be TheDailyWTF to me. I don't like the new name. Does grandma read the site? Than what does it matter?! If your family doesn't appreciate this site than don't tell them about it in detail. @#$%, lie if you have to! Hell, the Internet porn industry is thriving... Don't you think those developers have grandmothers!? I doubt they have the conversation: “So do you still write that website ... oh, what did you call it? MILF..?” “MILF Doggy-Style...” “Ah yes. I can never remember, what does M-I-L-F stand for again?” “Well, Grandma,...” [runs away!] Not all WTFs are worse than failure. Sometimes they work, but incredibly inefficiently, etc... But they >>work<<. What's better: 3 months of work for a bad solution that works or a complete and utter failure with no deliverable? (Even though we all hate maintaining WTF projects and it would be great to just start over and do it right). I don't know. The name seems less catchy and less witty. When I first heard "TheDailyWTF" I was immediately interested... Not just in the article, but in the SITE. I'm not sure "Worse Than Failure" will have that same effect. But again, I don't hate the new name. Maybe it'll grow on me. Without the name the sites is only half as funny, and grandma won't read it anyway, nor is this a "family" site. Stupid move. Content was lacking anyway, so site removed from bookmarks and good riddance. kthxbye. Website.RenameTo("The Daily WTF") Aye! -The name (sucks as a brand; psychologically bad) -The forum software (sucks) -Granny marketing (WFT?!) -Alex himself in certain regards (myopic and fanatical resistance to open source; opposition to anything non-corporate, non-Windows; and to open the political can of worms, conservative/republican worldview; etc) Thus, as Zach said, "at least symmetry". this is incredibly lame. wtf is going on here lately? Quick, better rename a site that was memorable to its target audience to one that is an expansion of an acronym that ISN'T EVEN THE CORRECT EXPANSION! It's bad enough that we have to contend with so many acronyms, but making up new meanings for old acronyms is a wtf in itself. Naughty, naughty Alex. Then again, ironically the name change perfectly encapsulates what this site is all about, as my first reaction the the article was: What the fuck! Though it has to be said that it was the first genuine WTF in a while as I thought the overall quality of the submissions had been going down. That's like Microsoft changing their company name to Miniware??! you get -300 respect from me. keep up the good WTF-posting thou I can't believe you didn't go with my suggestion of I can't decide which is the bigger WTF - wilfully destroying a brand which was perfect for your target audience or the absurd self-censoring reasoning behind it. We'll probably get used to it in no time, like I had to get used to wrestling's WWF now being WWE. MAN I HATE THOSE PANDAS!! Website.RenameTo("Worse Than Failure").NOT! It may seem like a trivial move to some, but this is seriously disappointing. It's like when a raunchy comedian/actor becomes a parent and decides he wants to clean up his act, so he takes on family comedy fare. At least bring the old domain back up and tweak a bit the code for the site so when you get to the site from thedailywtf.com it shows that in the title. If you can't afford to pay for that domain too, just say so, and I bet $10 that you will gather enough money to pay that domain for a life. Same applies to the code I was talking about. I am guessing that you won't care about this but I am going to post it and then that will be that. I am deleting this website from my bookmarks. I will never look at it again. I am going to outline the reasons to you, you may take this criticism as you see fit. Since 2005, the site has gotten worse and worse. Here are some of my quarrels. 1) Moderation: for a time, you made available a script to let all users see what moderation was taking place. You stopped that. A moderated forum by secret moderators is not of interest to me. [Not to mention the fact that the moderators are still doing a piss poor job.] 2) Long stories: I don't know why you made this shift but the long stories with no code are terrible. If this is truly the direction you want to take this site in, I would suggest that you either hire a writer or really work on your writing skills. They leave much to be desired. 3) The data-backup blurb: you put this advertisement into one of your posts as though it were part of the story. I am fine with advertisements in the side panel. Placing ads in the text of the article as though it were part of the piece is just terrible. And I question the validity of the post itself. You received an ad for a data backup site. Did you then make up that story about data backup so that the story would fit the ad? Curious. 4) Disregard for forum regulars. 5) Arrogant attitude. Again, I don't expect you to change from the current road you are travelling down. I would suggest that I and others, who were regular members of this site, have left in numbers. If losing the people that supported your site from the beginning was part of your vision, then congratulations - you are on your way! Last post from me. sincerely, Richard Nixon The whole WTF hasn't been used in it's original meaning on this site anyway. (More used as another word for "horrible mistake") And Oh well. Just sorta feels like a bait and switch now. Think I'll dump it from my daily surfing. Was getting a bit boring anyway. Bottom line is i respect your decision, but i resent it. A friend told me to look it up, because it had stuff worth reading, and the name itself made me curious enough to actually do it. If that same dude had said "go to 'worsethanfailure.com', it's cool", i'd probably think it's something about crash tests gone bad. End Note: does this mean that 'the daily wtf' is available? Because i'd like to take it - it's a cool name, and my grandma is usually online on ymsg on her laptop anyway, so she's probably used to web acronyms :) Captcha: bathe. wtf? how did they know? Bastards! :D I've visited your site daily and this change is absurd. The mere reason that this is my first comment should mean something - you really dissapointed me! This site really is an impressive and useful accomplishment. Alex should be proud of it. But it's hard to flaunt something, or list it on your own résumé, when it has the word "fuck" in its name, even implicitly. However, I directly disagree with this statement: That's utterly wrong. Long before I found this site, and many times since having found it, I have been looking at someone else's clearly novice code, in a commercial product, and I have muttered, "What... the... fuck?" And you know what, I even make a face that's very similar to that red circle-guy at the top of the page. And I've seen other people make that face when looking at legacy code. Exactly what should have been done. No one tried to make a replacement for RTFM; they just found a gentler expansion of it. If you want to describe the site to your grandmother, just say, "Oh, it stands for `Worse Than Failure.'" Heck, that's downright funny. It should be pointed out that "worse than failure" is not descriptive. Heck, it's utterly obfuscatory. It'd not even a tenth as memorable as "thedailywtf.com" is, which is important if you want to have repeat visitors. If you really want a less vulgar name, and since you're obviously willing to move to a whole new domain anyway, consider some of these: The Daily DGI ("don't get it") The Daily ITI ("IT idiocy" or "IT injustice") The Daily HTEW ("how'd that ever work") The Daily PFN ("paid for nothing") The Daily Unprofessional (I hereby place all of the above names in the public domain. Go wild with 'em.) You want to cater to the clue-deprived masses, fine - but the internet's too big to waste time with weak-ass, pussy sell-outs. Send up a signal when you grow a new pair. (I wonder if theoccasionalWTF.com is taken?) All of this should be done server side, so that seeing the full articles on the front page would not require JavaScript nor even cookies. The solution is to have links such as <a href="CurrentPage.aspx?full=1">Full Article</a> which would set the cookie and show full articles. This way enabling JavaScript would not be required for all NoScript users. Also, this solution would not even require cookies, if the "?full=1" parameter would be kept on the address line after clicking the link (so no redirects for "cleaning up" the URL). I've removed the site from my RSS feeds - not just because of this - I was starting to get a bit bored with it anyway, this is just the straw that broke the camel's back. I'll probably come back and have a look from time to time, but not daily like before, and I doubt I'll remember the new name, so I hope the old one stays up for a while. Seeing the current title, if I was a first-time reader, it would tend to make me think that TDWTF *is* the one that is worse than failure... (a WTF in itself) Yup me too :-| CAPTCHA: ewww, indeed bad idea, imho - I predict that you won't draw in nearly as many new readers after this Alex, you could have told your grandmother it stands for "What the frak/freak/fudge. The old title was *the perfect name* for this blog. The new name, well, let's just say that it describes itself (the name, not the blog) very, very accurately :-( So it with a bit of sadness that I now unsubscribe from this once-treasured RSS feed. Farewell and goodnight.. WTF? Such self-deprecation! I'd have written "yeah, I still write 'Worse Than Failure'." ;) Second that. If I had other ideas kicking around I would get other domains, not change one that already worked will you be carrying wtfs? I'm deleting you from my bookmark list right now. PS: Wtf, this CAPTCHA software is cool - "slashdot" !!! I would like to see still load the site, but for it to detect the hostname used and customise the branding accordingly. Don't underestimate the power of TheDailyWTF brand, and remember who your core audience is (or were). "When I used to tell a friend about this site, they'd get a kick out of the name and that alone would make them want to check it out." Sums my experience up perfectly. Goodbye, TDWTF. Finn. Last time poster. I won't be back. What next? Are you gonna remove nekkid women as well? Maybe granny likes bald clams.\ Oh wait,...I was thinking of wtfpeople.com. No titties here? Never mind. Agreed! Everyone I've shown this site to immediately had the reaction: "the daily wtf? that's hilarious" and most, just because of that, gave it a shot and still read it. Also, the acronym "wtf" is perfect for programmers. Especially when you are trying to fix someone else's busted code. Considering the workplace, I think that profanity is acceptable as long as it is used sparingly and in appropriate situations, ESPECIALLY when used in a acronyms such as NFG ("no f***ing good"). Couldn't you just leave it as thedailyWTF and then on the "about" page note that the acronym stands for "worse than failure?" Then we could all just think of it as sarcasm, kinda like when Naughty by Nature said the OPP stood for "Other people's property" but we all knew what it really stood for. Changing the name is a weak move. Couldn't agree more. You'll basically have to decide at some point what the target audience for your site is going to be, and by changing the name, you've just alienated a large part of the "hardcore" regulars, who've come to respect you because of the name, not despite it. I personally don't care much about the name change, but "recruiting" new ("hardcore") regulars from the industry to read worsethanfailure.com is, as others have already said, going to be a largely futile task, because the name doesn't cater to them anymore. Neither does it cater to me, and as others have said, I'm going to keep calling it The Daily WTF, no matter what happens here. Anyway, I'll be following whether this is just the latest move in a general downward trend of this blog (which I've noticed from mid of last year already). That'll be the point when I'll decide to drop reading this blog, or not. whats so bad about wtf? your granny has brought some cildren to the world in her life ... she knows what fuck means ... where is the problem with it? its not a bad word... its a bit to early for bad april first jokes... and if you were a company this renameing announcement would have been presented on thedailywtf.com because it is so hillarious ... imagine Apple renameing itself because Jobs granny always thinks her son goes to the market for selling fruits.. If you really dont like the word "fuck", then replace it with Frack, which I've started to use latelly. Secondly renaming it to Worst than Failure is along the same lines as spelling FUBAR FOOBAR. It's just wrong. Thirdly and finally Worse Than Failure is just a lame name, sorry, but it's rubbish. If he was really following .NET standards, this website should just expose a "Name" property. It should then be: Website.Name = "Worse Than Failure"; Captcha: ewww (that adequately describes this name change) I agree with those who say that the old name was part of the identity of the site. To the ones who decided to say goodbye: hey, fuck, were you here just because you liked having that name once a day on your screen, or was it for the contents ? Old-time readers will still be able to say "WTF", while the new ones will be happy to find a new meaning to the acronym made by "worse than failure". captcha: sanitarium. Still damn context-based ! -- vlekk Seriously though. Grow a pair of nuts and change it back to What The Fuck. No, you have to now write "Really, the forum software is still WTF." Just a simple syntactic change...] If the core audience "was" dorks who get their panties in a twist over such a trivial detail then, well, I'm sure they'll find asomeone else to satisfy their special needs. And driving away idiots may raise the average IQ of the eyeballs, and thus improve response to advertisments as they can be targetted at people who can keep a job. ;) Although the whole "I'll never read this site again! This is my first post, just wanted you guys to know that you have to change so I'll come back even though I'll never find out if you did change so there's no point" thing does have a certain amusement value. It would be itneresting to know how many of those geniuses actually keep their promises - there always seem to be a lot of "that WTF was so bad that I'll never read this site again even though I've been reading since the very start" posts, and there's a finite number of people who can make that claim. I doubt your pagerank will ever recover. Everyone who likes the old name start putting up lots of links to it, domain redirects are one of the biggest pagerank killers. I doubt your advertisers will appreciate that. (I see you're getting public service google ads now, that should be your first clue that something is badly wrong...) Why change something that doesn't need to be changed. I concur.. Putting the subtitle as "What the fark" would have done, for goodness sakes. Who reads this site and doesn't know what wtf means? honestly? Would this ever appear in any other capacity? If it was going to start appearing in Newspapers or on a big news site, then it might warrant a change (but still, isn't the column BoFH still a regular at The Register? that never got its name changed...). Shame you won't even keep the old domain or name in any capacity. Why it needed to be changed anyway is a big question that you didn't answer either - the other "sections" are still "wtf"'s, and why the name wouldn't fit a coding contest, I've no idea. Kinda loses the point of the name now. Oh well, you might as well do what you like with it - its all your content of course. At least your grandma can be provided a full "U" grade explaination now. While your right in a lot of the posts are immature, they do express a valid concern that I'm not sure Alex has thought through. ANY time a company changes a brand, espically a well-loved cherished brand, there is the very real concern that they will alienate a good portion of their customers. If Alex is not able to retrieve his long term base of readers or find a new base of readers (for the WorseThenFailure site), then his blog is going to lanquish in obsurity with minimal readers. Much like a lot of blogs on the internet. While the profanity was not a "cherished" thing for me, it was what I was able to use to get other programmers to read the site. When I say "Visit the The Daily What The Fuck?, you'll love the code examples they put up", they would actually laugh and go to the site. Now I feel that if I say "Visit the Worse then Failure site. They have lots of code examples of bad code....." that I will get a luke-warm reception, and they may or may not go. I feel like a few posts already have said, I will probably still come, but I almost certainly won't be elicting anymore new readers. And will probably start looking else where for these type of blog entries. --D no naturally it doesn't follow (except in transmogrified logic) also the things you are interpreting in the word are very culture based ... for example in an american background fuck can be used for nearly everything (ie get the fucking bread out of the fridge! )... while in other countrys it may be only used for the sexual, term if ever.. So please accept my view is diffrent than yours on that word.. Also I have learned that a lot of elder people seem to have less problems with talking about sex as younger ones think they have .. often its rather the younger ones that are so inhibited. But that may also change from culture to culture... but what we are here is geek culture isn't it... when I read wtf I don't think about fuck or any other meaning ... the meaning of wtf has nothing to do with fuck its just the curiosity I see when I come to this place.. more mature or not I don't care... worseThanFailure simply sounds bad ... its not sounding as round as thedailywtf besides I feel discriminated ... I am treated like a child that is not allowed to read "fuck"... cowardice or what ever no .. this renameing is imho simply childish and very uptight ... Movie recommendation for the Mormons: Orgazmo (makes you less uptight) Catcha: Atari .. works also also I don't think people will leave this site because of the renaming ... though I think names have power .. I don't wan't to thing of this site as worsethanfailure ... uncool and bad karma Dont be such a drama queen. Do you honestly visit the site b/c it had the word fuck in it or because you enjoy the content? Had they not made a thread about the name change I wouldnt have even noticed - thedailywtf.com forwards over to this new one. Seriously who gives a shit what its called? thedailywtf.com was a perfect name for the site, I remember reading the articles and thinking What The Fuck… not “this is Worse Than Failure”. And if you had an objection to the word Fuck in the name why not rename the site to thedailywhatthe.com and let the user insert the expletive of choice (e.g. heck, hell, fuck, frick, frack) worse than failure is so wrong on so many levels, the words worse and failure are off-putting to start with but you essentially keep the WTF anachronism which defeats the point of your censorship, that’s just plain daft dude… besides that I cant imagine anything worse than failure, and it doesn’t inspire confidence that there may be a chuckle at the end of it… I used to come here and was guaranteed a laugh if not at the article then at some of the comments, and I don’t mean a half hearted chuckle I mean a roll about the floor tears streaming down the face kind of laugh. Over the last year or so the articles have gone down hill and I must confess I don’t visit as much as I used to do, I guess that you have capitulated to the god of commercialism and Grandma… I do have one suggestion though: keep the “thedailywtf.com” name host it on another server reinstall the old crappy forum software and post articles like you did back in the beginning at least it was funny reading mangled posts bitching about the forum software, the isTrue() jokes and constant references to Paula…. Ah happy days… V Go Fuck Yourself I will terribly miss the Daily WTF. Way To Funny I think you missed the point you read the URL first then go to the site, a piss poor URL will put people off before they come for example which site would you go to: sexywomen.com Or fatuglyslags.com And you know who to show the site to, by whether or not they'd be intimidated by the off-color name. Such filters are a good thing. Christ on crutches... Motherfucking signed. And for the record, for all those self-righteous pricks who think the name-change will change the IQ-level or whatever of the site (good luck on that): I'll still come here. But I don't have to like it. Fuck you Alex. What the fuck, dude? I've been enjoying this site since sometime in 2004. It's on the front page of my rss feed page and has consistently been a hilarious take on the programming side of the world. I have watched as the site has increasing become commercialized, castrated and reborn as a figment of what it once was. This name change itself (albeit sucky) isn't the reason I will stop reading this site today, it is however truly the straw that broke the camel's back. The transformation into shit is complete, goodbye dailywtf, it was a good run. Agree. name a site "Worse than failure" ??? WHAT THE FUCK?????? what was you thinking???? This name sucks, and 200 readers already agree. Forget this dumb idea, rollback all this thing, and make a mock site to show to your grandmother, (put some bunnies in there too, it makes no sense, but she will love such cute thing) PS. My CAPTCHA Test says "gygay" This renaming is just back-assward. Nevertheless: Nice heart warming, probably made up story about the grandma. But why don't you keep the original WTF and just tell her it means worse than failure. It is not that she will check. Or maybe you think the name hurts you in your professional life as too many people judge it negatively before you have a chance to explain why the name is excellent to address a specfic demography of technical people. The old name is still better, IMHO. It condenses the web site perfectly, the political correct version is lame. My best wishes, nonetheless. Captcha: ewww. Indeed. Hell, even my grandma understands what WTF means. Yes, may be he wants to put it in his curriculum vitae and this name doesn't look professional enough.....! I also agree with other comments that the new name might not attract the semi-curious web surfer as strongly, but look at us-- we've already found the site! It's Alex's problem if new people don't find the site, and it sounds like he's happy with his decision. WTF really summarized what this site was all about. Worse than failure just sucks As the quality of the posts has gone down (I'd much rather have one GOOD post per day than the multiple pieces of mediocre content that have appeared on this site of late.), the WTF name would at least bring a smile to my face. Once the old URL stops working, I'm going to stop reading. Now, how can I get a good deal on the leftover TheDailyWTF swag? You win the daily WTF award on this one - and I don't think I've ever posted a comment on this site before, ever - until now. You couldn't even think to say, "... it stands for 'what the frig'" or something else? There's quite a few replacement slangs that begin with the letter f and would fit quite nicely and wouldn't hurt your virgin grandmothers ears - and she wouldn't hit you with her jesus cross either and tell you that you're going to H E DOUBLE HOCKEY STICKS! Wow, just wow. Check your pants man, I think your vagina is dry. long time reader, rarely posting. The name change is a bad idea for the reason others have pointed out: WTF is the perfect phrase to describe what is posted here. It is appropriate for the industry. I say this as someone who rarely swears. It is pretty immature of all the people swearing to prove that can. Most people seemed to grow out of that by the age of 12. However the acronym WTF is embedded in the industry. I'll give it a week or so to see if the name changes back. If not I'll be deleting this from my bookmarks and I won't be visiting again. I'd guess that you are going to lose 75% of your readers. In the end it is your choice, it is your site. I personally think it is a bad choice. David Cameron Funny, I thought the whining mentality was supposed to die off around the age of 12 too... If you wanted to carry some extra weight to your "I won't read your site any more!!" sobbing, threaten not to digg any of the posts ;P That is all. Thank you. fuck fuck fuck fuck also cocks captcha: sanitarium; NO U However it is not my site, and I'll continue reading anyway. Yours truly, Lauri ... because you're not adult enough to swear in front of your family? So you renamed your site to three hastily assembled words that could "explain away" the acronym and sacrificed the entire spirit of the site in the doing? You tell me, What The Fuck is wrong with this picture? "Worse Than Failure" doesn't really describe the content and see a bit "made up" to that it fits the original acronym. However, life moves on... (Keep up the good work) This name change is plain silly. Sad that Alex quoted his grandma as the person responsible for this. OMFG WTF ??1? About the new name, nice and quite relevant too. It's no about failures but strange survival of monsters than shouldn't even be born. It's about kind of corporate anti-darwinism. Instead of being still-born projects, resources are wasted at them, and like I said before, it's often due to overconfidence but it's more than that ; I think this new name really embraces the website's purpose from its very start. The only grief I have : it's less funny. I've enjoyed What The Failure for so long, and as you so accurately wrote, it's the only natural reaction to some of the content. MAN, I'm soooo failured off!!! To me, I dont care, because is the content. But I will not remeber the new name, so please NEVER remove the old name. I will continue typing thedailywtf.com on the browser address. - terrible new name - even worse because you have tried to keep the old acronym - grandma? seriously?! - PC = not-cool anymore = are you selling out?? Politically correct my ass - to me this site is called What-the-Fuck!.. Dude... "Worse than Failure"?! You got that right... :-( Biggest wtf to date, right here. No, he's a pussy. Goodbye DailyWTF :) captcha: darwin (these 250+ comments evolved from apes) I don't understand why so many people are getting worked up about it either, do you come to this website just to stare at the url and write stupid comments? New name sucks monkey balls. WTF were you thinking? *shakes head* :( Daz captcha: alarm (bells) I'm really quite bored by the whole PC-stuff (and I'm not talking about Personal Computers...) did any of the "correct" phrases ("xyz challenged" etc.) change anything in the way people think about things? or are these phrases just covering up narrow-minded and chauvinist thinking which does not change the least by using new labelling? I'm quite convinced that it is the latter - so there's really no sense in giving in to PCness. I'm disappointed. the new name is sort of "lamely correct"... thankfully, this whole PCness did not catch on so heavy here in europe. might be it was not as necessary in the first place ;o) captcha: kungfu - yeah, let's shatter PCness! ;o) Remember kids, 'Failure' is a very bad word. I think Richard Pryor said it best: "Fuck censorship, and its moma.." It's one drop too far... first there was that summary/full article change, then now choice is gone and the name change, as well as the domains. I'm just removing the old wtf bookmark from my bar, no promise, no menace, just dropping. I did enjoy Paula, and the others articles, but not enough to bother with a changing spirit. Do not bother to address me with a reply, i'm just gone. Bye. If that isn't the reason, well I'm sorry but that's what I and many people gather from this post, which is just sad. I always find it quite funny when people try to coddle and protect their grandparents and elderly people because they think "ZOMG, SWEAR WORDS, OH NOOOES!", and it turns out that the old person in question doesn't really give two shits. Come on, dude... changing the name just so you don't have to say "The 'F'? It uh.. stands for.. uh...". Now THAT is a WTF in itself. What about a Schroedinger's cat solution? It's still TheDailyWTF, but nobody is told what WTF stands for unless they ask you, and depending on your mood, you can answer What The Fuck or Worse Than Failure? Or even better, you leave it to the reader to decide whether they prefer to think of WTF as What the Fuck, Worse Than Failure or the new weird thing on technorati? *Sob* Now there's an idea worth of credence ;) Daz You and all the smart persons saying "that's no censorship", have you ever heard the word "Self-censorship"? Correcting one's own speech for being more political correct is just a form of (Self-)censorship. Being political correct is nice and sometimes just necessary, but being PC just for being PC is siiiillllly! So yes, I'm another 'new name sucks' whiner, but at least I'm free and mature enough to use swear words when necessary. Strange: the country that invented the idea of 'free speech' and strangely still stands for liberty in some parts of the world (mostly in your own land I suppose) - but it's inhabitans doesn't even realize that they conduct self-censorship on a broad, daily basis. Too sad... The site's written in .NET (check the URL extension - .aspx), which has a convention of naming methods with an initial capital letter. I think it's to piss off Java programmers that decide to learn C#... captcha: dreadlocks WTF? The Real WTF is that if he'd changed the name to "The Daily WOE" (that's for "What On Earth") then he'd have kept the original sense of the WTF part of the name and had a site name acceptable to prudes. He'd have even been able to avoid having to explain it to lay-people. Now, if only we could guess the real identity of the prude Alex's worried about. Given his propensity for changing the names involved, I'd guess it must not be a grandmother. Maybe a potential future employer instead? Captcha: WTF (means What The Fuck, and don't you forget it buddy!) gone chicken? push from the advertisers? WHAT THE FUCK???? Its like renaming Terminator II to The Robot from The Future. The name is lame. After at least 265 comments, for whoever might be counting, here's yet another gratuitous occurence of each of the words, with my compliments: pussy, sell-out, lame, weak, vagina and naturally Fuck. Pitty. TheDailyRestInPeace. Nerf undead. not sure if this has been edited out or not, but it currently ends with the following: "On February 24, 2007, the site was renamed to Worse Than Failure because Alex was not man enough to say the name in front of his family" yeah, but i bet none of them would dilute their brand by renaming. Amazon Exec - hmm, this jungle name just doesnt make sense to our non-core user base....hey, check and see if bookshop.com is registered..... Pleaaaaaaase The site's written in .NET (check the URL extension - .aspx), which has a convention of naming methods with an initial capital letter.[/quote] No I will not say that it's one more thing they copied from Delphi, I will resist Well as long as it's not The SpamBot from the Future... :-) -Lars Oh well. The content is still great, though, keep it up! Dilute the brand? Are you completely insane? It's a blog! Oh, and tell that to Verizon, Mandriva, Wireshark, Ask.com, CBS Radio, and USAirways... I could keep going. Thats the most sensible thing said so far. haha lots of wtf's in that sentance. Promote worsethanfailure.com, but leave the old name for all those who can't stand change. I don't think the name change is good or bad, but it's a good thing if it makes Alex happy with his work... Though I expect our pleas are falling on deaf ears, please change it back. Though I expect our pleas are falling on deaf ears, please change it back. Please reconsider. This just doesn't feel right. In any case, I don't care much about the name itself, although I liked the old one better. I do care about the political correctnes reason, but sure, it's Alex' decision. Recruiting new readers will be hard because the new name is a turn-off, except maybe for some percentage of the US population who cannot approve of internet slang. In any case, consider being a bit more mature the next time you feel the need to disagree. Name calling is pretty weak. The new brand is weak. Please change it back! I personally don't care what the site is named; I like to read the stories, and I like to read the comments. I'm actually hoping that with the number of people who have threatened to stop reading we might cut down on the number or people putting "frist psot!" comments in. Perhaps the site's name should remain The Daily WTF, but change the official meaning of WTF to "Worse Than Failure". That way we can tell grandma's aboout the site, but we can also still attract the "cursing is cool" crowd. Bingo - it is a blog. That is it. And for all of you who say you are leaving and never coming back you're full of shit. You dont come here b/c it was called WTF (at least I hope not). So I guarantee you will return. Alex, I think you're a good person and sometimes good people make mistakes. Let's hope that Worse Than Failure ends up on the same junk pile as Microsoft Bob. It's creative because all these code segments are worse than failure...at least if the person writing it had failed to implement something then someone could have done it correctly the first time. It explains the site better because, as Alex said, it's more than daily now, there are multiple forums, and not all of the code segments/ stories make you say "WTF", but they are all worse than failure. It's an improvement because of the above reasons and because those who do have relations who care about that stuff can redirect them to something we think is funny and show them a small part of the headaches we deal with daily without having to obfuscate the website. To those who are angry about the idea of political correctness: You'll find that Alex actually increases his readership by changing the name because. How can you work with syntax specific languages all the time and not see that using phrases like "What the fuck?" are like writing this: void c(int* x, int* y){x=y;y=0;(int)x!=0?c(x,y)} when you could have written this: void clear(int* x, int* y) { x = 0; y = 0; } ??? The last time i held back about anything in front of my relatives was when i still depended on them giving me those extra bucks when i visit :P And a coding contest? And "other things"? Just keep the site like it is dude. The site is fine as it is. If you're bored with it, make another one. But don't fuck up a site people already love ;) I shouldn't actually really care, cause, well, it's your site. But the new name really sucks :P Actually, not all censorship is directly externally imposed, perhaps you should do more studying. For example, at one point in US history there was the "Equal Time Doctrine" which held that when a media outlet provided space and/or time for a political view, it was required to provide an equal amount to the other side(s). This led inevitably to the outlets not allowing any political content because they were then forced to double the amount of time/space available. This was correctly held to be a form of censorship as it effectively prohibited certain speech. This is but one example. If you truly did as you suggested you will find many many cases of this. Most of us do in fact practice self-censorship at various times - especially those of us who are married. The "consideration of others" is one of the bricks on the road of good intentions. Are people forced to read thedailywtf.com? Nope. It is a voluntary choice. Thus, there is no need or benefit to the alleged "consideration of others". Further there is no evidence that there exists any consideration of others among a self-selecting audience. That self-selecting audience made a conscious choice to read the site - many due to the name. It was apropos. It made sense. Now it does not. It is no "immature whining" on the part of readers to disagree and to stop reading or suggest that they might. indeed, if anything is immature it is those of you who call the "whiners" children. That is known as an ad hominem attack and is a good way to show you have nothing of value or merit in your argument. Instead of attacking the argument, you attack the poster. Quite immature. Indeed some of the arguments made are made from experience. As history and experience shows us when groups or entities start shifting towards more political correctness, it is more often than not a sign that change is coming. And more often than not that change is in watering down the content, intent, or purpose. The stories have slacked off over the last few months IMO. Taken with the PC name change it is not illogical to conclude there is a chance that the essence of what was thedailywtf.com is waning. Will it go away? At this point it is quite likely. As people who are in the IT world and seeing these WTFs leave the site due to it losing it's essence (it's purity some would say), the submissions of good, genuine WTFs will decrease. This will further decrease the essence that was thedailywtf.com thus perpetuating the fall. Will it come to pass? Perhaps. It would be neither the first nor last time such an event occurs. And yeah, Slashdot-style fr1st psot bullshit irritates the fuck out of me. Hopefully this will cut them down, if not I might have to get hold of a chainsaw (and not by the blade). Jeeze, it's like a little bunch of children arguing over which pokemon is better. It's pretty simple really ... there is a purpose in life for swearing, but not everyone likes to hear a nonstop string of profanities. Sure, this name change might upset all of you stuck up twits that seem to think we give a damn if you come here and post your drivel. Sorry if you're so used to being able to cuss a storm and get your own way ... sorry if you feel you've been oppressed by the man ... hell I'm even sorry if you got dropped on your head as a baby ... BUT WE DON'T GIVE A SHIT! I mean seriously ... how much credibility do you think any major organization would have if they swore in their names or slogans? "" ... yeah, sure bet that'd do wonders for their user base... "Dell gives a damn about you" ... Yeah, so much B2B sales gonna come from that one... "MicroShit Windows" ... actually that might be kinda appropriate Are we getting the point yet dumbdumbs? Since Alex appears to wish to expand the readership and activity of the site, the best move for his credibility is to get a name that won't destroy his credibility as site worth viewing. And for the record, I'm one of these people that consider it highly inappropriate to swear in front of children, your parents/grandparents, or complete strangers ... but I am willing to make some exceptions when dealing with 2 short planks. Now can we all please get over it and go back to laughing at the crazy code? *puts his straight jacket back on* CAPTCHA: Quake - this relevant captcha reminds me of the BFG (10K) in quake2. ID software could have gone all PC or worried about 'inherent meaning' and called it the "What's going on gun" instead. They didn't. It's better this way. -Harrow. But now you mention it that Pikachu one is overrated I know give out free stickers that'll help. Now all we need is a classic WTF story about rebranding and how it always is a diaster. New name is Lame. I'll prob still read, but change it back. The new name is just fine; I had a somewhat similar problem, if only because saying "Double U Tee Ef" out loud sounds weird. Now I can just tell them to check out Worse Than Failure. :) way to go alex 1) Wuss Towards Family 2) Wet, Tiny, Flacid 3) Write Terrible. Flee. 4) Win The For 5) Well Timed Fart Having said that, I have to admit that "Worse Than Failure" does not exactly roll trippingly off the tongue. There have been some much better suggestions in all the above drivel: The Daily WOE (describes my job to a T) The Daily SNAFU (can use Situation Normal - All Fouled Up as the tag, while the script kiddies snicker in the back of the room) The Daily FOOBAR (as above, dates back to K&R C) Alex -- It's your site, and the rest of us should remember we are but guests. I applaud you for having the guts to stick with your convictions, regardless of what the rest of us freeloaders think. First, I started reading the site because of the name. I'm not one to swear, so telling others can make me blush, but I think it attracts readers. I'm glad WTF wasn't spelled out or I might not have visited. (I don't think I could visit a site with f*** spelled out in the URL at work!) But when I viewed the code submitted here I always had "WTF?" running through my head. I thought it was very appropriately named site. I'm doubtful people will stop visiting just because of the name change. If they do, fine. I know won't stop visiting. (I will still go to thedailywtf.com instead of the new URL however!) But I do have to say, the content seems to have gone downhill lately. (But if the "frist" posters all leave, great!) I would have rather seen you tell your granny that WTF stood for Worse Than Failure than change the name of the site (I know I could never said f*** to my grandma!), but I understand how a site with WTF in it could turn off potential employers or companies wanting advertise here. It is kind of sad though. I like the suggestion of keeping thedailywtf.com domain name and having a different header for visitors to that site. As for the free stickers, I think I'm going to try to score one and then just cut off the bottom. How is SNAFU or FOOBAR (originally FUBAR) any better that WTF? They all started with the F meaning f*ck. (At least to me! And still do!) About the meaning of the old name, some have said that 'daily' it's not good now because it's not 'daily' anymore and 'WTF' is because 'cursing is cool' :P I've never thought of it this way I always thought that 'daily' comes from the idea of being a newspaper and 'WTF' not because of the 'fuck' but because 'WTF' is a very very (extremely very) common expression. Anyways, let's hope for the best (= Tact? Discretion? Decorum? When it comes to WTFs for some of which the perpetrators should be sentenced for life in front of a firing squad?? Get real. We're big boys and we talk here of things where swearing is the least you should do. I'd manhandle some of the WTF'ers had I worked with them. Tact, my ass. What the WTF perps do is tactless already, so The Daily WTF was simply following that direction, and it made sense to me. The name change is quite spineless to me. captcha: waffles -- as in, "thinks of a good website name, but then waffles" My recommendation would be to -- as others have said -- to keep multiple version of this site, with different branding depending on the url. By the way, I'm the guy that recommended you to register therealwtf.com. Hurry up before some one else does so. This site deserves it since the tradition originated here. Me, myself, I'll keep on reading this web site, but i promise to use the word FUCK at least once in every comment and commend people to do the same. Oh by the way, this is in reply to: "Profanity is the sound of an inarticulate person seeking to express himself forcefully." Except when you choose so. You miss the point, programmers are incredibly articulate people, yet we still like to express ourselves forcefully. Besides bending your speech to show off your literary prowess for fear of being mistaken as inarticulate is a sign of insecurity and immaturity, unless you do it because you choose so. Disclaimer: not a native speaker, spare me of reaching a thesaurus. Worse than Failure doesn't have the same rhythm and flow as DailyWTF. WTF was a perfect name, because almost all of those that (I believe) this site is aimed at know that expression. And have probably used it. However WTF itself is simply an acronym, or a collection of letters. Why not leave the name as it is, and just explain to your Grandmother that it means Worse Than Failure? Or heck, use the Phonetic alphabet: WhiskyTangoFoxtrot. Of course I'll keep reading; it's the same site. But I really think you've made a bad decision here. I just can't see the point. Look, bring back the old name. I don't want a sticker. This is like M$ getting rid of the My Pictures toolbar in IE7 (the only good feature ever added to IE6). Complained to someone in India who didn't get it and pointed me to the tech department for installation problems...WTF?!? It's a feature I can't install cause you GOT RID OF IT! Look, nobody liked New Coke either: Ironically, the new name describes exactly what it is: worse than [a] failure. I understand your desire to try something new, but I'm not keen on the new name. I don't think it is at all descriptive of what gets posted here. "WTF" is what we say when we see the things that happen to us that relate to the stories and code snippets that are on here. It's relevant, topical, and has "brand" value, since we've been reading "theDailyWTF" for as long as we have. Let me know when you change it back. AL Vinny. let's hear what our good old friend Bill Hicks thinks about the new name: "Piieeeeecccee ooooooofff sssssshhhhiiiiiitttt !!!" c'mon change it back! you know it sucks! Can't we use some Firefox Plugins to replace text and images? No, they called it Blast Field Gun/Generator, IIRC. If they use the same database i don't think it should be any problem. me> you should check out this site to know what not to do... [writes url on board] fairly clueless female student > what does wtf stand for ? me> err ... err ... 'what the fudge' :) class> *laughs* Your grandma can handle the FUCK word. Seriously. To think otherwise is rather condescending. So be honest with us, is it really because of Grandma? Whatever, i'm not going to lose any sleep over it but it's a pretty retarded move IMO. New name sucks, old name rocks. AWFUL AWFUL AWFUL. Captcha: atari!!!!!!! Hell, Grandma had to do it herself, otherwise she wouldn't be a GRANDMA!!!! I've worked for a lot of small companies, and I see it all the time - as soon as things get going people in charge will start making a truly stupid mistakes (like changing the name) and derail themselves. And then fragile ego's prevent them from acknowledging the mistakes and fixing them. Dont follow that road to failure. Let me give you a bit of free advice. GO WITH WHAT GOT YOU THERE. There's tons of sites out there that cater to IT people and cubicle monkeys, and their need for sufficiently watered down, office safe humor. But there is and always will be just one TheDailyWTF.com! DONT SCREW THAT UP!!! STUPID!! Great site, great stories, great original name, and I'll stop there. I'm also holding seminars - well, not for students, but for grown-up real-world developers and decision makers, admittedly ;o) however, there never was any problem spelling out the name of this site for those rare occasions where someone did not know what WTF meant. mind, that I am in Austria, where "fuck" is seen as a much stronger swear word and not as commonplace as in english (e.g. as intensifier in phrases like "this is fucking great" etc.). interestingly, in german the prevalent swear words tend to be more of an anal than a sexual connotation, i.e. most common ones include "scheisse" (="shit"), "Arsch(loch)" (="ass(hole)") etc. as an intensifier we are often using something like "verdammt": "it's fucking cold" = "es ist verdammt kalt" (="it's damned cold") or also "es ist scheisskalt" (=literally: "it is shitty-cold"). also interestingly, "fuck(-ing/-ed)" was sort of incorporated into the german language in the last years as "verfickt" (one wouldn't have said that 20 years ago, and it's still regarded to be stronger than the original german swear words like "scheisse" etc.). hey, this may become a linguistic discussion, nice ;o) Anyway, I read the site via RSS, and I would never have known about the name change if it hadn't been for the announcement. This is because the RSS feed has never been branded with the name of the website. The posts are always tagged with the personal name of the poster, and the only mention of the website was in the link to the discussion. So, I don't follow the logic of the people who are unsubscribing their RSS feeds as if they are suddenly being deprived of a vital part of their lives. If you need it that bad, go get a girlfriend, or a calculator. BTW, is the real WTF still copyright? Please change it back Also, sorry if I've repeated something for the nth time - I cbf'ed reading through 8 pages of comments. CAPTCHA: Completely irrelevant sentence (to the discussion) having to do with my previously overstated and completely unsupported opinion. <analogy> <compares> <problem> <name>Won't Work</name> <description>XML won't fix the problem</description> </problem> <solution> <name>Violence</name> <description>Use more XML</description> </solution> </compares> <compares> <problem> <name>Whine</name> <description>I don't like what he said</description> </problem> <solution> <name>Whine</name> <description>Give my opinion and imply 'He's dumb!' or 'My Dad's stronger than your dad!'</description> </solution> </compares> </analogy> Horse I can sort of understand wanting to distance yourself from the vulgarity. However wose than failure doesn't have the same feeling as wtf. Basically, I'd say call it what you like, but just keep using the wtf label. Also, the daily part, since no one really cares if its more than daily, only if its less. Its cathcy to sayl the daily wtf. Worse than failure sounds like a cheesy comic. I'll still keep reading whatever is chosen, because I some here for the content, not becuase of the name. This way you'd have both names and people could choose between the one they liked more. Please, pretty please with sugar on top, work up some backbone, change the name back and get your act together again. For now, alas, I'm deleting the shortcut from my 'dailies'. So the PC crowd has "won" another battle but they can rest assured that they'll never win the WAR. I know a good proctologist that might help them locating their heads. :) Somebody posted an interesting suggestion to make the old domain use the same content with the old branding - a brilliant idea. captcha: ewww. Yeah, 'sabout right. I completely agree. I feel vaguely let down by this name change - it loses the spirit of the site. Those turlingdromes will probably be saying belgium to everything ;) Actually it would be even funnier than the old title as most people would think it's the "what the fuck" at first until they see the subtitle. it's your loss. I've read all the comments on this article so far, and it's rather interesting to see where the readership's perspectives lie. The collective intelligence of the WTF readers seem to express a negative opinion of the recent name change. A few commenters are complaining of Alex "selling out" to political correctness, and some raised an excellent point about the failure of brand renamings in the corporate world. Another person made an excellent observation about WTF's pagerank, which will inevitably decrease due to the site redirect. The poster "Richard Nixon," who I have a fair amount of respect for, made an excellent argument against the name change. It's a pity that he plans to delete his WTF bookmark and never return to this website again. I personally prefer this website's prior name, but I do not plan on leaving. The content is still the same, and I expect plenty of excellent articles in the future. In my mind I will always refer to this website as "The Daily WTF" though. That's how rebranding often works: consumers are inherently stubborn, and refuse to recognize an entity by its new name. I would like to make a comment to those who plan on leaving. The content is still the same. You enjoyed reading the articles, and a mere name change isn't going to change the quality of the writing. Also, I believe having "thedailywtf.com" retain the old logo is an excellent idea. Though I doubt Alex will approve: it creates a disunity amongst WTF readers. Sincerely, Gregory excellent.. - Look at this WTF! - What? - Look at this Weird Tech Finding, here... There are numerous WTFs (weird terrible features) in the name change, for instance if Alex just said he was messing with us we could have even laugh, instead, he said he was embarrassed in from of his grandma!? fucking lame! And a non sequitor by itself, does your Granny read tech news at all? CAPTCHA: burned, That's how we feel! Oh wait, yeah, that never happened. Boo on you, Alex. DailyWTF forever! PUT THE FUCKING NAME BACK THANK YOU Your ZX81 basic is WTF material on it's own. Line 10 fails with a syntax error. You need a PRINT before the quoted text you pillock. p.s. please change the name back. Your new site name sucks. 2)And to all you who are saying "Well I'm still going to keep coming. The content hasn't changed." YOU ARE HIGH! The content of this site HAS changed drastically in the last few months. And THAT is the reason I'm am coming here less and less. 3)You are overreaching, Alex. KISS (keep it simple stupid). There is elegance and and permanance in simplicity. Does the term "Bloatware" mean anything to you. Capcha: stinky (how apropos) The new name is terrible, I have unsubscribed from your feed. Don't get me wrong though, I'm not going on an ultimatum tantrum declaring I'll stop reading, I read this site for the contents, not some ridiculous reason like the title or "coolness". And the contents just keep on getting better and better. Keep that part up! :) captcha: (i know) kungfu. "WTF" is a perfect description of the articles on this site and has "mind share" (as the marketing drones call it) with the target audience. When I tell people I saw something on "The Daily WTF", they either get it immediately or will never understand either the name or the site's contents. "Worse Than Failure" is a completely artifical phrase, has no meaning to anyone, and doesn't convey the true meaning and horror of the articles. Tell your grandmother to suck it up and deal with the profanity, or just stop telling her what it means. Changing the name to match someone who isn't in the target audience and who therefore doesn't have the background to understand most of the articles (and therefore won't read them anyway) is ridiculous. Awareness of an alternative to "classic" WTF has been raised; just keep it in the tagline. The old URL was much better, punchier and to the point. But this is the best you came up with? Another cool website jumps the shark. To keep everyone happy, can I suggest it changes again to WTWTF (Worse than WTF). And can I just add, in the interests of being fair and balanced: "What the FUCK?!!!" Get real, switch back. Shame on you... The site's written in .NET (check the URL extension - .aspx), which has a convention of naming methods with an initial capital letter.[/quote] No I will not say that it's one more thing they copied from Delphi, I will resist[/quote] And I will resist pointing out that the guy who made Delphi also made .NET 1. Alex started a fun site to mock his own industry 2. It caught on and grew 3. Alex recognized the opportunity to generate some ad revenue offset some of the investment in costs and 6. Upon announcing the change, the children whined and cried 7. Life went on, with loyal readers and advertisers staying-the-course (sorry - it really does hurt to use this particular phrase) Of course, changing the title doesn't prevent people from using bad words in their posts. Hmmm.... S*** P*** F*** C*** C***S***** M*****F***** and T*** !!! --gc Everyone's dad is a M*****F*****! --me This is the point at which a successful business decides whether or not to turn down business because, on one hand, it is income, and who doesn't want that? On the other hand, what the customer (in this case, advertisers) demands distracts the company from it's core offering, ultimately driving away customers. Personally, I like the old name, and won't stop reading, but I don't have high hopes as to the quality of the site getting much better. It's not the name change, per se, but what the name change is indicative of. Too many times, a decision like this marks a turning point in a company's offerings, usually for the worse. In this case, the site may go on for years to come, but not with the bite that a name like "Daily WTF" connotates. It's precisely the word "Fuck" that causes the contributors to unconsciously feel okay (and possibly go out of their way) in writing their posts in that wonderfully sardonic and irreverent way we love so much. It's like the difference between a Sunday car ride and a couple laps around the race track. They're both really great for their intended audiences, but don't expect a fan of one to like doing the other. I'm a race car guy who likes taking it easy once in a while, but I fear this wbesite may have just slowed down a bit too much. I don't feel it was an encounter with Grandma that caused this change, but more likely it was one or more advertisers. However, keep this in mind; this website is an entertainment venue, which goes by different rules than the rest of the business world. Whether you hate him or like him, do you really think Johnny Knoxville's resume has been doomed because he had the word "Jackass" on it? I wish I was that cool ;-) By taking that problem away from me, I'm very much grateful for the namechange (especially since that they can claim the acronym is a cunning coincidence than a bowdlerisation). But anyways, I think this whole thing has convinced me never to use swears in names, because if you try and change it, no matter how much better you might like the new name, people will accuse you of bowdlerising it to suit the Man. "dailywtf", no that's not it. "worse than fucked?" no, that's not it either. It's some name that didn't have "fuck" in it. Um... Oh now google found dailywtf for me. Thanks google! Yeah, not really marketable. And just who are you selling out to? There isn't a demographic that A.) isn't familiar with the expression "WTF" and B.) understands programming well enough to understand this site. While neither "The Daily WTF" or "Worse Than Failure" give any indication that it's an IT site, it was a link on TheOldNewThing that drew me here, not the name itself, so that's not a deal breaker for me. I agree with many previous comments the snigger factor is way down with the new name, and possibly in another direction where it looks like you're trying to pull one over the audience by disguising a well known acronym with a fairly trite title. However, there's a difference in spirit in the two names -- if you are planning to change the direction of the site then it's possible you'll lose some readers (maybe me, who knows, I'll wait until it happens). Anyway all up, it's your site dude, good luck to you. "What" has only one syllable. ... to be 15 again... Grow a pair and change it back. new name is much better. Tal. - I'm submitting the 450th (or something like that) comment thinking that it will actually be read by anyone. - (the real real wtf): So many code monkeys are so endeared to a vulgar word. I for one (and I do mean one) am glad I will no longer say, "I don't like the name, but the stories are great". this is pathetic.i still can't believe it. its like google.com change its name to gsearchengine.com. The feature is the same, but it lost some something. Something that make a site so special. ...... I still can't believe it. Fuck! Out of my bookmark dammit! 'Worse than Failure'.. no kidding If any of the advertisers signed up with "The Daily WTF" (N.B. the "advertise" page still uses that name) then by changing the name there is probably a breech of contract! They probably signed up to this site BECAUSE it has a brand name and url that will continue to attract NEW visitors, which the new name will not at anything like the same rate. Another problem is that a lack of new visitors should cause a sharply reduced flow in new WTF's to post on the site...the existing people only know so many. Such a lack of new content should cause a second lowering in the eyeballs as the interest level of the site to existing people goes down. there are only 3 lame articles. It's not too late to change it back. ok... everybody was shocked, everybody laughed *g*... now change it back! Some things you didn't know about grandma: - she has watched porn - she has been to a strip club - she has taken drugs - she has told very dirty jokes - she has tried every sex position in the book - she has done as much fucked up shit that you have (most likely more because she is older) - she has used the word FUCK thousands of times - yes she is old, but she is also a human just like you and everybody else you know and people were not any tamer back in her day. She puts on an act for the grand kids just like all grandparents because she sees you as innocent in the same way you see her. Removing this site from my daily list and I won't be adding it back till the name is back. fubar Same concept. They'd understand. And by the way, the site's new name is fubar. :P If he changed it for the actual reasons he sited (his grandmother) then it would have been simpler to just tell her that WTF stood for “Worse than failure.” It would have saved the time and money of having to register another domain and create a new logo. thedailyWTF (btw daily does not necessarily mean once per day, it can also mean every day) worse than failure SUX0RZ :( LMFAO i would so love THAT name. captcha ATARI (would laugh too) Like many others said before: - Old name is way better! - Reason for change is sad, pathetic, hypocrite and what not. Nothing wrong with the word Fuck. (Listen to 'The F-word') You could always explain to Grandma that WTF in theDailyWTF stands for 'Worse than Failure'. That said I will continue reading the site, cause the stories makes me smile everyday. Hope thedailywtf.com stays forwarding (cause I don't want to update my bookmark), or like mentioned by another reader, maybe you can use both domains with the same content, just a different header. Captcha: stinky - Yeah, this smells bad! "The Daily WTF" was hilarious - it alone made people laugh when I mentioned it. This new name feels more like indulging in stories that make us feel superior. Yuck, I'm embarrassed to mention the name now. A Lange & Sohne watches Audemars Piguet watches Ulysse Nardin watches Vacheron Constantin watches p.s. the 'f' totally stands for "frack", everyone knows that. Or "frell", that works too. ("I see you drivin' round town, with the girl I love, and I'm like: "Frack you!")
http://thedailywtf.com/articles/comments/Announcement_0x3a__Website_0x2e_RenameTo(_0x201c_Worse_Than_Failure_0x201d_)
CC-MAIN-2015-32
en
refinedweb
crowdflower 0.0.12 CrowdFlower API - Python client Client library for interacting with the CrowdFlower API with Python. Installation Install from PyPI: easy_install -U crowdflower Or install the latest version from GitHub: git clone cd crowdflower python setup.py develop Basic usage Import like: import crowdflower CrowdFlower API keys are 20 characters long; the one below is just random characters. (You can find your API key at make.crowdflower.com/account/user.) conn = crowdflower.Connection(api_key=') Inspecting existing jobs Loop through all your jobs and print the titles: for job in conn.jobs(): print job.properties['title'] Creating a new job Create a new job with some new units: data = [ {'id': '1', 'name': 'Chris Narenz', 'gender_gold': 'male'}, {'id': '2', 'name': 'George Henckels'}, {'id': '3', 'name': 'Maisy Ester'}, ] job = conn.upload(data) update_result = job.update({ 'title': 'Gender labels', 'included_countries': ['US', 'GB'], # Limit to the USA and United Kingdom # Please note, if you are located in another country and you would like # to experiment with the sandbox (internal workers) then you also need # to add your own country. Otherwise your submissions as internal worker # will be rejected with Error 301 (low quality). 'payment_cents': 5, 'judgments_per_unit': 2, 'instructions': 'some <i>instructions</i> html', 'cml': 'some layout cml, e.g., ' '<cml:text', 'options': { 'front_load': 1, # quiz mode = 1; turn off with 0 } }) if 'errors' in update_result: print(update_result['errors']) exit() job.gold_add('gender', 'gender_gold') Launch job for on-demand workers (the default): job.launch(2) Launch job for internal workers (sandbox): job.launch(2, channels=['cf_internal']) Check the status of the job: print job.ping() Clean up; delete all the jobs that were created by the above example: for job in conn.jobs(): if job.properties['title'] == 'Gender labels': print 'Deleting Job#%s' % job.id print job.delete() View annotations collected so far: for row in job.download(): print row Debugging / Logging To turn on verbose logging use the following in your script: import logging logging.basicConfig(level=logging.DEBUG) The CrowdFlower blog is the definitive (but incomplete) source for API documentation: The source code for the official ruby-crowdflower project is also helpful in some cases. This package uses kennethreitz’s Requests to communicate with the CrowdFlower API over HTTP. Requests is Apache2 licensed. Support Found a bug? Want a new feature? File an issue! Contributing We love open source and working with the larger community to make our codebase even better! If you have any contributions, please fork this repository, commit your changes to a new branch, and then submit a pull request back to this repository (peoplepattern/crowdflower). To expedite merging your pull request, please follow the stylistic conventions already present in the repository. These include: - Adhere to PEP8 - We’re not super strict on every single PEP8 convention, but we have a few hard requirements: - Four-space indentation - No tabs - No semicolons - No wildcard imports - No trailing whitespace - Use docstrings liberally The Apache License 2.0 contains a clause covering the Contributor License Agreement.): - 53 downloads in the last day - 422 downloads in the last week - 1240 downloads in the last month - Author: Christopher Brown - Keywords: crowdflower crowdsourcing api client - License: Copyright 2014 People Pattern: chbrown - DOAP record: crowdflower-0.0.12.xml
https://pypi.python.org/pypi/crowdflower/0.0.12
CC-MAIN-2015-32
en
refinedweb
Hey Justin On 31 Jul 2012, at 15:14, Justin Mclean wrote: > Nor am I suggesting one namespace per new component added. I was aware of that, I meant to imply we would have got a new namespace with 4.5 and the introduction of Spark DataGrid, Form, Image, Module, Busy Indicator, SkinnablePopUpContainer, Date/Time, Number/Currency Formatters & Number/Currency Validators. If we start out with a pattern of introducing a new namespace when u have new components in a release, we should continue that pattern no matter how many components you introduce in a release. For me if they are spark based classes they should be in spark packages and in the spark namespace, if they are mx based classes they should be in mx packages and namespace. The packages and namespace names should not be influenced by the release a component was introduced in. > The postccode formatter/validator classes are not quite mx and not quite spark so it's a little difficult to know where to add them if we didn't add a new namesake. The PostCodeValidator implementation extends the mx validator and the PostCodeFormatter extends the mx validator. Neither extend the spark validation or formatting base classes, therefore IMO they should both go in mx packages and be in the mx namespace. If someone then adds a spark based solution they can be also be easily added to the SDK without conflict and preserving backwards compatibility. > +1 to that. We might be able to change that via adding duplicate packages via manifest files (I think not tired). Along the same lines as mx:Rect/s:Rect? mx:Rect/s:Rect are namespace issues. My point was a little off topic i guess although the package names do fit into the 2 broad namespaces of mx and spark. Moving the components out of "components" and into "containers" & "controls" could again be a bit of a headache for those that have used them in AS (find and replace would do the job again though). By updating the manifest, there wouldn't be any mxml namespace issue. We in agreement that changing the URI for the namespaces to remove 'adobe' would also be good. maybe we could provide a simple AIR tool that performs the find and replace on a codebase for both the URI, and spark container/controls packages. Tink
http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201207.mbox/%[email protected]%3E
CC-MAIN-2015-32
en
refinedweb
I'm on a foreign exchange program and studying computer architecture at university. The problem is this university it assumes the students have studied C-programming as their course in basic programming, whereas at my regular university the course in basic programming is ADA. I am trying to catch up but need some help since I don't really know how C-language is structured. My current problem: Shift operations, 32-bit. Got two functions int func1 (unsigned word) { return (int) ((word<<24) >> 24); } int func2 (unsigned word) { return ((int) word <<24) >> 24; } How does these functions work? I understand the logical shift but not the functions. My guess is func1 first shift all bits 24 bits to left, and then 24 bits to right, resulting in the 8 LSB will be the same but the 24 MSB will turn into 0's. Even if my guess is correct I can't see how func2 would work differently. I would greatly appreciate if someone could ellaborate for me. Examples with 127, 128, 255, 256 as in values? EDIT: Represented by two's complement
http://cboard.cprogramming.com/c-programming/130129-studying-computer-architecture-without-knowledge-c-programming-got-some-questions.html
CC-MAIN-2015-32
en
refinedweb
/* * MSDecoder.h - decode, decrypt, and/or verify signatures of messages in the * Cryptographic Message Syntax (CMS), per RFC 3852. * * See CMSEncoder.h for general information about CMS messages. */ #ifndef _CMS_DECODER_H_ #define _CMS_DECODER_H_ #include <CoreFoundation/CoreFoundation.h> #include <Security/SecCertificate.h> #include <Security/SecTrust.h> #include <stdint.h> #ifdef __cplusplus extern "C" { #endif /* * Opaque reference to a CMS decoder object. * This is a CF object, with standard CF semantics; dispose of it * with CFRelease(). */ typedef struct _CMSDecoder *CMSDecoderRef; CFTypeID CMSDecoderGetTypeID(void); /* * Status of signature and signer information in a signed message. */ enum { kCMSSignerUnsigned = 0, /* message was not signed */ kCMSSignerValid, /* message was signed and signature verify OK */ kCMSSignerNeedsDetachedContent, /* message was signed but needs detached content * to verify */ kCMSSignerInvalidSignature, /* message was signed but had a signature error */ kCMSSignerInvalidCert, /* message was signed but an error occurred in verifying * the signer's certificate */ kCMSSignerInvalidIndex /* specified signer index out of range */ }; typedef uint32_t CMSSignerStatus; /* * Create a CMSDecoder. Result must eventually be freed via CFRelease(). */ OSStatus CMSDecoderCreate( CMSDecoderRef *cmsDecoderOut) /* RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Feed raw bytes of the message to be decoded into the decoder. Can be called * multiple times. * Returns errSecUnknownFormat upon detection of improperly formatted CMS * message. */ OSStatus CMSDecoderUpdateMessage( CMSDecoderRef cmsDecoder, const void *msgBytes, size_t msgBytesLen) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Indicate that no more CMSDecoderUpdateMessage() calls are forthcoming; * finish decoding the message. * Returns errSecUnknownFormat upon detection of improperly formatted CMS * message. */ OSStatus CMSDecoderFinalizeMessage( CMSDecoderRef cmsDecoder) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * A signed CMS message optionally includes the data which was signed. If the * message does not include the signed data, caller specifies the signed data * (the "detached content") here. * * This can be called either before or after the actual decoding of the message * (via CMSDecoderUpdateMessage() and CMSDecoderFinalizeMessage()); the only * restriction is that, if detached content is required, this function must * be called befoere successfully ascertaining the signature status via * CMSDecoderCopySignerStatus(). */ OSStatus CMSDecoderSetDetachedContent( CMSDecoderRef cmsDecoder, CFDataRef detachedContent) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Obtain the detached content specified in CMSDecoderSetDetachedContent(). * Returns a NULL detachedContent if no detached content has been specified. * Caller must CFRelease() the result. */ OSStatus CMSDecoderCopyDetachedContent( CMSDecoderRef cmsDecoder, CFDataRef *detachedContentOut) /* RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Optionally specify a SecKeychainRef, or an array of them, containing * intermediate certs to be used in verifying a signed message's signer * certs. By default, the default keychain search list is used for this. * Specify an empty CFArrayRef to search *no* keychains for intermediate * certs. * If this is called, it must be called before CMSDecoderCopySignerStatus(). */ OSStatus CMSDecoderSetSearchKeychain( CMSDecoderRef cmsDecoder, CFTypeRef keychainOrArray) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Obtain the number of signers of a message. A result of zero indicates that * the message was not signed. * This cannot be called until after CMSDecoderFinalizeMessage() is called. */ OSStatus CMSDecoderGetNumSigners( CMSDecoderRef cmsDecoder, size_t *numSignersOut) /* RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Obtain the status of a CMS message's signature. A CMS message can * be signed my multiple signers; this function returns the status * associated with signer 'n' as indicated by the signerIndex parameter. * * This cannot be called until after CMSDecoderFinalizeMessage() is called. * * Note that signature and certificate verification of a decoded message * does *not* occur until this routine is called. * * All returned values are optional - pass NULL if you don't need a * particular parameter. * * Note that errors like "bad signature" and "bad cert" do NOT cause this * routine to return a nonzero error status itself; such errors are reported * in the various out parameters, listed below. * * Inputs: * ------- * cmsDecoder : a CMSDecoder which has successfully performed a * CMSDecoderFinalizeMessage(). * signerIndex : indicates which of 'n' signers is being examined. * Range is 0...(numSigners-1). * policyOrArray : Either a SecPolicyRef or a CFArray of them. * These policies are used to verify the signer's certificate. * evaluateSecTrust : When TRUE, causes the SecTrust oebject created for the * evaluation of the signer cert to actually be evaluated * via SecTrustEvaluate(). When FALSE, the caller performs * the SecTrustEvaluate() operation on the SecTrust object * returned via the secTrust out parameter. * NOTE: it is hazardous and not recommended to pass in FALSE * for the evaluateSecTrust parameter as well as NULL for the * secTrust out parameter, since no evaluation of the signer * cert can occur in that situation. * * Outputs: * -------- * signerStatusOut -- An enum indicating the overall status. * kCMSSignerUnsigned : message was not signed. * kCMSSignerValid : both signature and signer certificate verified OK. * kCMSSignerNeedsDetachedContent : a call to CMSDecoderSetDetachedContent() * is required to ascertain the signature status. * kCMSSignerInvalidSignature : bad signature. * kCMSSignerInvalidCert : an error occurred verifying the signer's certificate. * Further information available via the secTrust and * certVerifyResultCode parameters. This will never be * returned if evaluateSecTrust is FALSE. * kCMSSignerInvalidIndex : specified signerIndex is larger than the number of * signers (minus 1). * * secTrustOut -- The SecTrust object used to verify the signer's * certificate. Caller must CFRelease this. * certVerifyResultCodeOut -- The result of the certificate verification. If * the evaluateSecTrust argument is set to FALSE on * input, this out parameter is undefined on return. * * The certVerifyResultCode value can indicate a large number of errors; some of * the most common and interesting errors are: * * CSSMERR_TP_INVALID_ANCHOR_CERT : The cert was verified back to a * self-signed (root) cert which was present in the message, but * that root cert is not a known, trusted root cert. * CSSMERR_TP_NOT_TRUSTED: The cert could not be verified back to * a root cert. * CSSMERR_TP_VERIFICATION_FAILURE: A root cert was found which does * not self-verify. * CSSMERR_TP_VERIFY_ACTION_FAILED: Indicates a failure of the requested * policy action. * CSSMERR_TP_INVALID_CERTIFICATE: Indicates a bad leaf cert. * CSSMERR_TP_CERT_EXPIRED: A cert in the chain was expired at the time of * verification. * CSSMERR_TP_CERT_NOT_VALID_YET: A cert in the chain was not yet valie at * the time of verification. */ OSStatus CMSDecoderCopySignerStatus( CMSDecoderRef cmsDecoder, size_t signerIndex, CFTypeRef policyOrArray, Boolean evaluateSecTrust, CMSSignerStatus *signerStatusOut, /* optional; RETURNED */ SecTrustRef *secTrustOut, /* optional; RETURNED */ OSStatus *certVerifyResultCodeOut) /* optional; RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Obtain the email addresserEmailAddress( CMSDecoderRef cmsDecoder, size_t signerIndex, CFStringRef *signerEmailAddressOut) /* RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Obtain the certificateerCert( CMSDecoderRef cmsDecoder, size_t signerIndex, SecCertificateRef *signerCertOut) /* RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Determine whether a CMS message was encrypted. Returns TRUE if so, FALSE if not. * Note that if the message was encrypted, and the decoding succeeded, (i.e., * CMSDecoderFinalizeMessage() returned noErr), then the message was successfully * decrypted. * This cannot be called until after CMSDecoderFinalizeMessage() is called. */ OSStatus CMSDecoderIsContentEncrypted( CMSDecoderRef cmsDecoder, Boolean *isEncryptedOut) __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Obtain the eContentType OID for a SignedData's EncapsulatedContentType, if * present. If the message was not signed this will return NULL. * This cannot be called until after CMSDecoderFinalizeMessage() is called. * The returned OID's data is in the same format as a CSSM_OID; i.e., it's * the encoded content of the OID, not including the tag and length bytes. */ OSStatus CMSDecoderCopyEncapsulatedContentType( CMSDecoderRef cmsDecoder, CFDataRef *eContentTypeOut) /* RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Obtain an array of all of the certificates in a message. Elements of the * returned array are SecCertificateRefs. The caller must CFRelease the returned * array. If a message does not contain any certificates (which is the case for * a message which is encrypted but not signed), the returned *certs value * is NULL. The function will return noErr in this case. * This cannot be called until after CMSDecoderFinalizeMessage() is called. */ OSStatus CMSDecoderCopyAllCerts( CMSDecoderRef cmsDecoder, CFArrayRef *certsOut) /* RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); /* * Obtain the actual message content (payload), if any. If the message was * signed with detached content this will return NULL. * Caller must CFRelease the result. * This cannot be called until after CMSDecoderFinalizeMessage() is called. */ OSStatus CMSDecoderCopyContent( CMSDecoderRef cmsDecoder, CFDataRef *contentOut) /* RETURNED */ __OSX_AVAILABLE_STARTING(__MAC_10_5, __IPHONE_NA); #ifdef __cplusplus } #endif #endif /* _CMS_DECODER_H_ */
http://opensource.apple.com/source/libsecurity_cms/libsecurity_cms-55002/lib/CMSDecoder.h
CC-MAIN-2015-32
en
refinedweb
Apologies in advance about my English and the brief descriptions, as my native language is German. The TService class wraps all the things around a NT-Service and allows to create a NT-Service with a few lines of code. TService virtual void ServiceProc (void) = PURE; This is the main service procedure. If you leave ServiceProc() your service is stopped. ServiceProc() virtual const char* GetName (void) = PURE; virtual const char* GetDisplayName (void) {return GetName();} Simply return a string to identify the service by name. You can also overwrite the GetDisplayName() procedure to install your service with a different display name. GetDisplayName() bool Execute (void); Run as Service. This should be the default call for your service from the main procedure. bool ConsoleMode (void); To check out a service you can run the service from a console window. The service runs in the logged on user account. bool Start (void); bool Stop (void); bool Install (void); bool Remove (void); virtual bool Help (DWORD context = 0); Control the service. Help() searches for a helpfile named "GetName().hlp" bool Terminated (void;} void Terminate (void;} Your ServiceProc() must check Terminated() periodically to get a stop request. When Terminated() returns true you should leave the ServiceProc() und cleanup your Service. Call Terminate() to stop the Service from outside ServiceProc(). ServiceProc() Terminated() Terminated() Terminate() const char * LastError (void) const {return m_ServiceError;} Returns the last error as string from the system. void PrintLastError (const char *Caption = NULL); Writes the last error to the standard error device if any. bool SetConfigValue (char* key,BYTE *value,DWORD nvalue,cfValType type = cfString); bool GetConfigValue (char* key,BYTE *buf,DWORD *nbuff,cfValType *ypet); enum cfValType { cfBinary = REG_BINARY, cfDword = REG_DWORD, cfString = REG_S }; Set and get Registry Values like RegSetValue() and RegQueryValue(). The values are located under : "HKLM\SYSTEM\CurrentControlSet\Services\%ServiceName%\ServiceConfig\". RegSetValue() RegQueryValue() void LogEvent (char* event,evLogType type = evInfo,WORD category = 0); enum evLogType { evError = EVENTLOG_ERROR_TYPE, evWarning = EVENTLOG_WARNING_TYPE, evInfo = EVENTLOG_INFORMATION_TYPE }; Write to the SystemEventLog. virtual bool Init (void) {return true;} Is be called before ServiceProc(), you can setup your service here. If Init() returns false the service will stop. Init() false virtual void Cleanup (void) {return;} The service is stopped, cleanup resources from Init(). Init() virtual void LogoffEvent (void) {return;} A user has logged off. If you would check that a user is logged in search in ServiceProc() for a window named "Shell_TrayWnd". ServiceProc() virtual void ShutdownEvent (void) {Terminate();} The Machine is shutting down, terminate the service. Here is a short quick and dirty sample how to create a service. To control the service run the executable from the console with the parameters : to test the service from a console window #include "cService.hpp" // cService.h AND cService.cpp class cMyService : private TService { public: cMyService (char *arg); private: const char* GetName (void){return "MyService";} void ServiceProc (void); }; cMyService::cMyService (char *arg) { if (arg != NULL) { unsigned short a = *((unsigned short*) arg); if (a == *((unsigned short*) "/i")) Install(); else if (a == *((unsigned short*) "/r")) Remove(); else if (a == *((unsigned short*) "/s")) Start(); else if (a == *((unsigned short*) "/q")) Stop(); else if (a == *((unsigned short*) "/t")) ConsoleMode(); } else Execute(); } void cMyService::ServiceProc (void) { while (! Terminated()) { Beep (400,100); Sleep (5000); } } void main (int argc,char *argv[]) { delete new cMyService(arg.
http://www.codeproject.com/Articles/7651/A-general-purpose-NT-Service-Class?fid=73515&df=90&mpp=10&sort=Position&spc=None&tid=1370540
CC-MAIN-2015-32
en
refinedweb
Ticket #10140 (closed enhancement: fixed) Base sage.geometry.cone on the Parma Polyhedra Library (PPL) Description (last modified by vbraun) (diff) As a first useful application of the PPL Cython library interface I have changed sage.geometry.cone.Cone to use the PPL wrapper instead of cddlib. Here is a quick benchmark with a fan that was somewhat challenging: sage: from sage.schemes.generic.toric_variety_library import toric_varieties_rays_cones sage: rays, cones = toric_varieties_rays_cones['BCdlOG'] sage: timeit('Fan(cones,rays)') 5 loops, best of 3: 1.95 s per loop With the old Polyhedron/cddlib interface, I got instead 5 loops, best of 3: 42.1 s per loop Apply - trac_10140_sublattice_intersection.patch - trac_10140_base_cone_on_ppl_original.patch - trac_10140_reviewer.patch - trac_10140_switch_point_containment_to_PPL.patch Apply trac_10140_sublattice_intersection.patch, trac_10140_base_cone_on_ppl_original.patch, trac_10140_reviewer.patch, trac_10140_switch_point_containment_to_PPL.patch Attachments Change History comment:2 Changed 3 years ago by novoselt Wonderful speed gain! However, when do you expect PPL to become a standard library? It is my impression that currently it is difficult to add new standard packages and it is more likely to become an optional one for a while. In which case there should be an option to use PPL, but its presence should not affect work of toric geometry. Little picks at the patch: - I don't see why it is necessary to remove the possibility of Cone construction from a Polyhedron. - I think that new possible input C_Polyhedron should be documented in the INPUT section rather than a note in the end. - Is it possible to make C_Polyhedrons immutable (on demand)? - I prefer all line generators to be put in the end of ray list for non-strictly convex cones, if they were determined automatically. Because if one works with faces of such a cone, then all these generators are the same and, therefore, others are more "important". That's very minor, of course. - I am trying to follow PEP8 style guide which says: When raising an exception, use "raise ValueError('message')" instead of the older form "raise ValueError, 'message'". New line 1167 tries to change this recommended form to an old one ;-) - Looking at what you have done in facet normals, maybe we should change cone.dual_lattice() method to always return ZZn if there is no dual method for the lattice? Thinking of the class group situation, it seems like a more sensible default. - I think cones should not be printed explicitly (i.e. using "print") when ValueError is raised due to attempt to intersect cones in lattices of different dimension. The reason is that maybe some users want to intercept this exception and deal with it in their own way without showing any output. Also, I think that the check should be done not for equality of lattice dimensions, but for equality of lattices themselves, because cones of different lattices should not be intersected. comment:3 Changed 3 years ago by vbraun - Well you can always write Cone(Polyhedron.rays()). I took out the possibility to directly pass a Polyhedron to make sure that I caught all instances of old code. - PPL stuff should not be visible to the user, I think. If anything, I'll probably move the note to a comment in the code. - In the C++ library there is no particular support for that. Of course, in C++ you just declare things const if you don't want them mutable. So I guess we should emulate that by get/set_immutable() methods. 4,5,6. Sounds good. - Oops left-over print() from debugging things :-) comment:4 Changed 3 years ago by novoselt - My concern was that it removes publicly exposed feature that has been in Sage for several months (and is likely to stay there a few months more). So I vote for resurrecting this possibility, which is actually very easy when things are PPL based - you just replace Polyhedron with Polyhedron.rays(), as you said. More importantly, if PPL will become something that users have to install separately, then ALL of the old code related to polyhedron -> cone should remain untouched, but there should be some new path to go around it. By the way, I am not quite familiar with packages - is it possible to change Sage library during package installation? If not and all PPL-related code should be in the library no matter whether PPL is installed or not, then "doctest-fixing" may be a bad approach here. Maybe we should change doctests to make them deterministic. Although I am not quite sure how. - I see your point, but having non-documented input rubs me the wrong way. If you don't want to mention PPL in the documentation (which is quite reasonable), maybe we can add a function like _Cone_from_PPL for such input? For the actual class there is no need in this - it is already not supposed to be called directly by users. - Yes, that's what I had in mind. My concern is that when you use PPL representation, you may need to actually change it somehow to get the result. So far in your code you were constructing a new object based on existing one, so that the original is intact, but it seems to me that this step is quite easy to forget and it would be nice to have something forcing you to do so. - That explains it ;-) comment:5 Changed 3 years ago by jdemeyer This is quite possibly a stupid question, but does it mean we can get rid of cddlib if this gets merged? comment:6 Changed 3 years ago by mhampton No! Currently cddlib is required by Gfan. It may be possible to convince Anders Jensen, the author of Gfan, to switch to PPL as well. comment:7 Changed 2 years ago by vbraun - Status changed from needs_work to needs_review The updated version now cleans up all doctests. comment:8 Changed 2 years ago by vbraun comment:9 Changed 2 years ago by vbraun comment:10 Changed 2 years ago by fbissey I have trouble applying trac_10140_base_cone_on_ppl.patch. The following section doesn't apply at least: @@ -1578,7 +1667,7 @@ Let's take a 3-d cone on 4 rays:: - sage: c = Cone([(1,0,1), (0,1,1), (-1,0,1), (0,-1,1)]) + sage: c = Cone([(1,0,1), (0,1,1), (-1,0,1), (0,-1,1)], check=False) Then any ray generates a 1-d face of this cone, but if you construct such a face directly, it will not "sit" inside the cone:: Is it due to #9972? I can see that particular section in #9972 but slightly shifted in line number. comment:11 Changed 2 years ago by novoselt It is quite likely, since #9972 was invasive and written mostly by me, i.e. it was not coordinated with this one. What exactly do you mean by "have trouble"? I just tried to apply it on top of my queue and it succeeded with two fuzzy hunks. For the record: I am going to review this ticket but I am waiting for PPL package to get approved ;-) comment:12 Changed 2 years ago by fbissey comment:13 Changed 2 years ago by novoselt - Status changed from needs_review to needs_work - Work issues set to rebasement This is on a fresh installation of 4.6.2.alpha2 without any patches applied: applying trac_10140_base_cone_on_ppl.patch patching file sage/geometry/cone.py Hunk #30 succeeded at 1981 with fuzz 1 (offset 10 lines). Hunk #39 succeeded at 2332 with fuzz 2 (offset 29 lines). now at: trac_10140_base_cone_on_ppl.patch Changed 2 years ago by vbraun - attachment trac_10140_fix_toric_variety_doctests.patch added Rebased patch comment:14 Changed 2 years ago by vbraun - Status changed from needs_work to needs_review I've uploaded my rebased patch. comment:15 Changed 2 years ago by novoselt - Work issues rebasement deleted François, does the new version work for you? (Now I don't get any fuzz for both patches.) comment:16 Changed 2 years ago by fbissey It applies if I start from 4.6.2.alpha2, but I have fuzz: patching file sage/geometry/cone.py Hunk #29 succeeded at 1969 (offset 10 lines). Hunk #30 succeeded at 1980 with fuzz 1 (offset 10 lines). Hunk #31 succeeded at 2076 (offset 30 lines). Hunk #32 succeeded at 2110 (offset 30 lines). Hunk #33 succeeded at 2129 (offset 30 lines). Hunk #34 succeeded at 2217 (offset 30 lines). Hunk #35 succeeded at 2237 (offset 30 lines). Hunk #36 succeeded at 2271 (offset 30 lines). Hunk #37 succeeded at 2289 (offset 30 lines). Hunk #38 succeeded at 2314 (offset 30 lines). Hunk #39 succeeded at 2330 with fuzz 2 (offset 29 lines). Hunk #40 succeeded at 2464 (offset 29 lines). Hunk #41 succeeded at 2492 (offset 29 lines). Hunk #42 succeeded at 2523 (offset 29 lines). Hunk #43 succeeded at 2608 (offset 29 lines). Hunk #44 succeeded at 2874 (offset 38 lines). Hunk #45 succeeded at 3100 (offset 38 lines). Hunk #46 succeeded at 3119 (offset 38 lines). Hunk #47 succeeded at 3189 (offset 38 lines). Hunk #48 succeeded at 3204 (offset 38 lines). patching file sage/geometry/fan.py Hunk #3 succeeded at 1824 (offset 13 lines). patching file sage/geometry/fan_morphism.py Presumably the same fuzz you had before. I wonder if it is because of #10336 which was merged in alpha2? But it applies and that's the main thing. comment:17 Changed 2 years ago by fbissey On a positive note, not only it applies but the tests pass without problem on a variety of platform (linux-x86,linux-amd64, OS X 10.5 (x86)). comment:18 Changed 2 years ago by novoselt - Reviewers set to Andrey Novoseltsev - Milestone changed from sage-feature to sage-4.7 Now that PPL package got positively reviewed, I will go over this one shortly. comment:19 Changed 2 years ago by mhampton This looks good to me, and #10039 has positive review now. It seems that Andrey's request to let Polyhedron objects be allowed as Cone input has been addressed, as well as the other issues he raised. All tests in the geometry module pass with these patches on linux (64-bit), solaris (mark on skynet), and OS X 10.6. I would give a positive review, but I will defer to Andrey in case of design disagreements. comment:20 Changed 2 years ago by novoselt Thank you, I definitely want to look over it again. In particular, I don't like that the order of rays has changed in doctests in fan_morphism (the very end of the first patch). So I want to figure out why does it happen there and how to fix it, if possible. comment:21 Changed 2 years ago by novoselt comment:22 Changed 2 years ago by novoselt - Status changed from needs_review to needs_info So - am I right that PPL does not preserve the order of rays when it computes the minimal set of generators? I am a strong believer in preserving this order (i.e. if the given set is already minimal, the output should be the same, and if there are "extras" then they should be removed, but the rest should be still in order). If PPL does not preserve it, I propose adding such "sorting" to the cone constructor. It should also eliminate the need for many doctest fixes. The only argument against, I think, is performance, but I doubt that it will be noticeable in toric geometry computations, where one usually wants to work with simplicial cones that don't have many rays. comment:23 Changed 2 years ago by novoselt One more suggestion: can we please completely return my code for constructing a cone from a Polyhedron? (Modulo parameter names, of course). Because: - I had checks that it is a cone and its vertex is at the origin. - While it is easy to get rays from it and proceed using PPL, it is unnecessary since it is already known that rays are minimal generators, and we also get splitting into rays/lines for free, which was cached. comment:24 Changed 2 years ago by vbraun Neither PPL nor cddlib preserve ray orders. PPL does return them in a canonical (internal) order, irregardless of the original order. Changed 2 years ago by vbraun - attachment trac_10140_base_cone_on_ppl.patch added Updated patch comment:25 follow-up: ↓ 26 Changed 2 years ago by vbraun. comment:26 in reply to: ↑ 25 Changed 2 years ago by novoselt. It is very annoying to deal with several objects that are "the same" but are slightly different, e.g. have different ordering of rays. Also, users may have some ideas about "convenient order" and provide rays in it: if I create the first quadrant as Cone([(1,0), (0,1)]), I expect that the first ray will be the first basis vector and the second - the second, as it was given. Remembering to add check=False is annoying. Making check=False default is very dangerous, as I have learned the hard way. I really really want to preserve the order whenever possible. I even plan to add the possibility to manually provide the order of facets and points for lattice polytopes, since it will allow convenient work with Cayley cones and polytopes of nef-partitions - they are easy to create with current code, but keeping track of face duality is very ugly. I will be very happy to add the sorting myself on top your patch, fixing all affected doctests, so pleeease let me do it ;-). I am more concerned about validity checks than performance here. And I am also happy to do the necessary copy-pasting here. So you don't really have to do anything, just say that you are OK with it! comment:27 follow-up: ↓ 28 Changed 2 years ago by vbraun I also want to remind you of the case of non-strict cones where the ray representation is not unique; There need not be a subset of the initial rays that is minimal. I understand that its sometimes convenient to keep rays ordered in some particular way, but then you are really working with decorated rays and not just with cones. Instead of enforcing some particular order, maybe we can have a DecoratedCone (Though how is that different from IntegralRayCollection) / DecoratedLatticePolytope? Having said that, I'm not totally opposed to it. But we should at least think a little bit about the points I mentioned... comment:28 in reply to: ↑ 27 Changed 2 years ago by novoselt That was indeed useful to think about it! But I didn't change my mind ;-) Regarding non-strictly convex cones, I think that there is no point in trying to preserve user's order, but it would be nice to state it in the documentation: if users care about the order in these cases, they must use check=False option. Otherwise I think the output should be as it is now: strict_ray_0, ..., strict_ray_k, line_1, -line_1, ..., line_m, -line_m For strictly convex cones I indeed want to work with decorated cones, but I'd rather not call them that way explicitly everywhere. The reason is that these cones are designed for toric geometry and so their rays (or ray generators) correspond to homogeneous coordinates. If I want to have rays r1, r2, r3 and I want to associate to them variables x, y, z, then I would probably create a cone with rays r1, r2, r3, and then create a toric variety based on this cone with variable names x, y, z. If the order of rays may change during construction, it means that I need to construct the cone, look at the order of rays, and then rearrange my variable names accordingly, if necessary. This is inconvenient, so while mathematically the cone is determined by the set {r1, r2, r3} and associated variables are not x, y, z or even x1, x2, x3, but x_r1, x_r2, x_r3 and one should refer to them specifying the ray, in practice it is convenient to have some fixed order. Since there is no natural order on ray, the best one seems to be the one given by the user. I think that in most case users will actually provide the generating set, so computing it is almost redundant, but sometimes users (if they are like me ;-)) THINK that they provide the generating set, but it is not - that's why it is important to have check=True by default. Regarding this option, it also seems natural that if it is acceptable to give check=False for certain input, with check=True the output will be the same, but slower. I am not sure if there is any sense in having separate classes for cones and decorated cones. It is also not obvious which one should be the base class. On the one hand, decoration is extra structure. On the other hand, storing stuff as tuples gives this structure "for free". Personally, I never wished to have cones without any order. That does not mean that others don't need them, but right now one can use current cones with is_equivalent instead of == and ray_set instead of rays to disregard the order. Some more personal experience: - The first time I had problems due to order change was with getting nef-partitions from nef.x. Because in general PALP preserved the vertices if they are vertices, but nef.x was reordering them when computing nef-partitions (I think this is no longer true in the last version of PALP, but I started using it a little earlier). The solution was to add sorting to some function in the guts of lattice polytope, so that this reordering is not exposed to the user. - The second annoying issue with PALP was that the i-th face of dimension 0 could be generated by j-th vertex for i != j. While in principle vertices as points of the ambient space and vertices as elements of the face lattice are different, this discrepancy can be quite inconvenient: you need to remember that it exists and you need to insert extra code to do translation from one to another. So eventually I added sorting for 0-dimensional faces. - Faces of reflexive polytopes and their polars are in bijection. I find it very convenient to write things like for y, yv in zip(p.faces(dim=2), p.polar().faces(codim=3))This eliminates the need for each face to be able to compute its dual. While it is not terribly difficult, it is certainly longer than having no need to do any computations at all. In order to accomplish it, there is some twisted logic in lattice polytopes that ensures that only one polytope in the polar pair computes its faces and the other one then "copies it with adjustments". I hope to redo it in a better way, but at least it is isolated from users, for whom the dual of the i-th face of dimension d is the i-the face of codimension d. Somewhat related behaviour of other parts of Sage: - Submodules compute their own internal basis by default, but there is an option to give user's basis. In the latter case the internal basis is still computed and one has access to either basis. One can also get coordinates of elements in terms of the user basis or in terms of the standard basis using coordinates for one and coordinate_vector for another and I never remember which one is which. Personally, I think that this design can be improved. But in any case they do have some order on the generators as well as other Parents without strict distinction between decorated and "plain" objects. - QQ["x, y"] == QQ["y, x"] evaluates to False, so while these rings are generated by the same variables the order matters. (To my surprise, one can also define QQ["x, x"] and it will be a ring in two variables with the same name...) By the way - currently cones DON'T try to preserve the given order of rays, they just happen to do it often and I didn't really notice it in the beginning (or maybe I assumed that cddlib preserves the order and there is no need to do anything). But I consider it a bug. As I understand, some people need to work with polyhedra that have a lot of vertices, maybe thousands. In this case sorting according to the "original order" may be a significant performance penalty, so it makes sense that PPL does not preserve it. It may also be difficult for general polyhedra with generators of different type and non-uniqueness of minimal generating set (as it is the case for non-strictly convex cones). But cones for toric geometry in practice tend to be relatively simple and strictly convex (maybe with non-strictly convex dual, but users are less likely to construct those directly), so I think that this is not an issue. We can add an option to turn sorting on or off, but, of course, I'd like to have it on by default. As I said, I will be very happy to do writing of sorting and doctest fixing myself!-) comment:29 follow-up: ↓ 30 Changed 2 years ago by novoselt Marshall, what's your opinion on the order topic? comment:30 in reply to: ↑ 29 Changed 2 years ago by mhampton Marshall, what's your opinion on the order topic? I don't have strong feelings on it. I think it might be worth sorting things to some canonical order - sorting rays or vertices or whatever other input should be fast, even in python, compared to the actual polyhedral work. But I know with cddlib that was a bit of a hassle since the internal ordering might be different. comment:31 follow-up: ↓ 32 Changed 2 years ago by vbraun. As far as associating rays with homogeneous variables, I think we really should subclass the multivariate polynomial ring. Then we can convert rays to homogeneous variables and properly check homogeneity of polynomials.. comment:32 in reply to: ↑ 31 Changed 2 years ago by novoselt() I agree that the second form is better (and I think that it is implemented). But in order to implement it efficiently, one still has to copy the face lattice of the original polytope. Now let's consider faces of a polytope P and of the polytope 2*P. It is also more natural to write for face in P.faces(dim=3): print y, 2*y but if you would like 2*y to be a face of 2*P, not just a polytope obtained from y, how should it be implemented? If they are stored in the same order, you can easily and very efficiently iterate over faces and their multiples. Move on to cones. Let tau be a face of sigma. What is tau.dual()? Is it the cone dual to the cone tau or the face of the cone dual to cone sigma which is dual to the face tau? So we need to have a different method name then, tau.dual_face() perhaps. Now let P, P*, C, and C* be Cayley polytopes and cones with their duals. Faces of P and C are in direct relation, let F be a face of P. How should I get the face of C corresponding to it? Perhaps C(F) is a natural notation, but how is it going to be implemented? Finding a face generated by rays coinciding with vertices of F is quite inefficient and no matter how it is done the result probably should be cached somewhere. Where? Maybe in P or C and perhaps it is easy to achieve with a special class for supporting polytopes and supported cones. It is probably a good idea to implement this in a long run. But there are very many good ideas that have to be implemented and if faces are ordered in a consistent way, then those ideas can wait a little, because there is a satisfactory access to related faces now. And even when they are implemented, I think that the most efficient implementation will still rely on internal consistency of orders. Maybe it is mysterious currently that the face order represents duality. Then it is my fault for not documenting it correctly and I will try to fix it. If it was clearly stated in the documentation, with an example how to use it, there would be nothing mysterious about it. Note also, that I don't want to SORT faces in any particular way. I just want to have related faces in the same order as the "initial ones" happened to be. If nothing else, it leads to efficient implementation, so it is good to do it even if end-users will use some other interface.. What I was trying to say is that some kind of decoration occurs on its own just because rays are internally stored as tuples. If you are really against it and want to store them as sets, that means that anything that uses rays of that cone should refer to complete rays and instead of other lists and tuples with order matching the order of rays you need to use dictionaries with rays as keys. Which certainly can be convenient but you can do it even if cones have some internal order of rays, so I don't see a reason for having extra classes.. I'd rather go with UnorderedCone() ;-) I suggested adding a parameter that will turn off sorting for performance gain. I also don't want to have IMPLICIT order, I want to completely and clearly document it in the description of the Cone constructor and its face methods when/if there is any particular order on anything! As far as associating rays with homogeneous variables, I think we really should subclass the multivariate polynomial ring. Then we can convert rays to homogeneous variables and properly check homogeneity of polynomials. Another good idea, but even after this conversion do you really want to use names like x_(1,0,0,0,0), x_(0,1,0,0,0), x_(0,0,1,0,0), x_(0,0,0,1,0), x_(0,0,0,0,1), x_(-1,-1,-1,-1,-1) for coordinates on P^5?! And for more complicated varieties expressions in such variables will be completely unreadable! So it is very natural to fix some order of rays and create variables whose index refers to the index of the corresponding ray. Even better is to have both "order matching" and "direct conversion".. Let's leave QQ["x, x,"] and come back to QQ["x, y"]. This notation asks for a polynomial ring over QQ with variables named x and y, correct? Well, QQ["y, x"] asks exactly the same, yet the rings are different. So, really, QQ["x, y"] asks for a polynomial ring over QQ with the first variable named "x" and the second one "y". In the same way I think that Cone[(1,0), (0,1)]) asks for a cone with the first ray (1,0) and the second ray (0,1). If we do have def Cone(..., keep_ray_order=True) then interactive usage will be pretty, yet there will be no performance penalty for internal computations. Users don't construct Kaehler cone explicitly, they call Kaehler_cone and somewhere inside there is a call to Cone which is written once and may have as many extra parameters as necessary. People refer to the first/second/third equation of the system, generator of the ideal etc. all the time even without assigning some labels/indexed names to them. It is natural to assume that you have specified the order of something when you have written this something in a certain order. Perhaps, the optimal solution is to let Cone([(1,0), (1,1), (0,1)]) to be the cone generated by three (ordered) rays - all of the given ones. And then have a function like minimal_ray_generators. I guess that's kind of what ideals are doing. I didn't think about this option before. And now I am a bit hesitant to implement it since I am not sure how many places in the code rely on the minimality of the set of generators for the strictly convex case... I do realize that all your points are valid and my arguments significantly reflect my personal taste, however I also do use the toric code in addition to writing it and if I find something to be convenient it is likely that some other people also will find it convenient. Unfortunately, it is a bit hard to say how many (or how few) of these other people would like my way. Although I suspect that not very many will strongly dislike it since it is harmless (except for some potential slow down, which should not be a noticeable issue and I suggest to have a way around it when it is.) I'll ask on sage-combinat for some input. At the very least I would like to be able to call a function sage.geometry.cone.keep_ray_order(True) that will turn on user-order preservation for the current session, even if the default is not to keep the order. But I still strongly believe in trying to preserve the order whenever possible, i.e. in the strictly convex case. (Overall, this issue is terribly similar to ambient_ray_indices discussed some time ago on #9972.) comment:33 Changed 2 years ago by novoselt comment:34 follow-up: ↓ 35 Changed 2 years ago by vbraun? The underlying implementation may very well rely on some ray order, this is why we have the check=False option. Just make sure that it is exposed by some reasonable class/method hierarchy, and make sure to spell out that the underlying implementation is subject to change and everyone has to go through the dual_face() method or whatever it is called. Its funny that you mention the ideals, because they make no promise about the generators. The MPolynomialIdeal class tries to abstract the mathematical notion of ideal, there is no particular choice of generators implied (and any particular choice is subject to change without notice). This is why methods like ideal.groebner_basis() return a PolynomialSequence and not a MPolynomialIdeal. comment:35 in reply to: ↑ 34 Changed 2 years ago by novoselt? I was talking about P and 2*P, whose faces are in as direct relationship as it goes. But it also implies that faces of P and (2*P).polar() are in a canonical inclusion-reversing bijection. If the order of their faces "match", one can iterate over related pairs without any extra effort/code and I found it to be extremely convenient and natural (having said that, I admit that I am biased and there can be a cleaner interface). Necessity to do such iterations arises in working with Hodge numbers and generating functions corresponding to nef-partitions. Regarding time: my first straightforward implementation using 2*y as just a polytope obtained from y was taking about an hour on a certain (simple) example. After some optimization and in particular enforcing face order on P and 2*P and using 2*y as a face of 2*P brought it down to less than a minute. Of course, part of the problem is that PALP is used via system calls so combining many of them into a single one helps a lot, but in any case nothing can beat zip(P.faces(2), twoP.faces(2)) ;-) Anyway, I plan to redo lattice polytopes ones I'm done with my thesis and I'll try to both explain this efficient method in the documentation and provide a more natural interface whenever possible. Its funny that you mention the ideals, because they make no promise about the generators. I meant that by default they don't even compute a minimal set of generators. But I don't recall any use for non-minimal representation of cones, so I think that there isn't any point to do the same for cones. We got a reply from Nicolas on sage-combinat, which I copy below: Just two cents from an outsider (I'll certainly will have a need for Cone's at some point, but don't have practical experience). When there is no clear cut answer for a design decision, I tend, whenever possible, to just postpone it; more often than not, the answer will become clear by itself after accumulating practical experience. In that case, there could be an option like: Cone(rays=[(1,0),(0,1)], keep_order = True) and the documentation could explicitly specify that the default value is currently *undefined*, and will be chosen later on. I guess for the moment I would unofficially set it to False, since that's the cheapest while True is somehow "adding a feature"; so that's less likely to break code in case of change later on. So, I guess, I am currently loosing 1:2, unless you have changed your mind ;-) If not, I propose the following: - State in the documentation of Cone that by default the order of rays is going to be "fixed but random" and in particular may change in future versions of Sage. (Which may very well happen due to upgrade in PPL. Personally, I don't like when these "random orders" change and it is yet another reason to stick with the user ordering ;-)) - Add keep_order=False to the list of parameters. If keep_order=True, well, keep the order as much as possible in the strictly convex case, i.e. through away extra generators from the original ordered list. If keep_order=True and the cone is not strictly convex, perhaps give a UserWarning like "keep_order=True does not affect not strictly convex cones, see check=False instead!" - Add a function sage.geometry.cone.keep_order(True/False), without importing it into the global name space, that will switch the default behaviour for the current session, so that if users always want keep_order=True, they don't have to repeat it all the time. Mention this function in the description of keep_order parameter in Cone. - Perhaps in the documentation of that function we may mention that if users feel strongly that it should be the default always, they can explain it on the above sage-combinat discussion. (I suppose it is OK to include such links in docstrings.) - In the documentation examples where the ray order is important, use keep_order=True instead of check=False (there are some examples in the patch where you have added this option). - Maybe keep_ray_order is better than just keep_order. Does it sound like a good compromise? (I.e. the one that leaves everyone mad (c) Calvin ;-)) comment:36 Changed 2 years ago by vbraun How about ordered=[True|False], just to throw out yet another possibility. Though either name is fine with me. I agree that passing this option is preferable to check=False. I don't see any point in documenting a way to change the default behavior. If anybody wants that then they can just overwrite Cone() in the global namespace with their own version. At least then its expected that no doctests work any more. I don't remember seeing a similar function anywhere else in the Sage library. comment:37 Changed 2 years ago by novoselt ordered seems unintuitive to me, let's stick with keep_order as the middle-length option. I think that functions adjusting default behaviour are awesome and convenient (for those who use them). automatic_names comes to mind and there is something else similar in spirit. But if you are totally against it, scratch 3&4. comment:38 Changed 2 years ago by vbraun I don't see automatic_names as anything in that spirit. It only changes the preparser. Which is special since you don't call it yourself usually, so you can't pass options to it. comment:39 Changed 2 years ago by novoselt - Cc nthiery added Hi Volker, I have started work on the reviewer patch, so please don't change the attached ones. (They don't seem to apply cleanly anymore, but I will take care of it.) Hi Nicolas, I am adding you to the cc field in case you have further comments on the ray order ;-) I have realized that - I mostly care about preserving the order of rays only if they are already the minimal generating rays of a strictly convex cone and - implementing this does not introduce any performance penalty: once we have constructed the PPL object, we know if the cone is strictly convex or not and we can just compare the number of its rays with the input - it these numbers are the same, then we can use the input rays as minimal generators directly. So instead of keep_order I would like to add the above check. Advantages: - It is convenient if the user cares about the ray order, e.g. for constructing an affine toric variety straight from a cone, when one may want to associate particular variable names with particular rays. - It allows easy writing of doctests which explicitly show rays without worrying that the order will change in future releases and a bunch of trivial adjustments will be required. - It constructs the same cones with check=True and check=False options in cases when it is indeed OK to use check=False option. - It is a bit more intuitive to the user and in particular nice for tutorials and live demonstrations. - It makes me happy ;-) Frankly, I cannot really think of any disadvantages... So - can I proceed and implement this? comment:40 Changed 2 years ago by novoselt Why lattice = rays[0].parent() was replaced with lattice = rays[0].parent().ambient_module() on the new line 378? I think if the input rays live in some sublattice, then the constructed cone also should live in the same sublattice. This distinction is important e.g. for constructing dual cones (I don't think that such duals will currently work, but it is not a reason to prohibit using sublattices ;-)) There are also several doctests where you have replaced things like Cone([(0,0)]) with Cone([], lattice=ToricLattice(2)) - what is your objection against the first form? I think it is a convenient and natural way to construct the origin cone, if you don't care about lattices. comment:41 Changed 2 years ago by novoselt In the intersection method of cones there is now if self.lattice_dim() != other.lattice_dim(): raise ValueError('The cones must be in same-dimensional lattices.') Why? And why in this form? I think that we probably should check that lattices are compatible, but not by dimension equality. We probably don't want to allow intersecting a cone in N with a cone in M. On the other hand, it is reasonable to allow intersection of cones living in sublattices of the same lattice. The lattice of the intersection should be the intersection of sublattices. Any objections? comment:42 Changed 2 years ago by novoselt - Status changed from needs_info to needs_work - Description modified (diff) - Work issues set to intersect sublattices First of all, what I have done with Volker's patches: - Folded and rebased them (on top of 3 FanMorphism patches as I wanted to check that nothing breaks, without them there is one fuzzy hunk but it still applies). - Removed doctest fixes related to the ray order as it is restored by the reviewer patch. - Removed the modification to cone-from-polyhedron code since the old one still works fine and makes all the checks. - Left the original of the line mentioned in since it seems to be the correct one to me. The reviewer patch does the following: - Preserves ray order for strictly convex cones given by the minimal generating set of rays, without performance penalty. - Removes self.dual_lattice() is self.lattice() check as dual_lattice now returns ZZ^n if there is no "honest dual". (This was not the case when this ticket was created.) - Modifies the intersection code (discussed above) slightly. This is not its final version, I need to play a bit with lattice intersection but I think it is the way to go. While I was writing it, I ran tests on top of clean sage-4.7.alpha4 and there are breaks which seem to be related to rays[0].parent() without .ambient_module(). I believe that they are fixed by #10882. comment:43 Changed 2 years ago by vbraun Sounds good. I don't remember why I added rays[0].parent().ambient_module() but I'm pretty sure you'll find out when you run the doctests. I don't know any place where we actually use intersections of cones in different sublattices. I would be fine with leaving this NotImplemented. comment:44 Changed 2 years ago by novoselt Fan morphisms (will) use cones in sublattices: the kernel fan naturally lives either in the domain lattice or the kernel sublattice, which corresponds to different toric varieties. I also suspect that while you were working on this patch zero rays passed to the cone constructor were breaking the code and that's why you have replaced the related doctests. However, there is now a catch for this case, so they should be fine. In general, I think it is important to allow zero rays in the input, e.g. if you are constructing a projected cone and some of the rays are mapped to the origin. Anyway, I'll take care of sublattices and move here appropriate chunks from #10943, #10882, and #11200 (unless they get reviewed in the near future and can be left in front of this one ;-)) comment:45 Changed 2 years ago by vbraun Either syntax for constructing the trivial cone is fine with me. Personally I prefer the one I used since it is a bit more obvious. Changed 2 years ago by novoselt - attachment trac_10140_base_cone_on_ppl_original.patch added Folded Volker's patches without some of the doctest fixes. comment:46 Changed 2 years ago by novoselt - Description modified (diff) - Work issues changed from intersect sublattices to remove/review dependencies I have removed the first change of the trivial cone construction since it was in the place where two variants are explicitly described. The new patch slightly changes the general method of intersection modules and adds an extending method to toric lattices. Now there is no need to check lattice compatibility in the cone intersection: it will fail if they are wrong. Added #10882 as a dependency for now, but may remove it on the weekend by moving some code here. It is not difficult, but it will break my "thesis queue" so I want to do it carefully and take care of consequences. Changed 2 years ago by novoselt - attachment trac_10140_sublattice_intersection.patch added comment:47 Changed 2 years ago by novoselt - Status changed from needs_work to needs_review - Work issues remove/review dependencies deleted - Authors changed from Volker Braun to Volker Braun, Andrey Novoseltsev - Dependencies set to #10039 - Reviewers changed from Andrey Novoseltsev to Andrey Novoseltsev, Volker Braun - Description modified (diff) OK, the new patches apply cleanly to sage-4.7.alpha4 and pass all long tests, documentation builds without warnings. The first patch now includes some adjustments to lattice treatment, moved here from #10882 (I'll update the patch there shortly.) Volker, if you are fine with my patches, please switch it to positive review! Changed 2 years ago by novoselt - attachment trac_10140_switch_point_containment_to_PPL.patch added comment:48 Changed 2 years ago by novoselt One more patch: I was learning how to use PPL and trying to switch point containment check in cones so that it does not call polyhedron method. In the process I have discovered a bug with constructing cones without rays, i.e. like Cone([], lattice=N): the PPL representation in this case didn't know the ambient space of this cone leading to mistakes. It is fixed in the second hunk of the last patch. Regarding original goal, here are the timings before sage: timeit("c = Cone([(1,0), (0,1)]); (1,1) in c", number=1000) 1000 loops, best of 3: 27.8 ms per loop sage: c = Cone([(1,0), (0,1)]) sage: timeit("(1,1) in c", number=1000) 1000 loops, best of 3: 729 µs per loop and after sage: timeit("c = Cone([(1,0), (0,1)]); (1,1) in c", number=1000) 1000 loops, best of 3: 2.3 ms per loop sage: c = Cone([(1,0), (0,1)]) sage: timeit("(1,1) in c", number=1000) 1000 loops, best of 3: 290 µs per loop As we see, even on very simple example we get 10x speedup for "single checks" when most of the time is spent constructing different representations of the cone. When everything is precached and we count only actual containment, we have 3x speed up. A more complicated example before:: c.ray_matrix() [ -4 -2 -2 -1 0 5 0 1] [ 5 -2 1 3 -1 -1 2 -2] [ 1 -73 -1 2 -23 -1 -1 0] [ 2 1 1 -1 0 1 4 1] [ 3 0 -1 0 1 1 15 -3] sage: timeit("(1,2,3,4,5) in c", number=1000) 1000 loops, best of 3: 4.52 ms per loop and after: timeit("(1,2,3,4,5) in c", number=1000) 1000 loops, best of 3: 1.3 ms per loop (I didn't bother with fresh start timing here.) Conclusion: Volker's PPL wrapper rocks! comment:49 Changed 2 years ago by vbraun - Status changed from needs_review to positive_review I'm happy with the reviewer patch, so positive review alltogether ;-) comment:50 Changed 2 years ago by jdemeyer - Milestone changed from sage-4.7 to sage-4.7.1 comment:51 Changed 2 years ago by jdemeyer - Status changed from positive_review to needs_work @ Andrey Novoseltsev: Can you upload your reviewer patch again using hg export tip? The user and date fields are missing. Changed 2 years ago by novoselt - attachment trac_10140_reviewer.patch added comment:52 Changed 2 years ago by novoselt - Status changed from needs_work to positive_review Done! comment:53 Changed 2 years ago by jdemeyer - Status changed from positive_review to closed - Resolution set to fixed - Merged in set to sage-4.7.1.alpha1 comment:54 Changed 2 years ago by vbraun I'm adding an unformatted "Apply" section in the ticket description to help the patch buildbot figure out the correct patches. comment:55 Changed 2 years ago by vbraun Apply trac_10140_sublattice_intersection.patch, trac_10140_base_cone_on_ppl_original.patch, trac_10140_reviewer.patch, trac_10140_switch_point_containment_to_PPL.patch comment:56 follow-up: ↓ 57 Changed 16 months ago by dimpase How hard would it be to set up PPL to do Vrepresentation and Hrepresentation of Polyhedron? In the current patch I fixed all doctest errors in cone.py and fan.py. There are some more broken doctests in other modules due to a different output ray ordering, and I will fix these once the ppl spkg and wrapper are reviewed.
http://trac.sagemath.org/sage_trac/ticket/10140
CC-MAIN-2013-20
en
refinedweb
23 September 2008 23:12 [Source: ICIS news] WASHINGTON (ICIS news)--US business leaders warned on Tuesday that the entire economy could collapse unless Congress quickly approves a $700bn (€476bn) banking bailout, but the urgent rescue plan continued to draw fire from legislators. ?xml:namespace> The US Chamber of Commerce warned that congressional delay in working out final terms for the bailout could trigger the worst ?xml:namespace> “Make no mistake - if Congress does not act quickly, decisively and responsibly to prevent a total freeze up of our financial system, the entire economy could collapse with devastating consequences,” said Bruce Josten, the chamber’s vice president for government affairs. “Without a doubt, it would be the greatest financial calamity since the Great Depression, impacting consumers and small and large businesses alike,” Josten added. However, despite urgent appeals from top US federal financial officials and business interests, the Treasury Department’s hurriedly drawn $700bn rescue proposal came under heavy criticism from both Democrats and Republicans on Capitol Hill. Senator Chris Dodd (Democrat-Connecticut), chairman of the Senate Banking Committee, said that the plan put together by Treasury Secretary Henry Paulson “is stunning and unprecedented in its scope and lack of detail”. Dodd said the Paulson plan would give the Treasury secretary unprecedented powers without meaningful oversight. Speaking to Paulson at a Banking Committee hearing on the proposal, Dodd said, “After reading this proposal, I can only conclude that it is not just our economy that is at risk, Mr Secretary, but our Constitution as well.” Earlier, Paulson and Federal Reserve Board Chairman Ben Bernanke urged quick approval of the plan, warning that congressional efforts to amend it or add other provisions would create delays that could put the economy at dire risk. Despite that appeal for congressional approval of the Treasury proposal, Dodd said a rescue plan must include more congressional oversight and audit provisions, limits on banking executives’ incomes and measures to forestall further bank foreclosures on homeowners unable to make mortgage payments. The chamber’s Josten warned that extensive legislative manoeuvring to effect the kind of changes that Dodd wants puts the whole plan at risk. “All of those issues can, should and must be addressed - but not now,” Josten said. “The current legislation must not become a vehicle to advance pet interests, completely overhaul the financial regulatory system or exact revenge against those believed to have gotten us into this mess,” Josten added. “Now we must put partisan differences aside to save our economy. There can be no further delay.” Congressional deliberation on the Treasury bailout plan is expected to continue on Wednesday. Paulson and Bernanke are to testify before the House Committee on Financial Services. Dodd did indicate, however, that he recognises the urgency of the financial crisis and the need for prompt action. Despite his sharp reservations about the Paulson plan, Dodd said, “Nevertheless, in our efforts to restore financial security to American families and stability to our markets, this committee has a responsibility to examine this proposal carefully and in a timely manner.” ($1 = €.
http://www.icis.com/Articles/2008/09/23/9158496/us-business-warns-congress-of-economic-collapse.html
CC-MAIN-2013-20
en
refinedweb
Exceptions should be class objects. The exceptions are defined in the module exceptions. This module never needs to be imported explicitly: the exceptions are provided in the built-in namespace as well as the exceptions module. PendingDeprecationWarning. In future versions, support for string exceptions will be removed. Two distinct string objects with the same value are considered different exceptions. This is done to force programmers to use exception names rather than their string value when specifying exception handlers. The string value of all built-in exceptions is their name, but this is not a requirement for user-defined exceptions or exceptions defined by library modules.. For string exceptions, the associated value itself will be stored in the variable named as the second argument of the except clause (if any). For class exceptions, that variable receives the exception instance. If the exception class is derived from the standard root class Exception, the associated value is present as the exception instance's args attribute, and possibly on other attributes as well. base class. More information on defining exceptions is available in the Python Tutorial under the heading ``User-defined Exceptions.'' The following exceptions are only used as base classes for other exceptions.. This class is derived from EnvironmentError. See the discussion above for more information on exception instance attributes. from ... importfails to find a name that is to be imported. os.errorexception. See EnvironmentError above for a description of the possible associated values. New in version 1.5.2. Instances of this class have atttributes filename, lineno, offset and text for easier access to the details. str() of the exception instance returns only the message. Exception
http://docs.python.org/release/2.3.3/lib/module-exceptions.html
CC-MAIN-2013-20
en
refinedweb
#include <EC_Lifetime_Utils_T.h> Collaboration diagram for TAO_EC_Servant_Var< T >: p rhs 0 Constructor. Assumes ownership of p. Copy constructor. Adds reference to rhs. Destructor. Removes a reference from the underlying object, possibly destroying it. Template member constructor from a pointer that will implicitly cast to type T. Assumes ownership of p. This constructor allows constructs such as: Servant_Base<Base> p(new Derived); Template member copy constructor from a TAO_EC_Servant_Var<Y>, where Y can be implicitly cast to type T. As an IN parameter. As an INOUT parameter. Dereference the underlying object. Return a void pointer to the underlying object. This allows it to be used in conditional code and tested against 0. Smart pointer operator-> provides access to the underlying object. Template member assignment operator from a pointer to Y, where Y can be implicitly cast to type T. Template member assignment operator from a TAO_EC_Servant_Var<Y>, where Y can be implicitly cast to type T. Assignment operator. Assumes ownership of p. Assignment operator. Adds reference to rhs. As an OUT parameter. [private]
http://www.dre.vanderbilt.edu/Doxygen/5.4.7/html/tao/rtevent/classTAO__EC__Servant__Var.html
CC-MAIN-2013-20
en
refinedweb
Java UDP with a short response. Read more at: http:/... Java UDP TCP and UDP are transport protocols used for communication between computers. UDP UDP Client in Java UDP Client in Java In this section, you will know how to send any request or messages for UDP server by the UDP client. For this process you must require UDP - User Datagram Protocol a request to UDP server in Java Here, you will know how to receive and send... interconnection model (OSI). UDP Client in Java In this section, you will know how to send any request or messages for UDP server by the UDP client Image transfer using UDP - Java Beginners UDP. I have used core java technologies like JFC,JDBC,UDP. My main... dont know how to convert ASCII format to original image. i.e .txt (ASCII net beans net beans Write a JAVA program to validate the credit card numbers using Luhn Check algorithm. You will need to search the Internet to understand how the algorithm works. Hi Friend, Try the following code: import UDP Server in Java UDP Server in Java  ... of UDP server. This section provides you the brief description and program of the UDP server only for received your messages or information. The UDP Disadvantages..... of java and .net Disadvantages..... of java and .net Disadvantages of Java and .Net net beans net beans Write a JAVA program to read the values of an NxN matrix and print its inverse net beans2 net beans2 Write a JAVA program to find the nearest two points to each other (defined in the 2D-space J2ME -- Stream video from a udp server - MobileApplications ..For example from the following url : udp://222.222.222.121:2211...J2ME -- Stream video from a udp server HI, I wanted to develope a mobile application in j2ME to stream video from a udp server by providing the Ip Java programming or net beans - Java Beginners Java programming or net beans Help with programming in Java? Modify the dog class to include a new instance variable weight (double) and the Cat... on how I can create the program on a step by step basis or the solution would be even java vs .net - Java Beginners java vs .net which language is powerful now java or .net net beans net beans Write a JAVA program to parse an array and print the greatest value and the number of occurrences of that value in the array. You can initialize the array random values in the program without the need to read them JPA Many-to-Many Relationship the many-to-many relationship and how to develop a many-to-many relation in your JPA Application. Many-to-many: In this relationship each record in Table-A may... JPA Many-to-Many Relationship   net beans net beans Write a JAVA program to auto-grade exams. For a class of N students, your program should read letter answers (A, B, C, D) for each student. Assume there are 5 questions in the test. Your program should finally print .Net dll to Java - Java Beginners .Net dll to Java Hi, I've a .Net dll file and need to call into JAVA. Can i get any sample code on this, please? Thanks Chinnapa java file with many methods - Ajax java file with many methods I have to send response to a java file where there are many methods and I have to call one of them by passing parameter .How can I do Multicast under UDP(client server application) Multicast under UDP(client server application) UDP is used to support mulicast. Recall that UDP is connectionless and non reliable. Hence... for multicast applications under UDP. For more information, check the following NET BEAN - IDE Questions NET BEAN Thanks for your response actually i am working on struts and other window application. so if you have complete resources abt it then tell me.... and if you have link of this book ""Java EE Development with Net Beans Hibernate Many-to-one Relationships Relationships - Many to one relationships example using xml meta-data This current... and between levels in a hierarchy. In this example multiple stories (Java... the Group.java, Story.java and Group.hbm.xml file in our one-to-many example section Hibernate One-to-many Relationships - One to many example code in Hibernate using the xml file as metadata. Here... in Hibernate. In next section we will learn how to create and run the many-to-many... Hibernate One-to-many Relationships net beans 4 net beans 4 Write a JAVA program to read an initial two number x1 and x2, and determine if the two numbers are relatively prime. Two numbers are relatively prime. Two numbers are relatively prime if the only common factor Hibernate Many-to-many Relationships Relationships - Many to many example in Hibernate. In this example we have used xml...-to-many example. The many-to-many tag is used to define the relationships... Hibernate Many-to-many Relationships UDP (User Datagram Protocol) times. IN spite of being so many demerits, UDP is very useful in some... UDP (User Datagram Protocol) The User Datagram Protocol (UDP) is a transport protocol JPA Retrieve Data By Using Many-to-Many Relation ; In the previous section, you had read about the many-to-many relation. Here, you will learn how to retrieve data to database table by using the JPA many-to-many relation... JPA Retrieve Data By Using Many-to-Many Relation Hibernate Many To Many Annotation Mapping Hibernate Many To Many Annotation Mapping How to use Many To Many Annotation Mapping in Hibernate? Hibernate requires metadata like... for example read an image in java read an image in java qns: how we can read an image tell me about its code Java FTP Example Java FTP Example Is there any java ftp example and tutorials available on roseindia.net? Thanks Hello, There are many examples and tutorials that teaches you how to user FTP in your Java project. Most commonly Dot Net Architect Dot Net Architect Position Vacant: Dot Net Architect Job Description Candidates will be handling Dot Net Projects.   Wicket ; Wicket on Net Beans IDE This tutorial will take you... framework. In it each application consists of simply JAVA file and HTML file. "Hello World" example dot net dot net how to open a new window having detailed contents by clicking a marquee text in a page(like news details opening on clicking flash news title) in dot net 2003 Many Public Classes in One File - Java Tutorials Many Public Classes in One File 2003-10-13 The Java Specialists' Newsletter [Issue 080] - Many Public Classes in One File Author: Dr. Heinz M. Kabutz.... Welcome to the 80th edition of The Java(tm) Specialists' Newsletter ask user how many numbers to be inputted and determine the sum and highest number using an array in java ask user how many numbers to be inputted and determine the sum and highest number using an array in java ask user how many numbers to be inputted and determine the sum and highest number using an array in java array example - Java Beginners a question about how many dependents 10. Use a loop to get the names and add... i cannot solve this example Java AWT Package Example Example In Java In this section, you will learn how to create BorderLayout... Java AWT Package Example  .... Many running examples are provided that will help you master AWT package. Example PROJECT ON JAVA NET BEANS AND MYSQL !! PLEASE HELP URGENT PROJECT ON JAVA NET BEANS AND MYSQL !! PLEASE HELP URGENT i need a project based on connectivity..it can be based on any of the following topics...:// Java File Programming Java File Programming articles and example code In this section we will teach... tutorial will teach you how to create, read and write in a file from Java program. Java programming language provides many API for easy file management. Java Java Example Codes and Tutorials Introduction to Java Applet and example explaining how to write your first Applet. Java Applet tutorials for beginners, collection many Java Applets example...; Swing Example Here you will find many Java Swing examples Java example for Reading file into byte array Java example for Reading file into byte array. You can then process the byte array as per your business needs. This example shows you how to write a program...: Java file to byte array Java file to byte array - Example 2 Wicket on Net Beans IDE Wicket on Net Beans IDE  ... consists of simply JAVA file and HTML file. Each and every component in this web framework application is created in java and it is later rendered into the HTML java code - Development process java code to setup echo server and echo client. Hi Friend, Please visit the following links: Hope Example of HashSet class in java unique. You can not store duplicate value. Java hashset example. How...Example of HashSet class in java. In this part of tutorial, we.... Example of Hashset iterator method in java. Example of Hashset size() method Java Args example Java FTP Client Example Java FTP Client Example How to write Java FTP Client Example code? Thanks Hi, Here is the example code of simple FTP client in Java which downloads image from server FTP Download file example. Thanks   Socket Wheel to handle many clients - java tutorials Socket Wheel to handle many clients 2001-06-21 The Java Specialists' Newsletter [Issue 023] - Socket Wheel to handle many clients Author: Dr. Heinz M... or RSS. Welcome to the 23rd issue of "The Java(tm) Specialists' Newsletter - String sort Java: Example - String sort Sorting is a mechanism in which we sort the data in some order. There are so many sorting algorithm are present to sort the string. The example given below is based on Selection Sort. The Selection sort Java hashset example. Java hashset example. HashSet is a collection. You can not store duplicate value in HashSet. In this java hashset exmple, you will see how to create HashSet in java application and how to store value in Hash pattern java example pattern java example how to print this 1 2 6 3 7 10 4 8 11 13 5 9 12 14 15 Example Code - Java Beginners Example Code I want simple Scanner Class Example in Java and WrapperClass Example. What is the Purpose of Wrapper Class and Scanner Class . when i compile the Scanner Class Example the error occur : Can not Resolve symbol Java HashMap example. Java HashMap example. The HashMap is a class in java. It stores values in name..., you will see how to create an object of HashMap class. How to display vlaue of map. Code: HashMapExample .java package net.roseindia.java java string comparison example java string comparison example how to use equals method in String... strings are not same. Description:-Here is an example of comparing two strings using equals() method. In the above example, we have declared two string.   Synchronized with example - Java Beginners Synchronized with example Hi Friends, I am beginner in java. what i know about synchonized keyword is,If more that one 1 thread tries to access... that how the lock is released and how next thread access that.Please explain Java Map Example Java Map Example How we can use Map in java collection? The Map interface maps unique keys to value means it associate value to unique... Description:- The above example demonstrates you the Map interface. Since Map Java collection Stack example Java collection Stack example How to use Stack class in java... :- -1 Description:- The above example demonstrates you the Stack class in java.... Here is an example of Stack class. import java.util.Stack; public class How to implement FTP using java client and FTP server. Could anyone help me for How to implement FTP using java? Thanks Hi, There are many FTP libraries in Java, but you should... is the best tutorials and example of Apache FTP Library: FTP Programming in Java Inheritance java Example Inheritance java Example How can we use inheritance in java program... for bread Description:- The above example demonstrates you the concept... properties of the superclass. In the given example, the class Animal is a super Java FTP file upload example ; Hi, We have many examples of Java FTP file upload. We are using Apache... Programming in Java tutorials with example code. Thanks...Java FTP file upload example Where I can find Java FTP file upload Java ArrayList Example Java ArrayList Example How can we use array list in java program ? import java.util.ArrayList; public class ArrayListExample { public static void main(String [] args){ ArrayList<String> array = new How to declare String array in Java? Following example will show you how to declare string array in java. There are many ways to declare an array and initialize them. We can use 'new'... the use of 'new' keyword. Following example declare, initialize and access Asp with C#.Net Asp with C#.Net How to generate barcodes in aspx page java Java count occurrence of a word there is file called "story.txt",in a program we want to count occurrence of a word (example bangalore) in this file and print how many time word is present in the file. The given What is a vector in Java? Explain with example. What is a vector in Java? Explain with example. What is a vector in Java? Explain with example. Hi, The Vector is a collect of Object... many legacy methods that are not part of the collections framework. For more Struts Links - Links to Many Struts Resources you how to develop Struts applications using ant and deploy on the JBoss Application Server. Ant script is provided with the example code. Many advance topics... Struts Links - Links to Many Struts Resources Jakarta Java Get Example is method and how to use the get method in Java, this example is going... will learn how to use the method getGraphics(). Java example program... Java Get Example   Java bigdecimal movePointLeft example Java bigdecimal movePointLeft example Example below demonstrates bigdecimal class.... To how many digits this shifting will done, will depend on the integer number passed Java BigDecimal movePointRight example Java BigDecimal movePointRight example Example below demonstrates bigdecimal class.... To how many digits this shifting will done, will depend on the integer number Java Word Occurrence Example Java Word Occurrence Example In this example we will discuss about the how many times words are repeated in a file. This example explains you that how you... will demonstrate you about how to count occurrences of each word in a file. In this example Java - Continue statement in Java Java - Continue statement in Java Continue: The continue statement is used in many programming languages such as C, C++, java etc. Sometimes we do not need to execute some Clone method example in Java Clone method example in Java Clone method example in Java programming language Given example of java clone() method illustrates, how to use clone() method. The Clone Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/77485
CC-MAIN-2013-20
en
refinedweb
std::collate::transform, std::collate::do_transform 1) public member function, calls the protected virtual member function do_transform of the most derived class. 2) Converts the character sequence [low, high) to a string that, compared lexicographically (e.g. with operator< for strings) with the result of calling transform() on another string, produces the same result as calling do_compare() on the same two strings. [edit] Parameters [edit] Return value The string transformed so that lexicographic comparison of the transformed strings may be used instead of collating of the originals. In the "C" locale, the returned string is the exact copy of [low, high). In other locales, the contents of the returned string are implementation-defined, and the size may be considerably longer. [edit] Notes In addition to the the use in collation, the implementation-specific format of the transformed string is known to std::regex_traits<>::transform_primary, which is able to extract the equivalence class information. [edit] Example #include <iostream> #include <iomanip> #include <locale> int main() { std::locale::global(std::locale("sv_SE.utf8")); auto& f = std::use_facet<std::collate<wchar_t>>(std::locale()); std::wstring in1 = L"\u00e4ngel"; std::wstring in2 = L"\u00e5r"; std::wstring out1 = f.transform(&in1[0], &in1[0] + in1.size()); std::wstring out2 = f.transform(&in2[0], &in2[0] + in2.size()); std::wcout << "In the Swedish locale: "; if(out1 < out2) std::wcout << in1 << " before " << in2 << '\n'; else std::wcout << in2 << " before " << in1 << '\n'; std::wcout << "In lexicographic comparison: "; if(in1 < in2) std::wcout << in1 << " before " << in2 << '\n'; else std::wcout << in2 << " before " << in1 << '\n'; } Output: In the Swedish locale: år before ängel In lexicographic comparison: ängel before år
http://en.cppreference.com/w/cpp/locale/collate/transform
CC-MAIN-2013-20
en
refinedweb
Statements (C# Programming Guide) A statement is a procedural building-block from which all C# programs are constructed. A statement can declare a local variable or constant, call a method, create an object, or assign a value to a variable, property, or field. A control statement can create a loop, such as a for loop, or make a decision and branch to a new block of code, such as an if or switch statement. Statements are usually terminated by a semicolon. For more information, see Statement Types (C# Reference). A series of statements surrounded by curly braces form a block of code. A method body is one example of a code block. Code blocks often follow a control statement. Variables or constants declared within a code block are only available to statements within the same code block. For example, the following code shows a method block and a code block following a control statement: Statements in C# often contain expressions. An expression in C# is a fragment of code containing a literal value, a simple name, or an operator and its operands. Most common expressions, when evaluated, yield a literal value, a variable, or an object property or object indexer access. Whenever a variable, object property or object indexer access is identified from an expression, the value of that item is used as the value of the expression. In C#, an expression can be placed anywhere that a value or object is required as long as the expression ultimately evaluates to the required type. Some expressions evaluate to a namespace, a type, a method group, or an event access. These special-purpose expressions are only valid at certain times, usually as part of a larger expression, and will result in a compiler error when used improperly. For more information, see the following sections in the C# Language Specification: 1.5 Statements 7 Expressions 8 Statements
http://msdn.microsoft.com/en-US/library/ms173143(v=vs.80).aspx
CC-MAIN-2013-20
en
refinedweb
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */ The SGML parser is a state machine. It is called for every character of the input stream. The DTD data structure contains pointers to functions which are called to implement the actual effect of the text read. When these functions are called, the attribute structures pointed to by the DTD are valid, and the function is parsed a pointer to the curent tag structure, and an "element stack" which represents the state of nesting within SGML elements. See also the the generic Stream definition The following aspects are from Dan Connolly's suggestions: Binary search, Strcutured object scheme basically, SGML content enum type. This module is implemented by SGML.c, and it is a part of the W3C Sample Code Library. #ifndef SGML_H #define SGML_H #include "HTStream.h" #include "HTStruct.h" typedef enum _SGMLContent{ SGML_EMPTY, /* no content */ SGML_LITERAL, /* character data. Recognized exact close tag only. Old www server compatibility only! Not SGML */ SGML_CDATA, /* character data. recognize </ only */ SGML_RCDATA, /* replaceable character data. recognize </ and &ref; */ SGML_MIXED, /* elements and parsed character data. recognize all markup */ SGML_ELEMENT /* any data found will be returned as an error*/ } SGMLContent; Describes the SGML tag attribute typedef struct _HTAttr { char * name; /* The (constant) name of the attribute */ /* Could put type info in here */ } HTAttr; extern char * HTAttr_name (HTAttr * attr); typedef struct _HTTag { char * name; /* The name of the tag */ HTAttr * attributes; /* The list of acceptable attributes */ int number_of_attributes; /* Number of possible attributes */ SGMLContent contents; /* End only on end tag @@ */ } HTTag; extern char * HTTag_name (HTTag * tag); extern SGMLContent HTTag_content (HTTag * tag); extern int HTTag_attributes (HTTag * tag); extern char * HTTag_attributeName (HTTag * tag, int attribute_number); Not the whole DTD, but all this parser uses of it. #define MAX_ATTRIBUTES 32 /* Max number of attributes per element */ typedef struct { HTTag * tags; /* Must be in strcmp order by name */ int number_of_tags; const char ** entity_names; /* Must be in strcmp order by name */ int number_of_entities; } SGML_dtd; extern HTTag * SGML_findTag (SGML_dtd * dtd, int element_number); extern char * SGML_findTagName (SGML_dtd * dtd, int element_number); extern SGMLContent SGML_findTagContents (SGML_dtd * dtd, int element_number); extern int SGML_findElementNumber(SGML_dtd *dtd, char *name_element); Create an SGML parser instance which converts a stream to a structured stream. extern HTStream * SGML_new (const SGML_dtd * dtd, HTStructured * target); #endif /* SGML_H */
http://www.w3.org/Library/src/SGML.html
CC-MAIN-2013-20
en
refinedweb
Each request in asp.net MVC will be assigned to a specific action (hereinafter referred to as “method”) under the corresponding controller (hereinafter referred to as “controller”) for processing. Normally, it is OK to write code directly in the method. However, if you want to process some logic before or after the method is executed, you need to use a filter here. There are three commonly used filters: authorize (authorization filter), handleerror (exception filter) and actionfilter (user-defined filter). The corresponding classes are: authorizeattribute, handleerrorattribute and actionfilterattribute. Inherit these classes and rewrite the methods to achieve different functions. 1. Authorize authorization filter As the name suggests, the authorization filter is used for authorization. The authorization filter is executed before the method is executed. It is used to restrict whether requests can enter this method. Create a new method: public JsonResult AuthorizeFilterTest() { return Json(new ReturnModel_Common { msg = "hello world!" }); } Direct access results: Now let’s assume that the authorizefiltertest method is a background method, and the user must have a valid token to access. The normal practice is to receive and verify the token in the authorizefiltertest method. However, once there are many methods, it is obviously impractical to write verification code in each method. At this time, the authorization filter is used: public class TokenValidateAttribute : AuthorizeAttribute { /// <summary> ///Logical processing of authorization verification. If true is returned, authorization is passed, and if false, the opposite is true /// </summary> /// <param name="httpContext"></param> /// <returns></returns> protected override bool AuthorizeCore(HttpContextBase httpContext) { string token = httpContext.Request["token"]; if (string.IsNullOrEmpty(token)) { return false; } else { return true; } } } A new class inheriting the authorizeattribute is created, and the authorizecore method is rewritten. This pseudo code realizes that if the token has a value, it returns true, and if it does not, it returns false. It is marked on the method that can be accessed only after authorization: [TokenValidate] public JsonResult AuthorizeFilterTest() { return Json(new ReturnModel_Common { msg = "hello world!" }) } After tokenvalidate is marked, the authorizecore method is executed before the authorizefiltertest. If the authorizecore returns true, the authorization successfully executes the code in the authorizefiltertest, otherwise the authorization fails. Do not pass token: Pass token: When the authorization fails to pass the token, the default unauthorized page of MVC is entered. Improvements are made here: no matter whether the authorization is successful or failed, ensure that the return value format is consistent to facilitate front-end processing. At this time, override the handleunauthorizedrequest method in the authorizeattribute class: /// <summary> ///Authorization failure handling /// </summary> /// <param name="filterContext"></param> protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext) { base.HandleUnauthorizedRequest(filterContext); var json = new JsonResult(); json.Data = new ReturnModel_Common { success = false, code = ReturnCode_ Interface.token expired or error, msg = "token expired or error" }; json.JsonRequestBehavior = JsonRequestBehavior.AllowGet; filterContext.Result = json; } effect: Actual combat: the most widely used authorization filter is the permission management system. After the user logs in successfully, the server outputs an encrypted token, which will be brought with subsequent requests. The server unties the token in the authorizecore method to get the user ID, and checks whether the database has the permission to request the current interface according to the user ID. if yes, it returns true, otherwise it returns false. Compared with successful login, the advantage of this method for authorization to cookies and sessions is that one interface is used by both PC and app. 2. Handleerror exception filter The exception filter handles code exceptions and is executed when the system code throws errors. MVC has implemented the exception filter by default and registered with the app_ Filterconfig.cs in the start Directory: filters.Add(new HandleErrorAttribute()); This takes effect in the whole system. Any interface or page that reports an error will execute the MVC default exception handling and return to a default error reporting page: views / shared / error (this page can be seen only when the program reports an error to the server. If the local debugging permission is high, you can still see the specific error reporting information) @{ Layout = null; } <!DOCTYPE html> <html> <head> <meta http- <meta name="viewport" content="width=device-width" /> < title > error < / Title > </head> <body> <hgroup> <h1>Wrong</ h1> <h2>An error occurred while processing your request</ h2> </hgroup> </body> </html> The default exception filter obviously cannot meet the use requirements. Rewrite the exception filter to meet the actual needs of the project: 1) Error reporting can record the controller and method where the error code is located, as well as the request parameters and time when the error is reported; 2) Return JSON in a specific format to facilitate front-end processing. Because most of the current system are Ajax requests, which report errors and return to the MVC default error page, the front end is difficult to handle Create a new class logexceptionattribute, inherit handleerrorattribute, and override the internal onexception method: public override void OnException(ExceptionContext filterContext) { if (!filterContext.ExceptionHandled) { string controllerName = (string)filterContext.RouteData.Values["controller"]; string actionName = (string)filterContext.RouteData.Values["action"]; string param = Common.GetPostParas(); string ip = HttpContext.Current.Request.UserHostAddress; LogManager.GetLogger("LogExceptionAttribute").Error("Location:{0}/{1} Param:{2}UserIP:{3} Exception:{4}", controllerName, actionName, param, ip, filterContext.Exception.Message); filterContext.Result = new JsonResult { Data = new ReturnModel_ Common {success = false, code = returncode_interface. Error thrown by the server, MSG = filtercontext. Exception. Message}, JsonRequestBehavior = JsonRequestBehavior.AllowGet }; } if (filterContext.Result is JsonResult) filterContext.ExceptionHandled = true;// If the returned result is jsonresult, the setting exception has been handled else base.OnException(filterContext);// Execute the logic of the base class handleerrorattribute and turn to the error page } The exception filter is not marked on the method like the authorization filter, and directly to the app_ Register in filterconfig.cs under the start directory so that all interfaces can take effect: filters.Add(new LogExceptionAttribute()); NLog is used as a logging tool in the exception filter. Nuget installation command: Install-Package NLog Install-Package NLog.Config Compared with log4net, NLog configuration is simple, just a few lines of code. Nlog.config: <?xml version="1.0" encoding="utf-8" ?> <nlog xmlns="" xmlns: <targets> <target xsi: <target xsi: </targets> <rules> <logger name="*" minlevel="Debug" writeTo="f2" /> </rules> </nlog> If an error is reported, the log is recorded in the mvcextension directory under the log directory of disk D. one log directory for each project is convenient for management. After all configurations are completed, see the following code: public JsonResult HandleErrorFilterTest() { int i = int.Parse("abc"); return Json(new ReturnModel_Data { data = i }); } If the string is forcibly converted to int type, an error must be reported, and the page response is: At the same time, the log also records: 3. Actionfilter user defined filter Custom filters are more flexible and can be accurately injected into pre request, request and post request. Inherit the abstract class actionfilterattribute and override the methods in it: public class SystemLogAttribute : ActionFilterAttribute { public string Operate { get; set; } public override void OnActionExecuted(ActionExecutedContext filterContext) { filterContext.HttpContext.Response.Write("<br/>" + Operate + ":OnActionExecuted"); base.OnActionExecuted(filterContext); } public override void OnActionExecuting(ActionExecutingContext filterContext) { filterContext.HttpContext.Response.Write("<br/>" + Operate + ":OnActionExecuting"); base.OnActionExecuting(filterContext); } public override void OnResultExecuted(ResultExecutedContext filterContext) { filterContext.HttpContext.Response.Write("<br/>" + Operate + ":OnResultExecuted"); base.OnResultExecuted(filterContext); } public override void OnResultExecuting(ResultExecutingContext filterContext) { filterContext.HttpContext.Response.Write("<br/>" + Operate + ":OnResultExecuting"); base.OnResultExecuting(filterContext); } } This filter is suitable for system operation logging: [systemlog (operate = "add user")] public string CustomerFilterTest() { Response. Write ("< br / > action in progress...); Return "< br / > end of action execution"; } Look at the results: The execution order of the four methods is onactionexecuting – > onactionexecuted – > onresultexecuting – > onresultexecuted, which controls the whole request process very accurately. In practice, the process of logging is as follows: write an operation log in the onactionexecuting method to the database, save the primary key of the record in the global variable, and indicate that the request is over in the onresultexecuted method. At this time, you naturally know whether the user’s operation is successful, and update the success field of the operation log according to the primary key. Two, model binding (ModelBinder) Let’s look at a common method: public ActionResult Index(Student student) { return View(); } The parameter accepted by this method is a student object. If the parameters passed from the front end remain the same as the properties in the student object, they will be bound to this object automatically. There is no need to bind the new student object in the method and bind the properties one by one. The binding process is completed by the defaultmodelbinder in MVC, which also inherits the imodelbinder interface, Now use the imodelbinder interface and defaultmodelbinder to achieve more flexible model binding. Scenario 1: an encrypted string token is passed from the front end. Some fields in the token need to be used in the method, so you have to receive the string, decrypt the string and convert it into an object in the method. It’s OK for such a method. If there are too many duplicate codes, even if the general method is extracted, you still need to call the general method in the method, Is there any way to encapsulate this object directly in parameters? Model bound objects: public class TokenModel { /// <summary> ///Primary key /// </summary> public int Id { get; set; } /// <summary> ///Name /// </summary> public string Name { set; get; } /// <summary> ///Introduction /// </summary> public string Description { get; set; } } Create a tokenbinder that inherits the imodelbinder interface and implements the bindmodel method: public class TokenBinder : IModelBinder { public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { var token = controllerContext.HttpContext.Request["token"]; if (!string.IsNullOrEmpty(token)) { string[] array = token.Split(':'); if (array.Length == 3) { return new TokenModel() { Id = int.Parse(array[0]), Name = array[1], Description = array[2] }; } else { return new TokenModel() { Id = 0 }; } } else { return new TokenModel() { Id = 0 }; } } } This method receives a token parameter, parses and encapsulates the token parameter. The code part is completed. You need to go to the application_ Register in the start method: ModelBinders.Binders.Add(typeof(TokenModel), new TokenBinder()); Now simulate this interface: public JsonResult TokenBinderTest(TokenModel tokenModel) { var output = "Id:" + tokenModel.Id + ",Name:" + tokenModel.Name + ",Description:" + tokenModel.Description; return Json(new ReturnModel_Common { msg = output }); } Call next: It can be seen that “1: Wang Jie: oppoic. Cnblogs. Com” has been bound to the tokenmodel object. However, if a more complex model is bound to imodelbinder, there is nothing that imodelbinder can do. Scenario 2: remove the first space of an attribute of the object public class Student { public int Id { get; set; } public string Name { get; set; } public string Class { get; set; } } If there are spaces in the name attribute passed from the front end, how to remove them? More flexible control with defaultmodelbinder public class TrimModelBinder : DefaultModelBinder { protected override object GetPropertyValue(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor, IModelBinder propertyBinder) { var obj = base.GetPropertyValue(controllerContext, bindingContext, propertyDescriptor, propertyBinder); If (obj is string & & propertydescriptor. Attributes [typeof (trimattribute)]! = null) // judge whether it is of string type and marked with [trim] { return (obj as string).Trim(); } return obj; } } Label the entities that need to format the first attribute: [ModelBinder(typeof(TrimModelBinder))] public class Student { public int Id { get; set; } [Trim] public string Name { get; set; } public string Class { get; set; } } Well, test: public JsonResult TrimBinderTest(Student student) { if (string.IsNullOrEmpty(student.Name) || string.IsNullOrEmpty(student.Class)) { Return JSON (New returnmodel_common {MSG = "parameter not found"}); } else { Return JSON (New returnmodel_common {MSG = "Name:" + student. Name + ", length:" + student. Name. Length + "class:" + student. Class + ", length:" + student. Class. Length}); } } It can be seen that the name length marked with trim attribute is the length excluding spaces: 7, while the length of class attribute not marked is 6. The above is the whole content of this article. I hope it will be helpful to your study, and I hope you can support developpaer.
https://developpaper.com/explain-the-common-extension-points-of-asp-net-mvc-filter-and-model-binding/
CC-MAIN-2021-43
en
refinedweb
Source code for globus_sdk.services.auth.identity_map import uuid from typing import Any, Dict, Iterable, Optional, Set, Tuple from .client import AuthClient def is_username(val: str) -> bool: """ If the value parses as a UUID, then it's an ID, not a username. If it does not parse as such, then it must be a username. """ try: uuid.UUID(val) return False except ValueError: return True def split_ids_and_usernames(identity_ids: Iterable[str]) -> Tuple[Set[str], Set[str]]: ids = set() usernames = set() for val in identity_ids: if is_username(val): usernames.add(val) else: ids.add(val) return ids, usernames[docs]class IdentityMap: r""" There's a common pattern of having a large batch of Globus Auth Identities which you want to inspect. For example, you may have a list of identity IDs fetched from Access Control Lists on Globus Endpoints. In order to display these identities to an end user, you may want to resolve them to usernames. However, naively looking up the identities one-by-one is very inefficient. It's best to do batched lookups with multiple identities at once. In these cases, an ``IdentityMap`` can be used to do those batched lookups for you. An ``IdentityMap`` is a mapping-like type which converts Identity IDs and Identity Names to Identity records (dictionaries) using the Globus Auth API. .. note:: ``IdentityMap`` objects are not full Mappings in the same sense as python dicts and similar objects. By design, they only implement a small part of the Mapping protocol. The basic usage pattern is - create an ``IdentityMap`` with an AuthClient which will be used to call out to Globus Auth - seed the ``IdentityMap`` with IDs and Usernames via :py:meth:`~IdentityMap.add` (you can also do this during initialization) - retrieve identity IDs or Usernames from the map Because the map can be populated with a collection of identity IDs and Usernames prior to lookups being performed, it can improve the efficiency of these operations up to 100x over individual lookups. If you attempt to retrieve an identity which has not been previously added to the map, it will be immediately added. But adding many identities beforehand will improve performance. The ``IdentityMap`` will cache its results so that repeated lookups of the same Identity will not repeat work. It will also map identities both by ID and by Username, regardless of how they're initially looked up. .. warning:: If an Identity is not found in Globus Auth, it will trigger a KeyError when looked up. Your code must be ready to handle KeyErrors when doing a lookup. Correct usage looks something like so:: ac = globus_sdk.AuthClient(...) idmap = globus_sdk.IdentityMap( ac, ["[email protected]", "[email protected]"] ) idmap.add("[email protected]") # adding by ID is also valid idmap.add("c699d42e-d274-11e5-bf75-1fc5bf53bb24") # map ID to username assert ( idmap["c699d42e-d274-11e5-bf75-1fc5bf53bb24"]["username"] == "[email protected]" ) # map username to ID assert ( idmap["[email protected]"]["id"] == "c699d42e-d274-11e5-bf75-1fc5bf53bb24" ) And simple handling of errors:: try: record = idmap["[email protected]"] except KeyError: username = "NO_SUCH_IDENTITY" else: username = record["username"] or you may achieve this by using the :py:meth:`~.IdentityMap.get` method:: # internally handles the KeyError and returns the default value record = idmap.get("[email protected]", None) username = record["username"] if record is not None else "NO_SUCH_IDENTITY" :param auth_client: The client object which will be used for lookups against Globus Auth :type auth_client: :class:`AuthClient <globus_sdk.AuthClient>` :param identity_ids: A list or other iterable of usernames or identity IDs (potentially mixed together) which will be used to seed the ``IdentityMap`` 's tracking of unresolved Identities. :type identity_ids: iterable of str, optional :param id_batch_size: A non-default batch size to use when communicating with Globus Auth. Leaving this set to the default is strongly recommended. :type id_batch_size: int, optional .. automethodlist:: globus_sdk.IdentityMap include_methods=__getitem__,__delitem__ """ # noqa _default_id_batch_size = 100[docs] def __init__( self, auth_client: AuthClient, identity_ids: Optional[Iterable[str]] = None, *, id_batch_size: Optional[int] = None, ): self.auth_client = auth_client self.id_batch_size = id_batch_size or self._default_id_batch_size # uniquify, copy, and split into IDs vs usernames self.unresolved_ids, self.unresolved_usernames = split_ids_and_usernames( [] if identity_ids is None else identity_ids ) # the cache is a dict mapping IDs and Usernames self._cache: Dict[str, dict] = {}def _fetch_batch_including(self, key: str) -> None: """ Batch resolve identifiers (usernames or IDs), being sure to include the desired, named key. The key also determines which kind of batch will be built -- usernames or IDs. Store the results in the internal cache. """ # for whichever set of unresolved names is appropriate, build the batch to # lookup up to *at most* the batch size # also, remove the unresolved names from tracking so that they will not be # looked up again batch = [] set_to_use = ( self.unresolved_usernames if is_username(key) else self.unresolved_ids ) for _ in range(0, min(self.id_batch_size - 1, len(set_to_use))): batch.append(set_to_use.pop()) # avoid double-adding the provided key, but add it if it's missing if key not in batch: batch.append(key) else: try: batch.append(set_to_use.pop()) except KeyError: # empty set, ignore pass if is_username(key): response = self.auth_client.get_identities(usernames=batch) else: response = self.auth_client.get_identities(ids=batch) for x in response["identities"]: self._cache[x["id"]] = x self._cache[x["username"]] = x[docs] def add(self, identity_id: str) -> bool: """ Add a username or ID to the ``IdentityMap`` for batch lookups later. Returns True if the ID was added for lookup. Returns False if it was rejected as a duplicate of an already known name. :param identity_id: A string Identity ID or Identity Name (a.k.a. "username") to add :type identity_id: str """ if identity_id in self._cache: return False if is_username(identity_id): if identity_id in self.unresolved_usernames: return False else: self.unresolved_usernames.add(identity_id) return True if identity_id in self.unresolved_ids: return False self.unresolved_ids.add(identity_id) return True[docs] def get(self, key: str, default: Optional[Any] = None) -> Any: """ A dict-like get() method which accepts a default value. """ try: return self[key] except KeyError: return default[docs] def __getitem__(self, key: str) -> Any: """ ``IdentityMap`` supports dict-like lookups with ``map[key]`` """ if key not in self._cache: self._fetch_batch_including(key) return self._cache[key]
https://globus-sdk-python.readthedocs.io/en/stable/_modules/globus_sdk/services/auth/identity_map.html
CC-MAIN-2021-43
en
refinedweb
This topic describes breaking changes for Optimizely CMS in relation to previous version 10, and the steps needed to update affected code. To view the complete list of changes, see the release notes feed. Some changes are binary breaking but do not necessarily require code changes but rather just a recompilation of the project. Breaking changes are changes in method signatures or behavior of methods compared to the documented API in the previous version, which are described in this document. New NuGet packages In Optimizely CMS 11, some functionality, as for example TinyMCE, has been moved to separate NuGet packages. These packages have their own version number and may or may not be upgraded together with the core CMS packages. XForms has also been moved to its own NuGet package. This means that the Forms viewer gadget, which was deprecated in CMS 10, is now also removed. Most upgrades do not require changes to code but require a re-compilation. It is important to have the correct packages installed. A missing package causes compilation errors after upgrading a site. See New NuGet packages for which packages you need to manually add. Separate NuGet packages for ASP.NET dependencies (CMS-8106) - The integration with ASP.NET has moved to separate packages EPiServer.Framework.AspNet and EPiServer.Cms.AspNet. - EPiServer.MirroringService.MirroringMonitoring.MirroringMonitoringModule has moved from EPiServer.Enterprise to EPiServer.Cms.AspNet. It must be manually changed in web.config for the mirroring module. - Usage of StructureMap as IOC container was moved to a separate package, EPiServer.ServiceLocation.StructureMap. - .NET Framework 4.61 is required. - MVC 5.2.3 is required. - CreatePropertyControl has been removed from PropertyData. Use IPropertyControlFactory instead to register and create controls for PropertyData types. - Providers (for example ContentProvider, BlobProvider) no longer inherit ProviderBase. If a provider that supports configuration through web.config has an initialization method with the signature void Initialize(string name, NameValueCollection config), it will be called during the initialization phase - IServiceLocator no longer supports named instances. The specific implementation in EPiServer.ServiceLocation.StructureMap supports named instances. - Some methods in IServiceLocator have been removed from interface and are now extension methods in EPiServer.ServiceLocation namespace. - EPiServer.Search.SearchSettings.Config has been replaced by EPiServer.Search.SearchSettings.Options. - It is no longer supported to get an ILogger instance from IOC container. - Calls to logger that occur before IConfigurableModules are created, are not guaranteed to be persisted. - The setting for logger factories "episerver:LoggerFactoryType" must now always be quailfied with assembly name, even for built-in logger factories. - The method IStringFragment.GetControl has been removed. Use IStringFragmentControlResolver instead to create controls for string fragments. - Configuration sections EPiServerFrameworkSection, StaticFileSection, EPiServerDataStoreSection and SearchSection have been moved to assembly EPiServer.Framework.AspNet. - Property Database on DataAccessBase has been removed and is replaced by the property Executor. - PageData.LinkUrl no longer contains the query parameter epslanguage. - ContentProviders can no longer use CacheSettings.Filenames (previously obsoleted) to set up dependencies to files. - EPiServer.Web.InitializationModule has been moved to EPiServer.Cms.AspNet assembly and can be used as dependency module for other modules that want to run after Optimizely CMS is initialized when running as a ASP.NET application. There is a new module EPiServer.Initialization.CmsCoreInitialization in assembly EPiServer. It can be used as dependency module for modules that want to run after CMS Runtime is initialized when running CMS outside ASP.NET context. - Previously obsoleted methods Rebase/MakeRelative on type EPiServer.Web.UrlBuilder in assembly EPiServer have been moved to extensions method defined in assembly EPiServer.Cms.AspNet. Type UrlBuilder.RebaseKind is also moved to EPiServer.Web.RebaseKind in EPiServer.Cms.AspNet assembly. - AccessControlList.Save has been deprecated and is no longer supported. Use IContentSecurityRepository instead. Manually and automatically registered templates share the same behavior (CMS-4161) - Templates registered manually using ITemplateRepository.AddTemplates are now associated with the model type for which they are added, rather than the model type indicated through any IRenderTemplate interface of the template. This may affect the template selection during runtime, as the model type association is used to decide which template to use when rendering a content item. This change does not affect automatically registered templates, as they are already registered using the type specified by the IRenderTemplate interface. - The default implementation of ITemplateRepository.AddTemplates now checks if a template exists before adding it. If the template is already registered, it does not add another one. - The default implementation of ITemplateRepository.AddTemplates now makes sure that all template models are read-only before adding them to the repository. PropertList<T> (CMS-7212) - EPiServer.Core.Transfer.IRawContentRetriever no longer populates the value of each RawProperty. Instead, this is done by calling the IPropertyExporter.ExportProperties method. - EPiServer.Core.Transfer.IPropertyImporter has changed slightly to match the new IPropertyExporter interface. - PropertyData.ToRawString() is no longer called when exporting data. If a Custom PropertyData type had previously overridden the ToRawValue method, it must now move that functionality to a class that implements IPropertyExportTransform and register it with the Container. - PropertyJson-based properties, such as PropertyList<T>, no longer rely on IObjectSerializer and container-registered JsonConverters for their serialization. They now use Newtonsoft.Json.JsonConvert directly. This means that JsonConverters must be defined as attributes on the classes and properties to which they apply. Explicit IVersionable implementation on PageData is removed (CMS-7700) - PageData no longer exposes a separate explicit implementation of IVersionable.StartPublish and IVersionable.StopPublish. Performance improved when loading large amounts of uncached content (CMS-7735) - CreateWritableClone is used to create new instances of content instances when loading from database. Target .NET Standard 2 in CMS.Core and Framework (CMS-8133) - ThumbnailManager has been moved to EPiServer.Cms.AspNet. - Castle.Core dependency has been changed to [4.2.1, 5.0). - Castle.Windsor dependency has been changed from [4.1.0, 5.0). Scheduled jobs have a shorter default content cache expiration (CMS-8653) - Content loaded from a database and added to cache by scheduled jobs has a shorter cache expiration (default 1 minute). Memory usage optimization: Data class PropertyData does not expose services (CMS-8659) - SettingsID is obsoleted. Use extension method GetSettingsID or service IPropertyDataSettingsHelper instead. - SettingsContainer is obsoleted. Use the extension method GetSettingsContainer or service IPropertyDataSettingsHelper instead. - TranslateDisplayName()/TranslateDescription() was moved to extension method. Other changes CMS Core - CMS-7749 The Provider property is no longer supported, as it is not possible to expose a fully thread-safe IList implementation. - CMS-7791 Simple address is now the last registered router. - CMS-6988 MoveContentEventArgs.ContentLink is updated when content is moved between providers. - CMS-9129 Unsupported HostType's Service, Installer, VisualStudio have been obsoleted. CMS UI - CMS-1130 The UIHint.BlockFolder and UIHint.MediaFolder have been obsoleted and are replaced by UIHint.AssetsFolder. - CMS-1252 Error message now displayed when updating partially rendered content. - CMS-6217 Content repository descriptor keys are now not case sensitive. - CMS-6815 IContentChangeManager.Move(IContent source, IContent destination, bool createAsLocalAsset) now returns the ContentReference of the moved content instead of void. This adds support for the scenario where content is assigned a new ContentLink value when it is moved between different content providers. - CMS-8229 ApplicationUIUserManager<TUser>.ResetPassword(IUIUser user) now throws a not supported exception. This is due to the fact that ASP.NET Identity does not support generating new passwords for security reasons. Use the new method ResetPassword(IUIUser user, string newPassword) instead. - CMS-8802 The deprecated datetime.js methods serialize() and deserialize() have been removed. - CMS-8816 The deprecated slash method has been removed from epi/string. - CMS-8817 epi-cms/widget/HierarchicalList has been replaced by epi-cms/asset/HierarchicalList. - CMS-8830 The deprecated property SiteDefinitionResolver on ContentSearchProviderBase has been removed. Use SiteResolver instead. - CMS-8832 The deprecated EPiServer.Shell.Web.Mvc.Html.DojoExtensions has been removed, use .ConfigureDojo instead. DateTimeproperties on the visitor group criterion model are serialized to ISO 8601 when being sent to the client. - The MissingConfigurationExceptionnow inherits directly from Exception. - The UIHint.LongStringhas been obsoleted and should be replaced with UIHint.Textarea. - Properties which use LegacyPropertyEditorDescriptor should be explicitly annotated with UIHint.Legacy. - Obsoleted constructors are removed where there are public constructors available that can be used instead. - Obsoleted properties have been removed from PropertyUpdateResult, ComponentContainer, ContentSearchProviderBase, and CmsModuleViewModel. - Obsoleted classes ZipFileVirtualPathProviderand SecurityEntityUtilityhave been removed. - Obsoleted methods have been removed from UIDescriptorRegistry. _GrigWidgetBasenow emits selectionChangedwhen deselecting row. Previously, it was triggered only when a row was selected. - The obsoleted delegate DependencyGetterhas been removed. - The obsoleted methods watchModelChangeand unwatchModelChangein epi/shell/_Commandhas been changed from public to internal. - The class ContentReferenceStorethat was available by epi.cms.contentreferences key is now obsoleted. Manual changes required to legacy feature Mirroring An updated DLL is available in the EPiServer.CMS.Core NuGet package (packages/EPiServer.CMS.AspNet.11.1.0/tools/MirroringService/), which you must manually patch when upgrading. Change EPiServer.MirroringService.MirroringTransferProtocol.WCF.MirroringTransferClient,EPiServer.Enterprise to EPiServer.MirroringService.MirroringTransferProtocol.WCF.MirroringTransferClient,EPiServer.Cms.AspNet and the type EPiServer.Security.WindowsMembershipProvider,EPiServer to EPiServer.Security.WindowsMembershipProvider, EPiServer.Cms.AspNet in web.config. Last updated: Feb 09, 2018
https://world.optimizely.com/documentation/upgrading/optimizely-cms/cms-11/breaking-changes-cms-11/
CC-MAIN-2021-43
en
refinedweb
This article will demonstrate how to start a Spring Boot application from another Java program. A Spring Boot application is typically built into a single executable JAR archive. It contains all dependencies inside, packaged as nested JARs. Likewise, a Spring Boot project is usually built as an executable JAR file by a provided maven plugin that does all the dirty work. The result is a convenient, single JAR file that is easy to share with others, deploy on a server, and so on. Starting a Spring Boot application is as easy as typing java -jar mySpringProg.jar, and the application will print on console some nicely formatted info messages. But what if a Spring Boot developer wants to run an application from another Java program, without human intervention? How Nested JARs Work To pack a Java program with all dependencies into a single runnable JAR file, dependencies that are also JAR files have to be provided and somehow stored inside the final runnable JAR file. “Shading” is one option. Shading dependencies is the process of including and renaming dependencies, relocating the classes, and rewriting affected bytecode and resources in order to create a copy that is bundled alongside with an application’s (project) own code. Shading allows users to unpack all classes and resources from dependencies and pack them back into a runnable JAR file. This might work for simple scenarios, however, if two dependencies contain the same resource file or class with the exact same name and path, they will overlap and the program might not work. Spring Boot takes a different approach and packs dependency JARs inside runnable JAR, as nested JARs. example.jar | +-META-INF | +-MANIFEST.MF +-org | +-springframework | +-boot | +-loader | +-<spring boot loader classes> +-BOOT-INF +-classes | +-mycompany | +-project | +-YourClasses.class +-lib +-dependency1.jar +-dependency2.jar A JAR archive is organized as a standard Java-runnable JAR file. Spring Boot loader classes are located at org/springframework/boot/loader path, while user classes and dependencies are at BOOT-INF/classes and BOOT-INF/lib. Note: If you’re new to Spring, you may also want to take a look at our Top 10 Most Common Spring Framework Mistakes article. A typical Spring Boot JAR file contains three types of entries: - Project classes - Nested JAR libraries - Spring Boot loader classes Spring Boot Classloader will first set JAR libraries in the classpath and then project classes, which makes a slight difference between running a Spring Boot application from IDE (Eclipse, IntelliJ) and from console. For additional information on class overrides and the classloader, you can consult this article. Launching Spring Boot Applications Launching a Spring Boot application manually from the command line or shell is easy as typing the following: java -jar example.jar However, starting a Spring Boot application programmatically from another Java program requires more effort. It’s necessary to load the org/springframework/boot/loader/*.class code, use a bit of Java reflection to instantiate JarFileArchive, JarLauncher, and invoke the launch(String[]) method. We will take a more detailed look at how this is accomplished in the following sections. As we already pointed out, a Spring Boot JAR file is just like any JAR archive. It is possible to load org/springframework/boot/loader/*.class entries, create Class objects, and use them to launch Spring Boot applications later on. import java.net.URLClassLoader; import java.util.Enumeration; import java.util.HashMap; import java.util.Map; import java.util.jar.JarEntry; import java.util.jar.JarFile; . . . public static void loadJar(final String pathToJar) throws IOException . . . { // Class name to Class object mapping. final Map<String, Class<?>> classMap = new HashMap<>(); final JarFile jarFile = new JarFile(pathToJar); final Enumeration<JarEntry> jarEntryEnum = jarFile.entries(); final URL[] urls = { new URL("jar:file:" + pathToJar + "!/") }; final URLClassLoader urlClassLoader = URLClassLoader.newInstance(urls); Here we can see classMap will hold Class objects mapped to their respective package names, e.g., String value org.springframework.boot.loader.JarLauncher will be mapped to the JarLauncher.class object. while (jarEntryEnum.hasMoreElements()) { final JarEntry jarEntry = jarEntryEnum.nextElement(); if (jarEntry.getName().startsWith("org/springframework/boot") && jarEntry.getName().endsWith(".class") == true) { int endIndex = jarEntryName.lastIndexOf(".class"); className = jarEntryName.substring(0, endIndex).replace('/', '.'); try { final Class<?> loadedClass = urlClassLoader.loadClass(className); result.put(loadedClass.getName(), loadedClass); } catch (final ClassNotFoundException ex) { } } } jarFile.close(); The end result of the while loop is a map populated with Spring Boot loader class objects. Automating the Actual Launch With loading out of the way, we can proceed to finalize the automatic launch and use it to actually start our app. Java reflection allows the creation of objects from loaded classes, which is quite useful in the context of our tutorial. The first step is to create a JarFileArchive object. // Create JarFileArchive(File) object, needed for JarLauncher. final Class<?> jarFileArchiveClass = result.get("org.springframework.boot.loader.archive.JarFileArchive"); final Constructor<?> jarFileArchiveConstructor = jarFileArchiveClass.getConstructor(File.class); final Object jarFileArchive = jarFileArchiveConstructor.newInstance(new File(pathToJar)); The constructor of the JarFileArchive object takes a File(String) object as an argument, so it must be provided. The next step is to create a JarLauncher object, which requires Archive in its constructor. final Class<?> archiveClass = result.get("org.springframework.boot.loader.archive.Archive"); // Create JarLauncher object using JarLauncher(Archive) constructor. final Constructor<?> jarLauncherConstructor = mainClass.getDeclaredConstructor(archiveClass); jarLauncherConstructor.setAccessible(true); final Object jarLauncher = jarLauncherConstructor.newInstance(jarFileArchive); To avoid confusion, please note that Archive is actually an interface, while JarFileArchive is one of the implementations. The last step in the process is to call the launch(String[]) method on our newly created jarLauncher object. This is relatively straightforward and requires just a few lines of code. // Invoke JarLauncher#launch(String[]) method. final Class<?> launcherClass = result.get("org.springframework.boot.loader.Launcher"); final Method launchMethod = launcherClass.getDeclaredMethod("launch", String[].class); launchMethod.setAccessible(true); launchMethod.invoke(jarLauncher, new Object[]{new String[0]}); The invoke(jarLauncer, new Object[]{new String[0]}) method will finally start the Spring Boot application. Note that the main thread will stop and wait here for the Spring Boot application to terminate. A Word About the Spring Boot Classloader Examining our Spring Boot JAR file will reveal the following structure: +--- mySpringApp1-0.0.1-SNAPSHOT.jar +--- META-INF +--- BOOT-INF | +--- classes # 1 - project classes | | | | | +--- com.example.mySpringApp1 | | \--- SpringBootLoaderApplication.class | | | +--- lib # 2 - nested jar libraries | +--- javax.annotation-api-1.3.1 | +--- spring-boot-2.0.0.M7.jar | \--- (...) | +--- org.springframework.boot.loader # 3 - Spring Boot loader classes +--- JarLauncher.class +--- LaunchedURLClassLoader.class \--- (...) Note the three types of entries: - Project classes - Nested JAR libraries - Spring Boot loader classes Both project classes ( BOOT-INF/classes) and nested JARs ( BOOT-INF/lib) are handled by the same class loader LaunchedURLClassLoader. This loader resides in the root of the Spring Boot JAR application. The LaunchedURLClassLoader will load the class content ( BOOT-INF/classes) after the library content ( BOOT-INF/lib), which is different from the IDE. For example, Eclipse will first place class content in the classpath and then libraries (dependencies). LaunchedURLClassLoader extends java.net.URLClassLoader, which is created with a set of URLs that will be used for class loading. The URL might point to a location like a JAR archive or classes folder. When performing class loading, all of the resources specified by URLs will be traversed in the order the URLs were provided, and the first resource containing the searched class will be used. Wrapping Up A classic Java application requires all dependencies to be enumerated in the classpath argument, making the startup procedure somewhat cumbersome and complicated. In contrast, Spring Boot applications are handy and easy to start from the command line. They manage all dependencies, and the end user does not need to worry about the details. However, starting a Spring Boot application from another Java program makes the procedure more complicated, as it requires loading Spring Boot’s loader classes, creating specialized objects such as JarFileArchive and JarLauncher, and then using Java reflection to invoke the launch method. Bottom line: Spring Boot can take care of a lot of menial tasks under the hood, allowing developers to free up time and focus on more useful work such as creating new features, testing, and so on. Understanding the basics Spring Boot makes it easy to create standalone, production-grade, Spring-based applications that are easy to run or deploy, while the Spring framework is a comprehensive set of Java libraries used to develop rich web, desktop, or mobile applications. Spring Boot provides maven templates, built-in Tomcat web server, and some predefined configurations to simplify the use of Spring. Most Spring Boot applications need very little Spring configuration. Spring Boot is used to create Java applications that can be started by using java -jar or more traditional war deployments. Spring Boot architecture provides starters, auto-configuration, and component scan in order to get started with Spring without the need for complex XML configuration files.
https://www.toptal.com/spring-boot/spring-boot-application-programmatic-launch
CC-MAIN-2021-43
en
refinedweb
Motivation: The other day I found myself looking for info on how to implement responsive design in React components, I could not find anything clear, nothing that could make reference about any pattern or recommended method, so I decided to start thinking a little about this subject. As soon as I started to search information about responsive design, the use of media queries come up quickly, but commonly related to the device's window in which it is being drawn, which does not seem to contribute much for isolated components. Making a component to respond to the changes of the entire window dimensions does not seem to make sense, the component should respond to it's own dimensions, shouldn't it?? It is also true that some css tools can be used to manage the layout of the elements within the available space, for example with flexbox or css-grid some responsive behavior can be given to the elements but I don't think that can get to the same level as using media queries. For this reason I thought that maybe using the same concept of media queries but oriented to components could be a good idea. What do we want to achieve? Something like this... How to implement it? As soon as I started to wonder how I could implement something like this the ResizeObserver appeared, a browser API that allows us to detect the component's size changes and react to that, so it seems that it could be useful for what I want to do. The other thing that would be needed is to provide a standard way to define breakpoints for the element and a method to detect the component's size range in any given time, both of which can be implemented without much difficulties. My approach for this task was: - First, choose a structure to establish how the breakpoints for the component should be defined. - From those breakpoints identify a list of size ranges and generate a css class for each one of them. - Also it will be needed to identify the component's size after each change, find in which range it's on and assign the corresponding css class to it. This way it could have the same behavior as with media queries. Each time a component changes its range we can assign the propper css class and the necessary styles will be applied. As you can see the idea is simple and so is the procedure. I decided to encapsulate the logic in a hook to be able to reuse it in a fast way where it is necessary. How does this hook work? The hook receives a reference to the component to be controlled and optionally breakpoints to be used. In case of not receiving breakpoints, predefined ones will be used. Breakpoints must implement the following interface: interface breakpointsInput { readonly [key: string]: number; } Example (defaults breakpoints): const MEDIA_BREAKPOINTS: breakpointsInput = { small: 420, big: 768, }; Width ranges (mediaBreakpoints) will be created according to the breakpoints used (with their respective generated css classes). The generated mediaBreakpoints will comply with the following interface: interface mediaBreakpoints { class: string; from: number; toUnder: number; } And... createMediaBreakpoints(MEDIA_BREAKPOINTS); ...should return: [ { class: "to-small", from: 0, toUnder: 420, }, { class: "from-small-to-under-big", from: 420, toUnder: 768, }, { class: "from-big", from: 768, toUnder: Infinity, }, ]; Whenever a change in the size of the component is detected, the getCurrentSizeClass method will be called and the css class corresponding to that width range will be returned. getCurrentSizeClass(elementWidth, mediaBreakpoints) How to use this hook: npm i @jrx2-dev/use-responsive-class import { useResponsiveClass } from '@jrx2-dev/use-responsive-class'; /* const elementBreakpoints: breakpointsInput = { small: 420, big: 768, }; */ const elRef = createRef<HTMLDivElement>(); const [responsiveClass] = useResponsiveClass(elRef); // const [responsiveClass] = useResponsiveClass(elRef, elementBreakpoints); return ( <div ref={elRef} className={classes[responsiveClass]}> Some content </div> ); The styles should be something like this (css modules are used in demo project): .root { &.to-small { background-color: green; } &.from-small-to-under-big { background-color: yellow; } &.from-big { background-color: red; } } Demo: I used this custom hook in a component library that I made to use in personal demo projects . You can see this technique at work with an example component in the project's Storybook. Note: I have to say that I got a little distracted adding an animation between the change of layouts of the component, the logic is encapsulated in the hook useFadeOnSizeChange, I think it was necessary to make the transition between layouts a little more fluid. Conclusion: This experiment served me as a proof of concept to develop a system that allows the design of truly responsive components in react. Obviously the code can be improved, any comments or suggestions are welcome. The idea of this article was mostly a veiled question... how would you do it? :) Regarding the peformance, a third party hook (@react-hook/resize-observer) optimized for the ResizeObserver implementation is being used and seems to give good results. What I am interested in highlighting here is not so much the implementation itself but the concept used, I would like to hear opinions and suggestions on how you handle this issue. Discussion (3) How about a zero packages approach using css media queries? They are clean, easy to write and by using them you don't need to use javascript. Just css Thanks for your comment. At first I thought the same thing and tried to find a way to do this with css only but media queries always work depending on the size of the device/viewport, or its orientation/resolution, it doesn't work when what is important for us is to apply styles depending on the space an element is going to be able to occupy at a given time. There is an interesting discussion about this on the w3c github and the recommended way for this seems to be the use of js. That's actually right. First time I read your article I did not understand that you wanted different styles depending on the "space" of the element and not on the device/viewport. I have not yet encountered this scenario in real projects though, but it is a nice approach, will definitely use it if I ever need it.
https://dev.to/jrx2/responsive-design-in-react-components-7d1
CC-MAIN-2021-43
en
refinedweb
Use DateTimeOffset in a DateTimePicker and Grid Environment Description The DateTimeOffset structure represents a new date-time data structure which defines a point relative to the UTC time zone. However, neither databases nor JS are able of storing this structure as is since the DateTimeOffset is serialized as an object. On the other hand, the UI for ASP.NET MVC components which use dates strongly depend on the JavaScript Date type API which means that they need to work with a JavaScript Date. The default MVC binder is capable of binding a DateTimeOffset only if the submitted parameter is in the 2017-04-17T05:04:18.070Z format. In other words, if the UI for ASP.NET MVC components implement the DateTimeOffset overload, their use will be limited to only the above format or a custom binder would be required. Solution Map database models with DateTimeOffset fields to view models with DateTime fields and use those view models in the UI for ASP.NET MVC and Core components. AutoMapper is a popular third-party library that can map the models as shown below - Install AutoMapper from NuGet Package Manager or from the package manager console:. PM> Install-Package AutoMapper Upon a successful installation, the packages.config file should include a similar package: <package id="AutoMapper" version="7.0.1" targetFramework="net45" /> - Add a Mapping profile class that inherits from AutoMapper.Profileand list the required mappings. public class MappingProfile : Profile { public MappingProfile() { CreateMap<Car, CarViewModel>(); CreateMap<CarViewModel, Car>(); CreateMap<DateTime, DateTimeOffset>(); CreateMap<DateTimeOffset, DateTime>(); } } public class Car { public DateTimeOffset ProductionDate { get; set; } } public class CarViewModel { public DateTime ProductionDate { get; set; } } using AutoMapper; public class HomeController : Controller { private readonly IMapper mapper; public HomeController() { if(mapper == null) { var mappingConfig = new MapperConfiguration(mc => { mc.AddProfile(new MappingProfile()); }); mapper = mappingConfig.CreateMapper(); } } // to bind the model to any UI for ASP.NET Date/Time picker in the Index.cshtmls view public ActionResult Index() { return View(mapper.Map<CarViewModel>(cars.FirstOrDefault())); } // to use DateTimeOffset in the context of the Ajax() bound grid public ActionResult AllCars([DataSourceRequest] DataSourceRequest request) { // map the database models to the viewmodels var result = cars.Select(car => mapper.Map<CarViewModel>(car)); // call the ToDataSourceResult() extension method over the mapped collection return Json(result.ToDataSourceResult(request)); } } For the complete implementation refer to this GitHub project.
https://docs.telerik.com/aspnet-mvc/knowledge-base/datetimepicker-datetimeoffset-bind-to-model
CC-MAIN-2021-43
en
refinedweb
Morgan Stanley FTE Interview Experience | On-Campus 2021 (Virtual) Morgan Stanley visited our campus for Internship and Full-Time Positions in July’21. I was offered a Full Time position along with a 6-month internship. Out of everyone who applied, around 400 were shortlisted to appear for the Preliminary Round based on their CGPA. recruitment process was as follows: Preliminary Round(Online Test): This round was held on AMCAT Portal. The total duration of the test was 2 hours and consisted of 4 sections, each of which was individually timed. The sections were: Quantitative Aptitude: This section consisted of MCQs from topics like Geometry, Time and Distance, Probability, Permutations and Combinations, Time and Work, etc. Computer Science Fundamentals: This section had MCQs from Data Structures, Database Management Systems, Operating Systems, Computer Networks, and Linux. Here are some topics from which questions were asked. - Hamming Distance (CN) - Packet Switching (CN) - Routing Protocol (CN) - Multithreading (OS) - Threads (OS) - Shell (Linux) - Normalization (DBMS) Debugging: There were 7 code snippets in this section, each of which was to be corrected and tested against some public and private test cases. The languages offered were C++ and Java, and the preferred language was to be selected before starting this section. Types of questions: - Logical – We had to correct the logic of the given snippet to produce the correct output. For example, - Remove semicolon after looping statement (to execute the code inside the loop). - Change System.out.println() to System.out.print() (in Java) - Compilation – Fix the code to remove compilation errors without changing the logic. For example, - Check for infinite loops - Code Reuse – A pre-defined function was given. We were supposed to use this function to write the logic for our problem statement. For example, - The function to find the Euclidean distance between any two points was given. We had to write the logic to check if the three given points form a right-angled triangle using this function. Programming: There were 3 questions in this section which were to be solved in an hour. The questions asked were: - A simple array question which could be solved using Brute Force Method - A variation of Maximum number of overlapping Intervals - A variation of Longest Common Subsequence Round 1(Technical Interview – I): Out of 400 students who had appeared for the test, 48 were shortlisted for this round. This round was held on Zoom. The interview started with my introduction. My interviewer then asked me about my most preferred programming language, which was Python, following which I was asked Python-specific questions. Here are some of them: - Difference between list and tuples - Is Python compiled or interpreted? - What is namespace in python? - What is PEP? - Difference between break, continue, and pass - Difference between Arrays and lists in Python - What is __init__? Around 15-20 Python-specific questions were asked, and I was able to answer most of them correctly. The interviewer then moved on to Data Structures and Algorithms. The questions were: - Without using any collection class, how to find the frequency of each character of a word. - Suggest an effective method to sort a large set of floating point numbers which are in range from 0.0 to 1.0 I could solve the first question and suggested the method (Bucket Sort) to solve the second one, but could not code it out correctly. Then he moved on to Operating Systems and asked the following questions: Round 2(Technical Interview – II): I was told that I was shortlisted for this round 2 hours after giving the previous interview. This interview started with my introduction as well. Since I’m primarily a Backend Developer, most of the questions revolved around Backend, DBMS, and DevOps. Some of them were: - How do you begin working on a backend project? - How to reduce the query time of a large database? - How will you reduce the response time of a GET request which is being hit very frequently? (Caching using Redis) - Have you worked with Redis? - Principles of REST - Types of HTTP response status codes () - Can a Database be partitioned? (Sharding) - How will you manage the situation if your server is getting very high traffic? (Load Balancing) - How will you efficiently manage the situation if there is too much load on a server at a particular time and too little at another time? (Elasticity) - What is the difference between Horizontal Scaling and Vertical Scaling? The interviewer also asked me about Docker and Kubernetes, but I told him I had not worked with these technologies yet. A few other questions that I was asked were: - I was asked to explain one of my projects. - What was your role in your previous internship, and what are you working on in your current internship? - You have to design an Airbnb clone. What considerations will you keep in mind while designing the recommendation system? - You have to design a goods delivery application. How will you manage the delivery scheduling of the delivery valets? I managed to answer almost all the questions correctly. Round 3(Managerial Interview): About 3 days later, I got the call to appear for the Managerial Round. - This interview was taken by an Executive Director and lasted 45 mins. I was asked to introduce myself. We spoke about my role in my projects, internships, and the College Chapter that I was a board member of. - He asked me how I overcame challenges and managed the team. - Some of the questions revolved around the core values of Morgan Stanley. The interviewer was looking for honest responses and was quite impressed by my answers. The results were declared a few days later. 6, including me, were offered 6-month Internship + FTE, and 3 were offered a 6-month Internship. My Tips - For the Programming Section, practice a lot of problems from Arrays, Greedy and Dynamic Programming - Make sure you revise quantitative aptitude before the test, because you need to clear the cutoff for each section to make it to the next round. - For the technical MCQs, go through Last Minute Notes on GeeksForGeeks. - For the interviews, know everything about your most preferred language. If you are choosing Java/C++, make sure you are thorough with OOPs Concepts. - Revise Operating System concepts before the interview, especially Memory Management and Disk Scheduling. - Know everything in your resume. - Be honest during the interview. If you don’t know something, do not bluff for a long time. - The managerial round is about your previous experiences, make sure you are open with the interviewer and keep your answers believable. Good Luck!
https://www.geeksforgeeks.org/morgan-stanley-fte-interview-experience-on-campus-2021-virtual/?ref=rp
CC-MAIN-2021-43
en
refinedweb
Code: public class FridgeMagnets extends QWidget { public FridgeMagnets(QWidget parent) { super(parent); QFile dictionaryFile; dictionaryFile = new QFile("classpath:com/trolltech/examples/words.txt"); dictionaryFile.open(QIODevice.OpenModeFlag.ReadOnly); QTextStream inputStream = new QTextStream(dictionaryFile);In the constructor, we first open the file containing the words on our fridge magnets.()) { String word = ""; word = inputStream.readLine(); if (!word.equals("")) { DragLabel wordLabel = new DragLabel(word, this); wordLabel.move(x, y); wordLabel.show(); x += wordLabel.sizeHint().width() + 2; if (x >= 245) { x = 5; y += wordLabel.sizeHint().ColorRole.Window, QColor.white); setPalette(newPalette); setMinimumSize(400, Math.max(200, y)); setWindowIcon(new QIcon("classpath:com/trolltech/images/qt-logo.png")); setWindowTitle(tr("Fridge Magnets"));We also set the FridgeMagnets widget's palette, minimum size, window icon). Note that to fully enable drag and drop in our FridgeMagnets widget, we must also reimplement the dragEnterEvent(), dragMoveEvent() and dropEvent() event handlers inherited from QWidget: public void dragEnterEvent(QDragEnterEvent event) : if (event.mimeData().hasFormat("application/x-fridgemagnet")) { if (children().contains(event.source())) { event.setDropAction(Qt.DropAction.DropAction. public void dragMoveEvent(QDragMoveEvent event) { if (event.mimeData().hasFormat("application/x-fridgemagnet")) { if (children().contains(event.source())) { event.setDropAction(Qt.DropAction. public void dropEvent(QDropEvent event) { if (event.mimeData().hasFormat("application/x-fridgemagnet")) { com.trolltech.qt.core.QMimeData mime = event.mimeData();Note that the dropEvent() event handler behaves slightly different: If the event origins from any of this application's fridge magnet widgets, = new QDataStream(itemData, new QIODevice.OpenMode(QIODevice.OpenModeFlag.ReadOnly)); String text = dataStream.readString(); QPoint offset = new QPoint(); offset.readFrom(dataStream); DragLabel newLabel = new DragLabel(text, this); newLabel.move(new QPoint(event.pos().x() - offset.x(), event.pos().y() - offset.y())); newLabel.show(); if (children().contains(event.source())) { event.setDropAction(Qt.DropAction.MoveAction); event.accept(); } else { event.acceptProposedAction(); }Then we retrieve the data associated with the "application/x-fridgemagnet" MIME type and use it to create a new DragLabel object. We use QDataStream and our own custom readString() and readQPoint() convenience methods (which we will describe shortly) to retrieve the moving fridge magnet's text and stored offset..DropAction.MoveAction and call the event's accept() method. Otherwise, we simply accept the proposed action like we did in the other event handlers. } else if (event.mimeData().hasText()) { String[] pieces = event.mimeData().text().split("\\s+"); QPoint position = event.pos(); for (String piece : pieces) { if (piece.equals("")) continue; DragLabel newLabel = new DragLabel(piece, this); newLabel.move(position); newLabel.show(); position.add(new. public static void main(String args[]) { QApplication.initialize(args); FridgeMagnets fridgeMagnets = new FridgeMagnets(null); fridgeMagnets.show(); QApplication.exec(); } }Finally, we provide a main() method to create and show our main widget when the example is run. class DragLabel extends QLabel { private String labelText; public DragLabel(final String text, QWidget parent) { super(parent); QFontMetrics metrics = new QFontMetrics(font()); QSize size = metrics.size(12, text); QImage image = new QImage(size.width() + 12, size.height() + 12, QImage.Format.Format_ARGB32_Premultiplied); image.fill(0); QFont font = new QFont(); font.setStyleStrategy(QFont.StyleStrategy.ForceOutline);In the DragLabel constructor, we first create a QImage object on which we will draw the fridge magnet's text and frame. Its size depends on the current font size, and its format is QImage.Format.StyleStrategy.ForceOutline forces the use of outline fonts. QPainter painter = new QPainter(); painter.begin(image); painter.setRenderHint(QPainter.RenderHint.Antialiasing); painter.setBrush(QColor.white); QRectF frame = new QRectF(0.5, 0.5, image.width() - 1, image.height() - 1); painter.drawRoundRect(frame, 10 * 100 / image.width(), 10 * 100 / image.height()); painter.setFont(font); painter.setBrush(QColor.black); QRect rectangle = new QRect(new QPoint(6, 6), size); painter.drawText(rectangle, Qt.AlignmentFlag.AlignCenter.value(),. The end() method deactivates it. Note that the latter method is called automatically upon destruction when the painter is actived by its constructor. The QPainter.RenderHint. Earlier we set our main application widget's acceptDrops property and reimplemented QWidget's dragEnterEvent(), dragMoveEvent() and dropEvent() event handlers to support drag and drop. In addition, we must reimplement mousePressEvent() for our fridge magnet widget to make the user able to pick it up in the first place: public void mousePressEvent(QMouseEvent event) { QByteArray itemData = new QByteArray(); QDataStream dataStream; dataStream = new QDataStream(itemData, new QIODevice.OpenMode(QIODevice.OpenModeFlag.WriteOnly)); dataStream.writeString(labelText); QPoint position = new QPoint(event.pos().x() - rect().topLeft().x(), event.pos().y() - rect().topLeft().y()); position.writeTo(dataStream). com.trolltech.qt.core.QMimeData mimeData = new com.trolltech.qt.core(new QPoint(event.pos().x() - rect().topLeft().x(), event.pos().y() - rect().topLeft().y())); QDr.DropAction.MoveAction) == Qt.DropAction.MoveAction) close(); else show(); }Then we start the drag using QDrag's start() method requesting that the magnet is moved when the drag is completed. The method returns the performed drop action; if this action is equal to Qt.DropAction.MoveAction we will close the acttvated fridge magnet widget because we then create a new one (with the same data) at the drop position (see the implementation of our main widgets dropEvent() method). Otherwise, e.g., if the drop is outside our main widget, we simply show the widget in its original position.
https://doc.qt.io/archives/qtjambi-4.5.2_01/com/trolltech/qt/qtjambi-fridgemagnets.html
CC-MAIN-2021-43
en
refinedweb
table of contents NAME¶ signal - ANSI C signal handling SYNOPSIS¶ #include <signal.h> typedef void (*sighandler_t)(int); sighandler_t signal(int signum, sighandler_t handler); DESCRIPTION¶ WARNING:¶ signal() returns the previous value of the signal handler, or SIG_ERR on error. In the event of an error, errno is set to indicate the cause. ERRORS¶ CONFORMING TO¶ POSIX.1-2001, POSIX.1-2008, C89, C99. NOTES¶ the disposition SIGCHLD is set to SIG_IGN. See signal-safety(7) for a list of the async-signal-safe functions that can be safely called from inside a signal handler. The use of sighandler_t is a GNU extension, exposed if _GNU_SOURCE is defined; glibc also defines (the BSD-derived) sig_t if _BSD_SOURCE (glibc 2.19 and earlier) or _DEFAULT_SOURCE (glibc 2.19 and later) is defined. Without use of such a type, the declaration of signal() is the somewhat harder to read: void ( *signal(int signum, void (*handler)(int)) ) (int); Portability¶action. SEE ALSO¶) COLOPHON¶ This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/bullseye/manpages-dev/signal.2.en.html
CC-MAIN-2021-43
en
refinedweb
How to Build a Chat App With Next.js & Firebase In this tutorial, we will guide you through building a simple chat application with Next.js and Firebase. Adding a basic chat feature to your app is an amazing way to connect your users. It allows your users to negotiate, get information, make friends and even fall in love without ever leaving your app. However, modern chat apps don't just display messages. They also show images, have @-mentions, reaction gifs, stickers and all sorts of other features... One of which is the typing indicator. This tutorial will teach you how to add an indicator while a user is typing, adding one more UX improvement to your iOS app and ensuring your users don't escape your app for something more feature complete. Since CometChat already has support for typing notifications in the iOS SDK, all we have to do is build out the UI to our liking! In the process of adding the typing indicator, you'll also learn how to make custom views in iOS, as well as how to make cool-looking animations using Core Animation. By the end of this iOS typing indicator tutorial, this is what you'll end up with: You can find the finished project on GitHub. You'll have a typing indicator that: Pretty cool, right? Let's get started. This tutorial assumes you already have a chat app that you built. Thankfully, we have two tutorials to get you started: Or, you can start by downloading the starting project for this tutorial. Head to the [start-here]() branch of the GitHub repo [TODO: LINK] and clone or download the repo. Open CometChat.xcwrorkspace (not the project file) in Xcode to get started. Setting your app ID Before you can run the app, you'll have to set your CometChat app ID. If you already have a CometChat app, navigate to the CometChat dashboard and click Explore for your app, then click on API Keys in the sidebar.A as the region and enter CometChat as the app's name. Click the + button to create the app. After 15 or so seconds, you'll have an up-and-running chat service online! All you need to do is connect to it. Open your new app's dashboard by clicking Explore. Head over to API Keys in the sidebar. Take a look at the keys with full access scope. %} Now that you're all set, let's get started working on the typing indicator. You'll start working on the typing indicator by creating a new Swift file called, appropriately, TypingIndicatorView.swift. Add the following contents to the file: {% c-block %} import UIKit class TypingIndicatorView: UIView { private enum Constants { static let width: CGFloat = 5 static let scaleDuration: Double = 0.6 static let scaleAmount: Double = 1.6 static let delayBetweenRepeats: Double = 0.9 } } {% c-block-end %} You create a new custom view that will hold your typing indicator. You won't use Interface Builder for this. Instead, you'll do everything in code to make it as flexible as possible. This means you'll have to store a couple of constants, so you create an enum that holds these constants. You'll use them throughout the tutorial. Next, add the following properties and inits to the class: {% c-block %} private let receiverName: String private var stack: UIStackView! init(receiverName: String) { self.receiverName = receiverName super.init(frame: .zero) createView() } required init?(coder: NSCoder) { fatalError() } {% c-block-end %} The receiverName is the name of the person that's typing. The stack is your main stack view where you'll add all of your subviews to build out the typing indicator. To set the receiver name, you'll add a new init that receives the name as a parameter. This init will also call createView – a function you'll create shortly. Since this is a subclass of UIView, there's a required init that has to exist, but it doesn't necessarily need to do anything. So, you'll just fatalError out of it ever gets called. (Don't worry, it won't!) Next, implement the method you called in the initializer: {% c-block %} private func createView() { translatesAutoresizingMaskIntoConstraints = false stack = UIStackView() stack.translatesAutoresizingMaskIntoConstraints = false stack.axis = .horizontal stack.alignment = .center stack.spacing = 5 } {% c-block-end %} This function will build out your whole typing indicator, so this is just its start! First, you set translatesAutoresizingMaskIntoConstraints to false. This mouthful of a property tells UIKit that it shouldn't make any constraints for this view – you'll add those yourself. Next, you create the main stack view. This stack will eventually hold the dots of the ellipsis as well as the text saying "Jane Doe is typing". You make sure it's a horizontal stack and that everything is vertically centered with a bit of spacing between each item. Creating the dot The first view you'll add to the stack will be an unassuming, lonely dot. You'll start by making one, and then copy it two more times to create the ellipsis. Add the following method to the class: {% c-block %} func makeDot(animationDelay: Double) -> UIView { let view = UIView(frame: CGRect( origin: .zero, size: CGSize(width: Constants.width, height: Constants.width))) view.translatesAutoresizingMaskIntoConstraints = false view.widthAnchor.constraint(equalToConstant: Constants.width).isActive = true } {% c-block-end %} First, you create a new view to hold the dot and set its width and height to the one defined in Constants. You also add an Auto Layout constraint to make sure it has a fixed width of the same value. Next, you'll add a circle to the view using Core Animation: {% c-block %} let circle = CAShapeLayer() let path = UIBezierPath( arcCenter: .zero, radius: Constants.width / 2, startAngle: 0, endAngle: 2 * .pi, clockwise: true) circle.path = path.cgPath {% c-block-end %} CAShapeLayers let you draw a custom shape in the screen defined as a Bézier path – a common way to describe paths and shapes as a set of numerical values. You don't have to construct these yourself. Instead, you can use UIBezierPath's initializers to construct different shapes like arcs, lines, ovals, rectangles and any other shape you can think of. In this case, you create a circular path by creating an arc that starts at 0 degrees and rotates around a full circle, ending up at the same spot at 2π, or 360 degrees. You give it a radius equal to the half of the width so that it spans the whole view. You then set that path on the shape layer to draw the circle. Then, add the following lines to the method: {% c-block %} circle.frame = view.bounds circle.fillColor = UIColor.gray.cgColor {% c-block-end %} You set the layer's frame and give the circle a gray fill. Next, add the circle layer to the view and return the view: {% c-line %}view.layer.addSublayer(circle){% c-line-end %} return view This concludes makeDot. At least, for now. Back in createView, add the following code to add a dot to the stack: {% c-block %} let dot = makeDot(animationDelay: 0) stack.addArrangedSubview(dot) addSubview(stack) } {% c-block-end %} Next, you'll add a few constraints to make sure the stack spans the width and height of TypingIndicatorView: {% c-block %} NSLayoutConstraint.activate([ stack.leadingAnchor.constraint(equalTo: leadingAnchor), stack.trailingAnchor.constraint(equalTo: trailingAnchor), stack.topAnchor.constraint(equalTo: topAnchor), stack.bottomAnchor.constraint(equalTo: bottomAnchor) ]) {% c-block-end %} You set the leading, trailing, top and bottom anchor of the stack to all equal the same respective anchors of the view.Speaking of sizing, there's one final thing you need to do to make sure your layout doesn't break. When Auto Layout is laying out your view, it doesn't currently know how wide it should be. Views like UILabel or UIStackView calculate their own size – this size is called the intrinsic content size. You can replicate the same behavior in your views. Add the following override to the class: {% c-block %} override var intrinsicContentSize: CGSize { stack.intrinsicContentSize } {% c-block-end %} This tells Auto Layout how to size your view. In your case, you can return the stack's size since it will always match the while typing indicator view. Now that you built your dot, it's time to show it from the chat screen. Head to ChatViewController.swift and add the following property to the class: {% c-line %}private var typingIndicatorBottomConstraint: NSLayoutConstraint!{% c-line-end %} Later in this tutorial, you'll animate this constraint to show and hide the typing indicator. Next, add this method to the class: {% c-block %} private func createTypingIndicator() { let typingIndicator = TypingIndicatorView(receiverName: receiver.name) view.insertSubview(typingIndicator, belowSubview: textAreaBackground) } {% c-block-end %} This method will set up the typing indicator and add it as a subview. So, in the implementation, you first call TypingIndicatorView's initializer with the receiver's name and add it as a subview below the text area. This is important because the indicator needs to pop up from behind the text field. Next, add the following code to the method: {% c-block %} typingIndicatorBottomConstraint = typingIndicator.bottomAnchor.constraint( equalTo: textAreaBackground.topAnchor, constant: -16) typingIndicatorBottomConstraint.isActive = true {% c-block-end %} Here, you create a constraint to make sure the typing indicator is 16 points from the top of the text field. You'll set the property you created earlier so that you can animate the spacing and hide the typing indicator. Next, add this code to the method to add a couple of additional constraints: {% c-block %} NSLayoutConstraint.activate([ typingIndicator.heightAnchor.constraint(equalToConstant: 20), typingIndicator.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 26) ]) {% c-block-end %} These constraints make sure that the typing indicator's height is 20 and that it has a margin of 26 from the leading edge of the view. Finally, call the method from the bottom of viewDidLoad: {% c-line %}createTypingIndicator(){% c-line-end %} Run your project and enter superhero1 as your email. This is a pre-made test user that CometChat creates for you. Select a contact to chat with and take a look at your dot in all of its glory: Well, it is just a dot, so it might not look that impressive. But, the potential is there! Let's breathe some life into it by making it animate. To make the ellipsis animate you'll animate each dot to scale up and down in the same way. You'll then add a small delay to the second and third dot so that they don't all scale at the same time. That's a pretty neat trick to add the feeling of movement without adding complex animations. [TODO: Screenshot finished animation] Head back to TypingIndicatorView.swift and add the following code to the bottom of makeDot, right before return: {% c-block %} let animation = CABasicAnimation(keyPath: "transform.scale") animation.duration = Constants.scaleDuration / 2 animation.toValue = Constants.scaleAmount animation.isRemovedOnCompletion = false animation.timingFunction = CAMediaTimingFunction(name: .easeInEaseOut) animation.autoreverses = true {% c-block-end %} You might be used to the convenient API of UIView animation, but here's you're working at a lower level: Core Animation layers. To animate these, you'll also have to use a lower-level animation API: CAAnimation. CAAnimations are objects that animate almost anything between a toValue and a fromValue. In your case, you'll animate the scale of the dot, so you supply transform.scale as the key path for the property you want to animate. You then set the toValue to the scale amount from the constants and make sure there's an easing on the animation. By setting autoreverses to true, the layer will automatically reverse the animation so that it scales up and down in the same way. Next, you'll add this animation to an animation group. Add this code to the method: {% c-block %} let animationGroup = CAAnimationGroup() animationGroup.animations = [animation] animationGroup.duration = Constants.scaleDuration + Constants.delayBetweenRepeats animationGroup.repeatCount = .infinity animationGroup.beginTime = CACurrentMediaTime() + animationDelay {% c-block-end %} Usually, animation groups are useful for starting and stopping multiple animations at once. However, in this case, you'll use the group to delay the start of the scaling animation. By giving the group a duration that's longer than the scaling, you ensure a delay between each repeat of the animation. You set the group to repeat infinitely and add another delay to its start time. Finally, add the animation group to the circle layer by adding this line to the method: {% c-line %}circle.add(animationGroup, forKey: "pulse"){% c-line-end %} Run the app and take a look at your circle. Now it looks a bit more impressive: It scales up and down infinitely! Let's turn this dot into an ellipsis by copying it a couple of times. Adding more dots It's time to add some more dots! Still in TypingIndicatorView.swift, add a new method to the class: {% c-block %} private func makeDots() -> UIView { let stack = UIStackView() stack.translatesAutoresizingMaskIntoConstraints = false stack.axis = .horizontal stack.alignment = .bottom stack.spacing = 5 stack.heightAnchor.constraint(equalToConstant: Constants.width).isActive = true } {% c-block-end %} This method creates another stack view which you'll place inside the main stack. This inner stack view will hold all of your dots, so you make it horizontal and align everything to the bottom. You also give it a height equal to the width of one dot. Next, add the following code to the method to add the dots: {% c-block %} let dots = (0..<3).map { i in makeDot(animationDelay: Double(i) * 0.3) } dots.forEach(stack.addArrangedSubview) return stack {% c-block-end %} First, you use map to convert a range of integers (0, 1, 2) to dot views by calling makeDot. Each dot will have a larger delay, starting from zero and increasing by 0.3 seconds with each index. Then, you add the views to the stack by calling addArrangedSubview and, finally, return the stack. Next, inside createView, replace the lines where you create and add dot... {% c-block %} let dot = makeDot(animationDelay: 0) stack.addArrangedSubview(dot) {% c-block-end %} ...with the following two lines: {% c-block %} let dots = makeDots() stack.addArrangedSubview(dots) {% c-block-end %} Build and run the project and take a look. You now have an ellipsis with a neat animation that signals to the user that something is going on. We're still missing the text, so let's get on that! Don't worry, adding the dots was the hard part. Adding the text bit should be smooth sailing. Still in TypingIndicatorView.swift, add a label in createView right under the lines to create and add dots: {% c-block %} let typingIndicatorLabel = UILabel() typingIndicatorLabel.translatesAutoresizingMaskIntoConstraints = false stack.addArrangedSubview(typingIndicatorLabel) {% c-block-end %} You create a label, remove default constraints and add it to the main stack view. Next, add some text to the label: {% c-block %} let attributedString = NSMutableAttributedString( string: receiverName, attributes: [.font: UIFont.boldSystemFont(ofSize: UIFont.systemFontSize)]) {% c-block-end %} {% c-block %} let isTypingString = NSAttributedString( string: " is typing", attributes: [.font: UIFont.systemFont(ofSize: UIFont.systemFontSize)]) {% c-block-end %} {% c-block %} attributedString.append(isTypingString) typingIndicatorLabel.attributedText = attributedString {% c-block-end %} Instead of using plain strings, you use an attributed string to bold part of the text. Specifically, the username will be bold, so you create the username part of the text by adding an attribute to make the font bold. This bit of text is a mutable string. Next, you create an immutable attributed string with the rest of the text and a regular, non-bold font. You then append the non-bold part to the mutable string, like you were concatenating two regular strings. The result is that attributedString now holds a bold username and regular text saying "is typing". Run the project to take a look. Now the user can see who's typing. The typing indicator looks nice, but there's a little problem. It's always there! In the rest of this iOS typing indicator tutorial, you'll show and hide the indicator with an animation based on whether the user is typing or not. First, let's add an animation for when the indicator pops up and down. Open ChatViewController.swift and add a new method to the class: {% c-block language="swift" %} private func setTypingIndicatorVisible(_ isVisible: Bool) { let constant: CGFloat = isVisible ? -16 : 16 UIView.animate(withDuration: 0.4, delay: 0, options: .curveEaseInOut, animations: { self.typingIndicatorBottomConstraint.constant = constant self.view.layoutIfNeeded() }) } {% c-block-end %} Remember the constraint you declared earlier that determines the spacing between the text field and the typing indicator? Here, you'll toggle its constant between -16, when it's raised, and 16, when it should be hidden. Because the text field is on top of the typing indicator (in the z-axis), you won't see it if the constraint is set to 16. Next, in createTypingIndicator, locate the line where you set typingIndicatorBottomConstraint and change its constant to 16 instead of -16: {% c-block %} typingIndicatorBottomConstraint = typingIndicator.bottomAnchor.constraint( equalTo: textAreaBackground.topAnchor, // Change this line: constant: 16) {% c-block-end %} This ensures the typing indicator is hidden when the view loads. Then, test out the hiding and showing animation by temporarily adding the following two lines at the top of sendMessage: {% c-line %} setTypingIndicatorVisible(typingIndicatorBottomConstraint.constant == 16) {% c-line-end %} return This will toggle the typing indicator whenever you press the send button. Run the project to try it. Press the send button a couple of times and you should see the typing indicator pop up and down. Looking good! Remove the two lines you just added. You won't be needing them anymore, because you'll track the actual typing state. Thankfully, CometChat makes it easy to track when someone is typing. If you followed our previous iOS chat app tutorials, this approach will be familiar to you. Open ChatService.swift and add the following two closures to the top of the class: var onTypingStarted: ((User)-> Void)? var onTypingEnded: ((User)-> Void)? ChatService will call these functions whenever a user starts or stops typing. Next, add the following method to the class: {% c-block language="swift" %} func startTyping(to receiver: User) { let typingIndicator = TypingIndicator(receiverID: receiver.id, receiverType: .user) CometChat.startTyping(indicator: typingIndicator) } {% c-block-end %} You'll call this method when a user starts typing. It creates a new TypingIndicator object that holds information about the typing that's going on. It then uses that object to tell CometChat that typing has begun. Add another, very similar method below the one you just added: {% c-block language="swift" %} func stopTyping(to receiver: User) { let typingIndicator = TypingIndicator(receiverID: receiver.id, receiverType: .user) CometChat.endTyping(indicator: typingIndicator) } {% c-block-end %} This method is analogous to the previous one, only this one stops the typing. Then, you'll implement delegate methods to track when another user is typing. Add a new method inside the CometChatUserDelegate extension at the bottom of the file: {% c-block language="swift" %} func onTypingStarted(_ typingDetails: TypingIndicator) { guard let cometChatUser = typingDetails.sender else { return } DispatchQueue.main.async { self.onTypingStarted?(User(cometChatUser)) } } {% c-block-end %} CometChat calls this method when typing begins for a user. First, you'll grab the user and then switch to the main queue. From there, you'll call the closure you declared earlier, but not before you convert the user to your own User struct. Finally, add another delegate method below the previous one: {% c-block language="swift" %} func onTypingEnded(_ typingDetails: TypingIndicator) { guard let cometChatUser = typingDetails.sender else { return } DispatchQueue.main.async { self.onTypingEnded?(User(cometChatUser)) } } {% c-block-end %} This method is almost the same, except it gets called when a user stops typing, and calls the appropriate closure for when typing stops. That's all we need to do in ChatService. Let's move to the view controller to hook everything up. Open ChatViewController.swift and start by adding the following code to the bottom of viewDidLoad: {% c-block language="swift" %} ChatService.shared.onTypingStarted = { [weak self] user in if user.id == self?.receiver.id { self?.setTypingIndicatorVisible(true) } } ChatService.shared.onTypingEnded = { [weak self] user in if user.id == self?.receiver.id { self?.setTypingIndicatorVisible(false) } } {% c-block-end %} Here, you assign the two closures you created earlier. When a user starts typing, you'll check that the user is your receiver. If they are, you'll show the typing indicator. This is necessary because the typing closure could be called for a different contact that's not open in this view controller. Similarly, if you get notified that the receiver stopped typing, you'll hide the typing indicator. Next, you'll need to check if the current user is typing and send that information to CometChat. Scroll down to the UITextViewDelegate extension and add a new method to the extension: {% c-block %} // 1 func textView( _ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool { // 2 let currentText: String = textView.text let range = Range(range, in: currentText)! let newText = currentText.replacingCharacters(in: range, with: text) // 3 switch (currentText.isEmpty, newText.isEmpty) { // 4 case (true, false): ChatService.shared.startTyping(to: receiver) // 5 case (false, true): ChatService.shared.stopTyping(to: receiver) default: break } // 6 return true } {% c-block-end %} This method looks jam-packed with information, so let's take it step-by-step: Now you're sending the typing state to CometChat! There's one final line of code you need to add. Scroll up to sendMessage, and add this line right before the call to ChatService.shared.send: ChatService.shared.stopTyping(to: receiver) This makes sure to reset the typing state when your user sends a message. Run the project on two devices (or two simulator instances). On one device, log in as superhero1 and start chatting with Captain America. On the other device, log in as superhero2 and start chatting with Iron Man. As you start typing, you should see a typing indicator pop up on the other device! You can find the finished project on GitHub. Using a combination of CometChat, UIView animation, Core Animation and your chops, you made a nice looking animated typing indicator! It animates into view, has a pulsing ellipsis animation and shows the username of the person that's typing. Armed with this knowledge, you can go even further and explore other ways to improve your UX: If you want a more detailed look at how to build a chat app with CometChat, take a look at the iOS one-on-one chat app tutorial or the iOS group chat app tutorial. If you're ahead of the curve and want to build a SwiftUI chat app, you can go through our SwiftUI course on building a chat app in SwiftUI. I hope this iOS typing indicator tutorial helped give your users a better chat experience!
https://www.cometchat.com/tutorials/add-an-animated-typing-indicator-to-your-ios-chat-app
CC-MAIN-2021-43
en
refinedweb
If this was really the intended behavior as opposed to incomplete design (none of us are ever guilty of that are we?), I think the choice was incorrect. See the example REACTOR. |> In this way, if the exception raised by the finally clause was caused |> by a programming error, and presuming it is not masked in turn by a |> too generous outer except clause, the point of the error in the |> finally clause shows up in the stack trace, as it should. |> I do not quite understand what you mean here. If the scope of an exception that is raised inside a finally clause is in a more "outer" scope than the current exception, that should be legal. If it has two exception clauses for that exception, one inside the current exception clause and one outside of it, the inner one should be discarded and the outer one handles it. If this is not clear, another example might |> I stole this idea from Modula-3, like |> most of the exception stuff in Python. As far as I can tell M3 has |> the same semantics as Python in the situation you describe. I certainly believe that M3 has the same behavior, early lisps did too. The current implementation is easier to do compared to what I propose. So unless this scenario was thought of during the design of the exception stuff, the easier route would be natural. |> |> Finally. I don't think I have a different model of exceptions. We disagree on the extent of an exception handler. That difficulty allows a finally clause to step in on an exception and decide to pass control to a more RECENTLY nested handler, thus ignoring the orginal exception. My previous example recast to show the serious side effects that can happen. reactor_overheat= 'reactor_overheat' def jjj(x): try: # - outer exception to protect world try: jjk(x) except ZeroDivisionError, val: # - inner error to protect minor errors print 'execption ZeroDivisionError raised', val print 'If the reactor_overheats, we are in deep trouble' except reactor_overheat, val: print 'Drop the Control Rods QUICK', val def jjk(x): try: print "about to make reactor_overheat" raise reactor_overheat, "reactor_overheat" finally: print "average lifespan around reactor ", 70/x >>> jjj(3) # - do not trigger the ZeroDivisionError about to make reactor_overheat average lifespan around reactor 23 Drop the Control Rods QUICK reactor_overheat >>> jjj(0) # - trigger the ZeroDivisionError about to make reactor_overheat average lifespan around reactor execption ZeroDivisionError raised integer division or modulo If the reactor_overheats, we are in deep trouble >>> So in the jjj(0) call, the reactor overheating exception is lost (and the world is doomed of course). Note that the reverse ordering on the jjk function does work correctly for both our interpretations of how exceptions should be processed. Ie. if jjk was defined as def jjk(x): try: print "average lifespan around reactor ", 70/x finally: print "about to make reactor_overheat" raise reactor_overheat, "reactor_overheat" we then get- >>> jjj(0) average lifespan around reactor about to make reactor_overheat Drop the Control Rods QUICK reactor_overheat The finally clause raises an exception that is outside (nested) of the divide by zero exception. Anyone other than Guido and I interested in this issue? -- ========================================================= Mark Riggle | "Give me LAMBDA or [email protected] | give me death" SAS Institute Inc., | SAS Campus Drive, Cary, NC, 27513 | (919) 677-8000 |
https://legacy.python.org/search/hypermail/python-1994q2/0985.html
CC-MAIN-2021-43
en
refinedweb
libssh2_session_supported_algs - get list of supported algorithms #include <libssh2.h> int libssh2_session_supported_algs(LIBSSH2_SESSION* session, int method_type, const char*** algs); session - An instance of initialized LIBSSH2_SESSION (the function will use its pointer to the memory allocation function). method_type - Method type. See libssh2_session_method_pref. algs - Address of a pointer that will point to an array of returned algorithms Get a list of supported algorithms for the given method_type. The method_type parameter is equivalent to method_type in libssh2_session_method_pref. If successful, the function will allocate the appropriate amount of memory. When not needed anymore, it must be deallocated by calling libssh2_free. When this function is unsuccessful, this must not be done. In order to get a list of all supported compression algorithms, libssh2_session_flag(session, LIBSSH2_FLAG_COMPRESS, 1) must be called before calling this function, otherwise only "none" will be returned. If successful, the function will allocate and fill the array with supported algorithms (the same names as defined in RFC 4253). The array is not NULL terminated. #include "libssh2.h" const char **algorithms; int rc, i; LIBSSH2_SESSION *session; /* initialize session */ session = libssh2_session_init(); rc = libssh2_session_supported_algs(session, LIBSSH2_METHOD_CRYPT_CS, &algorithms); if (rc>0) { /* the call succeeded, do sth. with the list of algorithms (e.g. list them)... */ printf("Supported symmetric algorithms:\n"); for ( i=0; i<rc; i++ ) printf("\t%s\n", algorithms[i]); /* ... and free the allocated memory when not needed anymore */ libssh2_free(session, algorithms); } else { /* call failed, error handling */ } On success, a number of returned algorithms (i.e a positive number will be returned). In case of a failure, an error code (a negative number, see below) is returned. 0 should never be returned. LIBSSH2_ERROR_BAD_USE - Invalid address of algs. LIBSSH2_ERROR_METHOD_NOT_SUPPORTED - Unknown method type. LIBSSH2_ERROR_INVAL - Internal error (normally should not occur). LIBSSH2_ERROR_ALLOC - Allocation of memory failed. Added in 1.4.0 libssh2_session_methods(3), libssh2_session_method_pref libssh2_free This HTML page was made with roffit.
https://libssh2.org/libssh2_session_supported_algs.html
CC-MAIN-2021-43
en
refinedweb
How to Use the Requests Library for Python on Devices Running Junos OS The Requests library for Python is available on certain devices running Junos OS that support the Python extensions package. You can use the requests module in Python scripts to send HTTP/1.1 requests. On devices running Junos OS with Enhanced Automation, you can also use the requests module in Python interactive mode. The Requests library provides additional methods for supporting initial deployments as well as for performing routine monitoring and configuration changes on devices running Junos OS. For information about the requests module and its functions, see the Requests documentation at. Issuing Requests You can use the requests module in onbox Python scripts to send HTTP/1.1 requests. To make a request, import the module in your script, and call the function corresponding to the desired request. The module supports HTTP GET and POST requests as well as HEAD, DELETE, and PUT requests. The request returns a Response object containing the server’s response. By default, requests are made using the default routing instance. The Requests library can be used to execute RPCs on devices running Junos OS that support the REST API service. The target device must be configured with the appropriate statements at the [edit system services rest] hierarchy level to enable Junos OS commands over HTTP or HTTPS using REST. For example, the following op script performs a GET request that executes the get-software-information RPC on a remote device running Junos OS that has the REST API service over HTTP configured on the default port (3000). The script prints the response status code, and if the status code indicates success, it prints the response content. To retrieve just the headers, you can send a simple HEAD request. If a GET request requires additional parameters, you can either include the params argument and supply a dictionary or a list of tuples or bytes to send in the query string, or you can pass in key/value pairs as part of the URL. Similarly, you can supply custom headers by including the headers argument and a dictionary of HTTP headers. The following request executes the get-interface-information RPC with the terse option for the given interface and returns the response in text format: The following example supplies the arguments as key/value pairs in the URL: To execute multiple RPCs in the same request, initiate an HTTP POST request, and set the data parameter to reference the RPCs to execute. See sections Executing Operational RPCs and Managing the Configuration for examples that execute multiple RPCs. Executing Operational RPCs You can use the requests module to execute RPCs from the Junos XML API on a remote device running Junos OS that has the REST API service enabled. The following op script uses the requests module to execute the RPC equivalent of the show interfaces ge-2/0/1 terse operational mode command on the target device: The following op script sends a POST request that executes multiple RPCs on the target device. The data parameter references the RPCs to execute, which are defined in a multiline string for readability. You can also create a generic op script for which the user supplies the necessary variables and the script constructs and executes the request. Consider the following op script configuration, which configures the host, rpc, and rpc_args command line arguments for the requests-rpc.py op script The following sample op script connects to a remote device running Junos OS, which has been configured with the appropriate statements at the [edit system services rest] hierarchy level to enable Junos OS commands over HTTP using REST. The script prompts for the connection password and connects to the host and port provided through the host argument. The script then uses the requests module to send a GET request executing the RPC that was provided through the command-line arguments. Starting in Junos OS Release 21.2R1 and Junos OS Evolved Release 21.2R1, when the device passes command-line arguments to a Python op script, it prefixes a single hyphen (-) to single-character argument names and prefixes two hyphens (--) to multi-character argument names. In earlier releases, the devices prefixes a single hyphen (-) to all argument names. When you execute the script, it executes the RPC with the specified options on the remote device and prints the response to standard output. Managing the Configuration You can use the requests module to retrieve or change the configuration on a device running Junos OS that has the REST API service enabled. The following op script retrieves the [edit system] hierarchy from the candidate configuration using a POST request: HTTP POST requests also enable you to execute multiple RPCs in a single request, for example, to lock, load, commit, and unlock a configuration. The following sample op script connects to the remote device and configures an address on the given interface. The lock, load, commit, and unlock operations are defined separately for readability, but the RPCs are concatenated in the request. When you execute the op script, it returns the RPC results for the lock, load, commit, and unlock operations. On some devices, the response output separates the individual RPC replies with boundary lines that include -- followed by a boundary string and a Content-Type header. Other devices might include just the Content-Type header. Using Certificates in HTTPS Requests The HTTP basic authentication mechanism sends user credentials as a Base64-encoded clear-text string. To protect the authentication credentials from eavesdropping, we recommend enabling the RESTful API service over HTTPS, which encrypts the communication using Transport Layer Security (TLS) or Secure Sockets Layer (SSL). For information about configuring this service, see the Junos OS REST API Guide. By default, the Requests library verifies SSL certificates for HTTPS requests. You can include the verify and cert arguments in the request to control the SSL verification options. For detailed information about these options, see the Requests documentation. When you use Python 2.7 to execute a script that uses the requests module to execute HTTPS requests, the script generates InsecurePlatformWarning and SubjectAltNameWarning warnings. The following op script sends a GET request over HTTPS, and sets the verify argument to the file path of a CA bundle or a directory containing trusted CA certificates. The specified CA certificates are used to verify the server’s certificate. To specify a local client-side certificate, set the cert argument equal to the path of a single file containing the client’s private key and certificate or to a tuple containing the paths of the individual client certificate and private key files. Specifying the Routing Instance By default, requests are executed using the default routing instance. You can also execute requests using the mgmt_junos management instance or another non-default routing instance. When you execute scripts through the Junos OS infrastructure, you can specify the routing instance by calling the set_routing_instance() function in the script. Certain devices also support specifying the routing instance and executing a script in the Unix-level shell. On devices running Junos OS Evolved, the set_routing_instance() function only supports using the management routing instance. In a Python script, to execute a request using a non-default routing instance, including the mgmt_junos instance: - Import the jcsmodule. - Call the set_routing_instance()function, and specify the instance to use for the connection. - Establish the connection with the target device. The following op script uses the mgmt_junos management instance to connect to the target device and execute requests. For information about using the set_routing_instance() function in Python scripts, see set_routing_instance(). In addition to specifying the routing instance in the script, certain devices support specifying the routing instance and executing a script from the Unix-level shell. On devices running Junos OS with Enhanced Automation (FreeBSD Release 7.1 or later), you can use the setfib command to execute requests with the given routing instance, including the management instance and other non-default routing instances. The following Python script simply executes the get-software-information RPC on a remote device and prints the response: To use setfib to execute the script using a non-default routing instance on a device running Junos OS with Enhanced Automation: Find the software index associated with the routing table for that instance. In the following example, the device is configured to use the non-default dedicated management instance mgmt_junos. The routing table index is referenced in the command output. To execute the op script with the given routing instance, use the setfibcommand to execute the script and reference the index. For example: In the following example, the device is configured with a non-default routing instance, vr1, and the vr1.inet routing table index is 8: The following command executes the op script using the vr1 routing instance: Performing ZTP Operations Zero touch provisioning (ZTP) enables you to provision new Juniper Networks devices in your network automatically, with minimal manual intervention. To use ZTP, you configure a server to provide the required information, which can include a Junos OS image and a configuration file to load or a script to execute. When you physically connect a device to the network and boot it with a factory-default configuration, the device retrieves the information from the designated server, upgrades the Junos OS image as appropriate, and executes the script or loads the configuration file. When you connect and boot a new networking device, if Junos OS detects a file on the server, the first line of the file is examined. If Junos OS finds the characters #! followed by an interpreter path, it treats the file as a script and executes it with the specified interpreter. You can use the Requests library in executed scripts to streamline the ZTP process. For example, consider the following sample Python script, which the new device downloads and executes during the ZTP process. When the script executes, it first downloads the CA certificate from the ca_cert_remote location on the specified server and stores it locally in the ca_cert_local location. The script then connects to the configuration server on port 8000 and issues a GET request to retrieve the new device configuration. The request includes the path to the CA certificate, which is used to verify the server’s certificate during the exchange. The script then uses the Junos PyEZ library to load the configuration on the device and commit it.
https://www.juniper.net/documentation/us/en/software/junos/automation-scripting/topics/task/junos-python-modules-requests-module.html
CC-MAIN-2021-43
en
refinedweb
Maps SDK for Android. Java public class MyItem implements ClusterItem { private final LatLng position; private final String title; private final String snippet; public MyItem(double lat, double lng, String title, String snippet) { position = new LatLng(lat, lng); this.title = title; this.snippet = snippet; } @Override public LatLng getPosition() { return position; } @Override public String getTitle() { return title; } @Override public String getSnippet() { return snippet; } } Kotlin inner class MyItem( lat: Double, lng: Double, title: String, snippet: String ) : ClusterItem { private val position: LatLng private val title: String private val snippet: String override fun getPosition(): LatLng { return position } override fun getTitle(): String? { return title } override fun getSnippet(): String? { return snippet } init { position = LatLng(lat, lng) this.title = title this.snippet = snippet } } In your map activity, add the ClusterManager and feed it the cluster items. Note the type argument <MyItem>, which declares the ClusterManager to be of type MyItem. Java // Declare a variable for the cluster manager. private ClusterManager<MyItem> clusterManager; private void setUpClusterer() { // Position the map. map.moveCamera(CameraUpdateFactory.newLatLngZoom(new LatLng(51.503186, -0.126446), 10)); // Initialize the manager with the context and the map. // (Activity extends context, so we can pass 'this' in the constructor.) clusterManager = new ClusterManager<MyItem>(context, map); // Point the map's listeners at the listeners implemented by the cluster // manager. map.setOnCameraIdleListener(clusterManager); map.setOnMarkerClickListener(clust, "Title " + i, "Snippet " + i); clusterManager.addItem(offsetItem); } } Kotlin // Declare a variable for the cluster manager. private lateinit var clusterManager: ClusterManager<MyItem> private fun setUpClusterer() { // Position the map. map.moveCamera(CameraUpdateFactory.newLatLngZoom(LatLng(51.503186, -0.126446), 10f)) // Initialize the manager with the context and the map. // (Activity extends context, so we can pass 'this' in the constructor.) clusterManager = ClusterManager(context, map) // Point the map's listeners at the listeners implemented by the cluster // manager. map.setOnCameraIdleListener(clusterManager) map.setOnMarkerClickListener(clusterManager) // Add cluster items (markers) to the cluster manager. addItems() } private fun addItems() { // Set some lat/lng coordinates to start with. var lat = 51.5145160 var lng = -0.1270060 // Add ten cluster items in close proximity, for purposes of this example. for (i in 0..9) { val offset = i / 60.0 lat += offset lng += offset val offsetItem = MyItem(lat, lng, "Title $i", "Snippet $i") clust: Java clusterManager.setAnimation(false); Kotlin clust: Java //. clusterManager.addItem(infoWindowItem); Kotlin // Set the lat/long coordinates for the marker. val lat = 51.5009 val lng = -0.122 // Set the title and snippet strings. val title = "This is the title" val snippet = "and this is the snippet." // Create a cluster item for the marker and set the title and snippet using the constructor. val infoWindowItem = MyItem(lat, lng, title, snippet) // Add the cluster item (marker) to the cluster manager. clust.
https://developers-dot-devsite-v2-prod.appspot.com/maps/documentation/android-sdk/utility/marker-clustering
CC-MAIN-2021-43
en
refinedweb
CAPNG_SAVE_STATE(3) Libcap-ng API CAPNG_SAVE_STATE(3) capng_save_state - get the internal library state #include <cap-ng.h> void . The structure returned by capng_save_state is malloc'd; it should be free'd if not used. capng_restore_restore_state(3)
https://man7.org/linux/man-pages/man3/capng_save_state.3.html
CC-MAIN-2021-43
en
refinedweb
Authentication and authorization in ASP.NET Core SignalR View or download sample code (how to download) Authenticate users connecting to a SignalR hub. Multiple connections may be associated with a single user. The following is an example of Startup.Configure which uses SignalR and ASP.NET Core authentication: public void Configure(IApplicationBuilder app) { ... app.UseStaticFiles(); app.UseRouting(); app.UseAuthentication(); app.UseAuthorization(); app.UseEndpoints(endpoints => { endpoints.MapHub<ChatHub>("/chat"); endpoints.MapControllerRoute("default", "{controller=Home}/{action=Index}/{id?}"); }); } public void Configure(IApplicationBuilder app) { ... app.UseStaticFiles(); app.UseAuthentication(); app.UseSignalR(hubs => { hubs.MapHub<ChatHub>("/chat"); }); app.UseMvc(routes => { routes.MapRoute("default", "{controller=Home}/{action=Index}/{id?}"); }); } Note The order in which you register the SignalR and ASP.NET Core authentication middleware matters. Always call UseAuthentication before UseSignalR so that SignalR has a user on the HttpContext. Note If a token expires during the lifetime of a connection, the connection continues to work. LongPolling and ServerSentEvent connections fail on subsequent requests if they don't send new access tokens. Note If a token expires during the lifetime of a connection, by default the connection continues to work. LongPolling and ServerSentEvent connections fail on subsequent requests if they don't send new access tokens. For connections to close when the authentication token expires, set CloseOnAuthenticationExpiration. Cookie authentication In a browser-based app, cookie authentication allows your existing user credentials to automatically flow to SignalR connections. When using the browser client, no additional configuration is needed. If the user is logged in to your app, the SignalR connection automatically inherits this authentication. Cookies are a browser-specific way to send access tokens, but non-browser clients can send them. When using the .NET Client, the Cookies property can be configured in the .WithUrl call to provide a cookie. However, using cookie authentication from the .NET client requires the app to provide an API to exchange authentication data for a cookie. Bearer token authentication The client can provide an access token instead of using a cookie. The server validates the token and uses it to identify the user. This validation is done only when the connection is established. During the life of the connection, the server doesn't automatically revalidate to check for token revocation. In the JavaScript client, the token can be provided using the accessTokenFactory option. // Connect, using the token we got. this.connection = new signalR.HubConnectionBuilder() .withUrl("/hubs/chat", { accessTokenFactory: () => this.loginToken }) .build(); In the .NET client, there's a similar AccessTokenProvider property that can be used to configure the token: var connection = new HubConnectionBuilder() .WithUrl("", options => { options.AccessTokenProvider = () => Task.FromResult(_myAccessToken); }) .Build(); Note The access token function you provide is called before every HTTP request made by SignalR. If you need to renew the token in order to keep the connection active (because it may expire during the connection), do so from within this function and return the updated token. In standard web APIs, bearer tokens are sent in an HTTP header. However, SignalR is unable to set these headers in browsers when using some transports. When using WebSockets and Server-Sent Events, the token is transmitted as a query string parameter. Built-in JWT authentication On the server, bearer token authentication is configured using the JWT Bearer middleware:(options => { // Identity made Cookie authentication the default. // However, we want JWT Bearer Auth to be the default. options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme; options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme; }) .AddJwtBearer(options => { // Configure the Authority to the expected value for your authentication provider // This ensures the token is appropriately validated options.Authority = /* TODO: Insert Authority URL here */; // We have to hook the OnMessageReceived event in order to // allow the JWT authentication handler to read the access // token from the query string when a WebSocket or // Server-Sent Events request comes in. // Sending the access token in the query string is required due to // a limitation in Browser APIs. We restrict it to only calls to the // SignalR hub in this code. // See // for more information about security considerations when using // the query string to transmit the access token. options.Events = new JwtBearerEvents { OnMessageReceived = context => { var accessToken = context.Request.Query["access_token"]; // If the request is for our hub... var path = context.HttpContext.Request.Path; if (!string.IsNullOrEmpty(accessToken) && (path.StartsWithSegments("/hubs/chat"))) { // Read the token out of the query string context.Token = accessToken; } return Task.CompletedTask; } }; }); services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1); services.AddSignalR(); // Change to use Name as the user identifier for SignalR // WARNING: This requires that the source of your JWT token // ensures that the Name claim is unique! // If the Name claim isn't unique, users could receive messages // intended for a different user! services.AddSingleton<IUserIdProvider, NameUserIdProvider>(); // Change to use email as the user identifier for SignalR // services.AddSingleton<IUserIdProvider, EmailBasedUserIdProvider>(); // WARNING: use *either* the NameUserIdProvider *or* the // EmailBasedUserIdProvider, but do not use both. } If you would like to see code comments translated to languages other than English, let us know in this GitHub discussion issue. Note The query string is used on browsers when connecting with WebSockets and Server-Sent Events due to browser API limitations. When using HTTPS, query string values are secured by the TLS connection. However, many servers log query string values. For more information, see Security considerations in ASP.NET Core SignalR. SignalR uses headers to transmit tokens in environments which support them (such as the .NET and Java clients). Identity Server JWT authentication When using Identity Server, add a PostConfigureOptions<TOptions> service to the project: using Microsoft.AspNetCore.Authentication.JwtBearer; using Microsoft.Extensions.Options; public class ConfigureJwtBearerOptions : IPostConfigureOptions<JwtBearerOptions> { public void PostConfigure(string name, JwtBearerOptions options) { var originalOnMessageReceived = options.Events.OnMessageReceived; options.Events.OnMessageReceived = async context => { await originalOnMessageReceived(context); if (string.IsNullOrEmpty(context.Token)) { var accessToken = context.Request.Query["access_token"]; var path = context.HttpContext.Request.Path; if (!string.IsNullOrEmpty(accessToken) && path.StartsWithSegments("/hubs")) { context.Token = accessToken; } } }; } } Register the service in Startup.ConfigureServices after adding services for authentication (AddAuthentication) and the authentication handler for Identity Server (AddIdentityServerJwt): services.AddAuthentication() .AddIdentityServerJwt(); services.TryAddEnumerable( ServiceDescriptor.Singleton<IPostConfigureOptions<JwtBearerOptions>, ConfigureJwtBearerOptions>()); Cookies vs. bearer tokens Cookies are specific to browsers. Sending them from other kinds of clients adds complexity compared to sending bearer tokens. Consequently, cookie authentication isn't recommended unless the app only needs to authenticate users from the browser client. Bearer token authentication is the recommended approach when using clients other than the browser client. Windows authentication If Windows authentication is configured in your app, SignalR can use that identity to secure hubs. However, to send messages to individual users, you need to add a custom User ID provider. The Windows authentication system doesn't provide the "Name Identifier" claim. SignalR uses the claim to determine the user name. Add a new class that implements IUserIdProvider and retrieve one of the claims from the user to use as the identifier. For example, to use the "Name" claim (which is the Windows username in the form [Domain]\[Username]), create the following class: public class NameUserIdProvider : IUserIdProvider { public string GetUserId(HubConnectionContext connection) { return connection.User?.Identity?.Name; } } Rather than ClaimTypes.Name, you can use any value from the User (such as the Windows SID identifier, and so on). Note The value you choose must be unique among all the users in your system. Otherwise, a message intended for one user could end up going to a different user. Register this component in your Startup.ConfigureServices method. public void ConfigureServices(IServiceCollection services) { // ... other services ... services.AddSignalR(); services.AddSingleton<IUserIdProvider, NameUserIdProvider>(); } In the .NET Client, Windows Authentication must be enabled by setting the UseDefaultCredentials property: var connection = new HubConnectionBuilder() .WithUrl("", options => { options.UseDefaultCredentials = true; }) .Build(); Windows authentication is supported in Internet Explorer and Microsoft Edge, but not in all browsers. For example, in Chrome and Safari, attempting to use Windows authentication and WebSockets fails. When Windows authentication fails, the client attempts to fall back to other transports which might work. Use claims to customize identity handling An app that authenticates users can derive SignalR user IDs from user claims. To specify how SignalR creates user IDs, implement IUserIdProvider and register the implementation. The sample code demonstrates how you would use claims to select the user's email address as the identifying property. Note The value you choose must be unique among all the users in your system. Otherwise, a message intended for one user could end up going to a different user. public class EmailBasedUserIdProvider : IUserIdProvider { public virtual string GetUserId(HubConnectionContext connection) { return connection.User?.FindFirst(ClaimTypes.Email)?.Value; } } The account registration adds a claim with type ClaimsTypes.Email to the ASP.NET identity database. // create a new user var user = new ApplicationUser { UserName = Input.Email, Email = Input.Email }; var result = await _userManager.CreateAsync(user, Input.Password); // add the email claim and value for this user await _userManager.AddClaimAsync(user, new Claim(ClaimTypes.Email, Input.Email)); Register this component in your Startup.ConfigureServices. services.AddSingleton<IUserIdProvider, EmailBasedUserIdProvider>(); Authorize users to access hubs and hub methods By default, all methods in a hub can be called by an unauthenticated user. To require authentication, apply the Authorize attribute to the hub: [Authorize] public class ChatHub: Hub { } You can use the constructor arguments and properties of the [Authorize] attribute to restrict access to only users matching specific authorization policies. For example, if you have a custom authorization policy called MyAuthorizationPolicy you can ensure that only users matching that policy can access the hub using the following code: [Authorize("MyAuthorizationPolicy")] public class ChatHub : Hub { } Individual hub methods can have the [Authorize] attribute applied as well. If the current user doesn't match the policy applied to the method, an error is returned to the caller: [Authorize] public class ChatHub : Hub { public async Task Send(string message) { // ... send a message to all users ... } [Authorize("Administrators")] public void BanUser(string userName) { // ... ban a user from the chat room (something only Administrators can do) ... } } Use authorization handlers to customize hub method authorization SignalR provides a custom resource to authorization handlers when a hub method requires authorization. The resource is an instance of HubInvocationContext. The HubInvocationContext includes the HubCallerContext, the name of the hub method being invoked, and the arguments to the hub method. Consider the example of a chat room allowing multiple organization sign-in via Azure Active Directory. Anyone with a Microsoft account can sign in to chat, but only members of the owning organization should be able to ban users or view users' chat histories. Furthermore, we might want to restrict certain functionality from certain users. Using the updated features in ASP.NET Core 3.0, this is entirely possible. Note how the DomainRestrictedRequirement serves as a custom IAuthorizationRequirement. Now that the HubInvocationContext resource parameter is being passed in, the internal logic can inspect the context in which the Hub is being called and make decisions on allowing the user to execute individual Hub methods. [Authorize] public class ChatHub : Hub { public void SendMessage(string message) { } [Authorize("DomainRestricted")] public void BanUser(string username) { } [Authorize("DomainRestricted")] public void ViewUserHistory(string username) { } }.Identity.Name.EndsWith("@microsoft.com")) { context.Succeed(requirement); } return Task.CompletedTask; } private bool IsUserAllowedToDoThis(string hubMethodName, string currentUsername) { return !(currentUsername.Equals("[email protected]") && hubMethodName.Equals("banUser", StringComparison.OrdinalIgnoreCase)); } } In Startup.ConfigureServices, add the new policy, providing the custom DomainRestrictedRequirement requirement as a parameter to create the DomainRestricted policy. public void ConfigureServices(IServiceCollection services) { // ... other services ... services .AddAuthorization(options => { options.AddPolicy("DomainRestricted", policy => { policy.Requirements.Add(new DomainRestrictedRequirement()); }); }); } In the preceding example, the DomainRestrictedRequirement class is both an IAuthorizationRequirement and its own AuthorizationHandler for that requirement. It's acceptable to split these two components into separate classes to separate concerns. A benefit of the example's approach is there's no need to inject the AuthorizationHandler during startup, as the requirement and the handler are the same thing.
https://docs.microsoft.com/en-us/aspnet/core/signalr/authn-and-authz?view=aspnetcore-3.0
CC-MAIN-2021-43
en
refinedweb
Connect an ASP.NET application to Azure SQL Database There are various ways to connect to databases within the Azure SQL Database service from an application. For .NET apps, you can use the System.Data.SqlClient library. The web app for the university must fetch and display the data that you uploaded to your SQL database. In this unit, you will learn how to connect to a database from a web app and use the System.Data.SqlClient library to process data. System.Data.SqlClient library overview The System.Data.SqlClient library is a collection of types and methods that you can use to connect to a SQL Server database that's running on-premises or in the cloud on SQL Database. The library provides a generalized interface for retrieving and maintaining data. You can use the System.Data.SqlClient library to run SQL commands and transactional operations and to retrieve data. You can parameterize these operations to avoid problems that are associated with SQL-injection attacks. If an operation fails, the System.Data.SqlClient library provides error information through specialized exception and error classes. You handle these exceptions just like any other type of exception in a .NET application. The System.Data.SqlClient library is available in the System.Data.SqlClient NuGet package. Connect to a single database You use an SqlConnection object to create a database connection. You provide a connection string that specifies the name and location of the database, the credentials to use, and other connection-related parameters. A typical connection string to a single database looks like this: Server=tcp:myserver.database.windows.net,1433;Initial Catalog=mydatabase;Persist Security Info=False;User ID=myusername;Password=mypassword;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30; You can find the connection string for your single database on the Connection strings page for your database in the Azure portal. The following code example shows how to create an SqlConnection object: using System.Data.SqlClient; ... string connectionString = "Server=tcp:myserver.database.windows.net,..."; SqlConnection con = new SqlConnection(connectionString); The database connection isn't established until you open the connection. You typically open the connection immediately before you run an SQL command or query. con.Open(); Some databases only support a finite number of concurrent connections. So, after you finish running a command and retrieving any results, it's good practice to close the connection and release any resources that were held. con.Close(); Another common approach is to create the connection in a using statement. This strategy automatically closes the connection when the using statement completes. But you can also explicitly call the Close method. using (SqlConnection con = new SqlConnection(connectionString)) { // Open and Use the connection here con.Open(); ... } // Connection is now closed Define an SQL command or query Create an SqlCommand object to specify an SQL command or query to run. The following example shows an SQL DELETE statement that removes rows for a given customer from an Orders table. You can parameterize commands. This example uses a parameter that's named CustID for the CustomerID value. The line that sets the CommandType property of the SqlCommand object to Text indicates that the command is an SQL statement. You can also run a stored procedure rather than an SQL statement. In that case, you set the CommandType to StoredProcedure. SqlCommand deleteOrdersForCustomer = new SqlCommand("DELETE FROM Orders WHERE CustomerID = @custID", con); deleteOrdersForCustomer.CommandType = CommandType.Text; string customerID = <prompt the user for a customer to delete>; deleteOrdersForCustomer.Parameters.Add(new SqlParameter("custID", customerID)); The final parameter to the SqlCommand constructor in this example is the connection that's used to run the command. The next example shows a query that joins the Customers and Orders tables together to produce a list of customer names and their orders. SqlCommand queryCmd = new SqlCommand( @"SELECT c.FirstName, c.LastName, o.OrderID FROM Customers c JOIN Orders o ON c.CustomerID = o.CustomerID", con); queryCmd.CommandType = CommandType.Text; Run a command If your SqlCommand object references an SQL statement that doesn't return a result set, run the command by using the ExecuteNonQuery method. If the command succeeds, it returns the number of rows that are affected by the operation. The next example shows how to run the deleteOrdersForCustomer command that was shown earlier. int numDeleted = deleteOrdersForCustomer.ExecuteNonQuery(); If you expect the command to take a while to run, you can use the ExecuteNonQueryAsync method to perform the operation asynchronously. Execute a query and fetch data If your SqlCommand contains an SQL SELECT statement, you run it by using the ExecuteReader method. This method returns an SqlDataReader object that you can use to iterate through the results and process each row in turn. You retrieve the data from an SqlReader object by using the Read method. This method returns true if a row is found and false if there are no more rows left to read. After a row is read, the data for that row is available in the fields in the SqlReader object. Each field has the same name as the corresponding column in the original SELECT statement. However, the data in each field is retrieved as an untyped object, so you must convert it to the appropriate type before you can use it. The following code shows how to run the queryCmd command that we illustrated earlier to fetch the data one row at a time. SqlDataReader rdr = queryCmd.ExecuteReader(); // Read the data a row at a time while (rdr.Read()) { string firstName = rdr["FirstName"].ToString(); string lastName = rdr["LastName"].ToString(); int orderID = Convert.ToInt32(rdr["OrderID"]); // Process the data ... } Handle exceptions and errors Exceptions and errors can occur for various reasons when you're using a database. For example, you might try to access a table that no longer exists. You can catch SQL errors by using the SqlException type. An exception might be triggered by various events or problems in the database. An SqlException object has a property Errors that contains a collection of SqlError objects. These objects provide the details for each error. The following example shows how to catch an SqlException and process the errors that it contains. ... using (SqlConnection con = new SqlConnection(connectionString)) { SqlCommand command = new SqlCommand("DELETE FROM ...", con); try { con.Open(); command.ExecuteNonQuery(); } catch (SqlException ex) { for (int i = 0; i < ex.Errors.Count; i++) { Console.WriteLine($"Index # {i} Error: {ex.Errors[i].ToString()}"); } } } Need help? See our troubleshooting guide or provide specific feedback by reporting an issue.
https://docs.microsoft.com/en-us/learn/modules/develop-app-that-queries-azure-sql/4-connect-aspnet-to-azure-sql
CC-MAIN-2021-43
en
refinedweb
React Router is a popular and complete routing library for React.js that keeps UI in sync with the URL. It supports lazy code loading, dynamic route matching, and location transition handling, and was initially inspired by Ember's router. TODO: this section should also mention any large subjects within react-router, and link out to the related topics. Since the Documentation for react-router is new, you may need to create initial versions of those related topics. This getting started assumes you are working with create-react-app, or something equivalent using Babel and all the goodies out there. Also check out the great documentation right here. First, install react-router-dom: npm install react-router-dom or yarn add react-router-dom . Then, create a component that exists of a basic Navbar with two items and basic pages: import React from 'react' import { BrowserRouter, Route, Link } from 'react-router-dom' const Home = () => ( <div> <p>We are now on the HOME page</p> </div> ) const About = () => ( <div> <p>We are now on the ABOUT page</p> </div> ) const App = () => ( <BrowserRouter> <div> <ul> <li><Link to="/">Home</Link></li> <li><Link to="/about">About</Link></li> </ul> <hr/> <Route path="/" component={Home}/> <Route path="/about" component={About}/> </div> </BrowserRouter> ) export default App Let's go step by step through this code: import React from 'react': Make sure you import React import { BrowserRouter as Router, Route, Link } from 'react-router-dom'split up: BrowserRouteris the actual router itself. Make sure to wrap your component within the BrowserRoutercomponent. Routeis one particular route that can be navigated to Linkis a component that produces an <a href="...">tag, which you can use as a hyperlink. const Homeis a function that returns the homepage. const Aboutis a function that returns the About page. const App is the main component: <BrowserRouter> is the JSX component that wraps the components in which you want to use the <Route> component. ' is a single element to wrap all JSX inside the BrowserRouter` in. <ul> is the Navbar. It contains a link to Home and a link to About. <li><Link to="/">Home</Link></li> links to the homepage. You can see that, since the link refers to "/", an empty relative path renders the homepage. <li><Link to="/about">About</Link></li> links to the About page. <Route path="/" component={Home}/> describes which component should be rendered if the relative path is "/" . <Route path="/about" component={About}/> describes which component should be rendered if the relative path is "/about" . Lot to learn from here, but hopefully this explains the fundamentals, so from here you can continue your learnings. Once you've installed react and react-router , Its time to use both of them together. The syntax is very simple, you specify the url and the component you want to render when that url is opened <Route path="hello" component={ HelloComponent } /> This means when the url path is hello , Render the component HelloComponent FILENAME: app.js 'use strict'; import React from 'react'; import { render } from 'react-dom'; import { Router, browserHistory, Link } from 'react-router'; // These are just demo components which render different text. let DashboardPage = () => ( <div> <h1>Welcome User</h1> <p>This is your dashboard and I am an example of a stateless functional component.</p> <Link to="/settings">Goto Settings Page</Link> </div> ) let SettingsPage = () => ( <div> <h1>Manage your settings</h1> <p>display the settings form fields here...or whatever you want</p> <Link to="/">Back to Dashboard Page</Link> </div> ) let AuthLoginPage = () => ( <div> <h1>Login Now</h1> <div> <form action=""> <input type="text" name="email" placeholder="email address" /> <input type="password" name="password" placeholder="password" /> <button type="submit">Login</button> </form> </div> </div> ) let AuthLogoutPage = () => ( <div> <h1>You have been successfully logged out.</h1> <div style={{ marginTop: 30 }}> <Link to="/auth/login">Back to login page</Link> </div> </div> ) let ArticlePage = ({ params }) => ( <h3>Article {params.id}</h3> ) let PageNotFound = () => ( <div> <h1>The page you're looking for doesn't exist.</h1> </div> ) // Here we pass Router to the render function. render( ( <Router history={ browserHistory }> <Route path="/" component={ DashboardPage } /> <Route path="settings" component={ SettingsPage } /> <Route path="auth"> <IndexRoute component={ AuthLoginPage } /> <Route path="login" component={ AuthLoginPage } /> <Route path="logout" component={ AuthLogoutPage } /> </Route> <Route path="articles/:id" component={ ArticlePage } /> <Route path="*" component={ PageNotFound } /> </Router> ), document.body ); Route Parameters : Router path can be configured to take parameters so that we can read the parameter's value at the component. The path in <Route path="articles/:id" component={ ArticlePage } /> have a /:id . This id variable serves the purpose of path parameter and it can be accessed at the component ArticlePage by using {props.params.id} . If we visit then {props.params.id} at component ArticlePage will be resolved to 123. But visiting url , will not work because there is no id parameter. The route parameter can be made optional by writing it in between a pair of parenthesis: <Route path="articles(/:id)" component={ ArticlePage } /> If you want to use sub routes, then you can do <Route path="path" component={ PathComponent }> <Route path="subpath" component={ SubPathComponent } /> </Route> /pathis accessed, PathComponentwill be rendered /path/subpathis is accessed, PathComponentwill be rendered and SubPathComponentwill be passed to it as props.children You can use path="*" to catch all the routes that doesn't exist and render 404 page not found page. To install React Router, just run the npm command npm install --save react-router And you're done. This is literally all you have to do to install react router. Please Note : react-router is dependent on react , So make sure you install react as well. To set up: using an ES6 transpiler, like babel import { Router, Route, Link } from 'react-router' not using an ES6 transpiler var Router = require('react-router').Router var Route = require('react-router').Route var Link = require('react-router').Link A build is also available on npmcdn. You can include the script like this: <script src=""></script> The library will be available globally on window.ReactRouter .
https://riptutorial.com/react-router
CC-MAIN-2019-13
en
refinedweb
Google maps flash api animations työt Vastu consultant t... Alpha Format Flash animations for tv show Redraw character in Adobe Illustrator to be used in Adobe Animator, and create several animation segments/loops in Adobe Animator for use in Adobe Premier. ..: [kirjaudu nähdäksesi URL:n]¢erln=-73.97918701171875¢erlt=40 We have a website where we can no longer see the maps loading (see attached). We need to have an API key added. This is a multisite built on a custom theme. Hi ICAD Animations, I noticed your profile and would like to offer you my project. We can discuss any details over chat. I need someone capable of using the beat saber custom map editor to create custom maps for 6 songs. You will need to understand VR and preferably created custom maps before... Need lots of animations to be done by a junior resource. Approximately 75 per month for 8 months. Each video to be around 15... My Business is not appearing on Google maps for my main keyword. Like to find someone who can help me fix this problem. Looking for a 1-2 minute 3D animation for my music album release, as a trailer. Here is an awesome example [kirjaudu nähdäksesi URL:n] I can help to provide you some 3D characters to import in the movie such as Dragon, Warriors, etc. The music genre of the album is Epic/Cinematic. The Theme is ancient battles, warriors, myths... .. be) [kirjaudu nähdäksesi URL:n]) ? [kirjaudu nähdäksesi URL:n] import google maps point a to b in a web app on react js I want someone to prepare the mind maps of my:. Cartoon animations. $50 budget. Delivery within 1-2 days. I wan to put a marker on a google map and be able to write in it .. Hi ICAD Animations, I noticed your profile and would like to offer you my project. We can discuss any details over chat.: [kirjaudu nähdäksesi URL:...
https://www.fi.freelancer.com/job-search/google-maps-flash-api-animations/
CC-MAIN-2019-13
en
refinedweb
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. Is there a way to change to the order sub-tasks when on the Main Screen viewing the Parent Issue? Sometimes I have to insert sub-tasks and it would be helpful to be able to control the order. short of that , can anyone tell me how the sort is currently ordered? Hi Jeff, Right now there is no way to reorganize the subtasks. We do have a couple open improvement requests, here and here, but they have been open for a quite a while. It looks like currently the subtasks are first sorted by what the subtask does (relates to, blocks, etc), in alphabetical order. Next they are sorted by what type of issue they are (bug, task, etc), and finally once more by whether they are open or closed. I hope this helps! Cheers, Miranda You can create a bookmarklet with the code below. Once created, you can reorder tasks by drag and drop. This re-order sticks across all users. javascript:(function(){jQuery(function(){jQuery("#issuetable>tbody").sortable({start:function() {old_order=getorder();} ,stop:function(event,ui){new_order=getorder();for(i=0;i<=new_order.length;i++) {id=ui.item[0].id;if(id==old_order[i])oldpos=i;if(id==new_order[i])newpos=i;} jQuery.ajax( {url:'/jira/secure/MoveIssueLink.jspa?id='+jQuery('#key-val').attr('rel')+'&currentSubTaskSequence='+oldpos+'&subTaskSequence='+newpos,} );}});function getorder(){order=[];jQuery('#issuetable>tbody').children('tr').each(function(idx,elm) {if(elm.id)order.push(elm.id)} );return order;}});})() Hi! I'm new to javascript, so I need a dumb-down version of step: "Create a bookmark with this code". How do I run the code and with what program? For example, in Chrome you would go to the menu, select Bookmarks, then Bookmark Manager. Then select the folder you would like to create the bookmark in and right click it and choose Add Page. Give it a name and use the javascript code as the web page URL. Afterwards, log in to Jira, click on a task that has subtasks, and then just click the bookmark so that it can work its magic. In order to make it remember the ordering, however, I had to click on one of the "Move" arrows. See my comment below. Sadly this doesn't appear to work in Firefox, or at least it doesn't in the latest version of FireFox. When I click the bookmark there is no change in functionality, at all. :-( @Tanner Worthamthis is a great idea and I really, really want to use it. Do you know of any reason it won't work in Firefox, or - hoping desperately - do you have a version that'll work in Firefox? I use this in the office now of a chrome extension. For me, the Java Script and the Extension did not work (drag&drop yes, but saving the new positions did not work) so I just changed the URL behind an arrow button manually by copying the link of the arrow button into the browser URL field and changed the "subTaskSequence=" part of the URL to the new position (it´s actually the new position - 1). Not so nice but still time saving... I tried the JavaScript Bookmark solution in IE 11 and Chrome 49.0 (on Chrome the Extension as well) on a Company Notebook which might has some restrictions...The Company uses JIRA v7.0.10 I have used the solution posted by Rob Calcroft did not need to enable Dev mode in Chrome extensions just downloaded the jave file from the link he posted and then dragged it over the extensions page and it was added fine. Thanks. I can't believe this still works! In order to make the changes stick I had to click on any of the subtask's "Move" arrows (pretty much clicked down on the last one or up on the first one). Much faster than going one by one clicking on those move arrows. I found a solution that I initially posted on I managed to use the EXOCET plugin to create a new "data-panel" in order to do so : the JQL used to show the subtasks of the current task beeing : issuetype in subTaskIssueTypes() ORDER BY "myCF" You can use ScriptRunner to change a sub-task sequence. In the example below a sub-task will be moved to the end of the list: import com.atlassian.jira.component.ComponentAccessor import com.atlassian.jira.issue.link.IssueLink import com.atlassian.jira.issue.Issue def subTaskManager = ComponentAccessor.getSubTaskManager() def issueManager = ComponentAccessor.getIssueManager() Issue parentIssue = issueManager.getIssueObject('TEST-100') Issue subTaskIssue = issueManager.getIssueObject('TEST-200') List<IssueLink> subTaskIssueLinks = subTaskManager.getSubTaskIssueLinks(parentIssue.id) Long currentSeq = subTaskIssueLinks.findIndexOf {IssueLink link -> link.destinationId == subTaskIssue.id} Long newSeq = subTaskIssueLinks.size() - 1 // the end of the sub-taks list subTaskManager.moveSubTask(parentIssue, currentSeq, newSeq).
https://community.atlassian.com/t5/Jira-questions/Order-of-Subtasks-within-a-Parent-Task/qaq-p/366231
CC-MAIN-2019-13
en
refinedweb
Deploying MicroProfile apps on Microsoft Azure using the Azure Open Service Broker At the recently concluded Microsoft Ignite 2018 conference in Orlando, I had the honor of presenting to a crowd of Java developers and Azure professionals eager to learn how to put their Java skills to work building next-gen apps on Azure. Of course, that meant showcasing the technology coming out of the popular MicroProfile community, in which Red Hat plays a big part (and makes a fully supported, productized MicroProfile implementation through Thorntail, part of Red Hat OpenShift Application Runtimes). We did a demo too, which is the main topic of this blog post, showing how easy it is to link your Java MicroProfile apps to Azure services through the Open Service Broker for Azure (the open source, Open Service Broker-compatible API server that provisions managed services in the Microsoft Azure public cloud) and OpenShift’s Service Catalog. Here’s how to reproduce the demo. The Demo I was joined on stage by Cesar Saavedra from Red Hat (Technical Marketing for MicroProfile and Red Hat) and Brian Benz (Dev Advocate for Microsoft), and we introduced the MicroProfile origins, goals, community makeup, roadmap, and a few other items. Then it was time for the demo. You can watch the video of the session. The demo application should look familiar to you: This game is the classic Minesweeper, first introduced to the Windows world in 1992 with Windows 3.1, and much appreciated by the various graying heads in the audience (shout out to Nick Arocho for his awesome JavaScript implementation of the UI!). To this game, I added a simple scoreboard backed by a database, and it was our job in the demo to hook this application up to Azure’s Cosmos DB service using the MicroProfile Config API, as well integrate with OpenShift’s health probes using the simple MicroProfile Health APIs. The section below describes how to reproduce the demo. Re-creating the demo If you just want the source code, here it is, along with a solution branch that adds the necessary changes to link the app to Cosmos DB and OpenShift. But if you want to play along, follow these steps: Step 1: Get an Azure account Our first job is to get an Azure account and some credits, all of which are free. Easy, right? Since we’re using OpenShift, this could easily be deployed to the cloud of your choice, but Azure is super easy to use, and OpenShift is available in the Azure Marketplace for production-ready multi-node OpenShift deployments. There’s also a nice Reference Architecture for all the architects out there. Step 2: Deploy OpenShift Since this is a demo, we can take shortcuts, right? In this case, we don’t need the full power of a fully armed and operational and productized multi-node OpenShift deployment, so I used a nice “All in One” Azure deployment that Cesar created. To deploy this, simply click here (or click the Deploy to Azure button below, which takes you to the Azure Portal along with a payload that will deploy a single-node OpenShift instance to an Azure Resource Group of your choosing). You’ll need to fill out some information as part of the install. You’ll use these values later, so don’t forget them: - Resource Group: Create a new Resource Group to house all of the components (VMs, NICs, storage, etc). Resource Groups are the way Azure groups related resources together. If the named group does not exist, it’ll be created for you. - Location: Pick one close to you to deploy. - Admin Username / Admin Password: These will be the username and password you’ll use to log in to the OpenShift Web Console. - Ssh Key Data: You’ll need to generate an SSH keypair if you want to use sshto access the resulting virtual machines. Paste in the contents of the public key file once you’ve created it. - Vm size: Specify a VM size. A default value is provided. If another size or type of VM is required, ensure that the Location contains that instance type. Agree to the terms and conditions, and click the Purchase button. Then get a cup of coffee; it’ll take around 15 minutes to complete and you’ll start burning through your credits. (For the default machine type, you’ll eat US$20 through US$30 per week!) Click the Deployment Progress notification to watch the progress. Once it is done, click on the Outputs tab to reveal the URL to your new OpenShift console. Bookmark it, because you’ll need it later. Also, don’t forget the username/password you provided; you’ll need those later too. If you don’t get any outputs, you can always discover the public DNS hostname of your new OpenShift deployment by clicking on the Virtual Machines link at the far left of the Azure Portal, then click on the VM name (same name as the Resource Group you specified), then look for the DNS Name, and open a new browser tab and navigate to https://[THE_DNS_NAME]:8443. Step 2a: Give yourself cluster-admin access Although a user in OpenShift was created using the credentials you supplied, this user does not have the cluster-admin rights necessary for installing the service broker components. To give ourselves this ability, we need to use ssh to access the machine and run a command (you did save the SSH public and private key created earlier, right?). First, log in to the VM running OpenShift: ssh -i [PRIVATE_KEY_PATH] [ADMIN_USERNAME]@[VM HOSTNAME] Where: [PRIVATE_KEY_PATH]is the path to the file containing the private key that corresponds to the public key you used when setting up OpenShift on Azure. [ADMIN_USERNAME]is the name of the OpenShift user you specified. [VM HOSTNAME]is the DNS hostname of the VM running on Azure. Once logged in via ssh, run this command: sudo oc adm policy add-cluster-role-to-user cluster-admin [ADMIN_USERNAME] [ADMIN_USERNAME] is the same as what you used in the ssh command. This will give you the needed rights to install the service broker in the next steps. You can exit the ssh session now. Step 3: Deploy the Azure Service Broker Out of the box, this all-in-one deployment of OpenShift includes support for the OpenShift Service Catalog (our implementation of the Open Service Broker API), so all that is left to do is install the Open Service Broker for Azure to expose Azure services in the OpenShift Service Catalog. This can most easily by installed using Helm (here are installation instructions), and with Helm you can also choose which version of the broker. Microsoft has temporarily taken out experimental services from the GA version of the broker, and is slowly adding them back in, so you’ll need to specify a version from earlier this year that includes these experimental services like Cosmos DB’s MongoDB API, which the demo uses. Let’s first log in as our admin user to our newly deployed OpenShift deployment using the oc command (if you don’t have this command, install the client tools from here): oc login [URL] -u [ADMIN_USERNAME] -p [PASSWORD] Here, you need to specify the URL (including port number 8443) to your new OpenShift instance, as well as the username/password you used earlier when setting it up on Azure. Once logged in, let’s deploy the broker with Helm (note that this uses Helm 2.x and it’s Tiller-full implementation). oc create -f helm init --service-account tiller helm repo add azure With Helm set up and the Azure Helm Charts added, it’s time to install the Open Service Broker for Azure, but you’ll need four special values that will associate the broker with your personal Azure account through what’s called a service principal. Service principals are entities that have an identity and permissions to create and edit resources on an application’s behalf. You’ll need to first create a service principal following the instructions here, and while creating it and assigning it permissions to your new Resource Group, collect the following values: - AZURE_SUBSCRIPTION_ID: This is associated with your Azure account and can be found on the Azure Portal (after logging in) by clicking on Resource Groups and then on the name of the resource group created when you deployed OpenShift using the All-In-One deployment. Example: 6ac2eb01-3342-4727-9dfa-48f54bba9726 - AZURE_TENANT_ID: When creating the service principal, you’ll see a reference to a Tenant ID, also called a Directory ID. It will also look something like a subscription ID, but they are different! It is associated with the Active Directory instance you have in your account. - AZURE_CLIENT_ID: The ID of the client (application) you create when creating a service principal, sometimes called an application ID, also similar in structure to the above IDs but different! - AZURE_CLIENT_SECRET: The secret value for the client (application) you create when creating a service principal. This will be a long-ish base64-encoded string. Wow, that was fun. With these values, we can now issue the magic helm command to do the tasks and install the Open Service Broker for Azure: helm install azure/open-service-broker-azure --name osba --namespace osba \ --version 0.11.0 \ --set azure.subscriptionId=[AZURE_SUBSCRIPTION_ID] \ --set azure.tenantId=[AZURE_TENANT_ID] \ --set azure.clientId=[AZURE_CLIENT_ID] \ --set azure.clientSecret=[AZURE_CLIENT_SECRET] \ --set modules.minStability=EXPERIMENTAL Unfortunately, this uses an ancient version of Redis, so let’s use a more recent version and, to simplify things, let’s remove the need for persistent volumes (and never use this in production!): oc set volume -n osba deployment/osba-redis --remove --name=redis-data oc patch -n osba deployment/osba-redis -p '{"spec": {"template": {"spec": {"containers":[{"name": "osba-redis", "image": "bitnami/redis:4.0.9"}]}}}}' This will install the Open Service Broker for Azure in the osba Kubernetes namespace and enable the experimental features (like Cosmos DB). It may take some time to pull the images for the broker and for Redis (the default database it uses), and the broker pod might enter a crash loop while it tries to access Redis, but eventually, it should come up. If you screw it up and get errors, you can start over by deleting the osba namespace and trying again (using helm del --purge osba; oc delete project osba and waiting a while until it’s really gone and does not appear in oc get projects output). Run this command to verify everything is working: oc get pods -n osba You should see the following (look for the Running status for both): NAME READY STATUS RESTARTS AGE osba-open-service-broker-azure-846688c998-p86bv 1/1 Running 4 1h osba-redis-5c7f85fcdf-s9xqk 1/1 Running 0 1h Now that it’s installed, browse to the OpenShift Web Console (the URL can be discovered by running oc status). Log in using the same credentials as before, and you should see a number of services and their icons for Azure services (it might take a minute or two for OpenShift to poll the broker and discover all it has to offer): Type azure into the search box at the top to see a list. Woo! Easy, peasy. Step 4: Deploy Cosmos DB Before we deploy the app, let’s deploy the database we’ll use. (In the demo, I started without a database and did some live coding to deploy the database and change the app to use it. For this blog post, I’ll assume you just want to run the final code.) To deploy Cosmos DB, we’ll use the OpenShift Web Console. On the main screen, double-click on the Azure Cosmos DB (MongoDB API) icon. This will walk you through a couple of screens. Click Next on the first screen. On the second screen, elect to deploy Cosmos DB to a new project, and name the project microsweeper. Below that, you can keep all the default settings, except for the following: - Set defaultConsistencyLevel to Session. - Type 0.0.0.0/0in the first allowedIPRanges box, and click the Add button. Then click the X button next to the second allowedIPRanges box (don’t ask why). - Enter a valid Azure region identifier in the location box, for example, eastus. - In the resourceGroup box, enter the name of the ResourceGroup you previously created in step 2. - Click Next. On the final screen, choose the Create a secret in microsweeper to be used later option. This will later be referenced from the app. Finally, click the Create button and then OpenShift will do its thing, which will take about 5–10 minutes. Click Continue to the project overview to see the status of the Azure service. During this time, if you visit the Azure Portal in a separate tab, you’ll see several resources being created (most notably a Cosmos DB instance). Once it’s all done, the Provisioned Services section of the OpenShift Console’s Project Overview screen will show that Cosmos DB is ready for use, including a binding that we’ll use later. Step 5: Add MicroProfile health checks and configuration The app is using two of the many MicroProfile APIs: HealthCheck and Config. MicroProfile HealthCheck To the RestApplication class, we’ve added a simple @Health annotation and a new method: @Health @ApplicationPath("/api") public class RestApplication extends Application implements HealthCheck { @Override public HealthCheckResponse call() { return HealthCheckResponse.named("successful-check").up().build(); } } Simple, right? You can add as many of these as you want, and the health check can do whatever it needs to do and be as complex as you want (but not too complex!). MicroProfile Config More interesting I think is the addition of the Cosmos DB configuration. Since we exposed Cosmos DB through environment variables, we’re able to automatically inject their values using MicroProfile in the ScoreboardServiceCosmos class: @Inject @ConfigProperty(name = "SCORESDB_uri") private String uri; ... mongoClient = new MongoClient(new MongoClientURI(uri)); The @Inject @ConfigProperty MicroProfile annotations direct Thorntail to look for and dynamically inject values for the uri field, based on the specified name. The MicroProfile Config API specifies a well-defined precedence table to find these, so there are many ways to expose the values to your applications. We will use an environment variable in this demo, but you could also use properties files, ConfigMaps, etc. Step 6: Deploy the App The sample app uses Thorntail, Red Hat’s fully supported MicroProfile implementation. This is a Java framework, so you’ll first need to install Red Hat’s OpenJDK to OpenShift so it can be used to build and run the app: oc create -n openshift -f This uses a previous version of the image stream that references Red Hat Container Catalog. Next, let’s deploy the app to our newly created project: oc project microsweeper oc new-app 'redhat-openjdk18-openshift:1.3~' \ -e GC_MAX_METASPACE_SIZE=500 \ -e ENVIRONMENT=DEVELOPMENT oc expose svc/microsweeper-demo This will create a new S2I-based build for the app, build it with Maven and OpenJDK, and deploy the app. Initially, the app will be using an internal database (H2). It may take a few minutes to deploy. When it’s done, you should see the following output: % oc get pods -n microsweeper NAME READY STATUS RESTARTS AGE microsweeper-demo-1-build 0/1 Completed 0 1h microsweeper-demo-3-x8bdg 1/1 Running 0 1h You can see the completed build pod that built the app and the running app pod. Once deployed, you can click on the Route URL in the OpenShift Web Console next to the microsweeper-demo service and play the game. You can also get the URL with this: echo(oc get route microsweeper-demo -o jsonpath='{.spec.host}{"\n"}' -n microsweeper) Note that it is not yet using MicroProfile or Cosmos DB yet! In the game, enter your name (or use the default), and play the game a few times, ensuring that the scoreboard is updated when you win or lose. To reset the scoreboard, click the X in the upper right. To start the game again, click the smiley or sad face. Good times, right? Let’s hook it up to Cosmos DB! Step 7: Bind Cosmos DB to the app In a previous step, you deployed the Cosmos DB service to your project, so it is now said to be “provisioned” and bound to the project. You could at this point hard-code the app logic to use the provisioned service’s URI, username, password, etc., but that’s a terrible long-term approach. It’s better to expose the service’s credentials dynamically using OpenShift and then change the app to use the values through that dynamic mechanism. There are two easy ways to expose the service’s configuration: through Kubernetes Secrets (where the credentials are exposed through an ordinary file on the filesystem securely transmitted and mounted via a volume in the pod) or through environment variables. I used environment variables because it’s easy, but Thorntail/MicroProfile can use either. First, click on View Secret to view the contents of the secrets we need to bind to Cosmos DB, and then click on Add to Application. This will allow you to choose for which application to add the environment variables to the DeploymentConfig for the application. Select the microsweeper-demo application in the drop-down, and then select the Environment variables option and specify a prefix of SCORESDB_. (Don’t forget the underscore!) This will alter the environment of the application once it is re-deployed to add the new environment variables, each of which will start with SCORESDB_ (for example, the URI to the Cosmos DB will be the value of the SCORESDB_uri environment variable). Click Save. Now we’re ready to switch to Cosmos DB. To do this switch, simply change the value of the ENVIRONMENT environment variable to switch from the H2 database to Cosmos DB within the app: oc set env dc/microsweeper-demo ENVIRONMENT=PRODUCTION --overwrite At this point, the app will be re-deployed and start using Cosmos DB! Play the game a few more times, and then head over to the Azure Portal to verify data is being correctly persisted. Navigate to Azure Cosmos DB in the portal, and then click on the single long string that represents the ID of the database. You should see a single collection called ScoresCollection: Click on ScoresCollection and then on Data Explorer. This tool lets you see the data records (documents) in the database. Using the small “…” menu next to the name of the collection, click on New Query: Type the simplest of queries into the Query box: {}. Then click Execute Query to see the results. Play the game a few more times, and re-issue the query to confirm data is being persisted properly. Well done! Next Steps In this demo, we used two of the MicroProfile APIs that are instrumental in developing Java microservices ( HealthCheck and Config) to link the MicroProfile/Thorntail application to Azure services through the Open Service Broker API. There are many other MicroProfile APIs you can use, and I encourage you to check out the full specifications and the recent release (MicroProfile 2.1). MicroProfile is awesome and is a great way to build Java microservices using truly open, community-driven innovation. Also see these posts on modern application development, microservices, containers, and Java. Happy coding!.
https://developers.redhat.com/blog/2018/10/17/microprofile-apps-azure-open-service-broker/
CC-MAIN-2019-13
en
refinedweb
WLAN connectivity problem I am using Pycom WiPy v2.0 board. I have upgraded its firmware and downloaded pymakr plugin to run and debug the code, but i couldn't connect it with the home wifi and everytime i need to reset my board to get the wifi network of the pycom board. And i also have Pycom WiPy v1.3 board for which i couldn't find the firmware upgrade link. Please help me to find the solution. Thank you. facing exactly the same issue.Even changed the router and firewall but didn't work.do i have to install any kind of library? please revert asap as i have to finish my project by the end of this week. @SenaPR Hello SenaPR, What I'm missing is in the auth= parameter of the connect statement the WiFi password of your router. I guess you just dropped that for showing th example.. And you cannot give your own device the IP address of the router, which is 192.168.1.1. So you must use something different, which is not in use at the moment, like 192.168.1.100. You have to ask the network administrator which addresses you may use. Again, I see a problem with indentation. The last idle() statement is not indented properly and should raise an sytax error. Are you sure that this boot.py is really loaded to your device? It has to, because is is executed before pymakr steps in. Otherwise, for testing you can give is a different name, like connect.py, and just put a statement like: import connect into boot.py. I do not trust the atom plugin. Still: if your PC is connected to the office router, open the program cmd.exe, which gives you the good ol' MSDOS command prompt. At the prompt, enter the command: ipconfig That will tell you the netmask, the IP address of the router, and the IP-adress of the DNS-Service. @robert-hh i have attached a file of the WLAN program, using the code i am trying to connect pycom board wifi with the office router. Everything has been developed under your guidance but still i am unable to connect it with the office wifi. Please go through the program and suggest me if i went somewhere wrong. Thank you. - guyadams99 last edited by @SenaPR - also, most of these IP addresses are invalid - an IPv4 address can't have an octet above 255 and you have 478 and even 1090. I would expect the libraries to reject these, but in any case you will never get a working network without valid IP addresses. @SenaPR No, for the language itself and the standard modules as documented you do not need any special libraries. Especially the Bluetooth and WiFi modules are embedded in the firmware. For you special application, you might need some libraries and scripts, but that's another story. @robert-hh are there any libraries which we need to dump on our pycom board after upgrading its firware ?? Thank you. @SenaPR Line 7 must be indented by at least one space. It looks as if not. and that is a syntax error. while not wlan.isconnected(): print(".", end="") Also, the IP you give to your WiPy is in a different net than the router. So if 192.168.478.198 is the IP of your gateway, the IP of your WiPy should be 192.168.478.xxx. But you could remove (or comment out) line 8 which sets the IP parameters and just see what you router assigns to the WiPy. @robert-hh I have followed the same thing, i can connect with my pycom board wifi but i couldnot connect pycom board wifi with the office wifi. i have attached file for your reference. Please go through it and tell me if any changes i need to do to make pycom board wifi to the office wifi. Thank you for your cooperation. @SenaPR That's because the statement is not executed. The question is, what you want to achieve: In AP mode, the WiPy creates it's own WiFi network, and so you have to specify the SSID and password for that network. By default, this is something like "wipy_xxx". You must seen this since you connected ti your wipy. You can reuse that SSID and specify: wlan = WLAN(mode=WLAN.STA_AP, ssid = "your ssid", auth=(WLAN.WPA2, "")) In STA mode your WiPy would connect to your router. After the wlan=... statement, you would add: wlan.connect(ssid='mynetwork', auth=(WLAN.WPA2, 'my_network_key')) while not wlan.isconnected(): print(".", end="") wlan.ifconfig(config=('my.new.static.ip', '255.255.255.0', 'gateway_ip', 'gateway_ip')) print("\n", wlan.ifconfig()) Which connects to your router and sets a static ip, which is more convenient if you want to connect with your computer both to your Internet and your WiPy. The values for my.new.static.ip and gateway.ip depend on the setting of your router. Typical setting for the gateway_ip are 192.168.0.1. But you can tell by issuing the command /sbin/route -nunder linux/mac or ipconfigunder Windows in the command shell. @robert-hh i have tried the " wlan = WLAN(mode=WLAN.STA_AP)" function. Wifi network of pycom board has not disconnected but its showing error "AP SSID not given" on the console terminal. Below is attached file, please go through it do the needful. Thank you. @SenaPR said in WLAN connectivity problem: wlan = WLAN(mode=WLAN.STA) Once you execute this statement, any connection to the AP is lost. If you want to keep the AP connection in addition to the station mode, run wlan = WLAN(mode=WLAN.STA_AP) @robert-hh from network import WLAN wlan = WLAN(mode=WLAN.STA), when we execute the following functions, the board hangs up, wifi of pycom wipy board get disconnected and we have to do safe boot again to make it working. Please help us on this. Thank you. @SenaPR boot.py and main.py are ordinary files in the /flash directory. So you can edit and change the freely. Normally, the WiPy creates an WiFi access point at 192.168.4.1, with password. If you can connect there, you can use ftp to transfer files. If you are locked out from WiFi, and pymakr does not work either,. you can use other tools to access the WiPy file systems via USB, like adafruit's "AMPY" (), or Dave Hylands "rshell" (following instruction at link:). @robert-hh can we access default boot.py or main.py of Pycom WiPy v2.0 board ? If yes, then please can you suggest me on this. Thank you. @SenaPR WiPy 2does not reconnect automatically. You have to add the respective statements into boot.py or main.py, like described here under the heading "Connecting to a router". After being connected, you may also assign a fixed IP. For WiPy 1.3 the firmware is here: There's also a short instruction on how to update.
https://forum.pycom.io/topic/1797/wlan-connectivity-problem
CC-MAIN-2019-13
en
refinedweb
. A simple histogram can be a great first step in understanding a dataset. Earlier, we saw a preview of Matplotlib's histogram function (see Comparisons, Masks, and Boolean Logic), which creates a basic histogram in one line, once the normal boiler-plate imports are done: %matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.style.use('seaborn-white') data = np.random.randn(1000) plt.hist(data); The hist() function has many options to tune both the calculation and the display; here's an example of a more customized histogram: plt.hist(data, bins=30, normed=True, alpha=0.5, histtype='stepfilled', color='steelblue', edgecolor='none'); The plt.hist docstring has more information on other customization options available. I find this combination of histtype='stepfilled' along with some transparency alpha to be very useful when comparing histograms of several distributions: x1 = np.random.normal(0, 0.8, 1000) x2 = np.random.normal(-2, 1, 1000) x3 = np.random.normal(3, 2, 1000) kwargs = dict(histtype='stepfilled', alpha=0.3, normed=True, bins=40) plt.hist(x1, **kwargs) plt.hist(x2, **kwargs) plt.hist(x3, **kwargs); If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the np.histogram() function is available: counts, bin_edges = np.histogram(data, bins=5) print(counts) [ 12 190 468 301 29] Just as we create histograms in one dimension by dividing the number-line into bins, we can also create histograms in two-dimensions by dividing points among two-dimensional bins. We'll take a brief look at several ways to do this here. We'll start by defining some data—an x and y array drawn from a multivariate Gaussian distribution: mean = [0, 0] cov = [[1, 1], [1, 2]] x, y = np.random.multivariate_normal(mean, cov, 10000).T plt.hist2d(x, y, bins=30, cmap='Blues') cb = plt.colorbar() cb.set_label('counts in bin') Just as with plt.hist, plt.hist2d has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring. Further, just as plt.hist has a counterpart in np.histogram, plt.hist2d has a counterpart in np.histogram2d, which can be used as follows: counts, xedges, yedges = np.histogram2d(x, y, bins=30) For the generalization of this histogram binning in dimensions higher than two, see the np.histogramdd function. plt.hexbin: Hexagonal binnings¶ The two-dimensional histogram creates a tesselation of squares across the axes. Another natural shape for such a tesselation is the regular hexagon. For this purpose, Matplotlib provides the plt.hexbin routine, which will represents a two-dimensional dataset binned within a grid of hexagons: plt.hexbin(x, y, gridsize=30, cmap='Blues') cb = plt.colorbar(label='count in bin') plt.hexbin has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.). Another common method of evaluating densities in multiple dimensions is kernel density estimation (KDE). This will be discussed more fully in In-Depth: Kernel Density Estimation, but for now we'll simply mention that KDE can be thought of as a way to "smear out" the points in space and add up the result to obtain a smooth function. One extremely quick and simple KDE implementation exists in the scipy.stats package. Here is a quick example of using the KDE on this data: from scipy.stats import gaussian_kde # fit an array of size [Ndim, Nsamples] data = np.vstack([x, y]) kde = gaussian_kde(data) # evaluate on a regular grid xgrid = np.linspace(-3.5, 3.5, 40) ygrid = np.linspace(-6, 6, 40) Xgrid, Ygrid = np.meshgrid(xgrid, ygrid) Z = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel()])) # Plot the result as an image plt.imshow(Z.reshape(Xgrid.shape), origin='lower', aspect='auto', extent=[-3.5, 3.5, -6, 6], cmap='Blues') cb = plt.colorbar() cb.set_label("density") KDE has a smoothing length that effectively slides the knob between detail and smoothness (one example of the ubiquitous bias–variance trade-off). The literature on choosing an appropriate smoothing length is vast: gaussian_kde uses a rule-of-thumb to attempt to find a nearly optimal smoothing length for the input data. Other KDE implementations are available within the SciPy ecosystem, each with its own strengths and weaknesses; see, for example, sklearn.neighbors.KernelDensity and statsmodels.nonparametric.kernel_density.KDEMultivariate. For visualizations based on KDE, using Matplotlib tends to be overly verbose. The Seaborn library, discussed in Visualization With Seaborn, provides a much more terse API for creating KDE-based visualizations.
https://nbviewer.jupyter.org/github/donnemartin/data-science-ipython-notebooks/blob/master/matplotlib/04.05-Histograms-and-Binnings.ipynb
CC-MAIN-2019-13
en
refinedweb
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. Sorry still new to behaviours so please forgive this ignorance which hopefully might be an easy question. We are currently on JIRA 6.3.1 using ScriptRunner 4.1.3.9. Specifically the issue is I have a behaviour created upon a custom field. For that specific field I am trying to detect when there is a change.for this field and the "Fix Version/s" is not set throw an error that prevents the user from updating. Currently I have the below where the specific field is customfield_10038. Is there something obvious that I am doing wrong or misunderstanding? def field = getFieldById(getFieldChanged()) if (field == "customfield_10038" && fixVersion.getValue() == "") { field.setError("You aren't allowed to checkin.") } else { field.clearError() } Thanks in advance. Though hadn't tried hands-on, I believe this could help you. def cusField = getFieldById(getFieldChanged()) def fix = getFieldByName("Fix Version/s") if ((fix.getValue() == null) || (fix.getValue() == "")) { cusField.setError("You aren't allowed to checkin.") } else { cusField.clearError() } Check the minor mistakes keenly. Hi Mahesh, Thanks for that response. I had initially tried this but had an issue where even though this custom field didn't change when I go to hit the edit button for the bug, it perceives that this field has changed and as such runs this check. This is problematic in that you may want to edit another field in that bug but cannot now. Any ideas for this? Thanks, Randy So, you could do a check to see whether the custom field has actually changed, as well as checking whether Fix Version is empty. def cusField = getFieldById(getFieldChanged()) def fix = getFieldByName("Fix Version/s") def fixVersionEmpty = (fix.getValue() == null) || (fix.getValue() == "") def customFieldChanged = cusField.getValue() != underlyingIssue.getCustomFieldValue(cusField) if (fixVersionEmpty && !customFieldChanged) { cusField.setError("You aren't allowed to checkin.") } else { cusField.clearEr.
https://community.atlassian.com/t5/Jira-Core-questions/With-the-behaviour-plugin-having-an-issue-detecting-a-change-for/qaq-p/286605
CC-MAIN-2019-13
en
refinedweb
The Practical Client If you want to ensure that the right code is loaded at the right time (and only loaded when you need it), you can use TypeScript code to organize your code into modules. As a side benefit, managing your script tags will get considerably easier. When combined with a module manager, TypeScript modules are great for two reasons. First, modules eliminate the need for you to make sure that you stack your script tags in the right order. Instead, within each script file you specify which other files your code depends on and the module manager establishes the right dependency hierarchy -- in your page, all you need is a single script tag for the code file that kicks everything off. Second, the module manager loads your code files asynchronously as they're needed. If, as a user works through your application, some code file is never needed, then it will never get hauled down to the browser. Even if your user does use all the code files, deferring loading script files until you need them can significantly improve the loading time for your pages. I discussed modules in my second Practical TypeScript column in May 2013 (except, back then, the column was called Practical JavaScript and I was using TypeScript 0.8.3). Much has changed in TypeScript since then and now's a good time to revisit the topic because some of those changes are part of the latest version of the language, TypeScript 2.1. There has been one major change since the first version of TypeScript: What were called "internal modules" in the earliest versions of TypeScript are now called namespaces. TypeScript namespaces act like .NET Framework namespaces and should be used to organize related components (classes, enums and so on) to help developers discover them. This article uses "modules" in the current sense of the term: as a way of managing and loading script resources on an as-needed basis. Setting Up For this column I'm using TypeScript 2.1 in Visual Studio 2015. I used NuGet to add RequireJS to my project as my module loader. You don't, however, have to use RequireJS. TypeScript syntax for working with modules is "implementation-neutral": Whatever code you write in TypeScript is translated by the compiler into whatever function calls are required by your module loader. To tell the TypeScript compiler to generate RequireJS-compatible JavaScript code, I added this line to my tsconfig.json file which specifies that I'm using the UMD format which is compatible with both AMD (the original module loader, implemented in RequireJS) and CommonJS (which is popular with Node developers): "module": "umd" The code in this article works equally well with module set to amd. Angular developers using SystemJS should set module to system (I'm told). Picking RequireJS does mandate the format of the script tag that I use to start my application. First, of course, I have to add a script tag to my page to load the RequireJS library. However, to cause RequireJS to load my application's initial script file, I must add a data-main attribute to the script tag that references that initial script file. The path to my script file is usually a relative filepath (relative to the location of the RequireJS script file). For this case study, I put all my files in the Script folder. However, my RequireJS script file is directly in my application's Scripts folder, while my initial script file (CustomerManagement.ts) is in a subfolder called Application. To invoke RequireJS and point it at my initial script file, I add this tag to my page's <head> element: <script src="Scripts/require.js" data-</script> Notice that, no matter how many files I organize my TypeScript code into, I only need a single script tag on each page. When RequireJS loads CustomerManagement.js, it will check to see what files CustomerManagement.js requires and load them (and so on, through the dependency hierarchy). Creating an Export File To create a file of useful code that I'd like to be available for use in some other file, all I need to do is create a TypeScript file and add my declarations to it. I add the keyword export to each declaration that I want to use outside the file. Listing 1 shows an example of a CustomerEntities file that exports an enum, a constant, an interface, a base class, a derived class and a function. To make this case study more interesting I put this file in another Scripts subfolder, which I called Utilities.; } function MakeNewPremiumCustomer(custId: string): PremiumCustomer { let pcust: PremiumCustomer; pcust = new PremiumCustomer(); pcust.Id = custId; pcust.CreditStatus = defaultCreditStatus; pcust.CreditLimit = 100000; return pcust; } I could have enclosed my exported components in a module, like this: export module Customers { export enum CreditStatusTypes { //...rest of module... However, that module name establishes a qualifier that I'd have to use when referring to any component in the module. For example, in any code that uses CreditStatusTypes enum I'd have to refer to it as Customers.CreditStatusType. That may not seem like a bad thing -- it might even sound helpful because it could help avoid name collisions if you import two modules, both of which contain something called CreditStatusType. But, as I'll discuss in a later column, you have other options to handle name collisions. As a result, using a module may simply insert another namespace level into your component's names without adding any value. Nor does enclosing all the components in the module save you any code -- you still have to flag each component with the export keyword (as I did with the CreditStatusTypes in my example). I don't use the module keyword in my modules. Importing Components Now, in my application code, to create a PremiumCustomer class, I need access to the MakeNewPremiumCustomer function and the PremiumCustomerClass. To get that access, I just add an import statement that imports those components from my module. Because my application code is in a file in my Scripts/Application folder, the import statement it uses (with a relative address pointing to my CustomerEntities module in Scripts/Utilities), looks like this: import { PremiumCustomer, MakeNewPremiumCustomer } from "../Utilities/CustomerEntities" By the way, if you've set noUnusedLocal to true in your tsconfig file, then you'll be obliged to use any component you reference in your import statements or your code won't compile. But I'm not done yet. While I've reduced the mass of script tags that I might need at the start of a page to a single reference to RequireJS, I've only looked at the simplest ways to export and import module components. Next month I'm going to focus on some more architectural issues you should consider when creating modules (along with the code you need to manage those modules, of
https://visualstudiomagazine.com/articles/2017/02/01/managing-modules.aspx
CC-MAIN-2019-13
en
refinedweb
Simplifying working with Azure Notification Hubs Postado em 5 setembro, 2014 Piyush Joshi Senior Program Manager, Azure Mobile Engagement Close to 97% of Service Bus namespaces today have either a Notification Hub or one of the messaging entities (Queue, Topic, Relay or Event Hub) in it and very rarely both together. In order to simplify the experience which today combines both Notification Hub and messaging entities together and to better manage these entities in the Service Bus backend, we are working towards splitting the user experience. As a first step towards this, we are making the selection of the purpose of a namespace explicit in whether you want to use it to create a Notification Hub or one of the messaging entities within it. In near future, with this change as the basis, we will also make it easy to consume Notification Hub from the SDKs without any of the overhead of the Service Bus specific components thereby addressing one of major pain point we have heard from customers. Note: This change will roll out next week. Please reach out to us if you have any concerns or you have any scenario which will not work with these changes. When you go to create a new namespace, you will see a new selection to determine the Namespace Type. If you want to create a Notification Hub in the namespace, choose Notification Hub otherwise keep it as default (Messaging) and create the namespace. If you want to create both Notification Hub and the messaging entities within a single namespace, then you will have to create separate namespaces going forward. Note that the existing namespaces where you may have both Notification Hub and the messaging entities in a single namespace will continue to work as is however we strongly encourage you to separate them out in their own namespaces. Once the namespace has been created, you will see a new column called "Type" in the Namespaces listing. The "Type" column can have either of the following three values: 1. Messaging - if you created a new namespace by specifying the type as 'Messaging' (default). 2. Notification Hub - if you created a new namespace by explicitly specifying the type as 'Notification Hub'. 3. Mixed - for the existing namespaces which have both Notification Hub and Messaging entities today. These namespaces and the entities within them will continue to work as is though we will not allow creating new namespaces with ‘Mixed’ type. Note that all the existing namespaces will show up as 'Mixed' type for the time being so the experience remains unchanged. We do however intend to run a backfill job in 1 month time which will update the 'Namespace Type' correctly so that if a namespace contains only Notification Hubs then its type will be update to 'Notification Hub' and if it only contains messaging entities then it will be updated to 'Messaging'. This will not have any impact on how you use the entities today and everything will continue to work as is. The 'Quick Create' & the 'Custom Create' experience for the Service Bus entities remains unchanged except that you will see an automatic filtering in the namespace dropdown depending on the entities you are creating e.g. if you are creating a Notification Hub then the namespaces you will see will be of ‘Notification Hub’ type (as well as ‘Mixed' type for the time being). Clicking on a namespace will also show a customized top level menu depending on the Type. You will be cable to create and configure Notification Hubs in a namespace of type 'Notification Hub' and any of the messaging entities in the namespace of type 'Messaging' in the same manner as before. If you currently have a ‘Mixed' namespace which contains both Notification Hubs and Messaging entities then we encourage you to split them out in their own separate namespaces. If you use PowerShell cmdlet or REST APIs directly to create a namespace then the namespace will be created using a ‘Mixed’ mode right now and we are not introducing any breaking changes here however we do plan to update the cmdlets in a subsequent release to add NamespaceType as optional parameter with 'Messaging' as the default so if you use the cmdlets to create a namespace that you will need to explicitly provide NamespaceType as 'Notification Hub'. Thank you! Mobile Notification Hub Service Bus
https://azure.microsoft.com/pt-br/blog/simplifying-working-with-azure-notification-hubs/
CC-MAIN-2017-22
en
refinedweb
Displaying image overlays on image filenames in Emacs Posted March 21, 2016 at 11:21 AM | categories: emacs, orgmode | tags: | View Comments Table of Contents It has always bothered me a little that I have to add a file image after code blocks in org-mode to see the results. That extra work… I also don't like having to explicitly print the figure in the code, since that is the extra work, just in a different place. Today I look into two approaches to this. First, we consider something like tooltips, and second just putting overlays of image files right on the file name. The plus side of this is no extra work. The downside is they won't export; that will still take the extra work, but you needed that for the caption anyway for now. Here is a video illustrating the code in this post: Here is a test. import matplotlib.pyplot as plt plt.plot([0, 1, 2, 4, 16]) plt.savefig("test-fig.png") 1 Tooltip approach Building on our previous approach of graphical tooltips, we try that here to show the images. I have solved the issue of why the images didn't show in the tooltips before; it was related to how Emacs was built. I used to build it with "cocoa" support so it integrates well in OSX. Here, I have build it with gtk3, and the tooltips work with images. (defvar image-tooltip-re (concat "\\(?3:'\\|\"\\)\\(?1:.*\\." (regexp-opt '("png" "PNG" "JPG" "jpeg" "jpg" "JPEG" "eps" "EPS")) "\\)\\(?:\\3\\)") "Regexp to match image filenames in quotes") (defun image-tooltip (window object position) (save-excursion (goto-char position) (let (beg end imgfile img s) (while (not (looking-at image-tooltip-re)) (forward-char -1)) (setq imgfile (match-string-no-properties 1)) (when (file-exists-p imgfile) (setq img (create-image (expand-file-name imgfile) 'imagemagick nil :width 200)) (propertize "Look in the minibuffer" 'display img))))) (font-lock-add-keywords nil `((,image-tooltip-re 0 '(face font-lock-keyword-face help-echo image-tooltip)))) (font-lock-fontify-buffer) Now these both have tooltips on them: "test-fig.png" and 'test-fig.png'. 2 The overlay approach We might alternatively prefer to put overlays in the buffer. Here we make that happen. (defun next-image-overlay (&optional limit) (when (re-search-forward image-tooltip-re limit t) (setq beg (match-beginning 0) end (match-end 0) imgfile (match-string 1)) (when (file-exists-p imgfile) (setq img (create-image (expand-file-name imgfile) 'imagemagick nil :width 300)) (setq ov (make-overlay beg end)) (overlay-put ov 'display img) (overlay-put ov 'face 'default) (overlay-put ov 'org-image-overlay t) (overlay-put ov 'modification-hooks (list 'org-display-inline-remove-overlay))))) (font-lock-add-keywords nil '((next-image-overlay (0 'font-lock-keyword-face t))) t) Here is the example we looked at before. import matplotlib.pyplot as plt plt.plot([-0, 1, 2, 4, 16]) plt.savefig("test-fig.png") You may want to remove those overlays. Here is one way. Note they come back if you don't disable the font-lock keywords though. (ov-clear 'org-image-overlay) I know you want to do that so here is: (font-lock-remove-keywords nil '((next-image-overlay (0 'font-lock-keyword-face t)))) (ov-clear 'org-image-overlay) Note you still have to clear the overlays. Font lock doesn't seem to do that for you I think. Copyright (C) 2016 by John Kitchin. See the License for information about copying. Org-mode version = 8.2.10
http://kitchingroup.cheme.cmu.edu/blog/2016/03/21/Displaying-image-overlays-on-image-filenames-in-Emacs/
CC-MAIN-2017-22
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Where should I define a state machine?: The tutorials are implemented in a simple cpp source file for simplicity. I want to model dynamic behavior of a class as a state machine, how should I define the state machine? Answer: Usually you'll want to implement the state machine as an attribute of the class. Unfortunately, a concrete state machine is a typedef, which cannot be forward-declared. This leaves you with two possibilities: Provide the state machine definition inside the header class and contain an instance as attribute. Simple, but with several drawbacks: using namespace directives are not advised, and compile-time cost for all modules including the header. Keep the state machine as (shared) pointer to void inside the class definition, and implement the state machine in the cpp file. Minimum compile-time, using directives are okay, but the state machine is now located inside the heap. Question: on_entry gets as argument, the sent event. What event do I get when the state becomes default-activated (because it is an initial state)? Answer: To allow you to know that the state was default-activated, MSM generates a boost::msm::InitEvent default event. Question: Why do I see no call to no_transition in my submachine? Answer: Because of the priority rule defined by UML. It says that in case of transition conflict, the most inner state has a higher priority. So after asking the inner state, the containing composite has to be also asked to handle the transition and could find a possible transition. Question: Why do I get a compile error saying the compiler cannot convert to a function ...Fsm::*(some_event)? Answer: You probably defined a transition triggered by the event some_event, but used a guard/action method taking another event. Question: Why do I get a compile error saying something like “too few” or “too many” template arguments? Answer: You probably defined a transition in form of a a_row or g_row where you wanted just a _row or the other way around. With Row, it could mean that you forgot a "none". Question: Why do I get a very long compile error when I define more than 20 rows in the transition table? Answer: MSM uses Boost.MPL under the hood and this is the default maximum size. Please define the following 3 macros before including any MSM headers: #define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS #define BOOST_MPL_LIMIT_VECTOR_SIZE 30 // or whatever you need #define BOOST_MPL_LIMIT_MAP_SIZE 30 // or whatever you need Question: Why do I get this error: ”error C2977: 'boost::mpl::vector' : too many template arguments”? Answer: The first possibility is that you defined a transition table as, say, vector17 and have 18 entries. The second is that you have 17 entries and have a composite state. Under the hood, MSM adds a row for every event in the composite transition table. The third one is that you used a mpl::vector without the number of entries but are close to the MPL default of 50 and have a composite, thus pushing you above 50. Then you need mpl/vector60/70….hpp and a mpl/map60/70….hpp Question: Why do I get a very long compile error when I define more than 10 states in a state machine? Answer: MSM uses Boost.Fusion under the hood and this is the default maximum size. Please define the following macro before including any MSM headers: #define FUSION_MAX_VECTOR_SIZE 20 // or whatever you need
http://www.boost.org/doc/libs/1_61_0/libs/msm/doc/HTML/ch05.html
CC-MAIN-2017-22
en
refinedweb
Common management annotations for JBoss projects?Heiko Rupp Jul 17, 2013 11:22 AM RHQ has a plugin generator, that I've been enhancing a bit in the last few days(*). Triggered by that effort I came back into an effort that we started together with Infinispan to have annotations on methods / fields to provide the metadata for management. E.g. public class Foo { @Metric(description = "Just a foo", dataType = DataType.TRAIT) String lastCommand; @Metric(description = "How often was this bean invoked", displayType = DisplayType.SUMMARY, measurementType = MeasurementType.DYNAMIC, units = Units.SECONDS) int invocationCount; @Operation(description = "Decrease the counter") public void decreaseCounter(@Parameter(description = "How much to decrease?", name = "by") int by) { invocationCount -= by; } } This has since diverged as ISPN has a different set of annotations to mark stuff to be exposed via jmx. And then there is as7 with DMR, that also has a way of providing that metadata. (And I guess Fuse does a similar job ). I would like to discuss initiate a common set of annotations (or other metadata) that we could use all over JBoss to create plugins and management interfaces etc. from it, so that a writer of an app only needs to use this set of annotations and we can then create DMR-resouce-description, RHQ plugins etc. from it (may also help with automatic UI generation). Clear requirements for RHQ are * units for metrics * Measurement type for metrics (trends up/down, dynamic) and I guess a few more that I just forgot 1. Re: Common management annotations for JBoss projects?David Lloyd Jul 17, 2013 11:27 AM (in response to Heiko Rupp) I've been working on annotations for WildFly. The set we have defines management resources and attributes, and also the infrastructure required to support them. The project is here: until or unless Jason kicks it out of there 2. Re: Common management annotations for JBoss projects?Elias Ross Jul 17, 2013 8:30 PM (in response to Heiko Rupp) I'd be awsome if: - JBoss ('WildFly') did the right thing with javax.management annotations () via CDI...I wrote a CDI extension that works great, but I haven't released yet as open source since I'm not sure if people seem to care about CDI/JMX that much: - although maybe I haven't found the right audience? This seems like a no brainer. - RHQ had a tool to generate a descriptor for rhq-agent.xml that used the JMX meta data. I wrote a JMX plugin generator that I use, although it's pretty crude. It works off of JMX metadata passed through the MBeanServer. It did a fine job of generating a plugin for standalone HornetQ. For providing RHQ metadata it wants, JMX offers 'descriptors' (), which should work for configuring units, trendsup, summary, etc. You annotate an attribute using something like this: @DescriptorFields({"units=bytes", "org.jboss.rhq.trending=up"}) Although this is not as great as custom annotations...For CDI support, you can maybe create a bunch of WildFly/RHQ custom annotations that are translated into standard JMX descriptors. Maybe these annotations could offer suggestions in terms of alert configuration as well? For example, "maxValue" or "minValue" are defined already. The workflow would be: - App writer comes up with product, puts annotations in their code, which is exposed by JBoss/Tomcat/Java via JMX - Operations/admin guy runs the plugin generator on a running system to capture the MBeans - Newly created plugin is edited (hopefully minimally), and then deployed to RHQ
https://developer.jboss.org/thread/230551
CC-MAIN-2017-22
en
refinedweb
Yes, I’ve finally managed got around to implementing BEPU Physics. This has meant introducing a new “PhysicsManager” class and extending the “SceneManager” class, but it’s worth it. If you haven’t already, download the latest code from the link above (I’m using 1.1.0). In the “ProjectVanquish” project, add a reference to the BEPU DLL. We’ll start with the new “PhysicsManager” class. In the “Core” folder, add a new class called “PhysicsManager” and include the following namespaces: using BEPUphysics; using BEPUphysics.Constraints; using BEPUphysics.Settings; using Microsoft.Xna.Framework; using ProjectVanquish.Models; Add the following variables: private Space space; private IList<PhysicsObject> physicsObjects; PhysicsObject doesn’t exist yet, so you’ll have to fight with Visual Studio for the time being with regards the Intellisense. Let’s create the constructor: public PhysicsManager(Vector3 gravity) { space = new Space(); space.ForceUpdater.Gravity = gravity; SolverSettings.DefaultMinimumIterations = 2; MotionSettings.DefaultPositionUpdateMode = BEPUphysics.PositionUpdating.PositionUpdateMode.Continuous; MotionSettings.UseExtraExpansionForContinuousBoundingBoxes = true; MotionSettings.CoreShapeScaling = 0.99f; space.Solver.IterationLimit = 20; // Check if we can use mutli-threading if (Environment.ProcessorCount > 1) { for (int i = 0; i < Environment.ProcessorCount; i++) space.ThreadManager.AddThread(); space.BroadPhase.AllowMultithreading = true; } physicsObjects = new List<PhysicsObject>(); } In the constructor we are instantiating a new BEPU physics object and setting some default values. We also check to see if we can use multi-threading that the BEPU engine now supports. Straight forward enough. We’ll add some properties: public IList<PhysicsObject> PhysicsObjects { get { return physicsObjects; } } public Space Space { get { return space; } } This will be used in the “SceneManager” class. Lastly, we just need an “Update” method: public void Update(GameTime gameTime) { space.Update((float)gameTime.ElapsedGameTime.TotalSeconds); } Excellent. That is our “PhysicsManager” class complete. Let’s extend our “SceneManager” class to use this new class. Open the “SceneManager” class and declare a new variable: static PhysicsManager physicsManager; In the constructor, we’ll instantiate it: physicsManager = new PhysicsManager(new Vector3(0, -9.81f, 0)); This will create an earth-like gravity. We’ll create a static property so we can access this from the “PhysicsManager” class: public static PhysicsManager PhysicsManager { get { return physicsManager; } } The last part of the “SceneManager” class is to update the “PhysicsManager” in the “Update” method: physicsManager.Update(gameTime); That’s all we need to do with the “SceneManager”. The last thing on the check-list is the new “PhysicsObject” class. Add a new class in the “Models” folder called “PhysicsObject” and add in the following namespaces: using BEPUphysics.Entities; using BEPUphysics.Entities.Prefabs; using BEPUphysics.MathExtensions; using Microsoft.Xna.Framework; using ProjectVanquish.Core; Make the class public abstract: public abstract class PhysicsObject Declare the following variables: bool movable = false; Entity entity; float mass = 1f; Add a constructor: public PhysicsObject(bool isMovable) { movable = isMovable; InitializeEntity(); } In the constructor, we are declaring if the object is movable or static. We are also calling the “InitializeEntity” method. We have 2 methods to add: protected void InitializeEntity() { entity = new Box(Vector3.Zero, 0f, 0f, 0f, mass); SceneManager.PhysicsManager.PhysicsObjects.Add(this); } public void Remove() { entity.Space.Remove(entity); SceneManager.PhysicsManager.PhysicsObjects.Remove(this); } The first initializes a new “Box” object entity and adds it to the “PhysicsManager” list, whilst the second removes the object. That’s all for now with this class. We will be returning to it to add properties later on, but for now, you have successfully integrated BEPU Physics into the engine.
https://projectvanquish.wordpress.com/tag/scene-manager/
CC-MAIN-2017-22
en
refinedweb
To gain access to all feature's geometries in a feature class, just do the following: import arcpy from arcpy import env fc = r"c:\temp\data.shp" geoms = arcpy.CopyFeatures_management(fc, arcpy.Geometry()) for g in geoms: print g.extent This sample allows anyone to directly access the geometries of the input feature class without having to use the Cursor objects. The same idea can be applied to other functions to the analysis function as well: import arcpy from arcpy import env fc = r"c:\temp\data.shp" geom = arcpy.Buffer_analysis(fc, arcpy.Geometry(), "100 Feet", "FULL", "ROUND")[0] print geom.extent Here the buffer tool outputs a single geometry to the geom object and the extent is displayed. Where this becomes really powerful is when you need to perform geometry operations on your data, and want to put the results back into that row. import arcpy from arcpy import env fc = r"c:\temp\data.shp" with arcpy.da.UpdateCursor(fc, ["SHAPE@"]) as urows: for urow in urows: geom = arcpy.Buffer_analysis(urow[0], arcpy.Geometry(), "100 Feet", "FULL", "ROUND")[0] row[0] = geom urows.updateRow(urow) del urow del geom Assuming that the input is a polygon, this snippet shows how geometries can be used as inputs and outputs thus allowing for easy insertion back into the original row. Hope this helps! 2 comments: Thanks for a "hot" tips for GIS objects. in your last example, you also could have used the built in buffer method for geometry objects: urow[0] = urow[0].buffer(100) mind you the 100 is in whatever units your data is in. also, line 7 should say urow[0] instead or row[0] Great post, not enough people using the geometry objects.
http://anothergisblog.blogspot.com/2013/11/geometry-objects-make-life-easier.html
CC-MAIN-2017-22
en
refinedweb
Serializing jasp calculations as json data Posted October 19, 2013 at 02:33 PM | categories: jasp, ase, vasp | tags: | View Comments Updated October 19, 2013 at 03:10 PM Table of Contents We use VASPto calculate materials properties in our research We use the jasppython module we have developed to setup, run and analyze those calculations. One of the things we have worked on developing recently is to more transparently share how do this kind of work by using org-mode supporting information files. Doing this should make our research more reproducible, and allow others to build off of it more easily. We have run into the following problem trying to share VASP results however. The VASP license prohibits us from sharing the POTCAR files that are used to run the calculations. That is unfortunate, but since these files are also what give VASP some competitive advantage, they are protected, and we agreed to that when we bought the license. The problem is that the jasp module requires the POTCAR files to work, so without them, our scripts are not reproducible by researchers without a VASP license. So, we have been looking at new ways to share the data from our calculations. In this post, we consider representing the calculation as a JSON file. We will look at a couple of new features built into the development branch of jasp 1 The simplest case of a simple calculation Here we setup and run a simple calculation, and output the JSON file. from ase import Atoms, Atom from jasp import * import numpy as np np.set_printoptions(precision=3, suppress=True)() with open('JSON', 'w') as f: f.write(calc.json) energy = -14.687906 eV [[ 5.095 0. 0. ] [-5.095 0. 0. ]] Now, we can analyze the JSON file independently of jasp. The json data contains all the inputs we used for the VASP calculation, the atomic geometry, and many of the outputs of the calculation. Here is the JSONfile. import json with open('molecules/simple-co/JSON', 'rb') as f: d = json.loads(f.read()) print('The energy is {0}'.format(d['data']['total_energy'])) print('The forces are {0}'.format(d['data']['forces'])) The energy is -14.687906 The forces are [[5.095488, 0.0, 0.0], [-5.095488, 0.0, 0.0]] 2 Including extra information in the JSON file If we use a slightly different syntax, we can also include the total DOS in the JSON file. from jasp import * with jasp('molecules/simple-co') as calc: with open('JSON-DOS', 'w') as f: f.write(calc_to_json(calc, dos=True)) To illustrate that we have done that, let us plot the DOS without using jasp from the JSON-DOSfile. import json import matplotlib.pyplot as plt with open('molecules/simple-co/JSON-DOS', 'rb') as f: d = json.loads(f.read()) energies = d['data']['dos']['e'] dos = d['data']['dos']['dos'] plt.plot(energies, dos) plt.savefig('molecules/simple-co/dos.png') We are still working on getting atom-projected DOS into the json file, and ensuring that all the spin cases are handled (e.g. the spin-up and spin-down DOS). 3 Limitations?.
http://kitchingroup.cheme.cmu.edu/blog/2013/10/19/Serializing-jasp-calculations-as-json-data/
CC-MAIN-2017-22
en
refinedweb
Improving library documentation From HaskellWiki (Difference between revisions) Revision as of 14:43, 26 November 2006 If you find standard library documentation lacking in any way, please log it here. At the minimum record what library/module/function isn't properly documented. Please also suggest how to improve the documentation, in terms of examples, explanations and so on. Example: package base Data.List unfoldr An example would be useful. Perhaps: -- A simple use of unfoldr: -- -- > unfoldr (\b -> if b == 0 then Nothing else Just (b, b-1)) 10 -- > [10,9,8,7,6,5,4,3,2,1] -- dons 00:31, 26 November 2006 (UTC) Tag your submission with your name by using 4 ~ characters, which will be expanded to your name and the date. If you'd like, you can directly submit your suggestion as a darcs patch via the bug tracking system. Please add your comments under the appropriate package: 1 base package base Data.Array.IO and Data.Array.MArray descriptions An example would be usefull. Arrays can be very difficult when you see them the very first time ever with the assumption that you want to try them right now and that Haskell is a relatively new to you. Maybe something like this could be added into the descriptions of the array-modules. module Main where import Data.Array.IO -- Replace each element with 1/i, where i is the index starting from 1. -- Loop uses array reading and writing. loop :: IOUArray Int Double -> Int -> Int -> IO () loop arr i aLast | i > aLast = return () | otherwise = do val <- readArray arr i writeArray arr i (val / (1+fromIntegral i)) loop arr (i+1) aLast main = do arr <- newArray (0,9) 1.0 -- initialise an array with 10 doubles. loop arr 0 9 -- selfmade loop over elements arr <- mapArray (+1) arr -- a map over elements elems <- getElems arr putStrLn $ "Array elements: " ++ (show elems) Isto 14:43, 26 November 2006 (UTC)
https://wiki.haskell.org/index.php?title=Improving_library_documentation&diff=8680&oldid=8674
CC-MAIN-2017-22
en
refinedweb
To date QuantStart has generally written on topics that are applicable to the beginner or intermediate quant practitioner. However we have recently begun to receive requests from academics and advanced practitioners asking for more content on research-level topics. This is the first in a new series of posts written by Imanol Pérez, a PhD researcher in Mathematics at Oxford University, UK, and a new expert guest contributor to QuantStart. In this introductory post Imanol describes the Theory of Rough Paths, applying Python to compute the Lead-Lag and Time-Joined transformations to a stream of IBM pricing data. - Mike. Since the theory of rough paths was introduced in the late 90s[5], the field has evolved considerably and at a very fast pace. Moreover, in the last few years there have been many papers[3], [2], [1] showing how to apply rough path theory to machine learning and time series analysis. Rough path theory, as the name suggests, deals with paths (that is, continuous functions $X:[0,T]\rightarrow \mathbb{R}^d$) that are rough, in the sense of being highly oscillatory. In this article, we will introduce the theory of rough paths from a theoretical point of view. This introduction will be followed by several articles that will show, with concrete examples, how some of the results that are stated here can be used in quantitative finance. Before defining the signature of a continuous path and introducing its properties, we shall define the tensor algebra space we will be dealing with. The $n$-th tensor power of $\mathbb{R}^d$ is defined as \begin{equation*} (\mathbb{R}^d)^{\otimes n} := \underbrace{\mathbb{R}^d\otimes \mathbb{R}^d\otimes \ldots \otimes \mathbb{R}^d}_n, \end{equation*} where $\otimes$ is the tensor product. We define the tensor algebra space as \begin{equation} T((\mathbb{R}^d)):=\{(a_0, a_1, a_2, \ldots) : a_n \in (\mathbb{R}^d)^{\otimes n} \mbox{ }\forall n\geq 0\}, \end{equation} were we take $(\mathbb{R}^d)^{\otimes 0}=\mathbb{R}$ by convention. Moreover, we shall also introduce the truncated tensor algebra space $T^n(E)$, which is defined as \begin{equation}\label{eq:truncated tensor algebra 1} T^n(\mathbb{R}^d):=\bigoplus_{i=0}^n (\mathbb{R}^d)^{\otimes i}. \end{equation} Given two elements $a=(a_0,a_1,\ldots),b=(b_0,b_1,\ldots)\in T((\mathbb{R}^d))$, we may introduce the sum and product of $a$ and $b$ as \begin{equation*} a+b:=(a_0+b_0, a_1+b_1, a_2+b_2,\ldots), \end{equation*}\begin{equation*} a\otimes b=ab:=\left (\sum_{i=0}^n a_i \otimes b_{n-i} \right )_{n\geq 0}. \end{equation*} Summation and multiplication in the truncated tensor algebra can be defined analogously. The signature of a path is a key object in the theory of rough paths. For a continuous path $X:[0,T]\rightarrow \mathbb{R}^d$ such that the integrals below make sense, the signature of $X$ is defined as the sequence $$S(X):=(1,X^1,X^2,\ldots)\in T((\mathbb{R}^d))$$ where $$X^n := \underset{\substack{0 < u_1 < u_2 < \ldots < u_n < T}}{\int\ldots\int} dX_{u_1} \otimes \ldots \otimes dX_{u_n} \in (\mathbb{R}^d)^{\otimes n}\quad \forall n \geq 1.$$ Sometimes we are only interested in the truncated signature, which is defined as $$S^n(X):=(1,X^1,X^2,\ldots,X^n).$$ The signature of a path can also be defined as the sequence $(X^I)_{I\in \mathcal{I}}$, where $\mathcal{I}$ is the set of all multi-indices with entries in $\{1,\ldots,d\}$, and $X^I$ is defined, for $I=(i_1,\ldots,i_n)$, as the iterated integral \begin{equation}\label{eq:alternative signature} X^I = \underset{\substack{0 < u_1 < u_2 < \ldots < u_n < T}}{\int\ldots\int} dX_{u_1}^{i_1} \ldots dX_{u_n}^{i_n}. \end{equation} This way of expressing the signature allows to have a better intuition of it, and how it may be calculated in practice. Let us take a closer look to what the iterated integrals look like for the low order terms, for instance. Take the above equation with $I=(i)$, for $i=1,\ldots,d$. Then, $X^{(i)}=X_T^i-X_0^i$, that is, the first order terms of the signature are nothing else than the increments of the path. For the second order iterated integral, let us consider $I=(i,j)$, with $i,j\in \{1,\ldots,d\}$ not necessarily distinct. Then, we have \begin{equation*} \begin{split} \left ( \int_{0}^T dX_{u_1}^{i}\right ) \left ( \int_{0}^T dX_{u_2}^{j} \right )=\underset{\substack{u_1, u_2 \in [0,T]}}{\int} dX_{u_1}^{i}dX_{u_2}^{j} = \\ =\underset{\substack{0 < u_1 < u_2 < T}}{\int} dX_{u_1}^{i}dX_{u_2}^{j}+\underset{\substack{0 < u_2 < u_1 < T}}{\int} dX_{u_1}^{i}dX_{u_2}^{j}. \end{split} \end{equation*} That is, we have the following identity: \begin{equation}\label{eq:identity second order signature} X^{(i,j)}+X^{(j,i)}=X^{(i)}X^{(j)}\quad \forall i,j\in \{1,\ldots,d\}, \end{equation} and in particular, letting $i=j$, \begin{equation} X^{(i,i)}=\dfrac{\left (X^{(i)}\right )^2}{2} \quad \forall i=1,\ldots,d. \end{equation} The equation shows how the first and second order terms are related in the signature. In the previous section we defined the signature of a continuous path. However, in real life we cannot observe a full continuous path, we can only expect to observe a stream of data that consists of a sampling of the path we are interested in. In financial securities, for instance, the market only shows the prices of the securities at specific times. Therefore, in order to apply techniques from rough path theory to financial securities, we need to define what we mean by the signature of a stream of data. Mathematically, a stream of data (which for now will be assumed to be one dimensional) is denoted by $\{(t, S_t)\}_{t\in \mathcal{D}}$, where $\mathcal{D}\subset [0,T]$ is the times at which information about the path is available, and $S_t$ denotes the value of the path at time $t$. In financial terms, $\mathcal{D}$ denotes the times at which we have information about the price of a security, and $S_t$ denotes the price of the security at time $t$. The way to proceed in this case is quite simple: we first transform the stream of data into a continuous path, and then calculate the signature of this new path. We will show two approaches for this task: the lead-lag transformation, and the time-joined transformation. In both cases the resulting continuous path is piecewise linear, which makes the calculation of the signature easier and computationally fast[3]. As mentioned, we want to transform the stream of data $\{(t, S_t)\}_{t\in \mathcal{D}}$ into a continuous path. If $\mathcal{D}$ is given by the times $0\leq t_0 < t_1 < \ldots < t_N \leq T$, we may then write the stream of data as $$\{(t_i, S_{t_i})\}_{i=0}^N.$$ Then, the lead-lag transformation of the stream of data is defined as \begin{align*} \widehat{S}_t^\mathcal{D}:= \begin{cases} (S_{t_i}, S_{t_{i+1}})&\mbox{for }t\in [2i, 2i+1)\\ (S_{t_i}, S_{t_{i+1}}+2(t-(2i+1))(S_{t_{i+2}}-S_{t_{i+1}})&\mbox{for }t\in \left [2i+1, 2i+\frac{3}{2}\right )\\ (S_{t_i}+2\left (t-\left (2i+\frac{3}{2}\right )\right )(S_{t_{i+1}}-S_{t_i}), S_{t_{i+2}})&\mbox{for }t\in \left [2i+\frac{3}{2}, 2i+2\right ). \end{cases}, \end{align*} for $t\in [0, 2N]$. As we see, this path is $2$-dimensional, and we often write it as $\widehat{S}^\mathcal{D}=(\widehat{S}^{\mathcal{D},b}, \widehat{S}^{\mathcal{D},f})$. The first term, $\widehat{X}^{\mathcal{D}, b}$, corresponds to the lead component and the second term, $\widehat{X}^{\mathcal{D}, f}$, corresponds to the lag component. The following code, implemented in Python, computes and plots the lead-lag transformation of a stream of data: import matplotlib.pyplot as plt import matplotlib.dates as mdates import matplotlib.patches as patches def leadlag(X): ''' Returns lead-lag-transformed stream of X Arguments: X: list, whose elements are tuples of the form (time, value). Returns: list of points on the plane, the lead-lag transformed stream of X ''' l=[] for j in range(2*(len(X))-1): i1=j//2 i2=j//2 if j%2!=0: i1+=1 l.append((X[i1][1], X[i2][1])) return l def plotLeadLag(X, diagonal=True): ''' Plots the lead-laged transformed path X. If diagonal is True, a line joining the start and ending points is displayed. Arguments: X: list, whose elements are tuples of the form (X^lead, X^lag) diagonal: boolean, default is True. If True, a line joining the start and ending points is displayed. ''' for i in range(len(X)-1): plt.plot([X[i][1], X[i+1][1]], [X[i][0], X[i+1][0]], color='k', linestyle='-', linewidth=2) # Show the diagonal, if diagonal is true if diagonal: plt.plot([min(min([p[0] for p in X]), min([p[1] for p in X])), max(max([p[0] for p in X]), max([p[1] for p in X]))], [min(min([p[0] for p in X]), min([p[1] for p in X])), max(max([p[0] for p in X]), max([p[1] for p in X]))], color='#BDBDBD', linestyle='-', linewidth=1) axes=plt.gca() axes.set_xlim([min([p[1] for p in X])-1, max([p[1] for p in X])+1]) axes.set_ylim([min([p[0] for p in X])-1, max([p[0] for p in X])+1]) axes.get_yaxis().get_major_formatter().set_useOffset(False) axes.get_xaxis().get_major_formatter().set_useOffset(False) axes.set_aspect('equal', 'datalim') plt.show() Fig 1 - Path of IBM stock Fig 2 - Lead-lag transformation of IBM stock Figures 1 and 2 show the price of IBM's stock from October 2016 to November 2016, and its lead-lag transformation, which was computed using the code above. An alternative way of transforming the stream of data $\{(t_i, S_{t_i}\}_{i=0}^N$ into a continuous path is using the time-joined transformation, which is defined as \begin{align*} Y_t:=\begin{cases} (t_0, S_{t_0}t)&\mbox{for }t\in [0, 1)\\ (t_i+(t_{i+1}-t_i)(t-2i-1), S_{t_i})&\mbox{for }t\in [2i+1,2i+2)\\ (t_{i+1}, S_{t_i}+(X_{i+1}-X_{t_i})(t-2i-2))&\mbox{for }t\in [2i+2, 2i+3) \end{cases}, \end{align*} for $0\leq i\leq N-1$ and $t\in [0, 2N+1]$. The following piece of code computes this transformation, for a given stream of data: import matplotlib.pyplot as plt import matplotlib.dates as mdates import matplotlib.patches as patches def timejoined(X): ''' Returns time-joined transformation of the stream of data X Arguments: X: list, whose elements are tuples of the form (time, value). Returns: list of points on the plane, the time-joined transformed stream of X ''' X.append(X[-1]) l=[] for j in range(2*(len(X))+1+2): if j==0: l.append((X[j][0], 0)) continue for i in range(len(X)-1): if j==2*i+1: l.append((X[i][0], X[i][1])) break if j==2*i+2: l.append((X[i+1][0], X[i][1])) break return l def plottimejoined(X): ''' Plots the time-joined transfomed path X. Arguments: X: list, whose elements are tuples of the form (t, X) ''' for i in range(len(X)-1): plt.plot([X[i][0], X[i+1][0]], [X[i][1], X[i+1][1]], color='k', linestyle='-', linewidth=2) axes=plt.gca() axes.set_xlim([min([p[0] for p in X]), max([p[0] for p in X])+1]) axes.set_ylim([min([p[1] for p in X]), max([p[1] for p in X])+1]) axes.get_yaxis().get_major_formatter().set_useOffset(False) axes.get_xaxis().get_major_formatter().set_useOffset(False) axes.set_aspect('equal', 'datalim') plt.show() Fig 3 - Path of IBM stock (repeated from above) Fig 4 - Time-joined transformation of IBM stock Figures 3 and 4 show the price path of IBM and its time-joined transformation, using the code above. In this article, we have introduced the signature of a continuous path, as well as the signature of a stream of data. As we will see in the following articles, this object will prove to have very interesting applications to machine learning and time series analysis. After showing how to apply signatures to these fields, we will analyse concrete examples of how to use signatures in quantitative finance.
https://www.quantstart.com/articles/rough-path-theory-and-signatures-applied-to-quantitative-finance-part-1
CC-MAIN-2017-22
en
refinedweb
ncl_ppinpo man page PPINPO — generates and returns the boundary of the "intersection" polygon, which consists of all points that are inside both the clip polygon and the subject polygon. Synopsis CALL PPINPO (XCCP,YCCP,NCCP,XCSP,YCSP,NCSP,RWRK,IWRK,NWRK,URPP,IERR) C-Binding Synopsis #include <ncarg/ncargC.h> void c_ppinpo( float *xccp, float *yccp, int nccp, float *xcsp, float *ycsp, int ncsp, float *rwrk, int *iwrk, int nwrk, int (*urpp_)( float *xcra, float *ycra, int *ncra), int *ierr) Description - XCCP (an input array of type REAL) is the X coordinate array for the clip polygon. - YCCP (an input array of type REAL) is the Y coordinate array for the clip polygon. - NCCP (an input expression of type INTEGER) is the number of points defining the clip polygon. - XCSP (an input array of type REAL) is the X coordinate array for the subject polygon. - YCSP (an input array of type REAL) is the Y coordinate array for the subject polygon. - NCSP (an input expression of type INTEGER) is the number of points defining the subject polygon. - RWRK (a scratch array, dimensioned NWRK, of type REAL) is a real workspace array. Because of the way in which they are used, RWRK and IWRK may be EQUIVALENCEd (and, to save space, they should be). - IWRK (a scratch array, dimensioned NWRK, of type INTEGER) is an integer workspace array. Because of the way in which they are used, RWRK and IWRK may be EQUIVALENCEd (and, to save space, they should be). - NWRK (an input expression of type INTEGER) is the length of the workspace array(s). It is a bit difficult to describe how much space might be required. At the moment, I would recommend using NWRK equal to about ten times the total of the number of points in the input polygons and the number of intersection points. This situation will change with time; at the very least, I would like to put in an internal parameter that will tell one how much space was actually used on a given call, but I have not yet done so. - URPP is the name of a user-provided routine to process the polygon-boundary pieces. This name must appear in an EXTERNAL statement in the routine that calls PPINPO and the routine itself must have the following form: SUBROUTINE URPP (XCRA,YCRA,NCRA) DIMENSION XCRA(NCRA),YCRA(NCRA) ...(code to process a polygon boundary piece)... RETURN END Each of the arguments XCRA and YCRA is a real array, dimensioned NCRA; the former holds the X coordinates, and the latter the Y coordinates, of a piece of the polygon boundary. It will be the case that XCRA(NCRA)=XCRA(1) and YCRA(NCRA)=YCRA(1). - IERR (an output variable of type INTEGER) is returned with the value zero if no errors occurred in the execution of PPINPO or with a small positive value if an error did occur. The value 1 indicates that a degenerate clip polygon was detected, the value 2 that a degenerate subject polygon was detected, and the value 3 that the workspace provided was too small; values greater than 3 should be reported to the author, as they indicate some problem with the algorithm. Currently, if IERR is returned non-zero, one can be sure that no calls to URPP were executed; in the future, this could change, but, in that case, there will be an internal parameter allowing one to request the current behavior. C-Binding Description The C-binding argument descriptions are the same as the FORTRAN argument descriptions. Usage The FORTRAN statement CALL PPINPO (XCCP,YCCP,NCCP,XCSP,YCSP,NCSP,RWRK,IWRK,NWRK,URPP,IERR) causes the formation of an intersection polygon (of the clip and subject polygons) and the delivery of that polygon's boundary, piece by piece, to the user-specified polygon-processing routine URPP. Examples Use the ncargex command to see the following relevant examples: ppex01, tppack, c_ppex01. Access To use PPINPO or c_ppinpo, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order. See Also Online: polypack, ppdipo, ppditr, ppintr, ppplcl, ppppap, ppunpo, ppuntr, ncarg_cbind. Hardcopy: None. University Corporation for Atmospheric Research The use of this Software is governed by a License Agreement.
https://www.mankier.com/3/ncl_ppinpo
CC-MAIN-2017-22
en
refinedweb
MooseX::Role::WithOverloading - Roles which support overloading package MyRole; use MooseX::Role::WithOverloading; use overload q{""} => 'as_string', fallback => 1; has message => ( is => 'rw', isa => 'Str', ); sub as_string { shift->message } package MyClass; use Moose; use namespace::autoclean; with 'MyRole'; package main; my $i = MyClass->new( message => 'foobar' ); print $i; # Prints 'foobar' MooseX::Role::WithOverloading allows you to write a Moose::Role which defines overloaded operators and allows those.
http://search.cpan.org/~bobtfish/MooseX-Role-WithOverloading/lib/MooseX/Role/WithOverloading.pm
CC-MAIN-2017-22
en
refinedweb
Parallel Port Box Back in college I acquired some old DJ light controlling equipment. This included one box with switches and buttons to turn eight channels on and off, two things that looked like big power strips where each outlet was switched by a relay, and two cables, each about fifty feet long to attach the switches and buttons to the relays and outlets. A few years back I made a box to control the relays from a parallel port. This is used at work to control as many as eight lights that indicate the status of various builds. I wanted my own to play with the same setup at home, so I made another one. (read: Phone-controlled Christmas lights?) It looks like this: It's just eight transistors, resistors, LEDs, and diodes with the necessary connectors in a little box. It went a lot faster this time. I did all of the soldering and most of the customization of the project box in one night. I think the previous time it took me the better part of a day. The only thing that prevented it from working the first time I tried it out was that the Centronics connector pinout I was relying on labelled more pins as ground than were actually connected to anything. I moved the ground wire to a different pin and it worked perfectly. I couldn't find the right kind of Centronics connector, so I ended up soldering to a connector that was supposed to be PCB-mounted. It's ugly, but it's safely tucked away inside the box and works fine. And finally, when I just about had it ready to test, I realized I didn't have a printer cable. Fortunately, AIT computers provided me with this annoyingly somewhat hard to find cable for a perfectly reasonable price. In other news, the Python parallel library is about as nice and simple as it gets: import parallel import time p = parallel.Parallel() p.setData(0b101010) time.sleep(1.0) p.setData(0b010101) Now I need to find something to do with a box of switches and buttons.: App Store Store This just in: Wikipedia to the rescue.
http://www.unprompted.com/projects/blog/2011/1
CC-MAIN-2014-42
en
refinedweb
See also: IRC log <scribe> scribe: Noah Mendelsohn <DanC_lap> action-106? <trackbot-ng> ACTION-106 -- Norman Walsh to make a pass over the WebArch 2.0 doc't which adds a paragraph, and connects up to issues list -- due 2008-05-08 -- OPEN <trackbot-ng> <timbl> SW: We've talked about this before. Norm reports no progress on ACTION-30. NW: That's right, I'm sorry I didn't get to it. SW: I'm not sure how we get started. <DanC_lap> action-106? <trackbot-ng> ACTION-106 -- Norman Walsh to make a pass over the WebArch 2.0 doc't which adds a paragraph, and connects up to issues list -- due 2008-06-05 -- OPEN <trackbot-ng> NM: Calling the discussion "Volume 2" prejudices the options a bit. We may want to do a lot of our work in a new edition of the current volume. The ultimate result should be document(s) that readers perceive as well organized. JR: We don't want to do technology for technology's sake, or writing for writing's sake. Would doing a use cases and/or requirements analysis be one way to focus the goals? SW: We have a skeletal outline. (Norm goes to look for it.) <Norm> SW: We've had substantive discussion of Semantic Web and of clients with richer application behavior. There's also the question of doing maintenance work on what we've already done. JR: Maybe we should be goal based. Is the goal to influence certain groups to do certain things? TBL: Describing the Web as a system, from the top level and as an integrated story, is important. The reason the topic areas seem disjoint is that having started sort of top down on the table of contents for Volume 1, we found that some areas were contentious and some not. Still, it's important that what we're shooting for is to tell one consistent story. Norm finds the list of topics at: JR: The purpose of the finding is to convince certain people to do certain things. NM: I think the audience is broader than one might think. For example, I often use the Architecture Document to educate people who are not coding applications, but who are making decisions like whether their mobile phone applications should integrate with the Web. From that they learn that they'll have to tell their programmers to identify things with URIs, etc. JR: The question is how do you decide which project to work on. TBL: Maybe the top level document should say the obvious things without very much depth. Consider naming. It's important, but not separate from other aspects of the Web. Viewed from one perspective, the Semantic Web is just one format or set of formats. There are other questions about how AJAX works with the Web. JR: Maybe use cases, as Noah has been talking about. NM: I think we sometimes have to publish the things where we've been able to make some progress. Sometimes we set a few priorities, but then find that we can only generate good insights into some areas we'd hoped to hit. Which ones will progress is not entirely predictable. So, we should at least consider sharing the things where we've made good progress. NW: I think that the business of working in a document, producing drafts, actually focusses our work. Also, looking at mining the information in the findings can be useful. TBL: The TAG was formed in part because working groups used to be told "you can't do that, it's not how the Web works", but there was no common point of reference for how the Web really does work. Now that we have the mandate, we work in two modes: sometimes we are responsive to an external question or an issue that arises, but sometimes we are proactive in setting priorities. When we produced Web Architecture Volume 1, many people told us it was useful. So, we weren't just satisfying ourselves. This time, I would like to see more formal methods, though. JR: I still think we could do a more careful job of capturing those goals and dynamics. SW: I think there are three bullet points in the TAG charter that pretty much capture what Tim said. <Zakim> timbl, you wanted to mention origin of the TAG in WG questions <Stuart> Mission statement <Stuart> The mission of the TAG is stewardship of the Web architecture. There are three aspects to this mission: <Stuart> to document and build consensus around principles of Web architecture and to interpret and clarify these principles when necessary; <Stuart> to resolve issues involving general Web architecture brought to the TAG; <Stuart> to help coordinate cross-technology architecture developments inside and outside W3C. <Stuart> From DO: I like a lot of what we've talked about doing either in Vol 2. or updates to Vol. 1. Doing versioning and/or self-describing Web and/or distributed extensibility would be really great, but.... I still feel as I've said before, that there's a huge buzz around AJAX, social networks, and the "latest cool things", and it's not clear to me that we're doing a good job of helping them. Not sure what to suggest, but it feels like it could be a good goal to really align better with where a lot of investment and energy is going. <Zakim> DanC_lap, you wanted to noodle on events, e.g. Future of Web Apps DC: I haven't heard anyone talk about usefulness of Web Arch. vol 1 in awhile. So, I'm interested in focussing on events like the Future of Web Apps. There's likely to be discussion of some things like the future of Twitter is. Would anyone like to talk about interesting conferences and how we could both have impact and be influenced by their needs? DO: Yes, that would be great. DC: We had some presence at XTech, and that seemed good. DO: Supernova's coming up in San Francisco in June. NW: Supernova 2008 () <DanC_lap> (hm... ) NM: Should we go through the details on these things here, or agree to do in email? DC: I prefer here. NM: I'm still enthusiastic about the idea of doing a piece of a TAG Web site that would be a really cool, useful tutorial and introduction to why Web architecture matters, and how to apply it. SW: Books, such as Web Architecture in a Nutshell or something of that ilk would be useful. JR: Not sure we can pick up much more work. DC: The nice thing about blog articles is that we don't have to agree before publishing. NM: Yes, we've decided to use our blog that way, and it seems to be proving useful. DC: I would have liked to talk about Javascript security, but we've since learned that in certain respects it's purposely obscure. <DanC_lap> that wasn't my point: <DanC_lap> DC: yes, javascript security is another topic where I'd like to know what forums/conferences are the place to talk about it SW: I need guidance as to how we make progress. DO: The issue has come up about walled gardens like Facebook in which things are not properly identified with URIs. <DanC_lap> (jar mentioned reviews in blog items... I tend to microblog in bookmark services... I have ~180 in ) NM: Engaging folks like Facebook would be great, if not necessarily easy on a subject like this, but I'm not sure how it informs what we'd write in a new edition of WebArch. TBL: One of the social network issues is: how does my social networking site get a list of friends from your site? One good answer of course is, it should be retrieved as a document probably using a technology like RDF. Then you find that you need what can be rather complex 3rd party authentication systems like oauth. Interesting that so much of this is being driven by the "friends list" use case. At the moment, it's not being done RESTfully. Now, all of this is being done in service to building applications that integrate data from two or more social networking sites. You can imagine doing this in the browser if you like. DO: Oauth is pretty cool in how it uses URIs. TBL: It does a lot of redirects. We looked at OpenID and the number of round trips is large. Oauth seems to do even more. Popping up: it would be really interesting to ask "how would you do something like this better using Web Arch"? DO: What would that look like? What would we on the TAG do? Look at OpenID and oauth from a Web Arch perspective? TBL: Yes those, but also the larger problem of portability of things like friends lists. NM: I'm not sure what to do about it, but if the sorts of applications one sees with Flash and Silverlight become more and more ubiquitous, we might ask what if anything Web Arch Ed. 2 should do to help you know how to build apps with that level of function (not necessarily using those proprietary technologies). Otherwise, there's some risk that by the time we publish, we'll be talking about applications that look a bit old fashioned. HT: Often what we're doing can be best viewed as industrial archeology. We often do best when we look back. So, Noah, I'm not sure those rich technologies belong in our document. NM: I agree actually. I tried to signal that I too think we often do best when we look back, but it runs the risk that by the time we say anything we don't influence the people who can still make choices about new things. DO: Someone has to figure out how to "put pen to paper". Once it gets done, I would be interested in putting a subset of the compatibility strategies into Web Arch vol 2. For example, Web Arch v.1 says "use version ids", and we've decided the story needs to be more subtle than that. SW: (Stuart shows a picture of a useful bridge over a river that's made in cast iron, using a design meant for wood). This is a good example of how architecture does not always come first. ... I got this from a talk by Peter Williams, who used to work here at HP. <Stuart> <DanC_lap> SW: Perhaps we need to start by doing a thorough review of Web Arch. vol 1? JR: Aren't the revisions a separate bit from the new stuff? SW: Revising is expensive, so doing it for one piece isn't a good bet. <Zakim> Stuart, you wanted to pick up on industrial archeaology NM: I think we need to focus not so much on separating revisions from new stuff, but on creating a document or documents that will serve the community for 3-5 years. We'll find out what's revised, what's new, and how it's best organized as we go, I think. <DanC_lap> action-106? <trackbot-ng> ACTION-106 -- Norman Walsh to make a pass over the WebArch 2.0 doc't which adds a paragraph, and connects up to issues list -- due 2008-06-05 -- OPEN <trackbot-ng> NW: I have an action to add details to our working list of possible priorities and to link them to our issues list. I suggest that the next time we should discuss Web Arch future is after I complete that action. SW: I suggest we do as Norm suggests. JR: I wonder if we need to collect things other than what are in SW: Yes, but it's a start. We'll change as necessary. I think that some of figuring out what needs to be done comes of reviewing what we've already done. I think we've heard that reviewing the written works like findings and AWWW vol. 1 is something that individual members will do as they see fit. JR: Where do we do this? SW: At least in www-tag, maybe sometimes in the TAG blog. <DanC_lap> close action-114 <trackbot-ng> ACTION-114 Henry to find the counter example that made it necesseary to make a terniary relationship closed <DanC_lap> close action-115 <trackbot-ng> ACTION-115 Henry to improve the presentation of the way the ontology reconstructs RDDL 'purpose', and to attempt to address skw's concern about the subject of the so-called purpose relation closed ****MORNING BREAK**** <DanC_lap> HT: Maybe we need to talk about how to get HTML and XML to converge? We have Sam Ruby's distributed extensibility proposal ( ), the IE8 proposed function, etc. <DanC_lap> ^ sam ruby on distributed extensibility <DanC_lap> ISSUE-41 Decentralized extensibility in the HTML WG HT: We've also talked about implicit namespace bindings. These could be implicit in the media-type, perhaps written into the specifications e.g. if svg; aria: prefixes were "written into" the HTML specification. So, these are things that led me to suggest putting this back on the agenda. <DanC_lap> Maciej Stachowiak TBL: Someone (Macieg Stachowiak?) has suggested having the occurrence of an element implicitly bind a namespace. Another option would be to declare in a specification that you can find by following your nose that, e.g., in HTML, the occurrence of a certain element such as SVG would trigger the binding. ... Alternative would be to have "svg:" prebound in HTML. ... The idea is to get a smooth slope, where millions of people can do the easy thing easily, but it scales architecturally to the general case with the full URI. <timbl> Got a pointer? <timbl> I feel it is important that the HTML5 spec be split into smaller chunks. TBL: I feel it is important that the HTML5 spec be split into smaller chunks. DO: I think Roy Fielding has made similar comments. SW: We received an invitation from Ian Hickson to review the HTML 5 spec: HT: What's the right way for us to say that it's going to be very challenging to review something that large? If we could find a way to agree to focus on more specific pieces, that might be helpful. TBL: I think Roy's feedback was fundamentally "where's the core language definition?" I think that would probably be roughly mine too. DC: I want to send comments primarily where it's likely to produce useful action. <DanC_lap> found Fielding's remarks that DO alluded to: " This draft has almost nothing to do with HTML. It is a treatise on browser behavior. That is a fine standard to have, but deserves a different title so that the folks who just want to implement HTML can do so without any of this operational/DOM nonsense." -- DC: I'm trying to figure out how many MIME types there should be for HMTL. Some people believe there should be application/xhtml+xml in addition to text/html. There's a point of view I might share that xhtml+xml isn't going to take off in practice, but some people feel very strongly we need to make xhtml+xml work. Poll on how many HTML media-types we should have in the long run: Ashok: pass Stuart: 1 DO: strong 1 NM: bits of opinions, but not coherent or informed enough, so I suppose I pass <DanC_lap> (I realize this poll is more about tagSoupIntegration than distribute extensibility) HT: Now or in the long run? DC: Long run. HT: 1, but only the very long run. NM: Not sure I have a well informed opinion (noodles on saying just text/html) HT: Um, not sure. As long as XHTML 1.n has any traction at all, I want an XML media type for it. Because I want what follows from that. <Norm> In an informal poll, mostly what Henry said, 1 if it's application/xml+html. If I can't have *that* one, then two. I guess. HT: Until we have some single thing that is really "both", with good statutory grounding. NW: Mostly what Henry said. Ultimately 1, if I could make it the XML one. DC: If application/xhtml+xml, it will have to do all the non-clean XML stuff, so text/html (1) JR: If the distinction gives you information that's useful early that's good, but I'm not sure I care as long as whatever we get supports "follow your nose". <DanC_lap> DC: FYI, Sam Ruby argues for 2. TLB: Convergence is really important between HTML and XML, because there's only one HTML. I'm not sure that having no "+xml" in the mime type explicitly is something one can work around as a special case, I.e. interpreting text/html as XML when necessary. I don't resent +xml, but I think text/html should migrate smoothly to XML over time. BTW: text/* should perhaps migrate to UTF-8. JR: What's the nature of the distinction? You're not going to treat the text differently. You can get the information from 3 places: from the media-type, from the start of file. If you can heuristically do a parser that "does the right thing", that might work. NM: If the media type spec. delegates the "is it XML?" switching to what's at the head of the file, then OK, otherwise not. You can't do it, e.g. for text/plain. JR: I wonder if 2 is the wrong number. <Zakim> ht, you wanted to mention the difference between RDF and SVG <Zakim> noah, you wanted to talk about mixins in media types <DanC_lap> issue-9? <trackbot-ng> ISSUE-9 -- Why does the Web use mime types and not URIs? -- CLOSED <trackbot-ng> NM: I tend to feel that one architectural issue is the lack of richness in the structure and semantics of media type names. If you had mixin syntax and semantics, you could come closer to saying "oh, by the way, this text/html or this text/plain happens to be well formed XML, and you have permission to interpret it as XML." HT: RDFa is an example of a small vocabularly of which you can say "it's not obviously wrong to say that you should just negotiate with the HTML WG to make them part of the HTML base language". RDFa has a handful of elements, ARIA, more, SVG has a large number of elements and MATHML more. All are enumerated in the specs. ... If I were approaching this a priori and adding RDFa to my <spans>, I would like to write <span my: where my:foo is an RDF relation. What's interesting is that my:foo's are open-ended. ... It seems to me that the tail has been wagging the dog in designing RDFa. In a sense, the reason we're not seeing a strong example of the need for distributed extensibility is that the lack of namespace-based extensibility in HTML bounded the discussion to require at most a few new attributes, DC: Your view of the aesthetics may be in the minority. DTD-based validators are part of what drives this. Also, people who work with markup languages believe that tags are "holy". HT: Attributes sometimes feel different. NM: One thing you get from always triggering a capability (e.g. RDFa) with well known attributes is that a client is more likely to know about a well-known "property" attribute than a domain specific foo:xxx. It it will at least know that what you were trying to do was RDFa. HT: RDFa wants support in browsers. NM: I think that whether there are various RDFa add ins to browsers or built in support isn't the point. It gets built in iff it so happens that the same function is needed by a large class of users. If lots of users need different variations on the UI or functions for handling RDFa, then selectable addins are right. SW: The relationship to HTML 4.01 XHTML 1.1 DOM2 HTML section in the HTML 5 draft positions HTML 5 as successor to XHTML. Yet, it seems not carry forward capabilities like modularization that depend on namespaces. HT: The XHTML modularization spec uses a trick that you (Dan) and I discovered independently to allow flexibility of prefixes with DTDs. DC: Adequate is in the eye of the beholder. The workgroups seem to disagree. Some people believe that all that happens is that attributes were added, and the availability of a fancy schema to describe them is secondary. <Zakim> jar, you wanted to say metadata ( link header) can do what media types can't JR: Metadata in link headers might take some pressure off the urge to enrich mime types. <DanC_lap> (hmm... I thought Adjourn meant the end of the whole meeting, but nope... "To adjourn means to suspend until a later stated time." -- ) =========== RESUMING AFTER LUNCH ============ <Askok> scribe: Ashok Malhotra SW: There was some discussion of versioning -- should we allocate time for it? HT: We should allocate time for JAR to present. Passing baton to Dave and Henry... HT: We sent msg to ARIA folks. Dave drafted a doc for the tag blog and asked me to look at it. I spent a fair bit of time on the point that XRIs allowed chacraters that are not allowed in URIs. This proved to be a mistake. So, HT has not reviewed Dave blog entry yet. DO: I can wait till tomorrow. HT: I'm going to update the finding using the doc I prepared for Vancouver. <DanC_lap> action-33? <trackbot-ng> ACTION-33 -- Henry S. Thompson to revise URNsAndRegistries-50 finding in response to F2F discussion -- due 2008-03-27 -- OPEN <trackbot-ng> NM: Introduces the subject. Machines and technologies continue to get more powerful, and UI's sooner or later exploit that. Where once images were an enhancement to the Web, they are now commonplace, but video is new. The machines are now capable of doing video, animation etc., and we are seeing the emergence of Flash, Silverlight, etc. These provide rich animation, multimedia, etc. Note that some of the applications are specifically media-focussed: MS persuaded NBC to broadcast the upcoming Olympics to the Web using Silverlight. The motivation, at least in the case of Microsoft, is claimed in part to be as a lever for getting into Web advertising (cf. Ray Ozzie keynote at Mix '08 conference.) Overall, these technologies are pushing the Web towards new types of apps and interfaces. Right now we don't have a standards-based approach with the capabilities of these platforms. This raises significant issues for the Web. More and more content being creating in these proprietary formats. They deliver over HTTP but tend to violate the Rule of Least Power (they tend to deliver as byte codes, not declaratively, though in the case of Silverlight if you look hard enough the proprietary but declarative XAML can usually be found burried in the executable of a Silverlight application). Often with these applications, you cannot copy/paste to clipboard, cannot do view source, etc. So, in those respects, a step backwards for the Web. <DanC_lap> (let the record show that the web-with-images came BEFORE the text-only web; it's just that the text-only web got more widely deployed than the NeXT client in early days) <Zakim> DanC_lap, you wanted to note that the new flash client has p2p support DC: Adaptive video streaming over HTTP. Chop up video into small bits. Move Networks does this. Strategic relationship with MS. <Stuart> DC: Network protocol is going on among proprietary players. SW: ISP are being stressed by video content and want more money. <Stuart> <Zakim> jar, you wanted to be random (threats, technical fixes, systematic problems) JAR: We are acting on behalf of the public. What might we do or say? <DanC_lap> aha... found the flash/p2p item... <DanC_lap> Flash P2P: Now That’s Disruptive Om Malik, Thursday, May 15, 2008 at 9:00 PM PT JAR: The technical part to fix these problems is not hard. <Zakim> noah, you wanted to respond to Jonathan <noah> Quoting from the TAG Charter: "The mission of the TAG is stewardship of the Web architecture. There are three aspects to this mission: 1. to document and build consensus around principles of Web architecture and to interpret and clarify these principles when necessary; 2. to resolve issues involving general Web architecture brought to the TAG; 3. to help coordinate cross-technology architecture developments inside and outside W3C." I think that working with these outside organizations to make their technologies better citizens on the Web is squarely within the scope of #3. NM: We could do a TAG finding that these technologies lack some features. The W3C could line up and discuss how to make these technologies better citizens on the web. DO: I think it's worth doing something. AM: Should we start standards efforts in these directions? NM: If we can involve people who are in a position to deliver to enough of the community to get critical mass. I don't think I see that yet with some of the existing bits and pieces like SVG. I especially don't see it near term when you consider the sophistication of the runtime and tooling technologies that would be needed to create applications at all comparable to what the proprietary offerings are doing. Of course, I'd love to see a standards-based answer: just that we have to be realistic about whether and how it gets done. DC: Disagrees -- standards are low risk. They work best when market has figured out where it is going ... The big deal I see is some combination of authoring tools and teaching people to author. NM: (Note from the editor of the minutes: I'm fairly sure that Tim asked Noah whether widgets were available for Flash and/or Silverlight): Yes, Flash has a rich set of widgets as far as I know. In the case of Silverlight, early releases did not have them, but later ones do, and users can subclass those to create their own. <Zakim> ht, you wanted to point to old blog entry <ht> ??: The basic Web story is dependent on the resource you get whan you do a GET. <DanC_lap> phpht <DanC_lap> <- ??: This is attentuated by new technologies ... whole story is being compromised <timbl> (Noah, By the way, ) ??: Threatens the foundations of the Web model <Zakim> DanC_lap, you wanted to note the least power issue in DVD-next-gen standards: <- <DanC_lap> "Blueray requires an implementation of Java on all Blueray players to run the interactive menus on those systems ... HD-DVD’s menu system is stored in a standardized document format" DC: Least power shows up in DVD standardization. <Zakim> Stuart, you wanted to mention WAF Access Control for Web Resources. <DanC_lap> HT and TBL note that HD-DVD has folded SW: WAF talks abt access control of resources by resources. HT: Ties in with idea that Browsers are not the only user agent. Bits that are on the page have less and less to do with what the user sees. GOOGLE is putting huge effort into image recognition. Page rank algorithm has changed and no longer counts linking nearly as heavily as in the past. Olden days most pages were hand authored. Today most pages are synthesised more or less automatically <ht> I believe that google's ranking of pages today makes relatively little use of the original incoming-links-based 'page rank' number TimBL: Hypertext analogy does not work. <Stuart> What I said was about lack of webarch vocab and coverage of user agent threads of behaviours. WAF document speaks of controlling access to resources by resources, but that the thing actually performing access is a thread of execution in a UA rather than the first party resource from which the page was loaded. <ht> HST remembers to point people to Sean McGrath's XTech closing keynote. . . TimBL: There is a lot of procedural code to do what they want to do. We could encourage them to be more declarative <ht> Lots of food for thought here: NM: URIs are not always as separable as you think. SW: Can talk about concrete next steps. <ht> Particularly page 63 DO: We could talk about why walled gardens are bad. <DanC_lap> (I think "walled gardens" like facebook are an important part of the marketplace) DO: We could do usecases re. Flash and Silverlight. NW: That's a great place to start. <timbl> TBL: There are three different scenarios we should distinguish. One, people use flash to make a web site which is still very much hypertext and would be much more reusable if declarative. Two, flash (etc) is used to make a user interface which is not hyopertext at all. Like Slife, tabulator, or maps. there is a concept of identify or place, so URIs are relevant, and indded SL has sluris. Different URI schemes may be appropriate here. Three, the procedural HT: I would recommend the Sean McGraths keynote cited above. <ht> Text of one conclusion therein (slide 63): "What we might loose? <ht> Hypertext and deep linking as we know it. <ht> Search as we know it (!) <ht> "Emergent properties: as we have come to know <ht> them :mashups, folksonomies etc. <ht> UI simplicity. Grandma won't be able to "surf"" HT: Quotes above from the keynote. <DanC_lap> (phpht. the KC meeting isn't in the schedule on . ) SW: Talks about future mtgs after Sept. HT: f2f East coast early Dec. <ht> NW: Bad idea -- XML Conference. SW: Tim, did you say Southhampton Oct/Nov? <DanC_lap> 23-25 September in Kansas City -- SW: Where would we like to meet after Sept? NW: Proposes January. NM: Concerned about proximity to TAG election. Could mean that first meeting for new members isn't until several months after they join. DC: Goes to whiteboard to try and work out dates. NW: Dec 11,12. In Cambridge Dec 10,12. <scribe> ACTION: Stuart to put up Web poll re. dates for Dec 10-12 f2f [recorded in] <trackbot-ng> Created ACTION-157 - Put up Web poll re. dates for Dec 10-12 f2f [on Stuart Williams - due 2008-05-28]. RESOLUTION: to thank the the hosts here at HP and Amanda at the Williams home, with applause. Break ... resuming at 2:30 <Zakim> timbl, you wanted to suggest different scenarios Resuming after the break DO will discuss changes to the document DO: Added section in Section 2 dealing with failure outcomes. JAR: Need to discuss outcomes where language is accepted but processor does the wrong thing. Discussion of wording on section 2 <dorchard> Applications succesfully process texts of an older version of a language. <dorchard> A newer language is backwards compatible with a newer language if an application written to the newer language successfully process texts of the older language. <dorchard> A newer language is backwards compatible with an older language if an application written to the newer language successfully process texts of the older language <Stuart> "A change in the definition of a language is backward compatible if consumers of the evolved language can correctly process text written for the previous version of the language." Agreement on above wording The last from Stuart. NM: Instead of "We specify ...", "For the incompatible strategies there are a range of possible In second to last para encourage people to think about versioning early. 3rd para of 2.1 -- Apps are written to assume a particular version of the text. Often no version id is in the text. If there is an id in the text, and the app supports just one version, then the version id is a crosscheck. SW: I wish this was all normalized into compatible and incompatible changes to the languages. <DanC_lap> (I don't think we *need* to define compatibilty in terms of application behavior, but I think talking about that way is straightforward ) AM agrees with DanC <Norm> I don't actually think we talk about the compatibility of applications Section 5 -- Text of Ed Note NW: The second version does not scan. NM: Against the first one. <Zakim> Norm, you wanted to observe that intended to be versioned and extensible don't work for me <Zakim> noah, you wanted to explain why dave's option 1 makes me unhappy JAR: Perhaps we need a better version of the second alternative <Norm> Part of the problem here is the scope. I expect most of the readers of this document to be thinking about designing an XML language, not ASCII or XML. <DanC_lap> noodling... "To facilitate independent evolution of producers and consumers, languages in distributed systems should be extensible". <DanC_lap> noodling... "Extensible languages facilitate independent evolution of parties in a distributed system" AM: Norm, this was my comment about the scope of the document JAR: Echoes Dan's sentiment DO and NM can live with "To facilitate independent evolution of producers and consumers, languages in distributed systems should be extensible". DO: Any objections to the above? No objections. <scribe> ACTION: JAR to write up thoughts on versioning and share with the group [recorded in] <trackbot-ng> Created ACTION-158 - Write up thought son versioning and share with the group [on Jonathan Rees - due 2008-05-28]. Vote of Thanks to SW and HP for hosting <scribe> ACTION: David to update compatibility strategies document in response to f2f discussion [recorded in] <trackbot-ng> Created ACTION-159 - Update compatibility strategies document in response to f2f discussion [on David Orchard - due 2008-05-28]. SW: The Bristol F2F has concluded.
http://www.w3.org/2001/tag/2008/05/21-minutes
CC-MAIN-2014-42
en
refinedweb
Details Description Add a way to add builders to the BuilderDirector so that users can make their own builders. From the users list: It's the GOF's Builder pattern First you need to implement this interface: org.trails.builder.Builder It should as simple as: public class InvoiceBuilder implements Builder<Invoice> { public Invoice build(){ Invoice invoice = new Invoice(); // modify your invoice here return invoice; } } Then (this is the part that's missing in Trails 1.2) add your builder to the BuilderDirector. builderDirector.add(org.trails.demo.Invoice.class, new InvoiceBuilder()); or something like this but using spring.
http://jira.codehaus.org/browse/TRAILS-174?page=com.atlassian.streams.streams-jira-plugin:activity-stream-issue-tab
CC-MAIN-2014-42
en
refinedweb
21 November 2011 09:44 [Source: ICIS news] TOKYO (ICIS)--?xml:namespace> The country’s domestic shipments totalled 12,046 tonnes in October 2011, down by 9.3% from October 2010, while its exports fell by 3.3% to 382 tonnes, according to the Japan Polypropylene Film Industry Association. Among the domestic shipments, The country’s production of oriented PP film decreased by 1.6% to 18,906 tonnes in October from the same period a year before, according to the association. Its domestic shipments of oriented PP film fell by 11% to 17,673 tonnes in October year on year, while its exports of the product rose by 15% to 376 tonnes, the data showed.
http://www.icis.com/Articles/2011/11/21/9509949/japan-october-cast-polypropylene-film-production-falls-by.html
CC-MAIN-2014-42
en
refinedweb
UFDC Home | Search all Groups | UF Institutional Repository | The king's coffer Permanent Link: Material Information Title: The king's coffer proprietors of the Spanish Florida treasury, 1565-1702 Physical Description: ix, 198 p. : map ; 24 cm. Language: Creator: Bushnell, Amy Turner Publisher: University Presses of Florida Place of Publication: Gainesville Subjects Subjects / Keywords: Finance, Public -- History -- Spain ( lcsh ) Finance, Public -- History -- Florida ( lcsh ) Finanzas públicas -- Historia -- Florida, EE.UU Genre: bibliography ( marcgt ) non-fiction ( marcgt ) Notes Bibliography: Bibliography: p. 187-191. Statement of Responsibility: Amy Bushnell. General Note: "A University of Florida book." General Note: Includes index. Record Information Source Institution: University of Florida Rights Management: All applicable rights reserved by the source institution and holding location. Resource Identifier: oclc - 07554319 lccn - 81007403 isbn - 0813006902 ocm07554319 Classification: lcc - HJ1242 .T87 1981 ddc - 354.460072/09 ssgn - 7,34 System ID: AA00014878:00001 This item is only available as the following downloads: ( XML ) Full Text The King's Coffer 1566-87 (Paris Is.) Guale (St. Catherines Is.) Augustine 1565 Matanzas Inlet GULF OF MEXICO ...---...... Spanish roads AF* miles Lo 0 50 100 150 SPANISH FLORIDA Composite of Tribal Territories and Place Names before 1702 (after Boyd, Chatelain, and Boniface) The King's Coffer Proprietors of the Spanish Florida Treasury 1565-1702 Amy Bushnell A University of Florida Book UNIVERSITY PRESSES OF FLORIDA Gainesville 1981 HJT /2'2. (acksonville), University of South Florida (Tampa), University of West Florida (Pensacola). Library of Congress Cataloging in Publication Data Bushnell, Amy. The king's coffer. "A University of Florida book." Bibliography: p. Includes index. 1. Finance, Public-Spain-History. 2. Finance, Public-Florida-History. I. Title HJ1242.B87 354.460072'09 81-7403 ISBN 0-8130-0690-2 AACR2 Copyright 1981 by the Board of Regents of the State of Florida Typography by American Graphics Corporation Fort Lauderdale, Florida Printed in USA Contents Preface Q: vii 1. The Florida Provinces and Their Treasury Go 1 2. The Expenses of Position _Q0 15 3. Proprietary Office Q 30 4. Duties and Organization -Q 50 5. The Situado _Q 63 6. The Royal Revenues 2Q~ 75 7. Political Functions of the Royal Officials _Q 101 8. Accounting and Accountability 2~0 118 Conclusion Q 137 Appendixes 2 141 Glossary Q 149 Notes GF 151 Bibliography (z 187 Index 192 V To Catherine Turner and Clyde Bushnell Preface ] HE historiography of Spanish Florida has traditionally concen- L treated on Indians, friars, and soldiers, all dependent on the yearly situado, or crown subsidy. Other Floridians, poor and com- mon, appear to have had no purpose beyond witless opposition to the royal governor This was so unusual for a Spanish colony that I was sure the true situation must have been more complex. In the imperial bureaucracy, ecclesiastical, military, magisterial, fiscal, and judicial functions of government were customarily distributed among a number of officials and tribunals with conflicting jurisdictions.I believe tatresearch would reveal an elite in Florida, encouraged by the crown as a counterweightito t-e governor and that this eitwas pursuingits own rational economic interests. I began by studying a branch of the Menendez clan, the Menendez Marquez family, correlating their ranching activities to the determi- nants of economic expansion in Florida. Governors came and went, but the Menendez Marquezes exercised power in the colony and held office in the treasury from 1565 to 1743. It became apparent that the way to identify and study a Florida elite was prosopographically, through the proprietors of the royal treasury. Such an investigation would serve a second purpose of wider interest and value: revealing how a part of the Spanish imperial bureaucracy operated on the local level. On the small scale of Florida, imperial organization and crown policies would leave the realm of the theoretical to become the problems of real people. vii I do not present the results of my research as a quantitative economic or financial history. The audited accounts necessary to that type of history exist; those for the sixteenth century have been examined with profit by Paul Hoffman and Eugene Lyon, and schol- ars may eventually mine the exhaustive legajos for the seventeenth century. But my purpose has been different: to describe the adminis- trators of one colonial treasury in action within their environment. To keep the project manageable I have limited it chronologically to the Habsburg era, from the time St. Augustine was founded in 1565 to the change of ruling houses, which the city observed in 1702. My main source has been the preserved correspondence between the crown and its governors and treasury officials, whose overlapping responsibilities led to constant wrangling and countless reports, legal actions, and letters. In a sense, every scholarly work is a collaboration between the researcher and his predecessors, yet one feels a special obligation to those who have given their assistance personally, offering insights, transcripts, and bibliographies with a generosity of mind that sees no knowledge as a private enclave. The foremost person on my list is L. N. McAlister; the director of my doctoral program. In the course of our long friendship, his standards of scholarship, writing, and teaching have become the models for my own. He and Michael V Gannon, David L. Niddrie, Marvin L. Entner; Claude C. Sturgill, Cornelis Ch. Goslinga, Eugene Lyon, and Peter Lisca, reading and criticizing the manuscript for this book in various of its drafts, have delivered me from many a blunder. For the new ones I may have fallen into, they are not accountable. Luis R. Arana, of the National Park Service at the Castillo de San Marcos, supplied me with interesting data on the Menendez Marquez family. Overton Ganong, at the Historic Saint Augustine Preserva- tion Board, permitted me to spend a week with the Saint Augustine Historical Society's unfinished transcript of the Cathedral Records of St. Augustine Parish. Ross Morrell of the Division of Archives, History, and Records Management of the Florida Department of State allowed me to see translations and summaries made under the division's auspices. Mark E. Fretwell, editor of the journal of the St. Augustine Historical Society, granted permission to reprint Chapter 2, which appeared as "The Expenses of Hidalguia in Seventeenth- Century St. Augustine," El Escribano 15 (1978):23-36. Paul E. Preface viii Hoffman, JohnJ. TePaske, Charles Arnade, and Samuel Proctor gave me encouragement and advice. Elizabeth Alexander and her staff at the P. K. Yonge Library of Florida History, University of Florida, provided a research home. The care they take of that library's rich resources is something I never cease to appreciate. For financial support I am indebted to the University of Florida, in particular to the Latin American Center the Graduate School, and the Division of Sponsored Research. The United States government supplied three years of NDEA Title VI fellowships in Spanish and Portuguese, and the American Association of University Women awarded me the Meta Glass and Margaret Maltby fellowship. My greatest acknowledgment is to the people I live with. My writer and scholar husband, Peter, has freed my time for writing without telling me how to do it. He, Catherine, and Colleen, listen- ing with good grace to a hundred historical anecdotes, have helped me to believe that what I was doing mattered. Amy Bushnell Cluj-Napoca, Romania July 6, 1980 Preface ix 1 The Florida Provinces and Their Treasury SHE Spanish Habsburgs liked their treasure tangible, in bars of L gold, heavy silver coins, precious stones, chunks of jewel amber and strings of pearls. By their command, each regional branch of the royal treasury of the Indies (hacienda real de Indias), a part of their patrimony, had a heavily guarded room containing a coffer of solid wood, reinforced at the edges, bottom, and corners with iron, strongly barred, and bearing three or four locks the keys to which were held by different persons. The keepers of the keys, who had to meet to open the coffer, were the king's personal servants, with antecedents in the customs houses of Aragon and the conquests of Castile. They were called the royal officials of the treasury. In the Indies, individual treasuries grew out of the fiscal ar- rangements for expeditions of conquest. The crown, as intent on collecting its legitimate revenues as on the propagation of the faith, required every conquistador to take along officers of the exchequer. A factor guarded the king's investment, if any, in weapons and supplies and disposed of tribute in kind. An overseer of barter and trade (veedor de rescates y contrataciones) saw to commercial contacts with the natives and in case of war claimed the king's share of booty. An accountant (contador) recorded income and outgo and was the guardian and interpreter of royal instructions. A treasurer (tesorero) was entrusted with monies and made payments in the king's name. If the expedition resulted in a permanent settlement these officials continued their duties there, protecting the interests of the crown in a new colony.'1 *Notes begin on page 151. 2 The King's Coffer There was a strongly commercial side to these earliest treasuries, supervised after 1503 by the House of Trade (Casa de Contratacion) in Seville. The factor in particular served as the House's representative, watching the movement of merchandise and seeing that the masters of ships enforced the rules against unlicensed passengers and pro- hibited goods. He also engaged in active and resourceful trading, exchanging the royal tributes and taxes paid in kind for necessary supplies. In 1524 the newly created Council of the Indies (Consejo de Indias) assumed the supervision of overseas treasuries, a duty it retained throughout the Habsburg period except for the brief interval of 1557-62. By 1565, founding date of the treasury under study, Spanish presence in the Indies was seventy-three years old. The experimental stage of government was past; institutions of administration had taken more or less permanent shape. A network of royal treasuries existed, some subordinate to viceroyalties or presidencies and others fairly independent. Principal treasuries with proprietary officials were located in the capital cities; subordinate treasuries staffed by lieutenants were at seaports, mining centers, or distant outposts. The factor had become a kind of business manager; administering tributes and native labor. The overseer's original functions were forgotten as the crown turned its attention from commerce and conquest to the dazzling wealth of mines. As a result of the overriding interest in precious metals, overseers were confined to duty at the mints; in places without a mint their office was subsumed under the factor's. And wherever there was little revenue from tribute the factorship was disappearing as well. The treasury of Florida had its beginnings in a maritime enter- prise. This was no haphazard private adventure, but the carefully organized joint action of a corporate family and the crown.2 Pedro Menendez de Avil6s was a tough, corsairing Asturian sea captain known to hold the interests of his clan above the regulations of the House of Trade, but the king could not afford to be particular. In response to French settlement at Fort Caroline, Philip II made a three-year contract (capitulacion) with Menendez, naming him Adelantado, or contractual conqueror; of Florida. At his own cost, essentially, Men6ndez was to drive out Ren6 de Laudonniere and every other interloper from the land between Terra Nova (New- foundland) and the Ancones (St. Joseph's Bay) on the Gulf of The Florida Provinces and Treasury 3 Mexico. Before three years were out he was to establish.two or three fortified settlements and populate them.3 He did all of this, but as the French crisis escalated, the king had to come to his support.4 During the three years of the contract Menendez and his supporters invested over 75,000 ducats; the crown, more than 208,000 ducats, counting Florida's share of the 1566 Sancho de Archiniega reinforcements of 1,500 men and seventeen ships.5 Despite the heavy royal interests, the new colony was governed like a patrimonial estate. The adelantado nominated his own men to treasury office: his kinsman Esteban de las Alas as treasurer, his nephew Pedro Menendez Marquez as accountant, and a future son- in-law, Hernando de Miranda, as factor-overseer This was open and honorable patronage, as Menendez himself said: "Now, as never before, I have need that my kinfolk and friends follow me, trustwor- thy people who love me and respect me with all love and loyalty."6 It was also an effort to settle the land, as he once explained: They are people of confidence and high standing who have served your Majesty many years in my company, and all are married to noblewomen. Out of covetousness for the offices [for which they are proposed], and out of love for me, it could be that they might bring their wives and households. Because of these and of others who would come with their wives, it is a fine beginning for the population of the provinces of Florida with persons of noble blood.7 Since there were as yet neither products of the land to tax nor royal revenues to administer; these nominal officials of the king's coffer continued about their business elsewhere: Las Alas governing the settlement of Santa Elena (on present-day Parris Island), Miranda making voyages of exploration, and Menendez Marquez governing . for his uncle in Cuba. With most of the rest of the clan they also served in the new Indies Fleet (Armada Real de la Guardia de las Costas e Islas y Carrera de las Indias) that Pedro Menendez built in 1568, brought to the Caribbean, and commanded until 1573. From 1570 to 1574 the fiscal officers of that armada were the acting ir. als forlorida, wooosel ysuperyii h is records and nm Lyj li s.th .rio gisons They would not consent to live there.8 Meanwhile, the king issued Miranda and The King's Coffer Menendez Marquez their long-awaited titles. Las Alas, under inves- tigation for withdrawing most of the garrison at Santa Elena and taking it to Spain, was passed over in favor of a young nephew of the adelantado's called variously Pedro Menendez the Younger and the Cross-Eyed or One-Eyed (El Tuerto). Of the three royal appointees Pedro was the only one to take up residence.9 The others continued to name substitutes. Because it established claim by occupation to North America from the Chesapeake Bay southward, Florida was an outpost of empire to be maintained however unprofitable. Any one of its un- explored waterways might be the passage to the East. With this in mind, Philip II had renewed the Menendez contract when it expired, letting the subsidy for the Indies fleet cover the wages for 150 men of the garrisons. Three years later; in 1570, the king changed this provi- sion to give Florida a subsidy of its own.10 Despite this underwriting of the colony the adelantado remained to all purposes its lord proprie- tor. When he died in 1574, acting governorship shifted from his son-in-law Diego de Velasco to the already-mentioned Hernando de Miranda, husband of Menendez's one surviving legitimate heir. In 1576 the Cusabo Indian uprising resulted in the massacre of Pedro Menendez the Younger and two other treasury officials. Governor Miranda abandoned the fort at Santa Elena and returned to Spain to face charges of desertion.11 Once more the king came to the rescue, doubling the number of soldiers he would support.12 Florida began the slow shift from a proprietary colony to a royal one. The only person considered capable of holding the provinces against heretic and heathen alike was Admiral Pedro Menendez Marquez, awaiting sentence for misdeeds as lieutenant-governor of Cuba. The Council granted him both a reprieve and the acting governorship of Florida, and he sailed for St. Augustine. Along with the three new appointees to the treasury, he had permission to pay himself half his salary from the yearly subsidy, or situado,13 The provinces that he pacified did not remain quiet for long. In 1580 a French galleass entered the St. Johns estuary for trade and informa- tion. Menendez Marquez took two frigates to the scene and defeated the Frenchmen in the naval battle of San Mateo. Four years later the Potano Indians of the interior staged an uprising and were driven from their homes.14 Sir Francis Drake stopped by St. Augustine long enough to burn its two forts (the fifth and the just-finished sixth) and 4 The Florida Provinces and Treasury 5 the town, which is why subsequent financial reports gave no figures earlier than 1586, "the year the books were burned." To consolidate their forces the Spanish again abandoned Santa Elena, and this time did not go back. 15The twelve Franciscans who arrived in 1587 ready to commence their apostolic mission found Spanish settlement contracted to a single outpost.16 In these first uncertain years the presidios (garrison outposts) were little more than segments of an anchored armada, supported but not rigorously supervised by the crown. The sixteenth-century gover- nors, who could be called the "Asturian Dynasty," filled the little colony with family intrigues and profiteering. All the officers, treas- ury officials included, were captains of sea and war who could build a fort, command a warship, smuggle a contraband cargo, or keep a double set of books with equal composure. Juan de Posada, for instance, was an expert navigator who sometimes doubled as lieuten- ant governor for his brother-in-law Pedro Menendez Marquez. He once calculated for the crown that a good-sized galley, a 100-man fort, or four frigates would all cost the same per year: 16,000 ducats. Posada was bringing back a title of treasurer for himself in 1592 when his ship sank and he drowned off the Florida coast, which he had once called easy sailing.17 As the orders instituting the situado in 1570 explicitly stated that troops in Florida were to be paid and rationed the same as those in the Menendez fleet or the Havana garrison, the first royal officials mod- eled themselves after their counterparts in the king's armadas and garrisons rather than his civilian exchequers.18 Treasurer Juan de Cevadilla and Accountant Lazaro Saez de Mercado, taking office in 1580, thought that this system allowed the governor undue power. It was not appropriate to transfer all the fiscal practices from the ar- madas, they said, when "the exchequer can be looked after better on land than on sea."19 Auditor Pedro Redondo Villegas, who came in 1600, refused to accept any armada precedent without a cedula (writ- ten royal order) applying it to Florida.20 Thereafter the officials compared their treasury with wholly land-based ones and demanded equal treatment with the bureaucrats of Peru, Yucatan, Honduras, and the Philippines. As payroll and supply officers for a garrison, however they continued to envy the Havana presidio's royal slaves and the new stone fort built there between 1558 and 1578.21 During the course of the seventeenth century, the treasury at St. The King's Coffer Augustine built up precedents that achieved the practical force of law. Cedulas from the crown were respectfully received and recorded, but not necessarily implemented. In this the officials followed the ancient principle of"I obey but do not execute" ("obedezco pero no cumplo"), a form of particularism expounded for the adelantado in 1567 by his friend Francisco de Toral, Bishop of Merida: For every day there will be new things and transactions which will bring necessity for new provisions and new remedies. For the General Laws of the Indies cannot cease having mild interpretations, the languages and lands being different, inas- much as in one land and people they usually ignore things in conformity to the times. Thus it will be suitable for your lordship to do things there [in Florida] of which experience and the condition of those natives have given you under- standing.22 The Florida creoles, born in the New World of Spanish parents, referred to long custom to justify their actions, and this argument was taken seriously.23 The Franciscan commissary general for the Indies, writing the year after publication of the great Recopilacion de leyes de los Reynos de las Indias, observed that some practices in the Indies were not amenable to change after so long a time.24 Perhaps it was only right that there should be flexibility in the application of laws. Florida was an exception to the usual colony. It had been founded for reasons of dynastic prestige, and for those reasons it was maintained, at a cost out of all proportion to benefits received. The colony did not mature beyond its initial status of captaincy general. It was a perennial military frontier that was never under the Habsburgs, absorbed by another administrative unit. The governors were military men with permanent ranks of admiral, captain, sergeant major or colonel, who took orders from the Coun- cil and the Junta de Guerra (Council of War) alone. It was a dubious distinction, for wartime coordination with New Spain or Havana depended upon mutual goodwill rather than any sense of obligation. The French, when not at war with the Spanish, made more reliable allies.25 In his civil role the governor answered neither to the audiencia (high court and governing body) in Santo Domingo nor the one in 6 The Florida Provinces and Treasury 7 Mexico City, and he took orders from no viceroy. In the seventeenth century the crown moved with majestic deliberation to establish the authority, first of the Audiencia of Santo Domingo, then that of Mexico City, over civil and criminal appeals; responsibility for treasury audits was handed back and forth between the Mexico City Tribunal of Accounts and the royal auditor in Havana. These meas- ures did not affect the Florida governorship, which remained inde- pendent. As Governor Marques Cabrera explained more than once, no audiencia cared to be responsible for poor frontier provinces. Distances were great, navigation was perilous, and ministers were unwilling to make the journey. 26 If mines of silver had been found within its borders, New Spain would have annexed Florida without delay. Not everyone was satisfied with a separate status. The friars thought that prices would be lower if the governor were subject to some viceroy or audiencia (or were at least a Christian). And royal officials grumbled that there was little point in the king's having appointed them to a republic of poor soldiers, in which the governor disregarded his treasury officials and answered to no audiencia.27 For their own reasons, the accountant, treasurer and factor often made the governor look more autocratic than he was. Florida may not have been a popular democracy, but neither was it a dictatorship. There were within th community carefully drawn class distinctions based on inequalities of state din_ e n the officials ofthe treas were gentlemen, expecting and receiving the honQrs.ueAo their class. They were not mere quartermasters on the governor's staff As proprietors of treasury office and judges of the exchequer they were his quasi-peers, and as titled councilmen of the one Spanish city in Florida they were his civil advisory council, just as the sergeant major and captains were his council of war and the priests and friars his ecclesiastical counselors. The governor who ignored the advice of these men of experience was spoken of disparagingly as "carried off by his own opinions." The royal treasury of St. Augustine differed from the ones elsewhere mainly in that it had fewer revenues. For various reasons, t)e..com^L Florida never approached that op s, or productive region. European settlement there, however earlyby North American standards, had gotten a late start in Sanish terms. In the rest of the Indies, debate had been going on for years about Indian rationality, just wars and slavery, forced conversions, en- 8 The King's Coffer comiendas (allotments of tribute or service), and the alienationof native lands-and while theologians and lawyers argued, soldiersand settlers exploited. By the time the florida conquest began these questions were more or less settled. Although not advanced enough to be subject to the Inquisition, the Indian had beendetermined a rational being. He could not be held in servitude or have his lands taken. It was forbidden to enter his territory with arms and banners or to resettle him anywhere against his will.28 Florida was to be conquered through the Gospel-not the fastest way. As five Apalachicola chiefs once courteously told Governor Marques Ca- brera, if God ever wished them and their vassals to become Chris- tians they would let the governor know.29 Pacifying the natives by trade was not effective either; for the Spanish could maintain no monopoly. For over forty years the French continued to trade in Florida, and the Indians preferred them. In a single summer fifteen French ships were sighted off the coast of Guale, coming into the Savannah River for pelts and sassafras.30 Dutch and English interlopers bartered with the adamantly inde- pendent Indians ofAis, Jeaga, and the Carlos confederacy to the south for amber and the salvaged goods of shipwrecks.31 The Spanish crown no longer encouraged Indian trade in the late sixteenth century anyhow; it barely permitted it. St. Augustine was a coast guard station, a military base, and a mission center, not a commercial colony, and the government saw no reason to supply sailors, soldiers, and friars with trade goods. When Governor Mendez de Canzo made peace with the Guale Indians in 1600 the treasurer observed for the royal benefit that it was to be hoped the governor was acting out of a zeal for souls and His Majesty's service and was not influenced by the good price for sassafras in Seville.32 Pious disclaimers aside, Florida's colonists and governors did not agree with His Majesty's restrictions on Indian trade. The natives had many things that Spaniards wanted: sassafras, amber, deer and buf- falo skins, nut oil, bear grease, tobacco, canoes, storage containers, and, most of all, food. And the Indians soon wanted what the Spanish had: weapons, construction and cultivation tools, nails, cloth, blan- kets, bells, glass beads, church ornaments, and rum. The problem was not to create a market but to supply it. When the amber-trading Indians demanded iron tools Governor Rebolledo made them from 60 quintals (6,000 pounds) of the presidio's pig iron, plus melted The Florida Provinces and Treasury down cannons and arquebuses.33 The 1,500-ducat fund that the king intended for gifts to allied chiefs, the governors sometimes diverted to buy trade goods. Soldiers, having little else, exchanged their firearms; the Cherokees living on the Upper Tennessee River in 1673 owned sixty Spanish flintlocks. Without royal approval, however, there was a limit to the amount of trading that could be done, and the crown favored the regular commerce of the fleets and New Spain. Throughout the Habsburg period Florida was licensed to send no more than two frigates a year to Seville or the Canaries, and a bare " 2,000 to 3,000 ducats' worth of pelts.34 The English who colonized in NortA ericasufferednosuch handicaps. As early as 1678, four ships at a time could be seen in the Charles Town harbor; at St. Augustine the colonists would have been happy to receive one a year from Spain.35 Later on, when the English wanted in trade Hispanic Indian slaves or scalps, they had the wherewithal to pay for them. For a single scalp brought to the Carolina governor one warrior was supposedly given clothing piled to reach his shoulders, a flintlock with all the ammunition he wanted, and a barrel of rum.36 The Indians of the Southeast shifted to the English side with alacrity. The bishop of Tricale reported in 1736 that natives who had been baptized Catholic put their hands to their heads saying, "Go away, water! I am no Christian."37 Protected Indians, limited exports, and a shortage of trade goods were only three of the factors hampering normal economic growth in Florida. Another was the continuing silver rush to New Spain and Peru. St. Augustine was not the place of choice for,a Spanish immi- grant. Soldiers and even friars assigned to Florida had to be guarded in the ports en route to keep them from jumping ship. In this sense the other North Atlantic colonies were again more fortunate. There were no better places for Englishmen, Scots, and Germans to go. Ideally the presidio of St. Augustine should have been supplied through the free competition of merchants bringing their shiploads of goods to exchange for the money to be found in the king's coffer and the soldiers' wallets.38 It did not work out this way for several reasons. Under the Habsburgs the situado for Florida soldiers and friars never rose above 51,000 ducats or 70,000 pesos a year; payable from 1592 to 1702 from the Mexico City treasury.39 But supporting presidio in Florida was not one of that treasury's priorities. The Mexico City officials paid the situado, lrregurarly iand piecemeal. For 9 The King's Coffer a merchant, selling to the Florida presidio was equivalent to making a badly secured, long-term loan. The king, whose private interests might conflict with the national or general interest, once had all the Caribbean situados sequestered and carried to him in exchange for promissory notes.40 Sometimes an entire situado would be mort- gaged before it arrived, with creditors waiting on the Havana docks. In order to be supplied at all, St. Augustine was forced to take whatever its creditors would release: shoddy, unsuitable fabrics and moldy flour. The presidio was chronically in debt, and so was everyone dependent on it. Soldiers seldom saw money; Indians almost never used it.41 St. Augustine was a poor and isolated market with little to export. Its seaways were beset by corsairs in summer and storms in winter. No merchant could risk one of his ships on that dangerous journey without an advance contract guaranteeing the sale of his cargo at a profit of 100 to 200 percent.42 Citizens sometimes tried to circumvent the high cost of imports by going in together to order a quantity of goods, making sure that anyone they entrusted with money had local ties to guarantee his return. But the price of bringing goods to Florida was still prohibitive. A single frigate trip to the San Juan de Uhia harbor and back cost 400 ducats.43 It was of little help to be located along the return route of the Fleet of the Indies. Once a year the heavily laden galleons sailed northward in convoy just out of sight of land, riding the Gulf Stream up the Bahama Channel to Cape Hat- teras to catch the trade winds back to Spain, but the St. Augustine harbor, with its shallow bar which would pass only flat-bottomed or medium portage vessels, was not a place where these great 500- to 1,500-ton ships could anchor nor would they have interrupted their progress to stop there. When the Floridians wished to make contact with a vessel in the fleet they had to send a boat to await it at Cape Canaveral, a haunt of pirates.44 By Spanish mercantilist rules nothing could be brought into a Spanish port except in a licensed ship with prior registration. At times the presidio was so short on military and naval supplies that the governor and officials waived the regulations and purchased artillery and ammunition, cables and canvas off a ship hailed on the open seas; or a foreign merchantman entered the harbor flying a signal of distress, news bearing, or prisoner exchange, and sold goods either openly or under cover45 10 The Florida Provinces and Treasury Except for trade goods, metals, and military accoutrements, which always had to be imported, St. Augustine with its hinterland was surprisingly self-sufficient.46 The timber; stone, and mortar for construction were available in the vicinity; nails, hinges, and other hardware were forged in the town. Boats were built in the rivers and inlets. There was a gristmill, a tannery, and a slaughterhouse. Fruits, vegetables, and flowers grew in the gardens; pigs and chickens ran in the streets. Although it was a while before cattle ranching got started, by the late seventeenth century beef was cheap andplentiful.47 The swamps and savannahs provided edible roots, wild fruit, and game; lakes and rivers were full offish; oysters grew huge in the arms of the sea. Indians paddling canoes or carrying baskets brought their pro- duce; but especially they brought maizec Maize, not wheat~ as the staff of life in Florida. The poor; the slaves, the convicts and Indians all got their calories from it. When the maize crops were hurt, St. Augustine was hungry. But the problem was not so much supply as distribution. After the Indians were reduced to missions the friars had them plant an extra crop yearly as insurance against famine and for the support of the poor and beautifi- cation of the sanctuaries. The missionaries were highly incensed to have this surplus claimed for the use of the presidio, yet to guarantee an adequate supply the governor was ready to take desperate meas- ures: raid the church granaries, even plant maize within musket range of the fort, providing cover to potential enemies. Each province presented its problems. The grain from Guale was brought down in presidio vessels. That from Timucua was carried 15 to 30 leagues on men's backs for lack of mules or packhorses, and it was easier to bring in relays of repartimiento (labor service) Indians and raise it near the city. The inhabitants of Apalache had a ready market for maize in Havana, and the governor had to station a deputy in San Luis, their capital, to collect it and transmit it 2,000 miles around the peninsula to St. Augustine. To read the hundreds of letters bemoaning the tardiness or inade- quacy of the situado, one would suppose that the presidio was always about to starve. This was largely rhetoric, an understandable effort by 11 The King's Coffer the governors and royal officials to persuade His Majesty to take the support of his soldiers seriously. Florida was not so much dependent upon the subsidy as independent because the subsidy was unreliable. Supply ships were sometimes years apart, and not even a hardened Spaniard could go for years without eating.48 He might miss his olive oil, wheat flour; wine, sugar; and chocolate, but there was some sort of food to be had unless the town was suffering famine or siege and had exhausted its reserves. Such exigencies happened. After the attacks of buccaneers caused the partial abandonment of Guale Province in the 1680s, the maize source there dried up, while refugees increased the number of mouths in St. Augustine. Without pro- visions the militia and Indian auxiliaries could not be called out, nor repartimiento labor be brought in to work on the fortifications.49 Food reserves were a military necessity, and the governor and cabildo (municipal council) had emergency powers to requisition hoards and freeze prices.50 Toaggravate the economic problems,. the colony was almost never at peace. The peninsula could not be properly explored; as late as 1599 there was uncertainty over whether or not it was an island. Throughout the Habsburg era there were two fluctuating frontiers with enemies on the other sides, for; converted or not, Florida Indians saw no reason to halt their seasonal warfare. From the south, Ais, Jeaga, Tocobaga, Pocoy, and Carlos warriors raided the Hispanicized Indians; Chisca, Chichimeco, Chacato, Tasquique, and Apalachicolo peoples were some of the enemies to the north and northwest. The coasts were no safer In 1563, trading and raiding corsairs conducting an undeclared war were driven by Spanish patrols from the Antilles to the periphery of the Caribbean: the Main, the Isthmus, and Florida. The French crisis of1565-68 was followed by the Anglo-Spanish War of 1585-1603 and the Dutch War of 1621-48.51 Meanwhile, Floridians watched with foreboding the rival settlements of Virginia, Barbados, and, after 1655, Jamaica. When Charles Town was founded in 1670 they pleaded for help to drive off the colonists before there were too many, but the crown's hands were tied by a peace treaty, and its reaction-the building of a fort, the Castillo de San Marcos in St. Augustine-was essentially defensive. During the sixteenth and seventeenth centuries Florida was afflicted by a severe demographic slump which reached nadir in 1706. 12 The Florida Provinces and Treasury The first European slavers probably reached the peninsula with their pathogens and iron chains in the 1490s.52 As there is little basis for estimating the population at contact, there is no way of knowing what the initial demographic loss may have been, nor its dislocating effects.53 At the end of the sixteenth century Bartolome de Argiielles, who had been in Florida twenty-four years and traversed it from Santa Elena to the Keys, said it was his impression that there were relatively few natives.54 -- The first epidemic reported among mission Indians was in 1570; the next, in1591. Th-e "pests and contagions," lasting from 1613 to 1617, to the best of the friars' knowledge killed half the Indians in Florida.55 An incoming governor marveled in 1630 at the way "the Indians... die here as elsewhere."56 Six years later the friars reported that the natives between St. Augustine and Guale were almost totally gone. The Franciscans obtained gubernatorial consent to enter the province of Apalache partly because the depopulating of nearer provinces had depleted the Spanish food and labor supply. When Interim Governor Francisco Menendez Marquez suppressed a rebel- lion of the Apalaches in 1647 and condemned loyal and rebel alike to the labor repartimiento, he explained that the other provinces of Christians were almost used up.57 The worst years were yet to come. Between 1649 and 1659 three epidemi-cs descended on Florida: the first was either typhus or yellow fever the second was smallpox, and the last, the measles. Governor Aranguiz y Cotes said that in the seven months after he took posses- sion in February of 1659, 10,000 Indians died. These were also the years of famine and of the Great Rebellion of the Timucuans, which left their remnants scattered and starving.58 From 1672 to 1674 an unidentified pestilence reduced the population even further. There were so few Indians in Central Florida that the Spanish gave land in Timucua Province to anyone who would introduce cattle. As native town structure broke down under the barrage of disasters, Indians began detaching themselves from their families and parishes to work as day labor in construction and contract labor on the ranches, or as independent suppliers of some commodity to the Spanish: charcoal, wild game, baskets, or pots. Efforts to make this migratory labor force return home to their family, church, and repartimiento respon- sibilities were largely ineffective. In 1675 a governor's census showed 13 The King's Coffer only 10,766 Indians under Spanish obedience in all Florida, and four-fifths of them were in Apalache, 200 miles from St. Augustine across a virtually empty peninsula.59 Some people were managing to profit by the situation. The Florencia family had led in the opening up and settling of Apalache Province and were the ones who had started trade from there to Havana. Descended from a Portuguese pilot who came to Florida in 1591, for three generations they supplied most of Apalache's deputy governors and many of its priests, treating the province as a private fief. A look at the names of provincial circuit judges and inspectors (visitadores) shows that these ingenious Floridians even cornered the market on investigating themselves.60 Under their instigation, Apalache was considering breaking off administratively from the capital of Florida. The Florencias, the friars, and the Hispanic Indians all preferred to deal with Havana, only a week's sail from them and offering more opportunity.61 Whether this would in time have hap- pened, and what would then have become of St. Augustine, is a moot point. Colonel James Moore of Carolina and his Creek allies took advantage of the outbreak of Queen Anne's War in 1702 to mount slave raids against the Indians of Florida. By 1706 the raids had reduced the native provincial population to a miserable few hundred living beneath the guns of the fort.62 SIn thefacePofthe manyhindrances to the settlement and effective use of Florida-the crown's protective attitude toward natives, the obstacles to trade, the shortage of currency, the problems of food distribution, the slow Spanish increase in population and the rapid native decrease, and the exhausting wars-it was a remarkable achievement for the Spanish to have remained there at all!lThe way they did so, and the share of the royal officials of the treasury in the story, is a demonstration of human ingenuity and idealism, tenacity, and sheer greed. 14 2 The Expenses of Position U LORIDA, with its frequent wars, small Spanish population, and J relatively few exports, might not seem a likely place for the maintenance of a gentlemanly class, known to Spaniards as hidalgos. But wealth and position are relative, and people differentiate them- selves wherever there are disparities of background or belongings to be envied or flaunted. In the small society of St. Augustine, where everyone's business was everyone else's concern, social presump- tiveness was regarded severely.1 One of the grievances against Gov- ernor M6ndez de Canzo was that he had named one of his relatives, a common retail merchant, captain of a company and let him appoint as ensign a lad "of small fortune" who had been working in the tannery.2 From the list of vecinos (householders) asked to respond with voluntary gifts for public works or defense construction we can identify the principal persons in town, for avoluntarygift was the hidalgo's substitute for personal taxation, to which he could not submit without marking himself a commoner When Governor Hita Salazar needed to put the castillo into defensible order he gave the first 200 pesos himself, to put the others under obligation, and then collected 1,600 pesos from the royal officials of the treasury, the sergeant major, the captains, other officers and those receiving bonuses, and some private individuals who raised cattle.3 Whether transferred to Florida from the bureaucracy elsewhere or coming into office via inheritance, the royal official was presumed to be an hidalgQ or he would never have been appointed. This meant, 15 The King's Coffer technically, that he was of legitimate birth, had never been a shop- keeper or tradesman, had not refused any challenge to his honor and could demonstrate two generations of descent from hijos de algo ("sons of something") untainted by Moorish or Jewish blood and uncondemned by the Inquisition. The advantages of being an hidalgo someone addressed as don in a time when that title had significance were unquestioned. There were, however concomitant responsibilities and expenses. A gentleman was expected to "live decently," maintaining the dignity of his estate whether or not his means were adequate. Openhanded- ness and lavish display were not the idiosyncracies of individuals but the realities of class, the characteristics that kept everyone with pretensions to hidalguia searching for sources of income. The personal quality that St. Augustine appreciated most ear- nestly in a gentleman was magnanimity. The character references written for a governor at the end of his term emphasized alms: the warm shawls given to widows, the delicacies to the sick, and the baskets of maize and meat distributed by the benefactor's slaves during a famine. They also stressed his vows fulfilled to the saints: silver diadems, fine altar cloths, and new shrines.4 When local con- fraternities elected yearly officers, the governor and treasury officials were in demand, for they brought to the brotherhood gifts and favors besides the honor of their presence. The royal officials consistently turned over a third of their earnings from tavern inspections to the Confraternity of the Most Holy Sacrament, and the treasurer gave it his payroll perquisites.5 Alms and offerings were minor expenses compared to the cost of keeping up a household. The royal officials were admonished to be married; the crown wanted the Indies populated by citizens in good standing, not mannerless half-breeds, and a man with a family had given as it were hostages for his behavior.6 Regular marriage to someone of one's own class was, however expensive. According to one hard-pressed official, "The pay-of a soldier will not do for the position of quality demanded of a treasurer" Another argued that he needed a raise because his wife was "someone of quality on account of her parents."7 A woman of quality in one's house had to be suitably gowned. In 1607 six yards of colored taffeta cost almost 9 ducats, the equivalent of 96 wage-days for a repartimiento Indian. A velvet gown would have 16 The Expenses of Position cost 48 ducats.8 A lady wore jewels: ornaments on her ears and fingers, and necklaces. In 1659 a single strand of pearls was valued at 130 pesos. Between wearing the jewelry was kept in a locked case inside the royal coffer, which served the community as a safety deposit.9 A lady had female companions near her own rank-usually dependent kinswomen, although Governor Menendez Marquez introduced two young chieftainesses to be raised in his house and to attend his wife, dofia Maria.10 A gentlewoman maintained her own private charities; Catalina Menendez Marquez, sister of one gov- ernor, niece of another, widow of two treasury officials and mother-in-law of a third, kept convalescent, indigent soldiers in her home.11 The wives and daughters of hidalgos could become imperi- ous: Juana Caterina of the important Florencia family, married to the deputy governor of Apalache Province, behaved more like a feudal chatelaine than the wife of a captain. She required one native to bring her a pitcher of milk daily, obliged the town of San Luis to furnish six women to grind maize at her husband's gristmill, and slapped a chief in the face one Friday when he neglected to bring her fish.12 A gentlewoman's dowry was not intended for household ex- penses but was supposed to be preserved and passed on to her chil- dren. Debts a husband had incurred before marriage could not be collected from his wife's property nor was he liable for debts inher- ited from her family. A gentlewoman kept her own name as a matter of course, and if her family was of better quality than her husband's it was her name that the children took.13 Families were large-Lseena r eight persons, it was estimated around 1706.14 The four generations of the Menendez Marquez treasury officials are one example. In the first generation fourteen children were recorded in the Parish Regis- ter (all but two of them legitimate). In the second generation there were ten; in the third, nine; and the fourth generation numbered six. The number of recognized, baptized children in the direct line of this family averaged nearest to ten.15 All of the hidalgo's progeny, legitimate or illegitimate, had to be provided for. The daughters, called "the adornments of the house," had to have dowries if they were not to spend their lives as someone's servants. A common bequest was a sum of money so an im- poverished gentlewoman could marry or take the veil. Pedro Menendez de Aviles for this purpose endowed five of his and his 17 The King's Coffer wife's kinswomen with 200 to 300 ducats each.16 The usual dowryjn St. Augustine was a house for the bride, but it could also be a ranch,a. soldier's plaza(man-space or man-pay) in the garrison, or even a royal office. Juan Menendez Marquez became treasurer when he was betrothed to the daughter of the former treasurer Nicolas Ponce de Le6n II became sergeant major by marrying the illegitimate daughter of Sergeant Major Eugenio de Espinosa.17 If a man died before all his daughters had been provided for, that duty fell upon their eldest brotlier; even if a friar. Girls were taught their prayers, manners, and accomplishments, and they learned homemaking at their mother's side; they seldom received formal schooling. When two young ladies from St. Augustine were sent to be educated at the convent of Santa Clara in Havana, the question of the habit they were to wear was so unprecedented that it was referred to the Franciscan commissary general for the Indies.18 The plan for the sons of the family was to make them self- supporting. Once a boy had finished the grammar school taught by one of the friars he had two main career options: the church or the garrison. To become a friar he entered a novitiate at the seminary in St. Augustine, if there was one in operation-otherwise, in Santiago de Cuba or Havana. He was then given his orders and joined the missionary friars in the Custody or Province of Santa Elena, em- bracing both Cuba and Florida.19 If he was meant for a soldier his father purchased or earned for him a minor's plaza, held inactive from the time he got it at age nine or ten until he started guard duty around fifteen or regular service two years later Whether as friar or as soldier the young man was paid a meager 115 ducats a year including rations-enough for him to live on modestly but not to support dependents. Even so, there were governors who felt that no one born in Florida should be on the government payroll, either as a religious or as a fighting man.20 Advancement cost money, whether in the church, the military, or the bureaucracy. A treasury official generally trained his eldest son to succeed him and bought afutura (right of succession) if he could.21 The patronage of lesser offices was an important right and, if the family possessed any, every effort was made to keep them. When times were peaceful, markets favorable, and other conditions fell into line, an hidalgo might set up his son as a rancher or a merchant in the import-export business, but many sons of hidalgos found none of 18 The Expenses of Position these careers open to them. Sometimes, they were deposited with relatives in New Spain or Cuba; they left Florida of their own accord to seek their fortunes; or they remained to form the shabby entourage of more fortunate kinsmen, serving as pages, overseers, skippers, or chaplains.22 Sixteenth-century property inventories studied by Lyon show that the contrast between social classes around 1580 appeared in costly furnishings and apparel rather than houses. It made sense, in a city subject to piracy and natural disasters, to keep one's wealth portable, in the form of personal, not real, property. The goods of an hidalgo included silver plate, carpets, tapestries and leather wall hangings, linens and bedding, rich clothing, and writing desks. The value of such belongings could be considerable. Governor Trevifio Guillamas once borrowed 1,000 pesos against the silver service of his house. 23 During the seventeenth century, houses gradually became a more important form of property. Construction costs were modest. Tools and nails, at five to the real (one-eighth of a peso), were often the single largest expenditure.24 At mid-century it cost about 160 pesos to build a plain wattle-and-daub hut; a dwelling of rough planks and palmetto thatch rented for 3 pesos a month. Indian quarrymen, loggers, and carpenters were paid in set amounts of trade goods originally worth one real per day. When the price of these items went up toward the end of the seventeenth century, the cost of labor rose proportionately but was never high.25 Regidores set the prices on lots and it is not certain whether these prices rose, fell, or remained stable. Shipmaster and Deputy Governor Claudio de Florencia's empty lot sold for 100 pesos after he and his wife were murdered in the Apalache rebellion. Captain Antonio de Argiielles was quoted a price of 40 pesos on what may have been a smaller lot sometime before 1680, when the lot on which the treasurer's official residence stood was subdivided.26 The value of better homes in St. Augustine rose faster than the cost of living during the seventeenth century, perhaps indicating houses of larger size or improved quality. In 1604 the finest house in town was appraised at 1,500 ducats and sold to the crown for 1,000 as the governor's residence.27 The governor's mansion that the English destroyed in 1702 was afterward appraised at 8,000 pesos (5,818 ducats). In that siege all but twenty or thirty of the cheaper houses 19 The King's Coffer were damaged irreparably; 149 property owners reported losses totaling 62,570 pesos. The least valuable houses ran 50 to 100 pesos; the average ones, 200 to 500 pesos. Arnade mentions eight families owning property worth over 1,000 pesos, with the most valuable private house appraised at 6,000.28 Royal officials were entitled to live in the government houses, but in St. Augustine they did not always choose to. After the customs- counting house and the royal warehouse-treasury were complete, and even after the treasury officials obtained permission to build official residences at royal expense, they continued to have other houses. The Parish Register records one wedding at the home of Accountant Thomas Menendez Marquez and another in the home of his wife. Their son Francisco, who inherited the position of account- ant, owned a two-story shingled house which sold for 1,500 pesos after he died.29 In St. Augustine, houses were set some distance apart and had surrounding gardens. The grounds were walled to keep wandering animals away from the well, the clay outdoor oven, and the fruit trees, vines, and vegetables.30 Near town on the commons, the hidalgo's family like all the rest was allocated land for growing maize, and after the six-month season his cows browsed with those of commoners on the dry stalks. In 1600 the eighty families in town were said to own from two to ten head of cattle apiece. Some distance out of town, maybe two leagues, was the hidalgo's farm, where he and his household might spend part of the year consuming the produce on the spot.31 A gentleman was surrounded by dependents. The female rela- tives who attended his wife had their male counterparts in the numerous down-at-the-heels nephews and cousins who accom- panied his travels, lived in his house, and importuned him for a hand up the social ladder.32 As if these were not enough, through the institution ofcompadrazgo he placed himself within a stratified net- work of ritual kin. On the lowest level this was a form of social structuring. Free blacks or mulattoes were supposed to be attached to a patron and not to wander about the district answering to no one. Indians, too, accepted the protection of an important Spaniard, taking his surname at baptism and accepting his gifts. The progress of conquest and conversion could conceivably be traced in the surnames 20 The Expenses of Position of chiefs. Governor Ybarra once threatened to punish certain of them "with no intercession of godfathers.""33 The larger the group the hidalgo was responsible for the greater his power base. He himself had his own more important patron. Between people of similar social background, compadrazgo was a sign of friendship, business partnership, and a certain amount of complicity, since it was not good form to testify against a compare. TreasurerJuan Menendez Marquez was connected to many important families in town, including that of the Portuguese merchant Juan Nifiez de los Rios. Although it was illegal to relate oneself to gubernatorial or other fiscal authorities, Juan was also a compare of Governor Mendez de Canzo and three successive factors.34 Servants filled the intermediate place in the hidalgo's household between poor relatives and slaves. Sometimes they had entered service in order to get transportation to America, which was why the gentleman coming from Spain could bring only a few. One manser- vant coming to Florida to the governor's house had to promise to remain there eight years.35 The life of a servant was far from com- fortable, sleeping wrapped in his cloak at the door of his master's room and thankful to get enough to fill his belly. Still, a nondis- chargeable servant had a degree of security, and though not a family member he could make himself a place by faithful service. The Parish Register shows how Francisco Perez de Castafieda, who was sent from Xochimilco as a soldier came to be overseer of the Menendez Marquez ranch of La Chua and was married in the home of don Thomas.36 Slaves completed the household. Technically, these could be Indian or even Moorish, like the girl Isabel, who belonged to De Soto's wife Isabel de Bobadilla and was branded on her face.37 In actalit, almost all the slaves in Florida were black. Moors were uncommon, and the crown categorically refused to allow the en- slavement of Florida Indians, even those who were demonstrably treacherous. The native women whom Diego de Velasco had sold (one of them for 25 ducats), Visitor Castillo y Ahedo told through a translator that they were free, "and each one went away with the person of her choice."38 Governor Mendez de Canzo was forced to liberate the Surruques and Guales whom he had handed out as the spoils of war. One of the few Indian slaves after his time was the 21 The King's Coffer Campeche woman Maria, who was taken into the house of Gov- ernor Vega Castro y Pardo and subsequently bore a child "of father unknown. "39 The names of slaves were significant. Those who had come directly from Africa were identified by origin, as Rita Ganga, Maria Angola, or Arara, Mandinga, or Conga. Those born in the house were identified with the family: Maria de Pedrosa was Antonia de Pedrosa's slave; Francisco Capitin belonged to Francisco Menendez Marquez II, who in his youth had been Florida's first captain of cavalry. A good Catholic family saw that their slaves were Christian and the babies legitimate. In the Parish Register are recorded the occasions when slaves-married, baptized an infant, or served as sponsors to other slaves, mixed-bloods, or Indians. The parish priest entered the owner's name and, starting in 1664, frequently noted the shade of the slave's skin color: negro, moreno, mulato, orpardo. One family of house slaves belonging to the Menendez Marquez family are traceable in the registry for three generations. Sometimes there was evident affection between the races, as the time the slave Maria Luisa was godmother to the baby daughter of a captain.40 But on the whole, blacks were not trusted. Too many of them had run off and intermarried with thefierce Ais people of the coast.41 The hundred slaves in St. Augustine in 1606 (who included around forty royal slaves) were expected to fight on the side of any invader who promised them freedom. Treasury officials objected strongly to the captains' practice of putting their slaves on the payroll as drummers, fifers, and flag bearers. In their opinion the king's money should not be used to pay "persons of their kind, who are the worst enemies we can have."42 The number of slaves belonging to any particular family is not easy to determine. The problem with counting them from the Parish Register is that they never appear all at once, and about all that can be known is that from one date to another a certain slave owner had at least x number of different slaves at one time or another. By this rather uncertain way of numbering them, Juan Menendez Marquez owned seven slaves; his son Francisco, eleven (of whom three were infants buried nameless); Juan II had ten; and his brother Thomas, four besides those out at La Chua. When Francisco II died in penury he was still the owner of seven. Only three were of an age to be useful, the rest being either small children or pensioners rather like 22 The Expenses of Position himself.43 A conservative estimate of the number of adult slaves at one time in a gentleman's house might be about four. The price of slaves remained fairly constant during the sev- enteenth century. In 1616 Captain Pastrana's drummer; whose pay he collected, was worth 300 ducats (412 /2 pesos). During the 1650s a thirty-year-old Angola ranch hand sold for 500 pesos and a mulatto overseer for 600; a mulatto woman with three small children brought 955 pesos, and two other women sold for 600 and 300. As Account- ant Nicolhs Ponce de Le6n explained to the crown in 1674, an un- trained slave cost 150 pesos in Havana, but after he had learned a trade in St. Augustine he was valued at 500 pesos or more.44 The four trained adult slaves in the hypothetical household were worth some 2,000 pesos. All of these dependents and slaves the hidalgo fed, clothed, dosed with medicines, supplied with weapons or tools, andijpr-videdV with the services of the church in a manner befitting their station and connection with his house. There were other servants for whom he felt no comparable responsibility. Repartimiento Indians cleared the land and planted the communal and private maize fields with digging sticks and hoes, guarding the crop from crows and wild animals. Ordered up by the governor; selected by their chiefs, and adminis- tered by the royal officials, they lived in huts outside the town and were given a short ration of maize and now and then a small blanket or a knife for themselves and some axes and hoes. During the con- struction of the castillo as many as 300 Indians at a time were working in St. Augustine.45 In an attempt to stop the escalation of building costs, their wages were fixed at so many blankets or tools per week, with ornaments for small change. Indians were not supposed to be used for personal service but they often were, especially if for some misdeed they had been sentenced to extra labor. Commissary Gen- eralJuan Luengo declared that everyone of importance in Florida had his service Indians and so had all his kinsmen and friends.46 If one of these natives sickened and died he could be replaced with another. Native healers "curing in the heathen manner" had been discred- ited by their non-Christian origins and their inefficacy against Euro- pean diseases, but there was no prejudice against the native medicinal herbs, and even the friars resorted to the women who dispensed them. Medical care of a European kind was not expensive for anyone connected with the garrison. A surgeon, apothecary, and barber were 23 The King's Coffer on the payroll, and the hospital association to which every soldier belonged provided hospitalization insurance for one real per month. When an additional real began to be assessed for apothecary's insur- ance, the soldiers by means of petitions got the charge revoked and what they had paid on it refunded.47 With housing, labor; and medical care relatively cheap, consum- able supplies were the hidalgo's largest expense. There are two ways to estimate what it cost to feed and clothe an ordinary Spaniard in Florida: by the rations issued at the royal warehouses and by the prices of individual items. In the armadas, fighting seamen were issued a daily ration of one-and-a-half pounds hardtack, two pints wine, half a pound of meat or fish, oil, vinegar; and legumbres, which were probably dried legumes. During the period when the Florida garrisons were administered together with the Menendez armada, this practice was altered to enable a soldier to draw up to two-and-a- half reales in supplies per day from the royal stores.48 In spite of admonitions from governors and treasury officials that the cost of food was taking more and more of the soldiers' wages, the official allotment for rations was not changed, and any extra that the soldier drew was charged to his account. Gillaspie has figured that in the 1680s a soldier spent two-thirds of his regular pay on food, and it was probably more like 70 percent.49 By the end of the seventeenth century a soldier's wages would barely maintain a bachelor. A Franciscan, whose vow of poverty forbade him to touch money, received his stipend, tactfully called "alms," in two pounds of flour and one pint of wine a day, plus a few dishes and six blankets a year. He and his colleagues divided among themselves three arrobas (twenty-five-pound measures) of oil a year and the same of vinegar; six arrobas of salt, and some paper; needles, and thread. By 1640 the friars were finding their 115 ducats a year insufficient, in spite of the king's extra alms of clothing, religious books, wax, and the wine and flour with which to celebrate Mass. When they had their syndics sell the surplus from Indian fields to Havana, it was partly because they were 2,000 pesos in debt to the treasury.50 Commodity prices did not rise evenly throughout the period. According to the correspondence from St. Augustine, different necessities were affected at different times. From 1565 to 1602 the price of wine rose 40 percent and that of cotton prints from Rouen, 170 percent. The price of wheat flour seemed to rise fastest between 24 The Expenses of Position 1598 and 1602.51 From 1638 to the mid-1650s the primary problem was dependence upon moneylenders, compounded with the loss in purchasing power of the notes against unpaid situados and of the soldiers' certificates for back wages, both of which in the absence of currency were used for exchange. In 1654 the presidio managed to free itself from economic vassalage long enough to buy from suppliers other than those affiliated with the moneylenders. One situador (commissioned collector) said he was able to buy flour at one-sixth the price previous agents had been paying. 52 Between 1672 and 1689 there was rampant profiteering in the maize and trade goods used to feed and pay Indians working on the castillo. In 1687 the parish priest suddenly increased the costs on his entire schedule of obventions, from carrying the censer to conducting a memorial service.53 Throughout the Habsburg period the expense of keeping a slave or servant continued its irregular rise, whereas the salary of a royal official remained constant. Two undisputed facts of life were that imported items cost more in Florida than in either New Spain orHavanaand that any merchant able to fix a monopoly upon St. Augustine charged whatever the market would bear, Since prices of separate items were seldom reported except by individuals protesting such a monopoly, it is difficult to determine an ordinary price. Even in a ship's manifest the measurements may lack exactness for our purposes, if not theirs. How much cloth was there in a bundle or a chest? How many pints of wine in a bottle? Sometimes only a relative idea of the cost of things can be obtained. Wheat flour, which rose in 1598 from 58 to 175 ducats a pipe (126.6 gallons), at the new price cost two-and-a-half times as much by volume as wine or vinegar did in 1607. Nearly a hundred years later wheat was still so costly that the wages of the boy who swept the church for the sacristan were two loaves of bread a day, worth fifty pesos a year.54 In spite of the high Florida prices, an officer found it socially necessary to live differently from a soldier, who in turn made a distinction between himself and a common Indian. Indians supple- mented a maize, beans, squash, fish, and game diet with acorns, palm berries, heart-of-palm, and koonti root-strange foods which the Spanish ate only during a famine.5s An hidalgo's table was set with Mexican majolica rather than Guale pottery and seashells. It was supplied with "broken" sugar at 28 reales the arroba, and spices, kept 25 The King's Coffer in a locked chest in the dining room.56 His drinking water came from a spring on Anastasia Island. Instead of the soldier's diet of salt meat, fish, and gruelor ash-cakes, the hidalgo dined on wheaten bread, pork, and chicken raised on shellfish. Instead of the native cassina tea he had Canary wines at 160 pesos a barrel and chocolate at 3 pesos for a thousand beans of cacao.57 Pedro Menendez Marquez, the gov- ernor; said he needed 1,000 ducats a year for food in Florida, although his wife and household were in Seville.58 An hidalgo's lady did not use harsh homemade soaps on her fine linens; she had the imported kind at three pesos a pound or nineteen pesos a box.59 In the evening she lit lamps of nut oil or of olive oil at forty reales the arroba, instead of pine torches, smelly tallow candles, or a wick floating in lard or bear grease. There were wax candles for a special occasion such as the saint's day of someone in the family, but wax was dear: a peso per pound in Havana for the Campeche yellow and more for the white. When the whole parish church was lit with wax tapers on the Day of Corpus Christi the cost came to fifty pesos.60 In St. Augustine, where the common folk used charcoal only for cooking, the hidalgo's living rooms were warmed with charcoal braziers. One governor was said to keep two men busy at govern- ment expense cutting the firewood for his house.61 Even after death there were class distinctions. The hidalgo was buried in a private crypt, either in the sixteen-ducat section or the ten-ducat. Other plots of consecrated earth were priced at three or four ducats. A slave's final resting place cost one ducat, and a pauper was laid away free. It cost three times as much to bury an attache of Governor Quiroga y Losada's (thirty-six pesos) as an ordinary sol- dier (twelve pesos), on whom the priest declared there was no profit.62 Clothing was a primary expense and a serious matter. Uncon- verted Indians would readily kill parties of Spaniards for their clothes, or so it was believed. Blankets, cloth, and clothing served as currency. Tobacco, horses, and muskets were priced in terms of cloth or small blankets. Garrison debts to be paid by the deputy governor of Apalache in 1703 were not given a currency value at all but were expressed solely in yards of serge.63 Indians dressed in comfortable leather shirts and blankets. Rather than look like one of them a Spaniard would go in rags.64 A manifest for the Nuestra Senora del Rosario out of Seville gives the prices asked 26 The Expenses of Position in St. Augustine around 1607 for ready-made articles imported from Spain. Linen shirts with collar and cuffs of Holland lace cost forty- eight or sixty reales; doublets of heavy linen were twenty-nine, forty, and fifty-two reales; hose of worsted yarn cost twenty-eight reales the pair; a hat was thirty-four to forty-two reales. Breeches and other garments were made by local tailors and their native apprentices out of imported goods, with the cheapest and coarsest linen running six reales the yard, and Rouen cloth, ten and eighteen. Boots and shoes were made by a part-time cobbler from hides prepared at the tan- nery.65 The cheapest suit of clothes must have run to twenty ducats (twenty-seven-and-a-halfpesos). When Notary Juan Jimenez outfit- ted his son Alejandro as a soldier they ran up bills of seventy pesos to the shopkeeper; eleven pesos to the shoemaker and unstated amounts to the armorer; tailor, and washerman.66 An hidalgo had to be better dressed in his everyday clothes than the common soldier in his finest, and his dress clothes were a serious matter It was an age when state occasions could be postponed until the outfits of important participants were ready, and the official reports of ceremonies described costumes in detail. Governor Quiroga y Losada once wrote the king especially to say that he was having the royal officials wear cloaks on Sundays as it looked more dignified.67 The hidalgo's cloak, breeches, and doublet were colored taffeta at sixteen reales the yard or velvet at eight ducats. His boots were of expensive cordovan; his hose were silk and cost four-and-a- half pesos; his shirt had the finest lace cuffs and collar and detachable oversleeves that could cost twenty-four ducats. His dress sword cost eight ducats and his gold chain much more. When, to the professed shock of Father Leturiondo, Governor Torres y Ayala assumed the regal prerogative of a canopy during a religious procession, he may have been protecting his clothes.68 The elegant family and household, with sumptuous food and clothing-these were displays of wealth that anyone with a good income could ape. The crucial distinction of an hidalgo was his fighting capability, measured in his skill and courage, his weapons and horses, and the number of armed men who followed him. In Florida even the bureaucrats were men of war. Treasurer Juan Menendez Marquez went on visit (circuit inspection) with Bishop Juan de las Cabezas Altamirano, as captain of his armed escort. His son, Treasurer Francisco, subdued rebellious Apalache Province 27 The King's Coffer almost singlehandedly, executing twelve ringleaders and condemn- ing twenty-six others to labor on the fort. Francisco's son, Account- ant Juan II, defended the city from pirates, and in 1671 led a flotilla to attack the English settlement of Charles Town.69 Treasury officials were ordered to leave their swords outside when they came to their councils, for in a society governed by the chivalric code, war was not the only excuse for combat. Any insult to one's honor must be answered by laying hand to swordi, anidthe hidalgo who refused a formal challenge was disgraced. He couilTfno longer aspire to a noble title;the commonest soldier held him inr contempt.70 In Floridaevery free man and even some faverb-6ire arms. Soldiers, officers, officials, and Indian chiefs were issued weapons out of the armory and thereafter regarded them as private property. Prices of the regular issue of swords in 1607 andofifintlock muskets in 1702 were about the same: ten or eleven pesos. An ar- quebus, or matchlock musket, was worth half as much. Gunpowder for hand-held firearms was two-thirds peso per pound in 1702, twice as much as the coarser artillery powder. That other requirement of the knight-at-arms, his horse, was not as readily come by. A horse was expensiveand few survived the rough trip to Florida. On shipboard they were immobilized in slings, and when these swung violently against the rigging in bad weather the animals had to be cut loose and thrown overboard. Until midcen- tury the most common pack animal in Florida was still an Indian.72 Once horses had gotten a start, however they did well, being easily trained, well favored, and about seven spans high. Imported Cuban horses were available in the 1650s at a cost of 100 to 200 pesos, with a bred mare worth double. In the 1680s and '90s, mares were selling for 30 pesos and horses for 25, about twice as much as a draft ox.73 Horsemanship displays on the plaza had become a part of every holiday, with the ladies looking down in exquisite apprehension from second-story windows and balconies.74 The Indian nobility raised and rode horses the same as Spanish hidalgos. The chiefs of Apalache were carrying on a lively horse trade with English-allied Indians in 1700, when the Spanish put a stop to it for reasons of military secur- ity. 75 By that time a gentleman without his horse felt hardly present- able. When the parish priest Leturiondo locked the church on Saint Mark's Day and left for the woods to dig roots for his sustenance, his 28 The Expenses of Position mind was so agitated, he said, that he went on foot and took only one slave.76 The hidalgo of substance had an armed following of slaves or servants who were known as the people of his house, much as sailors and soldiers were called people of the sea or of war Sometimes there were reasons of security for such a retinue. A friar feared to travel to his triennial chapter meeting without at least one bodyguard, and, Bishop Diaz Vara Calder6n, when he made his visit to Florida in 1674-75, hired three companies of soldiers to accompany his prog- ress: one of Spanish infantry, one of Indian archers, and the other of Indian arquebusiers.77 About town an entourage was for prestige or intimidation. The crown, trying to preserve order and prevent the formation of rival authority in faraway places, forbade treasury officials to bring their followers to councils or have themselves accompanied in public; it was also forbidden to arm Indians or slaves.78 It was not merely the secular hidalgo who enjoyed his following. When Father Leturiondo went out by night bearing the Host to the dying, he summoned twelve soldiers from the guard- house and had the church bell tolled for hours to make the faithful join the procession.79 With all the expensive demands on him (public and private charities, providing for children, kee~ing_ upa large household, living on a grand scale, and maintaining his standing as a knight-at-arms), the hidalgo was in constant need of money-more money, certainly, than any royal office could presumably provide. As Interim Treasurer Portal y Maule6n once observed through his lawyer when one's parents were persons of quality it was not honor that one stood in need of, but a living.80 29 3 Proprietary Office SN the provinces of Florida, as elsewhere in the empire of the Spanish Habsburgs, a royal office was an item of property; the person holding title to it was referred to as the "proprietor." He had received something of value: the potential income not only of the salary but of numerous perquisites, supplements and opportunities for p rfit; and e he been recognizedubicysgtema hom the king delighted to honor. Perhaps he had put in twenty years of loyal drudgery on the books of the king's grants. Perhaps he had once saved the plate fleet from pirates. Perhaps it was not his services that were rewarded, but those of his ancestors or his wife's family. The archives are studded with bold demands for honors, rewards and specific positions, buttressed by generations of worthies. The peti- tioner himself might be deplorably unworthy, but such a possibility did not deter a generous prince from encouraging a family tradition of service. Appendix 2 (pages 143-48) shows the proprietors of treasury office, their substitutes and stewards, and the situadores. The date any one of them took office may have been found by accident in the correspondence or inferred from the Parish Register In some cases a proprietor went to New Spain before sailing to Florida, thus delaying his arrival by at least half a year. Treasurer Juan de Cevadilla and Accountant Lazaro Siez de Mercado were shipwrecked twice along their journey and reached St. Augustine two-and-a-half years after they were appointed.1 The scattering of forces among several forts 30 Proprietary Office before 1587 called for multiple substitutes and stewards. From 1567 to 1571 the fiscal officials assigned to Menendez's armada for the defense / of the Indies doubled as garrison inspectors and auditors and pos- sessed Florida treasury titles. During the Habsburg era a process occurred which could be called the "naturalization of the Florida coffer." To measure this phenomenon one must distinguish between those royal officials whose loyalty lay primarily with the Iberian peninsula and those who were Floridians, born or made. The simple typology of peninsular versus creole will not do, for many persons came to Florida and settled permanently. Pedro Menendez himself moved his household there. Of the twenty-one royally appointed or confirmed treasury officials who served in Florida, only eight had no known relatives already there. Four of the eight (Lazaro Siez de Mercado, Nicolis Ponce de Le6n, Juan de Arrazola, and Francisco Ramirez) joined the Floridians by intermarriage; another (Juan de Cueva), by compa- drazgo. One (Joseph de Prado) went on permanent leave, naming a creole in his place. Only two of the king's officials seem to have avoided entanglement in the Florida network: Santos de las Heras, who spent most of his time in New Spain, and Juan Fernandez de Avila, who was attached to the household of the governor and died after one year in office. From the time the king began issuing titles in 1571 until the Acclamation of Philip V in 1702 was a period of 131 years. The positions of treasurer and accountant were extant the entire time, and that of factor-overseer until 1628, making the total number of treasury office-years 319. The two royal officials who remained pristinely peninsular served eight years between them, deducting no time for communication lag, travel, or leaves. Florid- ians, whether born or naturalized, were in office at least 97 percent of possible time. One reason for the naturalization of the coffer was that the king felt obligated to the descendants of conquerors, and his sense of obligation could be capitalized on for appointments.2 The Menendez Marquez family, descended on the one side from the adelantado's sister; and on the other from a cousin of Governor Pedro Menendez Marquez, at one time or another held every office in the treasury, and their efforts to keep them were clearly encouraged by the crown. When in 1620 Treasurer Juan Menendez Marquez was appointed governor of Popayan in South America, he retained his Florida 31 The King's Coffer proprietorship by means of his eighteen-year-old son Francisco. As the treasurer was aged and might not live to return, he requested a future for the youth, assuring the Council that his son had been raised to the work of the office, had already served as an officer in the infantry, and was descended from the conquerors of the land. The official response was noncommittal: what was fitting would be provided.3 Francisco's position was ambiguous: neither interim treasurer nor proprietor. In 1627 word came to St. Augustine that the governor of Popayan was dead. When Francisco would not agree to go on half-salary and admit to being an interim appointee, Governor Rojas y Borja removed him from office and put in his own man, the former rations notary. The treasurer's son went to Spain to argue before the Council that "with his death the absence ofJuan Menendez Marquez was not ended that the use of his office should be." The young man pleaded that he was the sole support of his mother and ten brothers and sisters, and he bore down heavily on the merits of his ancestors. Philip IV's reaction was angry. If the king's lord and father (might he rest in glory) once saw fit to name Francisco Menendez Marquez treasurer with full salary in the absence of his father; it was not up to a governor to remove him without new orders from the royal person. Rojas y Borja, personally, was commanded to restore Francisco's salary, retroactively. Since the governor's term was concluding, he had to sign a note for the amount before he could leave town. Even without a formal future Francisco had found his right to succession supported by the crown. S Another wa in whiichlhcoffer became naturalized was by purchase, with Floridians coming forward to buy. The~ sa-e biffices was not shocking to sixteenth- and seventeenth-century administra- tors, who regarded popular elections as disorderly, conducive to corruption, and apt to set risky precedents.4 Many types of offices Were sold or "provided." In 1687 one could acquire a blank patent of captaincy for Florida by enlisting 100 new soldiers in Spain.5 To become the Florida governor; Salazar Vallecilla contracted to build a 500-ton galleon for the crown during his first year in office, and was suspended when the year passed and the galleon was not built.6 It was also possible to buy a benefice. When Captain Antonio de Argfielles, old and going blind, wanted to provide handsomely for his Francis- can son Joseph, he asked friends with influence to persuade the king 32 Proprietary Office to give him the position of preacher or some other honor and proudly promised to pay "though it should cost like a mitre."7 By the secondhalfof the sixteenth century most public offices in the Indies were venal, that is to say, salable by the crown.8 In 1604 these offices also became renounceable: they could be sold toa-second party for a payment of half their value to the coffer the first time, and one-third each time thereafter Offices of the treasury, however; were not included. It was feared that candidates would use fraud to recover the purchase price or that incompetents would find their way into office, and it was the crown's sincere purpose to approve only the qualified.9 This did not mean that no arrangement was possible. Juan Menendez Marquez obtained the Florida treasurership in 1593 when he was betrothed to the twelve-year-old daughter of the former treasurer, Juan de Posada, and of Catalina Menendez Marquez, the governor's sister. Francisco Ramirez received the accountancy in 1614 by agreeing to marry the former accountant's widow and sup- port her eight children.10 Not unnaturally, the members of the Council who made the proposals for treasury office regarded wealth as an evidence of sound judgment, and a candidate with means had ways to sweeten his selection. Bythe4630s-halfway-through the period we are studying-the king was desperatenoagh toxtendto the treasury the sal offices and also of renunciations, futures and /v retention. A governor in Florida might know nothing of the transaction until after the death of the incumbent, when the new proprietor presented himself with receipt and title; yet the only known opposi- tion to the sale of treasury offices came from Governor Marques Cabrera and was part of his campaign against creoles in general. When Thomis Menendez Marquez brought in the title to be ac- countant after the death of his brother Antonio, the governor refused to honor it, saying that Thomas was locally born and unfit. Marques Cabrera entreated the king to sell no more treasury offices to unde- v serving persons and to forbid the officials to marry locally-better) yet, to transfer them away from Florida altogether. The Junta de Guerra responded with a history of the official transactions in the case. According to its records, Antonio Menendez Marquez had paid 1,000 pesos cash to succeed his brotherJuan II in 1673, whenJuan was promoted from accountant in St. Augustine to factor in Havana. In 1682 Antonio (who was spending most of his time as situador in New 33 The King's Coffer Spain) had bought a future for their brother Thomas at a cost of 500 pesos. The Junta ordered the governor to install Thomas as account- ant immediately with retroactive pay.11 Three years later the Cdmara de Indias, which was the executive committee of the Council, ap- proved shipowner Diego de Florencia's request for a future to the next treasury vacancy for his son Matheo Luis.12 Floridians like Florencia were the ones who would know when offices were likely to fall vacant, and they may have been the only ones who wanted them. Proprietary offices were politely attributed to royal favorxaid y legizimizediby royal titles, but.the kinghad less and less to do with appointments. His rights of patronage were gradually alienated until all that remained as a royal prerogative was enforcing the contract. The complete contract between the king and his proprietor was contained in several documents: licenses, instructions, titles, and bond. The appointee leaving for the Indies from Spain received a number of licenses, of which some served as passports. Ordinarily, one could take his immediate family, three slaves, and up to four servants to the New World. Because the crown was anxious to preserve the faith pure for the Indians, there could be no one in the household of suspect orthodoxy. To discourage adventurers, tes- timony might have to be presented that none of the servants was leaving a spouse in Spain, and the official might have to promise to keep them with him for a period of time. Other licenses served as shipping authorizations. A family was permitted to take, free of customs, 400 to 600 ducats' worth ofjewels and plate and another 300 to 600 ducats' worth of household belongings. Sometimes the \ amount of baggage allowance was specified. Because of the crown policy of strict arms control, weapons were limited to the needs of a gentleman and his retinue. An official might be permitted six swords, six daggers, two arquebuses, and one corselet. At the option of the appointee the standard licenses could be supplemented by additional paperwork. Gutierre de Miranda carried instructions to the governor to grant him building lots and lands for planting and pasture as they had been given to others of his quality. Juan de Posada had a letter stipulating that situadores were to be chosen from the proprietary officials and were to receive an expense allowance.'3 Instructions for treasury office in the Indies followed a set for- mula, with most of the space devoted to duties at smelteries and 34 Proprietary Office mints. An official's copy could be picked up at the House of Trade or in Santo Domingo, or it might be sent to his destination. If he was already in St. Augustine he would receive his instructions along with his titles. Titles were equally standard in format. There were two of them: one to office in the treasury and the other as regidor of the cabildo. The treasury title addressed the appointee by name, calling him the king's accountant, treasurer; or factor-overseer of the provinces of Florida on account of the death of the former proprietor. After a brief description of the responsibilities of office the appointee was assured that in Florida "they shall give and do you all the honors, deferences, graces, exemptions, liberties, preeminences, prerogatives and im- munities and each and every other attribute which by reason of the said office you should enjoy." The salary was stated: invariably 400,000 maravedis a year from the products of the land.14 This was the only regular income the official was due, for municipal office in SFlorida was unsalaried. If the appointee was already in Florida, salaried time began the day he presented himself to be inducted into office; otherwise, on the day he set sail from San Luicar de Barrameda or Cadiz. By the time the crown withdrew coverage for travel time in S1695, Florida treasury offices had long been creole-owned. 15 The one thing that never appeared in an official's titles was a time limit. His Appointment, "at the king's pleasure," was understood to be for life. , The governor, by contrast, had a term of five or six years. He could threaten, fine, suspend, even imprison a proprietor; but he could not remove him. And when the governor's term expired and his residencia (judicial inquiry) came up, every official in the treasury would be waiting to lodge charges. The bond for treasury office, whether for the accountant, treas- urer, or factor-overseer, and whether for the status of proprietor, interim official, or substitute, was 2,000 ducats. The appointee was permitted to furnish it in the place of his choice and present a receipt. Once such offices began to be held by natives the bond was raised by subscription. As many as thirty-eight soldiers and vecinos at a time agreed to stand good if the treasury suffered loss because of the said official's tenure. The effect of this communal backing was that if the treasury official was accused of malfeasance and his bond was in danger of being called in, as in the cases of Francisco Menendez Marquez and Pedro Benedit Horruytiner; the whole town rose to his 35 / The King's Coffer defense. Nicolas Ponce de Le6n did not observe the formality of having his bond notarized. When the document was examined after his death, it was found that of his twenty-one backers, half had predeceased him, perhaps in the same epidemic, and only five of the others acknowledged their signatures.16 At the time of induction the treasury officialhoundhimselfby a -- --------- ---- ial -y -4-- ~ ~ solemn oathbefore God, the Evangels, and the True Cross-toabe i-honest and reliable. He presented his bond and his title. His belong- ings were inventoried, as they would again be at his death, transferral, or suspension. He was given key to the coffer and its contents als9 were inventoried. From that day forward he was meant never to take an independent fiscal action. Other officials at his treasury had access to the same books and locks on the same coffer He would join them to sign receipts and drafts. Together they would open, read, and answer correspondence. In the same solidarity they would attend auctions, visit ships, and initiate debt proceedings.17 Such cumber- some accounting by committee was intended to guarantee their probity, for the king had made his officials collegially responsible in order to watch each other. No single one of them could depart from rectitude without the collusion or inattention of his colleagues. A Spanish monarch had elevated ideals for his treasury officials. SBy law no proprietor might be related by blood or marriage to any other important official in his district. In Florida this was impractica- ble. The creole families were intricately intermarried and quickly absorbed eligible bachelors. Juan de Cevadilla described his predica- ment: [When] Your Majesty made me the grant of being treasurer here eight years ago I decided to establish myself in this corner of the world, and not finding many suitable to my quality I married dofia Petronila de Estrada Manrique, only daughter of Captain Rodrigo de Junco, factor of these provinces. If Your Majesty finds it inconvenient for father-in-law and son-in-law to be royal officials I shall gladly [accept a] transfer. / But the limitations of the land are such that not only are the royal officials related by blood and marriage, but the govern- ors as well.18 According to Spanish law, a proprietor was not to hold magisterial or political office or command troops.19 In St. Augustine the treasury 36 Proprietary Office officials were royal judges of the exchequer until 1621. They held the only political offices there were: places on the city council. They were also inactive officers of the garrison, who returned to duty with the first ring of the alarum. In a place known for constant war a man with self-respect did not decline to fight Indians or pirates. During the early sixteenth century, royal officials were necessarily granted sources of income to support them until their treasuries should have regular revenues. Juan de Afiasco and Luis Hernandez de Biedma, De Soto's accountant and factor; had permission to engage in Indian trade as long as the residents of Florida paid no customs. They and the two other treasury officials were to receive twelve square leagues of land each and encomiendas of tribute-paying natives.20 As the century wore on, such supplements to salaries were curtailed or forbidden. In most places treasury officials had already had their trading privileges withdrawn; they soon lost the right to operate productive enterprises such as ranches, sugar mills, or mines, for every time a royal official engaged in private business there was fresh proof of why he should not.21 The laws of the Indies lay lightly on St. Augustine, where the proprietors were more apt to be gov- erned by circumstance, and in 1580 the restriction on ranches and farms was removed. Accountant Thomas Menendez Marquez owned the largest ranch in Florida, shipping hides, dried meat, and tallow out the Suwannee River to Havana, where he bought rum to exchange for furs with the Indians who traded in the province of Apalache. Pirates once held him for ransom for 150 head of his cattle.22 Encomiendas were another matter. The New Laws of 1542-43 phasing them out for others forbade them altogether for officials of the treasury, who could not even marry a woman with encomiendas unless she renounced them.23 This created no hardship for the pro- prietors in St. Augustine. Although Pedro Men6ndez's contract had contained tacit permission to grant encomiendas in accordance with the Populating Ordinances of 1563, they were out of the question in Florida-where the seasonally nomadic Indians long refused to settle themselves in towns for the Spanish convenience, and the chiefs expected to receive tribute, not pay it.24 Eventually the natives consented to a token tribute, which in time was converted to a rotating labor service out of which the officials helped themselves- but there was never an encomienda.25 The expenses of a local treasury, including the salaries of its 37 The King's Coffer officers, were theoretically covered by its income. This was im- mediately declared impossible in Florida, where the coffer either had few revenues or its officials did not divulge them. The first treasurer; accountant, and factor-overseer occupied themselves in making their offices pay off at the expense of the crown and the soldiers. When instituting the situado, the king made no immediate provision for the payment of treasury officials. In 1577, however; when Florida was changed from a proprietary colony to a regular royal one, the crown was obliged to admit as a temporary expedient that half of the stated salaries might be collected from the situado. This concession was reluctantly repeated at two- to six-year intervals.26 The widows of officials who had served prior to regular salaries were assisted by grants.27 The royal officials pointed out between 1595 and 1608 that the revenues which they and the governor were supposed to divide pro rata were not enough to cover the other half of their salaries. Fines were insignificant, as were confiscations; the Indians paid little in tribute, and the tithes had been assigned to build the parish church. They did not think the colony could bear the cost of import duties. The treasure tax on amber and sassafras was difficult to collect.28 At last the crown resigned itself to the fact that the improvident treasury of the provinces of Florida would never pay its own way, much less support a garrison. The royal officials were allowed to collect the remainder of their salaries out of surpluses in the situado.29 In spite of an inflationary cost of living between 1565 and 1702, salaries, wages, and rations allowances did not rise in Florida. The king allowed his officials no payroll initiative. For a while the gov- ernor used the bonus fund of 1,500 ducats a year to reward merit and supplement the salaries of lower-echelon officers and soldiers on special assignment, but the crown gradually extended its control over that as well.30 Out of context, the figure of 400,000 maravedis, which was the annual salary of a proprietor, is meaningless.31 Table 1 shows the salary plus rations of several positions paid from the situado. The date is that of the earliest known reference after 1565. For comparative purposes, all units of account have been converted to ducats. Rations worth 2/ 2 reales a day were over and above salary for members of the garrison, among whom the treasury officials, the governor; and the secular priests counted themselves in this case.32 By 1676 at least, a 38 Proprietary Office TABLE 1 YEARLY SALARIES AND RATIONS IN ST AUGUSTINE IN THE SEVENTEENTH CENTURY v Salary Value Salary Position Governor Treasury proprietor Sergeant major Master of construction Master of the forge Parish priest Carpenter Company captain Chaplain Master pilot Surgeon Ensign Overseer of the slaves Sacristan Sergeant Officer in charge (cabo) Friar Infantryman Indian laborer 1693 Sacristan's sweeping boy without Rations Salary as Stated (in ducats) 2,000 ducats/yr 400,000 maravedis/yr 515 ducats/yr 500 ducats/yr 260 ducats/yr 200 ducats/yr 200 ducats/yr 200 ducats/yr 150 ducats/yr 12 ducats/mo 10 ducats/mo 6 ducats/mo 1,200 reales/yr and plaza 200 pesos/yr 4 ducats/mo 4 ducats/mo 115 ducats/yr 1,000 maravedis/mo 1 real/day in trade goods 2 lbs. flour/day 2,000 1,067 515 500 260 200 200 200 150 144 120 72 64 62 48 48 32 33 of Rations (in ducats) 83 83 83 83 83 83 83 83 83 83 83 83 83 83 83 83 115 83 50 36 including Rations (in ducats) 2,083 1,150 598 583 343 283 283 283 233 227 203 155 147 145 131 131 115a 115 83b 36c a. Beginning this year, stated supplies were given whose value increased with prices. b. Approximate. Depended upon value of trade goods and maize. c. Varied with the price of flour. repartimiento Indian received almost exactly the same pay before rations as a soldier.33 The soldier; of course, was often issued addi- tional rations for his family, while the Indian got only two or two- and-a-half pounds of maize per day, worth perhaps 1 /2 reales-and he might have brought it with him on his back as part of the tribute from his village.34 A Franciscan drew his entire 115-ducat stipend in goods and provisions. In 1641 the crown consented to let these items be constant in quantity regardless of price fluctuations.35 It is ironic / that natives and friars, both legendarily poor; were the only individu- als in town besides the sacristan's sweeping boy whose incomes could rise with the cost of living.36 It was acceptable to hold multiple offices. Pedro Menendez Mar- / 39 Year 1601 1601 1646 1655 1594 1636 1593 1601 1636 1593 1603 1601 1630 1693 1601 1601 1641 1601 1676 The King's Coffer quez's salary as governor of Florida began the day he resigned his title of Admiral of the Indies Fleet-more important than his concurrent one of Florida accountant.37 Don Antonio Ponce de Le6n usually exercised several positions at once. In 1687 he was at the same time chief sacristan of the church, notary of the ecclesiastical court, and notary of the tribunal of the Holy Crusade. Periodically he was appointed defense attorney for Indians. While visiting Havana, probably in 1701, he was made ecclesiastical visitador for Florida and church organist for St. Augustine. He returned home from Cuba on one of the troopships sent to break the siege of Colonel Moore, and as luck would have it, the day before he landed, the withdrawing Carolinians and Indians burned the church with the organ in it. Don Antonio presented his title as organist notwithstanding and was added to the payroll in that capacity since, as the royal officials pointed out, it was not his fault that there was no organ. By 1707 he had taken over the chaplaincy of the fort as well.38 Members of the religious community had sources of income other than the regular stipends. The parish priest was matter-of-fact in his discussion of burial fees and other perquisites. If these ran short, he could go to Havana, say a few masses, and buy a new silk soutane. In St. Augustine the value of a mass was set at seven reales, and the chaplain complained that the friars demanded cold cash for every one they said for him when he was ill and unable to attend to his duties.39 Parishioners brought the Franciscans offerings of fish, game, and produce in quantities sufficient to sell through their syndics. The income was intended to beautify churches and provide for the needy, but one friar kept out enough to dower his sister into a convent.40 In the garrison it was possible to collect the pay of a soldier without being one. There were seldom as many soldiers fit for duty as there were authorized plazas in the garrison, and the vacant spaces, called "dead-pays" (plazas muertas), served as a fund for pensions and allowances. A retired or incapacitated soldier held his plaza for the length of his life. A minor's plaza (plaza de menor or muchacho) could be purchased for or granted to someone's son to provide extra in- come, and if the lad developed no aptitude as a soldier the money did not have to be paid back. Plazas were used variously as honoraria to Indian chiefs, dowries, and salary supplements: a captain traditionally named his own servants or slaves to posts in his company and pock- eted their pay.41 40 Proprietary Office Understrength in the garrison due to these practices was a peren- nial problem. Sometimes it was the governor who abused his power to assign plazas. The crown refused to endorse nineteen of them awarded by Interim Governor Horruytiner to the sons, servants, and slaves of his supporters. At other times the government in Spain was to blame. Governor Hita Salazar complained that every ship to arrive bore new royal grants of plazas for youngsters, pensioners, and widows. A few of these were outright gifts; on most, the crown collected both the half-annate and a fee for waiving its own rule against creoles in the garrison. Again and again governors protested that of the plazas on the payroll only half were filled by persons who would be of any use to defend the fort and the town.42 A soldier's plaza was not his sole source of income. On his days off guard duty he worked at his secondary trade, whether it was to burn charcoal, build boats, fish, cut firewood, make shoes, grind maize, round up cattle, tailor; or weave fishnets. A sawyer or logger could earn 6 or 7 reales extra a day. Every family man was also a v part-time farmer; with his own patch of maize on the commons and cheap repartimiento labor to help him cultivate it.43 The soldier had still other advantages. When traveling on the king's business he could live off the Indians, commandeer their canoes, order one of them to carry his bedroll, and cross on their ferries free.44 His medicines cost nothing, although a single shipment of drugs for the whole presidio cost over 600 pesos. The same soldier's compulsory contribution to the hospital association of Santa Barbara, patroness of artillerymen, was limited to 1 real a month.45 When he became too old to mount guard he would be kept on the payroll, and after he died his family would continue to receive rations. The weapons in his possession went to the woman he had been living with, and his back wages paid for his burial and the masses said for the good of his soul.46 An officer was entitled to these privileges and more. Not only might his slaves and servants bring in extra plazas, but he was in a position to sell noncommissioned offices, and excuses and leaves from guard duty.47 It was possible for him to draw supplies from the royal storehouses almost indefinitely. With his higher salary he had readier cash and could order goods on the supply ship, purchase property at auction, or buy up quantities of maize for speculation.48 A treasury official possessed most of the advantages of an officer j plus others of his own. When he served as judge of the exchequer he 41 The King's Coffer Swas entitled to a portion after taxes, probably a sixth, of all confis- cated merchandise. When he was collector of the situado he drew a per diem of thirty reales, which may have been why Juan de Ce- vadilla asked the crown to supplement his low salary as treasurer With the good salary of a situador.49 As a manager of presidio supplies the treasury official favored his kinsmen and friends who were importers and cattle ranchers. In time of famine he drew more than his share of flour. As a payroll officer he credited himself with all the maravedis over a real, since there was no longer a maravedi coin in the currency. As a regidor the official took turns with his colleagues at tavern inspection. Each time a pipe was opened he collected one peso.50 There must have been many similar ways to supplement a salary, some acknowledged and others only implied. The duties of a royal official were not necessarily done by him personally. An official was often absent, traveling to New Spain or Havana, visiting the provinces, or looking after his property. He Chose a substitute, the substitute posted bond, and they divided the salary. If the substitute found it necessary to hire a replacement of his own, the subject of payment was reduced to a private deal between the parties. When a proprietary office fell vacant, the governor enjoyed the right of appointing ad interim, unless the crown had sold a future and the new proprietor was waiting. Interim officials were paid half of a regular salary, the same as substitutes.51 The routine work of the Florida treasury may have been done more often by substitutes than proprietors, especially in the late seventeenth cen- tury, when officials serving as situadores were kept waiting in Mexico City for years. This raised questions ofliability. Was the royal treasurer, Matheo Luis de Florencia, accountable for a deficit in the treasury when he had been in New Spain the entire five years since his installation? The crown referred the question to its auditors.52 The interim or substitute official was supposed to be someone familiar with the work of the treasury and possessed of steady charac- ter: rich, honorable, and married. It was unwise, though, to choose someone whose connections made him aspire to office himself. Alonso Sanchez Saez came to Florida with his uncle Lazaro Saez de Mercado, the accountant, and became a syndic for the friars. When Lazaro died the governor named Alonso ad interim on half-pay. At the next audit there was some question about his having been related to the former accountant, but the crown ruled that the governor 42 Proprietary Office could allow what was customary. Since at that time only a half of salaries was paid from the situado and the coffer had few revenues, the interim accountant's salary translated into 100,000 maravedis a year for a 400,000-maravedi position. His requests for a royal title and full salary were ignored, as were his complaints about his heavy duties. The next proprietor; Bartolome de Argfielles, kept Alonso substituting in the counting house during his own lengthy ab- sences.53 The embittered nephew, who had inherited the work but not the salary or honor of his office, made a name for himself in St. Augustine by sequestering funds, giving false alarms, and being generally contentious. The governor forbade him to sit on the same bench during Holy Week with the other treasury officials. Alonso circulated a rumor that the governor was a defrocked friar. The interim accountant and his wife, whom he always called "a daughter of the first conquerors," were eventually expelled from town, carry- ing the governor's charges against them in a sealed envelope.54 In a place as precedent-conscious as St. Augustine, the cases defining what was to be done about leaves of absence were impor- tant. Factor Alonso de las Alas quarreled with TreasurerJuan Menen- dez Marquez in 1595 over whose turn it was to go for thesituado. Las Alas thought he had won, but when he got back from New Spain the treasurer and the governor indicted him for bringing part of the situado in clothing instead of cash. At their recommendation the Council suspended him for four years without salary. 55After his reinstatement Las Alas requested a two-year leave to go to Spain. The treasurer had obtained a similar leave on half-salary the year before, but the crown felt no obligation to be consistent: Las Alas had to take his leave without pay.56 The accountant, Bartolome de Argiielles, also received a two-year leave to attend to personal business, and v when it expired he did not return. Years later his widow, dofia Maria de Quiiiones, was still trying to collect his half-pay to use for the dowries of four daughters.57 An official who experimented with informal leaves of absence was Accountant Nicolas Ponce de Leon. He was a veteran of Indian wars in Santa Marta, a descendant of conquerors in Peru, and, most important, the son-in-law of a Council of the Indies porter. From the preserved slate ofnominees, he was also the only one out of thirty-six candidates with no previous exchequer experience. When the gov- ernor of Florida died in 1631, shortly after his arrival, Nicolas found 43 The King's Coffer himself thrust into a co-interim governorship with the psychopathic Sergeant Major Eugenio de Espinosa.. In mortal fear of his partner; who had threatened to cut offhis head, he took refuge in the Francis- can convent until the next governor should arrive. He assured the crown that this caused the treasury no inconvenience for he had named a reliable and competent person to do what work could not be brought to the convent.58 In 1637 this same Nicolas Ponce de Le6n had Treasurer Francisco Menendez Marquez imprisoned on charges of having spent situado funds in Mexico City on gambling and other things "which for modesty and decency cannot be mentioned." Perhaps the accountant decided that unmentionable sins deserved closer examination. In 1641 he went to Mexico City himself, where he got the viceroy to throw Martin de Cueva, a former situador; into prison and settled down for a leisurely lawsuit before the audiencia. After three years the gov- ernor of Florida sent word for Nicolas to return or have his powers of attorney revoked. Nicolas appealed the governor's order to the audiencia. In 1645 the next governor declared the accountant absent without leave and replaced his substitute, who had let the papers of the counting house fall into confusion. The king finally intervened in the case and ordered the viceroy of New Spain to send the recreant accountant of Florida, who had been amusing himself for the last five years in Mexico City, home to look after his duties. After an absence of seven years Nicolas returned to resume his office and family. His holiday does not seem to have been held against him.59 A case of purchased leave of absence was that of TreasurerJoseph de Prado. Prado did not buy his office: the position was given him when he was almost fifty, for his services to the crown. He did not distinguish himself in Florida. During the Robert Searles raid of 1668 he was the only grown man in town to be captured in his bed and carried out to the ships for ransom along with the women. A month later he was sold a license to spend ten years in Guadalajara for the sake of his health. In 1674 he left St. Augustine and thereafter replied to no letters. When the ten years were up Governor Marques Cabrera reported that no one knew whether Prado was dead or alive and asked that the office be refilled. An indifferent Junta de Guerra clerk replied that Prado had paid 600 pesos for the privilege of absenting himself for unlimited periods as he pleased.60 The "honors, deferences, graces, exemptions, liberties, preemi- 44 Proprietary Office nences, prerogatives and immunities" promised to the royal official in his title were as dear to him as his salary and substitutes and maybe more so, for they acknowledged his position as one of rank and privilege. He had precedence. He and his family were persons of consequence. Such perquisites of office were partly tangible and partly deferential. Tanible symbols of office were the official's staff ofoffice (vara), his key to the coffer; and his residence in a government house. In the seventeenth century it was a common sign of authority to carry a staff The governor had his baton and so had Indian chiefs. Staves and banners even served as metonyms for office. Nicolas Ponce de Le6n II said that "the banner of the militia company being vacant," his son Antonio was appointed company ensign. Governor Marques Cabrera, being rowed out to the waiting galley on the day he de- serted, threw his baton into the sea, crying, "There's where you can go for your government in this filthy place!"61 In his role of royal \ judge a treasury official bore one staff, and as a regidor he was entitled / to another. When the choleric Sergeant Major Espinosa, enraged at Nicolas Ponce de Le6n I, was restrained by companions from killing him, he called into the counting house to his adjutant to seize the accountant's symbol of authority and arrest him. The officer did so, breaking Ponce de Le6n's staff to pieces.62 Keys were symbols of responsibility as staves were of authority. When the warden of te castifllo made his oath of fealty to defend his post, he took charge of the keys of the fort and marched through its precincts with the public notary, locking and unlocking the gates.63 A similar ceremony was observed with a new treasury official, who received his key to the treasure chest and immediately tried it. In legal documents this chest was sometimes called the "coffer of the four keys," from the days when there would have been four padlocks on it, one for each official. At important treasuries another key was frequently held by the viceroy, the archbishop, or an audiencia judge, who sent it with a representative when the chest was opened. The royal officials resented this practice as impugning their honor.64 It was the treasury officials' privilege and duty to reside in the houses of government (casas reales) where the coffer waskeptf. These buildings varied in number and location along with the relocations of the town. During the sixteenth century St. Augustine moved about with the sites of the fort. According to Alonso de las Alas, the first 45 The King's Coffer presidio, known to him as "Old St. Augustine," was built on an island facing the site of the town he lived in. St. Augustine was moved "across to this side" when the sea ate the island out from under it. In its new location on the bay front the town had a guard- house, an arsenal under the same roof as the royal warehouse, and perhaps a customs-counting house at the dock.65 There were no official residences. Three successive governors rented the same house on the seashore-Governor Ybarra thought it a most unhealthful location. This St. Augustine, and a new fort on the island of SanJuan de Pinillo, were destroyed by Drake in 1586; a later St. Augustine, by fires and a hurricane in 1599.66 Disregarding Pedro Menendez's idea to move the settlement to the site of an Indian village west of the San Sebastian inlet, Governor Mendez de Canzo rebuilt it a little to the south, where the landing was better protected and a curving inlet provided a natural moat. He laid a bridge across the nearby swamp, sold lots, and bought up lumber. In spite of the treasury officials' disapproval he began paying daily wages to repartimiento workers and put the soldiers to work clearing land. To finance his real development he exacted contribu- tions from those with houses still standing, approved harbor taxes, cut down on bonuses and expense allowances, and diverted the funds sent for castillo construction. The king helped with four years of tithes, 276 ducats from salvage, and 500 ducats besides.67 Following Philip II's 1573 ordinances for town planning, Mendez de Canzo laid out the plaza in back of the landing: 250 by 450 feet, large enough for a cavalry parade ground. Around the plaza he constructed a new guardhouse, a royal warehouse doubling as treasury, and a gover- nor's mansion. He also built a gristmill and an arsenal and started a counting house onto which a customs house could be added.68 The royal officials might have the right to live in government houses, but they did not intend to move into quarters that were inadequate. In the time of Governor Salinas the crown finally ap- proved construction of suitable residences to be financed from local revenues and, when these proved insufficient, from the castillo fund. The proprietors were satisfied. "In all places where Your Majesty has royal officials they are given dwelling houses," they had said, and now there were such houses in St. Augustine.69 When the factor- overseer's position was suppressed a few years later the vacated third residence was assigned by cedula to the sergeant major70 46 Proprietary Office All the buildings in town at this time seem to have been of wood, with the better ones tiled or shingled and the rest thatched with palm leaves. By 1666 the government houses, including the counting house and the arsenal, were ready to collapse. A hurricane and flood leveled half the town in 1674, but again rebuilding was done mostly in wood, although there was oyster shell lime and quarried coquina available on Anastasia Island for the stone masonry of the new castillo.71 There seems to have been some subdividing of original lots. During the governorship ofHita Salazar; Sergeant Major Pedro de Aranda y Avellaneda bought a lot within the compound of the government houses close to the governor's mansion, although he had applied for a different one in the compound of the treasury and royal warehouse. The royal officials not only sold it to him but supplied him with the materials to build a house next to the gover- nor's. The next governor, Marques Cabrera, managed to block Aranda's building there, but not on the lot beside the treasury. Displeased with what he called the deterioration of the neighbor- hood, the governor turned the gubernatorial mansion into a public _1 inn and requisitioned for his residence the house of Ana Ruiz, a widow, two blocks away.72 The next governor, Quiroga y Losada, proposed to sell the gov- ernment houses and put up a new stone building to contain the governor's residence, the counting house, and the guardhouse. The royal officials could move into his renovated old mansion and their houses be sold.73 Six months later-suspiciously soon-the new government house was finished. Appraised at 6,000 pesos, it had been built for 500. Quiroga y Losada had not followed his own submitted plan, for the counting house, treasury, and royal officials were still housed as they had been, in buildings that he and the next governor repaired and remodeled in stone.74 When Colonel Moore and his forces arrived to lay siege to the castillo in 1702, the treasury was on the point of being re-shingled. When they marched away, nothing was left of any of the government houses except blackened rubble.75 If the tangible symbols of office were staves, keys, and residences, the deferential symbols of office were precedence and form of ad- dress. Precedence was a serious matter. Disputes over who might walk through a door first, sit at the head of a table, or remain covered in the presence of someone else were not just childish willfulness but 47 The King's Coffer efforts to define the offices or estates that would take priority and those that would be subordinate.76 When parish priest Alonso de Leturiondo locked the church on Saint Mark's Day because the governor had sent someone less important than the treasurer to invite him to the official celebration, it was not solely from offended pride. As he said, he must maintain the honor of his office.77 The order of procession at feast days and public ceremonies was strictly observed. Treasury officials, who embodied both fiscal and municipal dignities, took precedence over all exclusively religious or military authorities. The two first ministers of Florida at the local Acclamation of Philip V were the interim accountant and the treas- urer, "who by royal arrangement follow His Lordship in seat and signature." The accountant stood at the governor's left hand and the treasurer, serving as royal standard-bearer for the city, at his right, leading the hurrahs of "Castilla Florida" for the new monarch and throwing money into the crowd.78 In some treasuries precedence among the royal officials was determined by the higher salaries of some or by the fact that proprie- tors were regidores of the cabildo and substitutes were not. In St. Augustine, where these differences did not exist, the only bases for precedence were proprietorship and seniority. The one who first stepped forward to sign a document was the one who had been a proprietor in Florida the longest.79 The final right of a royal official was not to be mistreated verbally. The form of address for each level in society was as elaborately prescribed as the rest of protocol, and a lapse could only be regarded as intentional. The governor was referred to as Su Senioria (His Lordship) and addressed as Vuestra Setnoria or Vuestra Excelencia (Your Excellency), abbreviated to Vuselensia or even Vselensia in the dis- patches of semiliterate corporals.80 Governor Ybarra implored the Franciscan guardian to keep the reckless Father Celaya confined in the convent, for "if he shows me disrespect [on the street] I shall have to put him into the fort... for I must have honor to this office."81 Friars were called Vuestra Paternidad (Your Fatherliness). A Spaniard of one's own rank was addressed as Vuestra Merced (Your Grace), shortened in usage to Usarced, Usarce, or Busted (precursors of Usted).82 Only the king could address officials in the familiar form, otherwise used for children, servants, and common Indians. After Governor Mendez de Canzo had addressed TreasurerJuan Menendez 48 Proprietary Office 49 Marquez publicly as "vos," his epithets of "insolent" and "shameless one" were superfluous. The crown's reaction to such disrespect toward its treasury officials was to reprimand the offender and order him in future to "treat them in speech as is proper to the authority of their persons and the offices in which they serve us, and because it is right that in everything they be honored."83 As proprietor of the exche Aqur-thergasuxFfficiaI had the second highest salary in town, job tenure, free housing, and the opportunity ttef substitut s-dhis work In his connection with the garrison he could count on regular rations, supplies, and a career for his sons. Because he was regidor of the cabildo, the whole regional economy was laid before him to adjust to his advantage. And beyond all this were the prized "honors, deferences, graces, exemptions, liberties, preeminences, prerogatives and immunities." A proprietary official of the royal treasury was as secure financially and socially as any person could be who lived in that place and at that time. 4 Duties and Organization HE work of the treasury was conducted mainly in the houses SCof government: the counting house, the customs house, the royal warehouse and arsenal, and the treasury. For all of this work the royal officials were collegially responsible, and much of it they did together; but each of them also had his own duties, his headquarters, and one or more assistants. The organization of the treasury is shown in Table 2, with the patron or patrons of each position. Those posi- tions for which wages are known are in Table 3. The title of accountant called for training in office procedures. Roving auditors might find errors and make improvements in the bookkeeping system, yet this could not take the place of careful routine. In the words of New Spain Auditor Irigoyen: "The account- ant alone is the one who keeps a record of the branches of revenue and makes out the drafts for whatever is paid out, and any ignorance or carelessness he displays must be at the expense of Your Majesty's exchequer."1 Unfortunately, not every hidalgo who was appointed accountant enjoyed working with figures. Some left everything in the hands of subordinates, signing whatever was put in front of them. The accountant did not handle cash. He was a records specialist, the archivist who preserved royal cedulas, governors' decrees, and treasury resolutions. He indexed and researched them, had them copied, and was the authority on their interpretation. He kept the census of native tributaries-a count supplemented but not dupli- cated by the friars' Lenten count of communicants.2 It was his busi- 50 Duties and Organization 51 ness to maintain personnel files, entering the date when an individual went on or off payroll and recording leaves and suspensions. No one was paid without his certification. Sometimes the crown asked for a special report: the whereabouts of small firearms in the provinces, a list of Indian missions and attendant friars with the distances between them in leagues, a cost analysis of royal slave earnings and expenses, even an accounting of empty barrels. Instructions came addressed to TABLE 2 TREASURY ORGANIZATION AND PATRONAGE IN ST AUGUSTINE, 1591-1702 Patrons King and Royal Treasury Positions and Dates Created Council Governor Officials Council Other Treasury Council Governor x Accountant x Factor-overseer to 1628 Treasurer to 1628 Treasurer-steward (1628) x Interim officials x Substitute officials xa x Public and govt. notary to 1631 1631 onb Commissioned Agents Situador xc Procurador x Supply ship masters x Provincial tax collectors x Expedition tax collectors x Counting House Chief clerk (1593) x Assistant clerk (1635) x Lieutenant auditor (1666) xd Internal auditor to 1666 Customs House Customs constable (1603) to 1636 1636 on Chief guard (1630) x Guards (as needed) x Warehouse and Arsenal Steward x Rations and munitions x notary a. With the governor's consent. b. Auctioned. c. Chosen by auditor and governor. d. Most common practice; varied frequently. 52 The King's Coffer TABLE 3 WAGES AT THE ST AUGUSTINE TREASURY IN THE SEVENTEENTH CENTURY Salary Per Diem or Total without Daily Rations (in Position Rations (in reales) Bonus ducats) Proprietor 400,000 mrs/yr 2'/2 1,150 Proprietor as situador 400,000 mrs/yr 30 2,062 Captain as procurador 200 ducats/yr 15 20 ducats/mo 938 Interim or substitute official 200,000 mrs/yr 21/2 618 Lieutenant auditor 500 pesos/yr 21/2 444 Chief clerk 1,000 mrs/mo 21/2 200 pesos/yr 260 Chief guard 250 ducats/yr* 250 Steward 50,000 mrs/yr 21/2 217 Customs constable 1,000 mrs/mo 21/2 25,000 mrs/yr 182 Rations notary 5 ducats/mo 21/2 400 reales/yr 179 Public and govt. notary 1,000 mrs/mo 2/2 400 reales/yr 151 Assistant clerk 1,000 mrs/mo 21/2 50 pesos/yr 151 *This figure may include rations. all the royal officials and they all signed the prepared report, but the accountant and his staff did the work.3 The counting house was staffed by a number of clerks. Before their positions were made official the accountant sometimes hired an accounts notary (escribano de cuentas) out of his own salary.4 In 1593 the crown approved a chief clerk of the counting house officiall mayor de la contaduria) to be paid a regular plaza and 200 pesos from the bonus fund. When the accountant was away this clerk usually served as his substitute. The position of assistant clerk of the counting house officiall menor de la contaduria) with a salary supplement of 50 pesos a year was approved in 1635. The assistant clerk was also known as the clerk of the half-annate officiall de la media anata), although the half- annate was seldom collected.5 If the work load at the office became heavy, temporary help might be hired, but the king did not want this charged to his treasury. When Accountant Ponce de Le6n and his substitute allowed the books to get eight years behind, the other officials were told to deduct from salaries the cost of bringing them up to date.6 In 1688, soon after a third infantry company was formed, Accountant Thomas Menendez Marquez requested permission to hire a third clerk for the increased paperwork. Instead, he was or- dered to reduce his staff from two clerks to one-an order that was neither rescinded nor, apparently, obeyed.7 There was by that time another official at the counting house: a lieutenant auditor chosen by Duties and Organization the royal auditor and the governor to replace the internal auditor who had been appointed periodically. These two positions are discussed in Chapter 8. The treasury officials were a committee of harbor masters, regis- tering the comings and goings of people as well as ships. It was their duty to see that no one entered the provinces without the correct papers, or left without the governor's consent and their own fiscal release. Impetuous young Pedro de Valdes, betrothed to Menendez's daughter Ana, was probably the only person ever to stow away for Florida, but convicts, soldiers, and even friars tried to escape. The presidio's ships had to be manned by Indians and mixed-bloods who could be relied upon to return home.8 When the royal officials first began collecting harbor taxes, they recognized the need of a customs constable and inspector (alguacil y fiel ejecutor de la aduana) to record what was loaded and unloaded from ships. Otherwise they had to take turns at the customs house them- selves, which Alonso Sanchez Siez, at least, was unwilling to do.9 The crown approved the new position in 1603, with a 25,000- maravedi bonus and no doubt a percentage of goods confiscated.10 The governor appointed as first constable Lucas de Soto, a better sort of soldier sentenced to serve four years in Florida for trying to desert to New Spain from Cuba. By 1608 De Soto was in Spain with dispatches, receiving the salary of customs constable but not doing the work. In 1630 the crown approved a position of chief guard (guardamayor) for all ports, to be chosen by the treasury officials and to select his own assistants. In St. Augustine he was paid a respectable salary of 250 ducats. The royal officials soon objected that the gov- ernor appointed all the guards and was thus able to unload ships by night or however he pleased without paying taxes; the customs constable was no more than his servant and secretary. In response to their letter the officials were assigned patronage of the constable's post as well. Within ten years they too were letting him serve by proxy.11 It was a temptation to double up on offices and hire out the lesser one. In the early 1670s a Valencian named Juan de Pueyo came to St. Augustine and begat to work his way up in the counting house, beginning as the clerk of the half-annate. According to the treasury officials, since counting house salaries were low they also gave him the post of constable, which carried its own assistant in the chief 53 The King's Coffer guard of the customs house. Pueyo knew the importance of family. He was promoted to chief clerk around the time his wife's sister married the accountant's son. As chief clerk of the counting house Pueyo supervised the assistant clerk, and as customs constable, the guards. By 1702 the Valencian, serving as interim accountant, stood at the governor's left hand during the Acclamation of Philip V as one of the provinces' first ministers. For someone who had started as an under-bookkeeper he had come a long way.12 As early as 1549 the offices of factor and overseer had begun to be combined in the Indies. Two years before St. Augustine was founded the crown determined that the smaller treasuries did not need the factor-overseer either and that that official's duties could be divided between the accountant and the treasurer. A factor-overseer was named for Florida nevertheless, because Spanish occupation there began as an expedition of conquest: a factor was needed to guard the king's property, and an overseer to claim the royal share of booty and to supervise trade. The adelantado expected Florida to become an important, populous colony with port cities, which would need a manager of commerce.13 Although the St. Augustine treasury turned out to be a small treasury indeed, it kept a factor-overseer for over sixty years. He was the business manager who received the royal revenues paid in kind and converted them to cash or usable supplies at auction, whether they were tithes of maize and cattle, the king's share of confiscated goods, tributes, or the slaves of an estate under- going liquidation. Whatever was to be auctioned was cried about town for several days, for it was illegal for the treasury to conduct a sale without giving everyone a fair chance to buy. Cash was pre- ferred, but the auctioneer sometimes accepted a signed note against unpaid wages.14 It was the factor in a presidio, as in an armada, who was account- able for the storage and distribution of the king's expendable prop- erties: supplies, provisions, trade goods, and confiscated merchandise. For these duties he had an assistant called the steward of provisions and munitions (tenedor de bastimientos y municiones). The first steward for the enterprise of Florida, it happened, was appointed ahead of the first factor. Pedro Menendez named his friend Juan deJunco to the position while they were still in Spain. In 1578Juan's brother Rodrigo became factor-overseer and technically Juan's superior. Rodrigo 54 Duties and Organization suggested that stewards were needed at both St. Augustine and Santa Elena, and the crown agreed to consider it.1x The other officials saw the need of two stewards, but not of their colleague Rodrigo. Treasurer Juan de Cevadilla, shortly after he arrived, said that in the beginning a treasurer had been in charge of the armada provisions and supplies, assisted by a steward, who was paid 50 ducats a year above his plaza. If the same were done in Florida the factor's position could be abridged. Accountant Bartolome de Argiielles tried to speed the cutback by saying that it looked as if Factor Rodrigo de unco had nothing to do. The office of factor was meant for places with mines, he said. The work of an overseer- looking after musters, purchases, and fortifications-was done by the accountant in Havana, and Argiielles thought he could handle it in St. Augustine.16 The Council might have been more impressed with his offer had he not gotten the duties of factor and overseer reversed. In 1586 permission arrived for an extra 50,000 maravedis a year with which to pay two stewards. It was better to have persons with rewards and regular salaries in positions of responsibility, the au- thorizing official noted; a plain soldier could not raise bond, and losses would result. 17 Juan de Cevadilla, by now Rodrigo de unco's son-in-law, had a brother Gil who became the second steward. This convenient arrangement lasted until Cevadilla died in New Spain in 1591. Junco was promoted to governor but, on his way to St. Au- gustine from Spain, was shipwrecked and drowned in the St. Johns estuary along with Treasurer-elect Juan de Posada. The king's choice for a new factor never made the trip to Florida. For the time being, Accountant Argiielles was the only royal official. With Santa Elena permanently abandoned there was need of only one steward. Argiielles persuaded the incoming governor to remove both Gil and Juan and install Gaspar Fernindez de Perete instead, on the full 50,000 maravedis salary.18 The accountant's 1591 instructions to the new steward show the care with which royal supplies and provisions were supposed to be guarded.19 Fernandez de Perete must not open the arsenal save in the presence of the rations and munitions notary, a constable, and the governor. To guarantee this, it had three padlocks. He must keep the weapons, matchcord, gunpowder and lead safe from fire. (There was little he could do about lightning. In 1592 a bolt struck the powder 55 The King's Coffer magazine and blew up 3,785 ducats' worth of munitions.) The stew- ard must protect the provisions against theft and spoilage, storing the barrels of flour off the ground and away from the leaks in the roof; keeping the earthenware jars of oil, lard, and vinegar also off the ground and not touching each other in a place where they would not get broken; examining the wine casks for leakage twice a day and tapping them occasionally to see whether the contents were turning to vinegar in the hot wooden buildings.20 It was the steward's re- sponsibility to keep a book with the values by category of everything kept in the warehouses, from ships' canvas to buttons. Once a year the royal officials would check this book against the items in inven- tory. Anything missing would be charged to the steward's account. On the first of every month they and the governor would make a quality inspection, in which anything found damaged due to the steward's negligence was weighed, thrown into the sea, and charged to him.21 Argiielles' opportunity to supervise the steward did not last. A new factor-overseer, Alonso de las Alas, quickly established his authority over the steward's position, which he had once held. When Las Alas' suspension was engineered a few years later the governor replaced both him and the steward with Juan L6pez de Aviles, a veteran of the Menendez armada.22 The harried interim official complained that of all the officials he was the busiest and most exposed to risk, answering for the laborers' wages, the royal ships, and the slaves, besides the rations and supplies.23 After Factor Las Alas was reinstated and recovered control of the warehouses, he used them for storage of his own goods flourr hardtack, wine, meat, salt, blankets), and, through a false door, the king's property found its way into his house. It was said that 100,000 pounds of flour went out through that door to be baked into bread and sold in one of the shops he and the treasurer owned in town-shops they secretly supplied with unregistered merchandise. Governor Fernandez de Olivera suspended them both. His interim appointees to the treasury found Las Alas short 125 pipes of flour; 5,540 pints of wine, 1,285 pints of vinegar and 94jugs of oil-and this was only in the provisions.24 On his way to defend himself before the Council, Las Alas was wounded by a pirate musketball and died owing the crown 5,400 ducats. Hoffman and Lyon, following his story, were surprised to discover 56 Duties and Organization three subsequent cedulas praising Las Alas' integrity and services during an attempted colonization of the Straits of Magellan to fore- stall Drake. An heir of the twice-suspended factor was granted 200,000 maravedis, the salary which had accumulated after Las Alas' death while his post was vacant.25 Although the king filled the factorship once more (or honored a commitment) withJuan de Cueva, this time was recognized to be the last. Governor Salinas suggested in 1620 that an accountant and a steward were all Florida really needed. The Council, considering his letter four years later; recommended that the royal will be to suppress the office of factor-overseer in Florida and combine the positions of treasurer and steward. A certain delay in implementing this will would be unavoidable, treasury offices being lifetime appointments, but Florida officials might be given first consideration for vacancies elsewhere. In 1624 Francisco Ramirez, the accountant, was offered a transfer to the treasury soon to be established at the mines of San Luis Potosi in New Spain; Factor Juan de Cueva was to become Florida accountant in his place. Ramirez declined to move. In 1628, the year Francisco Menendez Marquez won his case to be recognized as Florida treasurer; Cueva began serving as accountant in place of Ramirez, who was semiretired. He may have continued his stewardship duties as well, but not with his old title of factor- overseer. After the king's new appointee arrived in 1631, Cueva left for San Luis himself, to be that treasury's accountant.26 The treasurer's individual functions were those of a cashier. He received the royal revenues paid in specie and disbursed the sums that he, the other officials, and the governor had approved. The coffer was his particular responsibility; he lived in the building where it was kept. Because little money got to Florida the duties of this office were light. The gossipy Accountant Argiielles said that once the yearly payroll had been met the treasurer had nothing to do.27 Perhaps this was why in 1628, the year the factorship was suppressed, the duties of steward were given to the treasurer and the position treasurer- steward was created.28 In vain Accountant Nicolis Ponce de Le6n warned the king that letting Treasurer Francisco Menendez Marquez have access to the supplies as well as the money would make him more powerful than the accountant and the governor together.29 In 1754, after three sons and a grandson of Francisco had served their 57 The King's Coffer own proprietorships in the treasury, a Bourbon king took the further step of suppressing the accountancy and reducing the number of officials to one, the treasurer.30 But that is outside the scope of this study. Every day except Sundays and feast days the royal officials went to the work of the day directly after meeting at morning mass.31 It might be the day for an auction, or for the monthly inspection of munitions and supplies. When a pilot came in from coastal patrol his declaration of salvage and barter had to be taken and his equipment and supplies checked in. A deputy governor in from the provinces would present his report of taxes collected, or a new census of tributaries. The masters of supply ships brought in their receipts and vouchers to find out what balance they owed to the treasury. If the ship had brought a situado, the time required would be magnified several times. Once a week the treasury officials held a formal treasury council (acuerdo de hacienda) attended by the public and governmental notary (escribano ptiblico y de gobernaci6n). Without this notary's presence there could be no legal gathering for government business, no public pronouncement, and no official action or message. Any letter not in his script was considered a rough draft; his signature verified a legal copy. The public and governmental notary was paid a plaza plus salary, which began at 100 ducats a year but around 1631 was reduced to 400 reales. Since no money or supplies passed through his hands he did not furnish bond.32 Although in his public office a notary was supposed to be impartial and incorruptible, it was hard for him to oppose the governor who had appointed him, could remove him, and might fine him besides.33 Captain Hernando de Mestas, in a letter smuggled out of prison, said that the notary was his enemy and had refused him his office. "The former notary would not do what he was told," said Mestas, "so they took the office from him and gave it to the present one who does what they tell him, and he has a house and slaves, while I am poor."34 In an effort to get the notaryship out of the governor's power the royal officials suggested to the crown that the position be put up for public auction. The idea was quickly taken up, but the new system probably changed little. In a town both inbred and illiterate, notaries were not easy to find. When Alonso de Solana was suspended from that office by the king's command, and again when he died, the highest bid to replace him (and the one the royal 58 Duties and Organization officials accepted) was that of his son.35 For these reasons of autoc- racy, patronage, and inbreeding, little reliance can be placed upon local testimony about a controversial topic. As the bishop of Tricale, visiting St. Augustine in the eighteenth century, explained: "Here there is a great facility to swear to whatever is wanted."36 At their treasury council the royal officials checked the contents of the treasure chest against the book of the coffer. Whatever had been collected since the last time the chest was open was produced; they all signed the receipt and saw the money deposited in the coffer and entered into the book of the coffer kept inside it. It was not unheard of for an official to keep out part of the royal revenues, so that they never entered the record at all. As long as the chest was open, those vecinos using it as a safe might drop by to make a deposit or to check on the contents of their own small locked cases, for the treasure chest was the nearest thing to a bank vault in town. The treasury officials were not supposed to borrow the king's money or lend it to their friends, but from the number of deficits found by auditors, they must have done so regularly. When detected, they had to replace the money within three days or face suspension.37 The treasury was not the only source of credit. Soldiers pledged their wages to provide bond for an officeholder or situador. Shipowners sold their vessels to the presidio on time. To ration the garrison Governor Hita Salazar borrowed produce stored up for sale by the Franciscans.38 Even Indians operated on the deferred payment plan. The Christians of San Pedro sold cargoes of maize to the garrison on credit and so did the heathen on the far side of Apalache. The Guale women who peddled foodstuffs and tobacco required a pledge of clothing from Spanish soldiers, and it was complained that they returned the garments in bad condition.39 After the examination of the coffer the royal officials went on to important deliberations. Dispatches from the crown addressed to "my royal officials" were opened by all of them together. Their replies, limited to one subject per letter were signed collegially except when one official in disagreement with the others wrote on his own. The notary wrote up a resume of actions taken, which each official signed. If there had been disagreement each one signed after his own opinion (parecer), but the vote of the majority carried. The minutes of the meeting, in the book of resolutions (libro de acuerdos), had the force of a judicial action.40 59 The King's Coffer The governor was entitled to attend the treasury council and vote in it, but not to put a lock of his own on the coffer. The royal officials considered it a hazard to the treasury for the governor to hold a key to it, and the crown agreed. No one should have access to the king's money, plate, and jewels without corresponding responsibility for them. Governors in Florida were forbidden to put a lock on the royal coffer by a 1579 restraining order that was repeated after further complaints in 1591 and 1634.41 The governor's vote, although not decisive, had inordinate weight on the council, especially after the number of officials was reduced from three to two. The treasury officials were aware of what this meant. In 1646 they wrote: The governor who advised Your Majesty on this could have had no other motive than less opposition to his moneymak- ing, because three, Sire, are not as easy to trample on as just two; and besides, if one of them combines with the governor what can the other poor fellow do, in a place where a governor can do whatever he likes?42 The governor was most definite about this right when it was time to commission someone to collect the situado. Once a year the treasury council met. to decide on the year's situador and what he should bring back from Mexico City, Vera Cruz, and Havana. To- gether they wrote out his instructions, signed his powers of attorney, and accepted his 2,000-ducat bond. Since it was not feasible for the agent to contact them after he left, they must choose someone of independent judgment and wide experience-one of themselves, thought the royal officials. It did not deter them to remember that a proprietor on per diem drew higher wages than the governor. Mar- ques Cabrera, who was suspicious about most of what royal officials did, pointed out that the situador was supposed to answer to them and they could not fairly judge themselves. A governor usually proposed someone from his household. Nevertheless, of the twenty-one royally appointed officials from 1565 to 1702, seventeen are known to have been situador at least once, and many went several times.43 Another important commissioned agent was the procurador, or advocate, who represented the presidio on trips to Spain. As his primary duty was to bring back soldiers and military supplies, the procurador was usually either an officer with a patent from the crown 60 Duties and Organization or someone from the governor's house who had been appointed an officer ad interim. When he had time the procurador conducted business of his own, and for many this was their main object. The colony was allowed two ships-of-permission per year. Pelts worth 2,000 or (after Governor Salinas requested an increase) 3,000 ducats could be taken to Spain and cargoes brought back for resale, if anyone wanted to bother. Procurador Juan de Ayala y Escobar, whose career has been studied by Gillaspie, found the Spanish trade worth his while. His instinct for scenting profit in war and famine was some- thing the crown overlooked in return for his keeping St. Augustine supplied.44 At the weekly treasury council, bills were presented. The treasury officials examined the authorization for each purchase, along with the affidavit verifying the price as normal, the bill of lading, and the factor's or steward's receipt for goods delivered. If everything was in order the bill was entered into the book of libranzas (drafts), which was a sort of accounts payable. Each entry carried full details of date, vendor, items, quantity, price, and delivery, as well as the signed certifications of the royal officials and the governor Such entries had great juridical value. A libranza, which was acceptable legal tender was no more than the notarized copy of an entry. Sinchez-Bella maintains that the libranzas, or drafts on the royal treasury, were the primary cause of friction between governors and royal officials in the Indies.45 The governor of Florida had been ordered to make with- drawals in conjunction with the officials of the treasury, but many executives found this too restricting. When Governor Mendez de Canzo ordered payments against the officials' advice he did not want their objections recorded; Governor Treviiio Guillamas told them flatly that it was his business to distribute the situado, not theirs; Governor Horruytiner presented them with drafts made out for them to sign.46 The officials were not as helpless as they liked to sound. The governor endorsed the libranzas, but they had the keys to the coffer.47 Governors tried several methods of managing the royal officials. One was by appointment and control of their notary. Another was interference with the mails. Francisco de la Rocha and Salvador de Cigarroa complained that few of the cedulas they were supposed to index ever reached them. Another set of officials, going through the desk of Governor Martinez de Avendafio after his death, found many 61 The King's Coffer of their letters to the crown, unsent. Interim Governor Gutierre de Miranda stopped the passage of all mail, confiscating one packet smuggled aboard a dispatch boat in ajar of salt.48 Other governors operated by the threat of fines. IfArgiielles did not finish a report on the slaves and the castillo within six days-500 ducats; ifSinchez Saez did not remain on duty at the customs house-500 ducats; if Cuevas, Menendez Marquez, and Ramirez did not sign the libranza to pay the governor's nephew ahead of everyone else-500 ducats each, and 200 ducats from the notary.49 A truly intransigent official could be sus- pended or imprisoned in his house-a punishment that carried little onus. Governor Ybarra confined Sinchez Saez three times. Govern- ors Fernandez de Olivera and Torres y Ayala both arrived in St. Augustine to find the officials of the treasury all under arrest.50 While the governor could attend an ordinary treasury council and exercise a vote equal to a royal official's, he could also summon a general treasury council (acuerdo general de hacienda) with an agenda of his own. The crown insisted at length that there be no expense to the exchequer without prior approval, yet it was grudgingly conceded that in wartime, at a distance of 3,000 miles, emergencies could arise. At a general treasury council the governor authorized extraordinary expenditures personally and had the royal officials explain later. On the pretext that the counting house needed repairs, Governor Mar- ques Cabrera had all treasury councils meet at his house and, using the excuse of seasonal piracy and Indian raids, spent as he pleased.51 Their reports could be depended on to get such a headstrong executive into trouble, but the treasury officials did not wholly rely on the slow workings of royal justice. St. Augustine had its own ways to bedevil a governor and to make him write, as Marques Cabrera did to the king: "Next to my salvation there is nothing I long for more than to have the good fortune of leaving this place to wherever God may help me-anywhere, as long as I shall find myself across the bar of this harbor!"52 62 5 The Situado HE Florida coffer had two sources ofincome:;,Qcally- generated Royal taxes and revenues, which will be treated in the next chapter, and the situado, basically a transfer between treasuries. Although the connections between principal treasuries were tenuous and debts against one were not collectible at another; a more affluent treasury could have charged upon it the upkeep of defense outposts along its essential trade routes. The shifting fortunes of the West Indies can be traced in the various treasuries' obligations. At first the House of Trade and Santo Domingo did most of the defense spend- ing; then it was the ports of the circum-Caribbean. By the 1590s the viceregal capitals and presidencies had assumed the burden: Lima provided the situado for Chile; Cartagena, the subsidies of Santa Marta and Rio de la Hacha; Mexico City, those for the rest of the Caribbean.1 In Pedro Menendez's contract with the king (as rewritten fol- lowing news of a French settlement in Florida) the adelantado was promised certain trade concessions, the wages for 300 soldiers, and 15,000 ducats. This was the first stage of the Florida subsidy, lasting three years. When the contract was renewed, along with its trading privileges, only 150 men were provided for and their wages were to be taken from Menendez's new armada's subsidy, which was funded equally by the Tierra Firme and New Spain treasuries. In 1570 the Florida subsidy was separated from that of the Indies Fleet, though Florida support remained a charge on Tierra Firme, along with a new 63 The King's Coffer subsidy for Havana.2 When the Tierra Firme treasury was divided in 1573 into one at Nombre de Dios and one at Panama, Philip II moved responsibility for Florida's subsidy to the New Spain coffer of Vera Cruz, which had financed the luckless Tristan de Luna expedition to the Pensacola area in 1559. In 1592 the obligation was transferred to the royal treasury in Mexico City, where it remained for the rest of the Habsburg period.3 The situado was not a single subsidy but a cluster of them, mostly based on the number of authorized plazas in the garrison. The 23,436-ducat total that the officials of Tierra Firme were told to supply yearly beginning in 1571 consisted of 18,336 ducats to ration 150 men and 1,800 ducats to pay them (at the rate ofl ducat a month as in the fleet of Menendez), 1,800 ducats for powder and ammunition, and 1,500 ducats for "troop commodities."4 When Pedro Menendez Marquez went to Florida with reinforcements to restore a fort at Santa Elena, the crown doubled the size of the garrison but increased the subsidy by only four million maravedis, or 10,668 ducats. This was corrected in 1579 when the situado was raised to 47,770 ducats. Soon afterward, the crown accepted the new royal officials' plan for collecting the situado themselves, by turn, and administering the supplies.5 The total did not change substantially for the next ninety years. An inflationary rise in the cost of provisions was absorbed by the soldiers, whose plazas were converted to a flat 1,000 maravedis per month (115 ducats a year) to cover both rations and wages. In an effort to economize, and at the recommendation of Governor Men- dez de Canzo, the crown attempted to return the garrison size to 150 men; it succeeded only in making his successor; Governor Ybarra, unpopular.6 Active strength was diminished more gradually and effectively by increasing the number of "useless persons" (intitiles) holding the plazas of soldiers. Perhaps most of these were friars. In 1646 a ceiling was set of 43 Franciscans to be paid out of the subsidy and the additional ones became supernumerary, covered by a separate situado for friars which their lay treasurer; or syndic, was permitted to collect directly. Later in the seventeenth century the same threat from the north that stimulated the construction of Castillo de San Marcos brought an increase in the size of the garrison. Fifty plazas were added in 1669, and in 1673 the 43 friars became supernumerary along with their colleagues. This gave the presidio an authorized 64 strength of 350 soldiers, which in 1687 the crown increased to 355.7 The situado was not equivalently raised by 55 plazas of 115 ducats. Its yearly total in 1701 was only some 70,000 pesos, or about 51,000 ducats.8 Within that slowly rising total the nonplaza subsidies had varied considerably since 1571. New funds had been created, while old ones had been reduced in amount, changed in purpose, or elimi- nated. A governor appointed to Florida usually left Spain on a presidio frigate loaded with troops for the garrison and also with armor, gunpowder; and ammunition. The money for these essential military supplies was sometimes advanced by order of the crown from one of the funds at the House of Trade, the amount being deducted from the next situado by a coffer transfer. In wartime, a presidio-appointed procurador made extra trips to Spain for materiel. The funds for these large, irregularly spaced expenditures accumulated in a munitions reserve. The 1,500 ducats for "troop commodities" was a bonus (ventajas) fund used for increasing the base pay of officers and of soldiers on special assignment, such as working in the counting house or singing in the choir It doubled with the size of the garrison in the 1570s, but after the temporary reduction during the governorship of Ybarra the second 1,500 ducats was not restored. Periodically the crown asked for a list of bonus recipients, and any change was supposed to receive its approval.9 In time, bonuses were used like plazas, to reward or to pension petitioners. As was to be expected, the crown was more generous in allocation than in fulfillment, and recipients waited years for "first vacancies" and futuras of bonuses in Florida, as grantees waited elsewhere in the Indies for encomienda revenues.10 Toward the end of the seventeenth century the bonus fund was liquidated. As the holders of bonuses transferred or died, their portions were applied toward officers' salaries in the third infantry company, formed in 1687.11 In 1577 a new fund was added to the situado when Governor Menendez Marquez and the treasury officials were given permission to collect half their salaries out of it. The governor's half-salary was 1,000 ducats. When there were three proprietary officials in the treasury, each one getting 200,000 maravedis (533 ducats) in cash, the figure for administrative salaries came to 2,600 ducats a year. Menen- dez Marquez soon got permission to draw his entire salary from the The Situado 65 The King's Coffer situado-for a limited period, he was cautioned; but the privilege was extended to succeeding governors, raising the budget for salaries by 1,000 ducats. This was reduced by one royal official's half-salary when the position of factor-overseer was suppressed. Only when an auditor was residing in St. Augustine did the salaries fund rise above 3,067 ducats.12 In 1593 the crown authorized an unspecified fund for making gifts to Indians: thegasto de indios. Perhaps it was meant to take the place of the allowance for munitions, for Philip II was serious about his pacification policy. Those on the scene never achieved unanimity over whether to accomplish the conquest by kindness or by force. The friars once asserted that the cost of everything given to the natives up to their time, 1612, would not have bought the matchcord to make war on them. Anyway, since they moved about like deer, without property, there was no way to defeat them. Juan Men6ndez Marquez, an old Indian fighter had a contrary view. He observed that since the time of his cousin not one governor had extended a con- quest or made a discovery: all had gone about gratifying the Indians at the expense of His Majesty. This was not totally true, but it should not have been surprising. It must have been more pleasant to sail up the Inland Waterway, as Governor Ybarra did in 1604, distributing blankets, felt hats, mirrors, beads, and knives, than to burn houses and trample crops.13 In 1615 the Indian allowance was set at 1,500 ducats, but little effort was made to stay within it. Governor Rojas y Borja made 3,400 ducats' worth of gifts in a single year to the Indians, who called it tribute. Governor Salazar Vallecilla and the royal officials who substi- tuted for him distributed an average of 3,896 ducats' worth, and in one year 6,744.14 Unquestionably, part of this was used for trade, but when the Indian allowance was reduced or withheld, the chiefs attached to the Spanish by that means became surly. Eventually, the fund was used for purposes far from its original intention. Two hundred ducats and two rations of flour were assigned in 1698 from the "chiefs' fund" to the organist of the parish church and two altar boys respectively.15 The base figure of the situado was not necessarily the amount of money supplied. Superimposed upon it were the yearly variations. Occasionally funds were allocated for some special purpose: 26,000 pesos to rebuild the town after the 1702 siege of Colonel James 66 The Situado Moore; a full year's situado to replace the one stolen by Piet Heyn, and one lost in a shipwreck; 1,600 pesos to pay the Charles Town planters for runaway slaves the king wished to free.16 Additional money was sometimes sent for fortifications: commonly 10,000 ducats or pesos, delivered in installments.17 It was characteristic of these special grants that they were seldom used for the intended purpose. A greater emergency would intervene; the governor and royal officials would divert the fund to that, explain their reasons, and demand replacement.18 When it was possible the crown obliged. In the sixteenth century the officials of the supporting treasury were supposed to ask for a muster of the garrison and deduct the amount for vacant plazas from the situado. During the seventeenth century it was more common to use the surplus (sobras) from inactive plazas as a separate fund.19 In 1600, encouraged by the presence of a royal auditor; the officials volunteered that funds were accumulating in the treasury from the reserve for munitions, the freight on presidio vessels, and royal office vacancies. The surplus amounted to around 60,000 pesos by 1602-almost a whole year's situado. They sug- gested that as it was difficult to find the revenues locally to cover the unpaid half of their salaries, they could draw on these reserves.20 The king's financial advisors, greatly interested, told the officials of the Mexico City treasury to send the next Florida situado to Spain and to reduce future situados to reflect effective rather than authorized strength. From the Florida royal officials they asked an accounting of all unused monies to date. Much later; other officials received permis- sion to collect the rest of their salaries from reserves, but the reserves no longer existed.21 The crown had its own opinions on likely surpluses and what to do with them. The soldiers evacuated from Santa Elena in 1587 were reimbursed for their lost property by 1,391 ducats from the surpluses of the situado. In 1655, the year the English tookJamaica, the treasury officials were ordered to use the unpaid wages of deserters and the deceased to improve the presidio's defenses. Accountant Santos de las Heras objected that deserters forfeited their wages, the back wages of the deceased without heirs went to purchase masses for their souls, and, with situados three years behind, nobody was being paid any- way. The king's advisors replied that the accountant was to pay the living first and let the dead wait. Twenty years later a royal order arrived to use the unclaimed wages of deserters and deceased to 67 The King's Coffer provide plazas to crippled noncombatants.22 Not long afterward the royal officials were told to report on the funds from vacant plazas. It was the governor's prerogative to allocate the surplus, not theirs, the crown pointed out-a moot point, since a separate cedula of the same date instructed the governor to use the money on the castillo. At the end of the Habsburg period the viceroy of New Spain was instructed to send the surplus of the 1694 subsidy to Spain. It amounted to a third of the sum earmarked for plazas.23 There were many problems with the situado, part due to un- avoidable shortages and part to venality and graft. From the begin- ning there was a scarcity of currency. Both the Vera Cruz and the Mexico City officials were instructed to deliver the situado to its commissioned collectors in reales, since that was the coin in which to pay soldiers. Yet in spite of repeated injunctions to pay in coined reales (reales acuiados) the officials preferred to keep their specie at home. Instead they supplied silver in crudely shaped and stamped chunks calledplanchas or even in assayed ingots (plata ensayada) which the soldiers chopped into pieces. In 1601 Accountant Juan Menendez Marquez acting as situador could collect only 37 percent of the total in coin.24 Throughout the Habsburg period hard specie continued to be scarce in the provinces. In 1655 Auditor Santa Cruz estimated that in twenty years not 20,000 pesos in currency had entered the presidio. In place of money the creoles used such expediencies as yards of cloth or fractions of an ounce of amber.25 Wages were paid either in imported goods at high prices, in obsolete or inappropriate things the governor wanted to be rid of, or in libranzas or wage certificates that declined drastically in value and were bought up by speculators with inside knowledge.26 Although two resourceful Apalaches were caught passing homemade coins of tin, Indians ordinarily used no money, but bartered in beads, blankets, weapons, twists of tobacco, baskets, horses and other livestock, chickens, pelts and skins, and cloth. Great piles of belongings were gambled by the players and spectators at the native ball games. The Spanish governors begged for some kind of specie to be sent for small transactions and suggested 7,000 or 8,000 ducats in coins of silver and copper alloy (vellon) to circulate in the Florida provinces. Where there was no money, they explained, people were put to inconvenience.27 The quality of silver was another problem. In 1612 the Florida 68 officials sent over 1,000 reales' worth of miscellaneous pieces to the House of Trade for the receptor of the Council to buy them presidio weapons. The silver from Florida turned out to be of such low fineness that no one would accept it at more than 43 reales the mark. The crown demanded to be told the source of such degraded bullion and specie. In reply, the royal officials admitted that part of the consignment was in adulterated silver and clipped coins, but they had sent it as it had been received in fines, which the crown, in order to retire what was debased, allowed to be paid in any silver bearing the royal mark. They protested, however, that most of the offending silver had come from New Spain and could not be used in Florida, where the soldiers were supposed to be paid in reales. Certainly the base alloy had not been added to the silver while it was at their treasury.28 Over the seventeenth century the viceroy and royal officials of the Mexico City treasury allowed the situados to fall seriously behind. By 1642 the drafts against unpaid situados amounted to 250,000 pesos, four times the yearly subsidy. Four years later the situador was forced to ask for a cedula ordering the Mexico City officials to turn over the current situado to him instead of to Florida's creditors. In 1666 the situados were seven years behind, or some 461,000 pesos. In 1703 they were again 457,000 pesos in arrears.29 Although something was applied to these arrearages from time to time, the case seemed hopeless to the unpaid soldiers and to the local men and women who made their shoes and did their laundry. The crown set guidelines for paying back salaries in a fair manner, then circumvented its own instructions by giving out personal cedulas for some individuals to collect their wages ahead of the rest.30 The officials at St. Augustine treated payments toward back situados as a totally fresh and unex- pected revenue. They inquired in writing whether such money might not be used to build a stone fort or to found Spanish towns.31 The practice of letting some subsidies fall into arrears created new expenses to consume the other ones. Some of these expenses were for servicing the presidio's loans. A loan was taken out at the Mexico City treasury as early as 1595. Governor Salinas, in an effort to consolidate the treasury's debts, asked in 1621 for another loan of 30,000 or 40,000 pesos to be paid off in installments of 2,000 pesos from every situado. The crown was unhelpful about retiring this debt. A representative of the Council, making a grant of 150 ducats in The Situado 69 The King's Coffer 1627 to Florida's sergeant major observed that the money was to come from the situado surpluses as soon as there were any, "which will not be for many years because it is so far in debt now."32 In 1637 Governor Horruytiner inquired about yet another loan to pay the soldiers, who had had no wages in six or seven years. 33 The Francis- cans, dependent like the garrison on the situado, did their borrowing separately. In 1638 they were given permission to take out a travel loan from a fund at the House of Trade. Twenty years later they took out another, and in 1678 they were again forced to borrow, probably against their subsidy, paying 8 percent on a loan for 3,567 pesos.34 When the cost of credit was built into a bill of exchange to circumvent usury restrictions, the-price could be steep. The spice merchants (mercaderes en drogas) who exchanged the notes against unpaid situados discounted them 18 to 75 percent. Soldiers trying to spend their certificates for back wages were obliged to pay higher prices and accept inferior goods.35 Collection charges were nothing new. In 1580, when the subsidy came in care of the governor of Cuba, he kept out 530 ducats for himself, and the collectors charged an exorbitant 1,000 ducats. In the new system initiated by the proprietors who took office that year; one of them went for the situado, receiving an expense allowance of 1,000 maravedis (rounded to 30 reales) a day, double the per diem for a procurador or envoy to Spain. In the six or seven months that a situado trip was supposed to last, the per diem came to 500 or 600 ducats. The largest collection expense was probably for the bribes in the viceregal capital. Accountant-Situador Santos de las Heras said ruefully that to get anything accomplished there cost "a good pair of gloves. "36 Transportation costs varied according to whether the ships were chartered or presidio-owned. In 1577 it cost 2,000 ducats to bring a year's worth of supplies from New Spain in two hired frigates; the governor said that owning the ships would have saved three-fourths of it. Yet in 1600, Juan Menendez Marquez as situador had to charter three boats in San Juan de Ulua and a fourth in Havana.37 Auditor Santa Cruz, who wanted the Florida situado to pass through his hands, declared that the governor of Florida once had seven different situadores in Mexico City simultaneously, suing one another over who was to make which collections and receiving 30, 40, or 50 reales a day apiece while their boat and crew expenses 70 The Situado mounted in Vera Cruz. A single trip cost the treasury nearly 30,000 pesos, he said, out of a subsidy of 65,000. Ten years later the auditor added that the bribes at the Mexico City treasury came to 20,000 pesos, of which 18,000 went to the greater officials and 2,000 to the lesser Any situador could make a profit of 26,000 pesos, Santa Cruz insisted, by borrowing money to buy up Florida wage certificates and libranzas at a third to a half of their face value, then redeeming them at face value with situado funds. The rest of the money he could invest in clothing to be resold to the soldiers at high prices.38 Parish priest Leturiondo's accusations were vehement on a smaller and perhaps more accurate scale. The situador discounted 15 or 16 percent collection expenses from the priest's small stipend, he said, and took up to two years to deliver the items ordered.39 Partly because of the shortage of currency and the inadequate harbor-but also because Florida's east coast had little to export, once the sassafras boom ended-St. Augustine was not a regular port of call. This meant that whoever was chosen collector of the situado must double as garrison purchasing agent. Wine and flour produced in New Spain and sold in Havana cost over twice as much there, in 1577, as the same provisions in Spain. Governor Menendez Marquez found it necessary to exchange situado silver for gold at a loss and send it to Spain by a light, fast frigate, to buy meat and olive oil. In 1580 the presidio obtained permission to send two frigates a year to the mother country or the Canaries, but as prices and taxes rose there, flour and other foodstuffs had to be found increasingly in the col- onies.40 The rare accounts written by situadores en route describe the difficulties of collection, purchasing, and transportation from their point of view. After giving bond and receiving his instructions and power of attorney, the situador was issued a boat and crew. He left them in the harbor of San Juan de Ulia and journeyed up the road past Puebla de los Angeles to Mexico City. There he paid the appro- priate bribes and waited for his report on presidio strength to be checked, his supply ship's tonnage approved, and the situado deliv- ered. All of this took time. The situador executed private commis- sions, saw friends, and enjoyed a taste of big-city life. Perhaps he put a portion of the king's money out at interest or made other imaginative use of it. By the early seventeenth century, household items, coarse fabrics, and Indian trade goods were available at the workhouses of 71 The King's Coffer Mexico City and Puebla. In Vera Cruz there was flour of question- able quality. The paperwork for this large-scale shopping took more time, for local fiscal judges had to supply affidavits that Florida was not being charged an inflated price. Loading at San Juan de Ulhia proceeded relatively undisturbed by port authorities: presidio supplies were exempt by royal order from either sales tax or customs. With his ship loaded, the situador waited with his counterparts from other Caribbean presidios for a warship to escort them and carry their registered money as far as Havana.41 Floridians preferred to avoid this stop if they could, for creditors lay in wait at the Havana harbor and Cuban officials acting in the best interests of their island would attempt to attach part or all of the situado. The crown, which had interests of its own, might have sent them instructions to impound the situado for use in Spain. With creditors and crown outfoxed, the situador might still face a long wait until the coast guard reported the seaways clear of corsairs and the fleet was ready to sail northward through the Bahama Chan- nel. There is no telling how much of the Florida situado in both supplies and specie was lost en route.42 Buccaneers grew so bold in the late seventeenth century that they sometimes waited at anchor outside the St. Augustine harbor. To elude the enemy, Floridians crossed their bar at low tide; or they sailed in September and October under great danger of storms. The likelihood of disaster was com- pounded by defective ships. The Nuestra Seiora del Rosario capsized in the very harbor of San Juan de Ulia with 3,000 pesos' worth of supplies aboard. Another vessel, apparently being bought on time, was lost off Key Largo and the crown strongly advised that payment be stopped on it.43 A lost subsidy might be ordered replaced, but the sum could only be added to the arrears the Mexico City treasury already owed to the presidio. Safely unloaded in St. Augustine, the situador faced a personal obstacle: the rendering of his accounts. The royal officials checked his purchase invoices against goods delivered, comparing prices with affidavits; they examined his expense receipts and counted the money he turned over. The total ofinvoices, receipts, and cash must equal the amount of situado in his notarized papers of transmittal. For any shortage he was personally liable. The closing ofa situador's accounts might be delayed years waiting for all papers to arrive and be in order When the situador was expected, the officials went into action. 72 The Situado The public and governmental notary presented an up-to-date mus- ter; the master of construction turned in the number of days' labor owed to soldiers. From these and his own records the accountant certified the gross amount due each person on plaza. The factor or his steward (later, the treasurer-steward) supplied the total each soldier had drawn from the royal warehouse against his wages. The ac- countant deducted this figure, plus the compulsory contributions to service organizations and the notes presented by preferred creditors, to arrive at the net wages the treasurer should count out from the coffer Merchant Antonio de Herrera once brought the royal officials a list of 182 men in his debt for clothing and small loans. Although Governor Salinas authorized payment via payroll deduction Herrera was exiled shortly afterward. A few years later he reappeared by special, unexplained permission of the Council, and the soldiers were soon in debt to him again. Salinas, pleading their poverty, paid him with surplus situado funds. Governor-elect Rojas y Borja, of a more accommodating temperament, before he ever left Spain advanced Herrera directly from ensign to sergeant major of the garrison-an unlikely promotion for which the loan shark must have paid hand- somely.44 Any time a situador brought actual cash, St. Augustine became a busy place. Tables were set up in front of the guardhouse, and as the roll was called each man came by, picked up his wages, and took them to the next table under the eyes of his officers to pay his debts to local merchants, artisans, and farmers, whose order in line reflected their current favor with the administration.45 For the next few nights, while the soldiers had money in their purses, there was exuberant gambling in the guardhouse.46 The influx of currency threw the town into a short-lived flurry of economic activity during which St. Augustine resembled Jalapa during the fair, or opening day at the silver smeltery. Everyone on the payroll was supposed to get food and clothing at cost, but the original price became encrusted with surcharges.47 From time to time the crown ordered that the soldiers not be charged import or export duties, nor the cost of supplies for the situador's vessel, nor the cost of ship repairs and replacements.48 They were not to absorb the expense of supplies spoiled or mislaid, nor the 15 or 16 percent for handling, which may or may not have included the two reales per mark (nearly 8 percent) charge for changing silver.49 73 The King's Coffer Neither were they to have passed on to them the cost of loss and leakage, given in the form of percentages called mermas to the ship- master the steward, and presumably anyone else who transported or stored crown merchandise. According to Pilot Andres Gonzalez, the Council of the Indies allowed mermas of 3 percent on flour 4 percent on biscuit, 4 percent on salt, and 10 percent on maize. Given the density of rat population on the ships of the time, such allowances to a shipmaster may not have been excessive. Vazquez de Espinosa esti- mated that more than 4,000 rats were killed aboard his ship during a transatlantic crossing in 1622, not counting those the sailors and passengers ate.50 If there was any substance behind the prohibitions against add-on charges-and there is no reason to think otherwise-then prices "at cost" were costly indeed. Ex-Governor Hita Salazar; who had been governor of Vera Cruz before coming to Florida and who remained in St. Augustine as a private citizen after his term, once gave his experienced view of the situado. In spite of all the funds it contained-and he listed them: the 350 plazas for soldiers, the subsidy for friars, the allotment for administrative salaries, the 1,500-ducat Indian allowance, and the 1,500 ducats for bonuses-the common soldier still paid twice what he should have for shoddy goods he did not want, bought by profiteers with his own money.51 If private merchants could obtain no foothold in town, and no one could leave who was in debt to the exchequer then it is no wonder that by the 1680s garrison strength in Florida was being filled with sentenced malefactors and persons regarded as racial inferiors.52 The entire garrison below officer level was existing under the most inexorable debt peonage. 74 6 The Royal Revenues HE royal revenues that treasury officials in the Indies gathered C were varied. The Mexico City coffer; from which Florida received the situado, provides a good example. In 1598 its major accounts receivable were, in order of descending value: tribute, taxes on bullion, the monopoly of mercury, import and export taxes, sales tax, the tax of the crusade, the monopoly of playing cards, and the sale of offices. Grouped by category, the revenues of mines supplied the largest share of that treasury's yearly income, tribute came next, and commerce third. The impecunious crown soon exploited further sources of revenue: the clearing of land titles, the legitimizing of foreigner and mixed blood status, and voluntary contributions.1 On an infant colony such taxes were imposed lightly if at all, yet after a reasonable length of time a normal treasury was expected to begin producing revenue.2 This did not happen in Florida, where all the royal revenues put together were not enough for regular for- warding to the king. Still, the funds generated were sufficient to cover a number of ecclesiastical and provincial expenses, to aid in provisioning the garrison, and to occupy the royal officials' time. The crown's incomes fell into five categories: J 1. Ecclesiastical: tithes and indulgences. 2. Crown properties: lands, productive enterprises, slaves and convicts, royal offices and monopolies. 3. Shipping: freight charges and customs duties. 4. Barter salvage, and booty: the king's treasure taxes. 5. Personal levies: tribute and donations. 75 The King's Coffer In the Indies the tithes (diezmos) were meticulously divided. One-quarter of the revenue went to the bishop, one-quarter to the cathedral chapter Of the remainder; two-ninths went to the crown, four-ninths to local clerics, and three-ninths to the construction of churches and hospitals. Therefore, although the tithes were collected and administered by the treasury officials, they were of little or no profit to the crown.3 In theory, Indians had been legally exempt from tithing since 1533, but in practice this varied. Florida missionaries argued that even a native owed his tithes and firstfruits-to them, not to the crown or the bishop.4 We will return to the subject of Franciscan exactions under the heading of tributes. The legitimate tithes administered by the royal officials in St. Augustine came from Spanish Christians. To encourage production, new settlements of Spaniards in the Indies were usually free from tithing for the first ten years. While the adelantado's contract did not specify this, it probably held true for Florida.5 The tithes first gathered were so minimal that they enjoyed a certain independence-by-neglect. At the end of the sixteenth cen- tury the royal officials mentioned that they were collecting them in kind, auctioning the produce like any other royal property and using the proceeds to pay their own salaries. As can be seen in Table 4, the tithes of 1600 amounted to 840 pesos, three-fourths of which came from sales of maize, and the remainder from miscellany (menudos), probably other preservable produce. If the tithes of this period were collected at the rate of 212 percent, as they were later in the century, this suggests a titheable production worth over 33,000 pesos. Treas- urer Juan Menendez Marquez noted in 1602 that the tithes of 1600 had been auctioned immediately after harvest; the tithes of 1601 (which he did not disclose) only appeared to be higher because he and the other officials had stored the maize until its price had risen by half, then had the presidio buy it to ration slaves and soldiers.6 Around this time the crown ordered that the tithes go for four years toward construction of a parish church, a disposition that was gradually extended to twenty years. After that, church construction and maintenance were subsidized by 2,000 ducats from the vacancies of New Spain bishoprics, and the Florida officials were permitted to f use 516 ducats of the local tithes to pay secular clergy salaries, letting the remainder accumulate. Whenever the fund reached 4,000 or 5,000 reales the crown sent instructions on how to spend it. In the early 76 TABLE 4 VALUE OF TITHES AND TITHEABLE PRODUCTION (IN PESOS) Arrobas Tithes Misc. Tithes Year of from Tithes from Total Titheable Maize Maize (Menudos) Livestock Tithes Production 1600 -651 189 -840 33,600 1631 [569 166 -735 29,400 1632 2,691/2 569 135 -704 28,160 1633 569 167 -736 29,440 1648 847 847 70 220 1,137 45,480 1649 881 881 146 227 1,254 50,160 1650 1,468 1,468 120 265 1,853 74,120 1651 1,469 1,469 304 250 2,023 80,920 1652 1,391 1,391 116 152 1,659 66,360 1653 1,012 1,012 130 135 1,277 51,080 1654 1,024 1,024 142 176 1,342 53,680 1655 802 802 100 224 1,126 45,040 1656 743 743 150 193 1,086 43,440 1657 1,0431/2 1,043'/2 141 171 1,355 2 54,220 NOTE: Of the 2,691 '2 arrobas of maize from 1631 to 1633, 40612 were sold at 5 2 reales and 2,285 at 5 reales. The total of 1,707/2 pesos has been averaged on a yearly basis and rounded off to 569 pesos. From 1648 to 1657 the tithes of maize were bought at 1 peso the arroba. The King's Coffer 1620s and again in 1635 the bishop of Cuba inquired about his fourth of the Florida tithes. The crown, which was making up the difference in his income, referred his query to the Florida governor and royal officials, asking them whether their provinces were not suffragan to that bishop. They replied that don Juan de las Cabezas Altamirano had brought credentials as the bishop of Florida and Cuba when he made his visitation in 1605, but he had not asked for tithes, nor had any been sent to Cuba since.7 Orders must have followed to send the bishop his fourth, for several years later the royal officials mentioned that tithes were no longer being administered as a royal revenue.8 In 1634, before this happened, Accountant Nicolas Ponce de Le6n summarized what the tithes had been amounting to. The tithes of maize collected between 1631 and 1633 had come to 2,691 /2 arrobas, or about 897 arrobas a year Of this, 406 1/2had been sold at 5 /2 reales the arroba and the rest at 5 reales, making a three-year total of 1,707 /2 pesos in tithes of maize, which averaged to 569 pesos a year For the tithes from miscellaneous sources the accountant gave yearly figures, which totaled 468 pesos for the same period. Total tithes averaged 725 pesos a year indicating a titheable production of around 29,000 pesos a year between 1631 and 1633, less than in 1600.9 In the 1640s there were great expectations from a wheat farm started by Governor Salazar Vallecilla on the Apalache-Timucua border. There would be other farms and much revenue, he and the royal officials thought, enough to establish an abbacy in St. Au- gustine similar to the one in Jamaica and keep all the tithes at home. Ponderous inquiries were set in motion, without result. Governor Rebolledo, in 1655, joined the campaign. Florida tithes now amounted to 2,000 pesos annually, he said, exaggerating, and if that sum was not adequate to support an abbot, he would gladly dispense with his sergeant major No Cuban bishop had visited Florida in fifty years.10 The crown responded by asking for a report on the tithes of the previous ten years. Tithes of maize from 1648 to 1657 came to an average of 1,068 arrobas a year; which at 1 peso the arroba brought in 1,068 pesos. The increase in value over the average of 569 pesos a year between 1631 and 1633 was mainly due to the higher price per arroba. For the first time livestock (ganado mayor) appeared as a separate category. The average total tithes for the ten-year period came to 1,411 pesos. If tithes were 2/2 percent of both crops and calves- something of which we cannot be sure-this indicated a titheable 78 The Royal Revenues ranching and agricultural production of some 56,000 pesos a year."11 This was evidently insufficient to support an abbot, for that idea was dropped. When don Gabriel Diaz Vara Calderon came on episcopal visit in 1674-75, he arrived in St. Augustine four days after a flood and charitably devoted the tithes laid up for him to relieving the hungry.12 Soon after the bishop's visit, the treasury officials were ordered to begin sending the canons of the cathedral chapter their designated fourth of the tithes and explain why this had been neglected. They Protested that no one had ever asked for them, and that anyhow tithes in Florida were grossly overvalued. When the livestock was auc- tioned, soldiers bid four or five times what it was worth, charging the amount to the back salaries they never expected to see. In this way cattle worth less than 1,000 pesos had been sold for 4,400, giving a false impression of the provinces' resources. In order to correct this overpricing, the treasurer and accountant meant in future to purchase the tithes of livestock as they did the tithes of maize, for rationing the soldiers. They would pay the local clerics, the bishop, and the cathe- dral chapter in drafts against the situado. The three ecclesiastics serving the parish church and the soldiers' chapel were paid around 900 pesos a year.13 If half of the tithes were sent to Cuba, the total revenue must come to 1,800 pesos a year in order to cover clerical salaries, necessitating an annual titheable production of somewhere near 72,000 pesos. This level of production Florida's Spanish population was unable to maintain. In 1697 the crown inquired why the bishop was not receiving his tithes. The royal officials answered briefly that in Florida the tithe was paid in the form of grain and was distributed to the soldiers. The year 1697 had been one of famine, when even the parish priest's private store of maize had had to be requisitioned, to his great indignation. As the bishop himself said, in times of hunger all men quarreled and all had reason.14 The crown's other ecclesiastical revenue in Florida came from the cruzada, or bulls of the crusade, which Haring has called "the queerest of all taxes."15 This was a semicompulsory indulgence whose pro- ceeds had been granted by the popes to the Spanish crown in recogni- tion of its crusading activities. Royal officials and other dignitaries in the Indies paid two pesos a year; regular Spanish subjects one peso, and Indians and blacks two reales.16 The cruzada must have been 79 The King's Coffer permitted to go for local purposes in Florida, for the indulgences were independently requested by the royal officials, a priest, and perhaps a governor. Governor Marques Cabrera once received 5,000 of them, neatly divided between bulls for the living and for the dead.17 A cleric known as the minister or subdelegate of the Tribunal of the Holy Crusade did the preaching, and another cleric served as the notary. 18 By the end of the seventeenth century the market was glutted. The royal officials asked that no more indulgences be sent to Florida, where the people were poor and the last two shipments were sitting unsold in the warehouse. Ignoring pressure from the parish priest and the Council of the Indies, they refused to publicize the bulls any further 19 The second category of treasury income was provided by crown properties. Aside from the presidio's ships, which are treated later in this chapter; crown properties producing income consisted of lands, productive enterprises, slaves and convicts, royal offices, and monopolies. Wherever Spaniards settled in the Indies they first recognized the lands belonging to pacified Indian towns, then founded their own municipalities, each of which was provided with several square leagues for the use of its vecinos. Other grants of land were personal. Pedro Menendez, as part of his contract with the king, was entitled to claim an immense area 25 leagues on a side-more than 5,500 square miles, by Lyon's calculation.20 He was also privileged to give out large tracts (caballerias) to gentlemen and smaller ones (peonias) to foot soldiers. Although many of these grants were in Santa Elena, when the two presidios were combined the settlers from Santa Elena were given lands in and near St. Augustine as though they had been J there from the start.21 All of the remaining, unused lands (tierras baldias) in the ecumene became part of the royal demesne (realengo). Anyone wishing to use a portion of it for some productive purpose, such as a cattle ranch estanciaa de ganado or hato), applied to the governor. If the center or the headquarters of the proposed ranch was no nearer than 3 leagues from any native village and did not encroach upon another holding, the petitioner might be issued a provisional title.22 Possession was conditional: land lying vacant reverted to the crown. For over a century whatever taxes were paid on these lands held in usufruct went unreported. Treasury officials later claimed that the 80 The Royal Revenues governors had collected fifty Castilian pesos per ranch. In the 1670s Governor Hita Salazar instituted a regular quitrent along with an accelerated land grants program to raise money for the castillo. Hacienda owners were charged four reales per yugada, which was the area a yoke of oxen could plow in one day, with a minimum of five Spesos.23 The governor also offered to legitimize earlier land titles and make them permanent. A clear title to a ranch cost fifty pesos per legua cuadrada, though it is not certain whether this was a square league, a league on the side ofa square, or a radial distance in a circular grant.24 The chiefs of native towns followed suit, selling their extra fields or leasing them to Spaniards.25 As it was a royal prerogative to grant lands in perpetuity, the government in Madrid annulled all titles issued by chiefs or gover- nors, at the same time inviting more regular applications. Between S1677 and 1685 land sales and title clearances (confirmaciones) in Florida brought in 2,500 pesos to be applied to castillo construction.26 The crown also disallowed part of the governor's new taxation schedule. Lands granted at the foundation and still held by the heirs were not to be taxed, ever. Land distributed after then could be taxed, but at no more than Hita Salazar's 4 reales the yugada, later reduced to 1 real.27 Disposition of the revenues from lands beyond the confines of St. Augustine was a royal prerogative as much as granting the lands was. For several years the income was assigned exclusively to castillo construction, but starting in 1688 a modest sum was allowed for the expenses of holy days.28 In some parts of the Indies another kind of title clearance was going on: foreigners could legitimize their presence by a payment (composicion). Several times the crown asked the officials in St. Au- gustine for a list of resident foreigners, including Portuguese, but as there is no evidence that the aliens paid anything extra into the treasury, this was probably for reasons of military and anti- schismatic security.29 The crown made one brief foray into agricultural production in Florida. In 1650 Governor Salazar Vallecilla's experimental wheat farm had been in operation for five years. Six square leagues were under cultivation; buildings, granaries, and corral were complete; and the property inventory included two experienced slaves, eight horses and mules, eleven yokes of draft oxen, and the necessary plows and harrows. The governor had even sent to the Canaries for 81 The King's Coffer millstones and a miller Accountant Nicolas Ponce de Le6n thought that in New Spain such an hacienda would be worth over 20,000 pesos. Unfortunately Governor Salazar Vallecilla died in the epidemic of 1649-50. When his son Luis, anxious to leave Florida, tried to sell the wheat farm, either no one wanted it or no one could afford it. Ponce de Le6n, as interim governor; bought the hacienda for the crown at a cost of 4,259 pesos in libranzas that he estimated to be worth one-third less. He predicted that the farm would pay for itself within three years. The fiscal of the Council of the Indies, reading of the purchase, noted that even if the hacienda took longer than that to show a profit it would be valuable if it encouraged the production of flour in Florida. The Council sent word for the royal officials to administer this royal property without intervention from governors, making yearly reports on its progress.30 Before word got back of the crown's approval, the hacienda had vanished. Ponce de Le6n had survived his friend Salazar Vallecilla only a short time. Locally elected Interim Governor Pedro Benedit Horruytiner had been persuaded by the Franciscans that Spanish settlement in the provinces had provoked the Apalache rebellion of 1647. At their request he had dismantled the wheat farm and sold off its inventory without waiting for the due process of auction. Wheat continued to be grown in Apalache and Timucua, as well as rye and barley, but not for the presidio. Most of the grain was shipped out by the chiefs and friars to Havana.31 In 1580 the crown gave permission for the treasury officials to obtain thirty able-bodied male slaves left over from the building of a stone fort. From time to time these were replenished.32 When there was disagreement among the officials as to which of them was to manage the slaves, they were informed that the governor should do it, while they kept track of expenses. Their complaints that the governor used the slaves for personal purposes were ignored. When the slaves were not needed on the fortifications they were hired out and their earnings paid for their rations.33 The same policy applied to the convicts sentenced to Florida: their labor bought their food. One illiterate black convict who had become a skilled blacksmith during his term in St. Augustine elected to stay on as a respected member of the community.34 Native malefactors were sent to some other pre- sidio unless there was a labor shortage in Florida. Whether in Havana or St. Augustine, their sentence lengths were often forgotten and 82 The Royal Revenues their prison service then became indistinguishable from slavery.35 Slaves and convicts not only saved the crown money but were themselves a source of income. The timber they logged and sawed, the stones they quarried and rafted across the harbor from Anastasia Island, the lime they burned from oyster shells, the nails and the hardware they fashioned-not all was used in the construction of the castillo and government houses. Some was sold to private persons and converted into a revenue of the crown.36 Royal offices were a form of property expected to produce in- come every time they changed hands. Treasury offices became venal for the Indies in the 1630s; other offices already being sold were ecclesiastical benefices and military patents, which at least once included the Florida captaincy general.37 In many of its overseas realms the crown sold municipal offices as well, but not in Florida. When a royal cedula dated 1629 arrived asking for a list of the offices it might be possible to fill in that land, Accountant Juan de Cueva responded that there were no new settlements; the only town of Spaniards was the one at the presidio.38 One office frequently sold or farmed out in the Indies was that of tribute collector (corregidor de indios). For reasons that will be seen, this office did not exist in Florida. The St. Augustine treasury received revenue from the auc- tion of lesser posts such as public and governmental notary or toll collector on the Salamototo ferry, but this income was inconsequen- tial and almost certainly never reached the crown.39 The half-annate (media anata) was a separate revenue derived from offices and other royal grants: the return to the crown of half the salary of one's first year of income. Except in the case of ecclesiastics, it superseded the earlier mesada, or month's pay paid by a new ap- pointee. Presented as an emergency measure following Piet Heyn's seizure of the treasure fleet, the half-annate was decreed in 1631, empire-wide, for every beneficiary of the king's grace, from a minor receiving a plain soldier's plaza, to the royal infants, the king's sons. According to the Recopilacidn, the half-annate was increased by half (making it actually a two-thirds-annate) from 1642 through 1649.40 But Governor Luis de Horruytiner coming to Florida in 1633, paid the two-thirds amount, not the half. It was permitted to pay the tax in two installments, signing a note at 8 percent interest for the second half, due one year later This is what Horruytiner did.41 For the rather complicated bookkeeping of this tax the St. Au- 83 The King's Coffer gustine treasury was authorized to hire a clerk of the half-annate, but collection of the royal kickback did not proceed evenly. The auditor who came to Florida in 1655 found that three-fourths of those liable for the half-annate still owed on it.42 In 1680 it was decreed that the governors of Florida were exempt from the tax because His Majesty had declared their post to be one like Chile, known for active war (guerra viva). Four years later the treasury officials were included under this exemption because of valor shown during a pirate attack. What half-annate those in office had already paid was refunded.43 The tax was not reinstated for this category of officials until 1727. Regular officers, however; in spite of a 1664 law exempting those on hazard duty, continued to owe the half-annate on their original appointments and for every promotion.44 One more revenue from royal offices was the unpaid salary money vacanciess) due to the death or suspension of royal appointees. As we have seen, the vacancies of bishoprics in New Spain formed a regular fund upon which the crown drew for extraordinary ex- penses. The same held true for Florida, except that the money was absorbed locally the way vacant plazas were. Surplus salaries due to vacancies were sent to the crown one time only, in 1602.45 A final type of revenue-producing royal property was the monopoly. The king had a tendency to alienate his monopolies by giving them out as royal favors (mercedes). Pedro Menendez's con- tract, for example, promised the adelantado two fisheries in Florida, of fish and of pearls. Since the pearl fishery did not materialize, this clause meant, in effect, that only the governor or his lieutenants had the right to fish with a drag net or a seine, and this privilege was enforced. When the dispute over the Menendez contract came to a formal end in 1634, the family's one remaining property in Florida was this fishery.46 Another monopoly which produced no revenue for the crown was gambling. To the official circular extending the monopoly of playing cards in the Indies, Governor Ybarra responded that people in Florida did not use them.47 Some years later Sergeant Major Eugenio de Espinosa was granted the right to run a gaming table in the guardhouse, a monopoly he passed on to his feckless son-in- law. 48 Beginning in 1640, paper stamped with the royal coat-of-arms (papel sellado) was required for legal documents in the Indies. A 84 The Royal Revenues governor's interim appointment, for instance, must be written up on twenty-four-real paper for the first page and one-real for each page thereafter. Ordinary notarized documents began on six-real paper. Indians and indigents were entitled to use paper costing a quarter- real, or omit the stamp altogether. Perhaps this was why St. Au- gustine notaries seldom bothered to keep a supply of stamped paper; although when they used the unstamped they were supposed to collect an equivalent fee.49 One further crown revenue from a monopoly came from the three reales per beef charged at the royal slaughterhouse. Governor Marques Cabrera instituted this fee in the 1680s to pay for construc- tion of the slaughterhouse and raise money for the castillo. It was one of his little perquisites to be given the beef tongues.50 The third category of royal revenues in Florida came from ship- ping. In St. Augustine, founded as the result of a naval action, ships were highly important. The townspeople were descendants of sea- farers, and their only contact with the outside world was by sea. The bar at the entrance to their harbor was shallow at low tide, especially after the great hurricane of 1599, which altered many coastal features. Use of the harbor was consequently restricted to vessels under 100 tons or flat-bottomed flyboats on the Flemish model.51 Some of the galliots, frigates, barges, pirogues, launches, shallops, and tenders belonging at various times to the presidio were purchased in Spain, Vera Cruz, or Havana, but a surprising number were constructed locally, perhaps in the same San Sebastian inlet where present-day inhabitants build shrimp boats. The people of St. Augustine referred to their boats fondly by name (Josepfe, Nuevo San Agustin) or nickname (la Titiritera, la Chata). Storms, shallows, and corsairs guaranteed that no vessel would last forever; but woe to the master who by carelessness or cowardice lost one! One source of the crown income from shipping was freight (fletes). Freight charges in the Caribbean were high. Gillaspie esti- mates that between 1685 and 1689 shipping costs on flour repre- sented 35 percent of its cost to the presidio. Whenever possible, the royal officials and the governor would buy a boat to transport the supplies rather than hire one. And since it cost 300 ducats a year to maintain the presidio boats whether they were in use or not, and the seamen had to be paid and rationed in any case, the vessels were kept in service as much as possible.52 In them the chief pilot and other 85 The King's Coffer shipmasters carried loads of supplies out to the missions and maize back to the town. They patrolled the coast, putting out extra boats after a storm to look for shipwrecks, survivors, and salvage. They also made trips to Havana, Vera Cruz, Campeche, and across the Atlantic. On any of these trips the shipmasters might execute private commissions and carry registered goods for those willing to pay the freight. Governor Mendez de Canzo's first report to the crown from St. Augustine suggested that the mariners be paid from these ship revenues. The crown responded by requesting the governor to report on all the presidio vessel income, what it was converted to, and on what spent. Accountant Bartolome de Argiielles replied on his own. The governor, he said, saved himself 1,000 ducats a year in freight by the use of His Majesty's flyboat.53 A second crown revenue from shipping was the import and export duty on trade: the almorifazgo, which later officials would write "almojarifazgo." It was a complicated tax whose rate could be varied in numerous ways: by the class of goods, by their origin, by whether or not they were being transshipped, by the port of exit or entry (colonial or Indies), by special concessions to the seller carrier or consignee, and, perhaps most, by the individual interpretations of corrupt or confused officials. The year after St. Augustine was founded the duties on Spanish imports were doubled from 2 V2 to 5 percent ad valorem on articles leaving port in Spain, and from 5 to 10 percent on the same articles at their increased value in the Indies. The tax on wine more than doubled, changing from a total of 7/2 percent to 20. Products of the Indies leaving for Spain paid 22 percent at the port of origin and 5 percent upon arrival.54 At the time, all this was theoretical as far as Florida was concerned. The adelantado and his lieutenants had been exempted from the almorifazgo for the three years of his contract, and the first settlers for ten years.55 The export tax apparently began in 1580, the year the Florida provinces were given permission to send two ships a year to the Canaries or Seville. At the same time the crown granted up to 300 ducats from the situado to build a customs house on the wharf in St. Augustine-a suggestion that became a command three years later.56 The governor and royal officials used the proceeds of the export almorifazgo to pay their own salaries until 1598, when the crown assigned that income for the next four years to the parish church. The 86 The Royal Revenues rate at which the tax was then being collected is unknown. In 1600 the auditor set it at 21/2 percent. Export almorifazgo revenue came mostly from the sassafras and peltry of the Georgia coast. Realizing that St. Augustine was not a convenient shipping point, the royal officials sent a representative to San Pedro (Cumberland Island) to record cargoes, collect the tax, and see that the Indians were not cheated.57 Several general exemptions from the almorifazgos operated to the benefit of people in Florida. The belongings of royal appointees going to the Indies were exempt up to an amount stated in their travel licenses. Everything for divine worship and educational purposes was shipped tax-free, including the supplies and provisions for friars, and any kind of book. Colonially produced wheat flour and similar staples paid no tax in the port of origin. In 1593 a specific exemption was provided for the Florida presidio: nothing consigned to it from Vera Cruz was to be charged customs.58 A reduction in expense was not a revenue. The royal officials at the treasury in St. Augustine were supposed to be charging import almorifazgos of their own: 10 percent ad valorem on cargoes direct from Spain, 5 percent on the increase in value of Spanish goods transshipped from another colonial port, and 5 percent ad valorem on any colonial goods, even from another port in Florida. During the sixteenth century this almorifazgo was haphazardly applied. Ac- countant Argiielles reported that Governor Mendez de Canzo did not pay taxes on half of what the presidio boats brought him, yet it is evident that the royal officials did not know what percentage to charge.59 Auditor Pedro Redondo Villegas, coming to Florida in 1600, ordered that almorifazgos be collected on all imports regardless of point of origin, seller; carrier consignee, or kind of goods. In his view, supplies bought with situado funds were as liable to entry duties as the goods purchased by individuals. The treasury officials in St. Augustine, as purchasing agents for the garrison, were accustomed to buy naval supplies tax-free from the skippers of passing ships. Their defense was that if the treasury charged the skipper an almorifazgo, he added the amount of it to his price and the cost was passed on to the soldiers, which they could ill afford. But when the auditor insisted that even naval supplies were subject to import duties, the treasury officials acceded without further protest; the revenue was to be 87 The King's Coffer applied to their salaries.60 At San Juan de Ulia, the port for Vera Cruz, the officials imposed an import almorifazgo of 10 percent on Spanish goods, based on the appraised value of the goods in their port. The Florida officials assumed that their own import tax on the same goods should be 10 percent of the increase in value between the appraisal at San Juan de Ulua and the appraisal they made in St. Augustine. Redondo Villegas, rummaging about in Juan de Cevadilla's old papers, found what was probably the tax schedule of 1572-74 saying that the proper percentage was 5 if the goods had paid 10 percent already.61 Presumably this was the rate the royal officials adopted for Spanish merchandise that did not come directly from Spain. They collected it in a share of the goods, which they ex- changed preferably for cash at auction. Auditor Redondo Villegas had gone too far In 1604 the crown repeated the presidio's 1593 exemption with clarifications for his benefit: Because they are needy and prices are high and their salaries are small I order that they not pay taxes of almorifazgo in those provinces even when it is a contract with some private person, and this goes for what may be loaded in Seville also, or in another part of these kingdoms, on the situado account.62 In other words, goods charged against the situado were not to have export duties levied on them at the point of origin, or import duties in Florida. The royal exchequer was not so distinct from the presidio that the one should tax the other The strong position taken in this cedula lasted for two years. In 1606 the crown ordered that the export tax be paid on all wine shipped to the Indies, even that going as rations for soldiers. The royal officials in St. Augustine, for their part, levied the import almorifazgo on all merchandise brought in by private persons to sell to the soldiers, over the protests of the company captains, the gov- ernor and at times, the crown.63 The first customs house was evidently destroyed in the fires or flood of 1599. To replace it, the officials asked for and received an addition to the counting house. They also were allowed a customs constable on salary and a complement of guards when there were goods on hand for registration or valuation.64 The people of St. 88 The Royal Revenues Augustine put their ingenuity to work getting around the hated tax. By law, no one was supposed to board or disembark from an incom- ing ship ahead of the official inspection, under pain of three months in prison. Interim Accountant Sinchez Siez, syndic and close friend of the Franciscans, may have been the one who suggested that the friars board vessels ahead of the royal officials. In the name of the Holy Office of the Inquisition they could seal boxes of books containing schismatic material, and only they could reopen these sealed boxes. Books were nontaxable items, and the friars, secure against inspec- tion, could introduce high-value goods in the guise of books, un- taxed. This was a common practice in the Indies. Governor Ybarra put a quick stop to the friars' presumption.65 Due to a shortage of ships, the crown was often forced to allow trade to foreign vessels. The earliest reinforcements ever to arrive in the new Florida colony, in the Archiniega expedition of 1566, shipped out in Flemish ships whose owners refused to embark from San Luicar without licenses to load return cargoes of sugar and hides in Cuba and Santo Domingo.66 The Flemish operated legally; other visitors did not. A foreign-owned ship coming to trade without registration was subject to seizure and confiscation, yet most of the merchant ships visiting St. Augustine may have been foreign. In 1627 the treasury officials accused Governor Rojas y Borja of being in collusion with Portuguese merchant Martin Freile de Andrada and of allowing open trade with the French.67 By 1683 the crown, totally unable to supply its colony on the North Atlantic seaboard, was forced to approve Governor Marques Cabrera's emergency pur- chases from a New York merchant he called Felipe Federico. This Dutchman first gained entrance to the harbor as an intermediary returning the governor's son and another lad captured by pirates. Captain Federico and his little sloop, The Mayflower, became regular callers at St. Augustine. Others followed suit.68 The penalty for bringing in contraband goods even in Spanish bottoms was confiscation. The law provided that after taxes a sixth of the value went to the magistrate, a third of the remainder to the informer; and the rest to the king's coffer.69 In many parts of the Indies this inconvenience was circumvented by the sloop trade in out-of-the-way harbors. In Florida, which had operated outside the mercantile law from the start, such evasions were necessary only when someone important was out of sorts. While Governor Salazar 89 The King's Coffer Vallecilla was under suspension, a ship he had sent to Spain came back with a largely unregistered cargo of dry goods and wine. His confed- erates hid what they could before the return of Treasurer and Interim Co-Governor Francisco Menendez Marquez, who was out in the provinces pacifying Indians, but the treasurer was able to locate 30,000 pesos' worth and apply price evaluations retroactively to what had been sold. For doing this, he declared, his honor and his very life were in danger. The governor and his henchmen were all Basques, Francisco said meaningfully, and the accountant behaved like one.70 Francisco was probably disgruntled at having been left out of the distribution. He was not ordinarily so solicitous of the king's coffer. He and the same accountant, Ponce de Le6n, had been jointly over- drawn 960 ducats from the almorifazgo account between 1631 and 1640, and during most of that time Ponce de Le6n was not in Florida.71 The legal trade with Spain suffered as much from overregulation as from taxes. A cedula of 1621 had licensed the presidio's two little ships-of-permission to export pelts up to a value of 3,000 ducats a year-1,000 ducats above the former limit. By 1673 the Floridians did not find this small a cargo worth their while, yet the crown refused to raise the limit further 72 The royal bureaucracy, rigid about rules, was capricious in en- forcement. In 1688 Accountant Thomas Menendez Marquez, Fran- cisco's son, reported that Captain Juan de Ayala y Escobar was bringing in unregistered goods and evading duties and that the governor, Quiroga y Losada, refused to take action. Unwittingly, Thomas brought down on himself the royal displeasure. If he and the other officials ever let this happen again, the crown warned, they would be punished severely. When they had knowledge of fraud they were to act independently of viceroys, presidents, and governors; how to do so was left unexplained. The governor escaped without reproof, and Ayala y Escobar was commended for his willingness to make dangerous voyages on behalf of the presidio.73 The royal officials complained that they could not be present at all the ports in Florida. Governor Vega Castro y Pardo allowed them to station subordinate customs officials at the San Marcos harbor in Apalache, but these did not stay. The governors' deputies in Apalache were directed to collect duties from visiting ships; in 1657 the friars of that province claimed that this directive had not produced a single 90 Full Text xml version 1.0 encoding UTF-8 REPORT xmlns http: xmlns:xsi http: xsi:schemaLocation http: INGEST IEID ET1YT2SKZ_0488GK INGEST_TIME 2013-07-12T22:23:16Z PACKAGE AA00014878_00001 AGREEMENT_INFO ACCOUNT UF PROJECT UFDC FILES
http://ufdc.ufl.edu/AA00014878/00001
CC-MAIN-2014-42
en
refinedweb
01 October 2010 10:03 [Source: ICIS news]LONDON (ICIS)--There is no need to lower the limit of bisphenol A (BPA) intake as there is no new evidence that low doses of the chemical have any adverse effects on human health, the European Food Safety Authority (EFSA) said late on Thursday.. The EFSA came to the same conclusion in its 2006 opinion and reconfirmed it in 2008, but said it had carried out the new review following requests from the European Commission to look into recent studies. Some European countries have sought to lower the intake of BPA, which has been banned for use in baby bottles in ?xml:namespace> ,” said the EFSA. The EFSA panel said that based on its literature review, it did not consider the currently available data as convincing evidence that BPA would have any adverse effects on aspects of behaviour, such as learning and memory. However, earlier this year, Jochen Flasbarth, the president of “EFSA is monitoring ongoing publications on BPA and is aware of studies being carried out and planned worldwide,” said the EFSA. BPA is used in the manufacture of polycarbonate (PC) plastic found in such items as reusable drinking bottles, infant feeding bottles and storage containers, and in the lining of some food and drinks cans.
http://www.icis.com/Articles/2010/10/01/9397883/EU-food-watchdog-maintains-current-BPA-levels-are-safe.html
CC-MAIN-2014-42
en
refinedweb
02 March 2011 23:29 [Source: ICIS news] HOUSTON (ICIS)--North American titanium dioxide (TiO2) price-hike initiatives of 15 cents/lb ($331/tonne, €242/tonne) from most major domestic producers prompted buyers to warn on Wednesday that the architectural-coatings market will weaken beneath the pressure of escalating prices. “It’s insane,” a buyer said of the latest round of price-hike efforts that first surfaced from DuPont during the last week of February. “It’s definitely going to slow down the housing market.” US architectural coatings demand stems largely from the already weak home construction and repaint market, but it is also commonly used in plastics, paper, inks, fibres, foods, pharmaceuticals and products such as cosmetics and toothpaste. Major TiO2 producers Cristal, Kronos and Huntsman announced identical increases for ?xml:namespace> There was not yet confirmation of a domestic price-hike effort from Tronox, though the company recently announced increases of €225/tonne or $300/tonne in the dollar markets of Europe, Africa and the Middle East, $300/tonne in Asia Pacific and Latin America, Australian dollar (A$)300/tonne in Australia and yen (Y)40/kg in Japan. Some buyers insisted they could not pass along continually rising costs, but one TiO2 customer said it had already sent its sales staff out seeking higher prices for its products because of higher TiO2 costs. “Everybody can use more TiO2,” another buyer said. “It just depends on how much customers are willing to pay. And if they’re not willing, then business will slow down.” The buyer suggested the newest price efforts, which would be implemented on 1 July, would be fully successful or perhaps slip by as little as 5 cents/lb only because customers still have few sourcing options. Meanwhile, current January initiatives of plus 10 cents/lb are poised to be implemented on April 1 at the full increase, though several buyers are still attempting to negotiate reductions of 1-2 cents/lb. Current North American prices TiO2 were assessed by ICIS at $1.30-1.44/lb. ($1 = €0.73)
http://www.icis.com/Articles/2011/03/02/9440286/us-15-centlb-tio2-proposals-will-weaken-coatings-market-buyers.html
CC-MAIN-2014-42
en
refinedweb
Subject: Re: [OMPI users] File seeking with shared filepointer issues From: pascal.deveze_at_[hidden] Date: 2011-06-27 09:20:36(). Pascal users-bounces_at_[hidden] a écrit sur 25/06/2011 12:54:32 : > De : Jeff Squyres <jsquyres_at_[hidden]> > A : Open MPI Users <users_at_[hidden]> > Date : 25/06/2011 12:55 > Objet : Re: [OMPI users] File seeking with shared filepointer issues > Envoyé par : users-bounces_at_[hidden] > > I'm not super-familiar with the IO portions of MPI, but I think that > you might be running afoul of the definition of "collective." > "Collective," in MPI terms, does *not* mean "synchronize." It just > means that all functions must invoke it, potentially with the same > (or similar) parameters. > > Hence, I think you're seeing cases where MPI processes are showing > correct values, but only because the updates have not completed in > the background. Using a barrier is forcing those updates to > complete before you query for the file position. > > ...although, as I type that out, that seems weird. A barrier should > not (be guaranteed to) force the completion of collectives (file- > based or otherwise). That could be a side-effect of linear message > passing behind the scenes, but that seems like a weird interface. > > Rob -- can you comment on this, perchance? Is this a bug in ROMIO, > or if not, how is one supposed to use this interface can get > consistent answers in all MPI processes? > > > On Jun 23, 2011, at 10:04 AM, Christian Anonymous wrote: > > > I'm having some issues with MPI_File_seek_shared. Consider the > following small test C++ program > > > > > > #include <iostream> > > #include <mpi.h> > > > > > > #define PATH "simdata.bin" > > > > using namespace std; > > > > int ThisTask; > > > > int main(int argc, char *argv[]) > > { > > MPI_Init(&argc,&argv); /* Initialize MPI */ > > MPI_Comm_rank(MPI_COMM_WORLD,&ThisTask); > > > > MPI_File fh; > > int success; > > MPI_File_open(MPI_COMM_WORLD,(char *) > PATH,MPI_MODE_RDONLY,MPI_INFO_NULL,&fh); > > > > if(success != MPI_SUCCESS){ //Successfull open? > > char err[256]; > > int err_length, err_class; > > > > MPI_Error_class(success,&err_class); > > MPI_Error_string(err_class,err,&err_length); > > cout << "Task " << ThisTask << ": " << err << endl; > > MPI_Error_string(success,err,&err_length); > > cout << "Task " << ThisTask << ": " << err << endl; > > > > MPI_Abort(MPI_COMM_WORLD,success); > > } > > > > > > /* START SEEK TEST */ > > MPI_Offset cur_filepos, eof_filepos; > > > > MPI_File_get_position_shared(fh,&cur_filepos); > > > > //MPI_Barrier(MPI_COMM_WORLD); > > MPI_File_seek_shared(fh,0,MPI_SEEK_END); /* Seek is collective */ > > > > MPI_File_get_position_shared(fh,&eof_filepos); > > > > //MPI_Barrier(MPI_COMM_WORLD); > > MPI_File_seek_shared(fh,0,MPI_SEEK_SET); > > > > cout << "Task " << ThisTask << " reports a filesize of " << > eof_filepos << "-" << cur_filepos << "=" << eof_filepos-cur_filepos << endl; > > /* END SEEK TEST */ > > > > /* Finalizing */ > > MPI_File_close(&fh); > > MPI_Finalize(); > > return 0; > > } > > > > Note the comments before each MPI_Barrier. When the program is run > by mpirun -np N (N strictly greater than 1), task 0 reports the > correct filesize, while every other process reports either 0, minus > the filesize or the correct filesize. Uncommenting the MPI_Barrier > makes each process report the correct filesize. Is this working as > intended? Since MPI_File_seek_shared is a collective, blocking > function each process have to synchronise at the return point of the > function, but not when the function is called. It seems that the use > of MPI_File_seek_shared without an MPI_Barrier call first is very > dangerous, or am I missing something? > > > > _______________________________________________________________ > > Care2 makes it easy for everyone to live a healthy, green > lifestyle and impact the causes you care about most. Over 12 Millionmembers! > Feed a child by searching the web! Learn how > > > users mailing list > > users_at_[hidden] > > > > > > > -- > Jeff Squyres > jsquyres_at_[hidden] > For corporate legal information go to: > > > > _______________________________________________ > users mailing list > users_at_[hidden] >
http://www.open-mpi.org/community/lists/users/2011/06/16770.php
CC-MAIN-2014-42
en
refinedweb
Use try-with-resources The: package java.lang; public interface AutoCloseable { void close() throws Exception; }. package java.io; import java.io.IOException; public interface Closeable extends AutoCloseable { public void close() throws IOException; } next section for more information.. meeta gaur wrote:I couldn't figure out meaning of NOTE: the close() methods of resources are called in the OPPOSITE order of their creation. meeta gaur wrote:
http://www.coderanch.com/t/602019/java/java/resources
CC-MAIN-2014-42
en
refinedweb
15 March 2010 13:40 [Source: ICIS news] LONDON (ICIS news)--NOVA Chemicals reported a fourth-quarter net income of $17m (€12.4m) compared with a net loss of $212m for the same period in 2008 as demand improved after the economic downturn, the Canadian petrochemicals producer said on Monday. Revenue for the three months ended 31 December 2009 fell by 2.5% to $1.12bn. “2009 was characterised by sequential quarter-over-quarter improvement starting from operating losses in the beginning of the year and building to operating gains later in the year,” NOVA said. In addition to improved demand, the company had also realised an increase in pricing and lower operating costs due to decreased utility costs, it said in a statement. NOVA is a wholly owned subsidiary of ?xml:namespace> NOVA is headquartered in the ($1 = €0.73) For more on Nova Chemicals
http://www.icis.com/Articles/2010/03/15/9342848/nova-chemicals-swings-to-q4-net-income-of-17m-sales-down-2.5.html
CC-MAIN-2014-42
en
refinedweb
17 June 2013 10:34 [Source: ICIS news] (recasts paragraph 12 for clarity) By Prema Viswanathan SINGAPORE (ICIS)--Players in Iran’s petrochemical markets are hopeful that international sanctions on the Middle Eastern country will eventually ease following the election of its former top nuclear negotiator – Hassan Rohani – as president, industry sources said on Monday. China, India, Turkey and Brazil count Iran as a major supplier of petroleum and petrochemical products. Rohani won the presidential elections held on 14 June, taking the reins from Mahmoud Ahmadinejad - who has a hardline stance on Iran's nuclear issue and has been ruling the country since 2005. “We are cautiously optimistic that some change will be seen even in the next two months in terms of an easing of trade flows,” a Tehran-based supplier said. In recent years, international sanctions in Iran have been tightening over the country’s suspected nuclear weapons programme, posing major deterrents in its trade dealings with the rest of the world. Any material change to the prevailing trading conditions, however, will take time to emerge, market players said. “The removal of sanctions will be a more long-drawn process which will take a couple of years,” the Iranian supplier said. Rohani, who was nicknamed “the diplomat sheikh”, served as secretary of the Supreme National Security Council for 16 years, and was the country’s top nuclear negotiator from 6 October 2003 to 15 August 2005. “We cannot comment on how the political negotiations will go, but we are hoping that given the new president’s past experience as a negotiator, he can open up dialogue on the nuclear issue and help Iran resume normal trade with the rest of the world,” said a market source. In May, the international sanctions on Iran were reinforced, directly targeted at trade and investments in its petrochemical industry. In the fiscal year ending 20 March 2013, Iran exported around 15.5m tonnes of petrochemical products valued at $11.6bn (€8.7bn), out of its total domestic capacity of 60m tonnes/year. Iranian-origin material accounted for about 27% and 16% of China’s low density polyethylene (PE) and high density PE (HDPE imports, respectively, in 2012. China also relies on Iran for 40% of its methanol import requirements. India, on the other hand, procures more than 70% of its methanol import from the Middle Eastern country. Before any major policy change can be enforced in Iran, Rohani will need to convince Iranian religious leaders and the United Nations of his broad plans, taking into account recent developments in the Arab world, a Saudi-based market player said. “We have to keep in mind that his power is limited as the ultimate lead is still with the religious leaders of Iran,” he said “[The] ongoing unrest in many countries of the Middle East could [prompt] leaders of Iran [to] start a slow reform process to avoid any civil conflicts like we see in Syria and other places,” the Saudi market player said. “I believe that we have a [clearer] view if we better understand his role towards Syria and how he will negotiate the nuclear program with the UN in the next weeks and how he can convince the religious leaders to support him in his reform ideas. If he will master those two major challenges I see a strong player coming back into the market,” he added. In the near term, market conditions are likely to stay roughly the same for Iran, market sources said. Iran’s low-priced polyethylene (PE) cargoes are expected to continue flowing in huge volumes into neighbouring Pakistan via illegal routes because of porous borders, said a Karachi-based trader. Pakistan is located southeast of Iran. “Things may change if [Rohani] manages to overturn the [US-led] sanctions. But he will take time to change anything – one or two years even, or more. He may face pressure from the Supreme Council,” the Pakistani trader said. Electing a “moderate leader is good for Iran and the world”, an India-based petrochemical trader said. “While I don’t expect any immediate change in the situation, I think western nations will wait and watch before taking any further action to curtail trade with Iran. “In the long run, it will depend on how the equation between the religious and political leadership within Iran emerges, which is anybody’s guess,” he added. ?xml:namespace> ($1 = €0.75)Additional reporting by Muhamad Fadhil, Heng Hui, Chow Bee Lin
http://www.icis.com/Articles/2013/06/17/9678963/iran-chem-players-see-intl-sanctions-easing-after-rohani-win.html
CC-MAIN-2014-42
en
refinedweb
Master java classes. - Java Beginners java classes. I con not understand the behavior of anonymous inner classes? Hi friend, I am sending you a link. This link will help you. Please for more information. Classes in Java Classes in Java  ... java have mechanisms for handling exceptions. This is known as catching exception in Java. The exceptions that occur in the program can be caught using try Java list of uninstantiated classes Java list of uninstantiated classes Java list of uninstantiated classes Helper classes in java Helper classes in java helper classes GWT supported java classes GWT supported java classes What are the methods and classes in Java which are supported and unsupported by GWT Nested and Inner classes -class.shtml Thanks...Nested and Inner classes What is the significance of Inner Classes and Static Inner Classes? or Why are nested classes used? Hi Friend Collection classes in java is the reason using java collection classes saved/stored the data/content.I don't understand, what is the idea using java collection classes in project. Or the data is stored in both database and java collection classes What is Abstract classes in Java? What is Abstract classes in Java? What is Abstrack class in Java..., Hi, In Java programming language we are used the Java Abstract.... That's why the Abstract class in Java programming language is used to provide Java application that uses the classes and Java application that uses the classes and i have this class diagram with three classes: Curriculum, Course, and Lecture. Your class diagram...) I wish Write a Java application that uses the classes and methods above FontMetrics classes Java FontMetrics classes What is the difference between the Font and FontMetrics classes? The Font class provides mappings to fonts that are used to render text data onto the screen. The Font class maps creating java classes creating java classes Create a Java class that can be used to store inventory information about a book. Your class should store the book title, the author?s name, the price, and the number of books in stock. The class you wrapper classes of the primitive wrapper classes in Java are immutable i.e. once assigned a value...wrapper classes Explain wrapper classes and their use? Java Wrapper Class Wrapper class is a wrapper around a primitive data type Interfaces and Abstract Classes - Development process Interface and Abstract Classes? Hi Friend, Interface: Java does...:// Thanks creating java classes creating java classes This program uses a class named DrivingLicense to keep track of two driving licenses, including the driver?s name, and the number of speeding tickets they have received. You may not modify the DLTest code classes - Java Beginners Java Classes Java Classes conducted online by Roseindia include an elite panel of some... that if a beginner starts taking a Java classes online here, than he/she... an expert in the field. Java classes also act as libraries for future reference Inner classes Inner classes Hi I am bharat . I am student learning java course . I have one question about inner classes . question is how to access the instance method of Non-static classes which is defined in the outer Classes in java etc. For more details click on the following link Classes in Java Java classes Java classes are like a group under which all objects and methods... examples that will help beginners in Java understand the definition of Java classes... of Java classes, object and its methods. Here in order to determine Area classes in c++ classes in c++ 1- design and implement a class datatype that implement the day of the week in the program.the class datatype should store the day... on this class. Here is the Java code: public class DayType{ final Member Classes, Inner Nested Classes in java Member Classes Member classes are defined within the body of a class. We can use member classes anywhere within the body of the containing class. We declare member classes Inner Classes In Java Inner Classes In Java There are 4 kind of classes that can be defined in a Java program, roughly can be termed as the inner classes. -- Inner classes provides an elegant executing java program with 2 classes ); } } Executing java program with 2 classes save the file with D.java...executing java program with 2 classes how to run a program of java containing more than one class....ex of program is given below.... class C Classes and Objects -oriented programming. Nested Classes: One more advantage of Java... Classes and Objects Objects and classes are the fundamental parts of object-orientated programming technique. A class Distinguishes JavaBeans from Java Classes Distinguishes JavaBeans from Java Classes What is the difference between a Java Bean and an instance of a normal Java class?Explain with an example, pls? Hi Friend, Differences: 1)The java beans are serializable How are Anonymous (inner) classes used in Java? How are Anonymous (inner) classes used in Java? How are Anonymous (inner) classes used in Java Nested classes: Examples and tutorials Nested classes: Examples and tutorials Nested classes Here is another advantage of the Java... within another class, such class is called a nested class. Inner classes can classes and data abstraction - Java Beginners classes and data abstraction Create a java program for a class named... java code according to your requirement. Please implement following code...:// Thanks File Handling Classes in Java In this section, you will get to know about file handling classes in java to handle the file input output operations Video Tutorial - Classes and Objects in Java Video Tutorial on Classes and Objects in Java teaches you with example code In this video tutorial you will learn about Classes and Objects in Java... will learn what are Classes and Objects in Java program. You will also learn how Nested classes Nested classes Here is another advantage of the Java, an object-oriented programming language that allows us to define a class within another class, such classes write a java program that implements the following classes: write a java program that implements the following classes: write a java program that implements the following classes: A) are subclasses of the circle class. All these classes should about predefine classes..... about predefine classes..... how we know that the class used in your given examples are predefind and from where we get info. about other predefine classes. we need explanation about examples given by you for java beginners List the names of classes used to create button and text box in Java. List the names of classes used to create button and text box in Java. List the names of classes used to create button and text box in Java URGENT: User Defined Classes - Java Beginners :// Here, you will get different data What is the base class of all classes? What is the base class of all classes? Hi, What is the base class of all classes? thanks Hi, The base class is nothing but the Abstract class of Java programming. We uses the syntax for base class of all Wrapper Classes as a Boolean class instance. All of the primitive wrapper classes in Java... In Java 5.0 version, additional wrapper classes were introduced... Wrapper Classes   Can you give few examples of final classes defined in Java API? Can you give few examples of final classes defined in Java API? Hi, Can you give few examples of final classes defined in Java API? thanks Get classes In Package Java Get classes In Package  ... the classes from the package by providing the path of the jar file and the package.... Following code adds all the classes of the package getting from the jar Exception Classes Exception Classes The hierarchy of exception classes commence from Throwable class which is the base class for an entire family of exception classes, declared in  java java hi im new to java plz suggest me how to master [email protected] Spring Web Annotation Classes Spring Web Annotation Annotation is introduced since java 5. It is a new kind... be used with the classes or methods or both. It is used to specify the handler classes or methods that handles the particular request. /* Used with Class Abstract class,Abstract methods and classes (); Abstract Class In java programming language, abstract classes... Abstract methods and classes While going through the java language programming you have learned Java AWT Package Example it. This is done by using java gui programming using Classes or APIs of java awt package.... The Java programming in java 2D API provides several classes... Java AWT Package Example   java java what are abstract methods Please visit the following link: Inner Nested Classes Inner Nested Classes Non-static nested classes are slightly different from static nested classes, a non-static nested class is actually associated Abstract methods and classes (); Abstract Class In java programming language, abstract classes... Abstract methods and classes While going through the java language programming you have learned Classes and Objects ; Nested Classes: One more advantage of Java, as an object-oriented... Classes and Objects Objects and classes are the fundamental parts of object-orientated programming Persistent Fields and Properties in Entity Classes in Entity Classes. Here we will learn about the field types supported... specification. Following Java Language types of fields and properties are supported: Java primitive types java.lang.String Other java , visit the following links: java :// why to create directory structure as web-inf & classes & lib - Java Beginners why to create directory structure as web-inf & classes & lib we all know that to prepare webapplications we will create a direcotry sturcture... lib and class under WEB-INF direcory c)provide the servlet classes under which class is super class to all classes which class is super class to all classes in java which class is super class to all classes Object is the super class for all user defined and predefined class. java.lang.Object is the supper class of all java a program in java to show that private mambers of a super class cannot be accessed from derived classes java ??all the database must be in master computer only and all other computer must retrieve that data from the master computer only.please help core java - Java Beginners :// Thanks...; Hi friend, Marker Interface : In java language programming... are implemented by the classes or their super classes in order to add some java java java swing Swing is a principal GUI toolkit for the Java programming language. It is a part of the JFC (Java Foundation Classes), which is an API for providing a graphical user interface for Java programs greatest of 3 numbers using classes and functions. greatest of 3 numbers using classes and functions. WAP to calculate greatest of 3 numbers using classes and functions with parameters through input in main? Java Find Largest Number import java.util.*; class java - Java Beginners to : java java wrtie a provram in java to creagte a Player class. inherit the classes CricketPlayer, FootballPlayer and Hockey_Player from Player class Java API Java API Java API Java API is not but a set of classes and interfaces that comes with the JDK. Java API is actually a huge collection of library java - Java Beginners /java/master-java/java-object-oriented-language.shtml Thanks... as concrete methods Abstract classes has to be extended. A class can extend only one java inner class - Java Beginners -class.shtml Thanks...java inner class What is the difference between the nested class Masters of Java Assignment Plugin it significantly easier to develop Master of Java Assignments by making use of the Eclipse Java IDE. The plugin has the following features... Masters of Java Assignment Plugin   java - Java Beginners the following links: Thanks...)An interface represents only the boundaries of a class/classes that implement the interface Java Glossary : Adapter Classes java java create classes to store details of java students,java ee students and oracle students.all students have admission number and name,java students have marks in theory oracle students have marks in lab,java ee students have Classes and Interfaces of the I/O Streams Classes and Interfaces of the I/O Streams Classes: The following listing of classes...;ObjectStreamClass Serialization's descriptor for classes.   java by the Java utility package are very powerful and perform a wide range of functions. These data structures consist of the following interface and classes: Enumeration BitSet Vector Stack Dictionary Hashtable Properties These classes Java abstraction - Java Beginners (); } } ------------------------------------------ For read more information. abstraction what is abstraction in java please explain with example Hi, Abstract Class Abstract classes are those that works Java Java What is class path and path in java Hi Friend, Classpath is Environment Variable that tells JVM or Java Tools where to find the classes. In CLASSPATH environment you need to specify only .class files. Path Pattern,Matcher,Formatter and Scanner classes , and in addition some Java classes like Date and BigInteger also have...Pattern,Matcher,Formatter and Scanner classes This section Describes : 1... to format or parse strings or streams. 2. Uses of the Pattern and Matcher classes java - Java Beginners /java/master-java/java-object-oriented-language.shtml Thanks... classes may implement some of the methods. Though you can?t instantiate an abstract Wrapper Classes classes in Java are immutable i.e. once assigned a value to a wrapper class...Primitive Wrapper Class In this section you will learn about Wrapper classes... link. Wrapper Classes Insert an element with the add(int, Object) method java - Java Interview Questions /master-java/interface.shtml Thanks... "implements" is used. Interfaces are similar to abstract classes but the major... in case of abstract classes must have at least one abstract method. Interfaces Java - Java Beginners :// Regards Neelam...Use of Interfacecs in Java What is the use of Interfaces in Java? Uses of Interface Interfaces encodes similar types of classes java - Java Interview Questions :// Thanks... to its implementation class object Hi Friend, In java language... Interfaces are implemented by the classes or their super classes in order to add
http://www.roseindia.net/tutorialhelp/comment/86993
CC-MAIN-2014-42
en
refinedweb
Description. Tz alternatives and similar packages Based on the "Date and Time" category. Alternatively, view Tz alternatives based on common mentions on social networks and blogs. timex9.8 7.8 Tz VS timexA complete date/time library for Elixir projects. calendar8.8 0.0 Tz VS calendardate-time and time zone handling in Elixir tzdata8.2 3.0 Tz VS tzdatatzdata for Elixir. Born from the Calendar library. filtrex7.5 0.0 Tz VS filtrexA library for performing and validating complex filters from a client (e.g. smart filters) Cocktail7.3 6.6 Tz VS CocktailElixir date recurrence library based on iCalendar events chronos6.6 0.0 Tz VS chronosAn elixir date/time library Crontab6.6 1.5 Tz VS CrontabParse Cron Expressions, Compose Cron Expression Strings and Caluclate Execution Dates. repeatex4.9 0.0 Tz VS repeatexNatural language for repeating dates good_times4.9 0.0 Tz VS good_timesExpressive and easy to use datetime functions in Elixir. ex_ical4.8 0.0 Tz VS ex_icalICalendar parser for Elixir. Ex_Cldr_Dates_Times4.8 6.7 Tz VS Ex_Cldr_Dates_TimesDate & times formatting functions for the Common Locale Data Repository (CLDR) package open_hours4.0 0.0 Tz VS open_hoursTime calculations using business hours moment3.7 0.0 Tz VS momentMoment is designed to bring easy date and time handling to Elixir. jalaali3.5 6.4 Tz VS jalaaliA Jalaali (Jalali, Persian, Khorshidi, Shamsi) calendar system implemention for Elixir tiktak2.9 0.0 Tz VS tiktakFast and lightweight web scheduler timex_interval2.4 0.0 Tz VS timex_intervalA date/time interval library for Elixir projects, based on Timex. timelier1.9 0.0 Tz VS timelierA cron-style scheduler application for Elixir. block_timer1.9 0.0 Tz VS block_timerMacros to use :timer.apply_after and :timer.apply_interval with a block calendarific1.8 2.7 Tz VS calendarificAn Elixir wrapper for the holiday API Calendarific emojiclock1.0 0.0 Tz VS emojiclockAn Elixir module for returning an emoji clock for a given hour milliseconds0.5 0.0 Tz VS millisecondsSimple library to work with milliseconds japan_municipality_key0.5 0.0 Tz VS japan_municipality_keyElixir Library for Japan municipality key converting Calixir0.4 0.0 Tz VS CalixirCalixir is a port of the Lisp calendar software calendrica-4.0 to Elixir. Calendars0.2 0.0 Tz VS CalendarsCalendar Collection based on Calixir Do you think we are missing an alternative of Tz or a related project? Popular Comparisons README Tz. Features Battle-tested The tz library is tested against nearly 10 million past dates, which includes most of all possible imaginable edge cases. Pre-compiled time zone data Time zone periods are deducted from the IANA time zone data. A period is a period of time where a certain offset is observed. Example: in Belgium, from 31 March 2019 until 27 October 2019, clock went forward by 1 hour; this means that during this period, Belgium observed a total offset of 2 hours from UTC time. The time zone periods are computed and made available in Elixir maps during compilation time, to be consumed by the DateTime module. Automatic time zone data updates tz can watch for IANA time zone database updates and automatically recompile the time zone periods. To enable automatic updates, add Tz.UpdatePeriodically as a child in your supervisor: {Tz.UpdatePeriodically, []} If you do not wish to update automatically, but still wish to be alerted for new upcoming IANA updates, add Tz.WatchPeriodically as a child in your supervisor: {Tz.WatchPeriodically, []} This will simply log to your server when a new time zone database is available. Lastly, add the http client mint and ssl certificate store castore into your mix.exs file: defp deps do [ {:castore, "~> 0.1.5"}, {:mint, "~> 1.0"}, {:tz, "~> 0.10.0"} ] end Usage To use the tz database, either configure it via configuration: config :elixir, :time_zone_database, Tz.TimeZoneDatabase or by calling Calendar.put_time_zone_database/1: Calendar.put_time_zone_database(Tz.TimeZoneDatabase) or by passing the module name Tz.TimeZoneDatabase directly to the functions that need a time zone database: DateTime.now("America/Sao_Paulo", Tz.TimeZoneDatabase) Refer to the DateTime API for more details about handling datetimes with time zones. Performance tweaks tz provides two environment options to tweak performance. You can decrease compilation time, by rejecting time zone periods before a given year: config :tz, reject_time_zone_periods_before_year: 2010 By default, no periods are rejected. For time zones that have ongoing DST changes, period lookups for dates far in the future will result in periods being dynamically computed based on the IANA data. For example, what is the period for 20 March 2040 for New York (let's assume that the last rules for New York still mention an ongoing DST change as you read this)? We can't compile periods indefinitely in the future; by default, such periods are computed until 5 years from compilation time. Dynamic period computations is a slow operation. You can decrease period lookup time for such periods lookups, by specifying until what year those periods have to be computed: config :tz, build_time_zone_periods_with_ongoing_dst_changes_until_year: 20 + NaiveDateTime.utc_now().year Note that increasing the year will also slightly increase compilation time, as it will generate more periods to compile. Installation Add tz for Elixir as a dependency in your mix.exs file: def deps do [ {:tz, "~> 0.10.0"} ] end HexDocs HexDocs documentation can be found at.
https://elixir.libhunt.com/tz-alternatives
CC-MAIN-2021-43
en
refinedweb
A Behavioral Cloning agent. View aliases Main aliases tf_agents.agents.behavioral_cloning.behavioral_cloning_agent.BehavioralCloningAgent tf_agents.agents.BehavioralCloningAgent( time_step_spec: tf_agents.trajectories.TimeStep, action_spec: tf_agents.typing.types.NestedTensorSpec, cloning_network: tf_agents.networks.Network, optimizer: tf_agents.typing.types.Optimizer, num_outer_dims: Literal[1, 2] = 1, epsilon_greedy: tf_agents.typing.types.Float= 0.1, loss_fn: Optional[Callable[[types.NestedTensor, bool], types.Tensor]] = None, gradient_clipping: Optional[types.Float] = None, debug_summaries: bool = False, summarize_grads_and_vars: bool = False, train_step_counter: Optional[tf.Variable] = None, name: Optional[Text] = None ) Implements a generic form of BehavioralCloning that can also be used to pipe supervised learning through TF-Agents. By default the agent defines two types of losses. For discrete actions the agent uses: def discrete_loss(agent, experience, training=False): bc_logits = self._cloning_network(experience.observation, training=training) return tf.nn.sparse_softmax_cross_entropy_with_logits( labels=experience.action - action_spec.minimum, logits=bc_logits) This requires a Network that generates num_action Q-values. In the case of continuous actions a simple MSE loss is used by default: def continuous_loss_fn(agent, experience, training=False): bc_output, _ = self._cloning_network( experience.observation, step_type=experience.step_type, training=training, network_state=network_state) if isinstance(bc_output, tfp.distributions.Distribution): bc_action = bc_output.sample() else: bc_action = bc_output return tf.losses.mse(experience.action, bc_action) The implementation of these loss functions is slightly more complex to support nested action_specs..
https://www.tensorflow.org/agents/api_docs/python/tf_agents/agents/BehavioralCloningAgent?authuser=0
CC-MAIN-2021-43
en
refinedweb
Allow other ports to compile ATSUI and CoreText functions in SimpleFontData for Mac. Created attachment 54719 [details] Patch This is needed so that wx can compile and use ComplexTextController, BTW. Comment on attachment 54719 [details] Patch This should be done by svn cp’ing SimpleFontDataMac.mm to the two new files and removing everything but the needed methods, includes and copyright headers. The #if USE()s should go outside the namespace and the second paragraph of #includes. Created attachment 54747 [details] Patch Attachment 54747 [details] did not pass style-queue: Failed to run "['WebKitTools/Scripts/check-webkit-style', '--no-squash']" exit_code: 1 WebCore/platform/graphics/mac/SimpleFontDataCoreText.cpp:48: Use 0 instead of NULL. [readability/null] [5] WebCore/platform/graphics/mac/SimpleFontDataCoreText.cpp:74: Use 0 instead of NULL. [readability/null] [5] WebCore/platform/graphics/mac/SimpleFontDataCoreText.cpp:81: Use 0 instead of NULL. [readability/null] [5] Total errors found: 3 in 4 files If any of these errors are false positives, please file a bug against check-webkit-style. Comment on attachment 54747 [details] Patch Is there a reason why you didn’t address my second feedback item, regarding the placement of the #if directives? Created attachment 54749 [details] Patch (In reply to comment #6) > (From update of attachment 54747 [details]) > Is there a reason why you didn’t address my second feedback item, regarding the > placement of the #if directives? Oops, sorry, no, I just got caught up in redoing the patch and forgot about it. I'll fix and re-upload. Created attachment 54751 [details] Patch The patch was committed as r58557, and rolled out by r58560 because of compile errors on Tiger, Leopard, and Chromium/Mac. (In reply to comment #10) > The patch was committed as r58557, and rolled out by r58560 because of compile > errors on Tiger, Leopard, and Chromium/Mac. Correction: Tiger was ok. Leopard and Chromium/Mac failed. I haven't looked into the compile failures but FYI Chromium compiles with both the ATSUI & Core text functionality enabled and switches the APIs used at runtime. This, as opposed to other ports that make the decision at compile time. might have broken Chromium Mac Release Landed in r58581, with Chromium build fix in r58582.
https://bugs.webkit.org/show_bug.cgi?id=38334
CC-MAIN-2021-43
en
refinedweb
A short description for your project. Project description A simple pygame-based implementation of Conway’s game of life. Cmd: $ game-of-life As library: from game_of_life import Board, run board = Board(30, 30) # Diagonal line for i in range(30: board[i, i] = 1 run('conway') This implementation requires Python3 and pygame. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/game-of-life/
CC-MAIN-2021-43
en
refinedweb
Auto-printing Python Classes This article examines different ways to generate __repr__ functions. But, just to be clear, here are my recommendations in order: - Immutable data: typing.NamedTuple(Python >= 3.5) collections.namedtuple - Mutable data: dataclasses.dataclass(Python >= 3.7 or you don't mind using pipto get it) argparse._AttributeHolder(Python >= 3.2 and you don't mind using its non-public class) generic_repror GenericReprBase(read on for implementation) Before you program in Python for too long, you start to work with classes. In most day to day programming, a class is just a container for similarly named variables ( very similar to a struct in C ). Here's an example Song class: class SongVanilla: def __init__(self, title: str, length: int): self.title = title self.length = length This is a perfectly functional object, even if the __init__ method seems a little redundant. However if you print it, you something similar to this ugliness: s = SongVanilla('mysong', 300) print(s) # <__main__.SongVanilla object at 0x10be016a0> The default __repr__ method for classes is very unhelpful. Let's add one: class SongVanilla: def __init__(self, title: str, length: int): self.title = title self.length = length def __repr__(self): return f'SongVanilla({self.title!r}, {self.length!r})' This does get us a nice print method: s = SongVanilla('mysong', 300) print(s) # SongVanilla('mysong', 300) Now the redundancy is even more transparent: All we wanted was a way to group title and length and get reasonable ways to access and print instance variables. With this method, we have to type title 4 times just to get that! And we have to do this with every member variable want to add. Want to add author? Prepare for an early case of arthritis. And this gets even worse if we decide to add any of the comparison functions or other double underscore methods to our class. NamedTuple Fortunately, there are better ways to do this. If you know your data will be immutable, Python's standard library comes with namedtuple in the collections module that does the right thing. from collections import namedtuple SongCollectionsNamedTuple = namedtuple('SongCollectionsNamedTuple', ['title', 'length']) This gives us a printing function that does the expected: s = SongCollectionsNamedTuple('mysong', 300) print(s) # SongCollectionsNamedTuple(title='mysong', length=300) So namedtuples are a nice solution to this problem assuming your data in the class won't change, but you can't really typehint them, and their odd declaration syntax means adding a docstring looks funny. Fortunately, typing.NamedTuple fixes most of these problems: from typing import NamedTuple class SongTypingNamedTuple(NamedTuple): title: str length: int This looks nice, but also gets us a good __repr__. The immutability requirement still applies. s = SongTypingNamedTuple('mysong', 300) print(s) # SongTypingNamedTuple(title='mysong', length=300) In code that supports it, I recommend typing.NamedTuple unconditionally over collections.namedtuple. static repr If you have mutable data, another option is to use Python's metaprogramming abilities to create generic __repr__ and __init__ methods for our classes and then call them from each instance via inheritance or function call. I'm only going to focus on the __repr__ method for this post. Let's dissect what happens in __repr__. We need: - The name of the class - provided by type(self).__name__- and - The member variables of the class - provided by vars(self), which returns them in a dictionary. Using these two bits of information, let's build a generic_repr(instance) -> str function: def generic_repr(instance) -> str: name = type(instance).__name__ vars_list = [f'{key}={value!r}' for key, value in vars(instance).items()] vars_str = ', '.join(vars_list) return f'{name}({vars_str})' s = SongVanilla('mysong', 300) print(generic_repr(s)) # SongVanilla(title='mysong', length=300) Unfortunately, this function won't work on the immutable collections.namedtuple or typing.NamedTuple. It relies on attributes they don't have (specifically the __dict__ attribute for vars()). That's not really a problem, though, because they supply their own __repr__s. Now that we have this function, we can call it from a class's __repr__ method: def __repr__(self): return generic_repr(self) or bake it into a base class and inherit from it: class GenericReprBase: def __repr__(self): name = type(self).__name__ vars_list = [f'{key}={value!r}' for key, value in vars(self).items()] vars_str = ', '.join(vars_list) return f'{name}({vars_str})' class SongInheritedRepr(GenericReprBase): def __init__(self, title: str, length: int): self.title = title self.length = length s = SongInheritedRepr('mysong', 300) print(s) # SongInheritedRepr(title='mysong', length=300) This is basically what argparse does in it's private _AttributeHolder class. Dataclasses In Python 3.7 ( coming mid-2018 ), the dataclasses module has been added to the standard library. Python 3 users can pip install dataclasses to use it right now. Use the @dataclass decorator provided by this module to automatically give your class __init__, __repr__, and more goodies you can read about in the PEP. For mutable data, this is almost certainly the best option. Here's an example: from dataclasses import dataclass @dataclass class SongDataClass: title: str length: int s = SongDataClass('mysong', 300) print(s) # SongDataClass(title='mysong', length=300) The only issues I can think of is that, with this option, IDEs can have a hard time informing you of what arguments they need exactly for the __init__ function of the class. Also if you're doing something clever with your classes (for example, some crazy SQLAlchemy operator), you should probably buckle down and write __repr__ yourself. In summary, Python offers many ways to generate printable classes- from adding __repr__ manually to every class to standard library solutions to third party solutions to writing your own generic solution, you shouldn't have to repeat yourself very often at all
https://www.bbkane.com/blog/auto-printing-python-classes/
CC-MAIN-2021-43
en
refinedweb
README react-native-anchor-pointreact-native-anchor-point Provide a simple, tricky but powerful function, withAnchorPoint , like Anchor Point in iOS, Pivot Point in Android, transform-origin in css to achieve better 3D transform animation in React-Native. Make the 3D transform easier in React Native Getting StartedGetting Started install the react-native-anchor-point yarn add react-native-anchor-point or npm install react-native-anchor-point ExampleExample import { withAnchorPoint } from 'react-native-anchor-point'; getTransform = () => { let transform = { transform: [{ perspective: 400 }, { rotateX: rotateValue }], }; return withAnchorPoint(transform, { x: 0.5, y: 0 }, { width: CARD_WIDTH, height: CARD_HEIGHT }); }; <Animated.View style={[styles.blockBlue, this.getTransform()]} />
https://www.skypack.dev/view/react-native-anchor-point
CC-MAIN-2021-43
en
refinedweb
Interfacing L298N H-bridge motor driver with raspberry pi First things first, if you have not checked out my previous blog about how to set up a raspberry pi with out an HDMI cable and monitor, then have a quick read this blog. Now, let’s continue with our journey. At this point, you are probably wondering what journey, is there a destination? Well, there is. I will be explicitly talking about it another separate blog (yeah, I am a little lazy in writing blogs, sorry ;( ). However, a little sneak peak into this project, this robot has motors (surprise surprise ;p) to power the wheels. To protect the controller from the current usage incompatibility, a motor driver is used. To know more about what are motor drivers and why are they used, please read this amazing blog. What is a motor driver and why do we need it with the Raspberry Pi The motors require an amount of current that is larger than the amount of current that the Raspberry Pi could handle… In this current blog, we will interface the L298N H-bridge with raspeberry pi and run a script on the pi to move the robot. This will be divided into 2 sections. - Hardware Integration - Software Program and testing Let’s go. - Hardware Integration First, let’s talk about L298N. We have 4 6V DC motors, and we have only two motor outputs, therefore, 2 motors will use the same motor output from the H-bridge. Hence, 2 motors connect to Motor A output, the other 2 motors connect to the Motor B output. Before we connect the motor driver to the pi, let’s take care of the battery pack which would power the motors. The +ve of the batter pack gets connected to the power supply connection from the image and the -ve of the battery pack gets connected to the GND. Once the battery pack is connected properly to the H-bridge, a red light will start glowing. Now, let’s connect the H-bridge with raspberry pi. There can be many combinations for connecting L298N to pi. This is one of them: 2. Software Program and testing Now that the hardware of H-Bridge has been interfaced with the Pi, let’s write a python program to run this hardware and see some action ;p. import RPi.GPIO as gpio import timedef init(): gpio.setmode(gpio.BCM) gpio.setup(17, gpio.OUT) gpio.setup(22, gpio.OUT) gpio.setup(23, gpio.OUT) gpio.setup(24, gpio.OUT)def forward(sec): init() gpio.output(17, False) gpio.output(22, True) gpio.output(23, True) gpio.output(24, False) time.sleep(sec) gpio.cleanup() def reverse(sec): init() gpio.output(17, True) gpio.output(22, False) gpio.output(23, False)7 gpio.output(24, True) time.sleep(sec) gpio.cleanup()def left_turn(sec): init() gpio.output(17, True) gpio.output(22, False) gpio.output(23, True) gpio.output(24, False) time.sleep(sec) gpio.cleanup()def right_turn(sec): init() gpio.output(17, False) gpio.output(22, True) gpio.output(23, False) gpio.output(24, True) time.sleep(sec) gpio.cleanup()seconds = 3time.sleep(seconds) print("forward") forward(seconds) time.sleep(seconds-2)print("right") right_turn(seconds) time.sleep(seconds-2)time.sleep(seconds) print("forward") forward(seconds) time.sleep(seconds-2)print("right") right_turn(seconds) time.sleep(seconds-2) Code is available at SharadRawat/pi_sensor_setup You can't perform that action at this time. You signed in with another tab or window. You signed out in another tab or… github.com Let’s run this basic program. Yayee, that feels good. So we have the motors running for this robot now. For the next post, I will be writing about interfacing of other sensors to the raspberry pi. Thanks for reading. If you like this blog, please clap. Next up, read about the two types of perception sensors I am using for my robot. The problems, solutions and procedure is described in the blog below.
https://sharad-rawat.medium.com/interfacing-l298n-h-bridge-motor-driver-with-raspberry-pi-7fd5cb3fa8e3?responsesOpen=true&source=user_profile---------4----------------------------
CC-MAIN-2021-43
en
refinedweb
Introduction to Hash Table Hash Table in Data Structure, Hash Table is the table that stores all the values of the hash code used while storing the keys and element data values using hashing mechanism. The hash table stores hash codes which are generated by using the hash function. Hashing is the mechanism that helps to identify all the objects uniquely within the set of groups of objects. This type of system is also used in day-to-day life like when assigning the ids to employees, books or roll numbers to students so that it is easy to identify each of them individually in a better way. In all these examples the unique key that is used for identification is called as hash codes, hash values, hash or hash sums when we are implementing hashing by using hash tables. In this article, we will see how the hash table proves to be helpful in implementing hashing and also see one example to study how hashing can be implemented. Working of Hash Table Working of hash table are given below: - Indexing of the array makes the data access very fast, hence, the hash table internally makes the use of array to store all the values while the unique indexing created by using hash tables acts as the indexing of the array. When we know exactly where which value is stored using its key, it becomes very easy and fast to access it. - This data structure stores all the values of the array or simply any other list or group of elements along with its keys. This keys act as a index that is useful while referring and storing the data in hash tables. Whenever the key or index becomes very big, the hash code reduces the key length by using a hash function. - The hash function should be chosen appropriately depending on the factors that the resulting storage of elements should be distributed uniformly across the hash table. This makes sure that there is no clustering of data at a single place in hash table. Also, the hash function should be very easy and simple for calculation as increasing its complexity will increase the storage requirement as well as the computations required. - One should also make sure to take the at the most collision prevention technique usage as it is common that even though a good hash function is chosen for hashing it may result more than one value is mapped to the same hash value. There are many collision detection and resolution techniques followed such as open hashing also known as separate chaining, linear probing or closed hashing, quadratic probing or double hashing. Applications of Hash Tables and Hashing Technique Many of the algorithms internally make use of hash tables to make computing faster. The hashing is used in various ways such as – - Database Indexing: The disk-based data structures involve the usage of hash tables. For example, dbm. - Associative arrays: in-memory tables implementation involves the usage of hash tables for storing complicated or arbitrary strings as index implementing associative arrays in it. - Caches: auxiliary data tables implement hash tables which increase the speed of accessing the data. - Object Representation: The programming languages that are dynamic in nature such as Perl, ruby, JavaScript or python make the use of hash tables for storing objects. Implementation of Hash Table in Data Structure There are three operations which can be carried out on the hash table which are insertion, deletion and searching a particular value of the hash table. Let us have a look at how the hash table can be implemented in C programming language with the help of an example – Code: #include <string.h> #include <stdio.h> #include <stdbool.h> #include <stdlib.h> #define LENGTH 20 struct keyValuePair { int value; int key; }; struct keyValuePair* hashArray[LENGTH]; struct keyValuePair* temporarykeyValuePair; struct keyValuePair* item; int generateHashCode(int key) { return key % LENGTH; } struct keyValuePair *search(int key) { //retrieve the hash value int hashIndex = generateHashCode(key); //shift the value to an array until it is empty while(hashArray[hashIndex] != NULL) { if(hashArray[hashIndex]->key == key) return hashArray[hashIndex]; //move to next value ++hashIndex; //Table is wrapped hashIndex %= LENGTH; } return NULL; } void insertItem(int key,int value) { struct keyValuePair *item = (struct keyValuePair*) malloc(sizeof(struct keyValuePair)); item->value= value; item->key = key; //retrieve hash value int hashIndex = generateHashCode(key); //move in array until an empty or deleteItemd cell while(hashArray[hashIndex] != NULL && hashArray[hashIndex]->key != -1) { //go to next cell ++hashIndex; //wrap around the table hashIndex %= LENGTH; } hashArray[hashIndex] = item; } struct keyValuePair* deleteItem(struct keyValuePair* item) { int key = item->key; //get the hash int hashIndex = generateHashCode(key); //shift the array until it is completely vacant while(hashArray[hashIndex] != NULL) { if(hashArray[hashIndex]->key == key) { struct keyValuePair* temp = hashArray[hashIndex]; //give the value of the temporary dummy item at the place where the element was deleteItemd hashArray[hashIndex] = temporarykeyValuePair; return temp; } //move to next item ++hashIndex; //wrap around the table hashIndex %= LENGTH; } return NULL; } void display() { int i = 0; for(i = 0; i<LENGTH; i++) { if(hashArray[i] != NULL) printf(" (%d,%d)",hashArray[i]->key,hashArray[i]->value); else printf(" - "); } printf("\n"); } int main() { temporarykeyValuePair = (struct keyValuePair*) malloc(sizeof(struct keyValuePair)); temporarykeyValuePair->value= -1; temporarykeyValuePair->key = -1; insertItem(1, 20); insertItem(2, 70); insertItem(42, 80); insertItem(4, 25); insertItem(12, 44); insertItem(14, 32); insertItem(17, 11); insertItem(13, 78); insertItem(37, 97); display(); item = search(37); if(item != NULL) { printf("keyValuePair found: %d\n", item->value); } else { printf("keyValuePair not found\n"); } deleteItem(item); item = search(37); if(item != NULL) { printf("The hash keys and its corresponding values are : %d\n", item->value); } else { printf("Key Value Pair not found\n"); } } The output of the execution of above program is as shown below displaying all the contents of the hash table and also when we perform the deletion operation the message displayed shows that the key value pair was not found. Conclusion Hash tables are used to implement hashing technique for uniquely identifying each of the value with the help of key. All this key value pairs are stored in the array where the elements stored in it are values and the indexes of the array are the key. When the key length becomes very big, it is reduced by making the use of hash function. This results in uniform and even distribution of data across the hash table. Recommended Articles This is a guide to Hash Table in Data Structure. Here we also discuss the introduction and applications of hash tables along with example. You may also have a look at the following articles to learn more –
https://www.educba.com/hash-table-in-data-structure/
CC-MAIN-2021-43
en
refinedweb
Testing ZK applications using Selenium can be a drag. Selenium offers a lot of tool to test traditional request-response cycle applications. But it relies heavily on stable element IDs and submitting whole forms. ZK, on the other hand, sends just a skeleton page to the browser and from then, builds everything with JavaScript. Instead of posting a whole form, it submits individual field values (to trigger validation). And it assigns random IDs to reach element. At first glance, the two don’t seem to be a perfect match. But there are a couple of simple tools that will make your life much more simple. Getting Stable IDs for Tests One solution here is to write an ID generator which always returns the same IDs for each element. But this is tedious, error-prone and sometimes impossible. A better solution is to attach a custom attribute to some elements which doesn’t change. A beacon. If you have a central content area, then being able to find that will make your life much more simple because whatever else you might seek – it must be a child of the main content. To do that, add a XML namespace: <zk xmlns: to the top of your ZUL file, you can now use a new attribute ca:data-test-id="xxx" to set a fixed attribute on an element. In Selenium code, you can now locate this element using this code: By.xpath( "//div[@data-test-id = 'xxx']" ) I suggest to use this sparingly in order not to bloat your DOM. But a few of them for fast moving targets will make your tests much more stable. Debugging the DOM Sometimes, your life would be much more simple if you could see what the DOM was when your test failed. Here is a simple trick to get a HTML fragment from the browser: protected JavascriptLibrary javascript = new JavascriptLibrary(); public String dump( WebElement element ) { return (String) javascript.executeScript( driver, "return arguments[0].innerHTML", element ); } You can now use JTidy and JDOM to convert the fragment first into W3C XML nodes and then into JDOM elements: public org.w3c.dom.Document asDom( WebElement parent ) { String html = dump( parent ); Tidy tidy = new Tidy(); tidy.setShowWarnings( false ); tidy.setErrout( new PrintWriter( new NullOutputStream() ) ); org.w3c.dom.Document dom = tidy.parseDOM( new StringReader( html ), null ); return dom; } public static org.jdom2.Document asJDOM( org.w3c.dom.Document content ) { // JDOM chokes on: org.jdom2.IllegalNameException: The name "html PUBLIC "-//W3C//DTD HTML 4.01//EN"" is not legal for JDOM/XML DocTypes: XML names cannot contain the character " ". Node docType = content.getDoctype(); if( null != docType ) { content.removeChild( docType ); } DOMBuilder builder = new DOMBuilder(); org.jdom2.Document jdomDoc = builder.build( content ); return jdomDoc; } Another great use case for this is a default exception handler for tests which saves a screenshot and a copy of the DOM at the time a test fails. No more guessing why something didn’t happen. If we go with ZK big time then I’ll be coming back to this post! Great tip, thanks. Using the client attributes for this is a good idea that saves much time and removes the need to implement a custom IDGenerator or similar.
https://blog.pdark.de/2013/08/30/selenium-vs-zk/
CC-MAIN-2021-43
en
refinedweb
Python API (advanced)¶ In some rare cases, experts may want to create Scheduler, Worker, and Nanny objects explicitly in Python. This is often necessary when making tools to automatically deploy Dask in custom settings. It is more common to create a Local cluster with Client() on a single machine or use the Command Line Interface (CLI). New readers are recommended to start there. If you do want to start Scheduler and Worker objects yourself you should be a little familiar with async/ await style Python syntax. These objects are awaitable and are commonly used within async with context managers. Here are a few examples to show a few ways to start and finish things. Full Example¶ We first start with a comprehensive example of setting up a Scheduler, two Workers, and one Client in the same event loop, running a simple computation, and then cleaning everything up.()) Now we look at simpler examples that build up to this case. Scheduler¶ We create scheduler by creating a Scheduler() object, and then await that object to wait for it to start up. We can then wait on the .finished method to wait until it closes. In the meantime the scheduler will be active managing the cluster.. import asyncio from dask.distributed import Scheduler, Worker async def f(): s = Scheduler() # scheduler created, but not yet running s = await s # the scheduler is running await s.finished() # wait until the scheduler closes asyncio.get_event_loop().run_until_complete(f()) This program will run forever, or until some external process connects to the scheduler and tells it to stop. If you want to close things yourself you can close any Scheduler, Worker, Nanny, or Client class by awaiting the .close method: await s.close() Worker¶ The worker follows the same API. The only difference is that the worker needs to know the address of the scheduler. import asyncio from dask.distributed import Scheduler, Worker async def f(scheduler_address): w = await Worker(scheduler_address) await w.finished() asyncio.get_event_loop().run_until_complete(f("tcp://127.0.0.1:8786")) Start many in one event loop¶ We can run as many of these objects as we like in the same event loop. import asyncio from dask.distributed import Scheduler, Worker async def f(): s = await Scheduler() w = await Worker(s.address) await w.finished() await s.finished() asyncio.get_event_loop().run_until_complete(f()) Use Context Managers¶ We can also use async with context managers to make sure that we clean up properly. Here is the same example as from above: import asyncio from dask.distributed import Scheduler, Worker async def f(): async with Scheduler() as s: async with Worker(s.address) as w: await w.finished() await s.finished() asyncio.get_event_loop().run_until_complete(f()) Alternatively, in the example below we also include a Client, run a small computation, and then allow things to clean up after that computation..()) This is equivalent to creating and awaiting each server, and then calling .close on each as we leave the context. In this example we don’t wait on s.finished(), so this will terminate relatively quickly. You could have called await s.finished() though if you wanted this to run forever. Nanny¶ Alternatively, we can replace Worker with Nanny if we want your workers to be managed in a separate process. The Nanny constructor follows the same API. This allows workers to restart themselves in case of failure. Also, it provides some additional monitoring, and is useful when coordinating many workers that should live in different processes in order to avoid the GIL. # w = await Worker(s.address) w = await Nanny(s.address) API¶ These classes have a variety of keyword arguments that you can use to control their behavior. See the API documentation below for more information. Scheduler¶ - class distributed.Scheduler(loop=None, delete_interval='500ms', synchronize_worker_interval='60s', services=None, service_kwargs=None, allowed_failures=None, extensions=None, validate=None, scheduler_file=None, security=None, worker_ttl=None, idle_timeout=None, interface=None, host=None, port=0, protocol=None, dashboard_address=None, dashboard=None, http_prefix='/', preload=None, preload_argv=(), plugins=(), *: {task key: TaskState} Tasks currently known to the scheduler - unrunnable: {TaskState} Tasks in the “no-worker” state - workers: {worker key: WorkerState} Workers currently connected to the scheduler - idle: {WorkerState}: Set of workers that are not fully utilized - saturated: {WorkerState}: Set of workers that are not over-utilized - host_info: {hostname: dict}: Information about each worker host - clients: {client key: ClientState} Clients currently connected to the scheduler - services: {str: port}: Other services running on this scheduler, like Bokeh - loop: IOLoop: The running Tornado IOLoop - client_comms: {client key: Comm} For each client, a Comm object used to receive task requests and report task status updates. - stream_comms: {worker key: Comm} For each worker, a Comm object from which we both accept stimuli and report results - task_duration: {key-prefix: time} Time we expect certain functions to take, e.g. {'sum': 0.25} - adaptive_target(comm=None, target_duration=None)[source]¶ Desired number of workers based on the current workload This looks at the current running tasks and memory use, and returns a number of desired workers. This is often used by adaptive scheduling. - Parameters - target_durationstr A desired duration of time for computations to take. This affects how rapidly the scheduler will ask to scale. See also - async add_client(comm, client=None, versions=None)[source]¶ Add client to network We listen to all future messages from this Comm. - add_keys(comm=None, worker=None, keys=(), stimulus_id=None)[source]¶ Learn that a worker has certain keys This should not be used in practice and is mostly here for legacy reasons. However, it is sent by workers from time to time. - add_plugin(plugin: distributed.diagnostics.plugin.SchedulerPlugin, *, idempotent: bool = False, name: str | None = None, **kwargs)[source]¶ Add external plugin to scheduler. See - Parameters - pluginSchedulerPlugin SchedulerPlugin instance to add - idempotentbool If true, the plugin is assumed to already exist and no action is taken. - namestr A name for the plugin, if None, the name attribute is checked on the Plugin instance and generated if not discovered. - **kwargs Deprecated; additional arguments passed to the plugin class if it is not already an instance - async add_worker(comm=None, address=None, keys=(), nthreads=None, name=None, resolve_address=True, nbytes=None, types=None, now=None, resources=None, host_info=None, memory_limit=None, metrics=None, pid=0, services=None, local_directory=None, versions=None, nanny=None, extra=None)[source]¶ Add a new worker to the cluster - async broadcast(comm=None, msg=None, workers=None, hosts=None, nanny=False, serializers=None)[source]¶ Broadcast message to workers, return all results - async close(comm=None, fast=False, close_workers=False)[source]¶ Send cleanup signal to all coroutines then wait until finished See also Scheduler.cleanup - async close_worker(comm=None, worker=None, safe=None). - async delete_worker_data(worker_address: str, keys: Collection[str]) None [source]¶ Delete data from a worker and update the corresponding worker/task states - Parameters - worker_address: str Worker address to delete keys from - keys: list[str] List of keys to delete on the specified worker - async feed(comm, function=None, setup=None, teardown=None, interval='1s', **kwargs)[source]¶ Provides a data Comm to external requester Caution: this runs arbitrary Python code on the scheduler. This should eventually be phased out. It is mostly used by diagnostics. - async gather(comm=None, keys=None, serializers=None)[source]¶ Collect data from workers to the scheduler - async gather_on_worker(worker_address: str, who_has: dict[str, list[str]]) set [source]¶ Peer-to-peer copy of keys from multiple workers to a single worker - Parameters - worker_address: str Recipient worker address to copy keys to - who_has: dict[Hashable, list[str]] {key: [sender address, sender address, …], key: …} - Returns - returns: set of keys that failed to be copied - get_worker_service_addr(worker, service_name, protocol=False)[source]¶ Get the (host, port) address of the named service on the worker. Returns None if the service doesn’t exist. - Parameters - workeraddress - service_namestr Common services include ‘bokeh’ and ‘nanny’ - protocolboolean Whether or not to include a full address with protocol (True) or just a (host, port) pair - handle_long_running(key=None, worker=None, compute_duration=None)[source]¶ A task has seceded from the thread pool We stop the task from being stolen in the future, and change task duration accounting as if the task has stopped. - async handle_worker(comm=None, worker=None)[source]¶ Listen to responses from a single worker This is the main loop for scheduler-worker interaction See also Scheduler.handle_client Equivalent coroutine for clients - async proxy(comm=None, msg=None, worker=None, serializers=None)[source]¶ Proxy a communication through the scheduler to some other worker - async rebalance(comm=None, keys: Iterable[Hashable] = None, workers: Iterable[str] = None) dict [source]¶ Rebalance keys so that each worker ends up with roughly the same process memory (managed+unmanaged). Warning This operation is generally not well tested against normal operation of the scheduler. It is not recommended to use it while waiting on computations. Algorithm Find the mean occupancy of the cluster, defined as data managed by dask + unmanaged process memory that has been there for at least 30 seconds ( distributed.worker.memory.recent-to-old-time). This lets us ignore temporary spikes caused by task heap usage. Alternatively, you may change how memory is measured both for the individual workers as well as to calculate the mean through distributed.worker.memory.rebalance.measure. Namely, this can be useful to disregard inaccurate OS memory measurements. Discard workers whose occupancy is within 5% of the mean cluster occupancy ( distributed.worker.memory.rebalance.sender-recipient-gap/ 2). This helps avoid data from bouncing around the cluster repeatedly. Workers above the mean are senders; those below are recipients. Discard senders whose absolute occupancy is below 30% ( distributed.worker.memory.rebalance.sender-min). In other words, no data is moved regardless of imbalancing as long as all workers are below 30%. Discard recipients whose absolute occupancy is above 60% ( distributed.worker.memory.rebalance.recipient-max). Note that this threshold by default is the same as distributed.worker.memory.targetto prevent workers from accepting data and immediately spilling it out to disk. Iteratively pick the sender and recipient that are farthest from the mean and move the least recently inserted key between the two, until either all senders or all recipients fall within 5% of the mean. A recipient will be skipped if it already has a copy of the data. In other words, this method does not degrade replication. A key will be skipped if there are no recipients available with enough memory to accept the key and that don’t already hold a copy. The least recently insertd (LRI) policy is a greedy choice with the advantage of being O(1), trivial to implement (it relies on python dict insertion-sorting) and hopefully good enough in most cases. Discarded alternative policies were: Largest first. O(n*log(n)) save for non-trivial additional data structures and risks causing the largest chunks of data to repeatedly move around the cluster like pinballs. Least recently used (LRU). This information is currently available on the workers only and not trivial to replicate on the scheduler; transmitting it over the network would be very expensive. Also, note that dask will go out of its way to minimise the amount of time intermediate keys are held in memory, so in such a case LRI is a close approximation of LRU. - Parameters - keys: optional whitelist of dask keys that should be considered for moving. All other keys will be ignored. Note that this offers no guarantee that a key will actually be moved (e.g. because it is unnecessary or because there are no viable recipient workers for it). - workers: optional whitelist of workers addresses to be considered as senders or recipients. All other workers will be ignored. The mean cluster occupancy will be calculated only using the whitelisted workers. - reevaluate_occupancy(worker_index: ctypes.c_long = 0). - async register_nanny_plugin(comm, plugin, name=None)[source]¶ Registers a setup function, and call it on every worker - async register_scheduler_plugin(comm=None, plugin=None, name=None)[source]¶ Register a plugin on the scheduler. - async register_worker_plugin(comm, plugin, name=None)[source]¶ Registers a setup function, and call it on every worker - remove_plugin(name: str | None = None, plugin: SchedulerPlugin | None = None) None [source]¶ Remove external plugin from scheduler - Parameters - namestr Name of the plugin to remove - pluginSchedulerPlugin Deprecated; use name argument instead. Instance of a SchedulerPlugin class to remove; - async remove_worker(comm=None, address=None, safe=False, close=True)[source]¶ Remove worker from cluster We do this when a worker reports that it plans to leave or when it appears to be unresponsive. This may send its tasks back to a released state. - async replicate(comm=None, keys=None, n=None, workers=None, branching_factor=2, delete=True, lock=True)[source]¶ Replicate data throughout cluster This performs a tree copy of the data throughout the network individually on each piece of data. - Parameters - keys: Iterable list of keys to replicate - n: int Number of replications we expect to see within the cluster - branching_factor: int, optional The number of workers that can copy data in each generation. The larger the branching factor, the more data we copy in a single step, but the more a given worker risks being swamped by data requests. See also - report(msg: dict, ts: Optional[distributed.scheduler.TaskState] = None, client: Optional[str] = None)[source]¶ Publish updates to all listening Queues and Comms If the message contains a key then we only send the message to those comms that care about the key. - reschedule(key=None, worker=None)[source]¶ Reschedule a task Things may have shifted and this task may now be better suited to run elsewhere - async retire_workers(comm=None, workers=None, remove=True, close_workers=False, names=None, lock=True, **kwargs) dict [source]¶ Gracefully retire workers from cluster - Parameters - workers: list (optional) List of worker addresses to retire. If not provided we call workers_to_closewhich finds a good set - names: list (optional) List of worker names to retire. - remove: bool (defaults to True) Whether or not to remove the worker metadata immediately or else wait for the worker to contact us - close_workers: bool (defaults to False) Whether or not to actually close the worker explicitly from here. Otherwise we expect some external job scheduler to finish off the worker. - **kwargs: dict Extra options to pass to workers_to_close to determine which workers we should drop - Returns - Dictionary mapping worker ID/address to dictionary of information about - that worker for each retired worker. See also - run_function(stream, function, args=(), kwargs={}, wait=True)[source]¶ Run a function within this process See also - async scatter(comm=None, data=None, workers=None, client=None, broadcast=False, timeout=2)[source]¶ Send data out to workers See also - send_task_to_worker(worker, ts: distributed.scheduler.TaskState, duration: ctypes.c_double = - 1)[source]¶ Send a single computational task to a worker - start_ipython(comm=None)[source]¶ Start an IPython kernel Returns Jupyter connection info dictionary. - stimulus_cancel(comm, keys=None, client=None, force=False)[source]¶ Stop execution on a list of keys -: str, *args, **kwargs)[source]¶ Transition a key from its current state to the finish state - Returns - Dictionary of recommendations for future transitions See also Scheduler.transitions transitive version of this function Examples >>> self.transition('x', 'waiting') {'x': 'processing'} - transitions(recommendations: dict)[source]¶ Process transitions until none are left This includes feedback from previous transitions and continues until we reach a steady state - update_data(comm=None, *, who_has: dict, nbytes: dict, client=None, serializers=None)[source]¶ Learn that new data has entered the network from an external source See also Scheduler.mark_key_in_memory - update_graph(client=None, tasks=None, keys=None, dependencies=None, restrictions=None, priority=None, loose_restrictions=None, resources=None, submitting_task=None, retries=None, user_priority=0, actors=None, fifo_timeout=0, annotations=None, code=None)[source]¶ Add new computations to the internal dask graph This happens whenever the Client calls submit, map, get, or compute. - worker_send(worker, msg)[source]¶ Send message to worker This also handles connection failures by adding a callback to remove the worker on the next cycle. - workers_list(workers)[source]¶ List of qualifying workers Takes a list of worker addresses or hostnames. Returns a list of all worker addresses that match - workers_to_close(comm=None, memory_ratio: int | float | None = None, n: int | None = None, key: Callable[[WorkerState], Hashable] | None = None, minimum: int | None = None, target: int | None = None, attribute: str = 'address') list[str] [source]¶ Find workers that we can close with low cost This returns a list of workers that are good candidates to retire. These workers are not running anything and are storing relatively little data relative to their peers. If all workers are idle then we still maintain enough workers to have enough RAM to store our data, with a comfortable buffer. This is for use with systems like distributed.deploy.adaptive. - Parameters - memory_ratioNumber Amount of extra space we want to have for our stored data. Defaults to 2, or that we want to have twice as much memory as we currently have data. - nint Number of workers to close - minimumint Minimum number of workers to keep around - keyCallable(WorkerState) An optional callable mapping a WorkerState object to a group affiliation. Groups will be closed together. This is useful when closing workers must be done collectively, such as by hostname. - targetint Target number of workers to have after we close - attributestr The attribute of the WorkerState object to return, like “address” or “name”. Defaults to “address”. - Returns - to_close: list of worker addresses that are OK to close See also Examples >>> scheduler.workers_to_close() ['tcp://192.168.0.1:1234', 'tcp://192.168.0.2:1234'] Group workers by hostname prior to closing >>> scheduler.workers_to_close(key=lambda ws: ws.host) ['tcp://192.168.0.1:1234', 'tcp://192.168.0.1:4567'] Remove two workers >>> scheduler.workers_to_close(n=2) Keep enough workers to have twice as much memory as we we need. >>> scheduler.workers_to_close(memory_ratio=2) Worker¶ - class distributedthreads: int: Number of nthreads used by this worker process - executors: Dict[str, concurrent.futures.Executor]: Executors used to perform computation. Always contains the default executor. - - comm_threshold_bytes: int As long as the total number of bytes in flight is below this threshold we will not limit the number of outgoing connections for a single tasks dependency fetch. - - Parameters - scheduler_ip: str - scheduler_port: int - ip: str, optional - data: MutableMapping, type, None The object to use for storage, builds a disk-backed LRU dict by default - nthreads: int, optional - loop: tornado.ioloop.IOLoop - local_directory: str, optional Directory where we place local resources - name: str, optional - memory_limit: int, float, string Number of bytes of memory that this worker should use. Set to zero for no limit. Set to ‘auto’ to calculate as system.MEMORY_LIMIT * min(1, nthreads / total_cores) Use strings or numbers like 5GB or 5e9 - memory_target_fraction: float Fraction of memory to try to stay beneath - memory_spill_fraction: float Fraction of memory at which we start spilling to disk - memory_pause_fraction: float Fraction of memory at which we stop running new tasks - executor: concurrent.futures.Executor, dict[str, concurrent.futures.Executor], str - The executor(s) to use. Depending on the type, it has the following meanings: Executor instance: The default executor. Dict[str, Executor]: mapping names to Executor instances. If the “default” key isn’t in the dict, a “default” executor will be created using ThreadPoolExecutor(nthreads). Str: The string “offload”, which refer to the same thread pool used for offloading communications. This results in the same thread being used for deserialization and computation. - resources: dict Resources that this worker has like {'GPU': 2} - nanny: str Address on which to contact nanny, if it exists - lifetime: str Amount of time like “1 hour” after which we gracefully shut down the worker. This defaults to None, meaning no explicit shutdown time. - lifetime_stagger: str Amount of time like “5 minutes” to stagger the lifetime value The actual lifetime will be selected uniformly at random between lifetime +/- lifetime_stagger - lifetime_restart: bool Whether or not to restart a worker after it has reached its lifetime Default False - async close_gracefully(restart=None)[source]¶ Gracefully shut down a worker This first informs the scheduler that we’re shutting down, and asks it to move our data elsewhere. Afterwards, we close as normal - async gather_dep(worker: str, to_gather: Iterable[str], total_nbytes: int, *, stimulus_id)[source]¶ Gather dependencies for a task from a worker who has them - Parameters - workerstr Address of worker to gather dependencies from - to_gatherlist Keys of dependencies to gather from worker – this is not necessarily equivalent to the full list of dependencies of depas some dependencies may already be present on this worker. - total_nbytesint Total number of bytes for all the dependencies in to_gather combined - get_current_task()[source]¶ Get the key of the task we are currently running This only makes sense to run within a task See also get_worker Examples >>> from dask.distributed import get_worker >>> def f(): ... return get_worker().get_current_task() >>> future = client.submit(f) >>> future.result() 'f-1234' - handle_cancel_compute(key, reason)[source]¶ Cancel a task on a best effort basis. This is only possible while a task is in state waiting or ready. Nothing will happen otherwise. - handle_free_keys(comm=None, keys=None, reason=None)[source]¶ Handler to be called by the scheduler. The given keys are no longer referred to and required by the scheduler. The worker is now allowed to release the key, if applicable. This does not guarantee that the memory is released since the worker may still decide to hold on to the data and task since it is required by an upstream dependency. - handle_remove_replicas(keys, stimulus_id)[source]¶ Stream handler notifying the worker that it might be holding unreferenced, superfluous data. This should not actually happen during ordinary operations and is only intended to correct any erroneous state. An example where this is necessary is if a worker fetches data for a downstream task but that task is released before the data arrives. In this case, the scheduler will notify the worker that it may be holding this unnecessary data, if the worker hasn’t released the data itself, already. This handler does not guarantee the task nor the data to be actually released but only asks the worker to release the data on a best effort guarantee. This protects from race conditions where the given keys may already have been rescheduled for compute in which case the compute would win and this handler is ignored. For stronger guarantees, see handler free_keys - async memory_monitor()[source]¶ Track this process’s memory usage and act accordingly If we rise above 70% memory use, start dumping data to disk. If we rise above 80% memory use, stop execution of new tasks - transition(ts, finish: str, *, stimulus_id, **kwargs)[source]¶ Transition a key from its current state to the finish state - Returns - Dictionary of recommendations for future transitions See also Scheduler.transitions transitive version of this function Examples >>> self.transition('x', 'waiting') {'x': 'processing'} - transitions(recommendations: dict, *, stimulus_id)[source]¶ Process transitions until none are left This includes feedback from previous transitions and continues until we reach a steady state - trigger_profile()[source]¶ Get a frame from all actively computing threads Merge these frames into existing profile counts Nanny¶ - class distributed fraction of its memory limit. The parameters for the Nanny are mostly the same as those for the Worker with exceptions listed below. - Parameters - env: dict, optional Environment variables set at time of Nanny initialization will be ensured to be set in the Worker process as well. This argument allows to overwrite or otherwise set environment variables for the Worker. It is also possible to set environment variables using the option distributed.nanny.environ. Precedence as follows Nanny arguments Existing environment variables Dask configuration - close_gracefully(comm=None)[source]¶ A signal that we shouldn’t try to restart workers if they go away This is used as part of the cluster shutdown process. - async instantiate(comm=None) distributed.core.Status [source]¶ Start a local worker process Blocks until the process is up and the scheduler is properly informed - async kill(comm=None, timeout=2)[source]¶ Kill the local worker process Blocks until both the process is down and the scheduler is properly informed
https://docs.dask.org/en/latest/how-to/deploy-dask/python-advanced.html
CC-MAIN-2021-43
en
refinedweb
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). Hi, In learning the Hair API, I'm trying to re-perform the Hair Edit > Convert from Spline Command. Basically, you select a spline and it gets converted to a hair object. Hair Edit > Convert from Spline My main bottleneck is the SetGuide which I believe creates guides in hair object. HairObject.SetGuides(guides, clone) SetGuide HairObject.SetGuides(guides, clone) The problem is the c4d.modules.hair.HairGuides doesn't have an initialize command which I need to c4d.modules.hair.HairGuides Is there a way around this? Regards, Ben hi, the set guides just define the guide, but you have to create them before. I've create this more generic example. From this, you should be able to create the guide from the spline. import c4d from c4d import gui # Welcome to the world of Python # Main function def main(): # Creates the guides guideCnt = 10 segmentPerGuide = 4 pointsPerGuide = segmentPerGuide +1 guides = c4d.modules.hair.HairGuides(guideCnt, segmentPerGuide) if guides is None: raise ValueError("coudln't create the guides") x = 0 xspace = 10 yspace = 10 # set the points position of the guides. for guideIndex in range(guideCnt): # space each guide x += xspace for spid in range(pointsPerGuide): # Calculate the vertical position of each point pPos = c4d.Vector (x, yspace * spid, 0 ) # get the index of that point in the global guide array pIndex = guideIndex * pointsPerGuide + spid # Sets the point guides.SetPoint( pIndex, pPos ) #() after looking the result you need a bit more information about the command. the convert spline to hair is using the function UniformToNatural to convert bezier curves. It also take into account any hair tag that could change the render look of the spline. Cheers, Manuel @m_magalhaes Thanks for the response. I was able to create guides but it doesn't seem to follow the original position of the splines. You can see the problem here: Some general notes Here is the sample file: Here is the code so far: import c4d from c4d import gui # Main function def main(): spline = doc.SearchObject("spline") guide_count = spline.GetSegmentCount() segment_per_guide = 9 points_per_guide = segment_per_guide spline_points = spline.GetAllPoints() guides = c4d.modules.hair.HairGuides(guide_count, segment_per_guide) for guide_index in range(guide_count): idx = 0 for spid in range (points_per_guide): point_index = guide_index * points_per_guide + spid if idx <= len(spline_points) - 1: guides.SetPoint(point_index, spline_points[point_index]) idx += 1 else: guides.SetPoint(point_index[-1], spline_points[-1]) # This was made to address the stray point but it is still a stray point. lol hair = c4d.BaseObject(c4d.Ohair) hair.SetGuides(guides, False) doc.InsertObject(hair, None, None) c4d.EventAdd() # Execute main() if __name__=='__main__': main() hair guide are exactly as spline, they have points, segment. My example was more generic of course and that's why there was an offset on it. (we try sometimes to create more generic example so we can add them on github later) look at the following example, it does what you are looking for. import c4d from c4d import gui # Welcome to the world of Python # Main function def main(): spline = op if spline is None: raise ValueError("there's no object selected") if not spline.IsInstanceOf(c4d.Ospline): raise ValueError("select a spline") # we can use a SplineLengthData to use the UniformToNatural function sld = c4d.utils.SplineLengthData() # Retrieves the spline matrix splineMg = spline.GetMg() segCount = spline.GetSegmentCount() # number of point in a segment is number of segment +1 segmentPerGuide = 4 pointsPerGuide = segmentPerGuide + 1 # Creates the guides guides = c4d.modules.hair.HairGuides(segCount, segmentPerGuide) # for each segment of the spline for segIndex in range(segCount): # initialize the splineLenghtData with the current segment sld.Init(spline, segIndex) for i in range(pointsPerGuide): # for each point of the guide, calculate the position for that point pos = spline.GetSplinePoint(sld.UniformToNatural(i / float(segmentPerGuide) ), segIndex) # Calculates the points index on the guides points pointIndex = segIndex * (pointsPerGuide ) + i # Sets the point position in global space pos = splineMg * pos guides.SetPoint( pointIndex, pos ) #() Thanks! It works as expected. So that's what you mean by UniformToNatural. Basically, even if the segment guides are not the same as spline points, it will conform to its overall shape. UniformToNatural
https://plugincafe.maxon.net/topic/12801/re-performing-hair-edit-convert-from-spline-command/1
CC-MAIN-2021-43
en
refinedweb
TOP-list of rare ThinkOrSwim indicators that everybody search for❗️ To your attention, I present 5 ready-to-use indicators for ThinkOrSwim trading platform absolutely for free. I’m sure that with the help of this indicator you will be able to make an intricate market analysis far easier. At the same time, we will analyze all settings and technical aspects of our indicator use in real market conditions. 💡Save it not to lose later and leave comments if it was useful. I will be happy to receive any kind of feedback. Let’s go! 1. ADX and ADXR indicator in Thinkorswim trading platform 💡 ADX indicator measures the strengths of the current trend on the market and usually used by traders as a support-signal to close their positions, as well as the main signal to open a trade. Important! With the help of ADX indicator you can measure the strength of the trend, but not it’s direction. Due to this, the direction of the market (up or down) will be at its peak when ADX is higher and vice versa. ADXR indicator is giving a correction to the standard ADX indicator. If you need standard ADX indicator, just delete the last line in code. ⚙️Input length variable should be set to “14” days (half cycle of 28 trading days). This is recommended by the indicator’s creator, Welles Wilder. You can test ADX indicator for TOS right now! ⬇️ #thinkscript indicator : ADX #ADX #by thetrader.top declare lower; input length = 14; input averageType = AverageType.WILDERS; plot ADX = DMI(length, averageType).ADX; plot ADXR = (ADX + ADX[length — 1]) / 2; ADX.SetDefaultColor(GetColor(5)); ADXR.SetDefaultColor(GetColor(0)); 2. ATR indicator in Thinkorswim account This indicator will show the average daily movement of the stock in cents for a certain period. ATR In-Play shows how much current stock already moved in its ATR for the current trading session. ThinkScript indicator ATR for Thinkorswim account. ⬇️ #thinkscript indicator : ATR, ATR In Play #Average True Range #by thetrader.top #Average True Range label #ATR in play — How many ATR stock did today input ATRInPlay = {default “1”, “0”}; input ATR = {default “1”, “0”}; def iATR = Round(MovingAverage (AverageType.WILDERS, TrueRange(high(period = AggregationPeriod.DAY )[1], close(period = AggregationPeriod.DAY)[1], low(period = AggregationPeriod.DAY )[1]), 14), 2); AddLabel (!ATR, “ATR “ + iATR, if iATR <= 0.5 then Color.RED else if iATR <= 2 then Color.DARK_GREEN else Color.WHITE); def iATRPlay = Round((high(period = “DAY”) — low(period = “DAY”)) / iATR, 2); AddLabel (!ATRInPlay, “ATRInPlay “ + iATRPlay, if iATRPlay <= 0.5 then Color.DARK_RED else if iATRPlay <= 1 then Color.WHITE else Color.DARK_GREEN); 3. Indicator of accumulation and distribution for TOS (AccDist) 🔥 This indicator is based on the accumulation and distribution in market cycles. With the help of AccDist in TOS you can evaluate supply and demand on the basis of currently traded volume. ❗️You will be able to detect discrepancies in the price movements of stocks and in the volume output for these movements with the volumes in the security that were traded during the cumulative period. #thinkscript indicator : ATR, ATR In Play #Average True Range #by thetrader.top declare lower; plot AccDist = TotalSum(volume * CloseLocationValue()); #code of the indicator CloseLocationValue itself will be calculated by formula: CLV = (close-low)-(high-close)/high-low; TotalSum — returns the amount of all values from the first bar until the current one plot ZeroLine = 0; #base null line AccDist.SetDefaultColor(GetColor(1)); #colour of the chart is displayed by indicator, it can be changed in settings ZeroLine.SetDefaultColor(GetColor(5)); #colour of the chart is displayed by indicator, it can be changed in settings 4. Parabolic SAR trend indicator in Thinkorswim 📈 The Parabolic SAR indicator is built directly on the chart in the TOS and has some similarities with the “moving average” indicator. Determine the most optimal exit points from positions together with this indicator for TOS: close short positions if the price is above the “parabolic” line, and long positions if the price is below the line. Some traders use the Parabolic SAR indicator as a trailing stop. The indicator changes its position depending on the direction of the “parabolic” trend. yourself. Max. acceleration factor indicator (input accelerationLimit) = 0.2; def state can be equal to three values: long, short, and default init. #thinkscript indicator : Parabolic SAR #Parabolic SAR #by thetrader.top input accelerationFactor = 0.02; input accelerationLimit = 0.2; assert(accelerationFactor > 0, “‘acceleration factor’ must be positive: ” + accelerationFactor); assert(accelerationLimit >= accelerationFactor, “‘acceleration limit’ (” + accelerationLimit + “) must be greater than or equal to ‘acceleration factor’ (” + accelerationFactor + “)”); def state = {default init, long, short}; def extreme; def SAR; def acc; switch (state[1]) { case init: state = state.long; acc = accelerationFactor; extreme = high; SAR = low; case short: if (SAR[1] < high) then { state = state.long; acc = accelerationFactor; extreme = high; SAR = extreme[1]; } else { state = state.short; if (low < extreme[1]) then { acc = min(acc[1] + accelerationFactor, accelerationLimit); extreme = low; } else { acc = acc[1]; extreme = extreme[1]; } SAR = max(max(high, high[1]), SAR[1] + acc * (extreme — SAR[1])); } case long: if (SAR[1] > low) then { state = state.short; acc = accelerationFactor; extreme = low; SAR = extreme[1]; } else { state = state.long; if (high > extreme[1]) then { acc = min(acc[1] + accelerationFactor, accelerationLimit); extreme = high; } else { acc = acc[1]; extreme = extreme[1]; } SAR = min(min(low, low[1]), SAR[1] + acc * (extreme — SAR[1])); } } plot parSAR = SAR; parSAR.SetPaintingStrategy(PaintingStrategy.POINTS); parSAR.SetDefaultColor(GetColor(5)); 5. High, Low, Close indicator for Thinkorswim terminal The indicator displays max / min prices and closing prices of the previous trading session. ⚙️ To configure and integrate ThinkScript into TOS, go to the Edit studies menu, then Create. Give a name to your indicator and go to the thinkScript Editor tab. #thinkscript indicator : Hight, Low, Close #Hight, Low, Close #by thetrader.top input timeFrame = {default DAY, WEEK, MONTH}; plot High = high(period = timeFrame)[1]; plot Low = low(period = timeFrame)[1]; plot Close = close(period = timeFrame)[1]; High.SetDefaultColor (Color.GREEN); High.SetPaintingStrategy(PaintingStrategy.DASHES); Low.SetDefaultColor(Color.RED); Low.SetPaintingStrategy(PaintingStrategy.DASHES); Close.SetDefaultColor (Color.GRAY); Close.SetPaintingStrategy(PaintingStrategy.DASHES); ** Don’t have a TOS account yet without delay? Having problems registering a Thinkorswim live account without quotes delays? Not sure how to remove the 20 minute delay? Contact us via credentials in the profile and we will fix it! 🔥 If this article was helpful to you, please click Claps 👏, subscribe and save. I would be glad to receive feedback ..!
https://thinkorswim-europe.medium.com/top-list-of-rare-thinkorswim-indicators-that-everybody-search-for-%EF%B8%8F-c077df14dbb?source=post_internal_links---------0----------------------------
CC-MAIN-2021-43
en
refinedweb
I Made my Own Data Type (C++)! It's called "dsq::squid". The namespace "dsq" stands for "dynamic squid". If you want to see how I created this, take a look at the attached repl. Also, if you want to learn how to make your own data type, check out this tutorial. Otherwise, let's see what "squid" can do! So my data type can actually accommodate most primitive types, making it dynamic. It can hold an integer, string, float, and a few more! Here's the basic syntax: Here's the list of types it can hold: - bool - char - short, int, long - float, double - std::string And the cool thing is, each type has it's own special method! Now let's take a look at bool. Boolean Squids As you can see, with the power of overloaded operators, we can make the object boolVar function like a traditional boolean! Well, that's just bool. Next, characters! Character Squid What's special about char is that you can actually get the ascii value of it! Should come in handy sometimes. Well, next up, integers! Integer Squids What makes the int type so special is the round method! The place is based 10. So you can round it by the 10th place, 100th place, or even 100000th place. The default is 10. The method is how you would like to round it. Up, down, or normally. Up - 3.14 becomes 3.2; Down - 3.14 becomes 3.1 Normal - 3.14 becomes 3.1 There's also a to_str() method which converts the int to a string. Next up, decimals! Decimal Squids So for decimals, it's actually very similar to ints, but the round function works a little differently. You specify how many decimals places you would like to remove (default is 3). Another special feature is the decimal_places() method, which counts the number of decimal places. It returns and int. And lastly, strings! String Squids So a few things here. Subscript operator, multiplication, length, and erase. And yeah! That's all the data types of the dsq::squid data type. Comment below if you want me to add more. Also, one more thing: Don't forget to upvote! noice I don't know, I see C casts in there (int), and I can't do ++ on the bool. Some of this could've had better performance, for example, constexpr for .size. Also, why was it a function? Really, most of this uses the C++ stdlib, makes it private, and exposes even less than std:: does. Thinking about this... I might be able to make something relatively useful. Now really, I was preparing to make a type-safe output system relatively similar to (w)cout and I had seen squid& as the return value of the operator, which is a reference to an object of the type squid, yet attempts to do so in my own code fail. @DynamicSquid cout is type-safe already, but like printf, you can easily do printf("%d", "die system, die"). But can you explain how the squid& works? @StudentFires oh you mean like for this: squid& operator ++ () { ++variable; return *this; }? @DynamicSquid yes, I have no clue as to how that's working, as I'm far from an expert on C++ classes. @StudentFires oh, okay. here, think of it like this: So the reference return type basically turns the function into the actual value it's returning. WAIT: YOU ATTACKED A REPL HOW DARE YOU MONSTER @Codemonkey51 oh oops, lol. I meant "attached" xD :) @DynamicSquid
https://replit.com/talk/share/I-Made-my-Own-Data-Type-C/36833?order=votes
CC-MAIN-2021-43
en
refinedweb
1.1 anton 1: \ count execution of control-flow edges 2: 1.5 ! anton 3: \ Copyright (C) 2004: 21: \ relies on some Gforth internals 22: 23: \ !! assumption: each file is included only once; otherwise you get 24: \ the counts for just one of the instances of the file. This can be 25: \ fixed by making sure that every source position occurs only once as 26: \ a profile point. 27: 1.2 anton 28: true constant count-calls? \ do some profiling of colon definitions etc. 29: 1.3 anton 30: \ for true COUNT-CALLS?: 31: 32: \ What data do I need for evaluating the effectiveness of (partial) inlining? 33: 34: \ static and dynamic counts of everything: 35: 36: \ original BB length (histogram and average) 37: \ BB length with partial inlining (histogram and average) 38: \ since we cannot partially inline library calls, we use a parameter 39: \ that represents the amount of partial inlining we can expect there. 40: \ number of tail calls (original and after partial inlining) 41: \ number of calls (original and after partial inlining) 42: \ reason for BB end: call, return, execute, branch 43: 44: \ how many static calls are there to a word? How many of the dynamic 45: \ calls call just a single word? 46: 1.1 anton 47: struct 48: cell% field profile-next 49: cell% 2* field profile-count 50: cell% 2* field profile-sourcepos 51: cell% field profile-char \ character position in line 1.2 anton 52: count-calls? [if] 1.3 anton 53: cell% field profile-colondef? \ is this a colon definition start 1.2 anton 54: cell% field profile-calls \ static calls to the colon def 55: cell% field profile-straight-line \ may contain calls, but no other CF 56: cell% field profile-calls-from \ static calls in the colon def 57: [endif] 1.1 anton 58: end-struct profile% \ profile point 59: 60: variable profile-points \ linked list of profile% 61: 0 profile-points ! 1.2 anton 62: variable next-profile-point-p \ the address where the next pp will be stored 63: profile-points next-profile-point-p ! 64: count-calls? [if] 65: variable last-colondef-profile \ pointer to the pp of last colon definition 66: [endif] 67: 1.1 anton 68: : new-profile-point ( -- addr ) 69: profile% %alloc >r 70: 0. r@ profile-count 2! 71: current-sourcepos r@ profile-sourcepos 2! 72: >in @ r@ profile-char ! 1.2 anton 73: [ count-calls? ] [if] 74: r@ profile-colondef? off 75: 0 r@ profile-calls ! 76: r@ profile-straight-line on 77: 0 r@ profile-calls-from ! 78: [endif] 79: 0 r@ profile-next ! 80: r@ next-profile-point-p @ ! 81: r@ profile-next next-profile-point-p ! 1.1 anton 82: r> ; 83: 84: : print-profile ( -- ) 85: profile-points @ begin 86: dup while 87: dup >r 88: r@ profile-sourcepos 2@ .sourcepos ." :" 89: r@ profile-char @ 0 .r ." : " 90: r@ profile-count 2@ 0 d.r cr 91: r> profile-next @ 92: repeat 93: drop ; 94: 1.2 anton 95: : print-profile-coldef ( -- ) 96: profile-points @ begin 97: dup while 98: dup >r 99: r@ profile-colondef? @ if 100: r@ profile-sourcepos 2@ .sourcepos ." :" 101: r@ profile-char @ 0 .r ." : " 102: r@ profile-count 2@ 0 d.r 103: r@ profile-straight-line @ space . 104: cr 105: endif 106: r> profile-next @ 107: repeat 108: drop ; 109: 110: 111: : dinc ( d-addr -- ) 112: \ increment double pointed to by d-addr 113: dup 2@ 1. d+ rot 2! ; 114: 115: : profile-this ( -- ) 116: new-profile-point profile-count POSTPONE literal POSTPONE dinc ; 117: 118: \ Various words trigger PROFILE-THIS. In order to avoid getting 119: \ several calls to PROFILE-THIS from a compiling word (like ?EXIT), we 120: \ just wait until the next word is parsed by the text interpreter (in 121: \ compile state) and call PROFILE-THIS only once then. The whole 122: \ BEFORE-WORD hooking etc. is there for this. 123: 124: \ The reason that we do this is because we use the source position for 125: \ the profiling information, and there's only one source position for 126: \ ?EXIT. If we used the threaded code position instead, we would see 127: \ that ?EXIT compiles to several threaded-code words, and could use 128: \ different profile points for them. However, usually dealing with 129: \ the source is more practical. 130: 131: \ Another benefit is that we can ask for profiling anywhere in a 132: \ control-flow word (even before it compiles its own stuff). 133: 134: \ Potential problem: Consider "COMPILING ] [" where COMPILING compiles 135: \ a whole colon definition (and triggers our profiler), but during the 136: \ compilation of the colon definition there is no parsing. Afterwards 137: \ you get interpret state at first (no profiling, either), but after 138: \ the "]" you get parsing in compile state, and PROFILE-THIS gets 139: \ called (and compiles code that is never executed). It would be 140: \ better if we had a way of knowing whether we are in a colon def or 141: \ not (and used that knowledge instead of STATE). 142: 143: Defer before-word-profile ( -- ) 144: ' noop IS before-word-profile 145: 146: : before-word1 ( -- ) 147: before-word-profile defers before-word ; 148: 149: ' before-word1 IS before-word 150: 151: : profile-this-compiling ( -- ) 152: state @ if 153: profile-this 154: ['] noop IS before-word-profile 155: endif ; 156: 157: : cock-profiler ( -- ) 158: \ as in cock the gun - pull the trigger 159: ['] profile-this-compiling IS before-word-profile 160: [ count-calls? ] [if] \ we are at a non-colondef profile point 161: last-colondef-profile @ profile-straight-line off 162: [endif] 163: ; 164: 165: : hook-profiling-into ( "name" -- ) 166: \ make (deferred word) "name" call cock-profiler, too 167: ' >body >r :noname 168: POSTPONE cock-profiler 169: r@ @ compile, \ old hook behaviour 170: POSTPONE ; 171: r> ! ; \ change hook behaviour 172: 173: hook-profiling-into then-like 1.3 anton 174: \ hook-profiling-into if-like \ subsumed by other-control-flow 175: \ hook-profiling-into ahead-like \ subsumed by other-control-flow 1.2 anton 176: hook-profiling-into other-control-flow 177: hook-profiling-into begin-like 178: hook-profiling-into again-like 179: hook-profiling-into until-like 180: 181: count-calls? [if] 182: : :-hook-profile ( -- ) 183: defers :-hook 184: next-profile-point-p @ 185: profile-this 186: @ dup last-colondef-profile ! 187: profile-colondef? on ; 188: 189: ' :-hook-profile IS :-hook 190: [else] 191: hook-profiling-into exit-like 192: hook-profiling-into :-hook 193: [endif]
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/profile.fs?annotate=1.5;sortby=rev;f=h;only_with_tag=v0-7-0;ln=1
CC-MAIN-2021-43
en
refinedweb
Building an Azure Event Grid app. Part 1: Event Grid Topic and a .NET Core custom app event publisher. In order to have a sample “data feed” for an application I wanted to use Azure Event Grid to create a topic that events could be published to and clients could subscribe to in order to process the events. The generic scenario is that of alarms (of whatever sort you’d imagine; car, house, IoT device) for which the events represent status updates. This is the first post about getting started but in my next posts I will cover consuming the events in an Azure Logic App and turning the custom event publishing app into a Docker image. Creating an Event Grid Topic The first step is to create the Event Grid Topic. As Event Grid is serverless, this is as easy as you might expect (no thinking about infrastructure, sizing or scaling). The step by step how to is here but you simply give it a name, a resource group and choose the region to run it in. Wait a few seconds and that’s it, the Topic is created: Creating a publisher (custom app) I wanted to create an app that would keep generating events, representing alarms and their status, and publishing these to the Event Grid Topic created above. Publishing an event is simply performing an HTTP Post, and although I chose to implement this in .NET Core, you could choose almost any language. The data object in an Azure Event Grid event is a JSON payload, and therefore I started by thinking about what that data would consist of: { "properties": { "deviceId": { "type": "number" }, "image": { "type": "string" }, "latitude": { "type": "number" }, "longitude": { "type": "number" }, "status": { "type": "string" } }, "type": "object" } For my purposes I wanted the alarm to include the device id, an image sent by the alarm (in fact a URL to an image in blob storage), the location of the alarm (longitude and latitude) and the status (green, amber, red). To create the app I simply used dotnet new console and then started editing it in Visual Studio Code code . A good reference for getting started with exactly this can be found here. The key point for me that I’d call out is that if you’re used to the .NET HttpClient then in .NET Core HttpClient doesn’t have a PostAsJsonAsync method. Instead what I’ve ended up doing is create a class as follows: public class JsonContent : StringContent { public JsonContent(object obj) : base(JsonConvert.SerializeObject(obj), Encoding.UTF8, "application/json"){ } } Then call the PostAsync method but passing in the JSON content: _client.PostAsync(_eventTopicEndpoint, new JsonContent(alarmEvents)); This requires that the Newtonsoft.Json package is added to the csproj file: <PackageReference Include="Newtonsoft.Json" Version="9.0.1" /> At it’s core publishing to an event Grid Topic is then: Set the headers including the Event Grid Topic key: _client.DefaultRequestHeaders.Accept.Clear(); _client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); _client.DefaultRequestHeaders.Add("aeg-sas-key", _eventAegSasKey); Create an event with a payload of an object that matches the JSON schema you want: AlarmEvent alarmEvent = new AlarmEvent {topic = _eventTopicResource, subject = "Alarm", id = Guid.NewGuid().ToString(), eventType = "recordInserted", eventTime = DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss.FFFFFFK"), data = payload }; AlarmEvent[] alarmEvents = { alarmEvent }; Post the event: HttpResponseMessage response = await _client.PostAsync(_eventTopicEndpoint, new JsonContent(alarmEvents)); There are other ways of doing all of this of course but if you want it then all my code can be found in GitHub. A reasonable amount of the code is my attempt at setting configurable boundaries for the geographical location for the devices, but it’s not relevant here so I won’t go into it. Before you commit to Git, make sure you have an appropriate .gitignore file to ensure you’re only staging the files you need. For .NET Core I used this one: This reduced the number of files I needed to stage and commit from 20 to 4. There are a wide range of .gitignore files here for most languages. I didn’t want to hardcode some of the key Event Grid information both so that keys didn’t end up in GitHub and also to make it reusable for other Topics. Therefore I can now run my console app from the command line providing some key arguments: dotnet run <EventTopicURL> <EventResourcePath> <EventKey> where: - EventTopicURL: the endpoint for the Event Grid Topic and can be copied from the Overview blade in the Azure Portal. - EventResourcePath: the path to the resource and is of the form: /subscriptions/< Azure subscription id>/resourceGroups/<Event Grid Topic resource group name>/providers/Microsoft.EventGrid/topics/<Event Grid Topic name>. - EventKey: the key for the Event Grid Topic and can be copied from the Access Keys blade in the Azure Portal. In the next post I’ll look at how to subscribe to the Topic with a Logic App to both test and consume the events..
https://docs.microsoft.com/en-us/archive/blogs/ukhybridcloud/building-an-azure-event-grid-app-part1-event-grid-topic-and-a-net-core-custom-app-event-publisher
CC-MAIN-2020-24
en
refinedweb
Microsoft Azure Iot Hub Library¶ The Zerynth Microsoft Azure Iot Hub Library can be used to ease the connection to Microsoft Azure Iot Hub. It allows to make your device act as a Microsoft Azure Iot Hub Device which can be registered through Azure command line tools or Azure web dashboard. The Device class¶ - class Device(hub_id, device_id, api_version, key, timestamp_fn, token_lifetime=60)¶ Create a Device instance representing a Microsoft Azure Iot Hub Device. The Device object will contain an mqtt client instance pointing to Microsoft Azure Iot Hub MQTT broker located at hub_id.azure-devices.net. The client is configured with device_idas MQTT id and is able to connect securely through TLS and authenticate through a SAS token with a token_lifetimeminutes lifespan. Valid tokens generation process needs current timestamp which will be obtained calling passed timestamp_fn. timestamp_fnhas to be a Python function returning an integer timestamp. A valid base64-encoded primary or secondary key keyis also needed. The api_versionstring is mandatory to enable some responses from Azure MQTT broker on specific topics. The client is accessible through mqttinstance attribute and exposes all Zerynth MQTT Client methods so that it is possible, for example, to setup custom callbacks on MQTT commands (though the Device class already exposes high-level methods to setup Azure specific callbacks). The only difference concerns mqtt.connectmethod which does not require broker url and ssl context, taking them from Device configuration: def timestamp_fn(): valid_timestamp = 1509001724 return valid_timestamp key = "ZhmdoNjyBccLrTnku0JxxVTTg8e94kleWTz9M+FJ9dk=" my_device = iot.Device('my-hub-id', 'my-device-id', '2017-06-30', key, timestamp_fn) my_device.mqtt.connect() ... my_device.mqtt.loop() on_bound(bound_cbk)¶ Set a callback to be called on cloud to device messages. bound_cbkcallback will be called passing a string containing sent message and a dictionary containing sent properties: def bound_callback(msg, properties): print('c2d msg:', msg) print('with properties:', properties) my_device.on_bound(bound_callback) on_method(method_name, method_cbk)¶ Set a callback to respond to a direct method call. method_cbkcallback will be called in response to method_namemethod, passing a dictionary containing method payload (should be a valid JSON): def send_something(method_payload): if method_payload['type'] == 'random': return (0, {'something': random(0,10)}) deterministic = 5 return (0, {'something': deterministic}) my_device.on_method('get', send_something) method_cbkcallback must return a tuple containing response status and a dictionary or None as response payload. on_twin_update(twin_cbk)¶ Set a callback to respond to cloud twin updates. twin_cbkcallback will be called when a twin update is notified by the cloud, passing a dictionary containing desired twin and an integer representing current twin version: def twin_callback(twin, version): print('new twin version:', version) print(twin) my_device.on_twin_update(twin_callback) It is possible for twin_cbkto return a dictionary which will be immediately sent as reported twin. report_twin(reported, wait_confirm=True, timeout=1000)¶ reportedtwin. reportedtwin must be a dictionary and will be sent as JSON string. It is possible to not wait for cloud confirmation setting wait_confirmto false or to set a custom timeout( -1to wait forever) for the confirmation process which could lead to TimeoutException. An integer status code is returned after cloud confirmation. get_twin(timeout=1000)¶ Get current twin containing desired and reported fields. It is possible set a custom timeout( -1to wait forever) for the process which could lead to TimeoutException. An integer status code is returned after cloud response along with received twinJSON-parsed dictionary.
https://docs.zerynth.com/latest/official/lib.azure.iot/docs/official_lib.azure.iot_iot.html
CC-MAIN-2020-24
en
refinedweb
$ cnpm install polymorphic-tests Polymorphic test framework for JS by Wizard Enterprises. See also polymorphic-web-component-tests. Install with npm i -D polymorphic-tests or yarn add -D polymorphic-tests. Once polymorphic-tests is installed, run ./node_modules/.bin/polytest init to configure your project. This tool depends on stage-0 decorators, implemented through either Babel or TypeScript. This is configured automatically through the init command, but configuring TypeScript or Babel is left to the user (see examples). Running is done simply through ./node_modules/.bin/polytest run. This can be added to your npm scripts (and is during init, if it's not defined) for added convenience. The init command generates a polytest.js file in your project, which exports configuration for running your tests. See ./node_modules/.bin/polytest help run for nearly full options. Nearly because the run --setup flag only accepts globs to setup files, while the polytest.js#setup field can additionally accept plain old functions to run during setup. This section assumes you understand polymorphism through class extension in JS. Tests are written in classes extending TestSuite, and must both extend TestSuite and be decorated with the @Suite() decorator, for reasons that will be explained below. Note: test methods may always return a promise to be awaited. import {Test, Suite, TestSuite} from 'polymorphic-tests' @Suite() class MySuite extends TestSuite { @Test() '1 + 1 = 2'(t) { t.expect(add(1, 1).to.equal(2) } } function add(x, y) { return x + y } t.expect here is from the Chai Assertion Library. t also includes Chai's should and assert APIs. Chai can be configured further by configuring setup, see Configuring. Test suites can contain other test suites. import {Test, Suite, SubSuite, TestSuite} from 'polymorphic-tests' @Suite() class Calculator extends TestSuite { @Test() 'calculator exists'(t) { t.expect(calculate).to.be.an.instanceof(Function) } } @SubSuite(Calculator) class Add extends TestSuite { @Test() '3 + 3 = 6'(t) { t.expect(calculate('+', 3, 3)).to.equal(6) } } @SubSuite(Calculator) class Multiply extends TestSuite { @Test() '3 * 3 = 9'(t) { t.expect(calculate('*', 3, 3)).to.equal(9) } } function calculate(operation: string, x: number, y: number) { return eval(x + operation + y) } Note that sub-suites still extend TestSuite and not their parent suites! This is be explained below. This is mostly useful for reporting and for test selection and filtering. The @Suite, @SubSuite, and @Test decorators can all recieve as their last argument an options config with flags skip and only, e.g. @Test({skip: true}), @SubSuite(MyParentSuite, {only: true}). The only flag is not global. This means decorating a test entity with it only applies inside that entity's suite. Currently there is no global only flag. The skip flag skips the decorated test entity and any children it might have, taking complete priority over only. So for example: import {Test, Suite, SubSuite, TestSuite} from 'polymorphic-tests' @Suite() class RootWithEntitiesWithFlags extends TestSuite { @Test() 'skipped because of sibling suite with only'(t) {} } @SubSuite(RootWithEntitiesWithFlags) class SkippedBecauseOfSiblingWithOnly extends TestSuite {} @SubSuite(RootWithEntitiesWithFlags, {only: true}) class SuiteWithOnly extends TestSuite { @Test() 'runs normally'(t) {} @Test({skip: true}) 'still skipped'(t) {} } @Suite({skip: true}) class SkippedRoot extends TestSuite {} @Suite() class UnaffectedRoot extends TestSuite { @Test() 'runs normally'(t) {} } Every test suite exposes these hook methods for extending. It's good practice to always call super.lifecycleHook() when you override one of these. Note: hook methods may always return a promise to be awaited. import {Test, Suite, TestSuite} from 'polymorphic-tests' @Suite() class MySuite extends TestSuite { static onDecorate() { // called when MySuite gets decorated } setup() { // called once when MySuite starts running } before(t) { // called before every test } @Test() 'some test'(t) {} after(t) { // called after every test } teardown() { // called once when MySuite ends } } Hooks still run after a test fails. This is important for cleanup purposes, but be aware that if an error gets thrown during after and teardown it will be reported and your test error will be swallowed. TestSuite Test suites being implemented as classes allows sharing all sorts of common boilerplate very easily through polymorphically implementing lifecycle hooks. For a full-fledged example of this, see polymorphic-web-component-tests. Another use for extending TestSuite is inheriting tests, in addition to boilerplate. import {Test, Suite, TestSuite} from 'polymorphic-tests' abstract class CustomTestSuite extends TestSuite { abstract testString: string abstract expectedLength: number @Test() 'my test'(t) { t.expect(this.testString.length).to.equal(expectedLength) } } At this point, by design, no tests will run. This is because CustomTestSuite wasn't decorated with the Suite or SubSuite decorators for registration, so our test has no parent. This allows us to do this: import {Test, Suite} from 'polymorphic-tests' import {CustomTestSuite} from '...' @Suite() class FirstSuite extends CustomTestSuite { testString = 'foo' expectedLength = 3 } @Suite() class SecondSuite extends CustomTestSuite { testString = 'longer' expectedLength = 6 } Note that here, FirstSuite#my test and SecondSuite#my test will run, but CustomTestSuite#my test will never run. ???? [email protected] Contributions, issues and feature requests are welcome! Feel free to check issues page. Give a ⭐️ if this project helped you! This project is ISC licensed.
https://developer.aliyun.com/mirror/npm/package/polymorphic-tests
CC-MAIN-2020-24
en
refinedweb
Java design patterns 101 About this tutorial Should I take this tutorial? This tutorial is for Java programmers who want to learn about design patterns as a means of improving their object-oriented design and development skills. After reading this tutorial This tutorial assumes that you are familiar with the Java language and with basic object-oriented concepts such as polymorphism, inheritance, and encapsulation. Some understanding of the Unified Modeling Language (UML) is helpful, but not required; this tutorial will provide an introduction to the basics. What is this tutorial about? Design patterns capture the experience of expert software developers, and present common recurring problems, their solutions, and the consequences of those solutions in methodical way. This tutorial explains: - Why patterns are useful and important for object-oriented design and development - How patterns are documented, categorized, and cataloged - When patterns should be used - Some important patterns and how they are implemented Tools The examples in this tutorial are all written in the Java language. It is possible and sufficient to read the code as a mental exercise, but to try out the code requires a minimal Java development environment. A simple text editor (such as Notepad in Windows or vi in a UNIX environment) and the Java Development Kit (version 1.2 or later) are all you need. A number of tools are also available for creating UML diagrams (see Related topics ). These are not necessary for this tutorial. Design patterns overview A brief history of design patterns Design patterns were first described by architect Christopher Alexander in his book A Pattern Language: Towns, Buildings, Construction (Oxford University Press, 1977). The concept he introduced and called patterns -- abstracting solutions to recurring design problems -- caught the attention of researchers in other fields, especially those developing object-oriented software in the mid-to-late 1980s. Research into software design patterns led to what is probably the most influential book on object-oriented design: Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (Addison-Wesley, 1995; see Related topics ). These authors are often referred to as the "Gang of Four" and the book is referred to as the Gang of Four (or GoF) book. problems involving concurrency, for example, and Core J2EE Patterns: Best Practices and Design Strategies by Deepak Alur, John Crupi, and Dan Malks focuses on patterns for multi-tier applications using Java 2 enterprise technologies. There is an active pattern community that collects new patterns, continues research, and takes leads in spreading the word on patterns. In particular, the Hillside Group sponsors many conferences including one introducing newcomers to patterns under the guidance of experts. Related topics provides additional sources of information about patterns and the pattern community. Pieces of a pattern. Different catalogers use different templates to document their patterns. Different catalogers also use different names for the different parts of the pattern. Each catalog also varies somewhat in the level of detail and analysis devoted to each pattern. The next several sections describe the templates used in Design Patterns and in Patterns in Java. Design Patterns template Design Patterns uses the following template: - Pattern name and classification: A conceptual handle and category for the pattern - Intent: What problem does the pattern address? - Also known as: Other common names for the pattern - Motivation: A scenario that illustrates the problem - Applicability: In what situations can the pattern be used? - Structure: Diagram using the Object Modeling Technique (OMT) - Participants: Classes and objects in design - Collaborations: How classes and objects in the design collaborate - Consequences: What objectives does the pattern achieve? What are the tradeoffs? - Implementation: Implementation details to consider, language-specific issues - Sample code: Sample code in Smalltalk and C++ - Known uses: Examples from the real world - Related patterns: Comparison and discussion of related patterns Patterns in Java template Patterns in Java uses the following template: - Pattern Name: The name and a reference to where it was first described - Synopsis: A very short description of the pattern - Context: A description of the problem the pattern is intended to solve - Forces: A description of the considerations that lead to the solution - Solution: A description of the general solution - Consequences: Implications of using the pattern - Implementation: Implementation details to consider - Java API Usage: When available, an example from the Java API is mentioned - Code example: A code example in the Java language - Related patterns: A list of related patterns Learning patterns The most important things to learn at first is the intent and context of each pattern: what problem, and under what conditions, the pattern is intended to solve. This tutorial covers some of the most important patterns, but skimming through a few catalogs and picking out this information about each pattern is the recommended next step for the diligent developer. In Design Patterns, the relevant sections to read are "Intent," "Motivation," and Applicability." In Patterns in Java, the relevant sections are "Synopsis," "Context," and "Forces and Solution." Doing the background research can help you identify a pattern that lends itself as a solution to a design problem you're facing. You can then evaluate the candidate pattern more closely for applicability, taking into account the solution and its consequences in detail. If this fails, you can look to related patterns. In some cases, you might find more than one pattern that can be used effectively. In other cases, there may not be an applicable pattern, or the cost of using an applicable pattern, in terms of performance or complexity, may be too high, and an ad hoc solution may be the best way to go. (Perhaps this solution can lead to a new pattern that has not yet been documented!) Using patterns to gain experience A critical step in designing object-oriented software is discovering the objects. There are various techniques that help: use cases, collaboration diagrams, or Class-Responsibility-Collaboration (CRC) cards, for example -- but discovering the objects is the hardest step for inexperienced designers to get right. Lack of experience or guidance can lead to too many objects with too many interactions and therefore dependencies, creating a monolithic system that is hard to maintain and impossible to reuse. This defeats the aim of object-oriented design. Design patterns help overcome this problem because they teach the lessons distilled from experience by experts: patterns document expertise. Further, patterns not only describe how software is structured, but more importantly, they also describe how classes and objects interact, especially at run time. Taking these interactions and their consequences explicitly into account leads to more flexible and reusable software. When not to use patterns While using a pattern properly results in reusable code, the consequences often include some costs as well as benefits. Reusability is often obtained by introducing encapsulation, or indirection, which can decrease performance and increase complexity. For example, you can use the Facade pattern to wrap loosely related classes with a single class to create a single set of functionality that is easy to use. One possible application might be to create a facade for the Java Internationalization API. This approach might be reasonable for a stand-alone application, where the need to obtain text from resource bundles, format dates and time, and so on, is scattered in various parts of the applications. But this may not be so reasonable for a multitier enterprise application that separates presentation logic from business. If all calls to the Internationalization API are isolated in a presentation module -- perhaps by wrapping them as JSP custom tags -- it would be redundant to add yet another layer of indirection. Another example of when patterns should be used with care is discussed in Concurrency patterns, regarding the consequences of the Single Thread Execution pattern. As a system matures, as you gain experience, or flaws in the software come to light, it's good to occasionally reconsider choices you've made previously. You may have to rewrite ad hoc code so that it uses a pattern instead, or change from one pattern to another, or remove a pattern entirely to eliminate a layer of indirection. Embrace change (or at least prepare for it) because it's inevitable. A brief introduction to UML class diagrams Class diagrams UML has become the standard diagramming tool for object-oriented design. Of the various types of diagrams defined by UML, this tutorial only uses class diagrams. In class diagrams, classes are depicted as boxes with three compartments, as shown in Figure 1. Figure 1. Class diagrams The top compartment contains the class name; if the class is abstract the name is italicized. The middle compartment contains the class attributes (also called properties, or variables). The bottom compartment contains the class methods (also called operations ). Like the class name, if a method is abstract, its name is italicized. Depending on the level of detail desired, it is possible to omit the properties and show only the class name and its methods, or to omit both the properties and methods and show only the class name. This approach is common when the overall conceptual relationship is being illustrated. Associations between classes Any interaction between classes is depicted by a line drawn between the classes. A simple line indicates an association, usually a conceptual association of any unspecified type. The line can be modified to provide more specific information about the association. Navigability is indicated by adding an open arrowhead. Specialization, or subclassing, is indicated by adding a triangular arrowhead. Cardinal numbers (or an asterisk for an unspecified plurality) can also be added to each end to indicate relationships, such as one-to-one and many-to-one. Figure 2 shows these different types of associations. Figure 2. Class associations Related topics provides further reading on UML and Java language associations. Creational patterns Creational patterns overview. The Singleton pattern, for example, is used to encapsulate the creation of an object in order to maintain control over it. This not only ensures that only one is created, but also allows lazy instantiation; that is, the instantiation of the object can be delayed until it is actually needed. This is especially beneficial if the constructor needs to perform a costly operation, such as accessing a remote database. The Singleton patternmethods. The Factory Method pattern In addition to the Singleton pattern, another common example of a creational pattern is the Factory Method. This pattern is used when it must be decided at run time which one of several compatible classes is to be instantiated. This pattern is used throughout the Java API. For example, the abstract Collator class's getInstance() method returns a collation object that is appropriate for the default locale, as determined by java.util.Locale.getDefault(): Collator defaultCollator = getInstance(); The concrete class that is returned is actually always a subclass of Collator, RuleBasedCollator, but that is an unimportant implementation detail. The interface defined by the abstract Collator class is all that is required to use it. Figure 3. The Factory Method pattern Structural patterns Structural patterns overview Structural patterns prescribe the organization of classes and objects. These patterns are concerned with how classes inherit from each other or how they are composed from other classes. Common structural patterns include Adapter, Proxy, and Decorator patterns. These patterns are similar in that they introduce a level of indirection between a client class and a class it wants to use. Their intents are different, however. Adapter uses indirection to modify the interface of a class to make it easier for a client class to use it. Decorator uses indirection to add behavior to a class, without unduly affecting the client class. Proxy uses indirection to transparently provide a stand-in for another class. The Adapter pattern The Adapter pattern is typically used to allow the reuse of a class that is similar, but not the same, as the class the client class would like to see. Typically the original class is capable of supporting the behavior the client class needs, but does not have the interface the client class expects, and it is not possible or practical to alter the original class. Perhaps the source code is not available, or it is used elsewhere and changing the interface is inappropriate. Here is an example that wraps OldClass so a client class can call it using a method, NewMethod() defined in NewInterface: public class OldClassAdapter implements NewInterface { private OldClass ref; public OldClassAdapter(OldClass oc) { ref = oc; } public void NewMethod() { ref.OldMethod(); } } Figure 4. The Adapter pattern The Proxy and Decorator patterns A Proxy is a direct stand-in for another class, and it typically has the same interface as that class because it implements a common interface or an abstract class. The client object is not aware that it is using a proxy. A Proxy is used when access to the class the client would like to use must be mediated in a way that is apparent to the client -- because it requires restricted access or is a remote process, for example. Figure 5. The Proxy pattern Decorator, like Proxy, is also a stand-in for another class, and it also has the same interface as that class, usually because it is a subclass. The intent is different, however. The purpose of the Decorator pattern is to extend the functionality of the original class in a way that is transparent to the client class. Examples of the Decorator pattern in the Java API are found in the classes for processing input and output streams. BufferedReader(), for example, makes reading text from a file convenient and efficient: BufferedReader in = new BufferedReader(new FileReader("file.txt")); The Composite pattern The Composite pattern prescribes recursive composition for complex objects. The intent is to allow all component objects to be treated in a consistent manner. All objects, simple and complex, that participate in this pattern derive from a common abstract component class that defines common behavior. Forcing relationships into a part-whole hierarchy in this way minimizes the types of objects that our system (or client subsystem) needs to manage. A client of a paint program, for example, could ask a line to draw itself in the same way it would ask any other object, including a composite object. Figure 6. The Composite pattern Behavioral patterns Behavioral patterns overview Behavioral patterns prescribe the way objects interact with each other. They help make complex behavior manageable by specifying the responsibilities of objects and the ways they communicate with each other. The Observer pattern Observer is a very common pattern. You typically use this pattern when you're implementing an application with a Model/View/Controller architecture. The Model/View part of this design is intended to decouple the presentation of data from the data itself. Consider, for example, a case where data is kept in a database and can be displayed in multiple formats, as a table or a graph. The Observer pattern suggests that the display classes register themselves with the class responsible for maintaining the data, so they can be notified when the data changes, and so they can update their displays. The Java API uses this pattern in the event model of its AWT/Swing classes. It also provides direct support so this pattern can be implemented for other purposes. The Java API provides an Observable class that can be subclassed by objects that want to be observed. Among the methods Observable provides are: addObserver(Observer o)is called by Observableobjects to register themselves. setChanged()marks the Observableobject as having changed. hasChanged()tests if the Observableobject has changed. notifyObservers()notifies all observers if the Observableobject has changed, according to hasChanged(). To go along with this, an Observer interface is provided, containing a single method that is called by an Observable object when it changes (providing the Observer has registered itself with the Observable class, of course): public void update(Observable o, Object arg) The following example demonstrates how an Observer pattern can be used to notify a display class for a sensor such as temperature has detected a change: import java.util.*; class Sensor extends Observable { private int temp = 68; void takeReading() { double d; d =Math.random(); if(d>0.75) { temp++; setChanged(); } else if (d<0.25) { temp--; setChanged(); } System.out.print("[Temp: " + temp + "]"); } public int getReading() { return temp; } } public class Display implements Observer { public void update(Observable o, Object arg) { System.out.print("New Temp: " + ((Sensor) o).getReading()); } public static void main(String []ac) { Sensor sensor = new Sensor(); Display display = new Display(); // register observer with observable class sensor.addObserver(display); // Simulate measuring temp over time for(int i=0; i < 20; i++) { sensor.takeReading(); sensor.notifyObservers(); System.out.println(); } } } The Strategy and Template patterns Strategy and Template patterns are similar in that they allow different implementations for a fixed set of behaviors. Their intents are different, however. Strategy is used to allow different implementations of an algorithm, or operation, to be selected dynamically at run time. Typically, any common behavior is implemented in an abstract class and concrete subclasses provide the behavior that differs. The client is generally aware of the different strategies that are available and can choose between them. For example, an abstract class, Sensor, could define taking measurements and concrete subclasses would be required to implement different techniques: one might provide a running average, another might provide an instantaneous measurement, and yet another might hold a peak (or low) value for some period of time. Figure 7. Sensor abstract class The intention of the Template pattern is not to allow behavior to be implemented in different ways, as in Strategy, but rather to ensure that certain behaviors are implemented. In other words, where the focus of Strategy is to allow variety, the focus of Template is to enforce consistency. The Template pattern is implemented as an abstract class and it is often used to provide a blueprint or an outline for concrete subclasses. Sometimes this is used to implement hooks in a system, such as an application framework. Concurrency patterns Concurrency patterns overview Concurrency patterns prescribe the way access to shared resources is coordinated or sequenced. By far the most common concurrency pattern is Single Thread Execution, where it must be ensured that only one thread has access to a section of code at a time. This section of code is called a critical section, and typically it is a section of code that either obtains access to a resource that must be shared, such as opening a port, or is a sequence of operations that should be atomic, such as obtaining a value, performing calculations, and then updating the value. The Single Thread Execution pattern The Singleton pattern we discussed earlier contains two good examples of the Single Thread Execution pattern. The problem motivating this pattern first arises because this example uses lazy instantiation -- delaying instantiating until necessary -- thereby creating the possibility that two different threads may call getInstance() at the same time: public static synchronized Sequence getInstance() { if(instance==null) // Lazy instantiation { instance = new Sequence(); } return instance; } If this method were not protected against simultaneous access with synchronized, each thread might enter the method, test and find that the static instance reference is null, and each might try to create a new instance. The last thread to finish wins, overwriting the first thread's reference. In this particular example, that might not be so bad -- it only creates an orphaned object that garbage collector will eventually clean up -- but had there been a shared resource that enforced single access, such as opening a port or opening a log file for read/write access, the second thread's attempt to create an instance would have failed because the first thread would have already obtained exclusive access to the shared resource. Another critical section of code in the Singleton example is the getNext() method: public static synchronized int getNext() { return ++counter; } If this is not protected with synchronized, two threads calling it at the same time might obtain the same current value and not the unique values this class is intended to provide. If this were being used to obtain primary keys for a database insert, the second attempt to insert with same primary key would fail. As we discussed earlier, you should always consider the cost of using a pattern. Using synchronized works by locking the section of code when it is entered by one thread and blocking any other threads until the first thread is finished. If this is code used frequently by many threads, this could cause a serious degradation in performance. Another danger is that two threads could become deadlocked if one thread is blocked at one critical section waiting for the second, while the second thread is blocked at another critical section, waiting for the first. Wrapup Summary. Downloadable resources Related topics - Books - object primer: Using object-oriented techniques to develop software" by Scott W. Ambler - There are also several articles on the Java language, patterns, and UML: -.
https://www.ibm.com/developerworks/java/tutorials/j-patterns/j-patterns.html
CC-MAIN-2020-24
en
refinedweb
Was here last November (after reading reviews) and we are counting the days to when we return (August). By chance we got a table at the bar and that really added to our experience. Our "chef" was very helpfull and we gave him full licence to prepare what he wanted for us. Everything about this place is wonderful, the understated... more
http://www.tripadvisor.com/Restaurant_Review-g60763-d425325-Reviews-Sushi_Yasuda-New_York_City_New_York.html
crawl-002
en
refinedweb
Asynchronous RPC Contents AsyncRPC is a non-blocking and not a wholly asynchronous RPC library. It provides abstractions over the Sockets layer for non-blocking transmission of RPC calls and reception of RPC replies. It provides notification of RPC replies through callbacks. Callbacks can be registered at the time the RPC call is transmitted through an interface function along with some private data that could be required during the callback. It is used as the RPC library for libnfsclient, a userland NFS client operations library, which in turn is used by a tool called nfsreplay. The NFS benchmarking project page is here: NFSBenchmarking I can be reached at <shehjart AT gelato DOT NO SPAM unsw DOT edu GREEBLIES DOT au> News April 5, 2007, nfsreplay svn is up March 31, 2007 Async RPC is still pre-alpha. Use with caution. Main features Non-blocking, the socket reads and writes are non-blocking and managed by the library. In case of writes, if the socket blocks the data is copied into internal buffers for trying again later. Asynchronous, the responses are notified via callbacks registered at the time of making the RPC calls. These callbacks are not true asynchronous mechanisms as they do not rely on signals or other asynchronous notifications mechanisms. In the worst case scenario, a completion function needs to be called to explicitly process pending replies and the associated callbacks. Interface The interface is very similar to the RPC library in the glibc, with the addition of callbacks and non-blocking socket IO. Creating Client Handle #include <clnt_tcp_nb.h> CLIENT *clnttcp_nb_create(struct sockaddr_in *raddr, u_long prog, u_long vers, int *sockp, u_int sbufsz, u_int rbufsz); CLIENT *clnttcp_b_create(struct sockaddr_in *raddr, u_long prog, u_long vers, int *sockp, u_int sbufsz, u_int rbufsz); Use clnttcp_nb_create to initiate a connection to a remote server using a non-blocking socket. The 'clnt_b_create does the same using a blocking socket. The parameters are: raddr - Socket which provides the server's IP and optionally, a port to connect to. The port number is optional and if 0, is acquired from the portmapper service using the prog and vers parameters. prog - The number identifying the RPC program. vers - The version of the RPC program. sockp - If the caller already has a usable socket descriptor, pass it as this argument. A new socket descriptor is created if the value of *sockp is RPC_ANYSOCK. sbufsz - The size of the buffer which is sent to the write syscall. Uses default value of ASYNC_READ_BUF if 0. rbufsz - The size of the buffer which is given to the read syscall. Uses default value of ASYNC_READ_BUF if 0. The function returns a handle which is used to identify this particular connection. User callbacks User callbacks are of the type #include <clnt_tcp_nb.h> typedef void (*user_cb)(void *msg_buf, int bufsz, void *priv); msg_buf - Pointer to the msg buffer. bufsz - Size in bytes of the message in msg_buf. priv - Pointer to the private data registered with clnttcp_nb_call. Calling Remote Procedures #include <clnt_tcp_nb.h> extern enum clnt_stat clnttcp_nb_call(CLIENT *handle, u_long proc, xdrproc_t inproc, caddr_t inargs, user_cb callback, void * usercb_priv); clnttcp_nb_call is the function used call remote procedures asynchronously. handle - The pointer to the handle returned by clnttcp_nb_create. proc - The RPC procedure number. inproc - Function that is used to translate user message into XDR format. inargs - Pointer to user message. callback - The callback function. Its called when the reply is received for this RPC message. usercb_priv - The pointer to the private data that will be passed as the third argument to the function pointed to by callback. On a successful transmission of the call, the return value is RPC_SUCCESS. This applies only to the send function. RPC_SUCCESS is returned even in cases, when the message is copied into internal buffers for later transmission. This would happen in case the write syscall returns EAGAIN to notify that the call will block. clnttcp_nb_call transparently handles blocking and non-blocking sockets so there is no need to maintain additional state after the client handle has been created. Executing callbacks #include <clnt_tcp_nb.h> int clnttcp_nb_receive(CLIENT * handle, int flag); handle - The pointer to the handle returned by clnttcp_nb_create. flag - This argument takes the following values: RPC_NONBLOCK_WAIT - If the user application requires that the read from socket for this invocation of clnttcp_nb_receive be non-blocking. RPC_BLOCKING_WAIT - If the user application requires that the read from socket for this invocation of clnttcp_nb_receive be blocking till atleast one RPC response was received by the library, i.e. atleast a one callback was executed by the library internally. The flag argument determines socket read behaviour in tandem with the original socket creation type. The following table shows the resulting combinations: The idea above is to show that using the flag as RPC_BLOCKING_WAIT even a non-blocking socket can block-wait for a response if necessary. The function returns the count of callbacks that were executed for the socket buffers that were processed. This value can also be taken to be the count of replies received and processed in this call. Closing a connection #include <clnt_tcp_nb.h> void clnttcp_nb_destroy (CLIENT *h); Simply call clnttcp_nb_destroy to the state related to this connection. h - The pointer to the handle returned by clnttcp_nb_create. Retreiving amount of data transferred #include <clnt_tcp_nb.h> unsigned long clnttcp_datatx (CLIENT *h); Returns the count of bytes transferred over this CLIENT handle. Internals Some aspects that need focus are presented here. Plugability into glibc Since glibc's RPC implementation has some degree of extensibility I've been able to use quite a bit of underlying infrastructure. It allows for new pluggable transport protocol handlers, pluggable XDR translation libraries and pluggable functions that actually do the reading and writing from sockets. Some aspects of glibc RPC code structure are shown in the two pages here, which are basically pictures of diagrams I drew on a whiteboard to understand it myself. See glibcRPCDesign. Record Stream Management The RPC Record Marking Standard is used for serializing RPC messages over byte-stream transports like TCP. Since we have two different paths for sending(using clnttcp_nb_call) and receiving RPC messages(through callbacks), the XDR translation takes place differently in both cases. Transmission case: While sending the Async RPC code uses glibc's XDRREC translation routines which are used for XDR translation for record streams like TCP. XDRREC in turn provides pluggable functions which are used for writing the translated messages to socket descriptors. The Async RPC library defines a custom function that is plugged into to the XDRREC routines. This function, writetcp_nb handles non-blocking writes to socket descriptors. This function is the last one to be called by the XDRREC routines which means the buffers passed to it contain RPC messages already in XDR format. If the write to socket blocks, it copies the message into internal buffers for later transmission. The buffers are stored in the client handles. The code is well commented. Reception case: Message reception takes place either during the calls itself, in case the socket used is blocking or by using clnttcp_nb_receive function. Mainly, the task involves defragmenting(..RPC terminology..) and desegmenting(..TCP terminology..) bytes read from the socket and collating them into single RPC records. Each RPC Record contains one RPC message. The user callbacks are called only when enough bytes have been read to complete a full RPC record. See RFC 1831 Section on Record Marking Standard for more info. The bytes read from the sockets are in XDR format. The RPC message headers are un-XDRed using the XDRMEM routines which allow translation to and from buffers in memory. This is different from the XDRREC module which sends message translations to socket descriptors from in-memory buffers and the XDR data read from socket descriptors to un-XDRed memory buffers. Callbacks Callbacks are called only with complete RPC messages. The buffers passed to the callbacks are in XDR format and need to be translated before being useful. Use the glibc XDRMEM routines to do this. For examples of use with NFS messages, see the XDR translation routines in libnfsclient callbacks. The source for libnfsclient is packaged as part of nfsreplay. Callbacks are saved internally in a hashtable by using the RPC XID as the key. As each call produces a unique XID, each message needs a callback to be registered while sending that message. Registering a callback is optional and the library discards a reply which does not have a registered callback. Callbacks might need some state information while processing each reply. This state can be provided as the user_cb_priv argument to clnttcp_nb_call. This reference is passed to the callback eventually as the priv argument of the callback functions, which are of the type user_cb. This approach provides for per-XID callback and private info, i.e. the callback and the private data passed to it can be different for each request. The message buffers passed to the callbacks are freed after the callbacks return. Copy it, if persistence is needed. Response Notification Response notification happens through callbacks. Within the library, response processing is attempted right after the RPC call is made by clnttcp_nb_call. The attempt to read a response blocks if the socket was created using clnttcp_b_create. If the read results in a full RPC record being read, the callback is executed and clnttcp_nb_call returns. In case the socket was non-blocking and the read returns EAGAIN, clnttcp_nb_call returns without calling any callbacks. In such cases, the user application might need to explicitly initiate the callbacks using a completion function. The clnttcp_nb_receive function is used for this purpose. Again, clnttcp_nb_receive allows the user application to explicitly specify whether this invocation of clnttcp_nb_receive should block. In case there are buffers that can read without blocking, they are read in. The callbacks are called only if these buffers collate to form a complete RPC record. See the description of clnttcp_nb_receive to understand under what conditions it will block till atleast one callback is executed. Code libnfsclient and AsyncRPC are part of the nfsreplay source package. See nfsreplay page for instructions on checking out these two components. Usage Usage and building with the library involves including the header file and building the library user's C files with the clnt_tcp_nb.c source file. The header clnt_tcp_nb.h is needed by all the files that use the interface functions above. Support Use nfsreplay lists for support and discussion.
http://www.gelato.unsw.edu.au/IA64wiki/AsyncRPC
crawl-002
en
refinedweb
When Python runs a script and an uncatched exception is raised, a traceback is printed and the script is terminated. Python2.1 has introduced sys.excepthook, which can be used to override the handling of uncaught exceptions. This allows to automatically start the debugger on an unexpected exception, even if python is not running in interactive mode. Discussion The above code should be included in 'sitecustomize.py', which is automatically imported by python. The debugger is only started when python is run in non-interactive mode. If you have not yet a 'sizecustomize.py' file, create one and place it somewhere on your pythonpath. Gui debugger? This is a great idea; I'd love to use it under the PythonWin gui debugger. Any way to make that happen? I'm using Python as the Active Script language for IIS/ASP, and would love to throw the errors into the PythonWin debugger. Thanks! using pywin debugger. instead of: import pdb pdb.pm() use: import pywin.debugger pywin.debugger.pm() Some small improvements. It is nice to check if the exception is a syntax error because SyntaxError's can't be debugged. Also nice is just assigning the debug exception hook when the program is run is debug mode. Should check if stdin.isatty also. If stdin isn't a tty (e.g. a pipe) and this gets called the pdb code will just start printing it's prompt continually instead of giving a useful traceback. Changing the test to "not (sys.stderr.isatty() and sys.stdin.isatty())" fixes it. -Adam Addidtional check: sys.stdout.isatty(). If the standart output is redirected to a file, then you might not want to start the debugger, because you do not see the debugger's output. Under consideration of the previous comments the condition has to be changed to: if hasattr(sys, 'ps1') or not sys.stderr.isatty() or not sys.stdin.isatty() or not sys.stdout.isatty() or type==SyntaxError: Note, sometimes sys.excepthook gets redefined by other modules, for instance if you are using IPython embedded shell. In this case, insert following after all the imports in your script from sitecustomize import info sys.excepthook=info
http://code.activestate.com/recipes/65287/
crawl-002
en
refinedweb
Aspen comes bundled with a handler called simplates. In basic terms, a simplate is a single-file web template with an initial pure-Python section that populates the context for the template. Simplates are a way to keep logic and presentation as close together as possible without actually mixing them. In more detail, a simplate is a template with two optional Python components at the head of the file, delimited by ASCII form feeds (this character is also called a page break, FF, <ctrl>-L, 0xc, 12). If there are two initial Python sections, then the first is exec'd when the simplate is first loaded, and the namespace it populates is saved for all subsequent invocations of this simplate. This is the place to do imports and set constants; it is referred to as the simplate's import section (be sure the objects defined here are thread-safe). The second Python section, or the first if there is only one, is exec'd within the simplate namespace each time the simplate is invoked; it is called the run-time Python section. The third section is parsed according to one of the various web templating languages. The namespace for the template section is a copy of the import section's namespace, further modified by the run-time Python section. If a simplate has no Python sections at all, then the template section is rendered with an empty context. SyntaxError is raised when parsing a simplate that has more than two form feeds. In debugging and development modes, simplates are loaded for each invocation of the resource. In staging and production modes, simplates are loaded and cached until the filesystem modification time of the underlying file changes. If parsing the file into a simplate raises an Exception, then that is cached as well, and will be raised on further calls until the entry expires as usual. Simplates obey an encoding key in a [simplates] section of aspen.conf: this is the character encoding used when reading simplates off the filesystem, and it defaults to 'UTF-8'. For all simplates, the full filesystem path of the simplate is placed in its namespace as __file__ before the import section is executed. NB: Simplates are never used in the abstract. Rather, one always uses a particular flavor of simplate that obeys the above general rules but which provides slightly different semantics corresponding to the web framework upon which each flavor is based. The Aspen distribution currently bundles two flavors of simplate: Django-flavored and stdlib-flavored. The WSGI callables for each are defined in the aspen.handlers.simplates module: environ['PATH_TRANSLATED']as a Django-flavored simplate. environ['PATH_TRANSLATED']as a stdlib-flavored simplate. In addition to the aspen.apps.django_ app, which serves Django in usual monolithic fashion, we also provide a handler that integrates the Django web framework with the simplate pattern. As mentioned, this callable is available as django in the aspen.handlers.simplates module. To use Django simplates, first install the Django framework in your site: [django]section to aspen.conf, with a settings_module key that points to your settings module. Then tell Aspen to use the django simplate handler for various files via the __/etc/handlers.conf file. For example, the following handlers.conf would serve files ending in .html as Django simplates, and would serve all other resources statically: fnmatch aspen.rules:fnmatch catch_all aspen.rules:catch_all [aspen.handlers.simplates:django] fnmatch *.html [aspen.handlers.static:wsgi] catch_all Lastly, close the loop by telling Django about simplates via the urls.py file in your Django project package, like so: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'^', include('aspen.handlers.simplates.django_')) ) Admittedly, that is a fair amount of wiring. The main benefits to using Django via Aspen simplates are first, that your view code and template code are together in the same file (without being mixed); and second, that you get filesystem- rather than regex-based URL layouts. Django-flavored simplates have these distinctives: Aspen includes a simplate flavor that has no dependencies outside the standard library, effectively giving you a raw WSGI interface. The handler for this is named stdlib and is defined in the aspen.handlers.simplates module. Here are its distinctives: Aspen is copyright © 2006-2007 by Chad Whitacre and contributors, and is offered under the MIT license.Aspen is copyright © 2006-2007 by Chad Whitacre and contributors, and is offered under the MIT license.
http://www.zetadev.com/software/aspen/0.8/doc/html/api-handlers-simplates.html
crawl-002
en
refinedweb
Python's lack of a 'switch' statement has garnered much discussion and even a PEP. The most popular substitute uses dictionaries to map cases to functions, which requires lots of defs or lambdas. While the approach shown here may be O(n) for cases, it aims to duplicate C's original 'switch' functionality and structure with reasonable accuracy. Discussion While the use of 'for ... in' may not be semantically accurate here (it will only ever execute the suite once), the fact that we are trying multiple cases (even if they are all within the same iteration) at least gives it the illusion of making some sense. For better or worse, changing the switched variable within the suite will have no effect on the proceeding cases. This is a diversion from C functionality. As noted above, case() (with no arguments) is used in place of 'default'. Another option would be to just omit the last conditional or use 'if True:'. To make that part look even more like the classic idiom, you could: --> change "yield self.match" in __iter__ to "yield self.match, True" --> change "for case in" to "for case, default in" --> change "if case():" to "if default:" ...but, in my opinion, this slight adjustment is just a trade-off of one area of readability for another. As far as I can tell, people generally use switch for its conciseness and readability, and not for its better lookup time, if that is indeed true. Enjoy! Alan Haffner's dictionary-based recipe: Runsun Pan's dictionary-based recipe: PEP 275 - Switching on Multiple Values: A very inventive approach!!! This is what I'd call an inspired solution. Of all switch-case substitutes, I probably like this one most. Just one marginal note: the "raise StopIteration" at the end of "__iter__" is probably superfluous. StopIteration is raised as soon as a generatator returns normally. Perhaps you'd like to see my "Exception-based Switch-Case" recipe at and give me some comment on it? Best Regards Zoran re: Readable switch construction... Well-done and readable solution. I think we're getting close to being able to implement Duff's Device () in Python. ;-) Most Elegant. In python, I really haven't missed switch(), but I am really impressed with the simplicity and beauty of this implementation. Very nice. Nice solution. Great solution. I liked it for its simplicity and elegance. The need for explicit "pass" is in line with all things Pythonic. Multiple cases ? I agree with the previous comments, it's nice, concise and elegant. Have you considered an extension to multiple cases ? I mean something like You would just have to change a line in the match method to Multiple cases ? Ooh, nice. I didn't consider it, as I was aiming to duplicate the actual functionality of 'switch', but this possibility definitely enhances it, in my opinion. I'm going to mention your suggestion in the recipe, if you don't mind. I hate to knock such a beautiful effort... ... but I think I have to disagree with the consensus here: I think this is a poor idea. Brian explains: But I disagree. Switch IS O(1) in number of branches, and that is an extremely important feature of it. Dictionary dispatch is the correct solution in Python when "a switch statement" is needed because it is O(1), and IMHO when people say "a switch statement" they are implying O(1). If readability is the only criterion, then I personally believe that you can't beat this: So, while I admire the cleverness that went into creating a syntax that closely mimics that of C and its offspring, I would advise AGAINST ever using this recipe. It's not that the recipe is poor, just that what it attempts to do (imitate C syntax) is not, IMHO, a wise idea in Python. It's not about Python, it's about a class of "switch" You could extend this thing a long way and it will become more internally consistent/elegant, but I don't think it make things easier for a Python programmer. It is good for someone who needs to program a switch in some customized language running under Python. For example, case could be smart enough to know when it tests true, so it raises an error when the programmer fails to choose a course of action in the code block and falls onto the next case test. That would be useful for some mini-language. Speaking as someone who often forgot the 'break' statement when first doing C But as a Python programmer, why would I write code that requires I remember to hardcode a 'break' or other action within each code block for any reason other than imitating C programming? This class could be expanded to be a smart tool for novice programmers, but as a Python programmer I would just leverage existing constructs, such as dictionaries. Re: I hate to knock such a beautiful effort... Hold on now... you recommend against using this just because it's O(n), but then recommend using if, elif, else? The output of the above is: So how is this not O(n) for cases? Or does this only happen if __eq__ is overridden? Re: It's not about Python, it's about a class of "switch" If you don't want to manually break, just change subsequent 'if' to 'elif'. It still saves you from typing a similar conditional over and over. My Response. No, I recomend against using it because it is O(n) and if you want an O(n) solution, then you should use if-elif-else. In C (and its decendents) one uses switch() despite its ugliness because it is O(1), but this recipe lacks that advantage. switch is not O(1) in C. I remember reading it somewhere that the switch..case in C is only a syntactic sugar for if.. else if and not a O(1). What most of the modern C compilers do when you profile the code and reuse the profile information is that they put the most likely condition to succeed at the top of the if..else if.. structure so that on the average you get an O(1) performance. switch O(?). Okay, after doing some reading in this direction, I've concluded that C's switch can be O(1), but it is not guaranteed. In Python, it might be important to also consider the overhead in each approach, but I'm no expert in this area. Oops, that second link should be: Is there really a point in this? I don't think that the cookbook is reserved for speedy recipes only. For example, the recipe "Swapping Values Without Using a Temporary Variable" involves packing and unpacking a tuple and probably runs slower then code that does use a temporary variable. Does this disqualify the recipe? No. As I understand the cookbook, it is about how things can be done with python. A good recipe will point out the pro's and con's for/against using it, and it's up to the cook what he/she will do about it. You want a speedy dish? Use dictionaries. You want a dish that looks like C but speed is less important to you? Use the recipe here. At the end, a cookbook conveys freedom of choice, and more recipes mean more freedom of choice. Another implementation. I think the idea is really interesting, interesting enough to propose another implementation. This one feels more natural to me (and is also much shorter). What do you think? It has exactly the same behavior, but doesn't special case the call without args (just commented out here). The reasons: (1) It has been the only case that returns True, but does not start falling through further calls. (2) Anyway in C or Java there is no direct equivalent for it. Case must always have an argument. The default case has its own keywork ("default"). (3) You did already mention it: Its not necessary. Just omit the last conditional and write the default code near the end of the for loop. A conditional 'if True:' near the end, or an 'else:' clause for the for-loop have the same effect. I just timed both alternatives, and, despite your presumptions, it turns out that tuple unpacking is faster than using a temporary variable. In what python version? On what platform? ... Unpacking and re-packing used to be the slower variants in early versions of python, but even back then a, b = c, d was attractive for its conciseness, if not for the speed. Anyway, timing is not the point here. Or would you change your code with every new version of python just because some constructs are now more optimized then the others used sofar? I wonder how some people seem to be obsessed with speed at every bit, but then choose python as the implementation language. But I do admire the time and energy spent to measure even such negligible issues like the above. Yet I couln't withstand the temptation and timed it myself. It turns out that a, b = b, a is about 7% slower on Python 2.2 and WinXP and about 11% slower on Python 2.4 and Win2000. My presumptions seem to still hold. Seeing a switch: pythonic encapsulating and "brainy" chunking: [How pythonic is it?].... Encapsulating is highly pythonic. It gives us the local namespaces of function, class, and module, as well as more subtle **kwargs and *args groupings of parameters. In terms of the instance at hand, here we see encapsulating both in the switch class's function suite, and in the *args addition Pierre Quentel suggested for groupings of workalike parameters. [don cogsci hat].... In cognitive science terms, this encapsulating of a [supposedly unique-]choice function is akin to "chunking". Chunking is a well-proven psychological "trick": Chunking several items into one allows more stuff to be fitted into a limited-capacity memory or buffer (cf. fixed-length namespace). But here, instead of grouping items the abstraction is a grouping of tests. This structure of sequenced test criteria thus implements choice_function-chunking. [switch back ;)] ... Is this a case of 'Python fits the brain'? Different results. Maybe I do it wrong, but on my FreeBSD 5.4 with Python 2.4.1 this code: The Beck switch is ingenious. I don't use switch statements. In C I index into arrays of functions. Swapping that array with another can change the code's personality. In python I use dictionaries of functions.
http://code.activestate.com/recipes/410692/
crawl-002
en
refinedweb
repoze.profile 0.6 Aggregate profiling for WSGI requests This package provides a WSGI middleware component which aggregates profiling data across all requests to the WSGI application. It provides a web GUI for viewing profiling data. Installation Install using setuptools, e.g. (within a virtualenv): $ easy_install repoze.profile Configuration via Python Wire up the middleware in your application: from repoze.profile.profiler import AccumulatingProfileMiddleware middleware = AccumulatingProfileMiddleware( app, log_filename='/foo/bar.log', discard_first_request=True, flush_at_shutdown=True, path='/__profile__' ) The configuration options are as follows: - ``log_filename`` is the name of the file to which the accumulated profiler statistics are logged. - persist across application restarts. - ``path`` is the URL path to the profiler UI. It defaults to ``/__profile__``. Once you have some profiling data, you can visit path in your browser to see a user interface displaying profiling statistics (e.g.). Configuration via Paste Wire the middleware into a pipeline in your Paste configuration, for example: [filter:profile] use = egg:repoze.profile#profile log_filename = myapp.profile discard_first_request = true path = /__profile__ flush_at_shutdown = true ... [pipeline:main] pipeline = egg:Paste#cgitb egg:Paste#httpexceptions profile myapp Once you have some profiling data, you can visit path in your browser to see a user interface displaying profiling statistics. Reporting Bugs / Development Versions Visit to report bugs. Visit to download development or tagged versions. repoze.profile 0.6 (2008-08-21) - discard_first_request = false did not work. - Clearing the profile data from the user interface did not properly discard profiler state. repoze.profile 0.5 (2008-06-11) - Initial PyPI release. repoze.profile. repoze.profile 0.3 (2008-02-20) - Added compatibility with Python 2.5. - Made setup.py epend explicitly on ElementTree 1.2.6: meld needs it but meld isn't a setuptools package. repoze.profile 0.2 (2008-02-20) - Added a browser UI. - Added a knob to control discard at shutdown. repoze.profile 0.1 (2008-02-08) - Initial release. - Author: Agendaless Consulting <repoze-dev at lists repoze org> - Keywords: web application server wsgi zope - License: BSD-derived () - Categories - Package Index Owner: chrism, hannosch - DOAP record: repoze.profile-0.6.xml
http://pypi.python.org/pypi/repoze.profile/0.6
crawl-002
en
refinedweb
I’m still trying to figure out imports in 1.3. I’m having trouble accessing my Meteor.methods after removing insecure. File structure: App client/ main.js server/ main.js imports/ api/ methods/ methods.js ui/ myview.jsx My Meteor.call() in myview.jsx does not seem to work and I get an access denied. I tried importing the methods.js into server/main.js with: import '../imports/api/methods/methods.js'; Shouldn’t this register the Meteor.method with the server? I was trying to use the demo todo’s but the removing insecure page does not show how it’s imported (or I’m missing it).Do I need to export from methods.js or does importing the file run and register Meteor.methods()?
https://forums.meteor.com/t/how-to-import-meteor-method-file/21427
CC-MAIN-2022-33
en
refinedweb
Creating a Test Database: PyTest + SQLAlchemy Disclaimer: I’m a total newb with Python and the ecosystem that surrounds it. I come from Rails and apparently I’ve been completely spoiled by how easy it was to use. Dear reader, I’m writing to declare victory. I’ve successfully navigated the gauntlet of setting up a test database for my app that consists of FastAPI, SQLAlchemy, Alembic, and PyTest with PostgreSQL as the database. In my whole career, I’ve never actually had to do this seemingly boilerplate task. Since documentation was scarce, I’m leaving this breadcrumb in the hopes that it will benefit those that come after me. Step By Step Step 1: Forget about Alembic. Yes, you used it to get your database schema to this point, but you won’t need it for this exercise. Save yourself the time. Don’t bark up that tree. Step 2: Drop and re-create the test database from previous runs. This goes in conftest.py , which gets loaded at the beginning of a test run. You’ll see some references to settings.db_url . We’ll cover that part in a moment. The main idea here is that 1) we set up DB_NAME to be something other than our normal development database, and 2) we call db_prep, which drops (if it existed previously) and re-creates the test database and adds our apps user to have access to the test database. It’s just executing the same SQL that you would execute if you were doing this by hand. We’ll cover the fixture at the bottom of this file that invokes prep_db in just a minute. # test # |_ conftest.pyimport os from dotenv import load_dotenv from sqlalchemy import create_engine, MetaData from sqlalchemy.exc import ProgrammingError, OperationalError import pytest from app import settingsload_dotenv() settings.init()DB_NAME = f”{os.getenv(‘DB_NAME’)}_test” settings.db_url = f”postgresql://{os.getenv(‘DB_USER’)}:{os.getenv(‘DB_PASSWORD’)}@{os.getenv(‘DB_HOST’, ‘localhost’)}/{DB_NAME}”def db_prep(): print(“dropping the old test db…”) engine = create_engine(“postgresql://postgres@localhost/postgres”) conn = engine.connect() try: conn = conn.execution_options(autocommit=False) conn.execute(“ROLLBACK”) conn.execute(f”DROP DATABASE {DB_NAME}”) except ProgrammingError: print(“Could not drop the database, probably does not exist.”) conn.execute(“ROLLBACK”) except OperationalError: print(“Could not drop database because it’s being accessed by other users (psql prompt open?)”) conn.execute(“ROLLBACK”) print(f”test db dropped! about to create {DB_NAME}”) conn.execute(f”CREATE DATABASE {DB_NAME}”) try: conn.execute(f”create user {os.getenv(‘DB_USER’)} with encrypted password ‘{os.getenv(‘DB_PASSWORD’)}’”) except: print(“User already exists.”) conn.execute(f”grant all privileges on database {DB_NAME} to {os.getenv(‘DB_USER’)}”) conn.close() print(“test db created”)() Step 3: I created a settings file that helps me manage global variables. In this case, we’re using it to make our connection string to the database global in scope. You’ll see why in step 4. # app # |_ settings.pydef init(): global db_url db_url = None Step 4: Use the database connection string from settings if it exists otherwise initialize it. This allows us to initialize the database name to our test database if we’re entering the app through tests. Otherwise we initialize it in main to our dev/prod database name. NOTE: We want to make sure we import anything dependent on the database after we set the global db_url setting. # app # |_ main.pyimport os import traceback from typing import Optional from dotenv import load_dotenv from fastapi import Depends, FastAPI, Header, HTTPException, Request from fastapi.responses import PlainTextResponse from sqlalchemy import text from sqlalchemy.orm import Session import requests from . import settingsload_dotenv()try: settings.db_url except AttributeError: settings.init() settings.db_url = f"postgresql://{os.getenv('DB_USER')}:{os.getenv('DB_PASSWORD')}@{os.getenv('DB_HOST', 'localhost')}/{os.getenv('DB_NAME')}"from .database import SessionLocal, engine from . import models, schemasapp = FastAPI()def get_db(): db = SessionLocal() try: yield db finally: db.close() And in our database set-up, we only reference the setting. # app # |_ database.pyimport os from dotenv import load_dotenv from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker from . import settingsengine = create_engine(settings.db_url)SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)Base = declarative_base() Step 5: OK, let’s take a look back at the fixture from conftest.py : () Here’s what’s going on, line-by-line: 1. The fixture is declared with session scope. 2. We call the afore-referenced db_prep to drop the test test database if it existed before and re-create it. 3. We create our SQLAlchemy engine using settings.db_url, which was set up to point at our test database. 4. We import any models that define our schema. In this case it’s just the User model. It’s important that we do this at this point. (I’m assuming at this point that you’ve created SQLAlchemy models and such.) 5. We import other SQLAlchemy dependencies for database access during tests. 6. We call Base.metatdata.create_all and pass it the engine we instantiated earlier that points to our test database. This does the work of inspecting any models that have been imported and creating the schema that represents them in our test database. 7. We yield the database session we’ve created to allow our tests to use it. (See below.) Step 6: Use the test database in your tests: import pytest from app.models import User from app.schemas import UserCreateclass TestModels: def test_user(self, fake_db): user = UserCreate(external_id="890", external_id_source="456") u = User.create(db=fake_db, user=user) assert u.id > 0 The fake_db argument in our test is a reference to the fixture we created above. My model has a create method that takes the database connection as a dependency to be injected. This may be refactored shortly, but it works for the moment. That’s All Folks I’m 100% positive that somebody else has a better way of doing this or will see a better way to do this based on what I’ve shared. At least I hope that’s the case. If you’re that somebody and happen to come across this, give me a holler and show me (and everyone else) the way. Like I said at the outset, I’m no expert with Python yet. I learned a lot in this process and I’m always open to learning from those that have gone before me.
https://johncox-38620.medium.com/creating-a-test-database-pytest-sqlalchemy-97356f2f02d2?source=user_profile---------2----------------------------
CC-MAIN-2022-33
en
refinedweb
Townsend4,720 Points Function python Ok, what am I not getting here!? def is_odd(division): return 4 / 2 == 0 1 Answer Steven Parker216,688 Points Here's a few hints: - the function needs to test the argument (division) but it currently tests fixed values - the remainder operator might be more useful than the division operator for this - remember to test in such a way that the value will be Truewhen the argument is not divisible by 2
https://teamtreehouse.com/community/function-python
CC-MAIN-2022-33
en
refinedweb
View source for Skillshare intro to Python/Unit 2 You do not have permission to edit this page, for the following reasons: - You do not have permission to edit pages in the Page namespace. - You must confirm your email address before editing pages. Please set and validate your email address through your user preferences. You can view and copy the source of this page. Return to Skillshare intro to Python/Unit 2.
https://wiki.openhatch.org/w/index.php?title=Skillshare_intro_to_Python/Unit_2&action=edit
CC-MAIN-2022-33
en
refinedweb
Vultr Kubernetes Engine (VKE) is a fully managed Kubernetes product with predictable pricing. When you deploy VKE, you'll get a managed Kubernetes control plane that includes our Cloud Controller Manager (CCM) and the Container Storage Interface (CSI). In addition, you can configure block storage and load balancers or install add-ons such as Vultr's ExternalDNS and Cert Manager. We've made Kubernetes hosting easy, so you can focus on scaling your application. This quickstart guide explains how to deploy a VKE cluster and assumes you have experience using Kubernetes. If you have comments about this guide, please use the Suggest an Update button at the bottom of the page. Please see our changelog for information about supported versions of Kubernetes. You can deploy a new VKE cluster in a few clicks. Here's how to get started. Create a Node Pool. About Node Pools When creating a VKE cluster, you can assign one or more Node Pools with multiple nodes per pool. For each Node Pool, you'll need to make a few selections. The monthly rate for the node pool is calculated as you make your selections. If you want to deploy more than one, click Add Another Node Pool. When ready, click Deploy Now. Kubernetes requires some time to inventory and configure the nodes. When VKE completes the configuration the cluster status will report Running. To verify the status of your cluster, you can download your kubeconfigfile (as described in the next section) and run: kubectl --kubeconfig={PATH TO THE FILE} cluster-info After deploying your VKE cluster, you need to gather some information and manage it. Click the Manage button to the right of the desired cluster. On the Overview tab, you'll see your cluster's IP address and Endpoint information. Click the Download Configuration button in the upper-right to download your kubeconfig file, which has credentials and endpoint information to control your cluster. Use this file with kubectl as shown: kubectl --kubeconfig={PATH TO THE FILE} get nodes kubectl uses a configuration file, known as the kubeconfig, to access your Kubernetes cluster. A kubeconfig file has information about the cluster, such as users, namespaces, and authentication mechanisms. The kubectl command uses the kubeconfig to find a cluster and communicate with it. The default kubeconfig is ~/.kube/config unless you override that location on the command line or with an environment variable. The order of precedence is: --kubeconfigflag, kubectl loads only that file. You may use only one flag, and no merging occurs. $KUBECONFIGenvironment variable, it is parsed as a list of filesystem paths according to the normal path delimiting rules for your system. ~/.kube/configfile, and no merging occurs. Please see the Kubernetes documentation for more details. To manage Node Pools, click the Nodes tab on the Manage Cluster page. You have several controls available: Xicon to the right of the pool the destroy the pool. Important: You must use VKE Dashboard or the Kubernetes endpoints of the Vultr API to delete VKE worker nodes. If you delete a worker node from elsewhere in the customer portal or with Instance endpoints of the Vultr API, Vultr will redeploy the worker node to preserve the defined VKE Cluster node pool configuration. To manage the resources linked to VKE, such as Block Storage and Load Balancers, click the Linked Resources tab on the Manage Cluster page. When you deploy VKE, you automatically get several managed components. Although you don't need to deploy or configure them yourself, here are brief descriptions with links to more information. Vultr Cloud Controller Manager (CCM) is part of the managed control plane that connects Vultr features to your Kubernetes cluster. The CCM monitors the node's state, assigns their IP addresses, and automatically deploys managed Load Balancers as needed for your Kubernetes Load Balancer/Ingress services. Learn more about the CCM on GitHub. If your application persists data, you'll need storage. VKE's managed control plane automatically deploys Vultr Container Storage Interface (CSI) to connect your Kubernetes cluster with Vultr's high-speed block storage by default. Learn more about the CSI on GitHub. NOTE: You should use Block Storage volumes for persistent data. The local disk storage on worker nodes is transient and will be lost during Kubernetes upgrades. Vultr offers two block storage technologies: HDD and NVMe. HDD is an affordable option that uses traditional rotating hard drives and supports volumes larger than 10 TB. vultr-block-storage-hdd NVMe is a higher performance option for workloads that require rapid I/O. vultr-block-storage Use the /v2/regions API endpoint to discover which storage classes are available at your location. block_storage_storage_optindicates HDD storage is available. block_storage_high_perfindicates NVMe storage is available. Some locations support both storage classes. If NVMe block storage is available in a location, our CSI uses that class by default. To use block storage with VKE, deploy a Persistent Volume Claim (PVC). For example, to deploy a 10Gi block on your account for VKE with NMVe-backed storage, use a PersistentVolumeClaim template like this: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vultr-block-storage To attach this PVC to a Pod, define a volume node in your Pod template. Note the claimName below is csi-pvc, referencing the PersistentVolumeClaim in the example above. kind: Pod apiVersion: v1 metadata: name: readme-app spec: containers: - name: readme-app image: busybox volumeMounts: - mountPath: "/data" name: vultr-volume command: [ "sleep", "1000000" ] volumes: - name: vultr-volume persistentVolumeClaim: claimName: csi-pvc To learn more about Persistent Volumes, see the Kubernetes documentation. If you'd like to learn more about Vultr CSI, see our GitHub repository. Load Balancers in VKE offer all the same features and capabilities as standalone managed Load Balancers. To deploy a VKE load balancer for your application, add a LoadBalancer type to your service configuration file and use metadata annotations to tell the CCM how to configure VKE load balancer. VKE will deploy the Kubernetes service load balancer according to your service configuration and attach it to the cluster. Here's an example service configuration file that declares a load balancer for HTTP traffic on port 80. The app selector app-name matches an existing service set of pods on your cluster. apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http" name: vultr-lb-http spec: type: LoadBalancer selector: app: app-name ports: - port: 80 name: "http" Notice the annotations in the metadata section. Annotations are how you configure the load balancer, and you'll find the complete list of available annotations in our GitHub repository. Here is another load balancer example that listens on HTTP port 80, and HTTPS port 443. The SSL certificate is declared as a Kubernetes TLS secret named ssl-secret, which this example assumes was already deployed. See the TLS Secrets documentation to learn how to deploy a TLS secret. apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http" service.beta.kubernetes.io/vultr-loadbalancer-https-ports: "443" # You will need to have created a TLS Secret and pass in the name as the value service.beta.kubernetes.io/vultr-loadbalancer-ssl: "ssl-secret" name: vultr-lb-https spec: type: LoadBalancer selector: app: app-name ports: - port: 80 name: "http" - port: 443 name: "https" As you increase or decrease the number of cluster worker nodes, VKE manages their attachment to the load balancer. If you'd like to learn general information about Kubernetes load balancers, see the documentation at kubernetes.io. VKE Cert Manager adds certificates and certificate issuers as resource types in VKE and simplifies the process of obtaining, renewing, and using those certificates. Our Cert Manager documentation is on GitHub, and you can use Vultr's Helm chart to install Cert Manager. ExternalDNS makes Kubernetes resources discoverable via public DNS servers. For more information, see our tutorial to set up ExternalDNS with Vultr DNS. Vultr Kubernetes Engine is a fully-managed product offering with predictable pricing that makes Kubernetes easy to use. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS. Please see our changelog for information about supported versions of Kubernetes. Vultr Kubernetes Engine includes the managed control plane free of charge. You pay for the Worker Nodes, Load Balancers, and Block Storage resources you deploy. Worker nodes and Load Balancers run on Vultr cloud server instances of your choice with 2 GB of RAM or more. See our hourly rates. Yes, the minimum size for a Block Storage volume is 10GB. Kubernetes uses Vultr cloud servers. It does not support Bare Metal servers. No, VKE does not come with an ingress controller preconfigured. Vultr Load Balancers will work with any ingress controller you deploy. Popular ingress controllers include Nginx, HAProxy, and Traefik.
https://www.vultr.com/docs/vultr-kubernetes-engine/
CC-MAIN-2022-33
en
refinedweb
To convert a Microsoft Acces table ( .accdb) to a CSV file in Python, use the following four steps: - Establish database connection, - Run the SQL query to select data, - Store the data in the CSV using basic Python file I/O and the csvmodule, and - Close the database connection. Here’s a specific example with additional annotations and simplified for clarity: import pyodbc import csv # 1. Example establish database connection connection = pyodbc.connect("your connection string") # 2. Run SQL query cursor = connection.cursor() cursor.execute('select * from XXX;') # 3. Store the contents in "cursor" in the CSV using file I/O with open('my_file.csv','w') as f: writer = csv.writer(f) writer.writerows([x[0] for x in cursor.description]) writer.writerows(cursor) # 4. Close the database connection cursor.close() conn.close() Basically, as soon as you have mastered the art of pulling data into the cursor variable using the pyodbc module, you can use all different ways to write that data into the CSV using basic Python file I/O — such as the open() function or context managers — or even Pandas function to write DataFrames to CSV files. If you’ve landed on this article, you’ll probably struggle with one or both of those issues: - Understanding basic Python features and functions—or knowing about them in the first place. In that case, join my free email academy for an infinite stream of learning content. - Understanding the database handling provided by Python’s Access interface. In that case, check out our in-depth tutorial on pyodbcon the Finxter blog. I hope you learned something out of today’s tutorial, my friend! Here’s a joke to round things out: Nerd Humor >.
https://blog.finxter.com/how-to-convert-access-accdb-table-to-a-csv-in-python/
CC-MAIN-2022-33
en
refinedweb
Positional parameters Lets say we would like to make some pretty URLs for some lists that we have on our site. We have a "show_list" method that takes a user and an id as parameters. The "normal" way would be to have. But with positional paramteres, this can be written: import cherrypy class Root: def show_list(self, user, id): return "user: %s <br /> id: %s" % (user, id) show_list.exposed = True cherrypy.quickstart(Root()) This also works well when you don't know the number of path components beforehand. Just take advantage of Python's arbitrary arguments: class Root: def view(self, *path): return ", ".join(["(%s)" % p for p in path]) view.exposed = True Older versions You can read more about positional paramters for 2.2 in the CherryPy book () CherryPy 2.1 allowed positional parameters for methods named "default". CherryPy 2.0 had no allowance for positional parameters.
http://cherrypy.org/wiki/PositionalParameters
crawl-001
en
refinedweb