For the purposes of this tutorial, and without going into too much detail about test-driven development and NUnit, here is a simple test fixture for the 'testing' domain:

using NUnit.Framework;

namespace TestLibrary
{
    [TestFixture]
    public class MyTests
    {
        private MyCalculator calc;

        [SetUp]
        public void SetUp()
        {
            calc = new MyCalculator();
        }

        [Test]
        public void testAdd()
        {
            int n = calc.add(38, 4);
            Assert.AreEqual(42, n);
        }
    }
}
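The MyCalculator class itself is not shown in this excerpt. Only the class name and the add method are implied by the test above, so everything else in the following sketch is an assumption:

namespace TestLibrary
{
    // Hypothetical class under test: just enough to make testAdd pass.
    public class MyCalculator
    {
        public int add(int a, int b)
        {
            return a + b;
        }
    }
}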
http://www.c-sharpcorner.com/UploadFile/camurphy/CodeCoverage03072006161928PM/CodeCoverage.aspx
CC-MAIN-2015-18
en
refinedweb
Opened 6 years ago
Closed 6 years ago
#10090 closed (duplicate)

small error in example code for looping over hidden form fields

Description

<form action="/contact/" method="POST">
{% for field in form.visible_fields %}
    <div class="fieldWrapper">
    {# Include the hidden fields in the form #}
    {% if forloop.first %}
        {% for hidden in form.hidden_fields %}
        {{ field }}
        {% endfor %}
    {% endif %}
        {{ field.errors }}
        {{ field.label_tag }}: {{ field }}
    </div>
{% endfor %}
<p><input type="submit" value="Send message" /></p>
</form>

This is from "Looping Over Hidden and Visible Fields" in the Django forms documentation. There is an error in the inner for loop for hidden fields. In this loop, "{{ field }}" should be "{{ hidden }}" to reflect the variable name in the inner loop.

Change History (1)

comment:1 Changed 6 years ago by Alex

- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to duplicate
- Status changed from new to closed

Closing as a dupe of #10009.
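For reference, the correction the reporter describes, applied to the inner loop, would read:

{% if forloop.first %}
    {% for hidden in form.hidden_fields %}
    {{ hidden }}
    {% endfor %}
{% endif %}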
https://code.djangoproject.com/ticket/10090
CC-MAIN-2015-18
en
refinedweb
Flash Panels – Inspiration, Creation and Implementation In this article I will explain how Flash Panels fit into the grand scheme of extending Flash MX 2004. We’ll also discuss some of the benefits and pitfalls you may encounter when using Flash Panels in your day to day work. Through this tutorial, you’ll create your very own Flash Panel to control the rotation of Movie Clips on the stage using standard Flash MX 2004 components, a hefty sprinkling of ActionScript and some tips and tricks along the way. I hope you’ll come away from this tutorial feeling empowered to create your own Flash Panels, and to explore the capabilities and possibilities of Flash MX 2004 — and your own mind! Before we set out on this extensibility trip, let me point out a couple of resources that will be invaluable in your pursuit of Flash Panel excellence: - Flash MX 2004 JavaScript Dictionary: An invaluable bible that contains nearly all the Flash API information that you’ll ever need. - JSFL File API (Not included in the Flash MX 2004 JavaScript Dictionary; functionality added in Flash MX 2004 7.2 udpater). The creation of Flash Panels for use in Flash MX 2004 basically hinges around the understanding and use of the JSAPI (JavaScript API). It’s based on a Document Object Model (DOM), which allows both Flash Documents and the internal functions of Flash MX 2004 to be accessed via simple JavaScript-based commands. Since the release of Flash MX 2004, many JSFL (Flash JavaScript) commands, Flash Panels and custom tools have been created to help automate tasks and add custom interfaces to complex controls that directly influence feedback in the Flash authoring environment. Some of these can be found in SitePoint’s Flash Blog; others are easily found via search (use ‘JSFL commands’ or ‘Flash Panels’ as your keywords). If you’re comfortable with ActionScript, pushing the boundaries to develop your own custom commands and panels is hardly a leap of faith — it’s a small step forward. As the JSAPI is based around the Netscape JavaScript API and Flash’s Document Object Model, developing and writing Flash JavaScript should be a natural progression. By their very nature, Flash Panels are exported SWF files. However, they’re subtly different from the standard JSFL files that are used to create commands, as they utilise a wrapper function called MMExecute(). This allows interaction between the compiled SWF and the Flash MX 2004 API. Consider the following line of JSFL, which returns the current width of the first selected item on the stage: var objectWidth= fl.getDocumentDOM().selection[0].width; In order to gain the same functionality within your SWF Panel, this code needs to be changed as follows: var objectWidth=MMExecute("fl.getDocumentDOM().selection[0].width"); If we examine the code contained within the MMExecute("JavaScript String"), we’ll note that it’s exactly the same piece of Flash JavaScript we saw above. The only difference is that it’s now encapsulated within the wrapper. The MMExecute() function takes the Flash JavaScript string as a single argument and passes it to the Flash API. It’s then processed and a return value is optionally given. This value can then be assigned to a variable. Flash Panel Location All the major Flash Panels can be found in one simple location within the authoring environment. Simply select ‘Window > Other Panels >’ to access it in Flash MX 2004. When you’re creating Flash Panels and testing in the live environment, keep the following locations in mind. 
These are the folders in which Flash MX 2004 locates the custom panels: We will make use of these directory locations later, when we test and deploy the extension. Inspiration Sometimes when you’re working, you suddenly think ‘Gee, wouldn’t it be quicker if I could automate [Insert Task Here]?’ More often than not, the answer is usually, ‘Yep, it’d be great to automate that task …but how on earth do I do it?’ Enter: Flash Panels… Actually, it’s not just the automation of tasks that warrants the creation of Flash Panels; the need for can stem from any of the following (and some other) requirements: - Automation: Automate often laborious and time consuming tasks within Flash MX 2004 (Code Addition, Timeline Effects) - Speedier Access: Quicker access to menu hidden commands - GUI Control: Add a GUI to control real-time effects (rotation, scaling, position etc) The creation of a Flash Panel can be a daunting task, which is why you need a clear goal for the panel before you begin. Once you decide specifically what you want the panel to do, you’re already most of the way to creating the panel (apart from the obvious coding and hooking into the interface). The next step is to sketch the process flow of the command (how it all works) either on paper, or in a text editor of your choice. Note: when I’m working in Flash, I always keep next to me a notebook that’s dedicated to ideas/workarounds. Sometimes, as you’re working away, a need or idea will spring into your mind that you can automate, speed up, or add an interface to, in order to make your life — and those of your colleagues — easier. Keep a list of these ideas so that those fleeting thoughts are never lost and everyone may benefit from the creation of your time-saving panel! In the example that we’re about to create, we will use a single instance of the NumericStepper component to control the rotation of Movie Clips. Consider the following diagram, which shows the command process flow of the command we’re about to create in Flash MX 2004: To this, we’ll add a change event handler to catch when the value of the NumericStepper component increases or decreases. When the value changes, the event handler will trigger a function called rotateMe(), which contains all the Flash JavaScript encapsulated in the MMExecute() wrapper function, which is necessary for the function to carry out its given task. Anyone for a History Lesson? The History Panel (Window > Other Panels > History), can be a useful insight into the inner workings of Flash MX 2004. When you’re looking to recreate an effect via scripted methods, the History Panel can be a good place to start. During the majority of your user interaction with the application on the stage, if you have the History Panel open, you’ll notice events appearing within it. This is a visual representation of the communication history between you the user and the application in JSFL. The majority of elements within the history can be copied to the clipboard and pasted into your favourite text editor for investigation, except those beside which a red cross appears. If you’re trying to identify the relevant API reference to carry out a given stage-based task that you’re trying to automate, and you can’t find it within the Flash MX 2004 JavaScript Dictionary, execute the task on the stage, and simply copy and paste from the History Panel. It makes an excellent starting point for your own custom commands! 
You may also save selected steps (but not those denoted with a red ‘x’) as a command, which will be made available from the ‘Commands’ menu provided it doesn’t require any user interaction. The process of creating one-time commands complete with interfaces is another topic — we’ll come back to it in another article. Creation Enough with the introduction! Let’s dive into creating a command that rotates the Movie Clips. Create the Rotator Flash Panel I’ve provided the code for the panel in this article’s downloadable code archive. The RotatorStart.fla contains the timeline layer structure and the background image for the panel. The finished FLA for this example is called RotatorFinal.fla. If at any time you need to look up the process flow for the function, refer to the diagram shown above. Setting the Scene Our first course of action is to add the component that will control the effect; as the background and layers have already been set up, we need only to add a single component to the stage before we insert the controlling ActionScript. Of course, it goes without saying that the more complex the panel, the more controls you may have on screen at any one time. I’ll leave it to you to experiment with your own creations after you’ve created this simple but effective example. - Open the starting point FLA (RotatorStart.fla from the code archive) and drag an instance of the NumericStepper Component from the ‘UI Components’ section of the Components Panel onto the first frame of the ‘Interface’ layer. Name the instance stepSizer. - Position the NumericStepper component instance centrally over the rounded rectangle background, and change the default parameter values to the following: - Maximum: 360 - Minimum: 0 - stepSize: 5 - Value: 45 - Save your Flash Document to a location of your choice. - Copy the JXLFLAPI.as file from the code archive to the location of your saved FLA. (this is a JSFL Wrapper that’s used to simplify some tasks). - Select the first frame of the Actions layer and add the following code within the Actions Panel: //Stage Controls Stage.align = "TC"; Stage.scaleMode = "noScale"; Stage.showMenu = false; //Flash API Wrapper (Courtesy Jesse Warden) #include "JXLFLAPI.as" //Main Rotation Function function rotateMe() { var selectionChecker = MMExecute("fl.getDocumentDOM().selection.length"); if (selectionChecker == 1) { /) + "})"); //Get Rotation Value var incrementer = stepSizer.value; //Rotate Selection MMExecute("fl.getDocumentDOM().rotateSelection(" + incrementer + ")"); //Align H/V to Center of Stage MMExecute("fl.getDocumentDOM().align('vertical center', true)"); MMExecute("fl.getDocumentDOM().align('horizontal center', true)"); //Update Preview Information } else { break; } } //========================== //Miscellaneous Functions //========================== //Middle Mouse Wheel Support //========================== var mouseListener:Object = new Object(); mouseListener.onMouseWheel = function(delta) { stepSizer.value += delta; }; Mouse.addListener(mouseListener); //========================== //Create Event Handler / Dispatcher for Numeric Stepper //========================== stepsListener = new Object(); stepsListener.change = function() { rotateMe(); }; stepSizer.addEventListener("change", stepsListener); //Numeric Stepper Event Handler Ends Let’s step through the code and see how it fits together. First, we set the main stage settings, aligning the contents of the stage to TC (Top Centre). We switch off the ability to zoom in, and stop the right click menu from appearing. 
//Stage Controls Stage.align = "TC"; Stage.scaleMode = "noScale"; Stage.showMenu = false; We then include a nifty JSFL wrapper from Jesse Warden , which allows us to encapsulate some flavours of JSFL without needing to worry about sometimes complex single and double escape strings in the MMExecute()function. #include "JXLFLAPI.as" Note: Using the JSFL wrapper, we can simplify the following trace statement: MMExecute("fl.trace("Tracing to the Output Panel")"); The JSFL wrapper simplifies the code as follows: flapi.trace("Tracing to the Output Panel") Moving on through the process flow of the panel, we must consider the listener object for the NumericStepper component instance that we have on the stage. We use the change event so that, when the user clicks the up or down controllers of the NumericStepper, the rotateMe()function is called: stepsListener = new Object(); stepsListener.change = function() { rotateMe(); }; stepSizer.addEventListener("change", stepsListener); The rotateMe()function is called every time the listener object detects that the selected value of the NumericStepper component has changed. If we refer to the previous process flow diagram, we can see clearly the chain of events that occurs. First of all, we check that the user has selected only a single item from the stage: var selectionChecker = MMExecute("fl.getDocumentDOM().selection.length"); if (selectionChecker == 1) { We then reset the transformation point of the object to a central location. The reason for this is simple: when we rotate the object, it rotates around this transformation point. If the transformation point is off-centre, it can be difficult to gauge what’s going on. Resetting the transformation point to the centre point of the object using the object’s width and height makes the rotation easier to observe and keeps things tidy. /)+"})"); We then get the current value of the NumericStepper Component, store it in the incrementer variable, and rotate the selection accordingly using rotateSelection(value). As the NumericStepper component facilitates the use of continuous feedback by holding down the direction buttons, this can lead to a pleasing and functional effect. //Get Rotation Value var incrementer = stepSizer.value; //Rotate Selection MMExecute("fl.getDocumentDOM().rotateSelection("+incrementer+")"); Finally, we align the object centrally to the stage while rotating it. It’s a personal choice of mine to add this code. If it’s omitted, the object can drift as a result of the way Flash MX 2004 applies the centralised transformation point (see the earlier discussion). //Align H/V to Center of Stage MMExecute("fl.getDocumentDOM().align('vertical center', true)"); MMExecute("fl.getDocumentDOM().align('horizontal center', true)"); That’s all we need to do in order to rotate the selected object; however, there’s an additional snippet of ActionScript that will give the Flash Panel middle mouse wheel support. This allows us to increase or decrease the value of the rotation either by clicking on the up and down arrows, or by scrolling the mouse wheel up or down. This utilises the same methodology as the event handler for the NumericStepper component, but uses the onMouseWheelevent handler to increase or decrease the component’s value. var mouseListener:Object = new Object(); mouseListener.onMouseWheel = function(delta) { stepSizer.value += delta; }; Mouse.addListener(mouseListener); - Save your Flash document, and export the SWF with a suitable name to your Flash MX 2004 'WindowSWF' directory as follows. 
- Restart Flash and access the panel from Window > Other Panels > [Name of Exported SWF] To use the command, simply select a single object from the stage, then use the controls within the Flash panel to control rotation of the object. Now you have a fully functional Flash Panel that controls the rotation of your object in a quick, defined and timely manner! Note: I usually use the Flash JSFL Wrapper to trace out information to the Output Panel during the development phase. For example, if in this case, I wanted to trace out the current value of the NumericStepper component when middle mouse wheel was scrolled, I would add to our code the lines denoted in bold. var mouseListener:Object = new Object(); mouseListener.onMouseWheel = function(delta) { flapi.trace("Object rotation is now "+stepSizer.value+ " degrees"); stepSizer.value += delta; }; Note also that there are a couple of extra functions I've included at the end of this article to help you on your way! Now all that remains is to package the SWF into a manageable MXP file that can be installed onto your machine, or computers of your colleagues or anyone that you wish! Now that we’ve created the interface, we need to add the controlling ActionScript to bring the effect to life. Add the ActionScript It’s pretty obvious, but the more things that your panel tries to accomplish, the more complex both the ActionScript and the encapsulated JSFL becomes. In this example, the code is pretty simple and linear, but as you create your own Flash Panels and begin to extend Flash MX 2004, things can get a little more complex. For this reason, it’s often extremely helpful to sketch out the data flow of your command, as I mentioned earlier. You won’t regret it! Implementation Before we package the Flash Panel into a distributable format, there are a couple of 'gotchas' that we need to examine! Updating the Panel while still in Flash MX 2004 When you make changes to the interface of, or add code to, your Flash Panel projects, you will obviously need to export your updated SWF to the 'WindowSWF' folder. However, in order to see the updates, you'll need to close the panel by clicking the window 'x' button when the panel is undocked and reopen it from the 'Window > Other Panels >' menu, rather than selecting 'Close Panel' from the Options flyout. The reasoning behind this is that clicking the 'Close Panel' option seems merely to hide the panel from view, rather than properly closing it and releasing it from memory. Name the Exported SWF I've experienced several 'Name Clash' issues when developing extensions for Flash MX 2004, and they can be slightly irritating -- to say the least! Sometimes, when you export a SWF to the 'WindowSWF' directory and attempt to open the panel within Flash MX 2004, a different panel opens! There is apparently no workaround for this -- you simply have to change the name of the SWF until it opens the correct panel when you select the panel from Window > Other Panels > [Your Panel]. To me, it looks like Flash MX 2004's built-in directory parsing uses a simple regular expression to iterate through the directory, and it can easily get confused! Hopefully, this will be rectified in the next minor (or major) release of Flash MX 2004. Package your Panel In order to make your shiny new panel easily shareable, you need to create an MXP file that can be installed with the Macromedia Extension Manager. The first step is to create an MXI file that the Extension Manager can use to compile the MXP file. 
The MXI is essentially an XML file that contains simple information about the extension: version information, extension name and description, as well as the files to compile. Note that an example .mxi file is included within the article source code, so you can alter it for your needs. Although it's outside the scope of this article to describe all the options available to those creating distributable MXPs, I'll cover some of the basics here to get you started. In order to create an MXI file for the Flash Panel we've just created, open your favourite text editor and add the following: <?xml version="1.0" encoding="UTF-8"?> <macromedia-extension <author name="Phireworx" /> <products> <product name="Flash" version="7" primary="true" /> </products> <description> <![CDATA[ Happily rotate your objects in Flash MX 2004 using this Simple Panel ]]> </description> <ui-access> <![CDATA[ Access to the command panel is by selecting 'Window > Other Panels > Rotator Panel' in Flash MX 2004. ]]> </ui-access> <license-agreement> <![CDATA[ ]]> </license-agreement> <files> <file source="Rotator Panel.swf" destination="$flash/WindowSWF" /> </files> </macromedia-extension> The MXI file contains different information, all of which can be easily understood and edited to suit your own needs. Here's a quick overview of where the information is located: - Author Name: within the <author>tag name attribute - - Description: within the <description>tag - Access and Usage Instructions: within the <UI-access>tag - Source File: within the <file>tag source attribute - File Destination: the location at which you should install the file is within the <file>tag destination attribute The most important section is the name of the SWF file that we are going to add: <file source="Rotator Panel.swf" destination="$flash/WindowSWF" /> We simply place the name of the exported SWF into the 'file source' section, and add the 'WindowSWF' directory as the destination ( $flash/WindowSWF). Note that the name of the exported SWF file that you include within the extension will appear as it does in the Flash MX 2004 menu system under 'Other Panels'. Once you've edited the options to your needs, save the file with the extension .mxi (e.g. Rotator Panel;.mxi). Now, you can double-click the MXI file, and (if Macromedia Extension Manager is installed), you'll be prompted for an extension (MXI) to package. You'll also be asked for a name by which the extension package (MXP) can be saved. The Macromedia Extension Manager automatically creates the MXP file, which can then be distributed as you see fit! I've only skimmed the surface of creating your own custom Flash Panels in this article, but I certainly hope that this information has given you the incentive to create your own Flash Panels! If you do create any exciting Flash Panels, you can always share them with the SitePoint community by posting in the Flash forums. Don't be afraid to experiment with your own cool effects and ideas for panels and commands. I'll see you in the forums! Extra Functions Here are a few of extra functions to help you on your way with the development of Flash Panels. Show an Alert When called from a compiled SWF, this simple piece of code will produce an alert within Flash MX 2004. 
errMsg = "alert('Please Save Your FLA before Applying the Effect');"; MMExecute(errMsg); Check the File is Saved This next section of code will check to see whether the current document has been saved or not, and carries out a conditional function: function checkDocumentIsSaved() { var fileDestinationTemp = MMExecute("fl.getDocumentDOM().path"); if (fileDestinationTemp != "undefined") { //Document is Saved, do something } else { //Document is NOT Saved, do something } } Iterate Through Selected Stage Objects This simple code will iterate through an array of currently selected objects on the stage. This can be extremely useful to change en masse properties of groups of selected objects: var objLength = MMExecute("fl.getDocumentDOM().selection.length"); for (var i = 0; i<objLength; i++) { //Do Something to the selected object here flapi.trace(i); } No Reader comments
http://www.sitepoint.com/flash-panels/
CC-MAIN-2015-22
en
refinedweb
Dear Professionals... I created an EJB session bean and deployed it successfully to WebSphere 3.5. I created a servlet to access the bean from within WebSphere... it worked great. When I created a client application to access the bean, I got the following error:

java.lang.ClassCastException
    at com.ibm.ejs.ns.jndi.CNContextImpl.isContextLocalCheck(CNContextImpl.java:1324)

The jar files in the classpath are:

C:\VisualCafe\Java2\bin\java -cp Deployedroom.jar;ujc.jar;iioprt.jar;rmiorb.jar;jndi.jar;ejb.jar Client

The lookup file is:

import javax.ejb.*;
import javax.naming.*;
import javax.naming.NamingException;

public class EJBLookup {
    public static EJBHome lookup(String name, Class homeClass, String serverName) {
        try {
            java.util.Properties p = new java.util.Properties();
            p.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.ejs.ns.jndi.CNInitialContextFactory");
            if (serverName != null) {
                String serverAddress = (serverName.length() > 0) ? serverName : "localhost";
                p.put(Context.PROVIDER_URL, "iiop://" + serverAddress + ":900"); // PENDING: Port 900 ??
            }
            InitialContext ic = new InitialContext(p);
            javax.rmi.PortableRemoteObject.narrow(ic.lookup(name), homeClass);
            Object o = javax.rmi.PortableRemoteObject.narrow(ic.lookup(name), homeClass);
            return (EJBHome) o;
        } catch (Throwable e) {
            e.printStackTrace();
            return null;
        }
    }
}

Can anyone help...!!!! Thanks.

Discussions

EJB programming & troubleshooting: Websphere 3.5 + Ejb + Client Application = ClassCastException

Websphere 3.5 + Ejb + Client Application = ClassCastException (4 messages)
- Posted by: mohamed sabry
- Posted on: April 24 2001 10:49 EDT

Threaded Messages (4)
- IBM JDK needed by Billy Newport on April 24 2001 23:13 EDT
- IBM JDK needed by mohamed sabry on April 26 2001 07:45 EDT
- IBM JDK needed by Billy Newport on April 26 2001 04:29 EDT
- IBM JDK needed by mohamed sabry on April 29 2001 07:25 EDT

IBM JDK needed [ Go to top ]
It won't work without the IBM JDK. I noticed you are using the Cafe JDK. The jars that you need are the ujc and ejs, and sslight as well if you're using security.
- Posted by: Billy Newport
- Posted on: April 24 2001 23:13 EDT
- in response to mohamed sabry
Billy

IBM JDK needed [ Go to top ]
Why, although the code compiles without error?
- Posted by: mohamed sabry
- Posted on: April 26 2001 07:45 EDT
- in response to Billy Newport

IBM JDK needed [ Go to top ]
The ORB is different. The WAS runtime needs the IBM ORB.
- Posted by: Billy Newport
- Posted on: April 26 2001 16:29 EDT
- in response to mohamed sabry

IBM JDK needed [ Go to top ]
Thanks a lot, I did it....
- Posted by: mohamed sabry
- Posted on: April 29 2001 07:25 EDT
- in response to Billy Newport

Lots of thanks...
http://www.theserverside.com/discussions/thread.tss?thread_id=5994
CC-MAIN-2015-22
en
refinedweb
XML-RPC

NOTE: All credit for this code goes to Crast in irc.freenode.net:#django... This uses SimpleXMLRPCDispatcher, which is part of the standard Python library in 2.4 (and possibly earlier versions).

In discussing ways of handling XML-RPC for Django, I realised I really needed a way to do it without patching Django's code. Crast in #django came up with a great solution, which I have modified and tweaked a bit. I've included it here. Feel free to fiddle with it and make it your own ... All this code is post-mr. Any crappy & garbage code is completely mine; I'm still learning Python so bear with me. The hacks I added for self-documentation output are just that; any improvements to them would probably be a good thing.

First, set up your urls.py to map an XML-RPC service:

urlpatterns = patterns('',
    # XML-RPC
    (r'^xml_rpc_srv/', 'yourproject.yourapp.xmlrpc.rpc_handler'),
)

Then, in the appropriate place, create a file called xmlrpc.py:

# Patchless XMLRPC Service for Django
# Kind of hacky, and stolen from Crast on irc.freenode.net:#django
# Self documents as well, so if you call it from outside of an XML-RPC Client
# it tells you about itself and its methods
#
# Brendan W. McAdams <[email protected]>

# SimpleXMLRPCDispatcher lets us register xml-rpc calls w/o
# running a full XMLRPC Server. It's up to us to dispatch data
from SimpleXMLRPCServer import SimpleXMLRPCDispatcher
from django.http import HttpResponse

# Create a Dispatcher; this handles the calls and translates info to function maps
#dispatcher = SimpleXMLRPCDispatcher() # Python 2.4
dispatcher = SimpleXMLRPCDispatcher(allow_none=False, encoding=None) # Python 2.5

def rpc_handler(request):
    """
    the actual handler:
    if you setup your urls.py properly, all calls to the xml-rpc service
    should be routed through here.
    If post data is defined, it assumes it's XML-RPC and tries to process as such.
    Empty post assumes you're viewing from a browser and tells you about the service.
    """
    response = HttpResponse()
    if len(request.POST):
        response.write(dispatcher._marshaled_dispatch(request.raw_post_data))
    else:
        response.write("<b>This is an XML-RPC Service.</b><br>")
        response.write("You need to invoke it using an XML-RPC Client!<br>")
        response.write("The following methods are available:<ul>")
        methods = dispatcher.system_listMethods()
        for method in methods:
            # right now, my version of SimpleXMLRPCDispatcher always
            # returns "signatures not supported"... :(
            # but, in an ideal world it will tell users what args are expected
            sig = dispatcher.system_methodSignature(method)
            # this just reads your docblock, so fill it in!
            help = dispatcher.system_methodHelp(method)
            response.write("<li><b>%s</b>: [%s] %s" % (method, sig, help))
        response.write("</ul>")
        response.write('<a href=""> <img src="" border="0" alt="Made with Django." title="Made with Django."></a>')
    response['Content-length'] = str(len(response.content))
    return response

def multiply(a, b):
    """
    Multiplication is fun!
    Takes two arguments, which are multiplied together.
    Returns the result of the multiplication!
    """
    return a*b

# you have to manually register all functions that are xml-rpc-able with the dispatcher
# the dispatcher then maps the args down.
# The first argument is the actual method, the second is what to call it from the XML-RPC side...
dispatcher.register_function(multiply, 'multiply')

That's it! You can pretty much write a standard Python function in there, just be sure to register it with the dispatcher when you're done.
Here's a quick and dirty client example for testing:

import sys
import xmlrpclib

rpc_srv = xmlrpclib.ServerProxy("")
result = rpc_srv.multiply(int(sys.argv[1]), int(sys.argv[2]))
print "%s * %s = %d" % (sys.argv[1], sys.argv[2], result)

Based on experience, I do recommend that you use dictionaries for your args rather than long args, but I think that's personal preference (it allows named arguments,
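To illustrate that recommendation, here is an invented example (not part of the original page) of a dictionary-taking method; it assumes the dispatcher object from the xmlrpc.py snippet above:

def create_item(params):
    """
    Expects a dict such as {'name': 'widget', 'quantity': 3}.
    Returns a dict describing what was done.
    """
    name = params.get('name', 'unnamed')
    quantity = int(params.get('quantity', 1))
    return {'ok': True, 'message': '%d x %s created' % (quantity, name)}

dispatcher.register_function(create_item, 'create_item')

And the matching client-side call:

result = rpc_srv.create_item({'name': 'widget', 'quantity': 3})
print result['message']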
https://code.djangoproject.com/wiki/XML-RPC?version=16
CC-MAIN-2015-22
en
refinedweb
Contents - Introduction - A very simple C program - The program in PowerPC assembly language - The relocatable object file - Disassembly and machine code Introduction. - gcc translates our C code to assembly code. - gcc calls GNU as to translate the assembly code to machine code in an ELF relocatable object. - gcc calls GNU ld to link our relocatable object with the C runtime and the C library to form an ELF executable object. - NetBSD kernel loads ld.elf_so, which loads our ELF executable and the C library (an ELF shared object) to run our program. So far, this wiki page examines only the first two steps. A very simple C program This program is only one C file, which contains only one main function, which calls printf(3) to print a single message, then returns 0 as the exit status. #include <stdio.h> int main(int argc, char *argv[]) { printf("%s", "Greetings, Earth!\n"); return 0; }. We can apply gcc(1) in the usual way to compile this program. (With NetBSD, cc or gcc invokes the same command, so we use either name.) Then we can run our program: $ cc -o greetings greetings.c $ ./greetings Greetings, Earth! $. $ cc -v -o greetings greetings.c Using built-in specs. Target: powerpc--netbsd Configured with: /usr/src/tools/gcc/../../gnu/dist/gcc4/configure --enable-long- long --disable-multilib --enable-threads --disable-symvers --build=i386-unknown- netbsdelf4.99.3 --host=powerpc--netbsd --target=powerpc--netbsd Thread model: posix gcc version 4.1.2 20061021 prerelease (NetBSD nb3 20061125) **/usr/libexec/cc1 -quiet -v greetings.c -quiet -dumpbase greetings.c -auxbase gr** **eetings -version -o /var/tmp//ccVB1DcZ.s** #include "..." search starts here: #include <...> search starts here: /usr/include End of search list. GNU C version 4.1.2 20061021 prerelease (NetBSD nb3 20061125) (powerpc--netbsd) compiled by GNU C version 4.1.2 20061021 (prerelease) (NetBSD nb3 200611 25). GGC heuristics: --param ggc-min-expand=38 --param ggc-min-heapsize=77491 Compiler executable checksum: 325f59dbd937debe20281bd6a60a4aef **as -mppc -many -V -Qy -o /var/tmp//ccMiXutV.o /var/tmp//ccVB1DcZ.s** GNU assembler version 2.16.1 (powerpc--netbsd) using BFD version 2.16.1 **ld --eh-frame-hdr -dc -dp -e _start -dynamic-linker /usr/libexec/ld.elf_so -o g** **reetings /usr/lib/crt0.o /usr/lib/crti.o /usr/lib/crtbegin.o /var/tmp//ccMiXutV.** **o -lgcc -lgcc_eh -lc -lgcc -lgcc_eh /usr/lib/crtend.o /usr/lib/crtn.o** The first command, /usr/libexec/cc1, is internal to gcc and is not for our direct use. The other two commands, as and ld, are external to gcc. We would use as and ld without gcc, if we would want so.. The .s assembly file and the .o object file were temporary files, so the gcc driver program deleted them. We only keep the final executable of greetings. The program in PowerPC assembly language. - Comments begin with a '#' sign, though gcc never puts any comments in its generated code. PowerPC uses '#', unlike many other architectures that use ';' instead. - Assembler directives have names that begin with a dot (like .section or .string) and may take arguments. - Instructions have mnemonics without a dot (like li or stw) and may take operands. - Labels end with a colon (like .LC0: or main:) and save the current address into a symbol.. Commented copy of greetings.s. # This is a commented version of greeting.s, the 32-bit PowerPC # assembly code output from cc -mregnames -S greetings.c # .file takes the name of the original source file, # because this was a generated file. 
I guess that this # allows error messages or debuggers to blame the # original source file. .file "greetings.c" # Enter the .rodata section for read-only data. String constants # belong in this section. .section .rodata # For PowerPC, .align takes an exponent of 2. # So .align 2 gives an alignment of 4 bytes, so that # the current address is a multiple of 4. .align 2 # .string inserts a C string, and the assembler provides # the terminating \0 byte. The label sets the symbol # .LC0 to the address of the string. .LC0: .string "Greetings, Earth!" # Enter the .text section for program text, which is the # executable part. .section ".text" # We need an alignment of 4 bytes for the following # PowerPC processor instructions. .align 2 # We need to export main as a global symbol so that the # linker will see it. ELF wants to know that main is a # @function symbol, not an @object symbol. .globl main .type main, @function main: # The code for the main function begins here. # Passed in general purpose registers: # r1 = stack pointer, r3 = argc, r4 = argv # Passed in link register: # lr = return address # The int return value goes in r3. # Allocate 32 bytes for our the stack frame. Use the # atomic instruction "store word with update" (stwu) so # that r1[0] always points to the previous stack frame. stwu %r1,-32(%r1) # r1[-32] = r1; r1 -= 32 # Save registers r31 and lr to the stack. We need to # save r31 because it is a nonvolatile register, and to # save lr before any function calls. Now r31 belongs in # the register save area at the top of our stack frame, # but lr belongs in the previous stack frame, in the # lr save word at (r1[0])[0] == r1[36]. mflr %r0 # r0 = lr stw %r31,28(%r1) # r1[28] = r31 stw %r0,36(%r1) # r1[36] = r0 # Save argc, argv to the stack. mr %r31,%r1 # r31 = r1 stw %r3,8(%r31) # r31[8] = r3 /* argc */ stw %r4,12(%r31) # r31[12] = r4 /* argv */ # Call puts(.LC0). First we need to load r3 = .LC0, but # each instruction can load only 16 bits. # .LC0@ha = (.LC0 >> 16) & 0xff # .LC0@l = .LC0 & 0xff # This method uses "load immediate shifted" (lis) to # load r9 = (.LC0@ha << 16), then "load address" (la) to # load r3 = &(r9[.LC0@l]), same as r3 = (r9 + .LC0@l). lis %r9,.LC0@ha la %r3,.LC0@l(%r9) # r3 = .LC0 # The "bl" instruction calls a function; it also sets # the link register (lr) to the address of the next # instruction after "bl" so that puts can return here. bl puts # puts(r3) # Load r3 = 0 so that main returns 0. li %r0,0 # r0 = 0 mr %r3,%r0 # r3 = r0 # Point r11 to the previous stack frame. lwz %r11,0(%r1) # r11 = r1[0] # Restore lr from r11[4]. Restore r31 from r11[-4], # same as r1[28]. lwz %r0,4(%r11) # r0 = r11[4] mtlr %r0 # lr = r0 lwz %r31,-4(%r11) # r31 = r11[-4] # Free the stack frame, then return. mr %r1,%r11 # r1 = r11 blr # return r3 # End of main function. # ELF wants to know the size of the function. The dot # symbol is the current address, now the end of the # function, and the "main" symbol is the start, so we # set the size to dot minus main. .size main, .-main # This is the tag of the gcc from NetBSD 4.0.1; the # assembler will put this string in the object file. .ident "GCC: (GNU) 4.1.2 20061021 prerelease (NetBSD nb3 20061125)"! Optimizing the main function Expect a compiler like gcc to write better assembly code than a human programmer who knows assembly language. The best way to optimize the assembly code is to enable some gcc optimization flags. 
Released software often uses the -O2 flag, so here is a commented copy of greetings.s (from the gcc of NetBSD 4.0.1/macppc) with -O2 in use. # This is a commented version of the optimized assembly output # from cc -O2 -mregnames -S greetings.c .file "greetings.c" # Our string constant is now in a section that would allow an # ELF linker to remove duplicate strings. See the "info as" # documentation for the .section directive. .section .rodata.str1.4,"aMS",@progbits,1 .align 2 .LC0: .string "Greetings, Earth!" # Enter the .text section and declare main, as before. .section ".text" .align 2 .globl main .type main, @function main: # We use registers as before: # r1 = stack pointer, r3 = argc, r4 = argv, # lr = return address, r3 = int return value # Set r0 = lr so that we can save lr later. mflr %r0 # r0 = lr # Allocate only 16 bytes for our stack frame, and # point r1[0] to the previous stack frame. stwu %r1,-16(%r1) # r1[-16] = r1; r1 -= 16 # Save lr in the lr save word at (r1[0])[0] == r1[20], # before calling puts(.LC0). lis %r3,.LC0@ha la %r3,.LC0@l(%r3) # r3 = .LC0 stw %r0,20(%r1) # r1[20] = r0 bl puts # puts(r3) # Restore lr, free stack frame, and return 0. lwz %r0,20(%r1) # r0 = r1[20] li %r3,0 # r3 = 0 addi %r1,%r1,16 # r1 = r1 + 16 mtlr %r0 # lr = r0 blr # return r3 # This main function is smaller than before but ELF # wants to know the size. .size main, .-main .ident "GCC: (GNU) 4.1.2 20061021 prerelease (NetBSD nb3 20061125)" The optimized version of the main function does not use the r9, r11 or r31 registers; and it does not save r31, argc or argv to the stack. The stack frame occupies only 16 bytes, not 32 bytes.. The relocatable object file Now that we have the assembly code, there are two more steps before we have the final executable. - The first step is to run the assembler (as), which translates the assembly code to machine code, and stores the machine code in an ELF relocatable object. - The second step is to run the linker (ld), which combines some ELF relocatables into one ELF executable.. $ as -o greetings.o greetings.s The output greetings.o is a relocatable object file, and file(1) confirms this. $ file greetings.o greetings.o: ELF 32-bit MSB relocatable, PowerPC or cisco 4500, version 1 (SYSV) , not stripped List of sections The source greetings.s had assembler directives for two sections (.rodata.str1.4 and .text), so the ELF relocatable greetings.o should contain those two sections. The command objdump can list the sections. $ objdump Usage: objdump <option(s)> <file(s)> Display information from object <file(s)>. At least one of the following switches must be given: ... -h, --[section-]headers Display the contents of the section headers ... $.) That leaves the mystery of the .comment section. The objdump command accepts -j to select a section and -s to show the contents, so objdump -j .comment -s greetings.o dumps the 0x3c bytes in that section. $ objdump -j .comment -s greetings.o greetings.o: file format elf32-powerpc Contents of section .comment: 0000 00474343 3a202847 4e552920 342e312e .GCC: (GNU) 4.1. 0010 32203230 30363130 32312070 72657265 2 20061021 prere 0020 6c656173 6520284e 65744253 44206e62 lease (NetBSD nb 0030 33203230 30363131 32352900 3 20061125).. Of symbols and addresses Our assembly code in greetings.s had three symbols. The first symbol had the name .LC0 and pointed to our string. .LC0: .string "Greetings, Earth!" The second symbol had the name main. It was a global symbol that pointed to a function. 
.globl main .type main, @function main: mflr %r0 ... The third symbol had the name puts. Our code used puts in a function call, though it never defined the symbol. bl puts. The nm command shows the names of symbols in an object file. The output of nm shows that greetings.o contains only two symbols. The .LC0 symbol is missing. $ nm greetings.o 00000000 T main U puts. The nm tool claims that symbol main has address 0x00000000, which to be a useless value. The actual meaning is that main points to offset 0x0 within section .text. A more detailed view of the symbol table would provide evidence of this. Fate of symbols. (Because this part of the wiki page now comes before the part about machine code, this disassembly should probably not be here.). ELF, like any object format, allows for a symbol table. The list of symbols from nm greetings.o is only an incomplete view of this table. $ nm greetings.o 00000000 T main U puts The command objdump -t shows the symbol table in more detail. $ objdump -t greetings.o greetings.o: file format elf32-powerpc SYMBOL TABLE: 00000000 l df *ABS* 00000000 greetings.c 00000000 l d .text 00000000 .text 00000000 l d .data 00000000 .data 00000000 l d .bss 00000000 .bss 00000000 l d .rodata.str1.4 00000000 .rodata.str1.4 00000000 l d .comment 00000000 .comment 00000000 g F .text 0000002c main 00000000 *UND* 00000000 puts. The filename symbol greetings.c exists because the assembly code greetings.s had a directive .file greetings.c. The symbol main has a nonzero size because of the .size directive.. TODO: explain "relocation records" $ objdump -r greetings.o greetings.o: file format elf32-powerpc RELOCATION RECORDS FOR [.text]: OFFSET TYPE VALUE 0000000a R_PPC_ADDR16_HA .rodata.str1.4 0000000e R_PPC_ADDR16_LO .rodata.str1.4 00000014 R_PPC_REL24 puts Disassembly and machine code Disassembly GNU binutils provide both assembly and the reverse process, disassembly. While as does assembly, objdump -d does disassembly. Both programs use the same library of opcodes.. $ objdump -d greetings.o greetings.o: file format elf32-powerpc Disassembly of section .text: 00000000 <main>: 0: 7c 08 02 a6 mflr r0 4: 94 21 ff f0 stwu r1,-16(r1) 8: 3c 60 00 00 lis r3,0 c: 38 63 00 00 addi r3,r3,0 10: 90 01 00 14 stw r0,20(r1) 14: 48 00 00 01 bl 14 <main+0x14> 18: 80 01 00 14 lwz r0,20(r1) 1c: 38 60 00 00 li r3,0 20: 38 21 00 10 addi r1,r1,16 24: 7c 08 03 a6 mtlr r0 28: 4e 80 00 20 blr. The disassembled code would must resemble the assembly code in greetings.s. A comparison shows that every instruction is the same, except for three instructions. - Address 0x8 has lis r3,0 instead of lis %r3,.LC0@ha. - Address 0xc has addi r3,r3,0 instead of la %r3,.LC0@l(%r3). - Address 0x14 has bl 14 <main+0x14> instead of bl puts.. If the reader of objdump -d greetings.o would not know about these symbols, then the three instructions at 0x8, 0xc and 0x14 would seem strange, useless and wrong. -. - The "add immediate" (addi) instruction does addition, so addi r3,r3,0 increments r3 by zero, which effectively does nothing! The instruction seems unnecessary and useless. - The instruction at address 0x14 is bl 14 <main+0x14>, which branches to label 14, effectively forming an infinite loop because it branches to itself! Something is wrong.. A better understanding of how symbols fit into machine code would help. Machine code in parts The output of objdump -d has the machine code in hexadecimal. This allows the reader to identify individual bytes. 
This is good with architectures that organize opcodes and operands into bytes.. One can write the filter program using a scripting language that provides both regular expressions and bit-shifting operations. Perl (available in lang/perl5) is such a language. Here follows machine.pl, such a script. #!/usr/bin/env perl # usage: objdump -d ... | perl machine.pl # # The output of objdump -d shows the machine code in hexadecimal. This # script converts the machine code to a format that shows the parts of a # typical PowerPC instruction such as "addi". # # The format is (opcode|register-1|register-2|immediate-value), # with digits in (decimal|binary|binary|hexadecimal). use strict; use warnings; my $byte = "[0-9a-f][0-9a-f]"; my $word = "$byte $byte $byte $byte"; while (defined(my $line = <ARGV>)) { chomp $line; if ($line =~ m/^([^:]*:\s*)($word)(.*)$/) { my ($before, $code, $after) = ($1, $2, $3); $code =~ s/ //g; $code = hex($code); my $opcode = $code >> (32-6); # first 6 bits my $reg1 = ($code >> (32-11)) & 0x1f; # next 5 bits my $reg2 = ($code >> (32-16)) & 0x1f; # next 5 bits my $imm = $code & 0xffff; # last 16 bits $line = sprintf("%s(%2d|%05b|%05b|%04x)%s", $before, $opcode, $reg1, $reg2, $imm, $after); } print "$line\n"; } Here follows the disassembly of greetings.o, with the machine code in parts. $ objdump -d greetings.o | perl machine.pl greetings.o: file format elf32-powerpc Disassembly of section .text: disassembly now shows the machine code with the opcode in decimal, then the next 5 bits in binary, then another 5 bits in binary, then the remaining 16 bits in hexadecimal.. When machine code contains opcode 14, then the disassembler tries to be smart about choosing an instruction mnemonic. Here follows a quick example. $ cat quick-example.s .section .text addi 4,0,5 # bad la 3,3(0) # very bad la 3,0(3) la 5,2500(3) $ as -o quick-example.o quick-example.s $ objdump -d quick-example.o | perl machine.pl quick-example.o: file format elf32-powerpc Disassembly of section .text: 00000000 <.text>: 0: (14|00100|00000|0005) li r4,5 4: (14|00011|00000|0003) li r3,3 8: (14|00011|00011|0000) addi r3,r3,0 c: (14|00101|00011|09c4) addi r5,r3,2500 If the second register operand to opcode 14 is 00000, then the machine code looks like an instruction "li", so the disassembler uses the mnemonic "li". Otherwise the disassembler prefers mnemonic "addi" to "la". Opcodes more strange The filter script shows the four parts of a typical instruction, but not all instructions have those four parts. The instructions that do branching or access special registers are not typical instructions. Here again is the disassembly of the main function in greetings.o: Assembly code uses "branch and link" (bl) to call functions and "branch to link register" (blr) to return from functions. - The instruction bl branches to the address of a function, and stores the return address in the link register. - The instruction blr branches to the address in the link register. - The instructions "move from link register" (mflr) and "move to link register" (mtlr) access the link register, so that a function may save its return address while it uses bl to call other functions.. The source file /usr/src/gnu/dist/binutils/opcodes/ppc-opc.c contains a table of powerpc_opcodes that lists the various mnemonics that use opcodes 18, 19 and 31.
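Earlier, the lis/la pair loaded .LC0 by splitting the address into .LC0@ha and .LC0@l. The following short Perl script is an illustration in the spirit of machine.pl (it is not part of the original page); it shows how the two halves are computed and recombined, including the usual PowerPC adjustment of @ha that compensates for the sign extension addi/la apply to @l:

#!/usr/bin/env perl
# Sketch: how SYM@ha and SYM@l relate to a full 32-bit address for a lis/addi pair.
use strict;
use warnings;

my $addr = 0x1234fff0;              # example address; any 32-bit value works
my $lo   = $addr & 0xffff;          # SYM@l: low 16 bits (sign-extended by addi/la)
my $hi   = ($addr >> 16) & 0xffff;  # plain high 16 bits
# @ha is the "adjusted" high half: add 1 when the low half is negative as a
# signed 16-bit value, so that (ha << 16) + sign_extend(lo) equals the address again.
my $ha   = ($hi + (($lo & 0x8000) ? 1 : 0)) & 0xffff;
my $signed_lo = ($lo & 0x8000) ? $lo - 0x10000 : $lo;
printf("addr=0x%08x  \@ha=0x%04x  \@l=0x%04x  rebuilt=0x%08x\n",
       $addr, $ha, $lo, (($ha << 16) + $signed_lo) & 0xffffffff);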
https://wiki.netbsd.org/examples/elf_executables_for_powerpc/
CC-MAIN-2015-22
en
refinedweb
Using SPMonitoredScope Last modified: January 16, 2013 Applies to: SharePoint Foundation 2010 In this article When to Use SPMonitoredScope How to Use SPMonitoredScope Where Calculations are Displayed Other Uses for SPMonitoredScope Performance Considerations Best Practices Limitations In previous releases of Windows SharePoint Services, when an unexplained performance or reliability problem surfaced, it was sometimes difficult to isolate the problem and determine its cause. Often developers would spend a lot of time determining where their failure points and performance bottlenecks were. Microsoft SharePoint Foundation 2010 introduces the SPMonitoredScope class, to allow developers to designate portions of their code so that they can monitor usage statistics in the Unified Logging Service (ULS) logs and the Developer Dashboard. See Using the Developer Dashboard for more information. The SPMonitoredScope class resides in the Microsoft.SharePoint.Utilities namespace. SPMonitoredScope is very easy to use. A developer simply "wraps" the section of code to be monitored. Then, as the code is executed, the measured statistics are written to the ULS logs as well as to the Developer Dashboard. This allows information about the component and the page where it resides to be immediately available to the developer and to the system administrator. Here is an example of wrapped code. Monitoring other resource usage SPMonitoredScope can also measure resource usage of other types in a specified section of code. For example, this code sample measures and logs execution time, number of requests, and the number of SharePoint SQL Server queries (including the query text) that are performed by the external callout. Using performance thresholds SPMonitoredScope can also be used to dynamically trace only when excessive resource use is detected. In the example above, the scope is being told that the callExternalCode() method should take no more than 1000ms, and that it should allocate no more than 3 SPRequests. If these limits are exceeded, the trace level for this scope will be increased to "high" for this one instance. The counter will also appear red in the dashboard. Using SPMonitoredScope to wrap code has a very low performance hit. However, it should be noted that if a section of code wrapped by SPMonitoredScope were to contain a loop that performed a high number of iterations (for example, iterating through XML nodes that are returned by a SharePoint Foundation 2010 Web service), the call stack included on the Developer Dashboard could increase in size exponentially, making it difficult to decipher the information displayed. A tip for the best and most effective use of SPMonitoredScope: All calls to external components, such as custom databases, external Web services, and so on, should be wrapped with SPMonitoredScope. This will make it easier for administrators to identify them as points of failure, and to isolate the problem quickly. It should be noted that there are a few limitations for using SPMonitoredScope. Only calls to SharePoint databases are captured. Only the code wrapped with SPMonitoredScope that resides on the front-end Web server appears on the Developer Dashboard. Code that executes on application servers only displays the SPMonitoredScope information in the ULS logs of the computer that the code is running on. SPMonitoredScope cannot be used in sandboxed solutions.
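The article describes wrapping a section of code with SPMonitoredScope and, in the threshold case, telling the scope that callExternalCode() should take no more than 1000 ms and allocate no more than 3 SPRequests. A minimal C# sketch of both forms might look like the following; it assumes the types in the Microsoft.SharePoint.Utilities namespace, and the counter type names are recalled from the SharePoint 2010 SDK and worth verifying there:

using Microsoft.SharePoint.Utilities;

public class MonitoredWork
{
    // Hypothetical external callout, as in the article's description.
    private void callExternalCode() { /* ... */ }

    public void Run()
    {
        // Simplest form: wrap a block so its usage statistics show up in the
        // ULS log and on the Developer Dashboard under the given scope name.
        using (new SPMonitoredScope("My Monitored Scope"))
        {
            callExternalCode();
        }

        // Threshold form: flag the scope if it takes more than 1000 ms
        // or allocates more than 3 SPRequest objects.
        using (new SPMonitoredScope("External callout",
                1000,
                new SPCriticalTraceCounter(),
                new SPExecutionTimeCounter(1000),
                new SPRequestUsageCounter(3),
                new SPSqlQueryCounter()))
        {
            callExternalCode();
        }
    }
}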
https://msdn.microsoft.com/en-us/library/office/ff512758(v=office.14)
CC-MAIN-2015-22
en
refinedweb
Troubleshooting and Tips Note: This document was originally published as "Windows Management Instrumentation: Frequently Asked Questions." On This Page Q 1. What is WMI and what can it do for me? Q 2. On what platforms is WMI available? Q 3. How can I tell if WMI exposes specific functionality? Q 4. What can I do if WMI does not provide the capabilities I need? Q 5. Where can I find sample scripts that use WMI? Q 6. Why does my script run on one version of Windows but not on another? Q 7. Why is a WMI operation returning an error? Q 8. How do I set WMI namespace security? Q 9. How do I manage remote computers using WMI? Q 10. Why does my remote operation fail when it involves a third machine? Q 11. Why are my queries taking such a long time to complete? Q 12. How do I list all the installed applications on a given machine? Q 13. How do I get performance counter data? Q 1. What is WMI and what can it do for me? Windows Management Instrumentation is a core Windows management technology; you can use WMI to manage both local and remote computers. WMI provides a consistent approach to carrying out day-to-day management tasks with programming or scripting languages. For example, you can: Start a process on a remote computer. Schedule a process to run at specific times on specific days. Reboot a computer remotely. Get a list of applications installed on a local or remote computer. Query the Windows event logs on a local or remote computer. The word “Instrumentation” in WMI refers to the fact that WMI can get information about the internal state of computer systems, much like the dashboard instruments of cars can retrieve and display information about the state of the engine. WMI “instruments” by modeling objects such as disks, processes, or other objects found in Windows systems. These computer system objects are modeled using classes such as Win32_LogicalDisk or Win32_Process; as you might expect, the Win32_LogicalDisk class models the logical disks installed on a computer, and the Win32_Process class models any processes currently running on a computer. Classes are based on the extensible schema called the Common Information Model (CIM). The CIM schema is a public standard of the Distributed Management Task Force (). WMI capabilities also include eventing, remoting, querying, views, user extensions to the schema, instrumentation, and more. To learn more about WMI, go to and search for the keyword phrase “About WMI.” Q 2. On what platforms is WMI available? WMI is available in all recent versions of Windows. WMI is installed with Windows Me, Windows 2000, Windows XP and Windows Server 2003. For Windows 98 and Windows NT 4.0, WMI is available as an Internet download from. Search for the download “Windows Management Instrumentation (WMI) CORE 1.5 (Windows 95/98/NT 4.0).” Note that Windows NT 4.0 requires Service Pack 4 or later before you can install and run WMI. Additional software requirements for WMI include: Microsoft® Internet Explorer version 5.0 or later. Windows Script Host (WSH). WSH ships with Windows 2000, Windows XP, Windows Server 2003, and Windows Me, but not with Windows NT4 or Windows 98. You can download WSH from. The latest version -- which ships with Windows XP and Windows Server 2003 -- is WSH 5.6. Q 3. How can I tell if WMI exposes specific functionality? MSDN is your best bet when looking for detailed reference information on WMI and its capabilities; see the WMI Reference at. 
The WMI Reference contains information about most of the classes, scripting objects, and APIs available with a standard installation of WMI. Note that WMI providers that are not part of the operating system might create classes that either are not documented on MSDN or are documented elsewhere in the Platform SDK. After you familiarize yourself with how the information is categorized, you can easily search for the class you are looking for and find out if the functionality you want is available. Please be aware that you might need to use more than one class to accomplish a given task. For example, suppose you want to obtain basic system information for a computer. While you can retrieve information about available memory using the Win32_OperatingSystem class, you will have to use a second class (such as Win32_LogicalDisk) if you also need information about free disk space on the computer. See the question Why does my script run on one version of Windows but not on another? for more information on discovering what WMI can and cannot do. CIM Studio is a tool that enables you to browse WMI Classes on Windows 2000 and later platforms. For information on this tool and the download containing it (CIM Studio is one of the set of tools installed by WMITools.exe), go to and search for the keyword “WMI tools.” You can also run the unsupported Wbemtest.exe utility - which is automatically installed along with WMI -- to explore WMI data. On Windows XP or Windows Server 2003 you can use the following script, which searches for classes that have a specific word in the class name. Save the script to a text file named Search.vbs and then run the script, specifying the keyword you would like to search for. For example, to search for classes with “service in the class name, run the following command at the command prompt: ' Script for finding a class in WMI Repository Set args = wscript.arguments If args.Count <= 0 Then Wscript.Echo "Tool to search for a matching class in the WMI Repository. " Wscript.Echo "USAGE: <keywordToSearch> [<namespaceToSearchIn>]" Wscript.Echo "Example1: Cscript search.vbs service" Wscript.Echo "Example2: Cscript search.vbs video root\cimv2" Else ' If no Namespace is specified then the Default is the ROOT namespace rootNamespace = "\\.\ROOT" keyword = args(0) If args.Count > 1 Then rootNamespace = args(1) End If EnumNameSpace rootNamespace Wscript.Echo vbNewLine End if ' Subroutine to recurse through the namespaces Sub EnumNameSpace(parentNamespaceName) Set objService = GetObject("winmgmts:" & parentNamespaceName) Set collMatchingClasses = objService.Execquery _ ("Select * From meta_class Where __class " & _ "Like '%" & keyword & "%'") If (collMatchingClasses.count > 0) Then Wscript.Echo vbNewLine Wscript.Echo vbNewLine Wscript.Echo "Matching Classes Under Namespace: " & parentNamespaceName For Each matchingClass in collMatchingClasses Wscript.Echo " " & matchingClass.Path_.CLASS Next End if Set collSubNamespaces = objService.Execquery _ ("select * from __namespace") For Each subNameSpace in collSubNamespaces EnumNameSpace subNameSpace.path_.namespace + _ "\" + subNameSpace.Name Next End Sub This script will only run on Windows XP or Server 2003. That’s because the LIKE operator, part of the WMI Query Language, is only available on those two platforms. Q 4. What can I do if WMI does not provide the capabilities I need? Sooner or later you will want to script a task that WMI cannot do or cannot do very efficiently. 
In cases such as that, you should first see if another scripting technology included in the operating system provides the capabilities you need. For example, ADSI (Active Directory Service Interfaces) enables you to manage Active Directory; CDO (Collaboration Data Objects) provides the ability to send email from within a script. If no appropriate scripting interface is available in the Windows operating system, third-party software might be available that performs the functions you need. If no scripting interface exists you can, in theory, write a WMI provider that offers this functionality. However, WMI providers cannot be written in a scripting language; providers must be written in C++ or C#. For information on how to do this, see "Using WMI" on MSDN, which directs you to topics on writing traditional WMI providers. If you want to write a provider using the .NET Framework, search the MSDN library for "Managing Applications Using WMI." Many other companies market management software that extends WMI functionality. You can search on the Internet for third-party tools. You might also be able to get information through questions to newsgroups. See the question Where can I find sample scripts that use WMI? Q 5. Where can I find sample scripts that use WMI? The Microsoft Developers Network (MSDN) and TechNet are both good sources of samples. Here are some links to useful locations on these sites: The TechNet Script Center Includes hundreds of sample scripts categorized by technology and administrative task. MSDN. For WMI scripts, search for "WMI System Administration scripts." For WMIC (the WMI Command Line Utility), see . The WMI Software Developers Kit (SDK) For a set of problem solutions by category, see Using WMI > WMI Tasks for Scripts and Applications. The Windows 2000 Scripting Guide online The complete text of the book, which includes many examples of WMI scripting. The "Tales from the Script" column on TechNet Basic and intermediate scripting topics. The "Scripting Clinic" column on MSDN More advanced scripting topics. Forum Head over to The Official Scripting Guys Forum, where you can post and answer scripting questions. Q 6. Why does my script run on one version of Windows but not on another? This is typically due to the fact that classes, properties, or methods introduced in newer versions of Windows might not be available on previous versions of the operating system. To verify availability, look in the Requirements section for each class in the WMI Software Developer Kit (SDK) in the MSDN library (). For example, the requirements for the Win32_PingStatus class indicate that it requires Windows XP or Windows Server 2003. Because of this, scripts that attempt to access the Win32_PingStatus class on Windows 2000 will fail with a "Class not found" error. Likewise, some WMI data providers, such as the SNMP Provider, are either not available in all operating systems or are not part of the default installation of WMI. SDK topics that refer to these providers have a note pointing to the topic "Operating System Availability of WMI Components" in the "About WMI" section. For a list of the standard WMI providers, see "WMI Providers" under the WMI Reference section. In general, when a new provider is added to a new version of Windows its functionality will not be made available to previous versions of Windows. For example, the Win32_PingStatus class defined by the Ping provider is unlikely to be made available for Windows 2000. 
This is usually due to the fact that the provider takes advantage of capabilities found in the new version of Windows that simply do not exist in previous versions. What if you have two computers, running the identical version of Windows, and a script runs on one machine but not the other? For information on troubleshooting problems such as this, see Why is a WMI operation returning an error? Q 7. Why is a WMI operation returning an error? To begin with, make sure that the error in question is really a WMI error. WMI error numbers start with 8004xxxx (e.g., 80041001). You can look up WMI error numbers and return codes by going to and searching for "WMI Return Codes." If you can't find the information you need, try searching for the specific error number on MSDN. If you do not receive an error number when running the script, you can look for errors in the WMI log files found in the %windir%\system32\wbem\logs folder. If it is difficult to determine which errors resulted from the script you just ran, delete all the logs and run the script again. This should make it easier to find errors related to your script. If you can't find any errors in the log files, you might need to reset the logging level for the logs. To get maximum information, set the logging level to verbose. On Windows 2000, Windows NT, and Windows Me/98/95 you need to restart WMI after changing the logging levels; this is not required for Windows XP and Windows Server 2003. For detailed information on configuring the logging levels, go to and search for "Logging WMI Activity." Errors might also be recorded in the Windows event logs. Look for events with the source Winmgmt. On Windows XP or Windows Server 2003 you can use MSFT_WMIProvider classes to troubleshoot provider operations such as loading and unloading the provider, responding to a query, executing a method, etc. For example, WMI generates an instance of the class MSFT_WmiProvider_CancelQuery_Pre immediately before the provider cancels the response to a query. An instance of MSFT_WmiProvider_CancelQuery_Post is generated after the cancellation occurs. If a query operation in a particular script is failing you can write a script to wait for instances of these event classes to be generated. When your monitoring script receives one of these events, the data tells you the provider involved, the type of provider, the query being processed, and the namespace involved. For more information, go to and search for "Troubleshooting Classes." Following is a sample script that troubleshoots problems with the Ping provider. The script reports all the actions that take place as part of a Ping operation, including such things as provider loading, query receipt, and error generation. This information can help you determine whether the problems you are having occurred in the provider or in the WMI service. In the output, look for events where the ResultCode is not equal to 0; in general an error code other than 0 indicates that an operation failed. Save the following code in a .VBS file and then run the script. 'msftTroubleShooting.vbs starts here DIM oLctr, oSvc, OSink, instCount, SrvName, SrvUserName, SrvPswd, args, argcount Set args = wscript.arguments SrvName = "." SrvUserName = Null SrvPswd = Null instcount = 0 argcount = args.Count If (argcount > 0) Then If args(0) = "/?" or args(0) = "?" Then Wscript.Echo "Usage: cscript msftTroubleShooting.vbs " & _ "[ServerName=Null|?] 
[UserName=Null] [Password=Null]" Wscript.Echo "Example: cscript msftTroubleShooting.vbs " Wscript.Echo "Example: cscript msftTroubleShooting.vbs computerABC" Wscript.Echo "Example: cscript msftTroubleShooting.vbs " Wscript.Echo "computerABC admin adminPswd" Wscript.Quit 1 End If End If Set oLctr = createObject("WbemScripting.Swbemlocator") On Error Resume Next If argcount = 0 Then Set oSvc = oLctr.ConnectServer(,"root\cimv2") SrvName = " Local Computer " Else srvname = args(0) If argcount >= 2 Then SrvUserName = args(1) End If If argcount >= 3 Then SrvPswd = args(2) End If Set oSvc = oLctr.ConnectServer(srvname,"root\cimv2",SrvUserName,SrvPswd) End If If Err = 0 Then Wscript.Echo "Connection to " & srvname & " is thru" & vbNewLine Else Wscript.Echo "The Error is " & err.description & _ " and the Error number is " & err.number Wscript.Quit 1 End If On Error Goto 0 Set oSink = WScript.CreateObject("WbemScripting.SWbemSink","Sink_") oSvc.ExecNotificationQueryAsync oSink, _ "Select * From MSFT_WmiProvider_OperationEvent Where " & _ "provider = 'WMIPingProvider'" Wscript.Echo "To stop the script press ctrl + C" & vbNewLine Wscript.Echo "Waiting for events......" & vbNewLine While True Wscript.Sleep 10000 Wend Q 8. How do I set WMI namespace security? Setting namespace security using WMI Control The WMI Control provides one way to manage namespace security. You can start the WMI Control from the command prompt using this command: wmimgmt.msc On Windows 9x or Windows NT4 computers that have WMI installed, type this command instead: wbemcntl.exe Alternatively, you can access the WMI Control and the Security tab by doing the following: Right-click on My Computer and click Manage. Double-click Services and Applications and then double-click WMI Control. Right-click WMI Control and then click Properties. In the WMI Control Properties dialog box click the Security tab. A folder named Root with a plus sign (+) next to it should now be visible. Expand this tree as necessary to locate the namespace for which you want to set permissions. Click the Security button. A list of users and their permissions appears. If the user is on that list, modify the permissions as appropriate. If the user is not on the list, click the Add button, and add the user from the location (local machine, domain, etc.) where the account resides. Notes: In order to view and set namespace security, the user must have Read Security and Edit Security permissions. Administrators have these permissions by default, and can assign the permissions to other user accounts as required. If this user needs to access the namespace remotely, you must select the Remote Enable permission. By default, user permissions set on a namespace apply only to that namespace. If you want the user to have access to that namespace and all subnamespaces in the tree below it, or in subnamespaces only, click the Advanced button. Click Edit and specify the scope of access in the resulting dialog box. Q 9. How do I manage remote computers using WMI? Generally speaking, any operation that WMI can perform on the local computer can also be performed on a remote computer where you have local administrator privileges. As long as you have rights to the remote namespace (see How do I set WMI namespace security?) and as long as the remote computer is remote-enabled you should be able to connect to a remote machine and perform any operations for which you have the requisite permissions. In addition, you can also use delegation if the remote computer is enabled for delegation. 
Delegation allows the remote computer to obtain information from a third computer, using the credentials supplied by the client. In other words, you can run a script on Computer A and connect to Computer B; Computer B can then connect to Computer C using the user name and password supplied by the script running on Computer A. Delegation scenarios are dealt with under Why does my remote operation fail when it involves a third machine? To connect to a remote namespace using WMI tools To connect remotely using tools like CIM Studio or Wbemtest, you must specify a namespace in the form "\\<machinename>\root\<namespace>" For example: \\myserver\root\cimv2 Authentication is handled either by Kerberos or NTLM. To use NTLM or default (non-Kerberos) authentication, specify the following: User: <domain>\<User> Password: <password> Authority: Either leave blank, or enter "NTLMDomain:<domain>" here. If you include the Authority parameter, leave "<domain>\" out of the User parameter designation, entering just the user name. For example: User: kenmyer Password: 45Tgfr98q Authority: NTLMDomain:fabrikam To use Kerberos authentication, specify the following: User: <domain>\<User> Password: <password> Authority: Enter "Kerberos:<domain>\<machinename>" here. For example: User: kenmyer Password: 45Tgfr98q Authority: Kerberos:fabrikam\atl-ws-01 To connect to WMI on a remote computer using a script Before you begin, make sure you have the appropriate permissions on the remote namespace. If you have these permissions, you can connect to the remote machine without specifying user credentials. WMI will connect using the user credentials you logged on with. If you do not need to specify user credentials, you can connect to a remote computer using the short connection syntax known as a moniker string. For more information, go to and search for "Constructing a Moniker String." For example, this moniker connects you to the default namespace on a remote computer named TargetComputer (because no namespace is specified, the connection is automatically made to the default namespace): winmgmts:\\TargetComputer - If TargetComputer is in a different domain than the one you are logged onto you must include the domain name in the moniker. If you don't, you'll get an Access Denied error. For example, this moniker connects you to a computer named TargetComputer in a domain named DomainName: winmgmts:\\TargetComputer.DomainName Although not always required, you can also specify the WMI namespace in the moniker itself. This is useful when working with different platforms, because the default namespace isn't always the same on different versions of the operating system. For example, on Windows 2000, Windows XP, and Windows Server 2003, the default namespace is root\cimv2; however, on Windows NT 4.0 and Windows 98 the default namespace is root\default. This moniker connects to the root\cimv2 namespace on the remote computer TargetComputer: winmgmts:\\TargetComputer\root\cimv2 If you are dealing with multiple platforms, you might also need to specify the Impersonation level; while the default Impersonation level on Windows 2000 and later versions of Windows is Impersonate, on previous versions of Windows the default Impersonation level is Identify. If you are working with Windows NT 4.0 and/or Windows 98 computers, you will need to include the Impersonation level in the moniker string; you will also need to include the Impersonation level when using delegation. 
The following moniker connects to the root\cimv2 namespace on the computer named TargetComputer, and specifies Impersonate as the Impersonation level: winmgmts:{impersonationLevel=impersonate}!\\TargetComputer\root\cimv2 Finally, you might need to set the Authentication level depending on what OS versions you are connecting to and from. The Authentication level enables you to request the type of DCOM authentication and privacy to be used throughout a connection. Settings range from no authentication to per-packet encrypted authentication. The following moniker connects to the root\cimv2 namespace on the computer named TargetComputer, and specifies Impersonate as the Impersonation level. In addition, it configures the Authentication level as pkt: winmgmts:{impersonationLevel=impersonate,authenticationLevel=pkt}!\\TargetComputer\root\cimv2 Note: Generally speaking, it's not a good idea to hardcode an administrator password in a script. A better approach would have the script prompt you for the password each time it runs. For more information, go to and search for "Connecting Between Different Operating Systems." To connect to WMI using WMIC If you have rights to the remote namespace and if that computer is remote-enabled, then you do not have to specify a user name and password when connecting. Instead, WMIC will automatically use your current user credentials. For example: WMIC /NODE:"computer1" OS GET Caption,CSDVersion,CSName If you need to use delegation, then you should include /IMPLEVEL:Delegate and /AUTHORITY settings in the WMIC connection string. For example: WMIC /NODE:"computer1" /IMPLEVEL:Delegate /AUTHORITY:"Kerberos:domain\computer1" OS Alternatively, you can specify a user account and password to be used when connecting via WMIC (as with WMI scripting, only administrators have WMI remote connection privileges by default). For example: WMIC /NODE:"computer1" /USER:"domainname\username" OS GET Caption,CSDVersion This sample command includes a password as well as a user name: WMIC /NODE:"computer1" /USER:"domainname\username" /PASSWORD:"userpassword" OS GET Caption,CSDVersion,CSName For further information on connecting remotely, go to and search for "Connecting to WMI on a Remote Computer." What do "Access Denied" errors mean You might get an "Access Denied" error when trying to connect to a remote WMI namespace or object. There are several different Access Denied errors: 0x80041003 (WBEM_E_ACCESS_DENIED) This typically results when the process trying to access the namespace does not have the required WMI privileges. The account attempting remote access should be an administrator on the target computer; in addition, the account might need to have a specific privilege enabled. To troubleshoot this error, check the namespace security on the remote namespace to see the privileges enabled for the account. 0x800706xx (DCOM RPC error) This often occurs when a firewall is configured on the remote computer. You will need to open the appropriate ports on the firewall to permit remote administration using DCOM. Alternatively, the computer might be having problems mapping the IP and the Hostname. To test that possibility, try using the IP address instead of the Hostname in your connection string. To troubleshoot remote errors Check whether the user has access to the remote computer. From the command prompt, execute the following command: net use \\<remotecomputer>\C$ /u:<domain\username> * Enable the verbose logging level on the remote computer and re-run the script. After running the script, examine the logs on the remote machine (%windir%\system32\wbem\Logs\). 
Enable audit events to determine which account is responsible for the failed connection. After auditing has been enabled, you will see events similar to this in the event log: Event Type: Failure Audit Event Source: Security Event Category: Logon/Logoff Event ID: 529 Date: 6/14/2004 Time: 10:52:35 AM User: NT AUTHORITY\SYSTEM Computer: <remote machine> Description: Logon Failure: Reason: Unknown user name or bad password User Name: xuser Domain: NTDEV Logon Type: 3 Logon Process: NtLmSsp Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 Workstation Name: <console Machine > Check the DCOM configuration for the Access\Launch permission; the user running the script must have this permission. If all the previous checks are OK, if the user is recognized by the remote computer, and if the connection still fails with a DCOM Access Denied error, then contact Product Support Services () with the following information: The operating system each computer is running. The installation history The steps that reproduce the problem The script or tool code in which the failure occurs The user credentials used to make the WMI connection, including the authentication and impersonation levels. A zip file of %windir%\system32\wbem\logs from both computers Q 10. Why does my remote operation fail when it involves a third machine? Delegation is required when a client computer (Computer A) needs to forward domain credentials from a remote server (Computer B) to a third machine (Computer C). In cases such as this, when two or more network hops must be made for a given operation, delegation is required. Without delegation Computer B cannot forward credentials received from Computer A; as a result, the connection to Computer C fails. Here are two situations that require delegation. Enumerating printers from a WMI server computer. In this case, WMI attempts to gather properties from the remote printer attached to a printer server, an operation which requires delegation. You run a script on client Computer A, which connects to Print Server B. In turn, Print Server B tries to access a printer connected to Computer C. Connecting to SQL Server via NT authentication from the WMI server. Delegation is required so that WMI can forward the credentials from the server to SQL Server. If SQL Server is using SQL Server Standard Authentication (SQL Server-based security) instead of NT authentication, then the connection string for the connection to SQL server does not require delegation. For delegation to work in scenarios like these: All three computers must be running either Windows 2000, Windows XP, or Windows Server 2003. Delegation cannot be used with computers running Windows NT 4.0 or Windows 98. You must enable delegation for Computer B within Active Directory. You must specify Kerberos as the authentication authority in the connection from the WMI client process (Computer A) to the WMI server (Computer B). Specifying an authentication authority requires a call to SWbemLocator.ConnectServer. This method is part of the WMI Scripting API (). After these steps are completed, Computer B is trusted for delegation. For example, suppose Computer B sends a request to a remote file share located on Computer C. In this case, Computer C can use the forwarded credentials to authenticate the user originally specified in the client process on Computer A. Although available as an administrative option, delegation is typically not recommended because Computer A is providing credentials to Computer B. 
Delegation enables Computer B to then use those credentials elsewhere, which could be a security risk. The following script enables a computer account for delegation within Active Directory. The script was tested within a Windows Server 2003 domain using a domain administrator account. In addition: The WMI client computer (Computer A) was running Windows XP SP1 Professional. The WMI server computer (Computer B) was running Windows Server 2003. All three computers were in the same Active Directory domain. Delegation requires all the computers to be in the same domain. In this example, the file server share (Computer C) is on the same physical computer as the WMI client. However, the share could be on another computer in the same domain. 'Purpose: Script to enable delegation on a computer and 'then perform an operation that requires delegation 'Requirements: The client computer must be a member of the same Active Directory 'domain as the WMI Server specified in the argument to this script 'Permissions required: The user that runs this script should be a member of 'the Domain Administrators group in the Active Directory Const UF_TRUSTED_FOR_DELEGATION = &H80000 Set args = Wscript.Arguments ' Terminate unless two arguments are specified when starting 'the script If args.Count <> 2 then Wscript.Echo "You must provide a server name and delegation command line." Wscript.Echo "For example, start the script using syntax similar to this:" Wscript.Echo "cscript.exe this.vbs <WMI Server> <Delegation Command Line>" Wscript.Echo "cscript.exe this.vbs computer2 " Wscript.echo "\\computer1\c$\windows\system32\calc.exe" Wscript.Quit 1 end if serverName = args(0) argCommandLine = args(1) ' Connect locally and get the domain and DS_Computer object to ' examine and/or modify Set svc = GetObject("winmgmts:root\cimv2") ' Get some local machine variables to understand the environment we are working in Set objEnum = svc.ExecQuery _ ("Select domain, name From win32_computerSystem", "WQL", 48) For Each obj in objEnum domain = obj.Domain computerName = obj.Name Next ' Get the connection to the root\directory\ldap namespace to enable delegation ' on the remote computer from the local machine Set svc = GetObject("Winmgmts:root\directory\ldap") ' Create the required context object Set octx = CreateObject("wbemscripting.swbemnamedvalueset") octx.Add "__PUT_EXT_PROPERTIES", Array("ds_userAccountControl") octx.Add "__PUT_EXTENSIONS", true octx.Add "__PUT_EXT_CLIENT_REQUEST", true ' Variable to determine whether or not we have modified the userAccountControl 'and whether or not we have to modify it back when we are done modified = False Set objEnum = svc.ExecQuery _ ("Select * From ds_computer Where ds_cn = '" & serverName & "'", "WQL", 48) For Each obj in objEnum ' Store this variable to memory for restoration after this operation completes userAccountControlOriginal = obj.ds_userAccountControl ' Test to see if the computer is already trusted for delegation If CBool(userAccountControlOriginal And UF_TRUSTED_FOR_DELEGATION ) = False Then Wscript.Echo "Computer account not trusted for delegation yet" ' Resume On Error while we try this initially On Error Resume Next ' Add this constant value to the value contained already obj.ds_userAccountControl = userAccountControlOriginal + _ UF_TRUSTED_FOR_DELEGATION ' This should trust the computer account for delegation obj.Put_ 1, octx If (Err.Number = 0) Then ' Set the flag so we know to modify it back to original setting modified = True Else Wscript.Echo Hex(Err.Number) & " " & _ 
Err.Description Wscript.Quit 1 End If On Error Goto 0: Else ' Already trusted for delegation so ' continue with delegation code here Wscript.Echo "Computer account is trusted for delegation already" End If ' Get the locator object Set lctr = CreateObject("WbemScripting.SWbemLocator") ' Get the service object from the remote server specifying the Kerberos authority Set delegationService = lctr.ConnectServer _ (serverName, "root\cimv2", , , , _ "kerberos:" & trim(domain) & "\" & Trim(serverName)) ' Delegation level impersonation delegationService.Security_.ImpersonationLevel = 4 ' Get the object that will be used to test the delegation hop Set process = delegationService.Get("win32_process") ' Get the inparameter object for the method Set inparams = process.methods_("Create").inparameters ' Set the inparameter commandline value inparams.CommandLine = argCommandLine ' Execute the method Set oReturn = process.ExecMethod_("Create", inparams) ' Echo the output If (oReturn.ReturnValue = 0) Then Wscript.Echo oReturn.ProcessId & _ " is the Process ID from the process " & _ "creation using delegation" Else Wscript.Echo "An error occurred, the return value for the " & _ "Win32_Process.Create method is " & _ oReturn.ReturnValue End If ' Set the value back to the original value If modified = True Then ' Subtract the added delegation privilege from the computer account obj.ds_userAccountControl = _ userAccountControlOriginal - UF_TRUSTED_FOR_DELEGATION ' Restore the original setting obj.put_ 1, octx End If Next The preceding script will not work if either of the two member computers are running Windows NT 4.0 or Windows 98. The script will also fail if the target is located on a Windows NT 4.0 file share. You can manually trust a computer for delegation by doing the following: Click the Start button and then click All Programs. Point to Administrative Tools and then click Active Directory Users and Computers. In Active Directory Users and Computers, expand the Computers node and find the computer you want to trust for delegation Right-click that computer and click Properties. Select Trust computer for delegation and then click OK. For more information on delegation and remote connections, see Connecting to a 3rd Computer-Delegation () and Securing a Remote WMI Connection (). Also see the questions titled How do I manage remote computers using WMI? and How do I set WMI namespace security? Q 11. Why are my queries taking such a long time to complete? Typically this is due to queries that return large amounts of data. If the query requests a very large dataset and you are only interested in a subset of the data, you can often speed up the operation by limiting the returned information. WQL (the WMI Query Language) enables you to filter the set of instances (records) as well as the properties (fields) returned. For examples, go to and search for "Querying with WQL” Also see the topic "SELECT Statement for Data Queries.” In some cases providers have been optimized to filter based on particular properties. Specifying these in the WHERE clause can improve performance, because the provider can actively filter the result set instead of relying on WMI to post-filter the collection after the entire data space has been enumerated. Refer to the particular class definition for optimization capabilities. The Drive and Path properties of CIM_DataFile are examples of optimized properties. 
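To make that concrete, here is a minimal sketch of a narrowed query that filters on the provider-optimized Drive and Path properties of CIM_DataFile and asks only for the properties it actually needs; the folder C:\Scripts is just an illustrative path, not something assumed to exist on your machine:

' Sketch: filter on the optimized Drive and Path properties instead of
' enumerating every file and post-filtering in the script.
strComputer = "."
Set objWMI = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")

Set colFiles = objWMI.ExecQuery _
    ("Select Name, FileSize From CIM_DataFile " & _
     "Where Drive = 'C:' And Path = '\\Scripts\\'")

For Each objFile In colFiles
    Wscript.Echo objFile.Name & "  (" & objFile.FileSize & " bytes)"
Next

Selecting only Name and FileSize, and letting the provider do the filtering, keeps WMI from marshalling every property of every file on the disk back to the script.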
By default, WMI queries return an enumerator that allows the traversal of the collection multiple times and in both directions; among other things, this means you can loop through all the items in the collection and then, if you wish, loop through all the items a second or third time. When the returned data set is large, this type of enumerator might require so much memory that it affects performance. You can work around this issue by specifying the WBEM_FLAG_FORWARD_ONLY flag when issuing the query. Although you can loop through the collection just once using this type of enumerator, the memory for each object is released after use and thus performance will not degrade. For more details see Making a Semisynchronous Call with VBScript (). While the performance of semisynchronous queries is comparable in most cases to asynchronous queries, very large queries might monopolize the main application thread or be throttled by WMI to avoid overloading the system. In these cases making the query asynchronous can improve performance. However, you should be aware that the asynchronous calls are less secure in most operating systems. For more information, see Invoking an Asynchronous Query () and Setting Security on an Asynchronous Call (). Q 12. How do I list all the installed applications on a given machine? The Win32_Product WMI class represents applications installed by Windows Installer. However, this WMI class may not list all the installed applications that appear in Add or Remove Programs. One solution to this problem is to gather data on installed applications from the registry (note that not all applications write to the registry when they are installed). This topic shows two ways of doing this: using a script to directly read information from the registry, and using a MOF file and script to obtain this information from WMI. The following script lists installed applications on a computer. The script uses the WMI System Registry Provider to gather information directly from the registry: Next Alternatively, the following MOF file with its accompanying script demonstrates another way to retrieve all the installed applications that register themselves in the registry. To use the MOF file, do the following: Step 1: Copy the following MOF syntax into Notepad and save it as a .MOF file (for example, products.mof). qualifier dynamic:ToInstance; qualifier ProviderClsid:ToInstance; qualifier ClassContext:ToInstance; qualifier propertycontext:ToInstance; [dynamic, provider("RegProv"), ProviderClsid("{fe9af5c0-d3b6-11ce-a5b6-00aa00680c3f}"), ClassContext ("local|HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall") ] class Products { [key] string KeyName; [read, propertycontext("DisplayName")] string DisplayName; [read, propertycontext("DisplayVersion")] string DisplayVersion; [read, propertycontext("InstallLocation")] string InstallLocation; }; Step 2: At the command prompt, type mofcomp products.mof. This stores the MOF file in the WMI repository. Step 3: With the MOF stored in the repository, use the following script to get at the data. strComputer = "." Set WMI = GetObject("winmgmts:\\" & strComputer & _ "\root\default") Set colItems = WMI.ExecQuery("Select * from Products") For Each objItem In colItems WScript.Echo "DisplayName: " & objItem.DisplayName WScript.Echo "DisplayVersion: " & objItem.DisplayVersion WScript.Echo "InstallLocation: " & objItem.InstallLocation WScript.Echo "KeyName: " & objItem.KeyName Next Q 13. How do I get performance counter data? 
Support for the Cooked Counter Provider - the quickest and easiest way to retrieve performance data using WMI - was first added in Windows XP. On Windows 2000 you can still retrieve performance data; however, because this data appears in “uncooked” format you must then format the data yourself to get useful values for most counters. By contrast, on Windows XP and Windows Server 2003 performance data can be obtained directly via the Win32_PerfFormattedData classes. For more information, see "Example: Obtaining Cooked Performance Data" at. Because the Cooked Counter Provider is not available on Windows 2000, calculations must be made on the "raw" counter data to obtain meaningful performance information. For details on working with raw counter data, see "Example: Obtaining Raw Performance Data" at. To find the correct formula for each counter type, first identify the numeric counter type for the property using either the WMI SDK ("Performance Counter Classes" topic) or the "countertype" qualifier for the property in question. The formula for that counter type can then be found under "WMI Performance Counter Types" at. On pre-Windows 2000 systems, the Performance Monitoring Provider must be used to obtain performance counters using WMI. See "Monitoring Performance With the Performance Monitoring Provider" at.
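As a rough illustration of the cooked-counter approach described above for Windows XP and Windows Server 2003, the following sketch samples processor counters through an SWbemRefresher; two Refresh calls are made because most cooked counters only report meaningful values from the second sample onward:

' Sketch: read cooked (pre-calculated) counter values on XP/2003.
Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
Set objRefresher = CreateObject("WbemScripting.SWbemRefresher")
Set colCPU = objRefresher.AddEnum _
    (objWMI, "Win32_PerfFormattedData_PerfOS_Processor").ObjectSet

objRefresher.Refresh          ' first sample
Wscript.Sleep 1000
objRefresher.Refresh          ' second sample - values are now cooked

For Each objCPU In colCPU
    Wscript.Echo "Processor " & objCPU.Name & ": " & _
        objCPU.PercentProcessorTime & "% processor time"
Next

On Windows 2000 the same idea requires the Win32_PerfRawData classes plus the documented counter-type formulas, as described above.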
https://technet.microsoft.com/en-us/library/ee692772(d=printer)
CC-MAIN-2015-22
en
refinedweb
For the second installment of my TechRepublic series that highlights various .NET technologies and explains what they are, I am focusing on .NET's data capabilities. - ActiveX Data Objects .NET (ADO.NET) is a collection of technologies that encompasses basic data connection functionality. Things like the ability to connect to a database are part of ADO.NET. Don't let the ActiveX in the title confuse you -- that's just some branding on Microsoft's part from ages ago to help ease developers' minds about the technology. ADO.NET is part of the .NET Framework. - Language Integrated Query (LINQ) was introduced in .NET 3.5; it provides a standardized set of SQL-like expressions to be used inline in other .NET languages such as C# and VB.NET. Objects may provide LINQ providers so they can be queried (including updates and deletions). It may not seem obvious at first, but using a SQL-like language to query data is very useful. LINQ is more than just about querying databases -- it can be tied to XML data, objects in memory, Web services, ORM systems, and more. As a result, having a standardized way to work with data regardless of the backing storage system allows you to build applications with much less learning and effort. Some common LINQ providers are: - LINQ-to-objects -- objects that implement IEnumerable. - LINQ-to-XML -- the XML objects in the LINQ namespace. - LINQ-to-Entities -- Entity Framework. - LINQ-to-NHibernate -- NHibernate. - LINQ-to-SQL -- SQL Server databases. - Entity Framework (aka "EF") is Microsoft's ORM built on top of ADO.NET technologies. Its initial release was widely panned as being feature poor and overly complex. The second release (confusingly called Entity Framework 4) contains a lot of improvements and addresses many of the concerns about the initial release. EF can be used through LINQ. - NHibernate is an open source ORM based on Hibernate for the Java platform. NHibernate has found favor within the .NET community for being a well-made product. LINQ can query NHibernate. NHibernate competes directly with Entity Framework and is well supported by the community. - Windows Communication Foundation (WCF) Data Services are built on top of the WCF platform. These services allow a simple Web service to be built to access an underlying data source, and can be automatically generated from a data source. If you need to expose data to an external client, WCF Data Services is your best bet. It allows for very granular and discrete permissions for the security minded. WCF produces XML, RDF, or JSON. WCF Data Services creates a RESTful Web service. - WCF RIA Services, another sub-set of the WCF platform, specializes in creating N-tiered applications with Silverlight. WCF RIA Services encompasses the server side and the client side. On the client side, it generates classes that are aware of CRUD operations (but not the underlying logic) on the server, so that operations on the client can easily trigger the appropriate logic on the server. - Open Database Connectivity (ODBC) is the granddaddy of all Windows data connectivity systems. While ODBC is not a .NET-specific technology, ADO.NET can connect to ODBC datasources. What technology would you like to see in the next edition of this series, which will publish in about a month? J.Ja Justin James is the Lead Architect for Conigent.
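To make the LINQ point above concrete, here is a minimal, self-contained LINQ-to-objects sketch in C#; the array and names are invented for illustration, but the same query shape applies to the other providers listed, with only the data source behind it changing:

using System;
using System.Linq;

class LinqToObjectsSketch
{
    static void Main()
    {
        // Any IEnumerable<T> can be queried with LINQ-to-objects.
        string[] technologies = { "ADO.NET", "LINQ", "Entity Framework", "NHibernate", "ODBC" };

        var query = from t in technologies
                    where t.Contains("N")          // filter
                    orderby t                      // sort
                    select t.ToUpperInvariant();   // project

        foreach (string item in query)
            Console.WriteLine(item);
    }
}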
http://www.techrepublic.com/blog/software-engineer/net-data-technologies-overview/
CC-MAIN-2015-22
en
refinedweb
XDoclet is a tool that proclaims the DRY (Don’t Repeat Yourself) principle: you code everything in one file once, as opposed to the practice of modern software development, where information & tokens must be repeated in executable code, configuration files as well as deployment descriptors (of course, this has not been invented to hamper us, but allows for greater flexibility in deployment and portability). Examples of such technologies include e.g. Struts ( struts-config.xml and Java source code), Hibernate (POJOs and mapping files), or EJBS (interfaces (local/remote), implementation and deployment descriptors). Without a code generation tool such as Xdoclet, such technologies quickly become a maintenance nightmare at best, and unworkable at worse. If you are (rather) new to XDoclet, you are referred to my previous post on Attribute-Oriented programming. In this post, we focus on the use of XDoclet in the Struts framework. The XDoclet module that is used in this case is webdoclet, which can also be employed for other model 2 (that is, MVC-based) frameworks such as WebWork. Finally, we show how to use Xdoclet in Eclipse. Although webdoclet and Struts are paid attention to in particular, this tutorial should get you started employing Xdoclet proficiently in Eclipse in general, using the generic Xdoclet capabilities offered by the JBoss-IDE plug-in. XDoclet and Struts Let us now focus on the Struts specific artifacts that need to be generated. These include the web.xml, the struts-config.xml and the validation.xml. First the web.xml file. The Struts controller servlet ( StrutsActionServlet) needs to be put in the web.xml, as well as the corresponding servlet mapping entry. The nice thing is that the Struts framework provided the implementation for this controller servlet for us, but the drawback is that we cannot place a @struts.tag in the code anymore. This problem is solved by making use of the so-called merge points of webdoclet. At predefined places in the generation procedure, pieces of generic code can be merged into the generation process. The code is generic in the sense that it does not belong to a particular class, method or field, hence it couldn’t be put in the Java sources at a logical place/xdoclet tag. The merge points defined for the generation of web.xml can be found on XDoclet’s home location. By the way, all the files to be used in the merge process reside in the “merge dir”, to be specified in the build.xml. The names of these files determine their “merge point” during the code generation process. There are also merge points that are specific to the struts-config.xml file. Typically, these merge points are used to define global forwards, global exceptions, a controller element, message resource elements, and plug-in elements. For the sake of completeness, I re-list the table I found on the Internet, containing the XDoclet Struts Config merge files: struts-data-sources.xml, an XML document containing the optional data-sources element. struts-forms.xml, an XML unparsed entity containing form-bean elements, for additional non-XDoclet forms. global-exceptions.xml, an XML document containing the optional global-exceptions element. global-forwards.xml, an XML document containing the optional global-forwards element. struts-actions.xml, an XML unparsed entity containing action elements, for additional non-XDoclet actions. struts-controller.xml, an XML document containing the optional controller element. 
struts-message-resources.xml, an XML unparsed entity containing any message-resources elements. struts-plugins.xml, an XML unparsed entity containing any plug-in elements. In addition to these merge points, we can make use of Xdoclet tags. For example, the class-scope @struts.action-tag enables us to associate a struts-config.xml action to a class as follows: /** * @struts.action * name="submitForm" * path="/submitData" * scope="request" * validate="false" * parameter="action" * input="pages/inputPage.jsp" * * @struts.action-exception * type="nl.amis.package.exception.ApplicationException" * key="app.exception" * path="pages/error.jsp" * * @struts.action-forward * name="success" * path="pages/next.jsp" */ public class SubmitAction extends Action { public ActionForward execute( ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { return mapping.findForward("success"); } } The corresponding ActionForm could look like /** * @author zeger * * @struts.form * name="submitForm" */ public class SubmitForm extends ActionForm { private String name; private String email; public String getEmail() { return email; } /** * @struts.validator * type="required" * * @struts.validator * type="email" */ public void setEmail(String email) { this.email = email; } public String getName() { return name; } public void setName(String name) { this.name = name; } } For more info on Struts specific tags, the reader is referred to the references at the end of this post. The struts-config.xml fragment generated from these tags is sketched below. In the following it will be shown how to invoke the webdoclet module to generate the configuration files from Ant and Eclipse, the latter using the JBoss-IDE plug-in. Nice tutorial about xdoclet and struts.. here is a tutorial about creating struts application with eclipse. Hi, I have some problem, how to generate tiles-def.xml using Xdoclet1.2. Please give me any suggestion regarding this issue ASAP.
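For reference, the @struts.action and @struts.form tags in the post above would lead webdoclet to emit a struts-config.xml fragment roughly like the one below. The package names of the action and form classes (nl.amis.package.*) are assumed for illustration, and the exact attribute ordering XDoclet generates may differ:

<form-beans>
  <form-bean name="submitForm"
             type="nl.amis.package.form.SubmitForm"/>
</form-beans>

<action-mappings>
  <action path="/submitData"
          type="nl.amis.package.action.SubmitAction"
          name="submitForm"
          scope="request"
          validate="false"
          parameter="action"
          input="pages/inputPage.jsp">
    <exception key="app.exception"
               type="nl.amis.package.exception.ApplicationException"
               path="pages/error.jsp"/>
    <forward name="success" path="pages/next.jsp"/>
  </action>
</action-mappings>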
https://technology.amis.nl/2004/09/14/struts-xdoclet-webdoclet-and-integration-with-eclipse/
CC-MAIN-2015-22
en
refinedweb
- NAME - SYNOPSIS - DESCRIPTION - ESCAPES - URL FINDER - ATTRIBUTES - ATTRIBUTE PARSING - TEXT PROCESSORS - SHORT TAGS - WHY ANOTHER BBCODE PARSER - WHY BBCODE? - TODO - REQUIREMENTS - SEE ALSO - BUGS - AUTHOR - CREDITS NAME Parse::BBCode SYNOPSIS use Parse::BBCode; my $p = Parse::BBCode->new({ tags => { url => 'url:<a href="%{link}A">%{parse}s</a>', i => '<i>%{parse}s</i>', b => '<b>%{parse}s</b>', noparse => '<pre>%{html}s</pre>', code => sub { my ($parser, $attr, $content, $attribute_fallback) = @_; if ($attr eq 'perl') { # use some syntax highlighter $content = highlight_perl($content); } else { $content = Parse::BBCode::escape_html($$content); } "<tt>$content</tt>" }, test => 'this is klingon: %{klingon}s', }, escapes => { klingon => sub { my ($parser, $tag, $text) = @_; return translate_into_klingon($text); }, }, } ); my $code = 'some [b]b code[/b]'; my $parsed = $p->render($code); DESCRIPTION If you set up the Parse::BBCode object without arguments, the default tags are loaded, and any text outside or inside of parseable tags will go through a default subroutine which escapes HTML and replaces newlines with <br> tags. If you need to change this you can set the options 'url_finder', 'text_processor' and 'linebreaks'. METHODS - new Constructor. Takes a hash reference with options as an argument. my $parser = Parse::BBCode->new({ tags => { url => ..., i => ..., }, escapes => { link => ..., }, close_open_tags => 1, # default 0 strict_attributes => 0, # default 1 direct_attributes => 1, # default 1 url_finder => 1, # default 0 smileys => 0, # default 0 linebreaks => 1, # default 1 }); - tags See "TAG DEFINITIONS" - escapes See "ESCAPES" - url_finder See "URL FINDER" - smileys If you want to replace smileys with an icon: my $parser = Parse::BBCode->new({ smileys => { base_url => '/your/url/to/icons/', icons => { qw/ :-) smile.png :-( sad.png / }, # sprintf format: # first argument url # second argument original text smiley (HTML escaped) format => '<img src="%s" alt="%s">', # if you need the url and text in a different order # see perldoc -f sprintf, e.g. # format => '<img alt="%2$s" src="%1$s">', }, }); This subroutine will be applied during the url_finder (or first, if url_finder is 0), and the rest will get processed by the text processor (default escaping html and replacing linebreaks). Smileys are only replaced if surrounded by whitespace or start/end of line/text. [b]bold<hr> :-)[/b] :-( In this example both smileys will be replaced. The first smiley is at the end of the text because the text inside [b][/b] is processed on its own. - linebreaks The default text processor replaces linebreaks with <br>\n. If you don't want this, set 'linebreaks' to 0. - text_processor If you need to add any customized text processing (like smiley parsing, for example), you can pass a subroutine here. Note that this subroutine also needs to do HTML escaping itself! - close_open_tags Default: 0 If set to true (1), it will close open tags at the end or before block tags. - strict_attributes Default: 1 - direct_attributes Default: 1 Normal tag syntax is: [tag=val1 attr2=val2 ...] If set to 0, tag syntax is [tag attr2=val2 ...] - attribute_quote You can change how the attribute values should be quoted. Default is a double quote (which is still optional): my $parser = Parse::BBCode->new( attribute_quote => '"', ... ); [tag="foo" attr="bar" attr2=baz]...[/tag] If you set it to single quote: my $parser = Parse::BBCode->new( attribute_quote => "'", ... ); [tag='foo' attr=bar attr2='baz']...[/tag] You can also set it to both: '". Then both quoting types are allowed: my $parser = Parse::BBCode->new( attribute_quote => q/'"/, ...
); [tag='foo' attr="bar" attr2=baz]...[/tag] - attribute_parser You can pass a subref that overrides the default attribute parsing. See "ATTRIBUTE PARSING" - strip_linebreaks Default: 1 Strips linebreaks at start/end of block tags - render Input: The text to parse, optional hashref Returns: the rendered text my $rendered = $parser->render($bbcode); You can pass an optional hashref with information you need inside of your self-defined rendering subs. For example if you want to display code in a codebox with a link to download the code you need the id of the article (in a forum) and the number of the code tag. my $parsed = $parser->render($bbcode, { article_id => 23 }); # in the rendering sub: my ($parser, $attr, $content, $attribute_fallback, $tag, $info) = @_; my $article_id = $parser->get_params->{article_id}; my $code_id = $tag->get_num; # write downloadlink like # download.pl?article_id=$article_id;code_id=$code_id # in front of the displayed code See examples/code_download.pl for a complete example of how to set up the rendering and how to extract the code from the tree. If run as a CGI script it will give you a dialogue to save the code into a file, including a reasonable default filename. - parse Input: The text to parse. Returns: the parsed tree (a Parse::BBCode::Tag object) my $tree = $parser->parse($bbcode); - render_tree Input: the parse tree Returns: The rendered text my $parsed = $parser->render_tree($tree); You can pass an optional hashref, for explanation see "render" - parse_attributes You can inherit from Parse::BBCode and define your own attribute parsing. See "ATTRIBUTE PARSING". - new_tag Returns a Parse::BBCode::Tag object. It just does: shift; Parse::BBCode::Tag->new(@_); If you want your own tag class, inherit from Parse::BBCode and let it return Parse::BBCode::YourTag->new TAG DEFINITIONS Here is an example of all the current definition possibilities: my $p = Parse::BBCode->new({ tags => { i => '<i>%s</i>', b => '<b>%{parse}s</b>', size => '<font size="%a">%{parse}s</font>', url => 'url:<a href="%{link}A">%{parse}s</a>', wikipedia => 'url:<a href="%{uri}A">%{parse}s</a>', noparse => '<pre>%{html}s</pre>', quote => 'block:<blockquote>%s</blockquote>', code => { code => sub { my ($parser, $attr, $content, $attribute_fallback) = @_; if ($attr eq 'perl') { # use some syntax highlighter $content = highlight_perl($$content); } else { $content = Parse::BBCode::escape_html($$content); } "<tt>$content</tt>" }, parse => 0, class => 'block', }, hr => { class => 'block', output => '<hr>', single => 1, }, }, } ); The following list explains the above tag definitions: %s i => '<i>%s</i>' [i] italic <html> [/i] turns out as <i> italic <html> </i> So %s stands for the tag content. By default, it is parsed itself, so that you can nest tags. %{parse}s b => '<b>%{parse}s</b>' [b] bold <html> [/b] turns out as <b> bold <html> </b> %{parse}s is the same as %s because 'parse' is the default. %a size => '<font size="%a">%{parse}s</font>' [size=7] some big text [/size] turns out as <font size="7"> some big text </font> So %a stands for the tag attribute. By default it will be HTML escaped. - url tag, %A, %{link}A url => 'url:<a href="%{link}a">%{parse}s</a>' the first thing you can see is the url: at the beginning - this defines the url tag as a tag with the class 'url', and urls must not be nested. So this class definition is mainly there to prevent generating wrong HTML. If you nest url tags only the outer one will be parsed. Another thing you can see is how to apply a special escape. 
The attribute defined with %{link}a is checked for a valid URL. javascript: will be filtered. [url=/foo.html]a link[/url] turns out as <a href="/foo.html">a link</a> Note that a tag like [url][/url] will turn out as <a href=""></a> In the cases where the attribute should be the same as the content you should use %A instead of %a which takes the content as the attribute as a fallback. You probably need this in all url-like tags: url => 'url:<a href="%{link}A">%{parse}s</a>', %{uri}A You might want to define your own urls, e.g. for wikipedia references: wikipedia => 'url:<a href="%{uri}A">%{parse}s</a>', %{uri}A will uri-encode the searched term: [wikipedia]Harold & Maude[/wikipedia] [wikipedia=Harold & Maude]a movie[/wikipedia] turns out as <a href="">Harold &amp; Maude</a> <a href="">a movie</a> - Don't parse tag content Sometimes you need to display verbatim bbcode. The simplest form would be a noparse tag: noparse => '<pre>%{html}s</pre>' [noparse] [some]unbalanced[/foo] [/noparse] With this definition the output would be <pre> [some]unbalanced[/foo] </pre> So inside a noparse tag you can write (almost) any invalid bbcode. The only exception is the noparse tag itself: [noparse] [some]unbalanced[/foo] [/noparse] [b]really bold[/b] [/noparse] Output: [some]unbalanced[/foo] <b>really bold</b> [/noparse] Because the noparse tag ends at the first closing tag, even if you have an additional opening noparse tag inside. The %{html}s defines that the content should be HTML escaped. If you don't want any escaping you can't say %s because the default is 'parse'. In this case you have to write %{noescape}. quote => 'block:<blockquote>%s</blockquote>', code => { code => sub { ... "<tt>$content</tt>" }, parse => 0, class => 'block', }, So instead of a string you define a hash reference with a 'code' key and a sub reference. The other key is parse which is set to 0 so that the tag content is not parsed before the sub is called. hr => { class => 'block', output => '<hr>', single => 1, }, The hr-Tag is a block tag (should not be inside inline tags), and it has no closing tag (option single) [hr] Output: <hr> ESCAPES my $p = Parse::BBCode->new({ ... escapes => { link => sub { }, }, });
The default (option direct_attributes=1): [foo=bar a=b c=d] [foo="text with space" a=b c=d] The parsed attribute structure will look like: [ ['bar'], ['a' => 'b'], ['c' => 'd'] ] Another bbcode variant doesn't use direct attributes: [foo a=b c=d] The resulting attribute structure will have an empty first element: [ [''], ['a' => 'b'], ['c' => 'd'] ] ATTRIBUTE PARSING If you have bbcode attributes that don't fit into the two standard syntaxes you can inherit from Parse::BBCode and overwrite the parse_attributes method, or you can pass an option attribute_parser contaning a subref. Example: [size=10]big[/size] [foo|bar|boo]footext[/foo] end The size tag should be parsed normally, the foo tag needs different parsing. sub parse_attributes { my ($self, %args) = @_; # $$text contains '|bar|boo]footext[/foo] end my $text = $args{text}; my $tagname = $args{tag}; # 'foo' if ($tagname eq 'foo') { # work on $$text # result should be something like: # $$text should contain 'footext[/foo] end' my $valid = 1; my @attr = ( [''], [1 => 'bar'], [2 => 'boo'] ); my $attr_string = '|bar|boo'; return ($valid, [@attr], $attr_string, ']'); } else { return shift->SUPER::parse_attributes(@_); } } my $parser = Parse::BBCode->new({ ... attribute_parser => \&parse_attributes, }); If the attributes are not valid, return 0, [ [''] ], '|bar|boo', ']' If you don't find a closing square bracket, return: 0, [ [''] ], '|bar|boo', '' TEXT PROCESSORS If you set url_finder and linebreaks to 1, the default text processor will work like this: my $post_processor = \&sub_for_escaping_HTML; $text = code_to_replace_urls($text, $post_processor); $text =~ s/\r?\n|\r/<br>\n/g; return $text; It will be applied to text outside of bbcode and inside of parseable bbcode tags (and not to code tags or other tags with unparsed content). If you need an additional post processor this usually cannot be done after the HTML escaping and url finding. So if you write a text processor it must do the HTML escaping itself. For example if you want to replace smileys with image tags you cannot simply do: $text =~ s/ :-\) /<img src=...>/g; because then the image tag would be HTML escaped after that. On the other hand it's usually not possible to do something like that *after* the HTML escaping since that might introduce text sequences that look like a smiley (or whatever you want to replace). So a simple example for a customized text processor would be: ... url_finder => 1, linebreaks => 1, text_processor => sub { # for $info hash description see render() method my ($text, $info) = @_; my $out = ''; while ($text =~ s/(.*)( |^)(:\))(?= |$)//mgs) { # match a smiley and anything before my ($pre, $sp, $smiley) = ($1, $2, $3); # escape text and add smiley image tag $out .= Parse::BBCode::escape_html($pre) . $sp . '<img src=...>'; } # leftover text $out .= Parse::BBCode::escape_html($text); return $out; }, This will result in: Replacing urls, applying your text_processor to the rest of the text and after that replace linebreaks with <br> tags. If you want to completely define the plain text processor yourself (ignoring the 'linebreak', 'url_finder', 'smileys' and 'text_processor' options) you define the special tag with the empty string: my $p = Parse::BBCode->new({ tags => { '' => sub { my ($parser, $attr, $content, $info) = @_; return frobnicate($content); # remember to escape HTML! }, ... SHORT TAGS It can be very convenient to have short tags like [foo://id]. 
This is not really a part of BBCode, but I consider it as quite similar, so I added it to this module. For example to link to threads, cpan modules or wikipedia articles: [thread://123] [thread://123|custom title] # can be implemented so that it links to thread 123 in the forum # and additionally fetch the thread title. [cpan://Module::Foo|some useful module] [wikipedia://Harold & Maude] You can define a short tag by adding the option short. The tag will work as a classic tag and short tag. If you only want to support the short version, set the option classic to 0. my $p = Parse::BBCode->new({ tags => { Parse::BBCode::HTML->defaults, wikipedia => { short => 1, output => '<a href="{uri}A">%{parse}s</a>', class => 'url', classic => 0, # don't support classic [wikipedia]...[/wikipedia] }, thread => { code => sub { my ($parser, $attr, $content, $attribute_fallback) = @_; my $id = $attribute_fallback; if ($id =~ tr/0-9//c) { return '[thread]' . encode_entities($id) . '[/thread]'; } my $name; if ($attr) { # custom title will be in $attr # [thread=123]custom title[/thread] # [thread://123|custom title] # already escaped $name = $$content; } return qq{<a href="/thread/$id">$name</a>}; }, short => 1, classic => 1, # default is 1 }, }, } ); WHY ANOTHER BBCODE PARSER I wrote this module because HTML::BBCode is not extendable (or I didn't see how) and BBCode::Parser seemed good at the first glance but has some issues, for example it says that. WHY BBCODE? Some forums and blogs prefer a kind of pseudo HTML for user comments. The arguments against bbcode is usually: "Why should people learn an additional markup language if they can just use HTML?" The problem is that many people don't know HTML. BBCode is often a bit shorter, for example if you have a code tag with an attribute that tells the parser what language the content is in. [code=perl]...[/code] <code language="perl">...</code>.8.0, or AUTHOR Tina Mueller CREDITS Thanks to Moritz Lenz for his suggestions about the implementation and the test cases. Viacheslav Tikhanovskii Sascha Kiefer This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.6.1 or, at your option, any later version of Perl 5 you may have available.
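Putting the documented pieces together, a minimal end-to-end script could look like the following sketch; the tag set is illustrative, and only the new() and render() calls documented above are used:

#!/usr/bin/perl
use strict;
use warnings;
use Parse::BBCode;

# Illustrative tag set built from the placeholder syntax described above.
my $parser = Parse::BBCode->new({
    tags => {
        b    => '<b>%{parse}s</b>',
        i    => '<i>%{parse}s</i>',
        url  => 'url:<a href="%{link}A">%{parse}s</a>',
        code => '<pre>%{html}s</pre>',
    },
});

my $bbcode = 'Visit [url=http://example.org][b]this[/b] page[/url] '
           . 'and paste [code]<not parsed>[/code] verbatim.';

print $parser->render($bbcode), "\n";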
https://metacpan.org/pod/Parse::BBCode
CC-MAIN-2015-22
en
refinedweb
Template Haskell From HaskellWiki Template Haskell is a GHC extension to Haskell that adds compile-time metaprogramming facilities. The original design can be found here:. It is included in GHC since version 6. This page hopes to be a more central and organized repository of TH related things. 1 What is Template Haskell? Template Haskell is an extension to Haskell 98 that allows you to do type-safe compile-time meta-programming, with Haskell both as the manipulating language and the language being manipulated. Intuitively Template Haskell provides new language features that allow us to convert back and forth between concrete syntax, i.e. what you would type when you write normal Haskell code, and abstract syntax trees. These abstract syntax trees are represented using Haskell datatypes and, at compile time, they can be manipulated by Haskell code. This allows you to reify (convert from concrete syntax to an abstract syntax tree) some code, transform it and splice it back in (convert back again), or even to produce completely new code and splice that in, while the compiler is compiling your module. For." (Note: These documents are from the Wayback machine because the originals disappeared. They're public documents on Google docs, which shouldn't require logging in. However, if you're asked to sign in to view them, you're running into a known Google bug. You can fix it by browsing to Google, presumably gaining a cookie in the process.) - A very short tutorial to understand the basics in 10 Minutes. - GHC Template Haskell documentation - Papers about Template Haskell - Template metaprogramming for Haskell, by Tim Sheard and Simon Peyton Jones, Oct 2002. [ps] - Template Haskell: A Report From The Field, by Ian Lynagh, May 2003. [ps] - Unrolling and Simplifying Expressions with Template Haskell, by Ian Lynagh, December 2002. [ps] - Automatic skeletons in Template Haskell, by Kevin Hammond, Jost Berthold and Rita Loogen, June 2003. [pdf] - Optimising Embedded DSLs using Template Haskell, by Sean Seefried, Manuel Chakravarty, Gabriele Keller, March 2004. [pdf] - Typing Template Haskell: Soft Types, by Ian Lynagh, August 2004. [ps] 4 Other useful resources - (2011) Greg Weber's blog post on Template Haskell and quasi-quoting in the context of Yesod. - (2012) Mike Ledger's tutorial on TemplateHaskell and QuasiQuotation for making an interpolated text QuasiQuoter. - separate examples. -ness ( does a one-way translation, for haskell-src-exts) What can reify see? When you use reify to give you information about a Name, GHC will tell you what it knows. But sometimes it doesn't know stuff. In particular - Imported things. When you reify an imported function, type constructor, class, etc, from (say) module M, GHC runs off to the interface file M.hi in which it deposited all the info it learned when compiling M. However, if you compiled M without optimisation (ie -O0, the default), and without -XTemplateHaskell, GHC tries to put as little info in the interface file as possible. (This is a possibly-misguided attempt to keep interface files small.) In particular, it may dump only the name and kind of a data type into M.hi, but not its constructors. - Under these circumstances you may reify a data type but get back no information about its data constructors or fields. Solution: compile M with - -O, or - -fno-omit-interface-pragmas (implied by -O), or - -XTemplateHaskell. - Function definitions. The VarI constructor of the Info type advertises that you might get back the source code for a function definition. 
In fact, GHC currently (7.4) always returns Nothing in this field. It's a bit awkward and no one has really needed it. 9.3 Why does runQ crash if I try to reify something? This program will fail with an error message when you run it: main = do info <- runQ (reify (mkName "Bool")) -- more hygenic is: (reify '. Instead, you can run the splice directly (ex. in ghci -XTemplateHaskell), as the following shows: GHCi> let tup = $(tupE $ take 4 $ cycle [ [| "hi" |] , [| 5 |] ]) GHCi> :type tup tup :: ([Char], Integer, [Char], Integer) GHCi> tup ("hi",5,"hi",5) GHCi> $(stringE . show =<< reify ''Int) "TyConI (DataD [] GHC.Types.Int [] [NormalC GHC.Types.I# [(NotStrict,ConT GHC.Prim.Int#)]] [])" Here's an email thread with more details. 10 Examples 10.1 Tuples 10.1.1.2 Apply a function to the n'th element tmap i n = do f <- newName "f" as <- replicateM n (newName "a") lamE [varP f, tupP (map varP as)] $ tupE [ if i == i' then [| $(varE f) $a |] else a | (a,i') <- map varE as `zip` [1..] ] Then tmap can be called as: > $(tmap 3 4) (+ 1) (1,2,3,4) (1,2,4,4) 10.1.3 Convert the first n elements of a list to a tuple This example creates a tuple by extracting elements.1.4 Un-nest tuples Convert nested tuples like (a,(b,(c,()))) into (a,b,c) given the length to generate. unNest n = do vs <- replicateM n (newName "x") lamE [foldr (\a b -> tupP [varP a , b]) (conP '() []) vs] (tupE (map varE vs)) 10.2 Marshall a datatype to and from Dynamic This approach is an example of using template haskell to delay typechecking to be able to abstract out the repeated calls to fromDynamic: data T = T Int String Double toT :: [Dynamic] -> Maybe T toT [a,b,c] = do a' <- fromDynamic a b' <- fromDynamic b c' <- fromDynamic c return (T a' b' c') toT _ = Nothing 10.3 Printf Build it using a command similar to: ghc --make Main.hs -o main Main.hs: {-# LANGUAGE TemplateHaskell #-} -- Import our template "printf" import PrintF (printf) -- The splice operator $ takes the Haskell source code -- generated at compile time by "printf" and splices it into -- the argument of "putStrLn". main = do putStrLn $ $(printf "Hello %s %%x%% %d %%x%%") "World" 12 PrintF.hs: {-# LANGUAGE TemplateHaskell #-} module PrintF where -- NB: printf needs to be in a separate module to the one where -- you intend to use it. -- Import some Template Haskell syntax import Language.Haskell.TH -- Possible string tokens: %d %s and literal strings data Format = D | S | L String deriving Show -- a poor man's tokenizer tokenize :: String -> [Format] tokenize [] = [] tokenize ('%':c:rest) | c == 'd' = D : tokenize rest | c == 's' = S : tokenize rest tokenize (s:str) = L (s:p) : tokenize rest -- so we don't get stuck on weird '%' where (p,rest) = span (/= '%') str -- generate argument list for the function args :: [Format] -> [PatQ] args fmt = concatMap (\(f,n) -> case f of L _ -> [] _ -> [varP n]) $ zip fmt names where names = [ mkName $ 'x' : show i | i <- [0..] ] -- generate body of the function body :: [Format] -> ExpQ body fmt = foldr (\ e e' -> infixApp e [| (++) |] e') (last exps) (init exps) where exps = [ case f of L s -> stringE s D -> appE [| show |] (varE n) S -> varE n | (f,n) <- zip fmt names ] names = [ mkName $ 'x' : show i | i <- [0..] ] -- glue the argument list and body together into a lambda -- this is what gets spliced into the haskell code at the call -- site of "printf" printf :: String -> Q Exp printf format = lamE (args fmt) (body fmt) where fmt = tokenize format 10.4.1 Limitations getopt (THArg pat) is only able to treat unary constructors. 
See the pattern-binding: It matches exactly a single VarP. 10.6 zipWithN Here $(zipn 3) = zipWith3 etc. import Language.Haskell.TH; import Control.Applicative; import Control.Monad zipn n = do vs <- replicateM n (newName "vs") [| \f -> $(lamE (map varP vs) [| getZipList $ $(foldl (\a b -> [| $a <*> $b |]) [| pure f |] (map (\v -> [| ZipList $(varE v) |]) vs)) |]) |]). 10.9 QuasiQuoters New in ghc-6.10 is -XQuasiQuotes, which allows one to extend GHC's syntax from library code. Quite a few examples are given in haskell-src-meta. 10.9.1 Similarity with splices Quasiquoters used in expression contexts (those using the quoteExp) behave to a first approximation like regular TH splices: simpleQQ = QuasiQuoter { quoteExp = stringE } -- in another module [$simpleQQ| a b c d |] == $(quoteExp simpleQQ " a b c d ") 10.10 Generating records which are variations of existing records This example uses syb to address some of the pain of dealing with the rather large data types. {-# LANGUAGE ScopedTypeVariables, TemplateHaskell #-} module A where import Language.Haskell.TH import Data.Generics addMaybes modName input = let rename :: GenericT rename = mkT $ \n -> if nameModule n == modName then mkName $ nameBase n ++ "_opt" else n addMaybe :: GenericM Q addMaybe = mkM $ \(n :: Name, s :: Strict, ty :: Type) -> do ty' <- [t| Maybe $(return ty) |] return (n,s,ty') in everywhere rename `fmap` everywhereM addMaybe input mkOptional :: Name -> Q Dec mkOptional n = do TyConI d <- reify n addMaybes (nameModule n) d mkOptional then generates a new data type with all Names in that module with an added suffix _opt. For example: data Foo = Foo { a,b,c,d,e :: Double, f :: Int } mapM mkOptional [''Foo] Generates something like data Foo_opt = Foo_opt {a_opt :: Maybe Double, ..... f_opt :: Maybe Int}
https://wiki.haskell.org/index.php?title=Template_Haskell&direction=next&oldid=55995
CC-MAIN-2015-22
en
refinedweb
Eixo Final project fabAcademy final project presentation from Marta Verde on Vimeo. Processes I used : Star spin test from Marta Verde on Vimeo. Structure 3mm laser cutted plywood & transparent methracylate: Interface Programming For generating the rotary items, I developed a program in Processing to be able to parametrize them, and export an vector PDF file that can be editable in any vector design software; to be laser cutted. I can also test the rotary animation with it. p5 recursive to laser cutter from Marta Verde on Vimeo. Motion system PLA 3d printed gears, shaft and stepper motor mounting: Gear 9mm by martaverdebaqueiro on Sketchfab Nema 17 Mount by martaverdebaqueiro on Sketchfab Light DIY light source (9 Super bright & directional white LEDS) with second-hand slide-projector lens, attached to 3d Printed tube: Electronics design Because I didnt know what was going to happen with the project at the end, and I made the electronics first, I used to differenced boards, just in case of using them away or separated. One drives the light source and the other one the motor. Each one has their own potentiometer to modulate the outputs (light frequency & motor speed and direction). In the light one, I made a "bridge" because I was connecting the potentiometer to the wrong pin. I also had intended to use 2 of them, but at the end I used one potentiometer and "two" LED pins (4 + 5 LED), so I take advantage of the whole board witouth milling it again because I had the pins free with male headers. I feeded the motor board with a 9V battery because I didnt had access to a regulable power supply to feed it with 12V to be able to regulate the amps (I had a switching one and gave me too many amps and the H-Bridges got super hot). The light one was feeded by 3x1.5V batteries. Light board BOM Motor board BOM Electronics Programming I coded it in Arduino IDE because I´ve using it for a long time and is fast to prototype. Light board, to adjust the amount of flickering of "two" leds (they were 9 connected to two pins): int ledPin1=7; int ledPin2=2; unsigned long timeLedChanged = millis(); unsigned long period = 1000; boolean ledOn = true; int strobe=100; void setup(){ pinMode(ledPin2, OUTPUT); pinMode(ledPin1, OUTPUT); } void loop(){ strobe = analogRead(A3)/10; if (strobe >= 100){ digitalWrite(ledPin2, LOW); digitalWrite(ledPin1, LOW); } else{ strobeLight(ledPin1, ledPin2, 1, strobe); } } void strobeLight(int output1, int output2, int ontime, int offtime){ if (millis() - timeLedChanged >= period) { ledOn = !ledOn; timeLedChanged = millis(); digitalWrite(output1, ledOn); digitalWrite(output2, ledOn); if (ledOn) { period = ontime; } else { period = offtime; } } } Motor board, it changes the speed of the motor in two directions: #include < Stepper.h > int potPin = 7; int potValue; const int stepsPerRevolution = 200; Stepper myStepper(stepsPerRevolution, 0, 1, 3, 4); int stepCount = 0; int motorSpeedLeft, motorSpeedRight; void setup() { } void loop() { potValue = analogRead(potPin); motorSpeedRight = map(potValue, 512, 1023, 0, 100); motorSpeedLeft = map(potValue, 0, 511, 100, 0); if (motorSpeedRight > 0 || motorSpeedLeft > 0) { //clockwise if(potValue > 512){ myStepper.setSpeed(motorSpeedRight); myStepper.step(stepsPerRevolution / 100); } //counterclockwise if(potValue < 511){ myStepper.setSpeed(motorSpeedLeft); myStepper.step(-stepsPerRevolution / 100); } } } Files You can see all the process development of the project here. 
Final costs: Super super thanks to: Attribution-NonCommercial-ShareAlike CC BY-NC-SA
http://archive.fabacademy.org/archives/2016/fablabmadridue/students/280/eixo.html
CC-MAIN-2021-49
en
refinedweb
Domino 4.0¶ 4.0.4 (November 2019)¶ Changes Fixed an issue where some execution events tracked by Domino could be logged or presented out of order. Fixed an issue where Domino executions with Spark integration could create Kubernetes resources in the wrong namespace. The console output panel for a Domino run will now surface and display more types of errors. 4.0.3 (October 2019)¶ New Features With the release of Domino v4.0.3, Datasets functionality has been added to the new platform infrastructure. A feature first introduced in V3.3 is now accessible with the updated architecture. Changes Fixed issue where the “modified” column in the Environments and Models table of the UI wouldn’t sort chronologically. Restores support for connecting to VPNs from Run containers Various minor bug fixes and stability improvements 4.0.2 (October 2019)¶ Changes Fixed issue where Model API’s timeout override was not taking effect Fixed issue where Control Center could become inaccessible when a job’s queue end time and run end time are the same time stamp. Various additional minor bug fixes and stability improvements 4.0.1 (October 2019)¶ Changes Multiple minor bug fixes and adjustments to the default configuration settings for new deployments 4.0.0 (September 2019)¶ Welcome to Domino 4! In addition to helpful new features for data scientists and project leaders, Domino 4 introduces a new architecture with all components running on Kubernetes. This change makes Domino easier to install, configure, monitor, and administer, and allows Domino to run in more environments than ever before. Visit admin.dominodatalab.com to learn about the technical design of Domino 4 and read guides for configuration and administration. Breaking changes Domino 4.0 fully sunsets support for V1 environments. Previously, V1 environments had been demarcated with an asterisk when listed in your project settings environments list. Typically, these should not be present for Domino deployments which originated after the release v3.0. Domino 4.0 fully sunsets support for legacy API endpoints. Only Model APIs are supported. Typically, legacy API Endpoints should not be present for Domino deployments which originated after the release v3.0. Many previous interfaces and options for managing Domino executors (e.g. the legacy “Dispatcher” interface) have been replaced with the introduction of the new Kubernetes compute grid. There are new dashboards for viewing Kubernetes infrastructure and active execution pods, and new options for configuring Hardware Tiers. Click to read more about Managing the compute grid in Domino 4. Domino 4.0 removes support for SSH access to a Run container. Domino 4.0 removes support for arbitrary Docker arguments for things like custom volume mounts. Domino 4.0 temporarily removes support for connecting to VPNs from Run containers. Support returns in 4.0.3. In Domino 4.0, user logins must use the new Keycloak authentication service. Any existing legacy LDAP integrations will need to have their configurations migrated to Keycloak. Domino 4.0 ships with a new collection of Domino 4.0 standard environments. Users who want to use NVIDIA GPUs in Domino 4.0 will need to switch their compute environments to the latest version as Domino now utilizes NVIDIA Docker. Note that these new standard environments do not support working with GPUs in Python 2. New features Domino now runs fully kubernetes native. Both front ends, central services and executors now run on the Domino kubernetes platform. 
Read more about the new infrastructure. Domino 4.0 adds a new Assets Portfolio that allows users to quickly discover and see key information about the data products they have access to in Domino, including Model APIs, Apps, Launchers, and Scheduled Jobs. A new Project Manager admin role is available. This role grants a user contributor access to projects owned by other users who are members of the same organization as the project manager. This allows the project manager to view those projects in the Projects Portfolio, discover their published assets in the Assets Portfolio, and view the projects’ contents as a contributor. Domino 4.0 introduces Project Goals. Goals represent outcomes or subtasks within projects. Project contributors can link files, Workspace sessions, Jobs, Apps, and Model APIs to goals, which show up on the goal card in the project overview. This provides a way to track all work related to a specific goal in the project, and can make navigating large and busy projects easier. New options are available in the Notifications and Workspace Settings sections of user Account Settings that allow for opt-in to email notifications or auto-termination for long-running Workspace sessions with a configurable duration. Admins also now have additional options for defining which Workspace sessions to treat as long-running, enforcing notification requirements for users, and sending additional global notifications about long-running sessions to admins. File size units To harmonize file size formats across Domino, and to align with common practices in user interfaces, starting with Domino 4.0, file sizes are displayed using base 10 metric prefixes (e.g. 1GB = 10^9 bytes) as opposed to base 2 binary prefixes (e.g. 1GiB = 2^30 bytes). For additional information on the differences between the units, please see. The change affects the file summary screen and other locations where file sizes are displayed in Domino. As a result, a user may observe a visual difference between the reported size in GB, MB, or KB of a file between Domino 3.6 (or earlier) and Domino 4.0, even though the absolute size of a file in bytes has not changed. Additional changes Visual styling and design for tables, buttons, links, accordion headers, breadcrumbs, and tab navigation have all been improved and made consistent across the Domino application. Run usage functionality is impaired and will be addressed in an upcoming Domino version.
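To make the file size unit change concrete, here is a small, purely illustrative calculation (the byte count is a made-up example, not a real Domino file) showing why the same file can display a smaller number under the old base-2 units even though its size in bytes is unchanged:

size_bytes = 5 * 10**9               # example: a 5,000,000,000-byte file

size_gb = size_bytes / 10**9         # 5.0   (base 10 "GB", as displayed from Domino 4.0 onwards)
size_gib = size_bytes / 2**30        # ~4.66 (base 2 "GiB"-style units used by earlier versions)

print(round(size_gb, 2), round(size_gib, 2))   # -> 5.0 4.66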
https://docs.dominodatalab.com/en/4.6.2/release_notes/4-0.html
CC-MAIN-2021-49
en
refinedweb
import formats from global area into formats in site We are currently using version 6.0.3, but are testing 6.7 beta. We have one site in Cascade Server, but most of our content is still in the global region. We have a number of XSLT formats in our site that do an xsl:import of formats in the global region. This worked fine in 6.0.3 but in 6.7 beta I get an error. For example, when our format imports the following from the global region: <xsl:import we get the following error message: "Invalid XSLT: Had IO Exception with stylesheet file:" Is there a way to specify that this is coming from the global region? I tried <xsl:import but that doesn't work either. Thanks, Timothy
1 Posted by Joel on 26 May, 2010 06:30 PM Hi Timothy, Unfortunately you cannot link to the Global Area, but if you were to attempt it the syntax would resemble the code below. Thanks! Joel closed this discussion on 26 May, 2010 06:30 PM.
https://help-archives.hannonhill.com/discussions/how-do-i/33-import-formats-from-global-area-into-formats-in-site
CC-MAIN-2021-49
en
refinedweb
Using the TinEye API to determine if an image is a stock photo Searching Now that you’ve signed up and purchased a bundle, you’re ready to start searching. These examples will use the Python library for the TinEye API. The key used is the sandbox key, so you can copy and paste these examples to try them without using up your own searches. First, we’ll need to install the Python library. From a terminal, execute the following commands: python -m pip install pytineye The following Python code will perform a search: from pytineye import TinEyeAPIRequest tineye_api_key = '6mm60lsCNIB,FwOWjJqA80QZHh9BMwc-ber4u=t^' api = TinEyeAPIRequest('', tineye_api_key) results = api.search_url(url='') The results variable will contain matches for the requested image from all types of sites, because we didn’t specify that we were only interested in stock photo sites. In the results, a match tagged as “stock” comes from a site that licenses stock photographs. Matches tagged as belonging to a “collection” come from sites with large collections of images (for example, flickr.com or wikipedia.com) that may provide more information about the origin and licensing terms of images. In order to limit the results to only stock photo sites, we can add the tags = 'stock' argument to the request: stock_results = api.search_url(url='', tags='stock') Now, the stock_results variable contains only matches from sites that TinEye has labelled as stock photo sites. The example above shows how to search for a single image. If you need to search for a large batch of images, you will want to write a script to loop through your image collection. Here’s a simple Python example that prints out whether an image was found on a stock site or not, for each image in a list of images: from pytineye import TinEyeAPIRequest # This is the TinEye API sandbox key # They will always search for the same image, regardless of which images you provide tineye_api_key = '6mm60lsCNIB,FwOWjJqA80QZHh9BMwc-ber4u=t^' api = TinEyeAPIRequest('', tineye_api_key) # Note that these are dummy URLs # Please replace them with real URLs to make this example work image_url_list = ('', '',) for image_url in image_url_list: try: stock_results = api.search_url(url=image_url, tags='stock', limit=10) if stock_results.stats['total_filtered_results'] > 0: print(image_url + ' has matches on stock sites!') except Exception: print("There was an error searching for " + image_url) Interpreting your results To understand the results returned by the API, it’s important to understand some of the terminology that we use to describe matches. A website might have more than one matching version of an image. For example, a website might have a matching thumbnail image, a matching full-sized image and a matching banner image. These will all be separate matches. A backlink is a specific occurence of a match on a webpage. For example, if a website uses the matching thumbnail on three different pages, the thumbnail match will have three backlinks, one for each page. With that in mind, here’s part of the raw JSON response from one of the above API requests. 
You won’t need to interact with the JSON if you’re using one of our libraries but the JSON is helpful in illustrating the results you will get from the TinEye API: { "stats": { "total_results": 8536, "total_collection": 97, "total_filtered_results": 4, "total_stock": 4, "query_time": "0.33", "total_backlinks": 32372, "timestamp": "1539633760.50" }, "code": 200, "results": { "matches": [ { "image_url": "", "height": 204, "score": 70.588, "width": 240, "size": 48960, "domain": "foter.com", "overlay": "overlay/dca08fc6b2ec4b9e04f94a4e29223f6af3dd6555/a53345e78d9f4162f888d4dac8f5915833a5647d7f1161cd82e9da2013b05eb3?m21=-0.00118293&m22=1.46985&m23=0.320615&m11=1.46985&m13=-0.0417408&m12=0.00118293", "tags": [ "stock" ], "filesize": 11991, "format": "JPEG", "backlinks": [ { "backlink": "", "url": "", "crawl_date": "2013-08-01" } ], "query_hash": "dca08fc6b2ec4b9e04f94a4e29223f6af3dd6555" }, …. }, "messages": [] } The response includes statistics (“stats”) for the entire request, such as how many results and how many backlinks it returned and how long the query took. It also includes a list of matches. Each match gives statistics about the matching image including: - its height, width and size - the domain on which it was found - a link to the matching image on our server - an overlay comparing your search image with the image we found - tags that indicate whether this match is on a stock site or a collection site - a list of backlinks Each backlink contains the original url at which the image file was hosted, a link to the page on which the image was found and the date on which we crawled it. Please note that if an image has no matches listed as coming from “stock” sites, that doesn’t guarantee that it is not a stock photo; while we strive to crawl as much of the web as possible, there may be stock photo sites that we don’t have tagged or results that we haven’t crawled yet.
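To tie the pieces together, here is a short sketch that walks a response of the shape shown above and prints where each stock match was found. It assumes the response is already available as a JSON string in a variable called raw_json (that name is purely illustrative); only fields that appear in the sample response are used:

import json

response = json.loads(raw_json)

for match in response['results']['matches']:
    if 'stock' in match.get('tags', []):
        print('Stock match on', match['domain'], 'with score', match['score'])
        for backlink in match['backlinks']:
            print('  image file:', backlink['url'])
            print('  found on page:', backlink['backlink'], 'crawled', backlink['crawl_date'])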
https://help.tineye.com/article/182-using-the-tineye-api-to-determine-if-an-image-is-a-stock-photo
CC-MAIN-2021-49
en
refinedweb
Angular: Set Base URL Dynamically Recently one of my Angular projects had a requirement to have multiple instances of websites hosted on the same domain with different base URLs. For example, the application URL is and now you want to create multiple sites with different base URL: and so on. With the change in base URL, the API path also changed from /api/GetCall to /client1/api/GetCall. If we want to change the base URL, we have two problems to solve: 1. Handle Angular routing to accommodate base URL. 2. Handle Http service requests and append base URL to each request. 1. Fix Routing You just need to put a <base> tag in the head of the index or Layout page and specify a base URL like this: <base href="/client1/" /> So if you had an Angular route defined like this: { path: 'home', component: HomeComponent } this route would become /client1/home if <base href="/client1/" /> is present in the head section of the page. 2. Append Base URL to HTTP requests We have an API relative URL which starts at /api/products. In this section, we want to make sure that when we make a call to API, the base URL is appended to the API URL, so in this case, the API URL becomes /client1/api/products. /api/products --> /client1/api/products There are two popular ways of achieving this, one, using HttpInterceptors and, two, using dependency injection. Using Dependency Injection: Register a base URL provider in the module so it is available everywhere in the application: providers: [ { provide: 'BASE_URL', useFactory: getBaseUrl } ] Provide factory method which gets the base URL from <base> element: export function getBaseUrl() { return document.getElementsByTagName('base')[0].href; } Now you can get the base URL injected and add it to URL: export class FetchProductsComponent { public forecasts: IProduct[]; constructor(http: Http, @Inject('BASE_URL') baseUrl: string) { http.get(baseUrl + 'api/products').subscribe(result => { this.products = result.json() as IProduct[]; }, error => console.error(error)); } } Using HttpInterceptors: Since HttpInterceptors were introduced in Angular 4.3, this will not work on earlier versions. HttpInterceptor intercepts requests made using HttpClient, if you try to make requests with old Http class, interceptor won't hit. To create an interceptor, create an Injectable class which implements HttpInterceptor. import { Injectable } from '@angular/core'; import { HttpEvent, HttpInterceptor, HttpHandler, HttpRequest } from '@angular/common/http'; import { Observable } from 'rxjs/Observable'; @Injectable() export class ApiInterceptor implements HttpInterceptor { intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { const baseUrl = document.getElementsByTagName('base')[0].href; const apiReq = req.clone({ url: `${baseUrl}${req.url}` }); return next.handle(apiReq); } } Register interceptor as a provider: { provide: HTTP_INTERCEPTORS, useClass: ApiInterceptor, multi: true } Now whenever you make any Http request using HttpClient, ApiInterceptor will be invoked and base URL will be added to the API URL. When you have two options to append base URL to Http requests, which one to choose? I liked the interceptor way as you do not need to inject it in all the places where you are making Http calls, just write one interceptor and done, it will intercept all the Http calls made using HttpClient. Have any questions or suggestions? Please drop a message in the comments section below.
http://www.projectcodify.com/angular-set-base-url-dynamically
CC-MAIN-2021-49
en
refinedweb
<ol id="log"></ol>

ol { list-style-type: none; padding: 0; }

const { Observable, Observer, Subject } = rxjs;

const logEl = document.getElementById('log');
function log(message) {
  const li = document.createElement('li');
  const text = document.createTextNode(message);
  li.appendChild(text);
  logEl.appendChild(li);
}

const tapRefCount = (onChange) => (source) => {
  let refCount = 0;

  // mute the operator if it has nothing to do
  if (typeof onChange !== 'function') {
    return source;
  }

  // mute errors from side-effects
  const safeOnChange = (refCount, prevRefCount) => {
    try {
      onChange(refCount, prevRefCount);
    } catch (e) {
      console.error(e);
    }
  };

  // spy on subscribe
  return Observable.create((observer) => {
    const subscription = source.subscribe(observer);
    const prevRefCount = refCount;
    refCount++;
    safeOnChange(refCount, prevRefCount);

    // spy on unsubscribe
    return () => {
      subscription.unsubscribe();
      const prevRefCount = refCount;
      refCount--;
      safeOnChange(refCount, prevRefCount);
    };
  });
};

console.clear();

const subject = new Subject();
const stream = subject.pipe(
  tapRefCount((refCount, prevRefCount) => {
    log(JSON.stringify({ refCount, prevRefCount }));
  })
);

let subIntCount = 0;
const subscriptions = [];
const subInt = setInterval(() => {
  let subscription;
  subIntCount++;

  // stop
  if (subIntCount == 10) {
    while(subscription = subscriptions.pop()) {
      log('[unsubscribe]');
      subscription.unsubscribe();
    }
    clearInterval(subInt);
    log('[done]');
    return; // don't fall through and subscribe again after cleanup
  }

  // unsubscribe... maybe
  if (subscriptions.length > 0 && Math.random() > 0.5) {
    log('[unsubscribe]');
    subscription = subscriptions.pop();
    subscription.unsubscribe();
    return;
  }

  // subscribe
  log('[subscribe]');
  subscription = stream.subscribe();
  subscriptions.push(subscription);
}, 1000);
https://codepen.io/bygrace1986/pen/zajeom/
CC-MAIN-2021-49
en
refinedweb
NAME
beep, flash - curses bell and screen flash routines
SYNOPSIS
#include <curses.h>
int beep(void);
int flash(void);
DESCRIPTION
The beep routine sounds the terminal's audible alarm, if it has one; if that is not possible, it flashes the screen (visible bell) instead. The flash routine flashes the screen and, if that is not possible, sounds the audible alarm instead. If neither alert is available, nothing happens.
RETURN VALUE
These routines return OK if they succeed in beeping or flashing, ERR otherwise.
EXTENSIONS
SVr4's beep and flash routines always returned OK, so it was not possible to tell when the beep or flash failed.
PORTABILITY
These functions are described in the XSI Curses standard, Issue 4. Like SVr4, it specifies that they always return OK.
SEE ALSO
ncurses(3NCURSES)
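If you just want to try these routines from an interactive session rather than a C program, the Python curses binding (which wraps this same library) exposes equivalent calls; a minimal sketch, not part of the manual above:

import curses
import time

def main(stdscr):
    stdscr.addstr(0, 0, "beep, then flash...")
    stdscr.refresh()
    curses.beep()     # audible alarm, if the terminal has one
    time.sleep(1)
    curses.flash()    # visible bell (screen flash), if supported
    time.sleep(1)

curses.wrapper(main)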
https://linux.fm4dd.com/en/man3/flash.htm
CC-MAIN-2021-49
en
refinedweb
What's New In Pylint 1.7¶ - Release 1.7 - Date 2017-04-13 Summary -- Release highlights¶ None yet. New checkers¶ single-string-used-for-slotscheck was added, which is used whenever a class is using a single string as a slot value. While this is technically not a problem per se, it might trip users when manipulating the slots value as an iterable, which would in turn iterate over characters of the slot value. In order to be more straight-forward, always try to use a container such as a list or a tuple for defining slot values. We added a new check, literal-comparison, which is used whenever pylint can detect a comparison to a literal. This is usually not what we want and, potentially, error prone. For instance, in the given example, the first string comparison returns true, since smaller strings are interned by the interpreter, while for larger ones, it will return False: mystring = "ok" if mystring is "ok": # Returns true # do stuff mystring = "a" * 1000 if mystring is ("a" * 1000): # This will return False # do stuff Instead of using the isoperator, you should use the ==operator for this use case. We added a new refactoring message, consider-merging-isinstance, which is emitted whenever we can detect that consecutive isinstance calls can be merged together. For instance, in this example, we can merge the first two isinstance calls: # $ cat a.py if isinstance(x, int) or isinstance(x, float): pass if isinstance(x, (int, float)) or isinstance(x, str): pass # $ pylint a.py # R: 1, 0: Consider merging these isinstance calls to isinstance(x, (float, int)) (consider-merging-isinstance) # R: 3, 0: Consider merging these isinstance calls to isinstance(x, (int, float, str)) (consider-merging-isinstance) A new error check was added, invalid-metaclass, which is used whenever pylint can detect that a given class is using a metaclass which is invalid for the purpose of the class. This usually might indicate a problem in the code, rather than something done on purpose. # Needs to inherit from *type* in order to be valid class SomeClass(object): ... class MyClass(metaclass=SomeClass): pass A new warning was added, useless-super-delegation, which is used whenever we can detect that an overridden method is useless, relying on super() delegation to do the same thing as another method from the MRO. For instance, in this example, the first two methods are useless, since they do the exact same thing as the methods from the base classes, while the next two methods are not, since they do some extra operations with the passed arguments. class Impl(Base): def __init__(self, param1, param2): super(Impl, self).__init__(param1, param2) def useless(self, first, second): return super(Impl, self).useless(first, second) def not_useless(self, first, **kwargs): debug = kwargs.pop('debug', False) if debug: ... return super(Impl, self).not_useless(first, **kwargs) def not_useless_1(self, first, *args): return super(Impl, self).not_useless_1(first + some_value, *args) A new warning was added, len-as-condition, which is used whenever we detect that a condition uses len(SEQUENCE)incorrectly. Instead one could use if SEQUENCEor if not SEQUENCE. For instance, all of the examples below: if len(S): pass if not len(S): pass if len(S) > 0: pass if len(S) != 0: pass if len(S) == 0: pass can be written in a more natural way: if S: pass if not S: pass See for more information. A new extension was added, emptystring.pywhich detects whenever we detect comparisons to empty string constants. This extension is disabled by default. 
For instance, the examples below: if S != "": pass if S == '': pass can be written in a more natural way: if S: pass if not S: pass An exception to this is when empty string is an allowed value whose meaning is treated differently than None. For example the meaning could be user selected no additional options vs. user has not made their selection yet! You can activate this checker by adding the line: load-plugins=pylint.extensions.emptystring to the MASTERsection of your .pylintrcor using the command: $ pylint a.py --load-plugins=pylint.extensions.emptystring A new extension was added, comparetozero.pywhich detects whenever we compare integers to zero. This extension is disabled by default. For instance, the examples below: if X != 0: pass if X == 0: pass can be written in a more natural way: if X: pass if not X: pass An exception to this is when zero is an allowed value whose meaning is treated differently than None. For example the meaning could be Nonemeans no limit, while 0means the limit it zero! You can activate this checker by adding the line: load-plugins=pylint.extensions.comparetozero to the MASTERsection of your .pylintrcor using the command: $ pylint a.py --load-plugins=pylint.extensions.comparetozero We've added new error conditions for bad-super-callwhich now detect the usage of super(type(self), self)and super(self.__class__, self)patterns. These can lead to recursion loop in derived classes. The problem is visible only if you override a class that uses these incorrect invocations of super(). For instance, Derived.__init__()will correctly call Base.__init__. At this point type(self)will be equal to Derivedand the call again goes to Base.__init__and we enter a recursion loop. class Base(object): def __init__(self, param1, param2): super(type(self), self).__init__(param1, param2) class Derived(Base): def __init__(self, param1, param2): super(Derived, self).__init__(param1, param2) The warnings missing-returns-docand missing-yields-dochave each been replaced with two new warnings - missing-[return|yield]-docand missing-[return|yield]-type-doc. Having these as separate warnings allows the user to choose whether their documentation style requires text descriptions of function return/yield, specification of return/yield types, or both. # This will raise missing-return-type-doc but not missing-return-doc def my_sphinx_style_func(self): """This is a Sphinx-style docstring. :returns: Always False """ return False # This will raise missing-return-doc but not missing-return-type-doc def my_google_style_func(self): """This is a Google-style docstring. Returns: bool: """ return False A new refactoring check was added, redefined-argument-from-local, which is emitted when pylint can detect than a function argument is redefined locally in some potential error prone cases. For instance, in the following piece of code, we have a bug, since the check will never return True, given the fact that we are comparing the same object to its attributes. def test(resource): for resource in resources: # The ``for`` is reusing ``resource``, which means that the following # ``resource`` is not what we wanted to check against. if resource.resource_type == resource: call_resource(resource) Other places where this check looks are with statement name bindings and except handler's name binding. A new refactoring check was added, no-else-return, which is emitted when pylint encounters an else following a chain of ifs, all of them containing a return statement. 
def foo1(x, y, z): if x: return y else: # This is unnecessary here. return z We could fix it deleting the elsestatement. def foo1(x, y, z): if x: return y return z A new Python 3 check was added, eq-without-hash, which enforces classes that implement __eq__also implement __hash__. The behavior around classes which implement __eq__but not __hash__changed in Python 3; in Python 2 such classes would get object.__hash__as their default implementation. In Python 3, aforementioned classes get Noneas their implementation thus making them unhashable. class JustEq(object): def __init__(self, x): self.x = x def __eq__(self, other): return self.x == other.x class Neither(object): def __init__(self, x): self.x = x class HashAndEq(object): def __init__(self, x): self.x = x def __eq__(self, other): return self.x == other.x def __hash__(self): return hash(self.x) {Neither(1), Neither(2)} # OK in Python 2 and Python 3 {HashAndEq(1), HashAndEq(2)} # OK in Python 2 and Python 3 {JustEq(1), JustEq(2)} # Works in Python 2, throws in Python 3 In general, this is a poor practice which motivated the behavior change. as_set = {JustEq(1), JustEq(2)} print(JustEq(1) in as_set) # prints False print(JustEq(1) in list(as_set)) # prints True In order to fix this error and avoid behavior differences between Python 2 and Python 3, classes should either explicitly set __hash__to Noneor implement a hashing function. class JustEq(object): def __init__(self, x): self.x = x def __eq__(self, other): return self.x == other.x __hash__ = None {JustEq(1), JustEq(2)} # Now throws an exception in both Python 2 and Python 3. 3 new Python 3 checkers were added, div-method, idiv-methodand rdiv-method. The magic methods __div__and __idiv__have been phased out in Python 3 in favor of __truediv__. Classes implementing __div__that still need to be used from Python 2 code not using from __future__ import divisionshould implement __truediv__and alias __div__to that implementation. from __future__ import division class DivisibleThing(object): def __init__(self, x): self.x = x def __truediv__(self, other): return DivisibleThing(self.x / other.x) __div__ = __truediv__ A new Python 3 checker was added to warn about accessing the messageattribute on Exceptions. The message attribute was deprecated in Python 2.7 and was removed in Python 3. See for more information. try: raise Exception("Oh No!!") except Exception as e: print(e.message) Instead of relying on the messageattribute, you should explicitly cast the exception to a string: try: raise Exception("Oh No!!") except Exception as e: print(str(e)) A new Python 3 checker was added to warn about using encodeor decodeon strings with non-text codecs. This check also checks calls to openwith the keyword argument encoding. See for more information. 'hello world'.encode('hex') Instead of using the encodemethod for non-text codecs use the codecsmodule. import codecs codecs.encode('hello world', 'hex') A new warning was added, overlapping-except, which is emitted when an except handler treats two exceptions which are overlapping. This means that one exception is an ancestor of the other one or it is just an alias. For example, in Python 3.3+, IOError is an alias for OSError. In addition, socket.error is an alias for OSError. The intention is to find cases like the following: import socket try: pass except (ConnectionError, IOError, OSError, socket.error): pass A new Python 3 checker was added to warn about accessing sys.maxint. This attribute was removed in Python 3 in favor of sys.maxsize. 
import sys print(sys.maxint) Instead of using sys.maxint, use sys.maxsize import sys print(sys.maxsize) A new Python 3 checker was added to warn about importing modules that have either moved or been removed from the standard library. One of the major undertakings with Python 3 was a reorganization of the standard library to remove old or supplanted modules and reorganize some of the existing modules. As a result, roughly 100 modules that exist in Python 2 no longer exist in Python 3. See and for more information. For suggestions on how to handle this, see or. from cStringIO import StringIO Instead of directly importing the deprecated module, either use six.movesor a conditional import. from six.moves import cStringIO as StringIO if sys.version_info[0] >= 3: from io import StringIO else: from cStringIO import StringIO This checker will assume any imports that happen within a conditional or a try/exceptblock are valid. A new Python 3 checker was added to warn about accessing deprecated functions on the string module. Python 3 removed functions that were duplicated from the builtin strclass. See for more information. import string print(string.upper('hello world!')) Instead of using string.upper, call the uppermethod directly on the string object. "hello world!".upper() A new Python 3 checker was added to warn about calling str.translatewith the removed deletecharsparameter. str.translateis frequently used as a way to remove characters from a string. 'hello world'.translate(None, 'low') Unfortunately, there is not an idiomatic way of writing this call in a 2and3 compatible way. If this code is not in the critical path for your application and the use of translatewas a premature optimization, consider using re.subinstead: import re chars_to_remove = re.compile('[low]') chars_to_remove.sub('', 'hello world') If this code is in your critical path and must be as fast as possible, consider declaring a helper method that varies based upon Python version. if six.PY3: def _remove_characters(text, deletechars): return text.translate({ord(x): None for x in deletechars}) else: def _remove_characters(text, deletechars): return text.translate(None, deletechars) A new refactoring check was added, consider-using-ternary, which is emitted when pylint encounters constructs which were used to emulate ternary statement before it was introduced in Python 2.5. value = condition and truth_value or false_value Warning can be fixed by using standard ternary construct: value = truth_value if condition else false_value A new refactoring check was added, trailing-comma-tuple, which is emitted when pylint finds an one-element tuple, created by a stray comma. This can suggest a potential problem in the code and it is recommended to use parentheses in order to emphasise the creation of a tuple, rather than relying on the comma itself. The warning is emitted for such a construct: a = 1, The warning can be fixed by adding parentheses: a = (1, ) Two new check were added for detecting an unsupported operation over an instance, unsupported-assignment-operationand unsupported-delete-operation. The first one is emitted whenever an object does not support item assignment, while the second is emitted when an object does not support item deletion: class A: pass instance = A() instance[4] = 4 # unsupported-assignment-operation del instance[4] # unsupported-delete-operation A new check was added, relative-beyond-top-level, which is emitted when a relative import tries to access too many levels in the current package. 
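As an illustration of relative-beyond-top-level, consider a hypothetical two-level package (all names below are made up):

# Layout:
#   mypkg/
#       __init__.py
#       utils.py
#       sub/
#           __init__.py
#           module.py

# Inside mypkg/sub/module.py
from ..utils import helper      # OK: resolves to mypkg.utils
from ...utils import helper     # relative-beyond-top-level: climbs above mypkg

The second form would also fail at runtime with "attempted relative import beyond top-level package", which is exactly the situation the new check flags ahead of time.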
A new check was added, trailing-newlines, which is emitted when a file has trailing new lines. invalid-length-returnedcheck was added, which is emitted when a __len__implementation does not return a non-negative integer. There is a new extension, pylint.extensions.mccabe, which can be used for computing the McCabe complexity of classes and functions. You can enable this extension through --load-plugins=pylint.extensions.mccabe A new check was added, used-prior-global-declaration. This is emitted when a name is used prior a global declaration, resulting in a SyntaxError in Python 3.6. A new message was added, assign-to-new-keyword. This is emitted when used name is known to become a keyword in future Python release. Assignments to keywords would result in SyntaxErrorafter switching to newer interpreter version. # While it's correct in Python 2.x, it raises a SyntaxError in Python 3.x True = 1 False = 0 # Same as above, but it'll be a SyntaxError starting from Python 3.7 async = "async" await = "await Other Changes¶ We don't emit by default no-memberif. Namespace packages are now supported by pylint. This includes both explicit namespace packages and implicit namespace packages, supported in Python 3 through PEP 420. A new option was added, --analyse-fallback-block. This can be used to support both Python 2 and 3 compatible import block code, which means that the import block might have code that exists only in one or another interpreter, leading to false positives when analysed. By default, this is false, you can enable the analysis for both branches using this flag. ignored-argument-namesoption. A new option was added, redefining-builtins-modules, for controlling the modules which can redefine builtins, such as six.moves and future.builtins. A new option was added, ignore-patterns, which is used for building a ignore list of directories and files matching the regex patterns, similar to the ignoreoption. The reports are now disabled by default, as well as the information category warnings. arguments-differcheck was rewritten to take in consideration keyword only parameters and variadics. Now it also complains about losing or adding capabilities to a method, by introducing positional or keyword variadics. For instance, pylint now complains about these cases: class Parent(object): def foo(self, first, second): ... def bar(self, **kwargs): ... def baz(self, *, first): ... class Child(Parent): # Why subclassing in the first place? def foo(self, *args, **kwargs): # mutate args or kwargs. super(Child, self).foo(*args, **kwargs) def bar(self, first=None, second=None, **kwargs): # The overridden method adds two new parameters, # which can also be passed as positional arguments, # breaking the contract of the parent's method. def baz(self, first): # Not keyword-only redefined-outer-nameis now also emitted when a nested loop's target variable is the same as an outer loop. for i, j in [(1, 2), (3, 4)]: for j in range(i): print(j) relax character limit for method and function names that starts with _. This will let people to use longer descriptive names for methods and functions with a shorter scope (considered as private). The same idea applies to variable names, only with an inverse rule: you want long descriptive names for variables with bigger scope, like globals. Add InvalidMessageErrorexception class and replace assertin pylint.utils with raise InvalidMessageError. 
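Relating this back to the --analyse-fallback-block option described above: the option targets Python 2/3 compatibility import blocks like the sketch below, where the except ImportError branch (the fallback block) is skipped by default and only analysed when the flag is enabled.

try:
    # Python 3
    from urllib.request import urlopen
except ImportError:
    # Python 2 fallback block
    from urllib2 import urlopen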
UnknownMessageError(formerly UnknownMessage) and EmptyReportError(formerly EmptyReport) are now provided by the new pylint.exceptionssubmodule instead of pylint.utilsas before. We now support inline comments for comma separated values in the configurations For instance, you can now use the # sign for having comments inside comma separated values, as seen below: disable=no-member, # Don't care about it for now bad-indentation, # No need for this import-error Of course, interweaving comments with values is also working: disable=no-member, # Don't care about it for now bad-indentation # No need for this This works by setting the inline comment prefixes accordingly. Added epytext docstring support to the docparams extension. We added support for providing hints when not finding a missing member. For example, given the following code, it should be obvious that the programmer intended to use the class Contribution: def __init__(self, name, email, date): self.name = name self.mail = mail self.date = date for c in contributions: print(c.email) # Oups pylint will now warn that there is a chance of having a typo, suggesting new names that could be used instead. $ pylint a.py E: 8,10: Instance of 'Contribution' has no 'email' member; maybe 'mail'? The behaviour is controlled through the --missing-member-hintoption. Other options that come with this change are --missing-member-max-choicesfor choosing the total number of choices that should be picked in this situation and --missing-member-hint-distance, which specifies a metric for computing the distance between the names (this is based on Levenshtein distance, which means the lower the number, the more pickier the algorithm will be). PyLinter.should_analyze_filehas a new parameter, is_argument, which specifies if the given path is a pylint argument or not. should_analyze_fileis called whenever pylint tries to determine if a file should be analyzed, defaulting to files with the .pyextension, but this function gets called only in the case where the said file is not passed as a command line argument to pylint. This usually means that pylint will analyze a file, even if that file has a different extension, as long as the file was explicitly passed at command line. Since should_analyze_filecannot be overridden to handle all the cases, the check for the provenience of files was moved into should_analyze_file. This means we now can write something similar with this example, for ignoring every file respecting the desired property, disregarding the provenience of the file, being it a file passed as CLI argument or part of a package. from pylint.lint import Run, PyLinter class CustomPyLinter(PyLinter): def should_analyze_file(self, modname, path, is_argument=False): if respect_condition(path): return False return super().should_analyze_file(modname, path, is_argument=is_argument) class CustomRun(Run): LinterClass = CustomPyLinter CustomRun(sys.argv[1:]) Imports aliased with underscore are skipped when checking for unused imports. bad-builtinand redefined-variable-typeare now extensions, being disabled by default. They can be enabled through: --load-plugins=pylint.extensions.redefined_variable_type,pylint.extensions.bad_builtin Imports checker supports new switch allow-wildcard-with-allwhich disables warning on wildcard import when imported module defines __all__variable. differing-param-docis now used for the differing part of the old missing-param-doc, and differing-type-docfor the differing part of the old missing-type-doc. 
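To illustrate the new differing-param-doc message (emitted by the docparams extension), here is a sketch of a docstring that documents a parameter name which does not exist in the signature; the names are illustrative:

def greet(name):
    """Greet someone.

    :param user: the person to greet
    :type user: str
    """
    return 'Hello ' + name

# 'user' is documented but is not an argument of greet(), so pylint reports
# differing-param-doc (and differing-type-doc for the :type: line) rather than
# the old missing-param-doc / missing-type-doc messages.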
Bug fixes¶ Fix a false positive of redundant-returns-doc, occurred when the documented function was using yield instead of return. Fix a false positive of missing-param-docand missing-type-doc, occurred when a class docstring uses the For the parameters, seemagic string but the class __init__docstring does not, or vice versa. Added proper exception type inference for missing-raises-doc. Now: def my_func(): """"My function.""" ex = ValueError('foo') raise ex will properly be flagged for missing documentation of :raises ValueError:instead of :raises ex:, among other scenarios. Fix false positives of missing-[raises|params|type]-docdue to not recognizing valid keyword synonyms supported by Sphinx. More thorough validation in MessagesStore.register_messages()to detect conflicts between a new message and any existing message id, symbol, or old_names. We now support having plugins that shares the same name and with each one providing options. A plugin can be logically split into multiple classes, each class providing certain capabilities, all of them being tied under the same name. But when two or more such classes are also adding options, then pylint crashed, since it already added the first encountered section. Now, these should work as expected. from pylint.checkers import BaseChecker class DummyPlugin1(BaseChecker): name = 'dummy_plugin' msgs = {'I9061': ('Dummy short desc 01', 'dummy-message-01', 'Dummy long desc')} options = ( ('dummy_option_1', { 'type': 'string', 'metavar': '<string>', 'help': 'Dummy option 1', }), ) class DummyPlugin2(BaseChecker): name = 'dummy_plugin' msgs = {'I9060': ('Dummy short desc 02', 'dummy-message-02', 'Dummy long desc')} options = ( ('dummy_option_2', { 'type': 'string', 'metavar': '<string>', 'help': 'Dummy option 2', }), ) def register(linter): linter.register_checker(DummyPlugin1(linter)) linter.register_checker(DummyPlugin2(linter)) We do not yield unused-argumentfor singledispatch implementations and do not warn about function-redefinedfor multiple implementations with same name. from functools import singledispatch @singledispatch def f(x): return 2*x @f.register(str) def _(x): return -1 @f.register(int) @f.register(float) def _(x): return -x unused-variablechecker has new functionality of warning about unused variables in global module namespace. Since globals in module namespace may be a part of exposed API, this check is disabled by default. For enabling it, set allow-global-unused-variablesoption to false. Fix a false-positive logging-format-interpolationmessage, when format specifications are used in formatted string. In general, these operations are not always convertible to old-style formatting used by logging module. Added a new switch single-line-class-stmtto allow single-line declaration of empty class bodies (as seen in the example below). Pylint won't emit a multiple-statementsmessage when this option is enabled. class MyError(Exception): pass too-many-format-argsand too-few-format-argsare emitted correctly (or not emitted at all, when exact count of elements in RHS cannot be inferred) when starred expressions are used in RHS tuple. For example, code block as shown below detects correctly that the used tuple has in fact three elements, not two. meat = ['spam', 'ham'] print('%s%s%s' % ('eggs', *meat)) cyclic-importchecker supports local disable clauses. When one of cycle imports was done in scope where disable clause was active, cycle is not reported as violation. 
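For the local disable support in the cyclic-import checker, a sketch of the intended usage (module names are made up): placing the disable comment in the scope where one of the cycle's imports happens keeps that cycle from being reported.

# module_a.py
def lazy_answer():
    # pylint: disable=cyclic-import
    import module_b
    return module_b.answer()

# module_b.py
import module_a

def answer():
    return 42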
Removed Changes¶ pylint-guiwas removed, because it was deemed unfit for being included in pylint. It had a couple of bugs and misfeatures, its usability was subpar and since its development was neglected, we decided it is best to move on without it. The HTML reporter was removed, including the --output-format=htmloption. It was lately a second class citizen in Pylint, being mostly neglected. Since we now have the JSON reporter, it can be used as a basis for building more prettier HTML reports than what Pylint can currently generate. This is part of the effort of removing cruft from Pylint, by removing less used features. The --files-outputoption was removed. While the same functionality cannot be easily replicated, the JSON reporter, for instance, can be used as a basis for generating the messages per each file. --required-attributesoption was removed. --ignore-iface-methodsoption was removed. The --optimize-astflag was removed. decided to remove the error altogether. epylint.py_run's script parameter was removed. Now epylint.py_runis always using the underlying epylint.lintmethod from the current interpreter. This avoids some issues when multiple instances of pylint are installed, which means that epylint.py_runmight have ran a different epylintscript than what was intended.
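After the epylint change above, programmatic invocations always go through the current interpreter; a minimal sketch of such a call (the module path and options are examples):

from pylint import epylint

# return_std=True gives back file-like objects holding pylint's output
pylint_stdout, pylint_stderr = epylint.py_run('my_module.py --disable=C', return_std=True)
print(pylint_stdout.getvalue())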
https://pylint.pycqa.org/en/latest/whatsnew/1.7.html
CC-MAIN-2021-49
en
refinedweb
Bindings for the wrecked terminal graphics library Project description wrecked_bindings Python bindings for the wrecked terminal interface library. Installation Can be installed through pip pip install wrecked Usage import wrecked # Instantiates the environment. Turns off input echo. top_rect = wrecked.init() # create a rectangle to put text in. new_rect = top_rect.new_rect(width=16, height=5) # Add a string to the center of the rectangle new_rect.set_string(2, 3, "Hello World!") # Make that rectangle blue new_rect.set_bg_color(wrecked.BLUE) # And finally underline the text of the rectangle new_rect.set_underline_flag() # Draw the environment top_rect.draw() # take down the environment, and turn echo back on. wrecked.kill() Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/wrecked/1.0.10/
CC-MAIN-2021-49
en
refinedweb
DEBSOURCES Skip Quicknav sources / dokujclient / 3.9 Dokujclient is both a command line tool to interact with instances of Dokwiki, and a Java library for [Dokuwiki xmlrpc interface]() which is also compatible with Android. Currently tested with: * Frusterick Manners (dokuwiki-2017-02-19) * Elenor of Tsort (dokuwiki-2016-06-26) * Detritus (dokuwiki-2015-08-10) * Hrun (dokuwiki-2014-09-29) * Ponder Stibbons (dokuwiki-2014-05-05) * Binky (dokuwiki-2013-12-08) * Weatherwax (dokuwiki-2013-05-10) * Adora Belle (dokuwiki-2012-10-03) * Angua (dokuwiki-2012-01-25b) See the "Compatibility" section for more info Command line tool ================= Getting started --------------- Here's a glimpse of what this tool can do: dokujclient --user myUser --password myPassword --url getTitle > myWiki title dokujclient help > [(-u|--user) <user>] --url <url> [(-p|--password) <password>] [-h|--help] [--version] [--password-interactive] command > > Available commands: > [...skipped...] #put user, password, and url, in the config file vim ~/.dokujclientrc #get the list of pages of all the wiki dokujclient getPagelist . > [...skipped...] dokujclient appendPage builds:synthesis "Build launched at 12:23 took 3'24" dokujclient getPage builds:synthesis > Build launched at 11:12 took 3'19 > Build launched at 12:23 took 3'24 #help command can give information about a given command dokujclient help putAttachment > Syntax for putAttachment: [-f|--force] <attachmentId> <localFile> dokujclient putAttachment some:file.jpg ~/results.jpg Just make sure that your wiki is configured so that the xmlrpc interface is enabled, and so that your user is allowed to use it (ie: "remote" and "remoteuser" entries in your configuration). Installation ------------ It may be installed from the packages on debian testing: sudo apt-get install dokujclient On other platforms you may: * Download the [binaries](). * Extract it, and add the extracted directoy to your path * Ensure it's correctly installed, typing e.g.: dokujclient --version Config file ----------- To avoid typing your url, user, and password each time, you may create in your home a .dokujclientrc, and put some or all of this info in it. echo "url=" > ~/.dokujclientrc echo "user=toto" >> ~/.dokujclientrc echo "password=myPassword" >> ~/.dokujclientrc dokuJClient.jar ========== If you want to build your own application, if you don't want to deal with xmlrpc requests yourself, or if you don't want to handle the different versions of Dokuwiki, you may use this library. Getting started --------------- Everything is done through the DokuJClient: just create one and play with its methods. Here is a quick example which displays the title of the wiki and the list of its pages: import dw.xmlrpc.DokuJClient; import dw.xmlrpc.Page; public class Main { public static void main(String[] args) throws Exception{ String url = ""; String user = "myUser"; String pwd = "myPassword"; DokuJClient client = new DokuJClient(url, user, pwd); System.out.println("Pages in the wiki " + client.getTitle() + " are:"); for(Page page : client.getAllPages()){ System.out.println(page.id()); } } } Make sure to add the jar listed in the Dependencies section below, as well as dokujclient.jar to your classpath. 
Also make sure to configure your wiki so that xmlrpc interface is enabled, and so that your user is allowed to use it (ie: "remote" and "remoteuser" entries in your configuration) Getting the binaries -------------------- JAR files are available via [Maven Central](): ```xml <dependency> <groupId>fr.turri</groupId> <artifactId>dokujclient</artifactId> <version>3.9.0</version> </dependency> ``` Binaries may alse be [downloaded]() directly. To build them from the sources, see below. Compiling from the command line ------------------------------- On ubuntu, at the root of the project run: # You need maven to compiler sudo apt-get install maven #Actually build mvn package It will generate in the directory `target` a dokujclient-x.y.z-bin.zip which contains both the .jar and the executable command line tool Hacking with Eclipse -------------------- This project uses Maven. To be able [to use Eclipse]() you should: # Install Maven sudo apt-get install maven # Set the M2_REPO classpath variable mvn -Declipse.workspace=<path-to-eclipse-workspace> eclipse:add-maven-repo # Generate the Eclipe project files mvn eclipse:eclipse To use the Eclipse projet, you need to ensure every dependencies are available. Just compile once from the command line (see above) to ensure it will be ok. Documentation ------------ To build documentation you must have doxygen installed. Then, run at the root of the repo: mvn javadoc:javadoc To browse the generated docs, point your browser to target/site/apidocs/index.html You may also directly [browse it]() online. Running integration tests -------------------------- To run the tests you'll need to set up a fake wiki. Please see src/test/resources/README.md to know how to set it up. After that, to run the tests, just run, at the root of the repo: mvn test You can also run mvn site in order to generate a test report and a test coverage report. Compatibility ============= dokuJClient aims at providing the same behavior for every supported version of Dokuwiki. There are however, some discrepancies: * getAttachmentInfo can't retrieve the page title with Angua (dokuwiki-2012-01-25b). It will set it to the page id instead * addAcl and delAcl are supported for dokuwiki-2013-12-08 (Binky) or newer * logoff will always clear the local cookies, but it will clear the server side ones only if you have dokuwiki-2014-05-05 (Ponder Stibbons) or a more recent one Mailing list ============ The mailing list is oriented toward development and usage of DokuJClient. You can subscribe and unsubscribe from After subscribing, messages can be sent to [email protected] Donate ====== Dokujclient is a personal open source project started in 2012. I have put hundreds of hours to maintain and enhance it. Donations to Dokujclient will help support bugfix, keeping up to date with the evolutions of Dokuwiki xmlrpc interface, and adding new features. If you have found this tool useful, consider [donating](), to help for its development.
https://sources.debian.org/src/dokujclient/3.9.1-1/README.md/
CC-MAIN-2021-49
en
refinedweb
Just finished another Chrome extension

Object attribute lookup in Python

Resources

- descriptors
- python 2 docs - great insights
- __mro__ (Method Resolution Order)
- simulate a __getattribute__ call

Definitions

- Everything in Python is an object (i.e. classes, modules, the numbers, the strings, etc)
- A class is also an object
- Every object is an instance of a class (example: isinstance(5, int))
- Because of that, every class is an instance of a special kind of class called a metaclass
- An instance is created by calling a class object
- A non-data descriptor is an object that implements only the __get__ method of the descriptor protocol described in the docs
- A data descriptor is a descriptor which defines both the __set__ AND __get__ methods
- __mro__ is a tuple of classes that are considered when looking for base classes during method resolution

Code snippet

Instance attribute look up

The implementation works through a precedence chain that gives data descriptors priority over instance variables, instance variables priority over non-data descriptors, and assigns lowest priority to __getattr__() if provided.

Given a Class "C" and an Instance "c" where "c = C(…)", calling "c.name" means looking up an Attribute "name" on the Instance "c" like this:

- Get the Class from the Instance
- Call the Class's special method __getattribute__. All objects have a default __getattribute__

Inside __getattribute__

- Get the Class's __mro__ as ClassParents
- For each ClassParent in ClassParents
  - If the Attribute is in the ClassParent's __dict__
    - If it is a data descriptor
      - Return the result from calling the data descriptor's special method __get__()
    - Break the for each (do not continue searching the same Attribute any further)
- If the Attribute is in the Instance's __dict__
  - Return the value as it is (even if the value is a data descriptor)
- For each ClassParent in ClassParents
  - If the Attribute is in the ClassParent's __dict__
    - If it is a non-data descriptor
      - Return the result from calling the non-data descriptor's special method __get__()
    - If it is NOT a descriptor
      - Return the value
- If the Class has the special method __getattr__
  - Return the result from calling the Class's special method __getattr__.
- Raise an AttributeError

Things to remember (from the manual)

- descriptors are invoked by the __getattribute__() method
- overriding __getattribute__() prevents automatic descriptor calls
- __getattribute__() is only available with new style classes and objects
- object.__getattribute__() and type.__getattribute__() make different calls to __get__().
- data descriptors always override instance dictionaries.
- non-data descriptors may be overridden by instance dictionaries.
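The precedence chain above is easier to see in action. Below is a small, self-contained demonstration (my own illustration, not the snippet originally embedded in the post) showing that a data descriptor wins over the instance __dict__, while a non-data descriptor loses to it:

class DataDesc:
    """Data descriptor: defines both __get__ and __set__."""
    def __get__(self, obj, objtype=None):
        return "from DataDesc.__get__"
    def __set__(self, obj, value):
        obj.__dict__["d"] = value  # stash the value; __get__ still wins on lookup

class NonDataDesc:
    """Non-data descriptor: defines only __get__."""
    def __get__(self, obj, objtype=None):
        return "from NonDataDesc.__get__"

class C:
    d = DataDesc()
    n = NonDataDesc()

c = C()
c.__dict__["d"] = "instance value"   # write straight into the instance dict
c.__dict__["n"] = "instance value"

print(c.d)  # from DataDesc.__get__  -> data descriptor overrides the instance dict
print(c.n)  # instance value         -> instance dict overrides the non-data descriptor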
Class attribute look up Given a MetaClass “M” and a Class “C” instance of the Metaclass “M”, calling “C.name” means looking up an Attribute “name” on the Class “C” like this: - Get the Metaclass from Class - Call the Metaclass’s special method __getattribute__ Inside __getattribute__ - Get the Metaclass’s __mro__as MetaParents - For each MetaParent in MetaParents - If the Attribute is in the MetaParent’s __dict__ - If is a data descriptor - Return the result from calling the data descriptor’s special method __get__() - Break the for each - Get the Class’s __mro__as ClassParents - For each ClassParent in ClassParents - If the Attribute is in the ClassParent’s __dict__ - If is a (data or non-data) descriptor - Return the result from calling the descriptor’s special method __get__() - Else - Return the value - For each MetaParent in MetaParents - If the Attribute is in the MetaParent’s __dict__ - If is a non-data descriptor - Return the result from calling the non-data descriptor’s special method __get__() - If it is NOT a descriptor - Return the value - If MetaClass has the special method __getattr__ - Return the result from calling the MetaClass’s special method __getattr__. - Raises an AttributeError A guide to using Google’s API Client Library for JavaScript with Sheets API Google allow users to interact with Google Apps (like Calendar, Contacts, Drive) though APIs. Working with the API Client Library for JavaScript in a browser environment presents it’s challenges as described below. Setup I’m making a single page application (one html file) to test the Javascript library capabilities for using it with Google Sheets Api. Wish to demo the following: * create a new spreadsheet * read rows and columns * insert new rows Working with private data requires setting up a project at Read the chapter “Get access keys for your application” from Make sure to add “Drive API” here After “Create new client id” steps, write down the “Client ID” and make sure that JavaScript origins matches the demo server (in my case it was) I’ll start building the demo with the help of these resources: The demo This depends on “client.js” which is a single line js file like this: window.clientId = ‘aaa-bbbcccddd.apps.googleusercontent.com'; Here’s the file on GITHUB: Please read the code comments, they are important. 
<!DOCTYPE html> <html> <head> <meta charset='utf-8' /> </head> <body> <h1>Hello Google Sheets API</h1> <em>Open the console and watch for errors and debug messages.</em> <div id="step1"> <h2>Step 1: Authorize this app with your Google account</h2> <span id="authorize-status"></span> <button id="authorize-button" style="visibility: hidden">Authorize</button> </div> <div id="step2"> <h2>Step 2: Create a Spreadsheet document in Google Drive</h2> <span id="create-status"></span> <button id="create-button">Create file in Drive</button> <em>Add a file named "blocky" to Google Drive</em> </div> <div id="step3"> <h2>Step 3: Search spreadsheet files by the name "blocky"</h2> <span id="list-status"></span> <button id="list-button">Search files (json)</button> </div> <div id="step4"> <h2>Step 4: Find the first file named "blocky" and retrieve the worksheets</h2> <span id="worksheets-status"></span> <button id="worksheets-button">Retrieve worksheets</button> </div> <div id="step5"> <h2>Step 5: Add rows to the first worksheet from the spreadsheet "blocky"</h2> <span id="rows-status"></span> <button id="rows-button">Add rows</button> </div> <script src="client.js"></script> <script type="text/javascript"> var<gsx:key>x1</gsx:key><gsx:content>x2</gsx:content><gsx:url>x3</gsx:url></entry>'; post_request(url, post_data, function(xhr) { console.log('post data', xhr); /* No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access. */ }, headers.send_xml); }); }); } </script> <script src=""></script> </body> </html> Conclusions - Google Sheets API is using the old “gdata API”. Read more and here - Reading operations seem to work fine with CORS () requests as long as you request JSON feeds (add ?alt=json to the requested url) - I wasn’t able to “POST” data using CORS requests, using either “atom+xml” or “json” content types. - Sending “POST” requests from “curl” or Python worked fine. - All these basically mean that you can’t build Single-Page Apps for changing data in Google services (unless you use a back-end request proxy) Blocky – the Chrome Extension that enables tagging blocks of html This: A better diigo bookmarklet (diigolet) What. Django reusable apps Warning: Django project skeleton If you need a Django project skeleton to base your work upon, please check this one: And now a Django app skeleton as well Django settings I’m going to talk here about various techniques regarding the Django’s settings. Let’s assume the following project layout: . ├── __init__.py ├── manage.py ├── settings └── urls.py How do I use custom settings or overwrite existing ones on my development machine ? Answer 1 Create your custom settings file under the same parent as the existing settings.py. Let’s name the new file settings_local.py At the bottom of settings.py, add the following: try: from settings_local import * except ImportError: pass Pros - no need to change the manage.py or the wsgi file in order to apply the new settings Cons - hard/impossible to use different settings for different environment Answer 2 (preferred) Create a new directory called “settings” and move the existing setting file there. Then make a different file for each type of environment. . 
├── __init__.py ├── manage.py ├── settings │ ├── __init__.py │ ├── development.py │ ├── production.py │ └── settings.py └── urls.py settings.py will include the default Django settings, probably the file created by django-admin.py development.py will hold settings/overwrites needed for the development environment. The extra settings files (production.py, development.py, etc) will extend the existing settings.py (or another parent file which in turn extends settings.py) and add their own settings customizations. This could be your development.py file: from .production import * DEBUG = True TEMPLATE_DEBUG = True DJANGO_SERVE_PUBLIC = True PREPEND_WWW = False SEND_BROKEN_LINK_EMAILS = False # APP: debug_toolbar MIDDLEWARE_CLASSES += ( "debug_toolbar.middleware.DebugToolbarMiddleware", ) INSTALLED_APPS += ( "debug_toolbar", ) DEBUG_TOOLBAR_CONFIG = { 'INTERCEPT_REDIRECTS': False, } TEMPLATE_CONTEXT_PROCESSORS += [ 'django.core.context_processors.debug' ] Where production.py is: from .settings import * import os.path PROJECT_ROOT = os.path.join(os.path.abspath(os.path.dirname(__file__)), '../') DEBUG = False TEMPLATE_DEBUG = False Once your settings setup is in place, all you have to do is change manage.py and your WSGI file. The manage.py file could now look like this: #!/usr/bin/env python from django.core.management import execute_manager import sys, os PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__)) sys.path.insert(0, PROJECT_ROOT) try: import settings.development # Assumed to be in the same directory. except ImportError, e: import sys sys.stderr.write("Error: Can't find the file 'settings.py' in the directory containing %r. It appears you've customized things. You'll have to run django-admin.py, passing it your settings module. (If the file settings.py does indeed exist, it's causing an ImportError somehow.) " % __file__) sys.exit(1) if __name__ == "__main__": execute_manager(settings.development) In the same time, your WSGI file would use settings.production: os.environ[“DJANGO_SETTINGS_MODULE”] = “settings.production” Pros - easy to create settings for each of your environments (production, development, etc) - it’s a great way to keep your settings organized, easy to find and edit. - easier to reuse your settings for other projects. For example, we could use the same development.py file (as shown above) with other production.py settings. Cons - you have to change the manage.py and the WSGI file, which might not be possible on your client server. Other Django settings tips - do not use your project name in the settings, i.e. use ROOT_URLCONF = ‘urls’ and not ROOT_URLCONF = ‘myproject.urls’, use INSTALLED_APPS = (“appname”,) and not INSTALLED_APPS = (“myproject.appname”, ). You will then be able to easily move settings and applications between one project to another - use calculated paths for TEMPLATE_DIR, MEDIA_ROOT, STATIC_ROOT etc, i.e. PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__)) and then MEDIA_ROOT = os.path.join(PROJECT_ROOT, “media”) This means business This is how it goes … Father: Son, I would like to be the one who chooses your future wife. Son: No way Tata: That girl is Bill Gates’s doughter Son: AAA, then it is ok. Father goes to Bill Gates Father: I want my son to marry your daughter. 
Bill Gates: No chance. Dad: My son is CEO of the World Bank. Bill Gates: Ah, then it is OK. Father goes to the World Bank President. Dad: I would like to offer my son as CEO of the World Bank. President: No chance. Dad: My son is the future husband of Bill Gates's daughter. President: Oh, that's OK. This means BUSINESS
http://www.betterprogramming.com/
CC-MAIN-2021-49
en
refinedweb
Trying out your "Scrape the web" python code sample I got the error "ImportError: No module named bs4"?

Hi Hervé, you need to install the beautifulsoup4 (bs4) python package. You can do this in the code env section of the administration page. In your python code env, under "packages to install", you add: beautifulsoup4

Adding BeautifulSoup to the python code env solved the import error. The code sample then failed at the line soup = BeautifulSoup(page, 'html5lib') with: FeatureNotFound: Couldn't find a tree builder with the features you requested: html5lib. Do you need to install a parser library?

I also added html5lib and lxml to my python code env and verified it with import html5lib and import lxml.
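For reference, here is a minimal sketch of the kind of scraping snippet being discussed (my own reconstruction, not Dataiku's original sample — the URL and variable names are placeholders). Both beautifulsoup4 and the chosen parser need to be installed in the code env:

import requests
from bs4 import BeautifulSoup  # provided by the beautifulsoup4 package

# hypothetical target page, for illustration only
page = requests.get("https://example.com").text

# 'html5lib' (or 'lxml') must also be installed in the code env,
# otherwise bs4 raises the FeatureNotFound error mentioned above
soup = BeautifulSoup(page, "html5lib")

for link in soup.find_all("a"):
    print(link.get("href"))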
https://community.dataiku.com/t5/Using-Dataiku/ImportError-No-module-named-bs4/m-p/15101
CC-MAIN-2021-49
en
refinedweb
preface

- Multithreading is not commonly used in Unity, so knowledge about it tends to be patchy.
- This article briefly covers the role, limitations and usage of multithreading in Unity.

Use of multithreading in Unity

In Unity, besides the main thread that is responsible for UI drawing, there are also coroutines and threads. A coroutine runs alongside the main thread (started from it, e.g. in "Start"), which lets you run some methods spread over time, and a coroutine can access UI elements and other Unity properties. Multithreading, however, cannot touch Unity game objects directly, because components, methods and game objects may only be accessed from the main thread!

Use multithreading to:

- load and configure downloaded resources on a worker thread while displaying a progress bar
- do heavy data processing in algorithms

What can be used from a worker thread:

- basic C# variables
- anything outside the UnityEngine API
- some basic structs defined by UnityEngine can also be used. For example, Vector3 (a struct) can be used, but Texture2D (a class whose root type is Object) cannot.

Method 1: create a thread in the normal way

Use Thread to start a child thread in Unity. Business logic such as data calculation, passing values around, or interacting with Android can run on this child thread; however, some Unity APIs cannot be called from it. An example:

using System.Collections.Generic;
using System.Threading;
using UnityEngine;
using UnityEngine.UI;

public class ThreadTest : MonoBehaviour
{
    void Start()
    {
        //Open up a child thread
        Thread childThread1 = new Thread(CallToChildThread);
        childThread1.Start();
    }

    //What the child thread is responsible for
    public void CallToChildThread()
    {
        //Call the contents of the main thread
        Debug.Log(test()); //Print result: 666
    }

    int test()
    {
        return 666;
    }
}

Method 2: use the Loom plug-in to make multi-threaded calls

Another option for multithreaded development in Unity is the Loom plug-in. Loom is a tool class that is specifically responsible for the interaction between the main thread and child threads in Unity.
The room plug-in has only one script, which can be imported and used in Unity Which uses Loom to open up a sub thread method Loom.RunAsync(() => { Thread childThread = new Thread(CallToChildThread); childThread.Start(); }); Call the data method in Unity using Loom multithreading: //Use room to call the contents of the main thread Loom.QueueOnMainThread((param) => { ThreadTxt.text = "The child thread is turned on"; }, null); The complete code example of calling the main thread with a child thread is as follows: using System.Threading; using UnityEngine; using UnityEngine.UI; public class ThreadTest : MonoBehaviour { public Text ThreadTxt; void Start() { //Using Loom to open up a child thread Loom.RunAsync(() => { Thread childThread = new Thread(CallToChildThread); childThread.Start(); }); } public void CallToChildThread() { //Use room to call the contents of the main thread Loom.QueueOnMainThread((param) => { ThreadTxt.text = "The child thread is turned on"; }, null); } } The tool class code of the room plug-in is as follows: using UnityEngine; using System.Collections; using System.Collections.Generic; using System; using System.Threading; using System.Linq; public class Loom : MonoBehaviour { public static int maxThreads = 8; static int numThreads; private static Loom _current; //private int _count; public static Loom Current { get { Initialize(); return _current; } } void Awake() { _current = this; initialized = true; } static bool initialized; public static void Initialize() { if (!initialized) { if (!Application.isPlaying) return; initialized = true; var g = new GameObject("Loom"); _current = g.AddComponent<Loom>(); #if !ARTIST_BUILD UnityEngine.Object.DontDestroyOnLoad(g); #endif } } public struct NoDelayedQueueItem { public Action<object> action; public object param; } private List<NoDelayedQueueItem> _actions = new List<NoDelayedQueueItem>(); public struct DelayedQueueItem { public float time; public Action<object> action; public object param; } private List<DelayedQueueItem> _delayed = new List<DelayedQueueItem>(); List<DelayedQueueItem> _currentDelayed = new List<DelayedQueueItem>(); public static void QueueOnMainThread(Action<object> taction, object tparam) { QueueOnMainThread(taction, tparam, 0f); } public static void QueueOnMainThread(Action<object> taction, object tparam, float time) { if (time != 0) { lock (Current._delayed) { Current._delayed.Add(new DelayedQueueItem { time = Time.time + time, action = taction, param = tparam }); } } else { lock (Current._actions) { Current._actions.Add(new NoDelayedQueueItem { action = taction, param = tparam }); } } } public static Thread RunAsync(Action a) { Initialize(); while (numThreads >= maxThreads) { Thread.Sleep(100); } Interlocked.Increment(ref numThreads); ThreadPool.QueueUserWorkItem(RunAction, a); return null; } private static void RunAction(object action) { try { ((Action)action)(); } catch { } finally { Interlocked.Decrement(ref numThreads); } } void OnDisable() { if (_current == this) { _current = null; } } // Use this for initialization void Start() { } List<NoDelayedQueueItem> _currentActions = new List<NoDelayedQueueItem>(); // Update is called once per frame void Update() { if (_actions.Count > 0) { lock (_actions) { _currentActions.Clear(); _currentActions.AddRange(_actions); _actions.Clear(); } for (int i = 0; i < _currentActions.Count; i++) { _currentActions[i].action(_currentActions[i].param); } } if (_delayed.Count > 0) { lock (_delayed) { _currentDelayed.Clear(); _currentDelayed.AddRange(_delayed.Where(d => d.time 
<= Time.time)); for (int i = 0; i < _currentDelayed.Count; i++) { _delayed.Remove(_currentDelayed[i]); } } for (int i = 0; i < _currentDelayed.Count; i++) { _currentDelayed[i].action(_currentDelayed[i].param); } } } summary

There are quite a few inconveniences in using multithreading in Unity. Generally speaking, because coroutines exist, multithreading is rarely needed, and even using the Loom plug-in is not particularly convenient. Opinions differ, though: if the operations you run into are not especially complex, a coroutine will usually do the job!

My articles on learning coroutines are here — interested readers are welcome to have a look!

Unity from zero to getting started ☀️ | A long-form tutorial on coroutines in Unity ❤️ Comprehensive analysis + hands-on practice ❤️
https://programmer.help/blogs/617badfe35491.html
CC-MAIN-2021-49
en
refinedweb
In Python, you'll probably use a tuple to initialize a sequence that shouldn't be modified elsewhere in the program. This is because tuples are immutable. However, using a tuple may reduce the readability of your code as you cannot describe what each item in the tuple stands for. This is where NamedTuples can come in handy. A NamedTuple provides the immutability of a tuple, while also making your code easy to understand and use. In this tutorial, you'll learn how to create and use NamedTuples effectively. Python Tuples – A Quick Recap Before jumping into NamedTuples, let's quickly revisit Python tuples. Tuples are powerful built-in data structures in Python. They're similar to Python lists in that they can hold items of different types, and in that you can slice through them. However, tuples differ from lists in that they are immutable. This means you cannot modify an existing tuple, and trying to do so will throw an error. ▶ Let's say you create the following tuple today. The tuple house contains five items that describe the house, namely, the city, the country, the year of construction, the area in sq. ft., and the number of rooms it has. This is shown in the code snippet below: house = ("Bangalore","India",2020,2018,4) - This houseis located in the city of Bangalore in India. - It was constructed in the year 2020. - And it has 4rooms that collectively span an area of 2018sq. ft. Let's say your friend reads this line of code, or you come back a week later and read your code again. Given that you haven't added any comments as to what the values in the tuple stand for, there's certainly a problem of readability. For example, you may have to end up guessing whether it's a house of area 2018 sq. ft. constructed in the year 2020, or if it's a house of area 2020 sq. ft. constructed in the year 2018. 🤔 You might suggest using a dictionary instead – you can specify what the different values stand for as keys of the dictionary, and the actual values as the dictionary's values. Head on to the next section for a quick recap on Python dictionaries. Python Dictionaries – A Quick Recap With the motivation to improve the readability of the code, let's consider switching to Python dictionaries. Dictionaries are built-in data structures that store value in key-value pairs. You can tap into a dictionary, and access its values using the keys. So you can rewrite the tuple from the previous as a dictionary as follows: house = {"city":"Bangalore","country":"India","year":2020,"area":2018,"num_rooms":4} In the code snippet above: "city", "country", "year", "area"and "num_rooms"are the keys. - And the values from the tuple, "Bangalore", "India", 2020, 2018, and 4are used as the values corresponding to the keys. - You can access the values using the keys: house["city"]to get "Bangalore", house["area"]to get 2018, and so on. As you can see, using a dictionary improves the readability of the code. But, unlike tuples, you can always modify values in a dictionary. All you need to do is to set the corresponding key to a different value. In the above example, you can use house["city"] = "Delhi" to change the city your house is located in. Clearly, this is not allowed, as you don't want the values to be modified elsewhere in the program. And if you need to store descriptions for many such houses, you'll have to create as many dictionaries as the number of houses there are, repeating the names of the keys every single time. This also makes your code repetitive and not so interesting! 
With Python's NamedTuples, you can have both the immutability of tuples and the readability of dictionaries. Head on to the next section to learn about NamedTuples. Python NamedTuple Syntax To use a NamedTuple, you need to import it from Python's built-in collections module, as shown: from collections import namedtuple The general syntax for creating a NamedTuple is as follows: namedtuple(<Name>,<[Names of Values]>) <Name>is a placeholder for what you'd like to call your NamedTuple, and <[Names of Values]>is a placeholder for the list containing the names of the different values, or attributes. Now that you're familiar with the syntax for creating NamedTuples, let's build on our house example, and try to create it as a NamedTuple. Python NamedTuple Example As mentioned earlier, the first step is to import namedtuple. from collections import namedtuple Now, you can create a NamedTuple using the syntax discussed in the previous section: House = namedtuple("House",["city","country","year","area","num_rooms"]) In this example, - You choose to call the NamedTuple House, and - Mention the names of the values, "city", "country", "year", "area"and "num_rooms"in a list. ✅ And you've created your first NamedTuple – House. Now, you can create a house house_1 with the required specifications using House as follows: house_1 = House("Bangalore","India",2020,2018,4) You only need to pass in the actual values that the names, or attributes in your <[Names of Values]> should take. To create another house, say house_2, all you need to do is to create a new House using its values. house_2 = House("Chennai","India",2018,2050,3) Notice how you can use Houseas a template to create as many houses as you'd like, without having to type out the names of the attributes each time you create a new house. How to Use dot Notation to Access a NamedTuple's Values Once you've created NamedTuple objects house_1 and house_2, you can use the dot notation to access their values. The syntax is shown below: <namedtuple_object>.<value_name> - Here, <namedtuple_object>denotes the created NamedTuple object. In this example, house_1and house_2. <value_name>denotes any of the valid names used when the NamedTuple was created. In this example, "city", "country", "year", "area"and "num_rooms"are the valid choices for <value_name>. This is illustrated in the following code snippet: print(house_1.city) print(house_1.country) print(house_1.year) print(house_1.area) print(house_1.num_rooms) Similarly, you can use house_2.city, house_2.country, and so on to access the values corresponding to the NamedTuple house_2. 📋Try it Yourself! NamedTuple Example In this section, you'll create a ProblemSet NamedTuple. Please feel free to try this example in any IDE of your choice. The ProblemSet NamedTuple should take the following values: num_questions: an integer representing the number of questions in a particular problem set, difficulty: a string that indicates the difficulty level of the problem set, and topic: the topic that the problem set covers, say, "Arrays", "Strings", "Graphs", and so on. The procedure is very similar to our previous example where we created the House NamedTuple. 1️⃣ Import namedtuple from collections module. from collections import namedtuple 2️⃣ Create a NamedTuple and call it ProblemSet. ProblemSet = namedtuple("ProblemSet",["num_questions","difficulty","topic"]) 3️⃣ Now that you've created ProblemSet, you can create any number of problem sets using ProblemSet as the template. 
- Here, problem_set1contains 5 easy questions on Strings. problem_set1 = ProblemSet(5,"Easy","Strings") - And problem_set2contains 3 hard questions on Bit Manipulation. problem_set2 = ProblemSet(3,"Hard","Bit Manipulation") 4️⃣ As with the previous example, you can use the dot notation to access the values of the two problem sets created above. print(problem_set1.topic) # Output Strings print(problem_set2.difficulty) # Output Hard I hope you were able to complete this exercise. 🎉 Conclusion In this tutorial, you've learned: - how NamedTuples help you couple the advantages of both tuples and dictionaries, - how to create NamedTuples, and - how to use dotnotation to access the values of NamedTuples. If you're familiar with OOP in Python, you may find this similar to how Python classes work. A class with its attributes serves as a template from which you can create as many objects, or instances – each with its own values for the attributes. However, creating a class and defining the required attributes just to improve readability of your code can often be overkill, and it's a lot easier to create NamedTuples instead. See you all in the next tutorial. Until then, happy coding!
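As a footnote to the class-versus-NamedTuple comparison above, here is a small, hypothetical sketch (my own addition, not part of the original tutorial): the hand-written class needs noticeably more boilerplate to approximate what the one-line NamedTuple gives you, and it still isn't immutable:

from collections import namedtuple

# Hand-rolled equivalent of the House NamedTuple: more code, and mutable by default.
class HouseClass:
    def __init__(self, city, country, year, area, num_rooms):
        self.city = city
        self.country = country
        self.year = year
        self.area = area
        self.num_rooms = num_rooms

House = namedtuple("House", ["city", "country", "year", "area", "num_rooms"])

h1 = HouseClass("Bangalore", "India", 2020, 2018, 4)
h2 = House("Bangalore", "India", 2020, 2018, 4)

h1.city = "Delhi"     # allowed: plain class instances are mutable
# h2.city = "Delhi"   # AttributeError: can't set attribute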
https://www.freecodecamp.org/news/python-namedtuple-examples-how-to-create-and-work-with-namedtuples/
CC-MAIN-2021-49
en
refinedweb
fabrics wholesaler shenzhen , guangdong china 1918,Pacific Commercial And Trading Bldg, JiaBin Road, Luohu District, Goldentex Industrial And Trading Co.,Ltd can offer our products such as apparel and garments , fabrics , fibre in custom ... carbon fiber fabric sheets supplier polyester fabric wholesaler flax fiber supplier fabrics wholesaler fabrics wholesaler tianjin , tianjin china No. 3, Kaiyuan Road, Wuqing Development Zone, Yongfa Textile Co. Ltd. is working as a manufacturer, Supplier & exporter Company for fabrics , yarn in China. Backed up ... wholesale quilting fabric supplier webs sock yarn cotton crochet yarn fabrics exporter hangzhou , zhejiang china Chaohui Road Lianjing Building 22-F We can supply and export mens and womens wear : organic fabric , shirts, dress , Blouses, Tops, Jumpers, Jackets, Jump ... shirts manufacturers ikat fabric supplier suiting fabric dealer fabrics manufacturer shanghai , shanghai china Suites 211, 6 Lane, 1279, Zhongshan West Road, With a fully equiped infrastructure encompassing manufacturing, product development, and textile work, Shanghai HuaKangT ... flax fiber manufacturer polar fleece fabric supplier georgette silk fabric traders fabrics wholesaler huangyan , zhejiang china Luting Road Huangyan Eway International is moving forward to our aim of becoming the premier supplier and marketer of apparel and ga ... bulky cotton yarn navy pants wholesale supplier manufacturing process of yarn fabrics wholesaler shenzhen , guangdong china 11x flat b. duhui 100 Blg. Zhonghang Road. Wujiang Zhongyu Weaving Co., Ltd. Shenzhen Br. offers a diverse product line to fulfill demands of customers and manufac ... import clothing textile industry machines manufacturers loom weaving machine manufacturing fabrics supplier jiangyin , jiangsu china Houxiang Industrial Zone Changjing Town With a fully equiped infrastructure encompassing manufacturing, product development, and textile work, Jiangyin Changlon ... cotton velour fabric wholesale fibre producer machine knitting yarn fabrics wholesaler haining , zhejiang china Changan Industry Development Zone, As The China`s foremost, Wholesale clothing and fabrics firm New Century Leather & Textile Company Limited offers a wid ... apparel manufacturers for large business italian fabric suppliers kids Apparel fabrics retailer shijiazhuang , hebei china 39, Xiangyi Road Hebei Baoyi Cashmere Products Co. Ltd offer fast and reliable delivery, excellent Quality Control and flexible manufactu ... knit fabrics suppliers wholesale fabric material exporter silk georgette fabric mfg fabrics wholesaler Shijiazhuang Hebei , china Room520-522 No.306 East Heping Road Due to our craftmenship and quality standards Currently Shijiazhuang Wanda Printing And Dyeing Co.,Ltd is growing to fam ... cotton knit fabric wholesaler company dress shirts suppliers high fashion clothing producer fabrics wholesaler wenzhou , zhejiang china Xicun Village, Pingyang County, zhejiang province, With state of art manufacturing infrastructure and high qualified management team, At present Pingyang Huayi Lace Factor ... flannel fabric suppliers linen fabric supplier organic wool yarn fabrics supplier hefei , anhui china No. 215, Kang Yuan Building Jin Zhai Road Anhui Anp Import And Export Co. Ltd. is a garment and cotton knit fabric supplier with own factory in China. Our goal i ... 
clothing Buyer contract apparel manufacturers Readymade garment fabrics retailer wujiang , jiangsu china Room 502, Unit 2 Building 31 Orient Garden Shengze Town We can supply and export mens and womens wear : organic fabric , shirts, dress , Blouses, Tops, Jumpers, Jackets, Jump ... corduroy fabric exporter silk jacquard fabric co wholesale silk fabric supplier fabrics exporter xiangshan , zhejiang china Xiyin Road, Juexi Town, Textile industry is most demanded industry today there are lots of manufacturer and exporter of apparel and garments , f ... wholesale garment manufacturers small apparel companies trade Apparel exporters fabrics wholesaler hangzhou , zhejiang china Caihe We can supply and export mens and womens wear : organic fabric , shirts, dress , Blouses, Tops, Jumpers, Jackets, Jump ... nylon fabric wholesalers modern corporate wear womens corporate clothing fabrics exporter shenzhen , guangdong china 27C-2, Century Court Bldg, Chegongmiao, Futian District, Shenzhen Huacun Textile Company Limited has state of art infrastructure for manufacturing apparel and garments , home t ... stretch fabric wholesale waterproof fabric manufacturers fabric manufacturing companies fabrics manufacturer ningbo , zhejiang china 14/F, Yinxin Mansion Wholesale menswear , womenswear ,linen fabric , Cotton Fabrics and kidswear from Ningbo FTZ Flying IntL Industry & Trade ... poly cotton fabric supplier modal fabric traders suede fabric wholesalers fabrics supplier ningbo , zhejiang china No.6/157, Qiwen Rd, Nanmen, Ningbo Beyond Group Co., Ltd. can offer our products such as apparel and garments , fabrics in customized forms as per d ... company garment suppliers sheeting fabric importer vinyl upholstery fabric company fabrics manufacturer weifang , shandong china 36 A, Beigong Street East High And New Tech Development Area We Shandong Weimian Textile Company Limited are reckoned as a prime cloth fabrics Manufacturer and Exporter in China. Ou ... polka dot fabric exporter custom fabric importer custom fabric suppliers
https://www.textileinfomedia.com/business/china/fabrics
CC-MAIN-2021-49
en
refinedweb
Documentation ¶ Overview ¶ The importers package uses go/ast to analyze Go packages or Go files and collect references to types whose package has a package prefix. It is used by the language specific importers to determine the set of wrapper types to be generated. For example, in the Go file ¶ package javaprogram import "Java/java/lang" func F() { o := lang.Object.New() ... } the java importer uses this package to determine that the "java/lang" package and the wrapper interface, lang.Object, needs to be generated. After calling AnalyzeFile or AnalyzePackages, the References result contains the reference to lang.Object and the names set will contain "New". Index ¶ Constants ¶ This section is empty. Variables ¶ This section is empty. Functions ¶ This section is empty. Types ¶ type PkgRef ¶ PkgRef is a reference to an identifier in a package. type References ¶ type References struct { // The list of references to identifiers in packages that are // identified by a package prefix. Refs []PkgRef // The list of names used in at least one selector expression. // Useful as a conservative upper bound on the set of identifiers // referenced from a set of packages. Names map[string]struct{} // Embedders is a list of struct types with prefixed types // embedded. Embedders []Struct } References is the result of analyzing a Go file or set of Go packages. For example, the Go file ¶ package pkg import "Prefix/some/Package" var A = Package.Identifier Will result in a single PkgRef with the "some/Package" package and the Identifier name. The Names set will contain the single name, "Identifier". func AnalyzeFile ¶ func AnalyzeFile(file *ast.File, pkgPrefix string) (*References, error) AnalyzeFile scans the provided file for references to packages with the given package prefix. The list of unique (package, identifier) pairs is returned func AnalyzePackages ¶ func AnalyzePackages(pkgs []*packages.Package, pkgPrefix string) (*References, error) AnalyzePackages scans the provided packages for references to packages with the given package prefix. The list of unique (package, identifier) pairs is returned
https://pkg.go.dev/github.com/iRezaaa/[email protected]/internal/importers
CC-MAIN-2021-49
en
refinedweb
Velo by Wix: Event handling of Repeater Item In this post, we consider why we shouldn't nest event handler inside the Repeater loop and how we can escape it. At first sight, the adding event handling for repeated items looks easy. You just handling events of repeated items inside Repeater loop methods there you have all needed data and scope with selector $item(). $w("#repeater").onItemReady(($item, itemData, index) => { // it look easy $item("#repeatedButton").onClick((event) => { // we have all we need console.log( $item("#repeatedContainer"), itemData, index, ); }); }); What's wrong with this approach? Sometimes the loop may set a few event handlers for the same item when you change order or filter or sort Repeater Items. Each iteration of the loop may add a copy of the callback function to the handler when it starts again. You may don't pay attention to twice running code if you just hide or show some component by an event. But if you work with APIs or wixData, then you can get a lot of problems. My team and I consider this approach as an anti-pattern and we don't use it more. For the "static" Repeaters which fill up once and don't change anymore during a user session, this approach can be used. But if you would like to do dynamic fill up your Repeater or change its items, you shouldn't set a handler function inside the loop. Let's see another way. Selector Scope In the Velo, we have two types of selector functions. The Global Scope Selectors it's $w(). We can use it anywhere in the frontend part of Wix site. If we use $w() with Repeater Items, then it changes all items // will change a text in all items $w("#repeatedText").text = "new"; Repeated Item Scope A selector with repeated item scope can be used to select a specific instance of a repeating element. We can get repeated-item-scope selector in a few ways. In the loop, selector as the first argument in callback function for .forEachItem(), .forItems(), and .onItemReady() methods. Deprecated way, selector as the second argument in an event handler. It still works but you don't have to use it. Removal of the $w Parameter from Event Handlers // 🙅♀️ DON'T USE IT 🙅♂️ $w("#repeatedButton").onClick((event, $item) => { // deprecated selector function (could be removed in the future) $item("#repeatedText").text = "new"; }); And with an event context. We can get the selector function with $w.at(context). $w("#repeatedButton").onClick((event) => { // accepts an event context and // returns repeated items scope selector const $item = $w.at(event.context); $item("#repeatedText").text = "new"; }); Let's try to reproduce how we can use event.context instead of Repeater loop methods. // we use global selector `$w()`, it provides handling all repeated items $w("#repeatedButton").onClick((event) => { // get repeated item scope const $item = $w.at(event.context); // get the ID of the repeated item which fired an event const itemId = event.context.itemId; // get all repeater's data, it's stored as an array of objects const data = $w("#repeater").data; // use the array methods to find the current itemData and index const itemData = data.find((item) => item._id === itemId); const index = data.findIndex((item) => item._id === itemId); // we have all we need console.log( $item('#repeatedContainer'), itemData, index, ); }); In this way, we have only one callback for all elements with the specific ID. Using context we can get the active item scope, its itemData, and index Now, we see how to do more careful handling of events in the Repeater. 
But this code not good enough for reuse. Let's move the scope selector logic out event handler to the separate method. Create hook Our hook will have next steps: #1 Implementation // here will be all logic const createScope = (getData) => (event) => { // TODO: Implement hook } #2 initialize // sets callback function, it has to return the repeater data const useScope = createScope(() => { return $w("#repeater").data; }); #3 using // using with repeated items $w("#repeatedButton").onClick((event) => { // returns all we need const { $item, itemData, index, data } = useScope(event); }); We create a hook with createScope(getData) it will be work with a specific Repeater. The argument getData it's a callback, it has to return the Repeater data. The createScope will return a new function useScope(event) that has a connection with the specific Repeater data. The useScope(event) accepts an event object and return the data of the current scope. For the realization of createScope(getData) function, we will create a public file public/util.js We can get Repeater data with getData(), and we have the event context. All we need just return Scope selector and item data as an object. We will use getter syntax for returning itemData, index, and data. public/util.js export const createScope = (getData) => (event) => { const itemId = event.context.itemId; const find = (i) => i._id === itemId; return { $item: $w.at(event.context), get itemData() { return getData().find(find); }, get index() { return getData().findIndex(find); }, get data() { return getData(); }, }; }; If you don't work with getter/setter for property accessors you can look here how it works. Let's see how we can use the hook on the page with static or dynamic event handlers. HOME Page Code import { createScope } from "public/util"; const useScope = createScope(() => { return $w("#repeater").data; }); $w.onReady(() => { // use a dynamic event handler $w("#repeatedButton").onClick((event) => { const { $item, itemData, index, data } = useScope(event); }); }); // or a static event handler export function repeatedButton_click(event) { const { $item, itemData, index, data } = useScope(event); } Now, we can reuse the selector hook with all Repeater in all site pages. JSDoc The Velo code editor supports JSDoc, it's a markup language that is used inside JS block comments. JSDoc provides static type checking, adds the autocomplete, and making good documentation of your code. I recommend using JSDoc. Code snippet with JSDoc: /** * Create Repeated Item Scope * * * @typedef {{ * _id: string; * [key: string]: any; * }} ItemData; * * @typedef {{ * $item: $w.$w; * itemData: ItemData; * index: number; * data: ItemData[]; * }} ScopeData; * * @param {() => ItemData[]} getData * @returns {(event: $w.Event) => ScopeData} */ export const createScope = (getData) => (event) => { const itemId = event.context.itemId; const find = (i) => i._id === itemId; return { // @ts-ignore $item: $w.at(event.context), get itemData() { return getData().find(find); }, get index() { return getData().findIndex(find); }, get data() { return getData(); }, }; }; Don't remove JSDoc in your code! In the building process, all comments will be removed automatically from the production bundle. Resources - Code on GitHub - Scope selector $w.at(context) - Global Scope & Repeated Item Scope Selectors - Event Context - Property getters and setters
https://shoonia.site/event-handling-of-repeater-item
CC-MAIN-2021-49
en
refinedweb
Update 15. January 2018: Jacoco 0.8.0 has been released. No need to build it from the SNAPSHOT version anymore. Introduction Test Coverage is a code metric that indicates how many lines of code, as a percent of the total, your tests execute. It can’t tell you anything about the quality of your tests, but it nevertheless is one of the most important metrics in use. Jacoco is one of the most prominent test coverage tools for Java. Lombok is a Java library that generates common boilerplate code like getter/setter methods, hashCode, and builder classes during the compilation phase. This improves development speed significantly. The Problem Lombok causes problems when your project requires a minimum test coverage rate that is also checked by a CI System such as Jenkins or Travis. Jacoco can’t distinguish between Lombok’s generated code and the normal source code. As a result, the reported coverage rate drops unrealistically low. You the developer are left with two options: - Write unit tests for generated code or - Decrease the required coverage rate. Neither option makes sense and neither is desirable. The Solution Luckily, beginning with version 0.8.0, Jacoco can detect, identify, and ignore Lombok-generated code. The only thing you as the developer have to do is to create a file named lombok.config in your directory’s root and set the following flag: lombok.addLombokGeneratedAnnotation = true This adds the annotation [email protected] to the relevant methods, classes and fields. Jacoco is aware of this annotation and will ignore that annotated code. Please keep in mind that you require at least version 0.8.0 of Jacoco and v1.16.14 of Lombok. Showcase Let’s suppose we have a Person class that contains fields for first- and lastname. We are using @Data which will generate the getter/setters, hashCode, toString and equals methods. We also use @Builder which generates – as the name says – a builder pattern for instantiating an object. import lombok.Builder; import lombok.Data; @Data @Builder public class Person { private String firstname; private String lastname; } Then we have a PersonPrinter class that contains logic for printing a Person. We annotate it with @Log to instantiate a static logger and again @Data: import lombok.Data; import lombok.extern.java.Log; @Log @Data public class PersonPrinter { private Person person; private String separator = " "; private String noLastnameLog = "That person has no name"; public PersonPrinter(Person person) { this.person = person; } public String toString() { if ("".equals(person.getLastname())) { log.info(noLastnameLog); return ""; } return String.format(person.getFirstname() + this.separator + person.getLastname()); } } The only logic which should be tested is PersonPrinter. 
The following two test cases should actually give us 100% test coverage: public class PersonPrinterTest { @Test public void testDefault() { Person harrison = Person.builder() .firstname("John").lastname("Harrison").build(); assertEquals("John Harrison", new PersonPrinter(harrison).toString()); } @Test public void testNoLastname() { Person anonymous = Person.builder() .firstname("anonymous").lastname("").build(); assertEquals("", new PersonPrinter(anonymous).toString()); } } Unfortunately, this is not the case since Jacoco also counts the generated code from Lombok: By adding the flag lombok.addLombokGeneratedAnnotation = true before cleaning and running the tests again, we see that Jacoco has completely ignored the class Person and shows us 100% test coverage: As always the source code is available on GitHub. It contains support for both Maven and Gradle. 13 Replies to “Ignoring Lombok Code in Jacoco” Any equivalent Gradle plugin info? I have just updated the project on GitHub. It supports now both maven and gradle. Thanks Rainer! What kind of nuances go into making this work for a multi-module project [gradle]? I’ve been running into issues related to “Classes in bundle “” do not match with execution data” which I believe might be related to using a different Java version but I’m not sure. Please let me know if you intend to update your project to handle multi module project builds (not all subprojects may have tests). Again, great page! I’m definitely pointing people here for this issue. Thanks a lot! Hi Yash, if I understand you correctly, you have a “simple” multi-module gradle project, where you get a “classes in bundle – do not match with execution data” error when running Jacoco? Have you tried to use the same Java version? If not, please try that and if it is still not working I will look into it. Hi, Do you know how to update jacoco version in Sonar cube? Unfortunately that’s not possible yet. Jacoco’s code removal is done in the report generation part, which comes after creating the raw output in form of an executable (jacoco.exec). Sonarqube and similar take that raw output and create their own reports. So we will have to wait until Jacoco 0.7.10 is released and Sonarqube makes the required adaptions. You can find more on: Hi, thanks for your answer. I upvoted the feature. I have to see if Sonar have planned something. Do you have a solution for immutables too? Hi, this feature is very Lombok specific. I am currently investigating options for a more general approach. Will inform you, when I have something. The Immutables library adds the ‘javax.annotation.Generated’ annotation to the generated code if that is helpful. It appears the Lombok specific solution was being achieved via an annotation so hopefully something similar can be done here. I am also interested in some sort of generalized support for this. Thanks! [email protected] , can you please share a use case how to use this in actual code base Hi, I’ve shared an example project on. Is this enough or do you need more? Thanks a lot man, you’ve saved me a lot of time 🙂
https://www.rainerhahnekamp.com/en/ignoring-lombok-code-in-jacoco/
CC-MAIN-2021-49
en
refinedweb
Check out these tips and techniques that you can use when attempting to optimize Angular applications. Learn how to use lazy loading, server-side rendering and more. When an application grows from a couple lines of code to several files or folders of code, then every byte or second saved matters. When an application grows to that size, the word “optimization” gets whispered a lot. This is because an application of that size would typically run like a coal-powered train, but users expect a high-speed train. Today we’ll look at some useful techniques to adopt when attempting to optimize Angular applications. These techniques are useful for improving load-time and runtime performance.

A very useful technique and one of the most recommended for a majority of web applications, lazy loading is basically load-on-demand. In this technique, some parts of your application are bundled separately from the main bundle, which means those parts load when an action is triggered. For example, you have a component called AboutComponent. This component renders the About page, and the About page isn’t the first thing a user sees when the page is loaded. So the AboutComponent can be bundled separately and loaded only when the user attempts to navigate to the About page. To achieve lazy loading in Angular, lazy modules are used, meaning you can define modules separately from your app’s main module file. Angular naturally builds a separate bundle for each lazy module, so we can instruct Angular to only load the module when the route is requested. This technique improves load-time performance but affects runtime performance in the sense that it might take some time to load the lazy modules depending on the size of the module — that’s why Angular has a useful strategy called PreloadingStrategy. PreloadingStrategy is used for telling the RouterModule how to load a lazy module, and one of the strategies is PreloadAllModules. This loads all the lazy modules in the background after page load to allow quick navigation to the lazy-loaded module.

Let’s look at an example. You have a feature module called FoodModule to be lazy loaded. The module has a component called FoodTreeComponent and a routing module FoodRoutingModule.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { FoodRoutingModule } from './food-routing.module';
import { FoodTreeComponent } from './food-tree/food-tree.component';

@NgModule({
  imports: [
    CommonModule,
    FoodRoutingModule
  ],
  declarations: [FoodTreeComponent]
})
export class FoodModule { }

To lazy load the FoodModule with the PreloadAllModules strategy, register the feature module as a route and include the loading strategy via the preloadingStrategy router option:

import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';
import { PreloadAllModules, RouterModule } from '@angular/router';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    RouterModule.forRoot([
      { path: 'food', loadChildren: './food/food.module#FoodModule' }
    ], { preloadingStrategy: PreloadAllModules })
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

In your application, Angular runs checks to find out if it should update the state of a component. These checks, called change detection, are run when an event is triggered (onClick, onSubmit), when an AJAX request is made, and after several other asynchronous operations.
Every component created in an Angular application has a change detector associated to it when the application runs. The work of the change detector is re-rendering the component when a value changes in the component. This is all okay when working with a small application — the amount of re-renders will matter little — but in a much bigger application, multiple re-renders will affect performance. Because of Angular’s unidirectional data flow, when an event is triggered, each component from top to bottom will be checked for updates, and when a change is found in a component, its associated change detector will run to re-render the component. Now, this change detection strategy might work well, but it will not scale, simply because this strategy will need to be controlled to work efficiently. Angular, in all its greatness, provides a way to handle change detection in smarter way. To achieve this, you have to adopt immutable objects and use the onPush change detection strategy. Let’s see an example: You have a component named BankUser. This component takes an Input object user, which contains the name and @Component({ selector: 'bank-user', template: ` <h2>{{user.name}}</h2> <p>{{user.email}}</p> ` }) class BankUser { @Input() user; } Now, this component is being rendered by a parent component Bank that updates the name of the user on the click of a button: @Component({ selector: 'the-bank', template: ` <bank-user [user]="bankUser"></bank-user> <button (click)="updateName()">Update Name</button> ` }) class Bank { bankUser = { name: 'Mike Richards', email: '[email protected]', } updateName(){ this.bankUser.name = 'John Peters' } } On the click of that button, Angular will run the change detection cycle to update the name property of the component. This isn’t very performant, so we need to tell Angular to update the BankUser component only if one of the following conditions is met: detectChanges Inputhas been updated This explicitly makes the BankUser component a pure one. Let’s update the BankUser component to enforce these conditions by adding a changeDetection property when defining the component: @Component({ selector: 'bank-user', template: ` <h2>{{ user.name }}</h2> <p>{{ user.email }}</p> `, changeDetection: ChangeDetectionStrategy.OnPush }) export class BankUser { @Input() user; } After making this update, clicking the Update Name button will have no effect on the component unless we also change the format by which we update the name of the bank user. Update the updateName method to look like the snippet below: updateName() { this.bankUser = { ...this.bankUser, name: 'John Peters' }; } Now, clicking the button works because one of the conditions set is met — the Input reference has been updated and is different from the previous one. Rendering lists can affect the performance of an application — huge lists with attached listeners can cause scroll jank, which means your application stutters when users are scrolling through a huge list. Another issue with lists is updating them — adding or removing an item from a long list can cause serious performance issues in Angular applications if we haven’t provided a way for Angular to keep track of each item in the list. Let’s look at it this way: There’s a list of fruits containing 1,000 fruit names being displayed in your application. If you want to add another item to that list, Angular has to recreate the whole DOM node for those items and re-render them. That is 1,001 DOM nodes created and rendered when just one item is added to the list. 
It gets worse if the list grows to 10,000 or more items. To help Angular handle the list properly, we’ll provide a unique reference for each item contained in the list using the trackBy function. Let’s look at an example: A list of items rendered in a component called FruitsComponent. Let’s see what happens in the DOM when we attempt to add an extra item with and without the trackBy function. @Component({ selector: 'the-fruits', template: ` <ul> <li *{{ fruit.name }}</li> </ul> <button (click)="addFruit()">Add fruit</button> `, }) export class FruitsComponent { fruits = [ { id: 1, name: 'Banana' }, { id: 2, name: 'Apple' }, { id: 3, name: 'Pineapple' }, { id: 4, name: 'Mango' } ]; addFruit() { this.fruits = [ ...this.fruits, { id: 5, name: 'Peach' } ]; } } Without providing a unique reference using trackBy, the elements rendering the fruit list are deleted, recreated and rendered on the click of the Add fruit button. We can make this more performant by including the trackBy function. Update the rendered list to use a trackBy function and also the component to include a method that returns the id of each fruit. @Component({ ... template: ` <ul> <li * {{ fruit.name }} </li> </ul> <button (click)="addFruit()">Add fruit</button> `, }) export class FruitsComponent { fruits = [ ... ]; ... trackUsingId(index, fruit){ return fruit.id; } } After this update, Angular knows to append the new fruit to the end of the list without recreating the rest of the list. Now we know lazy loading your application will save a ton of time on page load due to reduced bundle size and on-demand loading. On top of that, server-side rendering can improve the load time of the initial page of your application significantly. Normally, Angular executes your application directly in the browser and updates the DOM when events are triggered. But using Angular Universal, your application will be generated as a static application in your server and served on request from the browser, reducing load times significantly. Pages of your application can also be pre-generated as HTML files. Another benefit of server-side rendering is SEO performance — since your application will be rendered as HTML files, web crawlers can easily consume the information on the webpage. Server-side rendering supports navigation to other routes using routerLink but is yet to support events. So this technique is useful when looking to serve certain parts on the application at record times before navigating to the full application. Visit this in-depth tutorial by the Angular team on how to get started with server-side rendering using Angular Universal. You may find instances when a component within your component tree re-renders several times within a short span of time due to side effects. This doesn’t help the highly performant cause we’re working towards. In situations like this, you have to jump in and get your hands dirty: you have to prevent your component from re-rendering. Let’s say you have a component that has a property is connected to an observer and this observer’s value changes very often — maybe it’s a list of items that different users of the application are adding to. Rather than letting the component re-render each time a new item is added, we’ll wait and handle updating of the application every six seconds. 
Look at the example below: In this component, we have a list of fruits, and a new fruit is added every three seconds: @Component({ selector: 'app-root', template: ` <ul> <li * {{ fruit.name }} </li> </ul> <button (click)="addFruit()">Add fruit</button> `, styleUrls: ['./app.component.scss'] }) export class AppComponent { constructor() { setInterval(() => { this.addFruit(); }, 2000); } fruits = [ { id: 1, name: 'Banana' }, { id: 2, name: 'Apple' }, { id: 3, name: 'Pineapple' }, { id: 4, name: 'Mango' } ]; addFruit() { this.fruits = [ ...this.fruits, { id: 5, name: 'Peach' } ]; } trackUsingId(index, fruit) { return fruit.id; } } Now imagine if this component was rendering other components that rendered other components. I’m sure you get the image I’m painting now — this component will mostly update 20 times a minute, and that’s a lot of re-renders in a minute. What we can do here is to detach the component from the change detector associated with it and handle change detection ourselves. Since this component updates 20 times every minute, we’re looking to halve that. We’ll tell the component to check for updates once every six seconds using the ChangeDetectorRef. Let’s update this component now to use this update: @Component({ selector: 'app-root', template: ... }) export class AppComponent implements OnInit, AfterViewInit { constructor(private detector: ChangeDetectorRef) { // ... } fruits = [ // ... ]; // ... ngAfterViewInit() { this.detector.detach(); } ngOnInit() { setInterval(() => { this.detector.detectChanges(); }, 6000); } } What we’ve done now is to detach the ChangeDetector after the initial view is rendered. We detach in the AfterViewInit lifecycle rather than the OnInit lifecycle because we want the ChangeDetector to render the initial state of the fruits array before we detach it. Now in the OnInit lifecycle, we handle change detection ourselves by calling the detectChanges method every six seconds. We can now batch update the component, and this will improve run-time performance of your application radically. We’ve looked at a few ways to optimize an Angular application. A few other notable techniques are: enableProdModeto optimize your build for production. Employing useful optimization techniques no matter how small and irrelevant the results may seem might go a long way to making your application run even more smoothly than it currently is. The CLI by Angular for bootstrapping your application has employed several optimization techniques, so be sure to get started using the CLI. Further optimization to your server will produce better results, so ensure you look out for those techniques. You can include useful techniques that work for your application too. Happy coding. Check out our All Things Angular page that has a wide range of info and pointers to Angular.
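As a final reference, here is a compact component sketch that puts together the OnPush change detection strategy and the trackBy function discussed above. The standard *ngFor syntax is written out in full, and the component name, fruit names, and IDs are purely illustrative:

import { ChangeDetectionStrategy, Component } from '@angular/core';

@Component({
  selector: 'fruit-list',
  template: `
    <ul>
      <li *ngFor="let fruit of fruits; trackBy: trackUsingId">{{ fruit.name }}</li>
    </ul>
    <button (click)="addFruit()">Add fruit</button>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class FruitListComponent {
  fruits = [
    { id: 1, name: 'Banana' },
    { id: 2, name: 'Apple' }
  ];

  // Replace the array reference so the OnPush strategy picks up the change.
  addFruit() {
    this.fruits = [...this.fruits, { id: this.fruits.length + 1, name: 'Peach' }];
  }

  // Give Angular a stable identity for each row so only new rows are rendered.
  trackUsingId(index: number, fruit: { id: number; name: string }) {
    return fruit.id;
  }
}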
https://www.telerik.com/blogs/tips-for-optimizing-your-angular-application
CC-MAIN-2021-49
en
refinedweb
kantan.xpath

I find myself having to scrape websites with some regularity, and Scala always makes the whole process more painful than it really needs to be - the standard XML API is ok, I suppose, but the lack of XPath support (or actually usable XPath-like DSL) is frustrating. kantan.xpath is a thin wrapper around the Java XPath API that attempts to be type safe, pleasant to use and hide the nasty Java XML types whenever possible.

Documentation and tutorials are available on the companion site, but for those looking for a few quick examples:

import kantan.xpath._           // Basic kantan.xpath types.
import kantan.xpath.implicits._ // Implicit operators and literals.
import kantan.xpath.nekohtml._  // HTML parsing.
import java.net.URI

// Parses an URI as an XML document, finds interesting nodes, extracts their values as ints and store them in a list.
new URI("").evalXPath[List[Int]](xp"//h1/span[@class='num']")

// Similar, but parsing tuples rather than ints and storing the results in a set.
implicit val decode: NodeDecoder[(String, Boolean)] = NodeDecoder.tuple[String, Boolean](xp"./@name", xp"./@count")
new URI("").evalXPath[Set[(String, Boolean)]](xp"//name")

// Same as above, but only looks for the first match.
new URI("").evalXPath[(String, Boolean)](xp"//name")

kantan.xpath is distributed under the Apache 2.0 License.
https://index.scala-lang.org/nrinaudo/kantan.xpath/kantan.xpath-libra/0.5.2?target=_2.13
CC-MAIN-2021-49
en
refinedweb
nfd::face::LpReassembler — reassembles fragmented network-layer packets

#include <lp-reassembler.hpp>

Detailed description: reassembles fragmented network-layer packets. Definition at line 43 of file lp-reassembler.hpp.

Member documentation:
- Constructor: definition at line 41 of file lp-reassembler.cpp.
- Set options for reassembler: definition at line 140 of file lp-reassembler.hpp.
- This is only used for logging, and may be nullptr: definition at line 146 of file lp-reassembler.hpp.
- Adds received fragment to buffer: definition at line 48 of file lp-reassembler.cpp. References beforeTimeout. Referenced by nfd::face::GenericLinkService::GenericLinkService().
- Count of partial packets: definition at line 152 104 of file lp-reassembler.hpp. Referenced by nfd::face::GenericLinkService::GenericLinkService(), and receiveFragment().
https://ndnsim.net/2.3/doxygen/classnfd_1_1face_1_1LpReassembler.html
CC-MAIN-2021-49
en
refinedweb
In a program of any complexity, you'll create hundreds or thousands of names, each pointing to a specific object. How does Python keep track of all these names so that they don't interfere with one another? This course covers Python namespaces, the structures used to organize the symbolic names assigned to objects in a Python program.

In this course, you'll learn:

- How Python organizes symbolic names and objects in namespaces
- When Python creates a new namespace
- How namespaces are implemented
- How variable scope determines symbolic name visibility
- What the LEGB rule is
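As a quick taste of what the course covers, the snippet below shows the LEGB (Local, Enclosing, Global, Built-in) lookup order in action; the variable names are made up for the example:

x = "global"          # module (global) namespace

def outer():
    x = "enclosing"   # enclosing namespace

    def inner():
        x = "local"   # local namespace
        print(x)      # -> "local": found in the Local scope first

    inner()
    print(x)          # -> "enclosing"

outer()
print(x)              # -> "global"
print(len("abc"))     # len is resolved in the Built-in namespace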
https://realpython.com/courses/navigate-namespaces-scope/
CC-MAIN-2021-49
en
refinedweb
The example project has a single class:

class CodeCoverageExample {
    def usedMethod(def a) {
        if (a) {
            dispatchToPrivateMethod()
        } else {
            dispatchToPrivateMethod2()
        }
    }

    def unusedMethod(def a) {
        if (a) {
            dispatchToPrivateMethod()
        }
    }

    private def dispatchToPrivateMethod() { 1 }

    private def dispatchToPrivateMethod2() { 2 }
}

It also has one test:

import spock.lang.Specification

class CodeCoverageExampleSpec extends Specification {
    def "calls usedMethod"() {
        setup:
        CodeCoverageExample cce = new CodeCoverageExample()

        expect:
        result == cce.usedMethod(givenValue)

        where:
        result | givenValue
        1      | true
        2      | false
    }
}

Here are some statistics:

| JDK Version | Coverage Tool | LOC covered | Branches covered | Comments |
|---|---|---|---|---|
| 6 | Cobertura | 71% | 25% | This seems pretty legit to me. |
| 7 | Cobertura | 42% | 12% | This is so broken. It didn't count any line in the private methods, and also didn't count a line hit inside the else branch. |
| 6 | jacoco | 50% | 21% | Jacoco is saying 50% of instructions were executed but no lines of code had 100% of their instructions executed. I don't know how to determine what instructions were missed and if they are important. |
| 7 | jacoco | 50% | 21% | Hurray consistency! |
| 6 | clover | 78% | 69% | I had to calculate these percentages by hand using the XML data. |
| 7 | clover | 78% | 69% | Hurray consistency! However Clover doesn't work on our non-trivial codebase and errors on classes with @CompileStatic. I couldn't reproduce this in my trivial example however. |
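For readers who want to reproduce the JaCoCo run, a minimal build sketch follows. It assumes a Gradle build (the post does not show its build file), and the dependency and tool versions are only placeholders:

apply plugin: 'groovy'
apply plugin: 'jacoco'

repositories { mavenCentral() }

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.2.2'
    testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
}

jacoco {
    // placeholder version; pick whichever JaCoCo release matches your JDK
    toolVersion = '0.7.1.201405082137'
}

jacocoTestReport {
    reports {
        html.enabled true   // report lands under build/reports/jacoco
        xml.enabled false
    }
}

Running "gradle test jacocoTestReport" then produces the line and branch figures that can be compared against the table above.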
https://www.kyleboon.org/blog/2014/04/17/code-coverage-with-groovy/
CC-MAIN-2021-49
en
refinedweb
NAME
aio_return - get return status of asynchronous I/O operation

SYNOPSIS
#include <aio.h>

ssize_t aio_return(struct aiocb *aiocbp);

Link with -lrt.

DESCRIPTION
The aio_return() function returns the final return status of the asynchronous I/O request with control block pointed to by aiocbp. This function should be called only once for any given request, after aio_error(3) returns something other than EINPROGRESS.

RETURN VALUE
If the asynchronous I/O operation has completed, this function returns the value that would have been returned in case of a synchronous read(2), write(2), fsync(2), or fdatasync(2) call. On error, -1 is returned, and errno is set appropriately. If the asynchronous I/O operation has not yet completed, the return value and effect of aio_return() are undefined.

ERRORS
EINVAL: aiocbp does not point at a control block for an asynchronous I/O request of which the return status has not been retrieved yet.
ENOSYS: aio_return() is not implemented.

VERSIONS
The aio_return() function is available since glibc 2.1.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
POSIX.1-2001, POSIX.1-2008.

EXAMPLES
See the sketch following the COLOPHON section below.

SEE ALSO
aio_cancel(3), aio_error(3), aio_fsync(3), aio_read(3), aio_suspend(3), aio_write(3), lio_listio(3), aio(7)

COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
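The calling sequence described above can be sketched as a small program (compile with -lrt; the file path used here is arbitrary):

/* Minimal sketch: read the start of a file asynchronously and fetch the
 * result with aio_return() once aio_error() stops reporting EINPROGRESS. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[256];
    struct aiocb cb;

    int fd = open("/etc/hostname", O_RDONLY);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    memset(&cb, 0, sizeof(cb));   /* offset 0, default notification */
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);

    if (aio_read(&cb) == -1) { perror("aio_read"); return EXIT_FAILURE; }

    while (aio_error(&cb) == EINPROGRESS)
        ;   /* a real program would do useful work or call aio_suspend(3) here */

    ssize_t n = aio_return(&cb);  /* same value a synchronous read() would give */
    if (n == -1) { perror("aio read failed"); return EXIT_FAILURE; }

    printf("read %zd bytes\n", n);
    close(fd);
    return EXIT_SUCCESS;
}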
https://manpages.debian.org/bullseye/manpages-dev/aio_return.3.en.html
CC-MAIN-2021-49
en
refinedweb
. $ cpan -l You can also use "cpan"'s "-a" switch to create an autobundle file that "CPAN.pm" understands and can, }, @INC ); print join "\n", @files; If you simply need to check quickly to see if a module is available, you can check for its documentation. If you can read the documentation the module is most likely installed. If you cannot read the documentation, the module might not have any (in rare cases): $ perldoc Module::Name You can also try to include the module in a one-liner to see if perl finds it: $ perl -MModule::Name -e1 (If you don't receive a ``Can't locate ... in @INC'' error message, then Perl found the module name you asked for.) Before you do anything else, you can help yourself by ensuring that you let Perl tell you about problem areas in your code. By turning on warnings and strictures, you can head off many problems before they get too big. You can find out more about these in strict and warnings. #!/usr/bin/perl use strict; use warnings; Beyond that, the simplest debugger is the "print" function. Use it to look at values as you run your program: print STDERR "The value is [$value]\n"; The Data::Dumper module can pretty-print Perl data structures: use Data::Dumper qw( Dumper ); print STDERR "The hash is " . Dumper( \%hash ) . "\n"; Perl comes with an interactive debugger, which you can start with the "-d" switch. It's fully explained in perldebug. If you'd like a graphical user interface and you have Tk, you can use "ptkdb". It's on CPAN and available for free. If you need something much more sophisticated and controllable, Leon Brocard's Devel::ebug (which you can call with the "-D" switch as "-Debug") gives you the programmatic hooks into everything you need to write your own (without too much pain and suffering). You can also use a commercial debugger such as Affrus (Mac OS X), Komodo from Activestate (Windows and Mac OS X), or EPIC (most platforms). The "Devel" namespace has several modules which you can use to profile your Perl programs. The Devel::NYTProf (New York Times Profiler) does both statement and subroutine profiling. It's available from CPAN and you also invoke it with the "-d" switch: perl -d:NYTProf some_perl.pl It creates a database of the profile information that you can turn into reports. The "nytprofhtml" command turns the data into an HTML report similar to the Devel::Cover report: nytprofhtml, <> . perl -MO=Xref[,OPTIONS] scriptname.plx): The Eclipse Perl Integration Project integrates Perl editing/debugging with Eclipse. Perl Editor by EngInSite is a complete integrated development environment (IDE) for creating, testing, and debugging Perl scripts; the tool runs on Windows 9x/NT/2000/XP or later. GUI editor written in Perl using wxWidgets and Scintilla with lots of smaller features. Aims for a UI based on Perl principles like TIMTOWTDI and ``easy things should be easy, hard things should be is another Win32 multi-language editor/IDE that comes with support for Perl.";. In general, memory allocation and de-allocation isn't something you can or should be worrying about much in Perl. See also ``How can I make my Perl program take less memory?'' There are three. <> . later in perlfaq3, but the curious might still be able to de-compile it. You can try using the native-code compiler described later,". Under ``Classic'' MacOS, a perl program will have the appropriate Creator and Type, so that double-clicking them will invoke the MacPerl application. Under Mac OS X, clickable apps can be made from any "#!" 
script using Wil Sanchez' DropScript utility: <> .. :-) For example: # Unix (including Mac OS X) perl -e 'print "Hello world\n"' # DOS, etc. perl -e "print \"Hello world\n\"" # Mac Classic.]..
https://www.linuxhowtos.org/manpages/1/perlfaq3.htm
CC-MAIN-2021-49
en
refinedweb
There are three aspects to securing WebLogic web services: Access control security You can secure the entire web service by restricting access to the URLs that invoke the web service (or its WSDL). This approach automatically secures any backend components used to implement the web service. Alternatively, you can secure the individual components that make up the web service: the web application that hosts the web-services.xml descriptor file, the stateless session EJBs, a subset of the methods of the EJB, and so on. You also can prevent access to the home page and WSDL, which is by default publicly accessible. Connection level security You can modify the web-services.xml descriptor file to indicate that clients can invoke the web services only over HTTPS. Moreover, if the client authenticates itself using SSL, you need to configure SSL security for WebLogic as well. Message security WebLogic 8.1 lets you use a mixture of digital signing, data encryption, and security token propagation to provide you with message integrity and confidentiality. Like other J2EE components, WebLogic allows you to assign a security policy to a web service component. These policies allow WebLogic to enforce authorization checks on clients who invoke the web service. Because a web service relies on multiple backend components for its implementation, you can independently secure the web service backends as well. Configuring SSL security for a web service is equally easy most of the work lies in building clients that can invoke web services over SSL. WebLogic 8.1 provides SOAP message data encryption and signing, based on OASIS' draft Web Services Security Core Specification. As this is not yet an OASIS standard, WebLogic's implementation is subject to change. 19.8.1 Access Control WebLogic's web services are packaged into standard J2EE enterprise applications. This means that you can secure a web service with access control settings on the various J2EE components that constitute the web service. WebLogic lets you control access to a web service in the following ways: Let's take a closer look at these various mechanisms for securing access to a web service. 19.8.1.1 Assigning a security policy to the web service You can use the Administration Console to assign a security policy to a deployed web service component i.e., a web application that hosts one or more web services. This policy determines the set of users, groups, and roles that are authorized to access the web service. Of course, this also means that the client needs to authenticate itself when interacting with the web service. Only then can WebLogic enforce these authorization checks on the web service. In order to view or modify the security policy assigned to a web service, you need to right-click the web service component in the left pane of the Administration Console and select the Define Security Policy option from the pop-up menu. If you select "Define policies and roles for individual services" instead, you will be able to set a role or policy for each individual operation within a selected service. Chapter 17 provides more information on how to apply security policies to WebLogic resources. 19.8.1.2 Securing the web service URL Clients need a URL to access the web service. For instance, our Simple web service is available via the URL. Similarly, the WSDL for the web service is available via. This means that you could secure access to the entire web service and its operations by simply restricting access to the web service URL. 
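In a standard web.xml this constraint looks roughly like the following sketch. The resource name, URL pattern, and HTTP methods are the ones named in this section; the webserviceusers role and the BASIC login-config are illustrative placeholders, so substitute whatever role and authentication method your realm actually defines:

<security-constraint>
  <web-resource-collection>
    <web-resource-name>Simple Web Service</web-resource-name>
    <url-pattern>/Simple/*</url-pattern>
    <http-method>POST</http-method>
    <http-method>GET</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>webserviceusers</role-name>
  </auth-constraint>
</security-constraint>

<login-config>
  <auth-method>BASIC</auth-method>
</login-config>

<security-role>
  <role-name>webserviceusers</role-name>
</security-role>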
To set up this access control, you need to configure a security constraint over the web service URL by modifying the deployment descriptors of the web application that hosts the web-services.xml descriptor file. Chapter 2 provides more details on how to set up security constraints on a web resource collection. Because you need to enforce access control over the web service URL itself, you must restrict all GET and POST requests to URLs that match the /Simple/* pattern: Simple Web Service Simple/* POST GET This ensures that any client that attempts to invoke the protected web service or access the WSDL that describes the web service must authenticate itself. Later in this chapter, we examine how the client can authenticate itself when invoking an operation on a web service protected in this way. 19.8.1.3 Securing a stateless session EJB and its methods If a stateless session EJB serves as the backend component for a web service, you can use the EJB's deployment descriptors to restrict access to the EJB methods. Chapter 17 explains how you can use the assembly-descriptor element in the ejb-jar.xml descriptor file to associate security roles with individual EJB methods, and the security-role-assignment element in the weblogic-ejb-jar.xml descriptor file to list WebLogic users and groups that belong to the role. You can use this to restrict access to individual operations of the web service by applying security constraints on the EJB methods that implement the operations. Other clients can continue to access the web application, the WSDL, and the home page for the web service. Any unauthorized client that attempts to invoke an operation implemented by a method on a protected stateless session EJB will be denied access: java.rmi.RemoteException: SOAP Fault:javax.xml.rpc.soap.SOAPFaultException: Security Violation: User: '' has insufficient permission to access EJB: type=, application=_appsdir_myEarEJB_dir, module=webserviceEJB.jar, ejb=Case, method=makeUpper, methodInterface=Remote, signature={java.lang.String}.; 19.8.1.4 Removing access to the home page and WSDL You can prevent the home page and WSDL of a web service from being exposed by editing the web-services.xml descriptor file. Simply add an exposeWSDL or exposeHomePage attribute, as shown in the following example: 19.8.1.5 Authenticating client access to a protected web service Once you've restricted the access to a web service, the client can no longer anonymously invoke a web service operation over plain HTTP. The client now needs to authenticate itself as well. For instance, if you secure the URL for the Simple web service and then point your browser to the home page for the web service, you will be greeted with an HTTP 401 (Unauthorized) response. 
Instead, you need to specify a modified web service URL that includes the username and password of an authorized WebLogic user: Similarly, if you've configured access restrictions over the URL for obtaining the WSDL for the web service, again you need to specify the login credentials of a WebLogic user authorized to view the WSDL: For a static client that needs to authenticate itself when invoking an operation on a secure web service, the only change that's required is how the client creates an instance of the SimplePort stub implementation: Simple ws = new Simple_Impl(""); SimplePort port = ws.getSimplePort(username, password); String returnVal = port.makeUpper("Hello There"); Here, we've supplied the username and password of an authorized WebLogic user to the web service-specific implementation of the getSimplePort( ) method. Remember, if you've restricted access to the web service URL, you must also modify the URL for the WSDL to include the user's login credentials. Once the client has authenticated successfully, WebLogic is able to enforce any authorization checks placed on the web service, the URLs, or even the backend stateless session EJBs. In fact, the standard JAX-RPC approach for a client that invokes a secure web service and needs to authenticate itself is to specify values for two authentication properties: The client JAR generated by WebLogic for a particular web service already contains the stub classes that automatically set these login credentials when the Java client invokes the getServicePort( ) method. The following example shows how a JAX-RPC client would submit its credentials before invoking a web service: SimpleStub stub = // ... get the stub; stub._setProperty("javax.xml.rpc.security.auth.username", "juliet"); stub._setProperty("javax.xml.rpc.security.auth.password", "mypassword"); String returnVal = stub.makeUpper("lower case string!"); 19.8.2 Using SSL WebLogic lets you configure a web service so that it's accessible only through the HTTPS protocol, in which case plain HTTP clients will not be able to access the service. This connection-level security provides you with point-to-point security, which is securing communication between two endpoints. If your SOAP messages are going to pass through unsecured intermediaries, such as caches, you may want to also use the more advanced end-to-end security measures, such as SOAP message encryption and digital signing. To force the use of HTTPS, modify the web-services.xml descriptor file by specifying a protocol attribute for the web-service element: WebLogic's servicegen task also can let you adjust the protocol constraint for a web service, by simply adding a protocol attribute as follows: When you configure a web service in this way, clients must create HTTPS connections when invoking a web service operation. Without the HTTPS protocol constraint, clients are free to create either HTTP or HTTPS connections when invoking the web service. Of course, HTTPS connections may be used only if you've properly configured SSL at the server's end. For a web service that accepts only HTTPS connections, a client must use SSL to access the web service operations. 19.8.2.1 Client access using WebLogic's SSL WebLogic provides a client runtime JAR, webserviceclient+ssl.jar, which includes the standard JAX-RPC runtime classes and the SSL implementation classes. 
Thus, in order to configure a client application to use WebLogic's SSL implementation, you need to make a note of the following issues: The following example summarizes these points and shows how you would run a client that needs to interact with HTTPS-protected web services: java -classpath %WL_HOME%serverlibwebserviceclient+ssl.jar;%CLASSPATH% -Dbea.home=c:ea_home -Djava.protocol.handler.pkgs=com.certicom.net.ssl -Dweblogic.webservice.client.ssl.strictcertchecking=false oreilly.wlguide.webservices.secure.client.MyApp 19.8.2.2 Using a proxy server If the client sits behind a firewall and must use a proxy server to invoke the web service, it can specify the host and port of the proxy server using the following two system properties: java -Dweblogic.webservice.transport.https.proxy.host=10.0.0.1 -Dweblogic.webservice.transport.https.proxy.port=4567 ... By specifying these two system properties, the client can make HTTPS connections to the web service via the configured proxy server. 19.8.2.3 Configuring SSL programmatically While you can configure a client to use WebLogic's SSL implementation through the command-line options, you also can achieve the same results programmatically by using the weblogic.webservice.client.WLSSLAdapter class. The following code sample shows how to modify the client-side code so that it can use WebLogic's SSL implementation when invoking an SSL-protected web service: System.setProperty("java.protocol.handler.pkgs", "com.certicom.net.ssl"); SSLAdapterFactory adapterFactory = SSLAdapterFactory.getDefaultFactory( ); WLSSLAdapter adapter = (WLSSLAdapter) adapterFactory.getSSLAdapter( ); adapter.setStrictChecking(false); //optional adapter.setTrustedCertificatesFile("trusted-ca.pem"); adapterFactory.setDefaultAdapter(adapter); adapterFactory.setUseDefaultAdapter(true); Simple ws = new Simple_Impl(argv[0]); SimplePort port = ws.getSimplePort("system", "12341234"); String returnVal = port.makeUpper("Hello There"); // ... If the client uses the generic JAX-RPC interfaces, it also can choose WebLogic's SSL adapter for a particular web service invocation: ServiceFactory factory = ServiceFactory.newInstance( ); Service service = factory.createService(serviceName); Call call = service.createCall( ); call.setProperty("weblogic.webservice.client.ssladapter", adapter); String result = (String) call.invoke( new Object[]{ "SOMEPARAM" } ); If the client statically invokes a web service using the Stub interface, it also needs to set the following property: ((javax.xml.rpc.Stub)stubClass)._setProperty( "weblogic.webservice.client.ssladapter", adapterInstance); 19.8.2.4 Using two-way SSL If the WebLogic server hosting the web service is configured for two-way SSL, you will need to modify your client to load its identity, much like that described in Chapter 16. In this case, we need to modify our client code like this: SSLAdapterFactory adapterFactory = SSLAdapterFactory.getDefaultFactory( ); WLSSLAdapter adapter = (WLSSLAdapter) adapterFactory.getSSLAdapter( ); adapter.setStrictChecking(false); adapter.setTrustedCertificatesFile("x:/server/lib/cacerts"); FileInputStream fs = new FileInputStream(CERT_CERTCHAINFILE); adapter.loadLocalIdentity(fs, CERT_KEYPASSWORD.toCharArray( )); adapterFactory.setDefaultAdapter(adapter); adapterFactory.setUseDefaultAdapter(true); The loadLocalIdentity( ) method expects a FileInputStream that references an encoded certificate chain. 
You can create such a certificate chain by simply appending the mycertfile.pem and mykeyfile.pem (in that order) generated in Chapter 16. 19.8.2.5 Rolling your own SSL implementation In the previous examples, we saw how the client uses an instance of WebLogic's SSLAdapterFactory to manufacture an object that implements the SSLAdapter interface in this case, a WLSSLAdapter class provided by WebLogic: import weblogic.webservice.client.WLSSLAdapter; import weblogic.webservice.client.SSLAdapterFactory; //... SSLAdapterFactory adapterFactory = SSLAdapterFactory.getDefaultFactory( ); WLSSLAdapter adapter = (WLSSLAdapter) adapterFactory.getSSLAdapter( ); It is this adapter class that enables the client to interact with that SSL-protected web service. Thus, in order to use a custom SSL implementation, you need to first create your own SSL adapter class. Example 19-15 provides a sample adapter class that implements the SSLAdapter interface while relying on the standard JSSE implementation of SSL. Example 19-15. Custom SSL adapter class import java.net.URL; import java.net.Socket; import java.net.URLConnection; import java.io.IOException; public class JSSEAdapter implements weblogic.webservice.client.SSLAdapter { // Use Java's standard SSL socket factory javax.net.SocketFactory factory = javax.net.ssl.SSLSocketFactory.getDefault( ); // Use Java's implementation to return an SSL connection to the // server hosting the web service public Socket createSocket(String host, int port) throws IOException { return factory.createSocket(host, port); } // Assumes that you have set the java.protocol.handler.pkgs property public URLConnection openConnection(URL url) throws IOException { return url.openConnection( ); }; } } A client then can create an instance of this custom SSL adapter in two ways: java -Dweblogic.webservice.client.ssl.adapterclass= oreilly.wlguide.webservice.client.JSSEAdapter ... oreilly.wlguide.webservices.client.MyApp weblogic.webservice.client.SSLAdapterFactor In particular, you must override the method that creates a new SSL adapter instance: public SSLAdapter createSSLAdapter( ); The client then needs to create an instance of the custom SSL adapter factory and set it as the default using the following method: SSLAdapterFactory.setDefaultFactory(factory); Subsequently, the client can use this default adapter factory to manufacture an instance of the custom SSL adapter: SSLAdapterFactory myfactory = SSLAdapterFactory.getDefaultFactory( ); JSSEAdapter adapter = (JSSEAdapter) myfactory.getSSLAdapter( ); 19.8.3 SOAP Message Security WebLogic's implementation of SOAP message security is based on the OASIS draft specification WSS: SOAP Message Security, which is based on the WS-Security draft specification. These specifications aim to secure SOAP message exchanges through a flexible set of mechanisms based on security token propagation, message integrity, and message confidentiality. 19.8.3.1 Architecture WebLogic augments three aspects of web services in order to implement SOAP message security: WSDL WebLogic augments the WSDL of a web service to indicate which operations should be secured and how they should be secured. As usual, you can either use the servicegen Ant task or modify the web-services.xml descriptor file to effect these changes. Because there is no standard specification, WebLogic's changes to the WSDL are necessarily proprietary. Client runtime The client runtime is augmented with WebLogic's implementation of SOAP message security. 
It also requires access to a key file and a certificate file, which are used to sign outgoing messages. The runtime performs any encryption and signature tasks just before the SOAP message is sent to the server, after all of the client handlers are executed. Server runtime The server runtime also is augmented with WebLogic's implementation of SOAP message security. The runtime performs any encryption and signature tasks just after receiving the SOAP message, before passing it on to the web service. It requires two key/certificate pairs one for encrypting and one for signing. When a client invokes a web service operation, it reads the WSDL of the service. If the service has added SOAP message security, the WSDL will reflect this. For example, the WSDL will contain the server's public certificate for encrypting any messages that are sent to it! When such an operation is invoked, certificates, signatures, and tokens are sent back and forth many times. Don't be overawed by the following description because most of these actions occur transparently to the client and web service implementation. When a client invokes a web service operation that needs additional message security, the following actions occur before the SOAP message is actually sent: When the server runtime receives a SOAP message and finds that additional security information has been included in the SOAP message, it performs the following actions: Note that WebLogic asserts the identity of the client certificate so that it doesn't accept invocations from just any client. Only those clients with certificates that can be validated in this way may invoke the operations of the web service. So, for a client to use WebLogic's SOAP message security, it must possess a valid public certificate and WebLogic must have an Identity Assertion Provider installed. When WebLogic sends a SOAP response, the same actions occur, but in reverse order: When a client runtime receives a message from the web service, the following sequence of actions take place: The following sections examine how to set up SOAP message security and then put it all into action. For the sake of the discussion, we assume that you have an existing web service say, the Simple web service described earlier in Example 19-1 that needs to be secured through username tokens, encryption, and signing. Although you also can choose which parts of the SOAP message ought to be encrypted instead of just the entire body, we will not consider this here because it involves lengthy changes to the web-services.xml descriptor file. 19.8.3.2 Configuring SOAP message security To enable SOAP message security for a web service, you need to first add a security element to the servicegen task used to build the web service, as shown in Example 19-16. Example 19-16. Using the servicegen task to enforce SOAP message security By supplying the signKeyName and signKeyPass attributes, you enable the signing mechanism on outgoing SOAP messages. Likewise, by supplying the encryptKeyName and encryptKeyPass attributes, you enable the encryption of the body of the SOAP messages. The values of these attributes determine the aliases and passwords of the key and certificate pairs for signing and encryption. We also have set the enablePasswordAuth attribute to true, to force any client of the web service to supply a username token. The security subelement here ensures that SOAP message security is enabled on all operations of the web service. In addition, it secures (encrypts and signs) the entire SOAP message body. 
If you want to secure only a subset of operations, or only parts of the message body, you must manually edit the web-services.xml descriptor file. 19.8.3.3 Creating the certificates The server needs two key pairs. One is used for digitally signing a SOAP message, and another for encrypting the message. WebLogic's current SSL implementation for web services requires that the key length of certificates used for encrypting and signing be at least 1024. In this case, we will use the keytool to create the two keys referenced in Example 19-16. Refer to Chapter 16 to see how you can configure WebLogic to use this store as its identity store. We simply will add the two certificates to a keystore called myIdentityStore.jks developed in that chapter. The following commands create and store the key pairs: keytool -genkey -keysize 1024 -keyalg RSA -dname "cn=system, ou=OR, o=UH, c=US" -alias encryptKeyAlias -keypass mypassword -keystore myidentitystore.jks -storepass mykeystorepass keytool -selfcert -keystore myidentitystore.jks -alias encryptKeyAlias -storepass mykeystorepass -keypass mypassword keytool -genkey -keysize 1024 -keyalg RSA -dname "cn=system, ou=OR, o=UH, c=US" -alias signKeyAlias -keypass mypassword -keystore myidentitystore.jks -storepass mykeystorepass keytool -selfcert -keystore myidentitystore.jks -alias signKeyAlias -storepass mykeystorepass -keypass mypassword Notice how we have set the CN field to system. Later in this chapter, we shall configure the username mapper in the default Identity Assertion Provider to extract the WebLogic username from this field. A production environment would use something more robust. Any client application that needs to interact with the web service must possess its own key pair too. Simply create a new keystore using similar commands the client then can extract the key pair from the keystore. 19.8.3.4 Setting up the Identity Assertion Provider SOAP messages that are signed by the client will also have the client's public certificate embedded in the message. WebLogic uses the certificate to verify both the signature and the client's identity so as to prevent anonymous clients from invoking the operations of the web service. WebLogic does this by invoking the Identity Assertion Provider configured for the security realm. For our example, we simply will use WebLogic's Default Identity Asserter. (Chapter 18 explains how you configure this provider). The client-supplied token in this case is an X.509 certificate, so you must add this to the list of supported token types for the provider. Select the Default Identity Asserter from the left pane of the Administration Console, and in the Types option, move the X.509 to the Chosen column. This enables WebLogic to consider X.509 certificates as a form of identity assertion. You also need to set up a username mapper that can extract some data from the certificate and map it to a WebLogic user. You can either write your own, similar to that in Example 18-2, or use WebLogic's default username mapper. For the running example, the latter approach will suffice. Select the Details tab of the Default Identity Asserter, and then the Use Default User Name Mapper option. Because the username can be extracted from the certificate's CN field, you should choose CN as the Default User Name Mapper Attribute Type and then blank out the Default User Name Mapper Attribute Delimiter. Finally, ensure that Base64Decoding Required is not selected. 
19.8.3.5 Writing the client A client that uses SOAP message security must be modified to support it. First, you need to include BEA's SOAP message security implementation in the client's classpath. In other words, you must add the WL_HOME/server/lib/wsse.jar library to the client's classpath. This library doesn't contain the web services JAX-RPC classes, so you still must keep the existing webserviceclient.jar in the classpath. Our Java client needs to load its security identity into the context and supply the username and password of a valid WebLogic user because we have forced the client to supply a username token. To do this, you must set the relevant attributes on the WebServiceSession object. Example 19-17 illustrates the code for a Java client that can invoke a web service operation securely. Example 19-17. Client code to interact with secure SOAP messages package com.oreilly.wlguide.webservices.secureSOAP.client; import java.io.FileInputStream; import java.net.URL; import java.rmi.RemoteException; import java.security.KeyStore; import java.security.PrivateKey; import java.security.cert.X509Certificate; import javax.xml.namespace.QName; import javax.xml.rpc.Call; import javax.xml.rpc.Service; import javax.xml.rpc.ServiceFactory; import weblogic.webservice.context.WebServiceContext; import weblogic.webservice.context.WebServiceSession; import weblogic.webservice.core.handler.WSSEClientHandler; import weblogic.xml.security.UserInfo; public class Invoke { private static final String KEYSTORE = "myIdentityStore.jks"; private static final String KEYSTORE_PASS = "mystorepass"; private static final String KEY_ALIAS = "myalias"; private static final String KEY_PASS = "mypassword"; static void invoke(String where) throws Exception { // First get hold of the keystore that holds our key/cert pair KeyStore ks = KeyStore.getInstance("JKS"); ks.load(new FileInputStream(KEYSTORE), KEYSTORE_PASS.toCharArray( )); // Use the keystore to load the certificate and private key X509Certificate myCert = (X509Certificate) ks.getCertificate(KEY_ALIAS); PrivateKey myKey = (PrivateKey) ks.getKey(KEY_ALIAS, KEY_PASS.toCharArray( )); // Now retrieve the web service context, and its session, from the service Simple ws = new Simple_Impl(where); WebServiceContext wsCtx = ws.context( ); WebServiceSession session = wsCtx.getSession( ); // Finally, set the attributes session.setAttribute(WSSEClientHandler.CERT_ATTRIBUTE, myCert); session.setAttribute(WSSEClientHandler.KEY_ATTRIBUTE, myKey); // Since we set enablePasswordAuth, we have to supply token and define user UserInfo ui = new UserInfo("someWLUser", "somePassword"); session.setAttribute(WSSEClientHandler.REQUEST_USERINFO, ui); SimplePort port = ws.getSimplePort( ); System.out.println("The service returned: " + port.makeUpper("hello there")); } public static void main(String[] argv) throws Exception { invoke(argv[0]); } } The first part of the code simply retrieves the client's private key and certificate from a keystore. After creating the service object, Simple, the code then retrieves WebLogic's context and session. These objects maintain any server-side state associated with the client. The session then is populated with the digital certificate, private key, and username token into predefined attributes. After this, you should be able to invoke a secured web service operation. Note that the user token information determines which WebLogic user is used to actually invoke the operation. 
19.8.3.6 Running the client Clients that use SOAP message security can be executed in the same way as ordinary web service clients, except that you should include the wsse.jar in the classpath. During development, you may find it useful to enable the debugging flags provided by WebLogic. Use the weblogic.xml.encryption.verbose and weblogic.xml.signature.verbose system properties to obtain debugging information about the encryption and signing processes. For example, you can use the following mouthful when running the client during development: java -Dweblogic.xml.encryption.verbose=true -Dweblogic.xml.signature.verbose=true -Dweblogic.webservice.verbose=true -Dweblogic.webservice.client.ssl.strictcertchecking=false -cp mysecureSOAPclient.jar;classes;y:serverlibwsse.jar; y:serverlibwebserviceclient+ssl.jar com.oreilly.wlguide.webservices.secureSOAP.client.Invoke 19.8.3.7 Encrypting passwords The security element used to include the server's key, certificate, and password information, creates a number of additional elements in the web-services.xml descriptor file. By default, the key passwords are not encrypted in this file. You can encrypt them using the weblogic.webservice.encryptpass utility. This tool encrypts the passwords salted with the domain name. As a result, the EAR or WAR with the encrypted data can be deployed only to the same domain from which you encrypted the passwords in the first place. The following command encrypts the secureSOAPService in the EAR file: java weblogic.webservice.encryptpass -serviceName secureSOAPService out/secureSOAPService.ear You must either run this command from the root of the domain so that it has access to the config.xml file, or specify the -domain argument to point to the root directory.
https://flylib.com/books/en/2.107.1/2874_security.html
CC-MAIN-2019-04
en
refinedweb
Load a 3d model To load a 3d model in your openFrameworks app you have to use the ofxAssimpModelLoader addon, that already comes with your openFrameworks installation. First, you have to include and define ofxAssimpModelLoader in your ofApp.h: #include "ofxAssimpModelLoader.h" ofxAssimpModelLoader yourModel; Then, in your ofApp.cpp file you load the model and draw it like this: void ofApp::setup(){ yourModel.loadModel("squirrel/NewSquirrel.3ds", 20); } void ofApp::draw(){ yourModel.drawFaces(); } In the folder addons/3DModelLoaderExample/ of your openFrameworks installation you can find the complete working example.
https://openframeworks.cc/zh_cn/learning/05_3d/3d_example_how_to/
CC-MAIN-2019-04
en
refinedweb
Jenkins Best Practices – Practical Continuous Deployment in the Real World — GoDaddy Open Source HQ Java. Sc 2 minutes to spare: Apache NiFi on Mac As a Mac user, I usually run Apache NiFi using one of the two approaches: - by standing up a Docker container; - by downloading and installing locally on your Mac; Running a NiFi Container You can install Docker on Mac via Homebrew: brew install docker Alternatively it is possible to download the Docker Community Edition (CE): an easy to install desktop app for building, packaging and testing dockerised apps, which includes tools such as Docker command line, Docker compose and Docker Notary After installing Docker, this will let you pull the NiFi image: docker pull apache/nifi:1.5.0 Next, we can start the image and watch it run: docker run -p 8080:8080 apache/nifi:1.2.0 Downloading and Installing NiFi locally Installing Apache NiFi on Mac is quite straightforward, as follows: brew install nifi This assumes that you have Homebrew installed. If that is not the case, this is the command you will need: ruby -e "$(curl -fsSL)" < /dev/null 2> /dev/null Here is where NiFi has been installed: /usr/local/Cellar/nifi/your-version-of-nifi Some basic operations can be done with these commands: bin/nifi.sh run, it runs in the foreground, bin/nifi.sh start, it runs in the background bin/nifi.sh status, it checks the status bin/nifi.sh stop, it stops NiFi Next step, whatever approach you took at the beginning, is to verify that your NiFi installation/dockerised version is running. This is as simple as visiting the following URL: localhost:8080/nifi Happy Nif-ing 🙂 Machine Learning’s ‘Amazing’ Ability to Predict Chaos Machine Learning’s ‘Amazing’ Ability to Predict Chaos Download SQUID – Your News Buddy squidapp.co/getSQUID So, you want to build a bot with NodeJs? I have used Node.js in a number of projects and in conjunction with the module bundler called Webpack and the automation toolkit Gulp, but still I wanted to experiment with something different that would bring up the advantages of using such a server-side platform. I remembered that the Microsoft Bot Framework employs Node.js for its Bot Builder SDK and why not building bots sounds interesting! I have actually found out that there are a few books specifically focusing on building bots with Node.js and that seemed to be like a fun task. The choice then became clear, let’s use Node.js and Twit, a Twitter API Client for Node, to build a Twitter bot that simply sends a query to the Twitter API, receives a response containing the results of the performed search, and then retweets the most recent tweet returned. Let’s see what we need to achieve this! Set up a dedicated Twitter account for your Bot Bots get usually banned from Twitter, so it is recommended to create Twitter account perhaps with a secondary email address specifically for the following experiment. It is highly recommended that you do not use your “official” Twitter account as it is likely that it will be short-lived. After your new account is activated, go to the Twitter Developer Center and sign in with your new details. You might also want to have a look around and in particular have a read through the documentation on how to get started with the Twitter Developer Platform and how to create your first app. Create a Twitter App From your acccount, you will be able to see the newly created app from here. After creating the application, look for ‘Keys and Access Tokens’ and click on ‘Generate Token Actions’. 
Make a copy of the details below as you will be using them later as part of your - Consumer Key - Consumer Secret - Access Token - Access Token Secret The Part where you code You will interact with your newly created Twitter App via a Nodejs library called Twit. Create a new project folder in your Dev directory (ideally the directory structure where your git installation resides): This will kick off an utility that will take you through the process of creating the package.json file. Then you will need to install Twit, the Twitter API Client for Node that supports REST and Streaming API. and create a file called This will be your main application file, that means the entry point of your app. You will also need a third and additional file called where you will past the following: - Consumer Key - Consumer Secret - Access Token - Access Token Secret It will look like this: Your directory structure should look as follows: The part where you make the Bot do something Next step is to make your bot to query the most recent tweets. We will need to write a function that finds the latest tweets according to the query passed as a parameter. To do so, we need to initialise a params object holding a q property that will refine our searches. In our case, we are targeting tweets with hashtag #nodejs and #Nodejs: the property instructs the bot to search exclusively for the tweets that were posted since the bot was started. We can use , which accepts three arguments: API endpoint, params object (defined by us) and a callback. To post or to retweet the tweet the bot has found, we have used the Twitter.post() method to post to any of the REST API endpoints. Usage In order to run your bot, you should simply type the following command on terminal: Alternatively, it is possible to use: in a nutshell, they are scripts whose goal is to automate repetitive tasks. This requires to modify the file by adding the following lines of code: and then you can type An Oracle JDBC Client A while ago I was tasked to write a small application in order to connect to an Oracle Database and perform a set of simple queries. For such a task, I have employed the DAO (Data Access Object) pattern and a corresponding DAO Interface. A basic Java client, in turn, calls the instantiation of such DAO class, which implements a the DAO interface. As follow, the application in its internal details: Oracle DB Client [code language=”java”] package oracledb.connection.client; import oracledb.connection.dao.OracleDB_DAO; public class OracleConnectionClient { public static void main(String[] args) throws Exception { OracleDB_DAO dao = new OracleDB_DAO(); dao.readPropertiesFile(); dao.openConnection(); dao.getDBCurrentTime(); dao.getFirstNameAndLastNameFromCustomers(); dao.closeConnection(); } } [/code] The Data Access Object (DAO) implementation. The method[code]readPropertiesFile()[/code] parses a properties file containing the access credentials and DB connection details. 
[code language=”java”] package oracledb.connection.dao; import java.io.*; import java.sql.*; import java.util.Properties; public class OracleDB_DAO implements OracleDB_DAO_Interface { public static String SAMPLE_SELECT_QUERY = “SELECT * FROM CUSTOMERS WHERE FirstName = ‘Eliott’ AND LastName = ‘Brown'”; private static String driverClass = “oracle.jdbc.driver.OracleDriver”; private Connection connection; private static String dbUrl; private static String userName; private static String password; static String resourceName = “dbconnection.properties”; /** * Read the properties Initialise the DAO * * @throws IOException * @throws ClassNotFoundException */ public void readPropertiesFile() throws IOException, ClassNotFoundException { ClassLoader loader = Thread.currentThread().getContextClassLoader(); Properties props = new Properties(); InputStream resourceStream = loader.getResourceAsStream(resourceName); { props.load(resourceStream); } // Return the properties dbUrl = props.getProperty(“dburl”); userName = props.getProperty(“dbuser”); password = props.getProperty(“dbpassword”); // Load the Class.forName(driverClass); } /* * (non-Javadoc) * * @see oracledb.connection.dao.OracleDB_DAO_Interface1#openConnection() */ @Override public void openConnection() throws SQLException { // get the connection to the database System.out.println(“Establishing the Connection to the Database”); try { connection = DriverManager.getConnection(dbUrl, userName, password); System.out.println(connection); } catch (SQLException ex) { ex.printStackTrace(); } } /* * (non-Javadoc) * * @see oracledb.connection.dao.OracleDB_DAO_Interface1#closeConnection() */ @Override public void closeConnection() throws SQLException { if (connection != null) { // close the connection connection.close(); } } /* * (non-Javadoc) * * @see oracledb.connection.dao.OracleDB_DAO_Interface1# * getFirstNameAndLastNameFromCustomers() */ @Override @SuppressWarnings(“resource”) public ResultSet getFirstNameAndLastNameFromCustomers() throws SQLException, IOException { // create the prepared stmt Statement stmt = connection.createStatement(); // assign the query to a variable String sql = SAMPLE_SELECT_QUERY; // execute the query ResultSet rs = stmt.executeQuery(sql); System.out.println(“This print the ResultSet for getPlanByMSISD ” + rs); @SuppressWarnings(“unused”) PrintWriter csvWriter = new PrintWriter(new File(“sample.csv”)); stmt.close(); // close statement return rs; } /* * (non-Javadoc) * * @see oracledb.connection.dao.OracleDB_DAO_Interface1#getDBCurrentTime() */ @Override public String getDBCurrentTime() throws SQLException, IOException { String dateTime = null; // create the prepared stmt Statement stmt = connection.createStatement(); ResultSet rst = stmt.executeQuery(“select SYSDATE from dual”); while (rst.next()) { dateTime = rst.getString(1); } // close the resultset System.out.println(“This prints the dateTime from the DB ” + dateTime); rst.close(); return dateTime; } } [/code] The DAO Interface that defines the standard operations to be performed on a model object: [code language=”java”] package oracledb.connection.dao; import java.io.IOException; import java.sql.ResultSet; import java.sql.SQLException; public interface OracleDB_DAO_Interface { /** * Open the Dao Connection * * @param * @throws SQLException * @throws IOException */ void openConnection() throws SQLException; /** * Close the connection * * @throws SQLException */ void closeConnection() throws SQLException; /** * Get the resultset from the the select query * * @throws 
    /**
     * Get the resultset from the select query
     *
     * @throws SQLException
     * @throws IOException
     */
    ResultSet getFirstNameAndLastNameFromCustomers() throws SQLException, IOException;

    /**
     * Get the current time via a DB query
     *
     * @return
     * @throws SQLException
     * @throws IOException
     */
    String getDBCurrentTime() throws SQLException, IOException;
}
[/code]

IE11 and String.prototype.includes() in Angular directives

I just came across an interesting behaviour with Angular directives and IE11. Apparently IE11 does not work with the function String.prototype.includes() when it is used inside a directive expression, for example on a <div> carrying an ng- attribute that wraps <span>Some text</span>, where str == 'Sometest' (see the sketch at the end of this post). The generic syntax is:

str.includes(searchString[, position])

Browser compatibility is an issue with IE11 and, generally speaking, it is poor across IE, so it is highly recommended to use indexOf instead, as per the syntax below:

str.indexOf(searchValue[, fromIndex])

The indexOf method returns the index of the string passed in. If the value is not found, it returns -1.

Further documentation:
MDN documentation for includes
MDN documentation for indexOf
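A minimal reconstruction of the kind of template the post describes; the original snippets were damaged in extraction, so the ng-if directive and the 'Some' search term below are assumptions rather than the author's exact code:

[code language="html"]
<!-- Breaks in IE11: String.prototype.includes() is not implemented there -->
<div ng-if="str.includes('Some')"><span>Some text</span></div>

<!-- IE11-safe alternative using indexOf -->
<div ng-if="str.indexOf('Some') !== -1"><span>Some text</span></div>
[/code]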
https://semanticreatures.com/
CC-MAIN-2019-04
en
refinedweb
Ask the PRO LANGUAGES: C# ASP.NET VERSIONS: 1.0 | 1.1 Get a Rich UI With WinForms Use a DataReader to enumerate multiple sets of query results. By Jeff Prosise Q: I'm interested in using Windows Forms controls to build rich user interfaces in ASP.NET Web pages. I know how to deploy the controls and access their properties and methods from client-side script. Is it also possible to process a Windows Forms control's events in the browser? A: It's no secret that one way to overcome the limitations of HTML and endow browser-based clients with rich user interfaces is to host Windows Forms controls in Web pages. As an example, here's a derived class named WebSlider that encapsulates a Windows Forms TrackBar control: namespace Wintellect { public class WebSlider : System.Windows.Forms.TrackBar {} } If this source-code file is compiled into a DLL named WebSlider.dll and deployed on a Web server, this tag declares an instance of it, causing a vertical TrackBar control to appear on the page: The first time the page is accessed, Internet Explorer downloads the DLL and, with the .NET Framework's help, caches it on the client. The two chief requirements are that the client must be running Internet Explorer 5.01 or higher and must have the .NET Framework installed. Accessing the control's properties and methods using client-side script is simplicity itself. If the form containing the control is named MyForm, this JavaScript statement moves the TrackBar thumb to position 5 by setting the control's Value property: MyForm.Slider.Value = 5; Writing client-side script that processes events fired by the control, however, is less straightforward. First, you must define an interface that encapsulates the events you wish to expose to the browser, and you must instruct the .NET Framework to expose the interface's members through a COM IDispatch interface. Then, you must associate this interface with the control. Figure 1 contains the source code for a class derived by System.Windows.Forms.TrackBar that exposes the Scroll events fired in response to thumb movements to unmanaged code. The IWebSliderEvents interface defines the event as a method and assigns it a dispatch ID. (In COM, all methods exposed through an IDispatch interface require unique integer dispatch IDs.) Note that the method signature exactly matches that of the Scroll event defined in the base class. The [InterfaceType] attribute tells the .NET Framework to expose the IWebSliderEvents interface to unmanaged code as an IDispatch interface. The [ComSourceInterfaces] attribute adorning the class definition lets the framework know that WebSlider should support IWebSliderEvents as a source interface, which is COM speak for an interface used to source (fire) events. using System; using System.Runtime.InteropServices; namespace Wintellect { [ComSourceInterfaces (typeof (IWebSliderEvents))] public class WebSlider : System.Windows.Forms.TrackBar {} [InterfaceType (ComInterfaceType.InterfaceIsIDispatch)] public interface IWebSliderEvents { [DispId (1)] void Scroll (Object sender, EventArgs e); } } Figure 1. This Windows Forms TrackBar control derivative exposes Scroll events to unmanaged code. Figure 2 lists an .aspx file you can use to test the WebSlider control. The Figure 2. This .aspx file creates a WebSlider control and responds to Scroll events using client-side script. Figure 3. Here's the WebSlider control in action. A client-side event handler continually updates the number shown below the control as the thumb moves. 
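For reference, here is a rough sketch of the hosting markup and client-side event hookup that this article describes; the original tags were lost in extraction, so the id, sizing, and the element used to display the value are illustrative assumptions rather than the article's exact markup:

<object id="Slider" width="50" height="200"
    classid="http:WebSlider.dll#Wintellect.WebSlider">
</object>

<script language="javascript" for="Slider" event="Scroll(sender, e)">
    // Read the control's Value property each time the thumb moves
    Output.innerText = MyForm.Slider.Value;
</script>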
Note that in order for unmanaged code hosted by a browser to "see" events fired from managed code, the assembly containing the control - in this case, WebSlider.dll - must be granted full trust on the client computer. (For security reasons, managed code can call unmanaged code only if it is granted permission to do so.) You can use the Microsoft .NET Framework wizards found in Control Panel\Administrative Tools to grant the assembly full trust. You must grant this permission for this example to work. Q: Can you use a DataReader to enumerate the results of a query that produces multiple result sets? A: You bet. The secret is the DataReader's NextResult method, which moves the virtual cursor maintained by the DataReader to the next result set. This code uses a compound query to create a SqlDataReader that encapsulates two result sets, then it outputs the first column in each result set to a console window: SqlConnection connection = new SqlConnection ("server=localhost;database=pubs;uid=sa"); try { connection.Open (); SqlCommand command = new SqlCommand ("select title from titles; " + "select au_lname from authors", connection); SqlDataReader reader = command.ExecuteReader (); do { while (reader.Read ()) Console.WriteLine (reader[0]); Console.WriteLine (); } while (reader.NextResult ()); } finally { connection.Close (); } The ASPX file in Figure 4 demonstrates how you might use this knowledge in a Web page. Figure 4 uses the same compound query to initialize two DataGrids with one DataReader. Note the call to NextResult between calls to DataBind. This call points the cursor to the second result set prior to binding to the second DataGrid. This feature of the DataReader classes is especially handy when using stored procedures that return two or more result sets. <%@ Import Namespace="System.Data.SqlClient" %> void Page_Load (Object sender, EventArgs e) { if (!IsPostBack) { SqlConnection connection = new SqlConnection ("server=localhost;database=pubs;uid=sa"); try { connection.Open (); SqlCommand command = new SqlCommand ("select title from titles; " + "select au_lname from authors", connection); SqlDataReader reader = command.ExecuteReader (); // Initialize the first DataGrid Titles.DataSource = reader; Titles.DataBind (); // Advance to the next result set reader.NextResult (); // Initialize the second DataGrid Authors.DataSource = reader; Authors.DataBind (); } finally { connection.Close (); } } } Figure 4. This ASP.NET page uses a single SqlDataReader to initialize two DataGrids with two sets of query results. Q: How do I assign a client-side OnClick handler to an ASP.NET button control? If I include an OnClick attribute in the control tag, ASP.NET looks for a server-side event handler with the specified name. A: Because OnClick is a legal client- and server-side attribute, you must add OnClick attributes that reference client-side handlers programmatically to tags that declare runat="server" button controls. Suppose, for example, that the button is declared this way:
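The declaration itself did not survive extraction, so the following is only a plausible sketch of what the author is describing; the ID, text, and handler names are placeholders:

<asp:Button ID="MyButton" Runat="server" Text="Click Me" OnClick="OnServerClick" />

<script runat="server">
  void Page_Load (Object sender, EventArgs e)
  {
      // Attach a client-side onclick programmatically; the server-side
      // OnClick handler wired up above is unaffected
      MyButton.Attributes.Add ("onclick",
          "return confirm ('Post back to the server?');");
  }
</script>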
https://www.itprotoday.com/web-development/get-rich-ui-winforms
CC-MAIN-2019-04
en
refinedweb
Cleaning Column Labels

1. Drop extraneous columns
Drop features that aren't consistent (not present in both datasets) or aren't relevant to our questions. Use Pandas' drop function.

2. Rename Columns
- Change the "Sales Area" column label in the 2008 dataset to "Cert Region" for consistency.
- Rename all column labels to replace spaces with underscores and convert everything to lowercase. (Underscores can be much easier to work with in Python than spaces. For example, having spaces wouldn't allow you to use df.column_name instead of df['column_name'] to select columns or use query(). Being consistent with lowercase and underscores also helps make column names easy to remember.)

# load datasets
import pandas as pd
df_08 = pd.read_csv('all_alpha_08.csv')
df_18 = pd.read_csv('all_alpha_18.csv')

# view 2008 dataset
df_08.head(1)

# view 2018 dataset
df_18.head(1)

Drop Extraneous Columns

# drop columns from 2008 dataset
df_08.drop(['Stnd', 'Underhood ID', 'FE Calc Appr', 'Unadj Cmb MPG'], axis=1, inplace=True)

# confirm changes
df_08.head(1)

# drop columns from 2018 dataset
df_18.drop(['Stnd', 'Underhood ID', 'FE Calc Appr', 'Unadj Cmb MPG'], axis=1, inplace=True)

# confirm changes
df_18.head(1)

Rename Columns

# rename Sales Area to Cert Region
df_08.rename(columns={'Sales Area': 'Cert Region'}, inplace=True)

# confirm changes
df_08.head(1)

# replace spaces with underscores and lowercase labels for 2008 dataset
df_08.rename(columns=lambda x: x.strip().lower().replace(" ", "_"), inplace=True)

# confirm changes
df_08.head(1)

# confirm column labels for 2008 and 2018 datasets are identical
df_08.columns == df_18.columns

# make sure they're all identical like this
(df_08.columns == df_18.columns).all()

# save new datasets for next section
df_08.to_csv('data_08.csv', index=False)
df_18.to_csv('data_18.csv', index=False)

Filter, Drop Nulls, Dedupe

1. Filter
For consistency, only compare cars certified by California standards. Filter both datasets using query to select only rows where cert_region is CA. Then, drop the cert_region columns, since they will no longer provide any useful information (we'll know every value is 'CA').

2. Drop Nulls
Drop any rows in both datasets that contain missing values.

3. Dedupe
Drop any duplicate rows in both datasets.
# load datasets
import pandas as pd
df_08 = pd.read_csv('data_08.csv')
df_18 = pd.read_csv('data_18.csv')

# view dimensions of dataset
df_08.shape

# view dimensions of dataset
df_18.shape

Filter by Certification Region

# filter datasets for rows following California standards
df_08 = df_08.query('cert_region == "CA"')
df_18 = df_18.query('cert_region == "CA"')

# confirm only certification region is California
df_08['cert_region'].unique()

# confirm only certification region is California
df_18['cert_region'].unique()

# drop certification region columns from both datasets
df_08.drop(['cert_region'], axis=1, inplace=True)
df_18.drop(['cert_region'], axis=1, inplace=True)

df_08.shape
df_18.shape

Drop Rows with Missing Values

# view missing value count for each feature in 2008
df_08.isnull().sum()

# view missing value count for each feature in 2018
df_18.isnull().sum()

# drop rows with any null values in both datasets
df_08.dropna(inplace=True)
df_18.dropna(inplace=True)

# checks if any of columns in 2008 have null values – should print False
df_08.isnull().sum().any()

# checks if any of columns in 2018 have null values – should print False
df_18.isnull().sum().any()

Dedupe Data

# print number of duplicates in 2008 and 2018 datasets
print(df_08.duplicated().sum())
print(df_18.duplicated().sum())

# drop duplicates in both datasets
df_08.drop_duplicates(inplace=True)
df_18.drop_duplicates(inplace=True)

# print number of duplicates again to confirm dedupe – should both be 0
print(df_08.duplicated().sum())
print(df_18.duplicated().sum())

# save progress for the next section
df_08.to_csv('data_08.csv', index=False)
df_18.to_csv('data_18.csv', index=False)

Fixing cyl Data Type
- 2008: extract int from string
- 2018: convert float to int

# load datasets
import pandas as pd
df_08 = pd.read_csv('data_08.csv')
df_18 = pd.read_csv('data_18.csv')

# check value counts for the 2008 cyl column
df_08['cyl'].value_counts()

Read this to help you extract ints from strings in Pandas for the next step.

# Extract int from strings in the 2008 cyl column
df_08['cyl'] = df_08['cyl'].str.extract('(\d+)').astype(int)

FutureWarning: currently extract(expand=None) means expand=False (return Index/Series/DataFrame) but in a future version of pandas this will be changed to expand=True (return DataFrame)

# Check value counts for 2008 cyl column again to confirm the change
df_08['cyl'].value_counts()

# convert 2018 cyl column to int
df_18['cyl'] = df_18['cyl'].astype(int)

df_08.to_csv('data_08.csv', index=False)
df_18.to_csv('data_18.csv', index=False)

Fixing air_pollution_score Data Type
- 2008: convert string to float
- 2018: convert int to float

# load datasets
import pandas as pd
df_08 = pd.read_csv('data_08.csv')
df_18 = pd.read_csv('data_18.csv')

# try using Pandas to_numeric or astype function to convert the
# 2008 air_pollution_score column to float — this won't work
df_08.air_pollution_score = df_08.air_pollution_score.astype(float)

ValueError: could not convert string to float: '6/4'

Figuring out the issue

Looks like this isn't going to be as simple as converting the datatype. According to the error above, the value at row 582 is "6/4" – let's check it out.

df_08.iloc[582]

It's not just the air pollution score! The mpg columns and greenhouse gas scores also seem to have the same problem – maybe that's why these were all saved as strings! According to this link, which I found from the PDF documentation: "If a vehicle can operate on more than one type of fuel, an estimate is provided for each fuel type." Ohh..
so all vehicles with more than one fuel type, or hybrids, like the one above (it uses ethanol AND gas) will have a string that holds two values – one for each. This is a little tricky, so I'm going to show you how to do it with the 2008 dataset, and then you'll try it with the 2018 dataset.

# First, let's get all the hybrids in 2008
hb_08 = df_08[df_08['fuel'].str.contains('/')]
hb_08

Looks like this dataset only has one! The 2018 has MANY more – but don't worry – the steps I'm taking here will work for that as well!

# hybrids in 2018
hb_18 = df_18[df_18['fuel'].str.contains('/')]
hb_18

We're going to take each hybrid row and split them into two new rows – one with values for the first fuel type (values before the "/"), and the other with values for the second fuel type (values after the "/"). Let's separate them with two dataframes!

# create two copies of the 2008 hybrids dataframe
df1 = hb_08.copy()  # data on first fuel type of each hybrid vehicle
df2 = hb_08.copy()  # data on second fuel type of each hybrid vehicle

# Each one should look like this
df1

For this next part, we're going to use Pandas' apply function. See the docs here.

# columns to split by "/"
split_columns = ['fuel', 'air_pollution_score', 'city_mpg', 'hwy_mpg', 'cmb_mpg', 'greenhouse_gas_score']

# apply split function to each column of each dataframe copy
for c in split_columns:
    df1[c] = df1[c].apply(lambda x: x.split("/")[0])
    df2[c] = df2[c].apply(lambda x: x.split("/")[1])

# this dataframe holds info for the FIRST fuel type of the hybrid
# aka the values before the "/"s
df1

# this dataframe holds info for the SECOND fuel type of the hybrid
# aka the values after the "/"s
df2

# combine dataframes to add to the original dataframe
new_rows = df1.append(df2)

# now we have separate rows for each fuel type of each vehicle!
new_rows

# drop the original hybrid rows
df_08.drop(hb_08.index, inplace=True)

# add in our newly separated rows
df_08 = df_08.append(new_rows, ignore_index=True)

# check that all the original hybrid rows with "/"s are gone
df_08[df_08['fuel'].str.contains('/')]

df_08.shape

Repeat this process for the 2018 dataset

# create two copies of the 2018 hybrids dataframe, hb_18
df1 = hb_18.copy()
df2 = hb_18.copy()

Split values for fuel, city_mpg, hwy_mpg, cmb_mpg
You don't need to split for air_pollution_score or greenhouse_gas_score here because these columns are already ints in the 2018 dataset.

# list of columns to split
split_columns = ['fuel', 'city_mpg', 'hwy_mpg', 'cmb_mpg']

# apply split function to each column of each dataframe copy
for c in split_columns:
    df1[c] = df1[c].apply(lambda x: x.split("/")[0])
    df2[c] = df2[c].apply(lambda x: x.split("/")[1])

# append the two dataframes
new_rows = df1.append(df2)

# drop each hybrid row from the original 2018 dataframe
# do this by using Pandas drop function with hb_18's index
df_18.drop(hb_18.index, inplace=True)

# append new_rows to df_18
df_18 = df_18.append(new_rows, ignore_index=True)

# check that they're gone
df_18[df_18['fuel'].str.contains('/')]

df_18.shape

Now we can comfortably continue the changes needed for air_pollution_score!
Here they are again:
- 2008: convert string to float
- 2018: convert int to float

# convert string to float for 2008 air pollution column
df_08.air_pollution_score = df_08.air_pollution_score.astype(float)

# convert int to float for 2018 air pollution column
df_18.air_pollution_score = df_18.air_pollution_score.astype(float)

df_08.to_csv('data_08.csv', index=False)
df_18.to_csv('data_18.csv', index=False)

Fix city_mpg, hwy_mpg, cmb_mpg datatypes
2008 and 2018: convert string to float

# load datasets
df_08 = pd.read_csv('data_08.csv')
df_18 = pd.read_csv('data_18.csv')

# convert mpg columns to floats
mpg_columns = ['city_mpg', 'hwy_mpg', 'cmb_mpg']
for c in mpg_columns:
    df_18[c] = df_18[c].astype(float)
    df_08[c] = df_08[c].astype(float)

Fix greenhouse_gas_score datatype
2008: convert from float to int

# convert from float to int
df_08['greenhouse_gas_score'] = df_08['greenhouse_gas_score'].astype(int)

All the datatypes are now fixed! Take one last check to confirm all the changes.

df_08.dtypes
df_18.dtypes
df_08.dtypes == df_18.dtypes

# Save your new CLEAN datasets as new files!
df_08.to_csv('clean_08.csv', index=False)
df_18.to_csv('clean_18.csv', index=False)

Drawing Conclusions

Use the space below to address questions on datasets clean_08.csv and clean_18.csv

# load datasets
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df_08 = pd.read_csv('clean_08.csv')
df_18 = pd.read_csv('clean_18.csv')
df_08.head()

Q1: Are more unique models using alternative sources of fuel? By how much?

# Let's first look at what the sources of fuel are and which ones are alternative sources.
df_08.fuel.value_counts()
df_18.fuel.value_counts()

Looks like the alternative sources of fuel available in 2008 are CNG and ethanol, and those in 2018 ethanol and electricity. (You can use Google if you weren't sure which ones are alternative sources of fuel!)

# how many unique models used alternative sources of fuel in 2008
alt_08 = df_08.query('fuel in ["CNG", "ethanol"]').model.nunique()
alt_08

# how many unique models used alternative sources of fuel in 2018
alt_18 = df_18.query('fuel in "Ethanol" or fuel in "Electricity"').model.nunique()
alt_18

plt.bar(["2008", "2018"], [alt_08, alt_18])
plt.title("Number of Unique Models Using Alternative Fuels")
plt.xlabel("Year")
plt.ylabel("Number of Unique Models")

Since 2008, the number of unique models using alternative sources of fuel increased by 24. We can also look at proportions.

# total unique models each year
total_08 = df_08.model.nunique()
total_18 = df_18.model.nunique()
total_08, total_18

prop_08 = alt_08/total_08
prop_18 = alt_18/total_18
prop_08, prop_18

plt.bar(['2008', '2018'], [prop_08, prop_18])
plt.title("Proportion of Unique Models Using Alternative Fuels")
plt.xlabel("Year")
plt.ylabel("Proportion of Unique Models")

Q2: How much have vehicle classes improved in fuel economy?

Let's look at the average fuel economy for each vehicle class for both years.

veh_08 = df_08.groupby('veh_class').cmb_mpg.mean()
veh_08

veh_18 = df_18.groupby('veh_class').cmb_mpg.mean()
veh_18

# how much they've increased by for each vehicle class
inc = veh_18 - veh_08
inc

# only plot the classes that exist in both years
inc.dropna(inplace=True)
plt.subplots(figsize=(8, 5))
plt.bar(inc.index, inc)
plt.title('Improvements in Fuel Economy from 2008 to 2018 by Vehicle Class')
plt.xlabel('Vehicle Class')
plt.ylabel('Increase in Average Combined MPG')

Q3: What are the characteristics of SmartWay vehicles? Have they changed over time?
We can analyze this by filtering each dataframe by SmartWay classification and exploring these datasets.

# smartway labels for 2008
df_08.smartway.unique()

# get all smartway vehicles in 2008 (the 2008 labels are lowercase 'yes'/'no')
smart_08 = df_08.query('smartway == "yes"')

# explore smartway vehicles in 2008
smart_08.describe()

Use what you've learned so far to further explore this dataset on 2008 smartway vehicles.

# smartway labels for 2018
df_18.smartway.unique()

# get all smartway vehicles in 2018
smart_18 = df_18.query('smartway in ["Yes", "Elite"]')
smart_18.describe()

Use what you've learned so far to further explore this dataset on 2018 smartway vehicles.

Q4: What features are associated with better fuel economy?

You can explore trends between cmb_mpg and the other features in this dataset, or filter this dataset like in the previous question and explore the properties of that dataset. For example, you can select all vehicles that have the top 50% fuel economy ratings like this.

top_08 = df_08.query('cmb_mpg > cmb_mpg.mean()')
top_08.describe()

top_18 = df_18.query('cmb_mpg > cmb_mpg.mean()')
top_18.describe()

Q5: For all of the models that were produced in 2008 that are still being produced in 2018, how much has the mpg improved and which vehicle improved the most?

This is a question regarding models that were updated since 2008 and are still being produced in 2018. In order to answer it, we need a way to compare models that exist in both datasets. To do this, let's first learn about merges.

Types of Merges

So far, we've learned about appending dataframes. Now we'll learn about Pandas Merges, a different way of combining dataframes. This is similar to the database-style "join." If you're familiar with SQL, this comparison with SQL may help you connect these two. Here are the four types of merges in Pandas. Below, "key" refers to common columns in both dataframes that we're joining on.
- Inner Join – Use intersection of keys from both frames.
- Outer Join – Use union of keys from both frames.
- Left Join – Use keys from left frame only.
- Right Join – Use keys from right frame only.

Below are diagrams to visualize each type. Read the documentation for Pandas Merges here.

Merging Datasets

1. Rename 2008 columns to distinguish from 2018 columns after the merge
To do this, use Pandas' rename() with a lambda function. See example here. In the lambda function, take the first 10 characters of the column label and concatenate it with _2008. (Only take the first 10 characters to prevent really long column names.) The lambda function should look something like this: lambda x: x[:10] + "_2008"
In your rename, don't forget to specify the parameter columns= when you add the lambda function!

2. Perform inner merge
To answer the last question, we are only interested in how the same model of car has been updated and how the new model's mpg compares to the old model's mpg. Perform an inner merge with the left on model_2008 and the right on model. See documentation for Pandas' merge here.

Create combined dataset

# rename 2008 columns
df_08.rename(columns=lambda x: x[:10] + "_2008", inplace=True)

# view to check names
df_08.head()

# merge datasets
df_combined = df_08.merge(df_18, left_on='model_2008', right_on='model', how='inner')

# view to check merge
df_combined.head()

df_combined.to_csv('combined_dataset.csv', index=False)

Results with Merged Dataset

Use the notebook below to answer the final question with the merged dataset.

Q5: For all of the models that were produced in 2008 that are still being produced now, how much has the mpg improved and which vehicle improved the most?
Here are the steps for answering this question.

1. Create a new dataframe, model_mpg, that contains the mean combined mpg values in 2008 and 2018 for each unique model
To do this, group by model and find the mean cmb_mpg_2008 and mean cmb_mpg for each.

# start from the merged dataframe created above
model_mpg = df_combined.loc[:, ['model_2008', 'cmb_mpg_2008', 'model', 'cmb_mpg']]
model_mpg.head(27)

2. Create a new column, mpg_change, with the change in mpg
Subtract the mean mpg in 2008 from that in 2018 to get the change in mpg.

model_mpg['diff'] = model_mpg['cmb_mpg'] - model_mpg['cmb_mpg_2008']

3. Find the vehicle that improved the most
Find the max mpg change, and then use query or indexing to see what model it is!

model_mpg.query('diff == diff.max()')
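The selection above keeps the individual rows; if you want the per-model means that step 1 actually describes, a small sketch along these lines (my own addition, not part of the original notebook) would do it:

# group by model name and average the 2008 and 2018 combined mpg
model_mpg = df_combined.groupby('model')[['cmb_mpg_2008', 'cmb_mpg']].mean()

# change in mpg for each model, and the model that improved the most
model_mpg['mpg_change'] = model_mpg['cmb_mpg'] - model_mpg['cmb_mpg_2008']
model_mpg[model_mpg['mpg_change'] == model_mpg['mpg_change'].max()]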
http://tomreads.com/2018/02/22/data-analysis-process-case-study-2-udacity/
CC-MAIN-2019-04
en
refinedweb
XmFrame - The Frame widget class

#include <Xm/Frame.h>

Frame is a very simple manager used to enclose a single work area child in a border drawn by Frame. It uses the Manager class resources for border drawing and performs geometry management so that its size always matches its child's outer size plus the Frame's margins and shadow thickness. Frame is most often used to enclose other managers when the application developer desires the manager to have the same border appearance as the primitive widgets. Frame can also be used to enclose primitive widgets that do not support the same type of border drawing. This gives visual consistency when you develop applications using diverse widget sets. Constraint resources are used to designate a child as the Frame title, align its text, and control its vertical alignment in relation to Frame's top shadow. The title appears only at the top of the Frame. If the Frame's parent is a Shell widget, XmNshadowThickness defaults to 2.

Frame inherits behavior and resources from the Core, Composite, Constraint, and XmManager classes. The class pointer is xmFrameWidgetClass. The class name is XmFrame.
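As an illustration of the constraint resources mentioned above, a frame with a title child can be created roughly as follows; this fragment is illustrative only (the parent widget and resource values are assumptions, not part of the manual page):

#include <Xm/Frame.h>
#include <Xm/Label.h>

/* 'parent' is assumed to be an existing manager widget */
Widget frame = XtVaCreateManagedWidget("frame",
        xmFrameWidgetClass, parent,
        XmNshadowType, XmSHADOW_ETCHED_IN,
        NULL);

/* The title is an ordinary child tagged with the XmNchildType constraint */
Widget title = XtVaCreateManagedWidget("Options",
        xmLabelWidgetClass, frame,
        XmNchildType, XmFRAME_TITLE_CHILD,
        XmNchildHorizontalAlignment, XmALIGNMENT_BEGINNING,
        NULL);

/* The single work-area child is then added as a normal child of the frame */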
http://vaxination.ca/motif/XmFrame_3X.html
CC-MAIN-2019-04
en
refinedweb
Practical ASP.NET Now that you know how to use them (see Part 1 if you don't), it's time to create custom ones. Last time, we introduced the open source ConventionTests library. If you are new to ConventionTests be sure to check that article out first. While ConventionTests comes with a number of pre-built supplied conventions, it is also possible to create your own custom conventions and then use them in your test code. In this example we're going to create our own custom convention that we can use to check that all interfaces follow the traditional naming convention of starting with the letter "I". The first thing we need to do is create a new class in our test project. Assuming that the TestStack.ConventionTests NuGet package is installed in the test project, our new class can implement the required ConventionTests interface. If we want to inspect and check the types in the production code project we implement the IConvention<Types> interface. This interface defines a couple of members: an Execute method where our custom verification code will go; and the ConventionReason string property. The Execute method has the following signature: public void Execute(Types data, IConventionResultContext result). The data parameter will contain a list of all the types we need to check, this is the list of types that are selected by us in our actual unit test code. The result parameter is used to signal back to our actual test whether any of the types in data did not meet the convention. To complete the custom convention checking logic in our custom convention we call the Is method of the result object. This method takes a string representing the title of the convention when it is output in the test output. The second parameter of the Is method takes a list of all the types that have failed to meet the custom convention. If there are any types in this list the test will fail. The ConventionReason property allows a descriptive explanation of what the custom convention represents. Listing 1 shows our custom interface-naming convention. Listing 1: Custom Interface-Naming Convention. using System.Linq; using TestStack.ConventionTests; using TestStack.ConventionTests.ConventionData; namespace MyClassLibrary.Tests { public class InterfaceNamingConvention : IConvention { public void Execute(Types data, IConventionResultContext result) { // Find any types passed to our custom convention from the test code // that are interfaces and that also don't start with 'I' var interfacesWithBadNames = data.TypesToVerify.Where(x => x.IsInterface && !x.Name.StartsWith("I")); // Once we have a list, if it contains > 0 types the convention will // fail and so will the tests result.Is("Interfaces must begin with the letter 'I'.", interfacesWithBadNames); } public string ConventionReason { get { return "The naming convention is to start all interface names with the letter 'I'."; } } } }. To use this convention in an actual unit test, our custom convention is treated the same way as one of the pre-built suppled conventions: [Test] public void AllInterfacesShouldBeNamedCorrectly() { var typesToCheck = Types.InAssemblyOf<MyClassLibrary.SomeClassInAssemblyBeingTested>(); var convention = new InterfaceNamingConvention(); Convention.Is(convention, typesToCheck); }. Notice in the above test, we are creating an instance of our custom convention rather than one that comes out of the box with ConventionTests. 
If we now define a couple of interfaces in our production code "MyClassLibrary" project as follows:

public interface IEatable { }
public interface IDrinkable { }

When we run our unit test, it will pass (see Figure 1) because both of the preceding interfaces start with I. If we now change IDrinkable to just Drinkable (no preceding I) and run the test again, it will fail (see Figure 2):

public interface IEatable { }
public interface Drinkable { }

If we check the output from the test we can see the following:

'Interfaces must begin with the letter 'I'.' for 'Types in MyClassLibrary'
--------------------------------------------------------------------------
MyClassLibrary.Drinkable

Here we can see the detail of the failing convention, namely that the interface Drinkable is named incorrectly. I hope this article and the last one were a helpful look at convention testing. Let me know your thoughts on this and future topics we can cover.
https://visualstudiomagazine.com/articles/2015/03/26/conventiontests-part-2-asp-net.aspx
CC-MAIN-2019-04
en
refinedweb
Introduction to Quantum Computing¶ With every breakthrough in science there is the potential for new technology. For over twenty years, researchers have done inspiring work in quantum mechanics, transforming it from a theory for understanding nature into a fundamentally new way to engineer computing technology. This field, quantum computing, is beautifully interdisciplinary, and impactful in two major ways: - It reorients the relationship between physics and computer science. Physics does not just place restrictions on what computers we can design, it also grants new power and inspiration. - It can simulate nature at its most fundamental level, allowing us to solve deep problems in quantum chemistry, materials discovery, and more. Quantum computing has come a long way, and in the next few years there will be significant breakthroughs in the field. To get here, however, we have needed to change our intuition for computation in many ways. As with other paradigms — such as object-oriented programming, functional programming, distributed programming, or any of the other marvelous ways of thinking that have been expressed in code over the years — even the basic tenants of quantum computing opens up vast new potential for computation. However, unlike other paradigms, quantum computing goes further. It requires an extension of classical probability theory. This extension, and the core of quantum computing, can be formulated in terms of linear algebra. Therefore, we begin our investigation into quantum computing with linear algebra and probability. From Bit to Qubit¶ Probabilistic Bits as Vector Spaces¶ From an operational perspective, a bit is described by the results of measurements performed on it. Let the possible results of measuring a bit (0 or 1) be represented by orthonormal basis vectors \(\vec{0}\) and \(\vec{1}\). We will call these vectors outcomes. These outcomes span a two-dimensional vector space that represents a probabilistic bit. A probabilistic bit can be represented as a vector where \(a\) represents the probability of the bit being 0 and \(b\) represents the probability of the bit being 1. This clearly also requires that \(a+b=1\). In this picture the system (the probabilistic bit) is a two-dimensional real vector space and a state of a system is a particular vector in that vector space. import numpy as np import matplotlib.pyplot as plt outcome_0 = np.array([1.0, 0.0]) outcome_1 = np.array([0.0, 1.0]) a = 0.75 b = 0.25 prob_bit = a * outcome_0 + b * outcome_1 X, Y = prob_bit plt.figure() ax = plt.gca() ax.quiver(X, Y, angles='xy', scale_units='xy', scale=1) ax.set_xlim([0, 1]) ax.set_ylim([0, 1]) plt.draw() plt.show() Given some state vector, like the one plotted above, we can find the probabilities associated with each outcome by projecting the vector onto the basis outcomes. This gives us the following rule: where Pr(0) and Pr(1) are the probabilities of the 0 and 1 outcomes respectively. Dirac Notation¶ Physicists have introduced a convenient notation for the vector transposes and dot products we used in the previous example. This notation, called Dirac notation in honor of the great theoretical physicist Paul Dirac, allows us to define Thus, we can rewrite our “measurement rule” in this notation as We will use this notation throughout the rest of this introduction. Multiple Probabilistic Bits¶ This vector space interpretation of a single probabilistic bit can be straightforwardly extended to multiple bits. 
Let us take two coins as an example (labelled 0 and 1 instead of H and T since we are programmers). Their states can be represented as where \(1_u\) represents the outcome 1 on coin \(u\). The combined system of the two coins has four possible outcomes \(\{ 0_u0_v,\;0_u1_v,\;1_u0_v,\;1_u1_v \}\) that are the basis states of a larger four-dimensional vector space. The rule for constructing a combined state is to take the tensor product of individual states, e.g. Then, the combined space is simply the space spanned by the tensor products of all pairs of basis vectors of the two smaller spaces. Similarly, the combined state for \(n\) such probabilistic bits is a vector of size \(2^n\) and is given by \(\bigotimes_{i=0}^{n-1}|\,v_i\rangle\). We will talk more about these larger spaces in the quantum case, but it is important to note that not all composite states can be written as tensor products of sub-states (e.g. consider the state \(\frac{1}{2}|\,0_u0_v\rangle + \frac{1}{2}|\,1_u1_v\rangle\)). The most general composite state of \(n\) probabilistic bits can be written as \(\sum_{j=0}^{2^n - 1} a_{j} (\bigotimes_{i=0}^{n-1}|\,b_{ij}\rangle\)) where each \(b_{ij} \in \{0, 1\}\) and \(a_j \in \mathbb{R}\), i.e. as a linear combination (with real coefficients) of tensor products of basis states. Note that this still gives us \(2^n\) possible states. Qubits¶ Quantum mechanics rewrites these rules to some extent. A quantum bit, called a qubit, is the quantum analog of a bit in that it has two outcomes when it is measured. Similar to the previous section, a qubit can also be represented in a vector space, but with complex coefficients instead of real ones. A qubit system is a two-dimensional complex vector space, and the state of a qubit is a complex vector in that space. Again we will define a basis of outcomes \(\{|\,0\rangle, |\,1\rangle\}\) and let a generic qubit state be written as Since these coefficients can be imaginary, they cannot be simply interpreted as probabilities of their associated outcomes. Instead we rewrite the rule for outcomes in the following manner: and as long as \(|\alpha|^2 + |\beta|^2 = 1\) we are able to recover acceptable probabilities for outcomes based on our new complex vector. This switch to complex vectors means that rather than representing a state vector in a plane, we instead represent the vector on a sphere (called the Bloch sphere in quantum mechanics literature). From this perspective the quantum state corresponding to an outcome of 0 is represented by: Notice that the two axes in the horizontal plane have been labeled \(x\) and \(y\), implying that \(z\) is the vertical axis (not labeled). Physicists use the convention that a qubit’s \(\{|\,0\rangle, |\,1\rangle\}\) states are the positive and negative unit vectors along the z axis, respectively. These axes will be useful later in this document. Multiple qubits are represented in precisely the same way, by taking linear combinations (with complex coefficients, now) of tensor products of basis states. Thus \(n\) qubits have \(2^n\) possible states. An Important Distinction¶ An important distinction between the probabilistic case described above and the quantum case is that probabilistic states may just mask out ignorance. For example a coin is physically only 0 or 1 and the probabilistic view merely represents our ignorance about which it actually is. This is not the case in quantum mechanics. 
Assuming events occuring at a distance from one another cannot instantaneously influence each other, the quantum states — as far as we know — cannot mask any underlying state. This is what people mean when they say that there is no local hidden variable theory for quantum mechanics. These probabilistic quantum states are as real as it gets: they don’t just describe our knowledge of the quantum system, they describe the physical reality of the system. Some Code¶ Let us take a look at some code in pyQuil to see how these quantum states play out. We will dive deeper into quantum operations and pyQuil in the following sections. Note that in order to run these examples you will need to install pyQuil and download the QVM and Compiler. Each of the code snippets below will be immediately followed by its output. # Imports for pyQuil (ignore for now) import numpy as np from pyquil.quil import Program from pyquil.api import WavefunctionSimulator # create a WavefunctionSimulator object wavefunction_simulator = WavefunctionSimulator() # pyQuil is based around operations (or gates) so we will start with the most # basic one: the identity operation, called I. I takes one argument, the index # of the qubit that it should be applied to. from pyquil.gates import I # Make a quantum program that allocates one qubit (qubit #0) and does nothing to it p = Program(I(0)) # Quantum states are called wavefunctions for historical reasons. # We can run this basic program on our connection to the simulator. # This call will return the state of our qubits after we run program p. # This api call returns a tuple, but we'll ignore the second value for now. wavefunction = wavefunction_simulator.wavefunction(p) # wavefunction is a Wavefunction object that stores a quantum state as a list of amplitudes alpha, beta = wavefunction=(1+0j) and beta=0j The probability of measuring the qubit in outcome 0 is 1.0 The probability of measuring the qubit in outcome 1 is 0.0 Applying an operation to our qubit affects the probability of each outcome. # We can import the qubit "flip" operation, called X, and see what it does. # We will learn more about this operation in the next section. from pyquil.gates import X p = Program(X(0)) wavefunc = wavefunction_simulator.wavefunction(p) alpha, beta = wavefunc=0j and beta=(1+0j) The probability of measuring the qubit in outcome 0 is 0.0 The probability of measuring the qubit in outcome 1 is 1.0 In this case we have flipped the probability of outcome 0 into the probability of outcome 1 for our qubit. We can also investigate what happens to the state of multiple qubits. We’d expect the state of multiple qubits to grow exponentially in size, as their vectors are tensored together. # Multiple qubits also produce the expected scaling of the state. p = Program(I(0), I(1)) wavefunction = wavefunction_simulator.wavefunction(p) print("The quantum state is of dimension:", len(wavefunction.amplitudes)) p = Program(I(0), I(1), I(2), I(3)) wavefunction = wavefunction_simulator.wavefunction(p) print("The quantum state is of dimension:", len(wavefunction.amplitudes)) p = Program() for x in range(10): p += I(x) wavefunction = wavefunction_simulator.wavefunction(p) print("The quantum state is of dimension:", len(wavefunction.amplitudes)) The quantum state is of dimension: 4 The quantum state is of dimension: 16 The quantum state is of dimension: 1024 Let’s look at the actual value for the state of two qubits combined. 
The resulting dictionary of this method contains outcomes as keys and the probabilities of those outcomes as values. # wavefunction(Program) returns a coefficient array that corresponds to outcomes in the following order wavefunction = wavefunction_simulator.wavefunction(Program(I(0), I(1))) print(wavefunction.get_outcome_probs()) {'00': 1.0, '01': 0.0, '10': 0.0, '11': 0.0} Qubit Operations¶ In the previous section we introduced our first two operations: the I (or Identity) operation and the X (or NOT) operation. In this section we will get into some more details on what these operations are. Quantum states are complex vectors on the Bloch sphere, and quantum operations are matrices with two properties: - They are reversible. - When applied to a state vector on the Bloch sphere, the resulting vector is also on the Bloch sphere. Matrices that satisfy these two properties are called unitary matrices. Such matrices have the characteristic property that their complex conjugate transpose is equal to their inverse, a property directly linked to the requirement that the probabilities of measuring qubits in any of the allowed states must sum to 1. Applying an operation to a quantum state is the same as multiplying a vector by one of these matrices. Such an operation is called a gate. Since individual qubits are two-dimensional vectors, operations on individual qubits are 2x2 matrices. The identity matrix leaves the state vector unchanged: so the program that applies this operation to the zero state is just p = Program(I(0)) print(wavefunction_simulator.wavefunction(p)) (1+0j)|0> Pauli Operators¶ Let’s revisit the X gate introduced above. It is one of three important single-qubit gates, called the Pauli operators: from pyquil.gates import X, Y, Z p = Program(X(0)) wavefunction = wavefunction_simulator.wavefunction(p) print("X|0> = ", wavefunction) print("The outcome probabilities are", wavefunction.get_outcome_probs()) print("This looks like a bit flip.\n") p = Program(Y(0)) wavefunction = wavefunction_simulator.wavefunction(p) print("Y|0> = ", wavefunction) print("The outcome probabilities are", wavefunction.get_outcome_probs()) print("This also looks like a bit flip.\n") p = Program(Z(0)) wavefunction = wavefunction_simulator.wavefunction(p) print("Z|0> = ", wavefunction) print("The outcome probabilities are", wavefunction.get_outcome_probs()) print("This state looks unchanged.") X|0> = (1+0j)|1> The outcome probabilities are {'0': 0.0, '1': 1.0} This looks like a bit flip. Y|0> = 1j|1> The outcome probabilities are {'0': 0.0, '1': 1.0} This also looks like a bit flip. Z|0> = (1+0j)|0> The outcome probabilities are {'0': 1.0, '1': 0.0} This state looks unchanged. The Pauli matrices have a visual interpretation: they perform 180-degree rotations of qubit state vectors on the Bloch sphere. They operate about their respective axes as shown in the Bloch sphere depicted above. For example, the X gate performs a 180-degree rotation about the \(x\) axis. This explains the results of our code above: for a state vector initially in the +\(z\) direction, both X and Y gates will rotate it to -\(z\), and the Z gate will leave it unchanged. However, notice that while the X and Y gates produce the same outcome probabilities, they actually produce different states. These states are not distinguished if they are measured immediately, but they produce different results in larger programs. 
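A quick way to see this numerically (this check is an addition of ours, not part of the original guide) is to apply the Pauli matrices to the |0> vector directly with NumPy:

import numpy as np

ket0 = np.array([1, 0], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

x0 = X @ ket0   # amplitudes [0, 1]
y0 = Y @ ket0   # amplitudes [0, 1j]

print(np.abs(x0)**2, np.abs(y0)**2)   # identical outcome probabilities
print(np.allclose(x0, y0))            # False: the state vectors differ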
Quantum programs are built by applying successive gate operations: # Composing qubit operations is the same as multiplying matrices sequentially p = Program(X(0), Y(0), Z(0)) wavefunction = wavefunction_simulator.wavefunction(p) print("ZYX|0> = ", wavefunction) print("With outcome probabilities\n", wavefunction.get_outcome_probs()) ZYX|0> = [ 0.-1.j 0.+0.j] With outcome probabilities {'0': 1.0, '1': 0.0} Multi-Qubit Operations¶ Operations can also be applied to composite states of multiple qubits. One common example is the controlled-NOT or CNOT gate that works on two qubits. Its matrix form is: Let’s take a look at how we could use a CNOT gate in pyQuil. from pyquil.gates import CNOT p = Program(CNOT(0, 1)) wavefunction = wavefunction_simulator.wavefunction(p) print("CNOT|00> = ", wavefunction) print("With outcome probabilities\n", wavefunction.get_outcome_probs(), "\n") p = Program(X(0), CNOT(0, 1)) wavefunction = wavefunction_simulator.wavefunction(p) print("CNOT|01> = ", wavefunction) print("With outcome probabilities\n", wavefunction.get_outcome_probs(), "\n") p = Program(X(1), CNOT(0, 1)) wavefunction = wavefunction_simulator.wavefunction(p) print("CNOT|10> = ", wavefunction) print("With outcome probabilities\n", wavefunction.get_outcome_probs(), "\n") p = Program(X(0), X(1), CNOT(0, 1)) wavefunction = wavefunction_simulator.wavefunction(p) print("CNOT|11> = ", wavefunction) print("With outcome probabilities\n", wavefunction.get_outcome_probs(), "\n") CNOT|00> = (1+0j)|00> With outcome probabilities {'00': 1.0, '01': 0.0, '10': 0.0, '11': 0.0} CNOT|01> = (1+0j)|11> With outcome probabilities {'00': 0.0, '01': 0.0, '10': 0.0, '11': 1.0} CNOT|10> = (1+0j)|10> With outcome probabilities {'00': 0.0, '01': 0.0, '10': 1.0, '11': 0.0} CNOT|11> = (1+0j)|01> With outcome probabilities {'00': 0.0, '01': 1.0, '10': 0.0, '11': 0.0} The CNOT gate does what its name implies: the state of the second qubit is flipped (negated) if and only if the state of the first qubit is 1 (true). Another two-qubit gate example is the SWAP gate, which swaps the \( |01\rangle \) and \(|10\rangle \) states: from pyquil.gates import SWAP p = Program(X(0), SWAP(0,1)) wavefunction = wavefunction_simulator.wavefunction(p) print("SWAP|01> = ", wavefunction) print("With outcome probabilities\n", wavefunction.get_outcome_probs()) SWAP|01> = (1+0j)|10> With outcome probabilities {'00': 0.0, '01': 0.0, '10': 1.0, '11': 0.0} In summary, quantum computing operations are composed of a series of complex matrices applied to complex vectors. These matrices must be unitary (meaning that their complex conjugate transpose is equal to their inverse) because the overall probability of all outcomes must always sum to one. The Quantum Abstract Machine¶ We now have enough background to introduce the programming model that underlies Quil. This is a hybrid quantum-classical model in which \(N\) qubits interact with \(M\) classical bits: These qubits and classical bits come with a defined gate set, e.g. which gate operations can be applied to which qubits. Different kinds of quantum computing hardware place different limitations on what gates can be applied, and the fixed gate set represents these limitations. Full details on the Quantum Abstract Machine and Quil can be found in the Quil whitepaper. The next section on measurements will describe the interaction between the classical and quantum parts of a Quantum Abstract Machine (QAM). 
Qubit Measurements¶ Measurements have two effects: - They project the state vector onto one of the basic outcomes - (optional) They store the outcome of the measurement in a classical bit. Here’s a simple example: # Create a program that stores the outcome of measuring qubit #0 into classical register [0] p = Program() classical_register = p.declare('ro', 'BIT', 1) p += Program(I(0)).measure(0, classical_register[0]) Up until this point we have used the quantum simulator to cheat a little bit — we have actually looked at the wavefunction that comes back. However, on real quantum hardware, we are unable to directly look at the wavefunction. Instead we only have access to the classical bits that are affected by measurements. This functionality is emulated by QuantumComputer.run(). Note that the run command is to be applied on the compiled version of the program. from pyquil import get_qc qc = get_qc('9q-square-qvm') print (qc.run(qc.compile(p))) [[0]] We see that the classical register reports a value of zero. However, if we had flipped the qubit before measurement then we obtain: p = Program() classical_register = p.declare('ro', 'BIT', 1) p += Program(X(0)) # Flip the qubit p.measure(0, classical_register[0]) # Measure the qubit print (qc.run(qc.compile(p))) [[1]] These measurements are deterministic, e.g. if we make them multiple times then we always get the same outcome: p = Program() classical_register = p.declare('ro', 'BIT', 1) p += Program(X(0)) # Flip the qubit p.measure(0, classical_register[0]) # Measure the qubit trials = 10 p.wrap_in_numshots_loop(shots=trials) print (qc.run(qc.compile(p))) [[1], [1], [1], [1], [1], [1], [1], [1], [1], [1]] Classical/Quantum Interaction¶ However this is not the case in general — measurements can affect the quantum state as well. In fact, measurements act like projections onto the outcome basis states. To show how this works, we first introduce a new single-qubit gate, the Hadamard gate. The matrix form of the Hadamard gate is: The following pyQuil code shows how we can use the Hadamard gate: from pyquil.gates import H # The Hadamard produces what is called a superposition state coin_program = Program(H(0)) wavefunction = wavefunction_simulator.wavefunction(coin_program) print("H|0> = ", wavefunction) print("With outcome probabilities\n", wavefunction.get_outcome_probs()) H|0> = (0.7071067812+0j)|0> + (0.7071067812+0j)|1> With outcome probabilities {'0': 0.49999999999999989, '1': 0.49999999999999989} A qubit in this state will be measured half of the time in the \( |0\rangle \) state, and half of the time in the \( |1\rangle \) state. In a sense, this qubit truly is a random variable representing a coin. In fact, there are many wavefunctions that will give this same operational outcome. There is a continuous family of states of the form that represent the outcomes of an unbiased coin. Being able to work with all of these different new states is part of what gives quantum computing extra power over regular bits. 
p = Program() ro = p.declare('ro', 'BIT', 1) p += Program(H(0)).measure(0, ro[0]) # Measure qubit #0 a number of times p.wrap_in_numshots_loop(shots=10) # We see probabilistic results of about half 1's and half 0's print (qc.run(qc.compile(p))) [[0], [1], [1], [0], [1], [0], [0], [1], [0], [0]] pyQuil allows us to look at the wavefunction after a measurement as well: coin_program = Program(H(0)) print ("Before measurement: H|0> = ", wavefunction_simulator.wavefunction(coin_program), "\n") ro = coin_program.declare('ro', 'BIT', 1) coin_program.measure(0, ro[0]) for _ in range(5): print ("After measurement: ", wavefunction_simulator.wavefunction(coin_program)) Before measurement: H|0> = (0.7071067812+0j)|0> + (0.7071067812+0j)|1> After measurement: (1+0j)|1> After measurement: (1+0j)|1> After measurement: (1+0j)|1> After measurement: (1+0j)|1> After measurement: (1+0j)|1> We can clearly see that measurement has an effect on the quantum state independent of what is stored classically. We begin in a state that has a 50-50 probability of being \( |0\rangle \) or \( |1\rangle \). After measurement, the state changes into being entirely in \( |0\rangle \) or entirely in \( |1\rangle \) according to which outcome was obtained. This is the phenomenon referred to as the collapse of the wavefunction. Mathematically, the wavefunction is being projected onto the vector of the obtained outcome and subsequently rescaled to unit norm. # This happens with bigger systems too, as can be seen with this program, # which prepares something called a Bell state (a special kind of "entangled state") bell_program = Program(H(0), CNOT(0, 1)) wavefunction = wavefunction_simulator.wavefunction(bell_program) print("Before measurement: Bell state = ", wavefunction, "\n") classical_regs = bell_program.declare('ro', 'BIT', 2) bell_program.measure(0, classical_regs[0]).measure(1, classical_regs[1]) for _ in range(5): wavefunction = wavefunction_simulator.wavefunction(bell_program) print("After measurement: ", wavefunction.get_outcome_probs()) Before measurement: Bell state = (0.7071067812+0j)|00> + (0.7071067812+0j)|11>} The above program prepares entanglement because, even though there are random outcomes, after every measurement both qubits are in the same state. They are either both \( |0\rangle \) or both \( |1\rangle \). This special kind of correlation is part of what makes quantum mechanics so unique and powerful. Classical Control¶ There are also ways of introducing classical control of quantum programs. For example, we can use the state of classical bits to determine what quantum operations to run. true_branch = Program(X(7)) # if branch false_branch = Program(I(7)) # else branch # Branch on ro[1] p = Program() ro = p.declare('ro', 'BIT', 8) p += Program(X(0)).measure(0, ro[1]).if_then(ro[1], true_branch, false_branch) # Measure qubit #7 into ro[7] p.measure(7, ro[7]) # Run and check register [7] print (qc.run(qc.compile(p))) [[1 1]] The second [1] here means that qubit 7 was indeed flipped. Example: The Probabilistic Halting Problem¶ A fun example is to create a program that has an exponentially increasing chance of halting, but that may run forever! p = Program() ro = p.declare('ro', 'BIT', 1) inside_loop = Program(H(0)).measure(0, ro[0]) p.inst(X(0)).while_do(ro[0], inside_loop) qc = get_qc('9q-square-qvm') print (qc.run(qc.compile(p))) [[0]] Next Steps¶ We hope that you have enjoyed your whirlwind tour of quantum computing. You are now ready to check out the Installation and Getting Started guide! 
If you would like to learn more, Nielsen and Chuang’s Quantum Computation and Quantum Information is a particularly excellent resource for newcomers to the field. If you’re interested in learning about the software behind quantum computing, take a look at our blog posts on The Quantum Software Challenge.
https://pyquil.readthedocs.io/en/v2.2.0/intro.html
CC-MAIN-2019-04
en
refinedweb
Builder Pattern – Creational In this post, I’d like to talk about the Builder Pattern. This pattern is part of the Creational grouping in which other patterns such as Factory and Singleton are also a part of. For what do we use the Builder Pattern? Sometimes we might need to have objects that work in the same way, however, have different components internally. Or we might just need two objects that are similarly built, but of course not the same. For those situations, we can make use of the Builder Pattern. The Builder does get rid of cases such as an explosion of subclasses to deal with the different configurations that an object can have; cases where classes have a big amount of constructors or cases where a class has a constructor with many parameters to try to deal with all possible configurations. Most important of all the Builder pattern separates the (maybe complex) construction of an object, from the representation of this object. Structure Here I’d like to define two ways that this is encountered: the formal way and another way that you see used quite a lot and in my opinion, it is slightly different than the formal definition. Formal Structure Here is the basic structure of the Builder Pattern (if you need a refresher on UML check this post and this post) in the way it is defined in the GoF Patterns Book: Builder The Builder is the interface which will define which defines the parts that the constructor accepts Concrete Builder This is the implementation of the builder interface. It will assemble the object accordingly to how it defines the object to be implemented Director It is simply the class that makes the use of the Builder in order to construct and get the object. Product The final object which the Builder is tasked to build. Example Scenario Let us use the scenario where we want to build a car. Now a car has the same structure: it has wheels, a transmission, a chassis, a motor, doors, etc. However, each of those components can vary itself in different ways. This is where the builder pattern shines! If we make use of the SportCarBuilder variant or the RegularCarBuilder variant we are in both cases getting a Car out of the builder! Implementation In this example, I won’t define all subtypes used in the code, but just the main parts. Alright so let’s define the Builder Interface: public interface CarBuilder { public void setEngine (Engine engine); public void setTransmission(Transmission transmission); public void setWheels (Wheel wheel); public void setChassis(Chassis chassis); } Alright with that we can define the concrete builder: public class CarBuilderImpl implements CarBuilder { private Engine engine; private Transmission transmission; private Chassis chassis; private Wheel wheel; @Override public void setEngine(Engine engine) { this.engine = engine; } @Override public void setTransmission(Transmission transmission) { this.transmission = transmission; } @Override public void setChassis(Chassis chassis) { this.chassis = chassis; } @Override public void setWheel(Wheel wheel) { this.wheel = wheel; } public Car getResult() { return new Car(engine, transmission, chassis, wheel); } } Note that the builder contains a method that will create the object and return it. 
Now our Director, which knows how each car type should be constructed:

public class Director {
    private CarBuilder builder;

    public Director() {
        this.builder = new CarBuilderImpl();
    }

    public Car construct(CarType type) {
        switch (type) {
            case SPORT:
                return constructSportsCar();
            case REGULAR:
            default:
                return constructCityCar();
        }
    }

    private Car constructSportsCar() {
        builder.setEngine(new Engine(3.0, 0));
        builder.setTransmission(Transmission.AUTOMATIC);
        builder.setChassis(new SportsChassis());
        builder.setWheels(new SportsWheel());
        return builder.getResult();
    }

    private Car constructCityCar() {
        builder.setEngine(new Engine(1.2, 0));
        builder.setTransmission(Transmission.MANUAL);
        builder.setChassis(new TownCarChassis());
        builder.setWheels(new RegularWheel());
        return builder.getResult();
    }
}

And finally, we can use the code we wrote by passing a type to the Director.construct() method.

“Out in the Wild” Structure

Here is the basic structure of the Builder Pattern that you might come across in your daily job or in your favorite open source code. The two forms are quite similar, but I find it nice to have an extra example:

Builder
The Builder is the class that takes care of building the product object correctly and eventually returning it to the requesting class. You often see this class as an inner class of the Product class itself; however, this is not a clean-code approach. One should aim for 1 class = 1 file.

Director
Just like in the former case, this is simply the class that makes use of the Builder in order to construct and get the object.

Product
The final object which the Builder is tasked to build.

Example Scenario

The scenario is exactly the same as above, but you will see the differences in the code.

Implementation

First, we have the Builder itself, which is not an interface this time. Each setter returns a reference to the builder itself to allow chaining. It also makes use of the fluent style, which makes the code read more like a normal sentence.

public class CarBuilder {
    Engine engine;
    Transmission transmission;
    Wheel wheel;
    Chassis chassis;

    public CarBuilder withEngine(Engine engine) {
        /* Catch and throw illegal arguments here if you want to have certain guarantees */
        this.engine = engine;
        return this;
    }

    public CarBuilder withTransmission(Transmission transmission) {
        /* Catch and throw illegal arguments here if you want to have certain guarantees */
        this.transmission = transmission;
        return this;
    }

    public CarBuilder withWheels(Wheel wheel) {
        /* Catch and throw illegal arguments here if you want to have certain guarantees */
        this.wheel = wheel;
        return this;
    }

    public CarBuilder withChassis(Chassis chassis) {
        /* Catch and throw illegal arguments here if you want to have certain guarantees */
        this.chassis = chassis;
        return this;
    }

    public Car build() {
        return new Car(this);
    }
}

Note the build() method, which orders the builder to do what it is supposed to do, given its name. It passes the builder itself to the Car constructor and with that avoids long lists of parameters. All the fields are package-private, which makes sense since the Car constructor will want to access them.
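The post mentions passing a type to Director.construct() but never shows the call site, so here is a minimal sketch of what the client code could look like (CarType is the enum with SPORT and REGULAR used in the switch above; nothing else is assumed):

Director director = new Director();

Car sportsCar = director.construct(CarType.SPORT);   // sports engine, automatic, sports chassis and wheels
Car cityCar   = director.construct(CarType.REGULAR); // 1.2 engine, manual, town chassis, regular wheels

Either variant of the Director exposes the same construct() call, so the client does not care which builder style sits behind it — that is exactly the separation of construction from representation mentioned at the start.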
Now, before a quick word about the Car object, let us show how the Director looks when we use the structure above instead of the formal one:

public class Director {
    public Car construct(CarType type) {
        switch (type) {
            case SPORT:
                return constructSportsCar();
            case REGULAR:
            default:
                return constructCityCar();
        }
    }

    private Car constructSportsCar() {
        return new CarBuilder().withEngine(new Engine(3.0, 0))
                .withTransmission(Transmission.AUTOMATIC)
                .withChassis(new SportsChassis())
                .withWheels(new SportsWheel())
                .build();
    }

    private Car constructCityCar() {
        return new CarBuilder().withEngine(new Engine(1.2, 0))
                .withTransmission(Transmission.MANUAL)
                .withChassis(new TownCarChassis())
                .withWheels(new RegularWheel())
                .build();
    }
}

In my opinion, this is more concise.

Finally, the Car object. In this case, we can make the constructor package-private, since it should only be accessed by the Builder. It receives a builder and unpacks it onto its own fields:

public class Car {
    private Engine engine;
    private Transmission transmission;
    private Wheel wheel;
    private Chassis chassis;

    Car(CarBuilder builder) {
        this.engine = builder.engine;
        this.transmission = builder.transmission;
        this.wheel = builder.wheel;
        this.chassis = builder.chassis;
    }

    // Getters and setters
}

As you can see, the Car class can take what it needs and even perform some work on the values before assigning them to its own fields.

Conclusion

So this is the Builder pattern! I hope it cleared up your doubts or even taught you something new. Please comment away if you have any doubts or if I can help you with anything.
http://fdiez.org/builder-pattern-creational/?share=google-plus-1
CC-MAIN-2019-09
en
refinedweb
public class CheckedInputStream extends FilterInputStream

An input stream that also maintains a checksum of the data being read; the checksum is supplied as a Checksum when the stream is constructed.

Methods inherited from class java.io.FilterInputStream: available, close, mark, markSupported, read, reset
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

read(byte[] buf, int off, int len): If len is not zero, the method blocks until some input is available; otherwise, no bytes are read and 0 is returned. Overrides: read in class FilterInputStream.
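The page above survives only in fragments, so here is a short, self-contained sketch of the usual usage pattern: read a file through a CheckedInputStream wrapped around a CRC32, then ask for the checksum value (the file name is a placeholder):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

public class ChecksumDemo {
    public static void main(String[] args) throws IOException {
        // The checksum is updated automatically as bytes are read through the stream.
        try (CheckedInputStream in =
                new CheckedInputStream(new FileInputStream("data.bin"), new CRC32())) {
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) {
                // reading is all that is needed
            }
            System.out.println("CRC32: " + in.getChecksum().getValue());
        }
    }
}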
http://www.cs.oberlin.edu/~rhoyle/17f-cs151/jdk1.8/docs/api/java/util/zip/CheckedInputStream.html
CC-MAIN-2019-09
en
refinedweb
#include <wx/archive.h> This is an abstract base class which serves as a common interface to archive output streams such as wxZipOutputStream. wxArchiveOutputStream::PutNextEntry is used to create a new entry in the output archive, then the entry's data is written to the wxArchiveOutputStream. Another call to PutNextEntry() closes the current entry and begins the next. Closes the archive, returning true if it was successfully written. Called by the destructor if not called explicitly. Reimplemented from wxOutputStream. Reimplemented in wxZipOutputStream, and wxTarOutputStream. Close the current entry. It is called implicitly whenever another new entry is created with CopyEntry() or PutNextEntry(), or when the archive is closed. Implemented in wxZipOutputStream, and wxTarOutputStream.. Create a new directory entry (see wxArchiveEntry::IsDir) with the given name and timestamp. PutNextEntry() can also be used to create directory entries, by supplying a name with a trailing path separator. Implemented in wxZipOutputStream, and wxTarOutputStream. Takes ownership of entry and uses it to create a new entry in the archive. The entry's data can then be written by writing to this wxArchiveOutputStream. Create a new entry with the given name, timestamp and size. The entry's data can then be written by writing to this wxArchiveOutputStream. Implemented in wxZipOutputStream, and wxTarOutputStream.
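To make the PutNextEntry / write / Close sequence described above concrete, here is a hedged sketch using the zip flavour of these classes; the archive and entry names are placeholders:

#include <wx/wfstream.h>
#include <wx/zipstrm.h>

bool WriteSampleArchive()
{
    // An archive output stream wraps an ordinary wxOutputStream.
    wxFFileOutputStream out("sample.zip");
    wxZipOutputStream zip(out);

    // Create an entry, then write that entry's data to the archive stream.
    zip.PutNextEntry("readme.txt");
    const char text[] = "hello archive";
    zip.Write(text, sizeof(text) - 1);

    // Directory entries only need a name (PutNextDirEntry, or a trailing path separator).
    zip.PutNextDirEntry("subdir");

    // Close() finishes the current entry and the archive; it returns false on error.
    return zip.Close();
}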
https://docs.wxwidgets.org/3.1.2/classwx_archive_output_stream.html
CC-MAIN-2019-09
en
refinedweb
Note: This section uses sample .hal files to illustrate how HIDL language constructs map to C++. With few exceptions, HIDL interface packages are located in hardware/interfaces or the vendor/ directory. The hardware/interfaces top-level maps directly to the android.hardware package namespace; the version is a subdirectory under the package (not interface) namespace. The hidl-gen compiler compiles the .hal files into a set of a .h and .cpp files. From these autogenerated files a shared library that client/server implementations link against is built. The Android.bp file that builds this shared library is autogenerated by the hardware/interfaces/update-makefiles.sh script. Every time you add a new package to hardware/interfaces, or add/remove .hal files to/from an existing package, you must rerun the script to ensure the generated shared library is up-to-date. For example, the IFoo.hal sample file should be located in hardware/interfaces/samples/1.0. The sample IFoo.hal file creates an IFoo interface in the samples package: package [email protected]; interface IFoo { struct Foo { int64_t someValue; handle myHandle; }; someMethod() generates (vec<uint32_t>); anotherMethod(Foo foo) generates (int32_t ret); }; Generated files Autogenerated files in a HIDL package are linked into a single shared library with the same name as the package (for example, [email protected]). The shared library also exports a single header, IFoo.h, which can be included by clients and servers. Using the hidl-gen compiler with the IFoo.hal interface file as an input, binderized mode has the following autogenerated files: Figure 1. Files generated by compiler. IFoo.h. Describes the pure IFoointerface in a C++ class; it contains the methods and types defined in the IFoointerface in the IFoo.halfile, translated to C++ types where necessary. Does not contain details related to the RPC mechanism (e.g., HwBinder) used to implement this interface. The class is namespaced with the package and version, e.g. ::android::hardware::samples::IFoo::V1_0. Both clients and servers include this header: Clients for calling methods on it and servers for implementing those methods. IHwFoo.h. Header file that contains declarations for functions that serialize data types used in the interface. Developers should never include his header directly (it does not contain any classes). BpFoo.h. A class that inherits from IFooand describes the HwBinderproxy (client-side) implementation of the interface. Developers should never refer to this class directly. BnFoo.h. A class that holds a reference to an IFooimplementation and describes the HwBinderstub (server-side) implementation of the interface. Developers should never refer to this class directly. FooAll.cpp. A class that contains the implementations for both the HwBinderproxy and the HwBinderstub. When a client calls an interface method, the proxy automatically marshals the arguments from the client and sends the transaction to the binder kernel driver, which delivers the transaction to the stub on the other side (which then calls the actual server implementation). The files are structured similarly to the files generated by aidl-cpp (for details, see "Passthrough mode" in the HIDL Overview). The only autogenerated file that is independent of the RPC mechanism used by HIDL is IFoo.h; all other files are tied to the HwBinder RPC mechanism used by HIDL. Therefore, client and server implementations should never directly refer to anything other than IFoo. 
To achieve this, include only IFoo.h and link against the generated shared library. Note: HwBinder is only one possible transport; new transports may be added in the future. Linking to shared libraries A client or server that uses any interface in a package must include the shared library of that package in one (1) of the following locations: - In Android.mk: LOCAL_SHARED_LIBRARIES += [email protected] - In Android.bp: shared_libs: [ /* ... */ "[email protected]", ], For specific libraries: Namespaces HIDL functions and types such as Return<T> and Void() are declared in namespace ::android::hardware. The C++ namespace of a package is determined by the package name and version. For example, a package mypackage with version 1.2 under hardware/interfaces has the following qualities: - C++ namespace is ::android::hardware::mypackage::V1_2 - Fully qualified name of IMyInterfacein that package is: ::android::hardware::mypackage::V1_2::IMyInterface. ( IMyInterfaceis an identifier, not part of the namespace). - Types defined in the package's types.halfile are identified as: ::android::hardware::mypackage::V1_2::MyPackageType
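To tie the generated files and namespaces together, here is a hedged sketch of what a binderized client of the sample interface could look like; it assumes the standard generated header path and the getService() entry point that hidl-gen produces, plus the callback style used for methods that "generate" values:

#include <android/hardware/samples/1.0/IFoo.h>

using ::android::sp;
using ::android::hardware::hidl_vec;
using ::android::hardware::Return;
using ::android::hardware::samples::V1_0::IFoo;

void callSomeMethod() {
    // Look up the registered default service instance.
    sp<IFoo> foo = IFoo::getService();
    if (foo == nullptr) {
        return;  // service not registered
    }

    // someMethod() generates (vec<uint32_t>), so the result arrives via a callback.
    Return<void> ret = foo->someMethod([](const hidl_vec<uint32_t>& values) {
        // consume values here
    });

    if (!ret.isOk()) {
        // transport-level error (e.g. the service died)
    }
}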
https://source.android.com/devices/architecture/hidl-cpp/packages?hl=ko
CC-MAIN-2019-09
en
refinedweb
*-*-*.com/ ~pinard/pymacs/Pymacs.tar.gz

(Beware: the capital `P' was a lower case `p' in previous announcements.)

Pymacs allows Emacs users to extend Emacs using Python, where they might have traditionally used Emacs LISP. Pymacs runs on systems having sub-processes.

No bugs were reported against Pymacs proper. So, I got to risk some new ones :-). There is no real need to upgrade, but testers are welcome.

As previously announced, Pymacs is now to be invoked by:

from Pymacs import lisp, Let

instead of:

import pymacs
from pymacs import lisp
Let = pymacs.Let

The goal is to turn Pymacs into a more genuine Python package, in the spirit of Distutils. A tiny module gets installed so the previous method still works; be warned that this compatibility module will disappear in some later release.

The various `push' methods of the `Let' class now return the `Let' instance they act upon. This eases chaining pushes while creating such an instance.

Finally, the `rebox.py' example had two bugs corrected, one about unusual argument flags, the other for older versions of Python.

Keep happy! Enjoy, enjoy! :-)

-- François Pinard *-*-*.com/ ~pinard

1. RELEASED: Pymacs 0.16
2. Glasgow Haskell 0.16 documents in PostScript available
3. Glasgow Haskell compiler, version 0.16, available [REPOST]
4. MacGofer 0.16 (Beta) Available for FTP
5. Lisp Interpreted (fib 20) now in 0.16 seconds
6. xscheme: has anybody a more recent version than 0.16
7. Problem with XScheme 0.16
8. Avalability of XScheme(V 0.16) and XLisp(V 2.0)
9. RELEASED: Pymacs 0.21
10. RELEASED: Pymacs 0.20
11. RELEASED: Pymacs 0.19
12. RELEASED: Pymacs 0.18
http://computer-programming-forum.com/37-python/8d59704436cef063.htm
CC-MAIN-2019-09
en
refinedweb
#!/usr/bin/ruby

$logLevel = Integer(ARGV[0])
ARGV.clear

def logit(level, msg)
  if level >= $logLevel
    puts 'MSG' + String(level) + ': ' + msg
  end
end

def getUser()
  logit(2, 'Entering Function getUser()...')
  print 'Enter User Name: '
  user = gets.chomp
  logit(1, 'Leaving Function getUser()...')
  return user
end

logit(3, 'Starting Script...')
logit(3, 'User Entered: ' + getUser())
logit(3, 'Ending Script.')

In this script, we are using 3 different numbers for log levels. 3 is like an INFO level, 2 is like a DEBUG level and 1 is like a TRACE level, to see the most log detail possible. The log level to be used is passed as the parameter to the script for simplicity. Let's save this file as logit.rb and run it several times with different input levels...

$ ./logit.rb 3
MSG3: Starting Script...
Enter User Name: Ruby
MSG3: User Entered: Ruby
MSG3: Ending Script.

$ ./logit.rb 2
MSG3: Starting Script...
MSG2: Entering Function getUser()...
Enter User Name: Ruby
MSG3: User Entered: Ruby
MSG3: Ending Script.

$ ./logit.rb 1
MSG3: Starting Script...
MSG2: Entering Function getUser()...
Enter User Name: Ruby
MSG1: Leaving Function getUser()...
MSG3: User Entered: Ruby
MSG3: Ending Script.

The script creates two functions, one for logging and another to get the user name input from the user. Notice that we use the def keyword to define a function, and the syntax is very simple. After the function name, you give the parameters to the function. The parameters are accessed in the function just as local variables. The function ends with the end keyword. Functions can also call themselves recursively in a Ruby script. Consider the factorial:

#!/usr/bin/ruby

def fac(n)
  if n > 1
    return n * fac(n - 1)
  else
    return 1
  end
end

print 'Enter a number: '
num = Integer(gets.chomp)
puts String(num) + '! = ' + String(fac(num))
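A run of the factorial script would look something like this (the script name and the entered number are arbitrary):

$ ./factorial.rb
Enter a number: 5
5! = 120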
http://www.dreamsyssoft.com/ruby-scripting-tutorial/functions-tutorial.php
CC-MAIN-2019-09
en
refinedweb
Images are a fun part of web development. They look great, and are incredibly important in almost every app or site, but they’re huge and slow. A common technique of late is that of lazy-loading images when they enter the viewport. That saves a lot of time loading your app, and only loads the images you need to see.

There are a number of lazy-loading solutions for Vue.js, but my personal favorite at the moment is vue-clazy-load. It’s basically a dead-simple wrapper with slots that allows you to display a custom image and a custom placeholder. There’s not much else, but it’s incredibly flexible.

Installation

Install vue-clazy-load in your Vue.js project.

# Yarn
$ yarn add vue-clazy-load

# NPM
$ npm install vue-clazy-load --save

main.js (Partial)

import Vue from 'vue';
import App from 'App.vue';
import VueClazyLoad from 'vue-clazy-load';
...
Vue.use(VueClazyLoad);
...
new Vue({
  el: '#app',
  render: h => h(App)
});

Since vue-clazy-load uses the brand-new IntersectionObserver API, you’ll probably want a polyfill to support it in most browsers. This one works well, but any polyfill that provides the IntersectionObserver API should work.

<script src=""></script>

Usage

Now you can use the <clazy-load> component directly, as shown below.

App.vue

<template>
  <div id="app">
    <!-- The src allows the clazy-load component to know when to display the image. -->
    <clazy-load src="">
      <!-- The image slot renders after the image loads. -->
      <img src="">
      <!-- The placeholder slot displays while the image is loading. -->
      <div slot="placeholder">
        <!-- You can put any component you want in here. -->
        Loading....
      </div>
    </clazy-load>
  </div>
</template>

This will get you a basic div that starts loading once the element enters the viewport, displays Loading… until the image loads, then displays the image. Nice and simple!

There are, of course, a few props you can pass:

- src: String (required) - The src of the image to load.
- tag: String - Which component / element clazy-load will render as. (The default is a boring ‘ol div.)
- element: String - Which element to consider as the viewport. Otherwise the browser viewport is used. (Useful if you have a custom scrolling area.)
- threshold: Array<Number> || Number - How far into the viewing area the clazy-load component needs to be before the load is started. See MDN for more details.
- margin: String - A value for the margin that gets applied to the intersection observer.
- ratio: Number - A value between 0 and 1 that corresponds to the percentage of the element that should be in the viewport before loading happens.
- crossorigin: “anonymous” or “use-credentials” - An option to help work with CORS for images hosted on a different domain.
- loadedClass: String, loadingClass: String & errorClass: String - Class name to give to the root element for the different states.

There’s also a single event provided, the load event, which is, as the name implies, emitted when the image has finished loading.

Also of note, you can effectively use any components in the slots, including Vue transition components, as shown below:

<template>
  <div id="app">
    <!-- Boom: Free fade transitions. -->
    <clazy-load src="">
      <transition name="fade">
        <img src="">
      </transition>
      <transition name="fade" slot="placeholder">
        <div slot="placeholder">
          Loading....
        </div>
      </transition>
    </clazy-load>
  </div>
</template>

<style>
.fade-enter, .fade-leave-to {
  opacity: 0;
}
</style>

I can’t think of a much easier way to handle image preloading. If you can, feel free to send us a message! For now though, I believe vue-clazy-load can handle pretty much any lazy-loading situation. Enjoy!
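The load event and the state classes from the props list above are easy to wire up; here is a hedged sketch of how that could look (the handler name, the class names, and the imageUrl data property are all made up for the example):

<template>
  <clazy-load
    :src="imageUrl"
    loaded-class="is-loaded"
    loading-class="is-loading"
    @load="onImageLoaded">
    <img :src="imageUrl">
    <div slot="placeholder">Loading....</div>
  </clazy-load>
</template>

<script>
export default {
  data() {
    return { imageUrl: '' }; // point this at a real image
  },
  methods: {
    onImageLoaded() {
      // e.g. hide a spinner or fire an analytics event
      console.log('image finished loading');
    }
  }
};
</script>

<style>
/* applied to the clazy-load root element in the different states */
.is-loading { opacity: 0.5; }
.is-loaded { opacity: 1; }
</style>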
https://alligator.io/vuejs/vue-lazy-load-images/
CC-MAIN-2019-09
en
refinedweb
Transferring Data From Cassandra to Couchbase Using Spark

Started off with Cassandra only to realize that Couchbase suits your needs more? This Spark plugin can help you transfer your data to Couchbase quickly and easily.

There are many NoSQL databases in the market, like Cassandra, MongoDB, Couchbase, and others, and each has pros and cons.

Types of NoSQL Databases

There are mainly four types of NoSQL databases, namely:

- Column-oriented
- Key-value store
- Document-oriented
- Graph

The databases that support more than one format are called “multi-model,” like Couchbase, which supports key-value and document-oriented models.

Sometimes we choose the wrong database for our application and realize this harsh truth at a later stage. Then what? What should we do? Such was the case in our experience, where we were using Cassandra as our database and later discovered it was not fulfilling all of our needs. We needed to find a new database and discovered Couchbase to be the right fit. The main difficulty was figuring out how we should transfer our data from Cassandra to Couchbase, because no such plugin was available. In this blog post I’ll be describing the code I wrote that transfers data from Cassandra to Couchbase using Spark. All of the code is available here.

Explanation of the code

Here, I am reading data from Cassandra and writing it back to Couchbase. This simple code solves our problem. The steps involved are:

Reading the configuration:

val config = ConfigFactory.load()

//Couchbase Configuration
val bucketName = config.getString("couchbase.bucketName")
val couchbaseHost = config.getString("couchbase.host")

//Cassandra Configuration
val keyspaceName = config.getString("cassandra.keyspaceName")
val tableName = config.getString("cassandra.tableName")
val idFeild = config.getString("cassandra.idFeild")
val cassandraHost = config.getString("cassandra.host")
val cassandraPort = config.getInt("cassandra.port")

Setting up the Spark configuration and creating the Spark session:

val conf = new SparkConf()
  .setAppName(s"CouchbaseCassandraTransferPlugin")
  .setMaster("local[*]")
  .set(s"com.couchbase.bucket.$bucketName", "")
  .set("com.couchbase.nodes", couchbaseHost)
  .set("spark.cassandra.connection.host", cassandraHost)
  .set("spark.cassandra.connection.port", cassandraPort.toString)

val spark = SparkSession.builder().config(conf).getOrCreate()
val sc = spark.sparkContext

Reading data from Cassandra:

val cassandraRDD = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> tableName, "keyspace" -> keyspaceName))
  .load()

Checking the id field: The id field is checked to see if it exists. If it does, it is used as the id in Couchbase too; otherwise a random id is generated and assigned to the document.

import org.apache.spark.sql.functions._

val uuidUDF = udf(CouchbaseHelper.getUUID _)
val rddToBeWritten = if (cassandraRDD.columns.contains(idFeild)) {
  cassandraRDD.withColumn("META_ID", cassandraRDD(idFeild))
} else {
  cassandraRDD.withColumn("META_ID", uuidUDF())
}

In a different file:

object CouchbaseHelper {
  def getUUID: String = UUID.randomUUID().toString
}

Writing to Couchbase:

rddToBeWritten.write.couchbase()

You can run this code directly to transfer data from Cassandra to Couchbase – all you need to do is some configuration.

Configurations

All the configurations can be done by setting the environment variables.

Couchbase configuration:

Cassandra configuration:

Code in Action

This is how data looks on the Cassandra side.
As for the Couchbase side, there are two cases.

Case 1: When the id exists, the same value can be used as the Couchbase document id.

Case 2: When the id field does not exist, we need to assign a random id to the documents.

How to Run the Transfer Plugin

Steps to run the code:

- Download the code from the repository.
- Configure the environment variables according to the configuration (a sample is sketched below).
- Run the project using sbt run.
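The configuration screenshots from the original post did not survive, so here is a hedged sketch of a Typesafe Config application.conf matching the keys read by ConfigFactory.load() in the code above (hosts, bucket, keyspace, and table names are placeholders; the idFeild spelling matches the code):

couchbase {
  bucketName = "transfer_bucket"
  host = "127.0.0.1"
}

cassandra {
  keyspaceName = "my_keyspace"
  tableName = "my_table"
  idFeild = "id"
  host = "127.0.0.1"
  port = 9042
}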
https://dzone.com/articles/transferring-data-from-cassandra-to-couchbase-usin
CC-MAIN-2019-09
en
refinedweb
How to use TagLib into my Qt c++ Project Hello, I'd like to get the additional information of a media file in a qt application i'm building and so i decided to use taglib. Can anyone tell me how to use TagLib in qt from the beginning ? The version I got was 1.6.3. Thanks - SGaist Lifetime Qt Champion @Moderators @mrjj @Qt-Champions-2015 @Lifetime-Qt-Champion I have no idea what to do with it or add which file to my app. can you please explain how to build it? What I did so far: (Currently working on mac but want it for both Windows and mac). - Download TagLib-1.6.3. Now using terminal: cd /Taglib-1.6.3 ./configure make sudo make install A default build and install. So, the configure prefix was /usr/local. But there is no .a file in my lib folder. Please guide me with necessary steps to build and use Taglib in my qt Project. Here is the Snapshot: Thanks @SGaist After searching, I found a solution. $ cd taglib-1.6.3 $ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_OSX_DEPLOYMENT_TARGET=10.6 -DCMAKE_OSX_ARCHITECTURES="i386;x86_64" -DENABLE_STATIC=ON -DCMAKE_INSTALL_PREFIX="/Users/macwaves/Desktop/lib2" $ make $ sudo make install Now I have "libtag.a" inside my lib folder (/Users/macwaves/Desktop/lib2/lib) and about 75 header files inside "/Users/macwaves/Desktop/lib2/include/taglib". I have created a project and then open .pro file, right click ( just in middle of file) and choose "Add library", then browse to the .a file. win32:CONFIG(release, debug|release): LIBS += -L$$PWD/../../../../Users/macwaves/Desktop/lib2/lib/release/ -ltag else:win32:CONFIG(debug, debug|release): LIBS += -L$$PWD/../../../../Users/macwaves/Desktop/lib2/lib/debug/ -ltag else:unix: LIBS += -L$$PWD/../../../../Users/macwaves/Desktop/lib2/lib/ -ltag INCLUDEPATH += $$PWD/../../../../Users/macwaves/Desktop/lib2/include/taglib DEPENDPATH += $$PWD/../../../../Users/macwaves/Desktop/lib2/include/taglib win32-g++:CONFIG(release, debug|release): PRE_TARGETDEPS += $$PWD/../../../../Users/macwaves/Desktop/lib2/lib/release/libtag.a else:win32-g++:CONFIG(debug, debug|release): PRE_TARGETDEPS += $$PWD/../../../../Users/macwaves/Desktop/lib2/lib/debug/libtag.a else:win32:!win32-g++:CONFIG(release, debug|release): PRE_TARGETDEPS += $$PWD/../../../../Users/macwaves/Desktop/lib2/lib/release/tag.lib else:win32:!win32-g++:CONFIG(debug, debug|release): PRE_TARGETDEPS += $$PWD/../../../../Users/macwaves/Desktop/lib2/lib/debug/tag.lib else:unix: PRE_TARGETDEPS += $$PWD/../../../../Users/macwaves/Desktop/lib2/lib/libtag.a Now In my mainwindow.h #include <QMainWindow> #include <fileref.h> #include <tag.h> and mainwindow.cpp TagLib::FileRef f("/Users/macwaves/Desktop/Testing/my.mp3",true); but it's generating errors. :-1: error: symbol(s) not found for architecture x86_64 :-2: error: linker command failed with exit code 1 (use -v to see invocation) Please help me ! Thanks - SGaist Lifetime Qt Champion Since your on Linux, why not install the taglib devel package from your distribution ?
https://forum.qt.io/topic/68142/how-to-use-taglib-into-my-qt-c-project/4
CC-MAIN-2019-09
en
refinedweb
Getting Started with RadPropertyGrid This tutorial will walk you through the creation of a sample application that contains RadPropertyGrid. - Assembly References - Add RadPropertyGrid to Your Project - Bind RadPropertyGrid to a Single Item - Bind RadPropertyGrid to a Visual Element - Key Properties - Setting a Theme Assembly References In order to use RadPropertyGrid in your projects, you have to add references to the following assemblies: - Telerik.Windows.Controls - Telerik.Windows.Controls.Data - Telerik.Windows.Data - Telerik.Windows.Controls.Input Add RadPropertyGrid to Your Project Before proceeding with adding RadPropertyGrid to your project, make sure the required assembly references are added to the project. You can add RadPropertyGrid manually by writing the XAML code in Example 1. You can also add the control by dragging it from the Visual Studio Toolbox and dropping it over the XAML view. Example 1: Adding RadPropertyGrid <Grid xmlns: <telerik:RadPropertyGrid x: </Grid> Figure 1: The empty RadPropertyGrid generated by the code in Example 1 Bind RadPropertyGrid to a Single Item You may bind RadPropertyGrid to a single data item. Thus, you will be able to examine and edit its properties. The only thing you need is to set the Item property of RadPropertyGrid. The binding may be performed both in XAML and in the code-behind. Example 2: Binding to data item this.PropertyGrid1.Item = new Employee() { FirstName = "Sarah", LastName = "Blake", Occupation = "Supplied Manager", StartingDate = new DateTime(2005, 04, 12), IsMarried = true, Salary = 3500, Gender = Gender.Female }; Me.PropertyGrid1.Item = New Employee() With { .FirstName = "Sarah", .LastName = "Blake", .Occupation = "Supplied Manager", .StartingDate = New DateTime(2005, 4, 12), .IsMarried = True, .Salary = 3500, .Gender = Gender.Female } Once you set the Item and run the application you will see a RadPropertyGrid as the one illustrated in Figure 2. Figure 2: RadPropertyGrid bound to a single item Bind RadPropertyGrid to a Visual Element You can also bind the Item property of RadPropertyGrid to a visual element and still view and edit its properties. Example 3 shows how to bind RadPropertyGrid's Item property to a RadButton. Example 3: Binding to visual element <telerik:RadButton x: <telerik:RadPropertyGrid Figure 3: RadPropertyGrid bound to a RadButton Key Properties LabelColumnWidth: You could change the width of the first column in the RadPropertyGrid by setting a value for this property of the RadPropertyGrid. IsGrouped: Controls the current RadPropertyGrid's state. You can set it to true and you will have RadPropertyGrid initially grouped. If you set it to false, then you will have RadPropertyGrid sorted. AutoGeneratePropertyDefinitions: Indicates whether property definitions will be autogenerated. DescriptionPanelVisibility: Sets the visibility mode of the description panel. CanUserResizeDescriptionPanel: Indicates whether the user can resize the description panel. Item: Returns the item to edit. PropertyDefinitions: Returns a collection of PropertyDefinitions describing the properties displayed or edited by RadPropertyGrid. SearchBoxVisibility: Sets the visibility mode of the search box. SortAndGroupButtonsVisibility: Sets the visibility mode of the sort and group buttons. Silverlight Controls Samples. Merge the ResourceDictionaries with the namespace required for the controls that you are using from the theme assembly. 
For RadPropertyGrid, you will need to merge the following resources:

- Telerik.Windows.Controls
- Telerik.Windows.Controls.Data
- Telerik.Windows.Controls.Input

Example 4 demonstrates how to merge the ResourceDictionaries so that they are applied globally for the entire application.

Example 4: Merging the ResourceDictionaries

<Application.Resources>
    <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="/Telerik.Windows.Themes.Windows8;component/Themes/Telerik.Windows.Controls.xaml"/>
            <ResourceDictionary Source="/Telerik.Windows.Themes.Windows8;component/Themes/Telerik.Windows.Controls.Data.xaml"/>
            <ResourceDictionary Source="/Telerik.Windows.Themes.Windows8;component/Themes/Telerik.Windows.Controls.Input.xaml"/>
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
</Application.Resources>

Figure 4: RadPropertyGrid with the Windows8 theme
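As a quick illustration of the key properties listed earlier, a declaration that pulls several of them together could look like this (the binding path is hypothetical):

<telerik:RadPropertyGrid x:Name="PropertyGrid1"
                         Item="{Binding SelectedEmployee}"
                         LabelColumnWidth="150"
                         AutoGeneratePropertyDefinitions="True"
                         IsGrouped="True"
                         SearchBoxVisibility="Visible"
                         DescriptionPanelVisibility="Collapsed" />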
https://docs.telerik.com/devtools/silverlight/controls/radpropertygrid/getting-started/getting-started
CC-MAIN-2019-09
en
refinedweb
Every single day, we’re bombarded by commercials telling us that a particular product will make us happy. Just buy these jeans, drink this soft drink, or drive this car and you’ll be happy and attractive. We can’t promise it will increase your sex appeal, but we have found something that will make you happy about writing code again. It’s a new programming language from Japan called “Ruby.” Using Ruby will make you feel good about programming. People smile during hands-on Ruby tutorials when they’re working on the exercises. They even write to the mailing list just to say that Ruby makes them feel good. Why is this? There are plenty of technical things to like about the language. Ruby is concise, with a simple syntax and grammar. It’s both high-level and close to the machine, so you can get a lot done with a remarkably short program. It’s totally object-oriented; everything is an object (or can be made into an object), and it was designed that way from the start. Like Smalltalk, Ruby’s variables are not typed, but the language is strongly typed. It’s dynamic; you can extend all classes, including the built-in ones, at runtime, and its eval() method lets you incrementally compile code into a running program. Its garbage collection makes coding less of a crapshoot, and the simple yet flexible exception scheme makes it easy to structure your error handling. And when you have to interface to other libraries, it is simple to write Ruby interfaces in C. Ruby’s object orientation is fairly unusual. Yes, it has classes and objects, but it also supports mixins (think Java interfaces that can have code in them) and singleton classes and singleton objects (so you can specialize whole classes, or just single instances of a class). After a while, you’ll find yourself designing programs slightly differently, as the extra flexibility lets you express your ideas more naturally. Ruby also has appealing social aspects. Ruby’s been around Japan since the mid-90s and is wildly popular over there. However, Ruby has only recently started to make a splash outside Japan. The western Ruby community is still growing, so it’s easy for newcomers to make a significant contribution. Perhaps that’s why the Ruby community is widely recognized for its enthusiasm and friendly, supportive nature. But there’s something more. Like a woodworker’s favorite hand tool, Ruby just fits. It’s a very human-oriented language that blends well with your intuition. Things just tend to work. The Principle of Least Surprise is a major design goal of Ruby. And amazingly, considering Ruby combines features from Perl and Smalltalk (plus CLU, Python, and others), Ruby meets that goal. Ruby is small enough to be learned in a day and deep enough to provide years of fun. Programs in Ruby just seem to flow, and they have a nasty habit of working the first time. Let’s see what all the fuss is about. Installing Ruby Ruby is available for download from. There you’ll find a tarball of the current stable version (recommended if you’re a first-time user). You’ll also find instructions for downloading the latest development version using CVS. Assuming you downloaded the tarball (it’s less than a megabyte), installation is pretty standard. Unpack the tar file, then build Ruby by using the usual open source incantation: ./configure; make; make test. If the tests pass, you then run make install as root. Let’s Code! The first rule of writing about languages is that all examples must start with a “Hello, World!” program, so: puts “Hello, World. 
Time: #{Time.now}” Ruby has a built-in method called puts(). It writes its argument to standard output and appends a newline if there isn’t one there already. The surprising thing here might be the #{Time.now} stuff. If Ruby finds the construct #{expr} in double quoted strings, it evaluates the enclosed expression, converts the result to a string, and substitutes that back into the original. The expression can be anything, from a simple variable reference up to a sizable chunk of code. In this case, the expression is a call to the now() method of the class Time, which returns a new Time object initialized to the current time. When converted to a string, time objects look comfortingly familiar, so if we ran this code, the puts() method would send the string to standard output and we’d see output like the following: Hello, World. Time: Thu Nov 15 14:02:51 CST 2001 So how do you run this code? Ruby normally runs programs from files, so you could type the code into a file, say hello.rb, and run it with ruby hello.rb. Ruby also works with shebang lines, so you could make your script executable using chmod +x hello.rb, add the #! line, and run it by just giving its name: #!/usr/local/bin/ruby puts “Hello, World. Time: #{Time.now}” You can pass Ruby the code using the -e command-line option, but quoting can get tricky: ruby -e ‘puts “Hello, World. Time: #{Time.now}”‘ Ruby also comes with irb, a tool that lets you enter and run code interactively. With irb you don’t need the puts to see the result, as irb automatically displays the values of expressions as it evaluates them. irb is a great tool for experimenting with Ruby. $ irb irb(main):001:0> “Hello, World. Time: #{Time.now}” Hello, World. Time: Thu Nov 15 14:18:55 CST 2001 irb(main):002:0> Now let’s wrap our greeting in a method, starting with the def keyword and ending, appropriately enough, with an end. (Ruby calls them methods, while other languages call them functions, procedures, or subroutines). At the same time, let’s parameterize it, allowing us to change the name we output. We’ll test this by calling our method twice using different names as parameters. As we already know how to substitute values in to strings using the #{…} notation, this is a breeze: def say_hello(name) puts “Hello, #{name}!” end say_hello “Larry” say_hello(“Moe”) say_hello “Larry” say_hello(“Moe”) Running this program gives us the output: Hello, Larry! Hello, Moe! As the code shows, Ruby doesn’t insist on parentheses around the parameters to methods, but it’s generally a good idea in all but the simplest cases. Collections and Control Structures In addition to strings and numbers, Ruby has a large cast of other built-in classes. Arrays are simple collections of arbitrary objects (for example, you can have an array containing a string, a number, and another array). An array literal is a list of objects surrounded by square brackets. Arrays (and other collections) are conveniently traversed using a for loop. In the code that follows, the variable i will contain, in turn, each element in the array. Like method definitions, the for loop is terminated with end. my_array = [ 42, "Hello, world!", [1,2,3] ] for i in my_array puts i end for i in my_array puts i end This program outputs: 42 Hello, world! [ 1, 2, 3 ] Arrays in Ruby are a surprisingly general data structure; you’ll often see them used as stacks, queues, and dequeues (and sometimes even as arrays). 
Like most scripting languages, Ruby comes with built-in hashes (or to give them their Latin name, associative arrays). Hashes are collections that are indexed just like arrays, but the thing you index with (often called the key) can be just about any object, not just integers. In Ruby, each hash can contain keys of different types, and the values they map to can also be different types. Hash literals are written between braces, with “=>” between the keys and values: my_hash = { 99 => “balloons”, “seventy-six” => “trombones” } puts my_hash[99] puts my_hash[76+23] puts my_hash["seventy-six"] puts my_hash[99] puts my_hash[76+23] puts my_hash["seventy-six"] This program produces the output: balloons balloons trombones If there isn’t an entry corresponding to the key you use to index a hash, the special object nil is returned. The nil object is a bit like Perl’s undef and the null value in relational databases; it represents the idea of “no value.” No other value equals nil, and nil is equivalent to false in contexts requiring a truth value. This lets you write code like the following. currency_of = { “us” => “dollar”, “uk” => “pound”, “mx” => “peso” } while line = gets line.chomp! currency = currency_of[line] if currency puts “Currency of the #{line} is #{currency}” else puts “Don’t know the currency of #{line}” end end while line = gets line.chomp! currency = currency_of[line] if currency puts “Currency of the #{line} is #{currency}” else puts “Don’t know the currency of #{line}” end end The while loop reads successive lines from standard input using the gets() method. When gets() reaches end of file it returns nil, so the condition of the while loop becomes false and the loop terminates. Inside the body of the loop, we first strip the trailing newline from the input using chomp!. We then use the result to index the hash, mapping a country to a currency. If we find a match, we report it. However, if we don’t find a match, nil will be returned, and we’ll output the “Don’t know…” message instead. As you can see from the previous example, Ruby has the normal control structures if and while, along with their negated cousins unless and until, all terminated with end. Ruby also has a marvelous case statement that works on just about any data type there is (including those that you define yourself). A case statement can check a value against a range, a string, a regular expression, the value’s type, and so on. print “Test score: ” score = gets.to_i case score when 0…40 puts “Back to flipping burgers” when 40…60 puts “Squeaked in” when 60…80 puts “Solid performance (yawn)” when 80…95 puts “Wow!” when 95..100 puts “We are not worthy” else puts “Huh?” end case score when 0…40 puts “Back to flipping burgers” when 40…60 puts “Squeaked in” when 60…80 puts “Solid performance (yawn)” when 80…95 puts “Wow!” when 95..100 puts “We are not worthy” else puts “Huh?” end This example also shows Ruby’s ranges. The input value is converted to an integer via the to_i() method, then compared against the various ranges in the case statement. The three dot form, a…b, denotes the values starting from a up to but not including b. The two-dot form is inclusive (a up to and including b). Ranges work on any types where it makes sense, so “a”..”z” is equivalent to the lower case ASCII letters. It’s easy to write your classes so that they can be used in ranges too. Top of the Class Way back when we started, we claimed that Ruby is a soup-to-nuts, object-oriented language. 
But we’ve seen all this code, and not an object in sight. What’s up? This is one of the clever things about Ruby. Everything is an object, but if you don’t want to do object-oriented programming, you don’t have to. Behind the scenes, Ruby handles all this for you. Object-Oriented Programming In procedural languages such as C, you have data (ints, chars, and so on) and write code to operate on that data. In object-oriented programming, data and code are unified. Typically, you write something called a class, which encapsulates some behavior and the data needed to support that behavior. Sounds scary, but it’s actually pretty natural. For example, you might be writing an application that draws shapes on the screen. For each shape, you’d need to record things like its color and its present position. These are attributes, the data on which a shape works. You also want behaviors, things like drawOn(screen) and moveTo(x,y). In object-oriented programming, you’d wrap the data and the behavior together into a class definition. When you need a particular shape, you’d create an instance of that class; if you needed ten shapes, you’d have ten separate instances in your running program. These instances are also called “objects.” Languages such as Java and C++ are hybrids. You can create classes in them and then manipulate objects instantiated from these classes. However, the basic built-in types like numbers are not objects; they aren’t derived from any particular class. This can be awkward; in Java, every time you want to put a number into a collection you have to wrap it up inside an object (because collections only work with objects). It also means the programmer has to know two different styles of coding, one based on asking objects to execute their behaviors, the other based on conventional non-object semantics. Ruby is different because everything you manipulate in Ruby is an object. As with Java and C++, you can create your own classes and objects, and you can use the classes supplied with the language. In addition, the number ’1′ is also a full-blown object (it’s an instance of class Fixnum). This is very convenient, because it means that there’s no programming divide between objects and non-objects; everything can be manipulated in the same way. If you want to populate a collection with two numbers and a String, you can do it. And when you write 1 + 2 in Ruby, you’re not using some magic in the compiler that knows how to add numbers. Instead you’re invoking behaviors in objects. In this case you’re asking the object ’1′ to perform addition, passing it the object ’2′. The result is a new object (hopefully the Fixnum ’3′). Object-orientation is a powerful way of thinking about problems. You design by breaking the world into classes, each with its own responsibilities, and then let objects of those classes interact. The resulting code can be a lot easier to understand and maintain, as behaviors are all neatly encapsulated within classes. However, object-orientation is not a universal solution, and sometimes other paradigms work better. This is where Java can be a pain; you must write in terms of classes, even if they aren’t appropriate to your solution. Ruby is flexible; although it is a truly object-oriented language, it doesn’t force you to write you code using classes. When you write line.chomp!, you’re actually telling Ruby to execute the chomp! method in the object referenced by the variable line. 
When you write my_array[1], you’re invoking a method called “[]” on the array object that has been referenced by my_array. Not even arithmetic escapes. When you write 1 + 2 * 3, you’ve actually created three objects, 1, 2, and 3, of type Fixnum. (Fixnum is used to represent integers less than a machine-dependent limit, normally 30 bits. When integers get bigger than this, Ruby automatically converts them to Bignums, whose size is limited only by the amount of memory in your box.) Ruby performs the multiplication by calling 2′s multiply method (conveniently called “*”), passing 3 as a parameter. This returns a new object, the Fixnum 6, which is passed as a parameter to the plus method of the 1 object. In fact, you can make this explicit in Ruby: type the following into irb and you’ll get seven as a result: 1.+(2.*(3)) This style of calling a method in an object by writing object.method will be familiar to Java and C# programmers. What isn’t so familiar is that everything in Ruby is an object; there are no special cases for numbers as in Java. That’s why you can say 1.plus(2): puts “cat”.length a = [ "c", "a", "b" ] a.sort! puts a puts a.include? “f” As the above code shows, method names can end with exclamation marks and question marks. The built-in classes typically reserve names that end in “!” for methods that have a potentially surprising side-effect (such as modifying the array), while those ending “?” are typically used when querying an object’s state. This would output: 3 [ "a", "b", "c" ] false Creating classes in Ruby is as simple as creating methods. The code in Listing One creates a class called Person, containing two methods, initialize() and to_s(). The initialize() method is special; Ruby calls it to initialize newly created instances of a class. In this example, the initialize method copies the values of its two parameters into the instance variables @name and @age. (Instance variables, sometimes called member variables, are values associated with particular instances of a class, and are prefixed with the “@” sign.) On line 11 we create a new Person object, passing in the string “Dora” and the number 31. The resulting object has its @name and @age instance variables set to “Dora” and 31. We assign it to the variable p1. The to_s() method lets us verify this, returning a string representation of the object. On line 14 we use this to output Dora’s information: Dora, age: 31 Listing One: The Person Class 1 class Person 2 def initialize(name, age) 3 @name = name 4 @age = age 5 end 6 def to_s 7 “#{@name}, age: #{@age}” 8 end 9 end 10 11 p1 = Person.new(“Dora”, 31) 12 p2 = Person.new(“Flora”, 42) 13 14 puts p1.to_s 15 puts p2 16 puts “#{p1} and #{p2}” We’re using a little built-in magic on lines 15 and 16. The puts() method needs a String to send to standard output. Since p2 is not a String, it calls p2‘s to_s() method to convert it into one (just like toString() in Java). Similarly, the expressions that are inside the #{expr} constructs in the string literal are automatically converted to strings as they are interpolated, calling the to_s() method in class Person each time. So, these two lines will end up producing: Flora, age: 42 Dora, age: 31 and Flora, age: 42 Iterators and Blocks Say you have a chunk of code you want to execute three times. In C, you might write something like: for (int i = 0; i < 3; i++) { printf(“Ho! “); } printf(“Merry Christmas\n”); In Ruby, you’d probably use the method times(), which is an iterator: 3.times { print “Ho! 
” } puts “Merry Christmas” The code between the braces is called a block. (This is confusing terminology, because it looks just like a C or Java block, but it’s behavior is totally different.) A block is simply a chunk of code between the keywords do and end, or between curly braces as above. In many ways, blocks are like anonymous methods; the code they contain is called by the iterator method. In this example, the code in the block (print “Ho! “) is executed three times by the iterator times(), which is a method defined for all integers. As with regular methods, blocks can take parameters. Unlike methods, parameters to a block appear between vertical bars. In the following example, the iterator method each() calls the block four different times, passing in each element of the array in turn. The block parameter item now receives the element, which is then printed to standard out: array = [ 1, "cat", 3.14159, 99 ] array.each do |item| puts item end The previous code shows an important use of iterators. We could have written it as: array = [ 1, "cat", 3.14159, 99 ] for i in 0…array.length puts array[i] end This would seem completely natural to an experienced C or Java programmer, but using the collection’s built-in iterator is better style; that way, you are making fewer assumptions about the internal representation of the object. For example, file objects also have iterators. A file object’s each() iterator returns the file’s contents line by line. The code in the following example, therefore, will print out the contents of the file names.lst to standard out. people = File.open(“names.lst”) people.each do |item| puts item end See how the loop looks nearly identical to the array example. We have a generic looping construct that doesn’t care if it’s iterating over arrays, files, messages in an e-mail inbox, winning numbers in the lottery, or as this next example shows, the names of files in a directory. Dir.open(“/tmp”).each do |file_name| puts file_name end In fact, just about all Ruby objects that can contain collections of other objects implement iterators. Powerful mojo! However, because many people like their for loops, Ruby has a little bit more internal magic. When you write a for loop that looks like the following: for item in thing puts item end Ruby translates it into a series of calls to thing.each(). This means that if you write a class that supports an each() iterator, you can use it in a for loop. So how do you write your own iterator method? It turns out to be pretty simple. An iterator is just a regular method that uses the yield statement to pass values out to a block. The iterator method squares() in the next example returns the squares of the numbers one to limit: def squares(limit) 1.upto(limit) do |i| yield i*i end end squares(4) do |result| puts result end squares(4) do |result| puts result end So, what happens when we run this code? First, when Ruby sees squares(4), it calls the method squares(), setting the parameter limit to 4. Inside the method, we loop from 1 to the value of limit using upto, yet another iterator method available to integers. Each time that Ruby goes around the loop, it executes the yield statement, passing it the value of the loop counter squared as a parameter. Each time the yield is executed, the block associated with the call to squares() is also executed. The parameter to yield is passed to this block, which then prints it. So what is the result of all this? 
The program outputs: 1 4 9 16 Automated Ego Trip Having a book published uses up far too much time. But, it isn’t just the amount of time spent writing; the real waste of time comes after the book is published; you fritter away your life going to Amazon.com and checking your book’s ranking (every five minutes, all day, every day). So, let’s get a machine to waste its time instead. We’ll write a simple Ruby script that goes to a set of Amazon pages, extracts the current sales rank, and tabulates the results. Since we want to minimize the delay in getting the results, we’ll fetch these pages in parallel, using Ruby’s multi-threading capabilities. The code is shown in Listing Two . Listing Two: Collecting Statistics from Amazon 1 def get_rank_for(url) 2 3 data = ‘lynx -dump #{url}’ 4 5 return $1 if data =~ /Amazon.com Sales Rank:\s*([0-9,]+)/ 6 return $1 if data =~ /Amazon.co.jp.*?:.*?\n([0-9,]+)/ 7 8 raise “Couldn’t find sales rank in page” 9 end 10 11 URLS = [ 12 "", 13 "", 14 "", 15 "", 16 "", 17 ] 18 19 threads = URLS.collect do |url| 20 Thread.new(url) do |a_url| 21 get_rank_for(a_url) 22 end 23 end 24 25 ranks = threads.collect {|t| t.value } 26 27 print Time.now.strftime(“%Y/%m/%d %H:%M “) 28 ranks.each {|r| printf(“%7s”, r) } 29 puts Lines 1 to 9 of the program define the method get_rank_ for() that fetches the sales rank from a Web page. Although we could have used the Ruby Web libraries to perform the search, the code would have been longer, as Amazon does a fair amount of redirecting between pages. (Those interested can look at Listing Three , which does the page fetching this way.) Instead, in this example we were pragmatic and used the lynx program to fetch the page and return its contents as text. The backticks on line 3 of our code run an external program and return its output as a string. Listing Three: Improved Amazon Statistics Collector #!/usr/bin/eval ruby -w require ‘net/http’ require ‘uri’ Lines 5 and 6 then search the page for the sales rank. The first test is for the U.S. pages. It uses a regular expression to look for the text “Amazon.com Sales Rank:” followed by zero or more spaces and then one or more digits and commas. Because the “digits and commas” part of the regular expression is in parentheses, the text it matches is extracted and stored in the variable $1. The method returns this value if the regular expression matches. Line 6 does the same thing with the Japanese pages (which have a different format). If neither match, we use raise to raise an exception, causing the program to exit. Lines 11 through 17 initialize an array with the list of URLs to search. We could simply search these sequentially to return a sales rank from each, but that means that we’d only start fetching the fifth page after we’d finished processing the first four. When you’re hungry for sales ranks, that delay seems to go on forever. Instead, we spiced up this example by using Ruby’s threads. These allow us to run the same chunk of Ruby code many times in parallel. We do this in lines 19 through 23, kicking off a separate thread for each URL in the list. The way we do this is slightly tricky, and we’ll look at it in a second. Line 25 waits for each of the threads to finish executing and collects the return value of each, a sales rank (again, we’ll explain how this works shortly). Finally, lines 27 through 29 format the current time and write it out, along with all the sales ranks. So what’s going on in lines 19 through 23? The problem we’re trying to solve has two parts. 
First, we want to start a thread for each URL. However, we also need to remember the thread object that is created, because we’ll want to ask it to give us back the sale rank it fetched. So, given a list of URLs, we’d like to end up with a list of threads. Fortunately, Ruby collections have a method that helps us. The collect() method (also known as map()) takes the values in a collection and passes each in turn to a block. It then collects the values returned by that block (which is the value of the last statement executed in the block) to produce a new array. For example, the following code would output 1, 4, 9, and 16. numbers = [ 1, 2, 3, 4] squares = numbers.collect {|n| n*n} puts squares Notice we’ve used the alternate form of defining a block, a pair of curly braces, rather than a do/end combination. In our sales rank example, we use collect() to convert an array of URLs into an array of thread objects, because the value of the block following the collect() is the value of Thread.new(), a new thread. What does that thread do? It calls get_rank_for(), fetching the sales rank for one URL. There’s a subtlety in this code; we have to pass the URL in to the thread as a parameter, otherwise there’s a potential race condition. This chunk of code starts all the threads running in parallel, but how do we wait for them to finish, and how do we collect the result each has returned? Well, again we have a problem that looks like “given a collection of x, we need a collection of y.” In this case, x is a thread and y is that thread’s results. The method value() waits for a thread to finish and then returns its value. Putting this in a collect() block (line 25) converts our list of threads into a list of sales ranks. What’s Next? In this short article we’ve only just scratched the surface of what you can do with Ruby. We haven’t looked at the networking classes, the Web stuff, XML and SOAP support, database access, GUI interfaces, or any of the other libraries and extensions that make Ruby a serious player as a scripting and general-purpose programming language. Learning Ruby is simple and rewarding. Why not download and install a copy today? The Resources sidebar has details of where to find both Ruby and other online resources (including the full text of our book). Try Ruby for your next programming or scripting job. It’ll make you happy. Resources Download: The latest version of Ruby can be downloaded from. You can also get it via CVS and CVSUP. Details are on the site. Community: The English-language mailing list is ruby-talk. For information on subscribing, see. The newsgroup comp.lang.ruby is mirrored to this list. You can also chat with Ruby users on the #ruby-lang IRC channel on OpenProjects.net..
http://www.linux-mag.com/id/1024/
CC-MAIN-2019-09
en
refinedweb
Instead of the global char output[999] which is really big, you could allocate a perfectly sized array with calloc - like this in the sort_reverse function:

char *output = calloc(strlen(input_string), sizeof(char));

and in the main program:

char *reversed_string = sort_reverse("Whatever");
printf("%s\n", reversed_string);
// Remember to free reversed_string with free(reversed_string);
// Since it's allocated on the heap with calloc()

Check the man page for calloc; it will zero out the bytes as well, so you won't have to worry about setting the last element to '\0'.

NOTE: you have to iterate with "strlen(input_string) - 1", otherwise you will overwrite the needed '\0'.

EDIT: As strlen excludes the '\0' of the original string, you have to make space for that by adding one to the strlen return value, so:

char *output = calloc(strlen(input_string) + 1, sizeof(char));

Anyway, it seems to work now, so... it's solved. Thanks all.

It's safe in this case because output is static (defined outside of any function). In this case arrays are initialized by setting all their elements to 0. Only automatic variables, i.e. those defined within a function without the keyword "static", are ever uninitialized. Source is my memory and I haven't looked it up so don't sue me. Disclaimer: Just to be clear, it is definitely a good idea to add the terminator explicitly and I would not suggest leaving it off even if you know it is unnecessary. It makes verification easier and allows you to use the same algorithm for a truly uninitialized string. Also here you would likely run into problems if you try to call the function multiple times.

I suppose. I forget what the rules are for uninitialized arrays. Uninitialized "regular" variables can not be assumed to be zeroed - they often are not. Perhaps it is safe - but I'd never trust it.

In addition to the above corrections, you'll need to also set the null byte yourself on the output string: output[strlen(str)] = '\0'; When I examined the program in gdb, the entire output array was already nulled. Is it reliable to expect it to be nulled every time?

In addition to the above corrections, you'll need to also set the null byte yourself on the output string:

output[strlen(str)] = '\0';

str = "Hello"
strlen(str) = 5
str[0] = H
str[1] = e
str[2] = l
str[3] = l
str[4] = o
str[5] = \0

Edit: too slow. Yes, indeed, but that wasn't my point. My point was that the char array starts at 0, so the string is contained in the space str[0] to str[strlen(str) - 1], and the NUL is in str[strlen(str)], which was the first character your initial code reads. As Ramses pointed out, your output array started with NUL, so it appeared to be an empty string.

Hmm, I thought strlen()'s code doesn't count the null character, because:

size_t strlen(const char *str)
{
    int i;
    for (i = 0; str[i] != '\0'; i++)
        ;
    return i;
}

the loop should've broken once it hits the '\0', with the i value not counting it. The man page says that: The strlen() function calculates the length of the string s, ***excluding the terminating null byte ('\0').*** I can't check it here now (no C compiler on Windows...) but I think you are copying the terminating \0 byte as the first character, so your resulting string is terminated immediately. So I think your function maps like this: Hello\0 -> \0olle where the "H" is missing as well, because you do not take care of \0.
It should do this though: Hello\0 -> olleH\0.

Here are the pre-processor includes:

#include <stdio.h>
#include <string.h>

Here are the global variables:

char output[999];

Here's the function to sort the string into reverse order (I THINK THE PROBLEM IS HERE):

char output[999];
char *sort_reverse(const char *str)
{
    int i, j = 0;
    for (i = strlen(str); i > 0; i--) {
        output[j] = str[i];
        j++;
    }
    return output;
}

And the function int main(void):

int main(void)
{
    printf("Reverse of \"Hello\": %s\n", sort_reverse("Hello"));
    return 0;
}

The output is:

Reverse of "Hello":

Why is that? How can I fix that?
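Pulling the replies above together, a corrected version of the function might look like the following sketch. It is illustrative rather than the poster's final code: it starts copying at strlen(str) - 1, reserves room for the terminator, and uses calloc as one reply suggested instead of the fixed global buffer.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reverse a string into a freshly allocated buffer.
   The caller is responsible for free()ing the result. */
char *sort_reverse(const char *str)
{
    size_t len = strlen(str);
    char *output = calloc(len + 1, sizeof(char)); /* +1 for '\0', zeroed by calloc */

    if (output == NULL)
        return NULL;

    for (size_t i = 0; i < len; i++)
        output[i] = str[len - 1 - i]; /* start at the last real character, not the '\0' */

    return output;
}

int main(void)
{
    char *reversed = sort_reverse("Hello");
    printf("Reverse of \"Hello\": %s\n", reversed);
    free(reversed);
    return 0;
}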
https://bbs.archlinux.org/extern.php?action=feed&tid=160355&type=atom
CC-MAIN-2017-17
en
refinedweb
My initial thoughts on the default constructor are that it was called automatically. From what I can see, it seems that this is so, but a constructor called automatically doesn't initialize int or char variables, for example, to zero as I originally thought. It appears that they initialize the variable with a garbage value or something associated with the memory location. **Is it true that the default constructor (if you don't provide a constructor yourself) is called automatically but does not initialize the class variables to any meaningful value?** I initially thought that it would initialize class variables to zero but it seems that I am wrong on that. It could be a flaw in my understanding. My code below prints three garbage values for the output if I don't initialize them.

#include <iostream>
#include <cstdlib>
using namespace std;

class test
{
public:
    char a, b, c;
    void testfunct(char d, char e, char f)
    {
        a = d;
        b = e;
        c = f;
    }
};

int main()
{
    test classtest;
    cout << classtest.a << "\n";
    cout << classtest.b << "\n";
    cout << classtest.c << "\n";
    system("pause>nul");
}
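To see the behaviour being asked about in one place, here is a small sketch (the class names are made up for illustration). Members of built-in type are left with indeterminate values by the implicitly generated default constructor; zeroing them requires either value-initialization or a user-written constructor:

#include <iostream>

class Test {
public:
    char a, b, c;            // no constructor: members are not initialized
};

class TestZeroed {
public:
    char a, b, c;
    TestZeroed() : a(0), b(0), c(0) {}   // user-provided constructor zeroes them
};

int main() {
    Test t1;        // default-initialized: a, b, c hold garbage values
    Test t2{};      // value-initialized (C++11 braces): a, b, c are zero
    TestZeroed t3;  // constructor runs: a, b, c are zero

    std::cout << static_cast<int>(t2.a) << " "
              << static_cast<int>(t3.a) << "\n";  // prints: 0 0

    // Reading t1.a here would be undefined behaviour, so we don't.
    return 0;
}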
https://www.daniweb.com/programming/software-development/threads/433289/default-constructor
CC-MAIN-2017-17
en
refinedweb
iSndSysSource3DDoppler Struct Reference
[Sound system]

Extension to the iSndSysSource3D interface, allowing Doppler shift effects.

#include <isndsys/ss_source.h>

Detailed Description

Extension to the iSndSysSource3D interface, allowing Doppler shift effects. The Doppler effect causes sound sources to change in pitch as their relative velocities change. As an example, the siren of an ambulance will increase in pitch as it approaches you, and decrease once it has passed you.

The pitch of a source is multiplied by the value

doppler_factor * (speed_of_sound - listener_velocity) / (speed_of_sound + source_velocity)

where the two velocities are the projections of the source and listener velocities onto the vector between them.

Definition at line 265 of file ss_source.h.

Member Function Documentation

Get velocity (speed) of the source.
Set velocity (speed) of the source.

The documentation for this struct was generated from the following file:
- isndsys/ss_source.h
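The pitch formula above is easy to check in isolation. The helper below is not part of the Crystal Space API; it is just an illustrative computation of the multiplier described in this reference, taking the already-projected velocities as plain numbers:

#include <iostream>

// Illustrative only: the pitch multiplier from the formula above.
// listener_velocity and source_velocity are the projections of the two
// velocities onto the vector between listener and source.
double DopplerPitchFactor(double doppler_factor,
                          double speed_of_sound,
                          double listener_velocity,
                          double source_velocity)
{
    return doppler_factor * (speed_of_sound - listener_velocity)
                          / (speed_of_sound + source_velocity);
}

int main()
{
    // With a stationary listener and a projected source velocity of -20 m/s,
    // the factor comes out greater than 1, i.e. the pitch rises (the
    // approaching-ambulance case); the exact sign convention for the
    // projections is up to the engine.
    std::cout << DopplerPitchFactor(1.0, 343.0, 0.0, -20.0) << "\n";
    return 0;
}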
http://www.crystalspace3d.org/docs/online/api-1.4/structiSndSysSource3DDoppler.html
CC-MAIN-2017-17
en
refinedweb
Earlier this year, Github released Atom-Shell, the core of its famous open-source editor Atom, and renamed it to Electron for the special occasion. Electron, unlike other competitors in the category of Node.js-based desktop applications, brings its own twist to this already well-established market by combining the power of Node.js (io.js until recent releases) with the Chromium Engine to bring us the best of both server and client-side JavaScript. Imagine a world where we could build performant, data-driven, cross-platform desktop applications powered by not only the ever-growing repository of NPM modules, but also the entire Bower registry to fulfill all our client-side needs. In this tutorial, we will build a simple password keychain application using Electron, Angular.js and Loki.js, a lightweight and in-memory database with a familiar syntax for MongoDB developers. The full source code for this application is available here. This tutorial assumes that: - The reader has Node.js and Bower installed on their machine. - They are familiar with Node.js, Angular.js and MongoDB-like query syntax. Getting the Goods First things first, we will need to get the Electron binaries in order to test our app locally. We can install it globally and use it as a CLI, or install it locally in our application’s path. I recommend installing it globally, so that way we do not have to do it over and over again for every app we develop. We will learn later how to package our application for distribution using Gulp. This process involves copying the Electron binaries, and therefore it makes little to no sense to manually install it in our application’s path. To install the Electron CLI, we can type the following command in our terminal: $ npm install -g electron-prebuilt To test the installation, type electron -h and it should display the version of the Electron CLI. At the time this article was written, the version of Electron was 0.31.2. Setting up the Project Let’s assume the following basic folder structure: my-app |- cache/ |- dist/ |- src/ |-- app.js | gulpfile.js … where: - cache/ will be used to download the Electron binaries when building the app. - dist/ will contain the generated distribution files. - src/ will contain our source code. - src/app.js will be the entry point of our application. Next, we will navigate to the src/ folder in our terminal and create the package.json and bower.json files for our app: $ npm init $ bower init We will install the necessary packages later on in this tutorial. Understanding Electron Processes Electron distinguishes between two types of processes: - The Main Process: The entry point of our application, the file that will be executed whenever we run the app. Typically, this file declares the various windows of the app, and can optionally be used to define global event listeners using Electron’s IPC module. - The Renderer Process: The controller for a given window in our application. Each window creates its own Renderer Process. For code clarity, a separate file should be used for each Renderer Process. To define the Main Process for our app, we will open src/app.jsand include the appmodule to start the app, and the browser-windowmodule to create the various windows of our app (both part of the Electron core), as such: var app = require('app'), BrowserWindow = require('browser-window'); When the app is actually started, it fires a ready event, which we can bind to. 
At this point, we can instantiate the main window of our app: var mainWindow = null; app.on('ready', function() { mainWindow = new BrowserWindow({ width: 1024, height: 768 }); mainWindow.loadUrl('file://' + __dirname + '/windows/main/main.html'); mainWindow.openDevTools(); }); Key points: - We create a new window by creating a new instance of the BrowserWindowobject. - It takes an object as a single argument, allowing us to define various settings, amongst which the default width and height of the window. - The window instance has a loadUrl()method, allowing us to load the contents of an actual HTML file in the current window. The HTML file can either be local or remote. - The window instance has an optional openDevTools()method, allowing us to open an instance of the Chrome Dev Tools in the current window for debugging purposes. Next, we should organize our code a little. I recommend creating a windows/ folder in our src/ folder, and where we can create a subfolder for each window, as such: my-app |- src/ |-- windows/ |--- main/ |---- main.controller.js |---- main.html |---- main.view.js … where main.controller.js will contain the “server-side” logic of our application, and main.view.js will contain the “client-side” logic of our application. The main.html file is simply an HTML5 webpage, so we can simply start it like this: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Password Keychain</title> </head> <body> <h1>Password Keychain</h1> </body> </html> At this point, our app should be ready to run. To test it, we can simply type the following in our terminal, at the root of the src folder: $ electron . We can automate this process by defining the startscript of the package.son file. Building a Password Keychain Desktop App To build a password keychain application, we need: - A way to add, generate and save passwords. - A convenient way to copy and remove passwords. Generating and Saving Passwords A simple form will suffice to insert new passwords. For the sake of demonstrating communication between multiple windows in Electron, start by adding a second window in our application, which will display the “insert” form. Since we will open and close this window multiple times, we should wrap up the logic in a method so that we can simply call it when needed: function createInsertWindow() { insertWindow = new BrowserWindow({ width: 640, height: 480, show: false }); insertWindow.loadUrl('file://' + __dirname + '/windows/insert/insert.html'); insertWindow.on('closed',function() { insertWindow = null; }); } Key points: - We will need to set the show property to false in the options object of the BrowserWindow constructor, in order to prevent the window from being open by default when the applications starts. - We will need to destroy the BrowserWindow instance whenever the window is firing a closed event. Opening and Closing the “Insert” Window The idea is to be able to trigger the “insert” window when the end user clicks a button in the “main” window. In order to do this, we will need to send a message from the main window to the Main Process to instruct it to open the insert window. We can achieve this using Electron’s IPC module. There are actually two variants of the IPC module: - One for the Main Process, allowing the app to subscribe to messages sent from windows. - One for the Renderer Process, allowing the app to send messages to the main process. 
Although Electron’s communication channel is mostly uni-directional, it is possible to access the Main Process’ IPC module in a Renderer Process by making use of the remote module. Also, the Main Process can send a message back to the Renderer Process from which the event originated by using the Event.sender.send() method. To use the IPC module, we just require it like any other NPM module in our Main Process script: var ipc = require('ipc'); … and then bind to events with the on() method: ipc.on('toggle-insert-view', function() { if(!insertWindow) { createInsertWindow(); } return (!insertWindow.isClosed() && insertWindow.isVisible()) ? insertWindow.hide() : insertWindow.show(); }); Key Points: - We can name the event however we want, the example is just arbitrary. - Do not forget to check if the BrowserWindow instance is already created, if not then instantiate it. - The BrowserWindow instance has some useful methods: - isClosed() returns a boolean, whether or not the window is currently in a closedstate. - isVisible(): returns a boolean, whether or not the window is currently visible. - show() / hide(): convenience methods to show and hide the window. Now we actually need to fire that event from the Renderer Process. We will create a new script file called main.view.js, and add it to our HTML page like we would with any normal script: <script src="./main.view.js"></script> scripttag loads this file in a client-side context. This means that, for example, global variables are available via window.<var_name>. To load a script in a server-side context, we can use the require()method directly in our HTML page: require('./main.controller.js');. Even though the script is loaded in client-side context, we can still access the IPC module for the Renderer Process in the same way that we can for the Main Process, and then send our event as such: var ipc = require('ipc'); angular .module('Utils', []) .directive('toggleInsertView', function() { return function(scope, el) { el.bind('click', function(e) { e.preventDefault(); ipc.send('toggle-insert-view'); }); }; }); There is also a sendSync() method available, in case we need to send our events synchronously. Now, all we have left to do to open the “insert” window is to create an HTML button with the matching Angular directive on it: <div ng- <button toggle-insert-view <i class="material-icons">add</i> </button> </div> And add that directive as a dependency of the main window’s Angular controller: angular .module('MainWindow', ['Utils']) .controller('MainCtrl', function() { var vm = this; }); Generating Passwords To keep things simple, we can just use the NPM uuid module to generate unique ID’s that will act as passwords for the purpose of this tutorial. We can install it like any other NPM module, require it in our ‘Utils’ script and then create a simple factory that will return a unique ID: var uuid = require('uuid'); angular .module('Utils', []) ... .factory('Generator', function() { return { create: function() { return uuid.v4(); } }; }) Now, all we have left to do is create a button in the insert view, and attach a directive to it that will listen to click events on the button and call the create() method: <!-- in insert.html --> <button generate-passwordgenerate</button> // in Utils.js angular .module('Utils', []) ... 
.directive('generatePassword', ['Generator', function(Generator) { return function(scope, el) { el.bind('click', function(e) { e.preventDefault(); if(!scope.vm.formData) scope.vm.formData = {}; scope.vm.formData.password = Generator.create(); scope.$apply(); }); }; }]) Saving Passwords At this point, we want to store our passwords. The data structure for our password entries is fairly simple: { "id": String "description": String, "username": String, "password": String } So all we really need is some kind of in-memory database that can optionally sync to file for backup. For this purpose, Loki.js seems like the ideal candidate. It does exactly what we need for the purpose of this application, and offers on top of it the Dynamic Views feature, allowing us to do things similar to MongoDB’s Aggregation module. Dynamic Views do not offer all the functionality that MongodDB’s Aggregation module does. Please refer to the documentation for more information. Let’s start by creating a simple HTML form: <div class="insert" ng- <form name="insertForm" no-validate> <fieldset ng- <div class="mdl-textfield"> <input class="mdl-textfield__input" type="text" id="description" ng-Description...</label> </div> <div class="mdl-textfield"> <input class="mdl-textfield__input" type="text" id="username" ng- <label class="mdl-textfield__label" for="username">Username...</label> </div> <div class="mdl-textfield"> <input class="mdl-textfield__input" type="password" id="password" ng-Password...</label> </div> <div class=""> <button generate-passwordgenerate</button> <button toggle-insert-viewcancel</button> <button save-passwordsave</button> </div> </fieldset> </form> </div> And now, let’s add the JavaScript logic to handle posting and saving of the form’s contents: var loki = require('lokijs'), path = require('path'); angular .module('Utils', []) ... .service('Storage', ['$q', function($q) { this.db = new loki(path.resolve(__dirname, '../..', 'app.db')); this.collection = null; this.loaded = false; this.init = function() { var d = $q.defer(); this.reload() .then(function() { this.collection = this.db.getCollection('keychain'); d.resolve(this); }.bind(this)) .catch(function(e) { // create collection this.db.addCollection('keychain'); // save and create file this.db.saveDatabase(); this.collection = this.db.getCollection('keychain'); d.resolve(this); }.bind(this)); return d.promise; }; this.addDoc = function(data) { var d = $q.defer(); if(this.isLoaded() && this.getCollection()) { this.getCollection().insert(data); this.db.saveDatabase(); d.resolve(this.getCollection()); } else { d.reject(new Error('DB NOT READY')); } return d.promise; }; }) .directive('savePassword', ['Storage', function(Storage) { return function(scope, el) { el.bind('click', function(e) { e.preventDefault(); if(scope.vm.formData) { Storage .addDoc(scope.vm.formData) .then(function() { // reset form & close insert window scope.vm.formData = {}; ipc.send('toggle-insert-view'); }); } }); }; }]) Key Points: - We first need to initialize the database. This process involves creating a new instance of the Loki Object, providing the path to the database file as an argument, looking up if that backup file exists, creating it if needed (including the ‘Keychain’ collection), and then loading the contents of this file in memory. - We can retrieve a specific collection in the database with the getCollection()method. - A collection object exposes several methods, including an insert()method, allowing us to add a new document to the collection. 
- To persist the database contents to file, the Loki object exposes a saveDatabase()method. - We will need to reset the form’s data and send an IPC event to tell the Main Process to close the window once the document is saved. We now have a simple form allowing us to generate and save new passwords. Let’s go back to the main view to list these entries. Listing Passwords A few things need to happen here: - We need to be able to get all the documents in our collection. - We need to inform the main view whenever a new password is saved so it can refresh the view. We can retrieve the list of documents by calling the getCollection() method on the Loki object. This method returns an object with a property called data, which is simply an array of all the documents in that collection: this.getCollection = function() { this.collection = this.db.getCollection('keychain'); return this.collection; }; this.getDocs = function() { return (this.getCollection()) ? this.getCollection().data : null; }; We can then call the getDocs() in our Angular controller and retrieve all the passwords stored in the database, after we initialize it: angular .module('MainView', ['Utils']) .controller('MainCtrl', ['Storage', function(Storage) { var vm = this; vm.keychain = null; Storage .init() .then(function(db) { vm.keychain = db.getDocs(); }); }); A bit of Angular templating, and we have a password list: <tr ng- <td class="mdl-data-table__cell--non-numeric">{{item.description}}</td> <td>{{item.username || 'n/a'}}</td> <td> <span ng-•</span> </td> <td> <a href="#" copy-copy</a> <a href="#" remove-remove</a> </td> </tr> A nice added feature would be to refresh the list of passwords after inserting a new one. For this, we can use Electron’s IPC module. As mentioned earlier, the Main Process’ IPC module can be called in a Renderer Process to turn it into a listener process, by using the remote module. Here is an example on how to implement it in main.view.js: var remote = require('remote'), remoteIpc = remote.require('ipc'); angular .module('MainView', ['Utils']) .controller('MainCtrl', ['Storage', function(Storage) { var vm = this; vm.keychain = null; Storage .init() .then(function(db) { vm.keychain = db.getDocs(); remoteIpc.on('update-main-view', function() { Storage .reload() .then(function() { vm.keychain = db.getDocs(); }); }); }); }]); Key Points: - We will need to use the remote module via its own require()method to require the remote IPC module from the Main Process. - We can then setup our Renderer Process as an event listener via the on()method, and bind callback functions to these events. The insert view will then be in charge of dispatching this event whenever a new document is saved: Storage .addDoc(scope.vm.formData) .then(function() { // refresh list in main view ipc.send('update-main-view'); // reset form & close insert window scope.vm.formData = {}; ipc.send('toggle-insert-view'); }); Copying Passwords It is usually not a good idea to display passwords in plain text. Instead, we are going to hide and provide a convenience button allowing the end user to copy the password directly for a specific entry. Here again, Electron comes to our rescue by providing us with a clipboard module with easy methods to copy and paste not only text content, but also images and HTML code: var clipboard = require('clipboard'); angular .module('Utils', []) ... 
.directive('copyPassword', [function() { return function(scope, el, attrs) { el.bind('click', function(e) { e.preventDefault(); var text = (scope.vm.keychain[attrs.copyPassword]) ? scope.vm.keychain[attrs.copyPassword].password : ''; // atom's clipboard module clipboard.clear(); clipboard.writeText(text); }); }; }]); Since the generated password will be a simple string, we can use the writeText() method to copy the password to the system’s clipboard. We can then update our main view HTML, and add the copy button with the copy-password directive on it, providing the index of the array of passwords: <a href="#" copy-copy</a> Removing Passwords Our end users might also like to be able to delete passwords, in case they become obsolete. To do this, all we need to do is call the remove() method on the keychain collection. We need to provide the entire doc to the ‘remove()’ method, as such: this.removeDoc = function(doc) { return function() { var d = $q.defer(); if(this.isLoaded() && this.getCollection()) { // remove the doc from the collection & persist changes this.getCollection().remove(doc); this.db.saveDatabase(); // inform the insert view that the db content has changed ipc.send('reload-insert-view'); d.resolve(true); } else { d.reject(new Error('DB NOT READY')); } return d.promise; }.bind(this); }; Loki.js documentation states that we can also remove a doc by its id, but it does not seem to be working as expected. Creating a Desktop Menu Electron integrates seamlessly with our OS desktop environment to provide a “native” user experience look & feel to our apps. Therefore, Electron comes bundled with a Menu module, dedicated to creating complex desktop menu structures for our app. The menu module is a vast topic and almost deserves a tutorial of its own. I strongly recommend you read through Electron’s Desktop Environment Integration tutorial to discover all the features of this module. For the scope of this current tutorial, we will see how to create a custom menu, add a custom command to it, and implement the standard quit command. Creating & Assigning a Custom Menu to Our App Typically, the JavaScript logic for an Electron menu would belong in the main script file of our app, where our Main Process is defined. However, we can abstract it to a separate file, and access the Menu module via the remote module: var remote = require('remote'), Menu = remote.require('menu'); To define a simple menu, we will need to use the buildFromTemplate() method: var appMenu = Menu.buildFromTemplate([ { label: 'Electron', submenu: [{ label: 'Credits', click: function() { alert('Built with Electron & Loki.js.'); } }] } ]); The first item in the array is always used as the “default” menu item. The value of the labelproperty does not matter much for the default menu item. In dev mode it will always display Electron. We will see later how to assign a custom name to the default menu item during the build phase. Finally, we need to assign this custom menu as the default menu for our app with the setApplicationMenu() method: Menu.setApplicationMenu(appMenu); Mapping Keyboard Shortcuts Electron provides “accelerators”, a set of pre-defined strings that map to actual keyboard combinations, e.g.: Command+A or Ctrl+Shift+Z. The Commandaccelerator does not work on Windows or Linux. For our password keychain application, we should add a Filemenu item, offering two commands: - Create Password: open the insert view with Cmd (or Ctrl) + N - Quit: quit the app altogether with Cmd (or Ctrl) + Q ... 
{ label: 'File', submenu: [ { label: 'Create Password', accelerator: 'CmdOrCtrl+N', click: function() { ipc.send('toggle-insert-view'); } }, { type: 'separator' // to create a visual separator }, { label: 'Quit', accelerator: 'CmdOrCtrl+Q', selector: 'terminate:' // OS X only!!! } ] } ... Key Points: - We can add a visual separator by adding an item to the array with the typeproperty set to separator. - The CmdOrCtrlaccelerator is compatible with both Mac and PC keyboards - The selectorproperty is OSX-compatible only! Styling Our App You probably noticed throughout the various code examples references to class names starting with mdl-. For the purpose of this tutorial I opted to use the Material Design Lite UI framework, but feel free to use any UI framework of your choice. Anything that we can do with HTML5 can be done in Electron; just keep in mind the growing size of the app’s binaries, and the resulting performance issues that may occur if you use too many third-party libraries. Packaging Electron Apps for Distribution You made an Electron app, it looks great, you wrote your e2e tests with Selenium and WebDriver, and you are ready to distribute it to the world! But you still want to personalize it, give it a custom name other than the default “Electron”, and maybe also provide custom application icons for both Mac and PC platforms. Building with Gulp These days, there is a Gulp plugin for anything we can think of. All I had to do is type gulp electron in Google, and sure enough there is a gulp-electron plugin! This plugin is fairly easy to use as long as the folder structure detailed at the beginning of this tutorial was maintained. If not, you might have to move things around a bit. This plugin can be installed like any other Gulp plugin: $ npm install gulp-electron --save-dev And then we can define our Gulp task as such: var gulp = require('gulp'), electron = require('gulp-electron'), info = require('./src/package.json'); gulp.task('electron', function() { gulp.src("") .pipe(electron({ src: './src', packageJson: info, release: './dist', cache: './cache', version: 'v0.31.2', packaging: true, platforms: ['win32-ia32', 'darwin-x64'], platformResources: { darwin: { CFBundleDisplayName: info.name, CFBundleIdentifier: info.bundle, CFBundleName: info.name, CFBundleVersion: info.version }, win: { "version-string": info.version, "file-version": info.version, "product-version": info.version } } })) .pipe(gulp.dest("")); }); Key Points: - the src/folder cannot be the same as the folder where the Gulpfile.js is, nor the same folder as the distribution folder. - We can define the platforms we wish to export to via the platformsarray. - We should define a cachefolder, where the Electron binaries will be download so they can be packaged with our app. - The contents of the app’s package.json file need to be passed to the gulp task via the packageJsonproperty. - There is an optional packagingproperty, allowing us to also create zip archives of the generated apps. - For each platform, there is a different set of “platform resources” that can be defined. Adding App Icons One of the platformResources properties is the icon property, allowing us to define a custom icon for our app: "icon": "keychain.ico" OS X requires icons with the .icnsfile extension. There are multiple online tools allowing us to convert .pngfiles into .icoand .icnsfor free. Conclusion In this article we have only scratched the surface of what Electron can actually do. 
Think of great apps like Atom or Slack as a source of inspiration where you can go with this tool. I hope you found this tutorial useful, please feel free to leave your comments and share your experiences with Electron!
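As a quick recap of the Main Process pieces covered above (window creation on the ready event, plus the IPC listener that toggles the insert window), here is a condensed, self-contained sketch. It follows the pre-1.0 Electron module names and file paths used throughout this article, and simplifies the window checks slightly:

// src/app.js - condensed recap of the Main Process
var app = require('app'),
    BrowserWindow = require('browser-window'),
    ipc = require('ipc');

var mainWindow = null,
    insertWindow = null;

function createInsertWindow() {
  insertWindow = new BrowserWindow({ width: 640, height: 480, show: false });
  insertWindow.loadUrl('file://' + __dirname + '/windows/insert/insert.html');
  insertWindow.on('closed', function() { insertWindow = null; });
}

app.on('ready', function() {
  mainWindow = new BrowserWindow({ width: 1024, height: 768 });
  mainWindow.loadUrl('file://' + __dirname + '/windows/main/main.html');

  // Renderer processes ask us to show or hide the insert window.
  ipc.on('toggle-insert-view', function() {
    if (!insertWindow) {
      createInsertWindow();
    }
    return insertWindow.isVisible() ? insertWindow.hide() : insertWindow.show();
  });
});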
https://www.toptal.com/javascript/electron-cross-platform-desktop-apps-easy
CC-MAIN-2017-17
en
refinedweb
I just want a simple single sign-on for my application, and IdentityServer3 seems to be a good solution. There are three things I didn't like about it, though: the consent page, and the logout and logged-out pages. I managed to disable the consent page by setting these lines in the Clients.cs file:

RequireConsent = false,
AllowRememberConsent = false,

The documentation here will help you. You are interested in specifying a custom set of AuthenticationOptions. Within that, there are three properties of interest:

EnableSignOutPrompt
Indicates whether IdentityServer will show a confirmation page for sign-out. When a client initiates a sign-out, by default IdentityServer will ask the user for confirmation. This is a mitigation technique against "logout spam". Defaults to true.

EnablePostSignOutAutoRedirect
Gets or sets a value indicating whether IdentityServer automatically redirects back to a validated post_logout_redirect_uri passed to the signout endpoint. Defaults to false.

PostSignOutAutoRedirectDelay
Gets or sets the delay (in seconds) before redirecting to a post_logout_redirect_uri. Defaults to 0.

Using these three settings you should be able to tweak IdentityServer3 to your liking. For example, your Startup.cs may look like this:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Map("/identity", idsrvApp =>
        {
            idsrvApp.UseIdentityServer(new IdentityServerOptions
            {
                AuthenticationOptions = new AuthenticationOptions()
                {
                    EnableSignOutPrompt = false,
                    EnablePostSignOutAutoRedirect = true,
                    PostSignOutAutoRedirectDelay = 0
                },
                EnableWelcomePage = false,
                Factory = Factory.Get(),
                SigningCertificate = Certificate.Get(),
                SiteName = "Identity Server Example"
            });
        });
    }
}
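For completeness, the client definition that goes with this looks something like the sketch below: RequireConsent/AllowRememberConsent suppress the consent screen, and PostLogoutRedirectUris is the list that EnablePostSignOutAutoRedirect validates against. The client id, flow and URIs here are placeholders, not values from the question:

using System.Collections.Generic;
using IdentityServer3.Core.Models;

public static class Clients
{
    public static IEnumerable<Client> Get()
    {
        return new[]
        {
            new Client
            {
                ClientId = "myapp",
                ClientName = "My Application",
                Flow = Flows.Implicit,

                // Skip the consent screen entirely
                RequireConsent = false,
                AllowRememberConsent = false,

                RedirectUris = new List<string> { "https://localhost:44300/" },

                // EnablePostSignOutAutoRedirect only redirects back to a URI
                // listed here after sign-out
                PostLogoutRedirectUris = new List<string> { "https://localhost:44300/" }
            }
        };
    }
}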
https://codedump.io/share/P48awB3Nz7SM/1/how-to-remove-logout-and-loggedout-paged-from-aspnet-identityserver3
CC-MAIN-2017-17
en
refinedweb
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv1720/ltp Modified Files: ChangeLog Log Message: Changes for FEBRUARY 2008 Index: ChangeLog =================================================================== RCS file: /cvsroot/ltp/ltp/ChangeLog,v retrieving revision 1.108 retrieving revision 1.109 diff -C2 -d -r1.108 -r1.109 *** ChangeLog 1 Feb 2008 10:46:21 -0000 1.108 --- ChangeLog 29 Feb 2008 04:34:48 -0000 1.109 *************** *** 1,2 **** --- 1,428 ---- + LTP-20080229 + + 1) Log Message: lcov: adding support for gzipped html based on patch by dnozay@... + File(s) Modified: + ltp/utils/analysis/lcov/lcovrc + ltp/utils/analysis/lcov/man/genhtml.1 + ltp/utils/analysis/lcov/man/lcovrc.5 + ltp/utils/analysis/lcov/bin/genhtml + + 2) Log Message: Fix for Don´t call Domain type on test create, by, "Serge E. Hallyn" <serue@...> + File(s) Modified: + ltp/testcases/kernel/security/selinux-testsuite/misc/sbin_deprecated.patch + ltp/testscripts/test_selinux.sh + + 3) Log Message: Some code cleanup in PID & SYSVIPC namespace testcases, by, "Rishikesh K. Rajak" <risrajak@...> + Modified File(s): + ltp/testcases/kernel/containers/pidns/pidns01.c + ltp/testcases/kernel/containers/pidns/pidns02.c + ltp/testcases/kernel/containers/pidns/pidns03.c + ltp/testcases/kernel/containers/sysvipc/shmnstest.c + + 4) Log Message: Some Cleanups and running hugetlb independantly + Modified File(s): + ltp/testcases/kernel/mem/hugetlb/hugemmap/hugemmap02.c + ltp/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c + Added File(s): + ltp/runtest/hugetlb + + 5) Log Message: Give Execute Permission to numa01.sh, by, Pradeep K Surisetty <pradeepkumars@...> + Modified File(s): + ltp/testcases/kernel/numa/Makefile + + 6) Log Message: Let tests send sigchld to unconfined_t. Without this, the selinux testsuite on Fedora 8 hangs at selinux_task_create.sh, by, "Serge E. Hallyn" <serue@...> + Modified File(s): + ltp/testcases/kernel/security/selinux-testsuite/misc/sbin_deprecated.patch + + 7) Log Message: str_echo function expects a file descriptor & not an address, by, Craig Meier <crmeier@...> + Modified File(s): + ltp/testcases/kernel/sched/clisrv/pthserv.c + + 8) Log Message: Build Error Fix by checking for installation of setcap or xattr headers, by, "Serge E. 
Hallyn" <serue@...> + Modified File(s): + ltp/testcases/kernel/security/filecaps/Makefile + ltp/testcases/kernel/security/filecaps/checkforlibcap.sh + Added Files: + ltp/testcases/kernel/security/filecaps/check_xattr.c + + 9) Log Message: mark test_exit as noreturn #1891129 by Marcus Meissner, by, Mike Frysinger <vapier@...> + Modified File(s): + ltp/ltp/include/test.h + Added File(s): + ltp/ltp/include/compiler.h + + 10)Log Message: Disktest application update to version 1.4.2, by, Brent Yardley <yardleyb@...> + Modified File(s): + ltp/testcases/kernel/io/disktest/Getopt.c + ltp/testcases/kernel/io/disktest/Getopt.h + ltp/testcases/kernel/io/disktest/Makefile + ltp/testcases/kernel/io/disktest/Makefile.aix + ltp/testcases/kernel/io/disktest/Makefile.linux + ltp/testcases/kernel/io/disktest/Makefile.windows + ltp/testcases/kernel/io/disktest/README + ltp/testcases/kernel/io/disktest/childmain.c + ltp/testcases/kernel/io/disktest/childmain.h + ltp/testcases/kernel/io/disktest/defs.h + ltp/testcases/kernel/io/disktest/dump.c + ltp/testcases/kernel/io/disktest/dump.h + ltp/testcases/kernel/io/disktest/globals.c + ltp/testcases/kernel/io/disktest/globals.h + ltp/testcases/kernel/io/disktest/io.c + ltp/testcases/kernel/io/disktest/io.h + ltp/testcases/kernel/io/disktest/main.c + ltp/testcases/kernel/io/disktest/main.h + ltp/testcases/kernel/io/disktest/parse.c + ltp/testcases/kernel/io/disktest/parse.h + ltp/testcases/kernel/io/disktest/sfunc.c + ltp/testcases/kernel/io/disktest/sfunc.h + ltp/testcases/kernel/io/disktest/stats.c + ltp/testcases/kernel/io/disktest/stats.h + ltp/testcases/kernel/io/disktest/threading.c + ltp/testcases/kernel/io/disktest/threading.h + ltp/testcases/kernel/io/disktest/timer.c + ltp/testcases/kernel/io/disktest/timer.h + ltp/testcases/kernel/io/disktest/usage.c + ltp/testcases/kernel/io/disktest/usage.h + ltp/testcases/kernel/io/disktest/man1/disktest.1 + Added File(s): + ltp/testcases/kernel/io/disktest/CHANGELOG + ltp/testcases/kernel/io/disktest/disktest.spec + ltp/testcases/kernel/io/disktest/signals.c + ltp/testcases/kernel/io/disktest/signals.h + ltp/testcases/kernel/io/disktest/man1/disktest_manual.html + + 11) Log Message: Pid Namespace were getting segmentation fault while running on -mm kernel. After debugging by container development team they found the exact root cause. The Page_Size was reset, by, "Rishikesh K. Rajak" <risrajak@...> + Modified File(s): + ltp/testcases/kernel/containers/libclone/libclone.c + + 12) Log Message: Based on the discussion at LKML (), Ricardo Salveti de Araujo <rsalveti@...> removed the test case that verifies if the pgoff is "valid" + Modified File(s): + ltp/testcases/kernel/syscalls/remap_file_pages/remap_file_pages02.c + + 13) Log Message: The problem was the position of the parenthesis, which made "fd" receive the result of the < (lower than) operation, instead of the actual return value from open. This implicates a lot of trouble in any subsequent reference to fd, used in write and mmap. Because of this, mmap was returning an error number (ENODEV), instead of a valid memory address, which created the mprotect trouble. 
Fix by Jose Otavio Rizzatti Ferreira <joseferr@...> + Modified File(s): + ltp/testcases/kernel/syscalls/mprotect/mprotect02.c + + 14) Log Message: Patrick Kirsch <pkirsch@...> personally thinks, it would be better to print out the "actual" return code from sysconf call instead of the errno, which may lead to confusion, because the actual return code from the failing sysconf is probably not 0 (as errno is defined in previous context). + Modified File(s): + ltp/testcases/kernel/syscalls/sysconf/sysconf01.c + + 15) Log Message: Do not store cache files, by, Mike Frysinger <vapier@...> + Deleted File(s): + ltp/testcases/realtime/autom4te.cache/traces.0 + ltp/testcases/realtime/autom4te.cache/traces.1 + ltp/testcases/realtime/autom4te.cache/requests + ltp/testcases/realtime/autom4te.cache/output.1 + ltp/testcases/realtime/autom4te.cache/output.0 + + 16) Log Message: Remove compiled files, by, Mike Frysinger <vapier@...> + Modified File(s): + ltp/testcases/kernel/syscalls/pcllib/libtool + Deleted File(s): + ltp/testcases/kernel/syscalls/pcllib/config.h + ltp/testcases/kernel/syscalls/pcllib/config.log + ltp/testcases/kernel/syscalls/pcllib/config.status + + 17) Log Message: punt compiled files, by, Mike Frysinger <vapier@...> + Deleted File(s): + ltp/testcases/kernel/syscalls/pcllib/test/.deps/cobench.Po + ltp/testcases/kernel/syscalls/pcllib/test/.deps/cothread.Po + + 18) Log Message: punt compiled files, by, Mike Frysinger <vapier@...> + Deleted File(s): + ltp/testcases/kernel/syscalls/pcllib/man/Makefile + + 19) Log Message: punt compiled files, by, Mike Frysinger <vapier@...> + Deleted File(s): + ltp/testcases/kernel/syscalls/pcllib/pcl/.deps/pcl_version.Plo + ltp/testcases/kernel/syscalls/pcllib/pcl/.deps/pcl.Plo + + 20) Log Message: punt compiled files, by, Mike Frysinger <vapier@...> + Deleted File(s): + ltp/testcases/kernel/syscalls/pcllib/autom4te.cache/traces.0 + ltp/testcases/kernel/syscalls/pcllib/autom4te.cache/traces.1 + ltp/testcases/kernel/syscalls/pcllib/autom4te.cache/requests + ltp/testcases/kernel/syscalls/pcllib/autom4te.cache/output.1 + ltp/testcases/kernel/syscalls/pcllib/autom4te.cache/output.0 + + 21) Log Message: punt compiled files, by, Mike Frysinger <vapier@...> + Deleted File(s): + ltp/testcases/kernel/syscalls/pcllib/include/Makefile + + 22) Log Message: punt compiled files, by, Mike Frysinger <vapier@...> + Deleted File(s): + ltp/testcases/kernel/syscalls/pcllib/test/Makefile + + 23) Log Message: punt compiled files, by, Mike Frysinger <vapier@...> + Deleted File(s): + ltp/testcases/kernel/syscalls/pcllib/pcl/Makefile + + 24) Log Message: This will address the problem until distros update with latest glibc which has fallocate implementation. This is not extensively tested and built with some assumption like + o we are testing on x86* and ppc* archs + o on 64 bit machine we will always see 64 bit kernel running + by, Nagesh Sharyathi <sharyathi@...> + Modified File(s): + ltp/testcases/kernel/syscalls/fallocate/fallocate01.c + ltp/testcases/kernel/syscalls/fallocate/fallocate02.c + ltp/testcases/kernel/syscalls/fallocate/fallocate03.c + + 25) Log Message: + Since msgmni now scales to the memory size, it may reach big values. + To avoid forking 2*msgmni processes and create msgmni msg queues, do not take + msgmni from procfs anymore. + Just define it as 16 (which is the MSGMNI constant value in linux/msg.h) + + Also fixed the Makefiles in ipc/lib and ipc/msgctl: there was no dependency + on the lib/ipc*.h header files. 
+ + Signed-off-by: Nadia Derbey <Nadia.Derbey@...> + + Modified File(s): + ltp/testcases/kernel/syscalls/ipc/lib/Makefile + ltp/testcases/kernel/syscalls/ipc/lib/ipcmsg.h + ltp/testcases/kernel/syscalls/ipc/msgctl/Makefile + ltp/testcases/kernel/syscalls/ipc/msgctl/msgctl08.c + ltp/testcases/kernel/syscalls/ipc/msgctl/msgctl09.c + + 26) Log Message: + Here is a second round of cleanup and fixes for the realtime testcases. + + 1) Make sched_jitter use the create_fifo_thread() library function instead of an open coded solution, + 2) Prio-wake calls rt_init() twice, remove the second call, + 3) Make sbrk_mutex less verbose by default. One can still use the -v option to get the whole output, + 4) It's better to calculate the histogram before saving it. This was introduced in an earlier commit of mine fixing the quantile calculation, 5) Fix runtime displaying of the min and max latencies (when used with -v3). While at it, remove an uneeded avg variable, + 6) Various tests still have a hardcoded value for the quantile nines. Use a value automatically calculated from the number of iterations, + 7) The log10() call used for automatic quantile nines calculation returns a double result. Cast it to an int. The exp10() call used in stats_quantiles_calc() for checking purposes returns a double result which is compared against a long. Cast it to a long. This allows the following comparison: data->size < (long)exp10(quantiles->nines) to really be false when quantiles->nines has been calculated as log10(data->size). + More generally, it seems that (at least with gcc 4.1.1): + long i = 10000; + double f = exp10(log10(i)) + + yields (i < f) being true due to rounding, + 8) Add latency tracing capability to pthread_kill_latency as is already done on a few other latency tests (gtod_latency, sched_latency, ...), + 9) The '::' optional argument specifier for getopt used by the '-v' option is a GNU extension, is not portable and does not work. For example it's not even described in the Debian getopt(3) manpage. Make the '-v' option require a non optional argument, + 10)The print buffer is only ever flushed when it is full. Add flushing when the test terminates vi atexit(), + 11)The 'period missed' check of the thread first loop should not depend on the thread starting time. This is especially visible on 'slow' platforms where one cannot run the test if thread creation takes a long time. Fix it by removing this dependency. All delays are now calculated relative to when the thread starts, + + Signed-off-by: Sebastien Dugue <sebastien.dugue@...> + Cc: Darren Hart <dvhltc@...> + Cc: Tim Chavez <tinytim@...> + Cc: Matthieu CASTET <matthieu.castet@...> + Acked-by: Chirag <chirag@...> + + Modified File(s): + ltp/testcases/realtime/func/gtod_latency/gtod_latency.c + ltp/testcases/realtime/func/hrtimer-prio/hrtimer-prio.c + ltp/testcases/realtime/func/periodic_cpu_load/periodic_cpu_load.c + ltp/testcases/realtime/func/periodic_cpu_load/periodic_cpu_load_single.c + ltp/testcases/realtime/func/pi-tests/sbrk_mutex.c + ltp/testcases/realtime/func/pi_perf/pi_perf.c + ltp/testcases/realtime/func/prio-wake/prio-wake.c + ltp/testcases/realtime/func/pthread_kill_latency/pthread_kill_latency.c + ltp/testcases/realtime/func/sched_jitter/sched_jitter.c + ltp/testcases/realtime/func/sched_latency/sched_latency.c + ltp/testcases/realtime/lib/librttest.c + ltp/testcases/realtime/lib/libstats.c + + 27) Log Message: lcov: fixed problem with pre gcc-3.3 versions. + read_gcov_headers does not return valid results for pre gcc-3.3 versions. 
Due to an unnecessary check, parsing of gcov files was aborted. Fix by removing check, by, Peter Oberparleiter <oberpapr@...> + Modified File(s): + ltp/utils/analysis/lcov/bin/geninfo + + 28) Log Message: lcov: fix error when trying to use genhtml -b + genhtml fails when the data file contains an entry which is not found in the base file, by, Peter Oberparleiter <oberpapr@...> + Modified File(s): + ltp/utils/analysis/lcov/bin/genhtml + + 29) Log Messaage: run_auto.sh file for realtime/func/pthread_kill_latency/testcase got missed out in first release of realtime tests. This patch adds run_auto.sh for testcase which is required to run this particular test through top-level run script, by, sudhanshu <sudh@...> + Added File(s): + ltp/testcases/realtime/func/pthread_kill_latency/run_auto.sh + + 30) Log Message: Since msgmni now scales to the memory size, it may reach big values. To avoid forking 2*msgmni processes and create msgmni msg queues, take the min between the procfs value and MSGMNI (as found in linux/msg.h). + Also integrated the following in libipc.a: + . get_max_msgqueues() + . get_used_msgqueues() + Signed-off-by: Nadia Derbey <Nadia.Derbey@...> + Modified File(s): + ltp/testcases/kernel/syscalls/ipc/lib/ipcmsg.h + ltp/testcases/kernel/syscalls/ipc/lib/libipc.c + ltp/testcases/kernel/syscalls/ipc/msgctl/msgctl08.c + ltp/testcases/kernel/syscalls/ipc/msgctl/msgctl09.c + ltp/testcases/kernel/syscalls/ipc/msgget/Makefile + ltp/testcases/kernel/syscalls/ipc/msgget/msgget03.c + Added File(s): + ltp/testcases/kernel/syscalls/ipc/msgctl/msgctl10.c + ltp/testcases/kernel/syscalls/ipc/msgctl/msgctl11.c + + 31) Log Message: waitpid06.c uses a flag to detect whether something went wrong during the test. The issue is that this flag is not initialized, and I get random failure reports. Other tests might suffer from the same bug, but I did not observe it yet. The enclosed patch fixes this in a trivial way for waitpid06. Surprisingly, with my debian package I never got the error, but when I compiled myself, by, Louis Rilling <Louis.Rilling@...> + Modified File(s): + ltp/testcases/kernel/syscalls/waitpid/waitpid06.c + + 32) Log Message: + There are numerous cleanups, fixes and features went into our locally maintained version of realtime tests, since its intergration in LTP december last year. This patch merges those changes into LTP tree. The patch majorly contains : + - All features, cleanups and fixes done by IBM realtime team over last two + month or so. + - Change in copyrights( year, symbil and limiting columns to 80 chars) + - Other few cleanups to ltp-realtime tests. 
+ Signed-off-by : Sudhanshu Singh < sudh@...> + + Modified File(s): + ltp/testcases/realtime/GNUmakefile.am + ltp/testcases/realtime/run.sh + ltp/testcases/realtime/func/async_handler/async_handler.c + ltp/testcases/realtime/func/async_handler/async_handler_jk.c + ltp/testcases/realtime/func/async_handler/async_handler_tsc.c + ltp/testcases/realtime/func/gtod_latency/gtod_infinite.c + ltp/testcases/realtime/func/gtod_latency/gtod_latency.c + ltp/testcases/realtime/func/hrtimer-prio/hrtimer-prio.c + ltp/testcases/realtime/func/matrix_mult/matrix_mult.c + ltp/testcases/realtime/func/measurement/preempt_timing.c + ltp/testcases/realtime/func/measurement/rdtsc-latency.c + ltp/testcases/realtime/func/periodic_cpu_load/periodic_cpu_load.c + ltp/testcases/realtime/func/periodic_cpu_load/periodic_cpu_load_single.c + ltp/testcases/realtime/func/pi-tests/parse-testpi1.py + ltp/testcases/realtime/func/pi-tests/parse-testpi2.py + ltp/testcases/realtime/func/pi-tests/run_auto.sh + ltp/testcases/realtime/func/pi-tests/sbrk_mutex.c + ltp/testcases/realtime/func/pi-tests/test-skeleton.c + ltp/testcases/realtime/func/pi-tests/testpi-0.c + ltp/testcases/realtime/func/pi-tests/testpi-1.c + ltp/testcases/realtime/func/pi-tests/testpi-2.c + ltp/testcases/realtime/func/pi-tests/testpi-4.c + ltp/testcases/realtime/func/pi-tests/testpi-5.c + ltp/testcases/realtime/func/pi-tests/testpi-6.c + ltp/testcases/realtime/func/pi-tests/testpi-7.c + ltp/testcases/realtime/func/pi_perf/pi_perf.c + ltp/testcases/realtime/func/prio-preempt/prio-preempt.c + ltp/testcases/realtime/func/prio-wake/prio-wake.c + ltp/testcases/realtime/func/pthread_kill_latency/pthread_kill_latency.c + ltp/testcases/realtime/func/sched_football/parse-football.py + ltp/testcases/realtime/func/sched_football/sched_football.c + ltp/testcases/realtime/func/sched_jitter/sched_jitter.c + ltp/testcases/realtime/func/sched_latency/sched_latency.c + ltp/testcases/realtime/func/thread_clock/tc-2.c + ltp/testcases/realtime/include/libjvmsim.h + ltp/testcases/realtime/include/librttest.h + ltp/testcases/realtime/include/libstats.h + ltp/testcases/realtime/include/list.h + ltp/testcases/realtime/lib/libjvmsim.c + ltp/testcases/realtime/lib/librttest.c + ltp/testcases/realtime/lib/libstats.c + ltp/testcases/realtime/perf/latency/pthread_cond_latency.c + ltp/testcases/realtime/perf/latency/pthread_cond_many.c + ltp/testcases/realtime/scripts/__init__.py + ltp/testcases/realtime/scripts/setenv.sh + ltp/testcases/realtime/stress/pi-tests/lookup_pi_state.c + ltp/testcases/realtime/stress/pi-tests/testpi-3.c + ltp/testscripts/test_realtime.sh + + 33) Log Message: waitpid07.c uses a flag to detect whether something went wrong during the test. The issue is that this flag is not initialized, and I get random failure reports, by, Louis Rilling <Louis.Rilling@...> + Modified File(s): + ltp/testcases/kernel/syscalls/waitpid/waitpid07.c + + 34) Log Message: + waitpid tests: Fix failure detection flag initialization. + On a similar pattern as waitpid06 and waitpid07, waitpid08-13 use a failure detection flag (called 'fail' instead of 'flag'). However except in waitpid09, this flag may be used uninitialized, which causes the test to randomly report failure. This patch ensures that the flag is reset at the beginning of each loop. 
+ Signed-off-by: Louis Rilling <Louis.Rilling@...> + + Modified File(s): + ltp/testcases/kernel/syscalls/waitpid/waitpid08.c + ltp/testcases/kernel/syscalls/waitpid/waitpid10.c + ltp/testcases/kernel/syscalls/waitpid/waitpid11.c + ltp/testcases/kernel/syscalls/waitpid/waitpid12.c + ltp/testcases/kernel/syscalls/waitpid/waitpid13.c + + 35) Log Message: + waitpid03/04: Fix condition numbers displayed when reporting errors. + The condition numbers displayed while reporting errors in waitpid03 and waitpid04 are used initialized and are not consistently updated, which may lead to useless reports. + Signed-off-by: Louis Rilling <Louis.Rilling@...> + + Modified File(s): + ltp/testcases/kernel/syscalls/waitpid/waitpid03.c + ltp/testcases/kernel/syscalls/waitpid/waitpid04.c + + 36) Log Message: + waitpid02-05: remove unused defines related to failure handling. Signed-off-by: Louis Rilling <Louis.Rilling@...> + Modified File(s): + ltp/testcases/kernel/syscalls/waitpid/waitpid02.c + ltp/testcases/kernel/syscalls/waitpid/waitpid03.c + ltp/testcases/kernel/syscalls/waitpid/waitpid04.c + ltp/testcases/kernel/syscalls/waitpid/waitpid05.c + + 37) Log Message: Adding option to build TIMER test cases as well, by, Subrata Modak <subrata@...> + Modified File(s): + ltp/testcases/kernel/Makefile + + 38) Log Message: Removing these files as they get automatically generated during build, by, Max Stirling <vicky.irobot@...> + Deleted File(s): + ltp/testcases/ballista/ballista/MakefileHost + ltp/testcases/ballista/ballista/MakefileTarget + + 39) Log Message: Many tests cannot be executed concurrently. I have a few patches to make it possible to execute some tests in parallel/concurrency, to check SMP safeness, by, Renaud Lottiaux <Renaud.Lottiaux@...> + Modified File(s): + ltp/testcases/kernel/syscalls/sendfile/sendfile02.c + ltp/testcases/kernel/syscalls/sendfile/sendfile04.c + + 40) Log Message: Fix NFS issues in tst_rmdir (directory non empty) due to an unmapped file, by, Renaud Lottiaux <Renaud.Lottiaux@...> + Modified File(s): + ltp/testcases/kernel/syscalls/remap_file_pages/remap_file_pages01.c + ltp/testcases/kernel/syscalls/remap_file_pages/remap_file_pages02.c + + 41) Log Message: Fix a concurrency issue due to the (false) sharing of file /dev/shm/cache. This patch just create a different file for each process and unlink the file before exiting, by, Renaud Lottiaux <Renaud.Lottiaux@...> + Modified File(s): + ltp/testcases/kernel/syscalls/remap_file_pages/remap_file_pages01.c + + 42) Log Message: The variable dfOpts (in #324) is seting to NULL even if the df is not a symbolic link.(It has to be "-P" itself to get the output portable).And so the "df $dfOpts $dir" (line #326) command is not giving a result expected by the succeeding statements. I have tested this patch both in lvm and fdisk partitions and found its working fine, by, Sudeesh John <sudeeshjohn@...> + Modified File(s): + ltp/testcases/kernel/fs/doio/rwtest.sh + + 43) Log Message: + CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID are also supported besides CLOCK_REALTIME and CLOCK_MONOTONIC. That's the cause of the failure of clock_gettime03, timer_create02 and timer_create04. Another cause is that struct sigevent evp is assigned with invalid values when option is 1. That's the cause of the failure of timer_create02 and timer_create03. CLOCK_REALTIME_HR and CLOCK_MONOTONIC_HR have been removed in the later kernel versions, hence the failures in the test. I am still trying to find out if any kernel versions used to support these. 
CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID are supported from 2.6.12 kernel version onwards and the test case needs to be modified for this change. Also in timer_create02.c, setup_test() case1 needs to be modified so as to make evp NULL instead of its members. Since the testcase behaves differently for different kernel versions, a version check needs to be added for making it pass across versions. I'm removing the HR clocks from the tests along with other changes, by, Anoop V. Chakkalakkal <anoop.vijayan@...> + + Modified File(s): + ltp/testcases/kernel/timers/clock_gettime/clock_gettime03.c + ltp/testcases/kernel/timers/clock_settime/clock_settime03.c + ltp/testcases/kernel/timers/include/common_timers.h + ltp/testcases/kernel/timers/timer_create/timer_create02.c + ltp/testcases/kernel/timers/timer_create/timer_create03.c + ltp/testcases/kernel/timers/timer_create/timer_create04.c + + 44) Mog Message: + This patch -try- to cleanup the mem03 test and fix a concurrency problem. Mainly, the test creates and removes files in the current directory. Since the tst_tmpdir() function was not used, several instances of the test was creating and removing files from each others !, by, Renaud Lottiaux <Renaud.Lottiaux@...> + + Modified File(s): + ltp/testcases/kernel/syscalls/memmap/mem03.c + + 45) Log Message: The problem is that the kernel file is vmlinux* instead of vmlinuz* on SLES, but file_test.sh always try to grep vmlinuz* under /boot/. Here is the patch and the test result with the patch, by, shenlinf <shenlinf@...> + Modified File(s): + ltp/testcases/commands/ade/file/file_test.sh + + 46) Log Message: Here is a patch fixing concurrency issue in mremap04. Just use a shm key returned from the getipckey() function instead of a fixed hardcoded value, by, Renaud Lottiaux <Renaud.Lottiaux@...> + Modified File(s): + ltp/testcases/kernel/syscalls/mremap/Makefile + ltp/testcases/kernel/syscalls/mremap/mremap04.c + + 47) Log Message: + LTP-kill05-bad-check-fix.patch: + - Fix return value check from shmat. In case of error, this wrong check was leading to a seg-fault. + LTP-kill05-shmid_delete-fix.patch: + - Fix deletion of the memory segment. Due to the change of process UID during the test, the segment was created by ROOT and deleted (or tried to be deleted) by user "bin". This is of course not possible. And it is also impossible to switch back uid to ROOT. Solution adopted : doing a fork in which the test is performed. The initial process staying with ROOT uid. + LTP-kill05-concurrency-fix.patch + - Paranoia concurrency fix. I have not encounter any real issue, but it is probably safer to be sure each process is using a different segment. -> use tst_tmpdir, to make getipckey generating a different key for each running process, by, Renaud Lottiaux <Renaud.Lottiaux@...> + + Modified File(s): + ltp/testcases/kernel/syscalls/kill/kill05.c + + 48) Log Message: + 1) The pi-tests don't use the librttest infrastructure and simply duplicate code. This patch ensures that those tests use librttest. + 2) The thread-clock test doesn't use the librttest infrastructure. This patch ensures that it does. 
+ 3) Adds missing headers to the following files, + Signed-Off-By: Chirag <chirag@...>, + Acked-By: Dinakar Guniguntala <dino@...>, + Acked-By: Sebastien Dugue <sebastien.dugue@...> + + Modified File(s): + ltp/testcases/realtime/func/pi-tests/parse-testpi1.py + ltp/testcases/realtime/func/pi-tests/parse-testpi2.py + LTP-20080131
https://sourceforge.net/p/ltp/mailman/message/18709493/
CC-MAIN-2017-17
en
refinedweb
Preventing passive federation for Web Api under a MVC4 website I have an ASP.Net MVC4 website that is running passive federation to Azure ACS. This works great for standard http requests from a browser. I have now added some web api services to the same solution but hit issues with acceptance testing. I have secured my api action using the following. [Authorize(Roles = Role.Administrator)] public class AdminReportController : ApiController { } This achieves the objective but the acceptance test that verifies the security of the controller fails. It fails because it gets a 302 response on the unauthenticated call rather than the expected 401. If the http client running the REST request follows the 302, it will ultimately end up with a 200 response from the ACS authentication form. The overall outcome is achieved because the request denied the anonymous user, but the status code returned to the client and the body content does not reflect this. The only way to get around this seems to be to hijack the WSFederationAuthenticationModule to tell it not to run passive redirection if the request is for the web api. public class WebApiSafeFederationAuthenticationModule : WSFederationAuthenticationModule { protected override void OnAuthorizationFailed(AuthorizationFailedEventArgs e) { Guard.That(() => e).IsNotNull(); base.OnAuthorizationFailed(e); if (e.RedirectToIdentityProvider == false) { return; } string requestedUrl = HttpContext.Current.Request.Url.ToString(); if (requestedUrl.IndexOf("/api/", StringComparison.OrdinalIgnoreCase) > -1) { // We don't want web api requests to redirect to the STS e.RedirectToIdentityProvider = false; return; } } } This now allows web api to return its default pipeline for unauthorised requests. I put a shout out on Twitter regarding this issue to which Brock Allen quickly confirmed my solution.
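For reference, the kind of acceptance check described above — expecting a 401 rather than a 302 for an unauthenticated API call — might look roughly like the sketch below. The endpoint URL and the choice of xUnit are assumptions for illustration; this is not code from the original post.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class AdminReportSecurityTests
{
    [Fact]
    public async Task UnauthenticatedRequestReturns401()
    {
        // Disable auto-redirect so a 302 from passive federation is observed, not followed
        using (var handler = new HttpClientHandler { AllowAutoRedirect = false })
        using (var client = new HttpClient(handler))
        {
            // Hypothetical endpoint for the AdminReportController shown above
            var response = await client.GetAsync("http://localhost/api/AdminReport");

            Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
        }
    }
}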
http://www.neovolve.com/2013/05/02/preventing-passive-federation-for-web-api-under-a-mvc4-website/
CC-MAIN-2017-17
en
refinedweb
How can I accept a custom type query parameter?

public String detail(@QueryParam("request") final MYRequest request) {

jersey.server.model.ModelValidationException: Validation of the application resource model has failed during application initialization.

Take a look at the @QueryParam documentation, in regards to the acceptable types to inject. (The same applies to all the other @XxxParam annotations also.) The parameter type must:

1. Be a primitive type.
2. Have a constructor that accepts a single String argument.
3. Have a static method named valueOf or fromString that accepts a single String argument (see, for example, Integer.valueOf(String)).
4. Have a registered ParamConverterProvider that returns a ParamConverter able to convert a String to the type.
5. Be a List<T>, Set<T> or SortedSet<T>, where T satisfies 2, 3 or 4 above. The resulting collection is read-only.

The reason for these requirements is that the value comes in as a string. The runtime needs to know how to convert that string to the type to inject. The reason for the exception is that there is an initial resource model validation on startup. This validation checks to make sure all your injection points are valid. It sees that the injected type MyRequest doesn't meet any of the above requirements, and throws an exception.

Basically, with points 2 and 3, you will need to parse the string yourself, for instance:

public class MyRequest {
    public static MyRequest fromString(String param) {
        MyRequest request = new MyRequest();
        // 1. Parse the string
        // 2. Populate request from the parsed values
        return request;
    }
}

You can see a good example of using a ParamConverter here
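The linked ParamConverter example is not reproduced here. As an illustration only, a registered provider under JAX-RS 2.0 might be sketched as follows — the class name and parsing details are assumptions, and MyRequest is the hypothetical type from the question:

import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.ws.rs.ext.ParamConverter;
import javax.ws.rs.ext.ParamConverterProvider;
import javax.ws.rs.ext.Provider;

@Provider
public class MyRequestParamConverterProvider implements ParamConverterProvider {

    @Override
    @SuppressWarnings("unchecked")
    public <T> ParamConverter<T> getConverter(Class<T> rawType, Type genericType, Annotation[] annotations) {
        if (!MyRequest.class.equals(rawType)) {
            return null; // let other providers handle other types
        }
        return (ParamConverter<T>) new ParamConverter<MyRequest>() {
            @Override
            public MyRequest fromString(String value) {
                MyRequest request = new MyRequest();
                // parse 'value' and populate the request here
                return request;
            }

            @Override
            public String toString(MyRequest value) {
                return String.valueOf(value);
            }
        };
    }
}

With such a provider registered, the resource method from the question can keep its @QueryParam("request") signature unchanged.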
https://codedump.io/share/ufwpZ3wzSK0l/1/passing-custom-type-query-parameter
CC-MAIN-2017-17
en
refinedweb
Question: What is a testamentary trust?
http://www.solutioninn.com/what-is-a-testamentary-trust
CC-MAIN-2017-17
en
refinedweb
al_utf8_encode man page

al_utf8_encode — Allegro 5 API

Synopsis

#include <allegro5/allegro.h>

size_t al_utf8_encode(char s[], int32_t c)

Description

Encode the specified code point to UTF-8 into the buffer s. The buffer must have enough space to hold the encoding, which takes between 1 and 4 bytes. This routine will refuse to encode code points above 0x10FFFF.

Returns the number of bytes written, which is the same as that returned by al_utf8_width(3).

See Also

al_utf16_encode(3)

Referenced By

al_ustr_newf(3), al_utf16_encode(3), al_utf8_width(3).

Allegro reference manual
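The page above has no Examples section; the following is a minimal usage sketch, not part of the original manual, and it assumes the UTF-8 helper can be called without any further Allegro initialization:

#include <allegro5/allegro.h>
#include <stdio.h>

int main(void)
{
    char buf[4];                              /* 1 to 4 bytes, per the description above */
    size_t n = al_utf8_encode(buf, 0x20AC);   /* U+20AC EURO SIGN */
    printf("encoded in %zu byte(s)\n", n);    /* expected: 3 */
    return 0;
}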
https://www.mankier.com/3/al_utf8_encode
CC-MAIN-2017-17
en
refinedweb
Vroom! - A Simple DarkRoom/WriteRoom Remake in Tkinter Fredrik Lundh | August 2007 Room Editors # Recently, I’ve been using Jeffrey Fuller’s Dark Room editor for a lot of my writing. Dark Room is a Windows remake of the OS X application WriteRoom, and is designed, just as the original, to let you focus on the text you’re working on, instead of getting distracted by a plethora of fancy features. To quote Jeffrey, “Dark Room is a full screen, distraction free, writing environment. Unlike standard word processors that focus on features, Dark Room is just about you and your text.” Dark Room provides plain text editing in a fixed-pitch font, using green text on a dark background, basic editing commands, and not much else. The editor is designed to be used in full-screen mode. While Dark Room has suited my needs quite well, the itch to create a clone of my own just had to be scratched. My BMW-obsessed 4-year old came up with a suitable name, Vroom!, and, building on Tkinter’s Text widget, I got the first version up and running in a few short programming sessions spread over two days. A few programming sessions and a short writing session, that is, because what’s more suitable for the “maiden voyage” of a new editor than an article describing the implementation? Using the Tkinter Text Widget # The Tkinter UI framework is nearly as minimalistic as the Room editors, but it does come with a couple of extraordinarily powerful widgets. The Canvas widget provides structured graphics, and the Text widget provides a combined rich-text editing and presentation component, which makes it the perfect match for this project. Using the Text widget as an editor is trivial; all you have to do is to create the widget, display it, and make sure it has keyboard focus: from Tkinter import * editor = Text() editor.pack(fill=Y, expand=1) editor.config(font="Courier 12") editor.focus_set() mainloop() The above creates a bare-bones editor widget, with a fixed-pitch Courier font, and basic emacs-style keyboard bindings. The widget is also set up to resize itself to match the Tkinter root window. If you run the script, you can start typing in text right away. It doesn’t look much like the Dark Room editor, though. To get closer, you need to apply some basic styling. Styling the Widget # The following slightly enhanced script creates a root window widget, and then places a styled Text widget inside it. The editor now uses green text on black background, a white cursor (to make sure it’s visible on the black background), and a maximum width of 64 characters, even if the root window is made wider than that. Finally, support for undo/redo is enabled, which lets you use Control-Z to undo changes to the text, and Control-Y to reapply them. from Tkinter import * root = Tk() root.title("Vroom!") root.config(background="black") root.wm_state("zoomed") editor = Text(root) editor.pack(fill=Y, expand=1) editor.config( borderwidth=0, font="{Lucida Sans Typewriter} 12", foreground="green", background="black", insertbackground="white", # cursor selectforeground="green", # selection selectbackground="#008000", wrap=WORD, # use word wrapping width=64, undo=True, # Tk 8.4 ) editor.focus_set() mainloop() (Note that the undo/redo functionality requires Tk 8.4.) The standard Text widget and some nice styling is pretty much all you need to get started. However, the only way to get text into and out from this prototype is to copy the text via the clipboard, from or to some other editor (such as notepad or emacs). 
Loading text into a Text widget is pretty straightforward; the following snippet shows how to delete the current contents (everything between line 1 column 0 and the END of the buffer), insert the new contents, and finally move the insertion cursor back to the beginning of the buffer: text = open(filename).read() editor.delete(1.0, END) editor.insert(END, text) editor.mark_set(INSERT, 1.0) And here’s the corresponding code to save the contents to a file. The Text widget has a habit of appending newlines to the end of the edit buffer, something that this code addresses by simply trimming away all trailing whitespace, and adding a single newline to the file on the way out. f = open(filename, "w") text = editor.get(1.0, END) try: # normalize trailing whitespace f.write(text.rstrip()) f.write("\n") finally: f.close() Towards a Production-Quality Implementation # Now, given the styled widget and the snippets that shows how to load and save text, let’s start building a slightly more organized implementation. The first step is to create a custom widget class for the editor, to give us some place to add editor-related methods and attributes. Since the editor is a specialized Text widget, you can simply inherit from the Text widget class, and do the necessary setup in the initialization method. from Tkinter import * class RoomEditor(Text): def __init__(self, master, **options): Text.__init__(self, master, **options) self.config( borderwidth=0, font="{Lucida Sans Typewriter} 14", foreground="green", background="black", insertbackground="white", # cursor selectforeground="green", # selection selectbackground="#008000", wrap=WORD, # use word wrapping undo=True, width=64, ) self.filename = None # current document The editor class shown here inherits all methods from the Text class, and also adds a filename attribute to keep track of the currently loaded file. It’s a good idea to display this name in the editor window’s title bar, and you can use a property to make sure that this is done automatically. Before you add the property itself, you need to add object to the list of parent classes; without that, Python’s property mechanism won’t work properly. You also need to put object after the Tkinter widget class, or Tkinter won’t work properly. With this in place, you can just add a getter and a setter method, and use property to create the “virtual” attribute: import os TITLE = "Vroom!" class RoomEditor(Text, object): ... def _getfilename(self): return self._filename def _setfilename(self, filename): self._filename = filename title = os.path.basename(filename or "(new document)") title = title + " - " + TITLE self.winfo_toplevel().title(title) filename = property(_getfilename, _setfilename) With this in place, the actual filename is stored in the _filename attribute, and changes to filename will also be reflected in the title bar (note that the initialization function sets filename to None, so you don’t need to explicitly initialize the internal attribute; that’s done inside _setfilename when the widget is first created). There’s one more thing that can be nicely handled with a property, and that’s the widget’s modification flag. This is automatically set whenever the editor buffer is modified, and can also be explicitly set or reset by the application. 
Unfortunately, the method used for this, edit_modified, appears to be broken on Python 2.5 (at least it doesn’t work properly in my installation), so you need to provide a work-around: def edit_modified(self, value=None): # Python 2.5's implementation is broken return self.tk.call(self, "edit", "modified", value) The tk.call method ignores None parameters, so a call to edit_modified without any argument will result in the Tk command “.widget edit modified”, which queries the current flag value, and calls with a boolean argument will result in “.widget edit modified value“, which modifies the flag. For convenience, you can wrap this behaviour in a property, and you can in fact use the same method both as the getter and the setter; in the former case, it’s called without any argument, so Tkinter will fetch the current flag value, and in the latter case, it’s called with the assigned value as the first argument, and will thus modify the flag. modified = property(edit_modified, edit_modified) So, with this in place, it’s time to add code to load and save the editor contents. The code snippets shown earlier can be used pretty much as they are, except that you need to update the document filename, the editor title bar, and the modification flag as well. Given the properties just added to the class, the latter is trivial. Just assign to the properties, and the corresponding setter code takes care of the rest. def load(self, filename): text = open(filename).read() self.delete(1.0, END) self.insert(END, text) self.mark_set(INSERT, 1.0) self.modified = False self.filename = filename def save(self, filename=None): if filename is None: filename = self.filename f = open(filename, "w") s = self.get(1.0, END) try: f.write(s.rstrip()) f.write("\n") finally: f.close() self.modified = False self.filename = filename What’s left is some straightforward script code to set everything up: root = Tk() root.config(background="black") root.wm_state("zoomed") editor = RoomEditor(root) editor.pack(fill=Y, expand=1, pady=10) editor.focus_set() try: editor.load(sys.argv[1]) except (IndexError, IOError): pass mainloop() Additional Keyboard Bindings # At this point, the editor looks and feels pretty good, and you can pass in a document name on the command line and have it loaded into the editor buffer in one step. There’s still no way to save the document, though, and it would definitely be nice to have the usual set of “file menu” operations available, such as File/Open, File/Save, and File/Save As…. Adding this is of course just a small matter of programming. I usually implement this kind of user-interface code in two separate layers; one for the actual operations, and one for the user-interface bindings. This makes it easier to test the implementation, and it also gives a lot more flexibility when implementing the actual bindings. Let’s start with code for File/Open: FILETYPES = [ ("Text files", "*.txt"), ("All files", "*") ] class Cancel(Exception): pass def open_as(): from tkFileDialog import askopenfilename f = askopenfilename(parent=root, filetypes=FILETYPES) if not f: raise Cancel try: editor.load(f) except IOError: from tkMessageBox import showwarning showwarning("Open", "Cannot open the file.") raise Cancel Note the use of the global editor variable. An alternative would be to pass in the editor instance, but we’ll only be using a single RoomEditor instance in this version of the editor, so using a global variable makes the code a little bit simpler. 
Also note the use of a custom exception to indicate that the operation was cancelled, and the use of local import statements to avoid loading user-interface components before they’re actually needed. (Python’s module system will of course still cache already loaded components for us, so subsequent imports are fast.) The code for saving the document to a file is similar, but consists of three different functions; save_as() asks for a file name and saves the file under that name (File/Save As…), save() uses the current name if known (via the filename property), and falls back on save_as() for new documents (File/Save), and save_if_modified() checks if the document has been modified before calling save(). This last function should be used by operations that “destroy” the editor contents, such as loading a new file, or clearing the buffer. def save_as(): from tkFileDialog import asksaveasfilename f = asksaveasfilename(parent=root, defaultextension=".txt") if not f: raise Cancel try: editor.save(f) except IOError: from tkMessageBox import showwarning showwarning("Save As", "Cannot save the file.") raise Cancel def save(): if editor.filename: try: editor.save(editor.filename) except IOError: from tkMessageBox import showwarning showwarning("Save", "Cannot save the file.") raise Cancel else: save_as() def save_if_modified(): if not editor.modified: return if askyesnocancel(TITLE, "Document modified. Save changes?"): save() (It’s worth mentioning that this part took the longest to get “right”; my first implementation used a single save() function with keyword options to control the behaviour, but the logic was somewhat convoluted, the error handling was rather messy, and it just didn’t feel right. I finally replaced it with the much simpler, more verbose, but “obviously correct” set of functions shown here.) The tkMessageBox module contains helpers for several commonly-used message styles, but a “yes/no/cancel”-style box is missing (at least as of Python 2.5). You can use the Message support class to implement our own helper: def askyesnocancel(title=None, message=None, **options): import tkMessageBox s = tkMessageBox.Message( title=title, message=message, icon=tkMessageBox.QUESTION, type=tkMessageBox.YESNOCANCEL, **options).show() if isinstance(s, bool): return s if s == "cancel": raise Cancel return s == "yes" This is similar to the corresponding code used by the tkMessageBox helpers, but uses a boolean or an exception to report the outcome, instead of string values. With the core operations in place, you need to make them available from the user interface. For this version of the editor, let’s stick to keyboard shortcuts for all operations. For each shortcut, you need a dispatcher function, and one or more calls to bind to associate the function with a widget-level event. 
def file_new(event=None): try: save_if_modified() editor.clear() except Cancel: pass return "break" # don't propagate events def file_open(event=None): try: save_if_modified() open_as() except Cancel: pass return "break" def file_save(event=None): try: save() except Cancel: pass return "break" def file_save_as(event=None): try: save_as() except Cancel: pass return "break" def file_quit(event=None): try: save_if_modified() except Cancel: return root.quit() editor.bind("<Control-n>", file_new) editor.bind("<Control-o>", file_open) editor.bind("<Control-s>", file_save) editor.bind("<Control-Shift-S>", file_save_as) editor.bind("<Control-q>", file_quit) root.protocol("WM_DELETE_WINDOW", file_quit) # window close button mainloop() Note the use of the “break” return value, to keep Tkinter from passing the event on to other event handlers. The reason for this is that Tkinter’s Text widget already has behaviour defined for Control-O (insert new line) and Control-N (move to next line); by returning “break” from the local handler, the standard bindings won’t be allowed to interfere. Also note the call to root.protocol to register a DELETE_WINDOW handler for the root window. This is done to make sure that an attempt to close the window via the window manager won’t shut down the application unexpectedly. This is also the reason that all event handlers have a default value for the event structure; it makes them easier to use in different contexts. So now you have a core editor class, support code for basic file-menu operations, and a bunch of keyboard bindings to access them. What are you waiting for? Just fire up the editor and start typing. Start at the top, write you way through any issues, press Control-S to save the result, and you’ll find yourself with a nice little article in no time at all. Like this one, which was written with the code I’ve included above. Summary # In this article, we built a simple Write Room-style editing application, using Tkinter’s Text widget, and a few kilobytes of mostly straight-forward Python code. The current version is a bit too feature-free even for an intentionally feature-limited editor, but it’s definitely useful as is, and it’s of course easy to add new features with a reasonable effort. It’s Python, after all. And such enhancements are of course a suitable topic for a future article. Stay tuned.
http://effbot.org/zone/vroom.htm
CC-MAIN-2017-17
en
refinedweb
Write a C++ program with proper style to estimate the springtime count of deer in a park for 15 consecutive years. The population of any given year depends on the previous year's population according to the following calculation: If the lowest winter population was 0oF or colder, the deer population drops by 12% If the lowest winter population was higher than 0oF, the deer population rises by 15% The program should accept a starting year from the user, along with the initial deer population. For each subsequent year in the simulation, the program should prompt the user to enter the lowest winter temperature. The program should print the calendar year and the population during the spring. Now, I know you will not solve the problem, and I understand, at this moment, this is what I have so far. #include <iostream.h> #include <stdlib.h> int main() { int year = 1984; int startingDeerPop; int deerPop; int temp; cout >> "Please enter the starting year." >> endl; cin << year; cout >> "Please enter the starting population for the deer." >> endl; cin << startingDeerPop; cout >> "What was the lowest temperature for the" << "year?" >> endl; cin >> temp; if (temp > 0) { deerPop = startingDeerPop + (0.15 * startingDeerPop); } else if (temp < 0) { deerPop = startingDeerPop - (0.12 * startingDeerPop); } cout >> "In 1984, the deer population is:" << deerPop << "." << endl; system("PAUSE"); return 0; } But it won't run on my compiler. I would like to know what i am doing wrong. Any help would be appreciated. Thank you. This post has been edited by Dryerdoor: 21 March 2007 - 11:20 PM
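For anyone reading along, here is a sketch of the same snippet with the pieces that keep it from compiling fixed: the insertion/extraction operators are reversed in the original (cout uses <<, cin uses >>) and <iostream.h> should be the standard <iostream>. The temperature test is also adjusted to treat 0°F as "cold" per the assignment. The 15-year loop is left out, as in the original post, and this is only one possible correction, not the poster's final program.

#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    int year = 1984;
    int startingDeerPop;
    int deerPop = 0;
    int temp;

    cout << "Please enter the starting year." << endl;
    cin >> year;
    cout << "Please enter the starting population for the deer." << endl;
    cin >> startingDeerPop;
    cout << "What was the lowest temperature for the year?" << endl;
    cin >> temp;

    if (temp > 0)
    {
        // warmer than 0 F: population rises by 15%
        deerPop = startingDeerPop + (0.15 * startingDeerPop);
    }
    else
    {
        // 0 F or colder: population drops by 12%
        deerPop = startingDeerPop - (0.12 * startingDeerPop);
    }

    cout << "In " << year << ", the deer population is: " << deerPop << "." << endl;

    system("PAUSE");
    return 0;
}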
http://www.dreamincode.net/forums/topic/25382-hi-im-new-to-c/
CC-MAIN-2017-17
en
refinedweb
Current clang version is 4.0.0-1 Search Criteria Package Details: include-what-you-use 1:0.7-1 Dependencies (8) - clang>=3.9 (clang-assert, clang-pypy-stm, clang-svn) - clang<=3.10 (clang-assert, clang-pypy-stm, clang-svn) - clang>=3.9 (clang-assert, clang-pypy-stm, clang-svn) (make) - clang<=3.10 (clang-assert, clang-pypy-stm, clang-svn) (make) - cmake (cmake-git) (make) - llvm>=3.9 (llvm-assert, llvm-pypy-stm, llvm-svn) (make) - llvm<=3.10 (llvm-assert, llvm-pypy-stm, llvm-svn) (make) - python2 (placeholder, pypy19, python26, stackless-python2) (optional) Required by (0) Sources (1) Latest Comments johnchen902 commented on 2017-04-21 03:39 tegularius commented on 2016-09-06 21:21 makedepends should include cmake mmlb commented on 2015-12-28 05:05 @Svenstaro that was by design since the previous version of IWYU is incompatible with 3.7. I just updated to latest released version that works with 3.7. Svenstaro commented on 2015-09-29 05:42 This conflicts with llvm 3.7.0 in Arch because 3.7 is < 3.7.0. mmlb commented on 2015-07-23 13:47 @nerzhul, fixed thanks! nerzhul commented on 2015-07-21 10:08 Please add llvm as a dependancy to fix erikw error mmlb commented on 2015-06-01 15:31 @erikw I've updated the package mmlb commented on 2015-05-25 01:14 erikw are you using clang 3.6? That's not out yet, see In the meantime, maybe I should change the PKGBUILD to depend on the exact version of clang. erikw commented on 2015-05-24 09:26 Compilation errors: [ 7%] Building CXX object CMakeFiles/include-what-you-use.dir/iwyu.cc.o In file included from /tmp/yaourt-tmp-erikw/aur-include-what-you-use/src/include-what-you-use/iwyu.cc:103: In file included from /tmp/yaourt-tmp-erikw/aur-include-what-you-use/src/include-what-you-use/iwyu_ast_util.h:19: /tmp/yaourt-tmp-erikw/aur-include-what-you-use/src/include-what-you-use/port.h:16:10: fatal error: 'llvm/Support/Compiler.h' file not found #include "llvm/Support/Compiler.h" mmlb commented on 2014-12-30 16:27 @xantares, did you take a look at the second script offered? The one that uses the compile_commands.json output? I could still throw in your adaptation, but if the python script works for you that would be a better option than continually hacking cmake "compilers".
https://aur.archlinux.org/packages/include-what-you-use/
CC-MAIN-2017-17
en
refinedweb
Hello, I'm trying to create an instance of a struct that has no explicit constructor in C#, but in python I always get a TypeError. My test struct in C#: namespace Foo { public struct MyStruct { public int x; } } In python: mystruct = Foo.MyStruct() This gives me "TypeError: no constructor matches given arguments". I get slightly further calling __new__ explicitly: mystruct = object.__new__(Foo.MyStruct) mystruct.__init__() However, then I get an exception from when mystruct goes out of scope: Unhandled Exception: System.ArgumentException: GCHandle value cannot be zero at System.Runtime.InteropServices.GCHandle.op_Explicit (IntPtr value) [0x00000] at Python.Runtime.ManagedType.GetManagedObject (IntPtr ob) [0x00000] at Python.Runtime.ClassBase.tp_dealloc (IntPtr ob) [0x00000] at (wrapper native-to-managed) Python.Runtime.ClassBase:tp_dealloc (intptr) at (wrapper managed-to-native) Python.Runtime.Runtime:Py_Main (int,string[]) at Python.Runtime.PythonConsole.Main (System.String[] args) [0x00000] If I add an explicit constructor to MyStruct, everything works fine, but this is a third-party library that I'd like to use as-is if possible. Is there any way to do this? Thanks, Jeff
https://mail.python.org/pipermail/pythondotnet/2008-October/000852.html
CC-MAIN-2017-17
en
refinedweb
ADF 2.1.0 has been released. This is a minor release with some interesting new features are worth to detail. One of them is the new Metadata component, which is the subject of this article. This post will be a practical guide for the ADF Metadata component, how to install it, use it and configure it. The purpose of the component is to display the metadata belonging to given node. Until now, the component was capable of displaying and making the basic properties editable, but with the latest enhancements, all of the system wide and custom aspects related to a particular node can be displayed and edited. Installation The component is part of the content-services package, so for using it, we have to import either the ContentModule or the component's module (ContentMetadataModule) to our application. In most of the cases we already have the ContentModule imported, so we show an example following this scenario. import { CoreModule } from '@alfresco/adf-core'; import { ContentModule } from '@alfresco/adf-content-services'; @NgModule({ imports: [ ... CoreModule, ContentModule, ... ], declarations: [ ... ], providers: [ ... ], bootstrap: [AppComponent] }) export class AppModule {} Usage Using the component is quite straightforward. There are 3 input parameters: - node: MinimalNodeEntryEntity The only mandatory parameter, a prefetched node containing the properties data. - displayEmpty: boolean Whether the component displays empty properties or hide them when the component is in readonly mode. By default, the content-metadata component doesn't display empty values (false). - preset: string Presets can be defined in the application configuration. Within a preset, a list of aspects and properties can be defined to restrict the display properties to only to the listed ones. For more information about presets, see the Configuration section below. The default preset is called "default". (How creative, huh?) Basic usage <adf-content-metadata-card [node]="node"></adf-content-metadata-card> Extended usage <adf-content-metadata-card [displayEmpty]="true" [preset]="my-custom-preset" [node]="node"> </adf-content-metadata-card> Configuration The configuration happens through the application configuration file. By default, if there is no configuration for the component, the component will show every aspects and properties belonging to the node. Usually, this is not the expected behavior, since all the metadata will be shown this way, which would be hidden otherwise to decrease the unnecessary noise. But for debugging purposes and to see the complete set of available metadata, this is the easiest way to list them. Basic configuration Not having a configuration for the component is equivalent to have the following configuration: "content-metadata": { "presets": { "default": "*" } } As it can be seen in the example, presets can be defined for the content-metadata component. Each preset has a name, in the configuration above we have only one preset, which is called default. This name is the input parameter for the adf-content-metadata-card component. A preset can be either the wildcard asterisk string ("*") as above or an object where the keys are the name of aspects. The object's values are either the wildcard asterisk strings ("*") meaning all of the aspect's properties should be shown or string arrays, listing the name of aspect's properties to be shown. 
Extended configuration In the configuration below, we define two presets: "content-metadata": { "presets": { "default": "*", "kitten-images": { "kitten:vet-records": "*", "exif:exif": [ "exif:xResolution", "exif:yResolution"] } } } - The default which could be overridden, but we just leave it as it was originally. - The kitten-images preset which - shows all of the properties from the custom aspect called vet-records from the user defined kitten model - shows two properties (exif:xResolution, exif:yResolution) from the system defined exif:exif aspect For further details about the component and configuration, see the documentation of it. For more info you can also refer to the official component documentation: alfresco-ng2-components/content-metadata.component.md at master · Alfresco/alfresco-ng2-components · GitHub
https://community.alfresco.com/docs/DOC-7301-introduction-to-the-new-content-metadata-component
CC-MAIN-2019-04
en
refinedweb
Now, we will see how to build a basic neural network using TensorFlow, which predicts handwritten digits. We will use the popular MNIST dataset, which has a collection of labeled handwritten images for training. First, we must import TensorFlow and load the dataset from tensorflow.examples.tutorials.mnist:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

Now, we will see what we have in our data:

print("No of images in training set {}".format(mnist.train.images.shape))
print("No of labels in training set {}".format(mnist.train.labels.shape))
print("No of images in test set {}".format(mnist.test.images.shape))
print("No of labels in test set {}".format(mnist.test.labels.shape))
https://www.oreilly.com/library/view/hands-on-reinforcement-learning/9781788836524/c7d5c608-1bfa-4ec9-9ffe-ece177bbc42d.xhtml
CC-MAIN-2019-04
en
refinedweb
Subject: Re: [Boost-gil] Preferred line length From: Mateusz Loskot (mateusz_at_[hidden]) Date: 2018-04-11 20:22:27 On 11 April 2018 at 22:05, Stefan Seefeld <stefan_at_[hidden]> wrote: > On 04/11/18 15:51, Mateusz Loskot wrote: >> >> Can we try to settle the preferred line length in GIL source code? >> >> Current style varies from file to file, even from chunk to chunk >> within a single file. Often, there are unindented lines, >> overdose of whitespaces, etc. > > > I agree this can be annoying, so it might be useful to use some established > conventions. > Both of the issues you name (max line length and amount of indentation) are > related and important to settle. Good. > But, while we should really agree on indentation (this can be easily > configured in nowadays editors), I'm less sure about hard limits for line > length, given that there is a tradeoff to optimize for code readability. > > Do you have suggestions for these parameters ? I copied the Spirit and Geometry as examples of such suggestions. I'd prefer to fit between 80-90. > (For indentation I'd vote for 2 spaces (replace all tabs by spaces !), I'd vote for 4 spaces. No tabs! I find Spirit code layout pretty clean and Geometry adopted similar approach I'd prefer to not to indent namespace block though, as Spirit does. If we also avoid too much of whitespace after/before (,),<,>, then 80-char line should work well. > for max line length I can be easily > convinced to settle on anything between 78 and 90 characters.) We agree here. >> I know this may not be the right moment to attempt style refactoring, >> but I think it would be nice to have at least some style reference >> that will hold in future. > > Right. I think it would be useful to establish a convention, then we can > make sure that any code we touch to fix bugs or add features adheres to > them. No global restyling required. Exactly! Best regards, -- Mateusz Loskot, This archive was generated by hypermail 2.1.7 : 2018-04-12 20:05:06 UTC
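Not part of the thread, but as an illustration: the preferences discussed above (4-space indent, no tabs, no indentation inside namespace blocks, a column limit around 80-90, and no extra whitespace inside parentheses or angle brackets) could be captured in a .clang-format file along these lines. Whether clang-format is acceptable tooling for GIL is an assumption the thread does not address.

# .clang-format (sketch)
BasedOnStyle: LLVM
IndentWidth: 4
UseTab: Never
ColumnLimit: 90
NamespaceIndentation: None
SpacesInParentheses: false
SpacesInAngles: false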
https://lists.boost.org/boost-gil/2018/04/0023.php
CC-MAIN-2019-04
en
refinedweb
A couple of days ago, I wrote about creating the RoomEditorApp GitHub repository for the Revit add-in part of my cloud-based real-time round-trip 2D Revit model editing application on any mobile device and promised to discuss its implementation details anon. Well, the obvious place to start that is by providing an architectural overview, followed by a look at its external application implementation. I'll also mention an issue with unresponsive Idling that I am currently experiencing and hope to resolve, and where to download the current state of things. Before getting to the nitty-gritty, by the way, have you already heard that three out of the top ten mobile apps for architects are developed by Autodesk? RoomEditorApp Architectural Overview The room editor add-in consists of the following modules: - App.cs - CmdAbout.cs - CmdSubscribe.cs - CmdUpdate.cs - CmdUpload.cs - CmdUploadAll.cs - ContiguousCurveSorter.cs - DbModel.cs - DbUpdater.cs - DbUpload.cs - GeoSnoop.cs - JtBoundingBox2dInt.cs - JtBoundingBoxXyz.cs - JtPlacement2dInt.cs - JtWindowHandle.cs - Point2dInt.cs - Point2dIntLoop.cs - RoomEditorDb.cs - Util.cs Starting at the end of the list, Util.cs contains a bunch of utilities to handle little details such as: - Unit conversion - Formatting - Messages - Browsing for a directory - Flipping SVG Y coordinates Moving back to the beginning of the list, the five modules with a Cmd prefix are external command implementations driven by the custom panel created by the external application defined in App.cs. They fulfil the following tasks: - CmdAbout – display an about message. - CmdUpload – upload selected rooms and their furniture and equipment to the cloud database. - CmdUploadAll – upload all rooms in the model and their furniture and equipment to the cloud database. - CmdUpdate – refresh all furniture and equipment family instance placements from the cloud database. - CmdSubscribe – toggle back and forth between real-time subscription to updates from the cloud database. A noteworthy aspect of subscription command is that it switches the button text dynamically to reflect its state. They are represented by corresponding icons displayed by the ribbon panel user interface: The remaining modules can be grouped into the following main areas: - Boundary loops – determine the room and family instance boundary loop polygons, driven by CmdUpload: - ContiguousCurveSorter.cs - JtBoundingBox2dInt.cs - JtBoundingBoxXyz.cs - JtPlacement2dInt.cs - Point2dInt.cs - Point2dIntLoop.cs - GeoSnoop – temporary graphical display of the boundary loops, triggered by CmdUpload when it has done its job: - GeoSnoop.cs - JtWindowHandle.cs - Database model – manage the information uploaded to and retrieved from the cloud database: - DbModel.cs - DbUpdater.cs - DbUpload.cs - RoomEditorDb.cs I already discussed all the aspects of the boundary loop determination, GeoSnoop graphical debugging display and database representation in pretty good detail back in April: - Database structure - Database upload - Integer based 2D placement - Populating symbols and instances - Retrieving the boundary loops - GeoSnoop loop display Actually, that was the last time I discussed anything at all related to this add-in until migrating it to Revit 2014 last week, so all the items I listed as next steps back then and that have now been implemented remain to be discussed. 
Let's begin with the external application implementation: RoomEditorApp External Application Implementation The external application fulfils the following main tasks: - Handle retrieval of the embedded icon resources. - Create and populate the custom ribbon panel. - Toggle subscription command text and manage the Idling event handler. - Main entry points. Let's look at each of these in more detail. Handle Retrieval of Embedded Icon Resources All the icons are saved into the Revit add-in assembly as embedded resources, living in an own subfolder named Icon: This is obviously very handy, as there is no need to copy the icon files around separately. Here are the methods used to extract the bitmap image information at runtime: /// <summary> /// Executing assembly namespace /// </summary> static string _namespace = typeof( App ).Namespace; /// <summary> /// Return path to embedded resource icon /// </summary> static string IconResourcePath( string name, string size ) { return _namespace + "." + "Icon" // folder name + "." + name + size // icon name + ".png"; // filename extension } /// <summary> /// Load a new icon bitmap from embedded resources. /// For the BitmapImage, make sure you reference /// WindowsBase and PresentationCore, and import /// the System.Windows.Media.Imaging namespace. /// </summary> static BitmapImage GetBitmapImage( Assembly a, string path ) { string[] names = a.GetManifestResourceNames(); Stream s = a.GetManifestResourceStream( path ); Debug.Assert( null != s, "expected valid icon resource" ); BitmapImage img = new BitmapImage(); img.BeginInit(); img.StreamSource = s; img.EndInit(); return img; } Create and Populate Custom Ribbon Panel I define the various command button data such as its text, implementation class, icon and tooltip in arrays of strings to enable defining the ribbon items in a simple loop. With the bitmap handling functionality in place, the entire custom ribbon panel creation is handled in one fell swoop by the following AddRibbonPanel method: /// <summary> /// Caption /// </summary> public const string Caption = "Room Editor"; /// <summary> /// Command name prefix /// </summary> const string _cmd_prefix = "Cmd"; /// <summary> /// Currently executing assembly path /// </summary> static string _path = typeof( App ) .Assembly.Location; /// <summary> /// Keep track of our ribbon buttons to toggle /// them on and off later and change their text. /// </summary> static RibbonItem[] _buttons; /// <summary> /// Create a custom ribbon panel and populate /// it with our commands, saving the resulting /// ribbon items for later access. /// </summary> static void AddRibbonPanel( UIControlledApplication a ) { string[] tooltip = new string[] { "Upload selected rooms to cloud.", "Upload all rooms to cloud.", "Update furniture from the last cloud edit.", "Subscribe to or unsubscribe from updates.", "About " + Caption + ": ..." }; string[] text = new string[] { "Upload Selected", "Upload All", "Update Furniture", "Subscribe", "About..." 
}; string[] classNameStem = new string[] { "Upload", "UploadAll", "Update", "Subscribe", "About" }; string[] iconName = new string[] { "1Up", "2Up", "1Down", "ZigZagRed", "Question" }; int n = classNameStem.Length; Debug.Assert( text.Length == n, "expected equal number of text and class name entries" ); _buttons = new RibbonItem[n]; RibbonPanel panel = a.CreateRibbonPanel( Caption ); SplitButtonData splitBtnData = new SplitButtonData( Caption, Caption ); SplitButton splitBtn = panel.AddItem( splitBtnData ) as SplitButton; Assembly asm = typeof( App ).Assembly; for( int i = 0; i < n; ++i ) { PushButtonData d = new PushButtonData( classNameStem[i], text[i], _path, _namespace + "." + _cmd_prefix + classNameStem[i] ); d.ToolTip = tooltip[i]; d.Image = GetBitmapImage( asm, IconResourcePath( iconName[i], "16" ) ); d.LargeImage = GetBitmapImage( asm, IconResourcePath( iconName[i], "32" ) ); d.ToolTipImage = GetBitmapImage( asm, IconResourcePath( iconName[i], "" ) ); _buttons[i] = splitBtn.AddPushButton( d ); } } Toggle the Subscription Command Text and Idling Event Handler Management With all of the commands in place, the subscription command text toggling and Idling event handler management becomes almost trivial. I presented the principles to implement your own toggle button a year ago, and we simply make use of that here. The button icon could be toggled as well, if we like. The Idling event handler is defined in the subscription command implementation, where it belongs. However, best practice as demonstrated by the ModelessDialog ModelessForm_IdlingEvent Revit SDK sample retains the final control and the subscription to the event in the external application. In order for the command to define the handler and toggle the subscription on and off, the external application provides a method named ToggleSubscription taking the event handler implementation as an argument. It subscribes to or unsubscribes from the event as requested, and also toggles the text displayed by the corresponding command button: I define a property name 'Subscribed' to determine the current subscription status, and toggle it on and off by calling the ToggleSubscription method: /// <summary> /// Our one and only Revit-provided /// UIControlledApplication instance. /// </summary> static UIControlledApplication _uiapp; /// <summary> /// Switch between subscribe /// and unsubscribe commands. /// </summary> const string _subscribe = "Subscribe"; const string _unsubscribe = "Unsubscribe"; /// <summary> /// Are we currently subscribed /// to automatic cloud updates? /// </summary> public static bool Subscribed { get { return _buttons[3].ItemText.Equals( _unsubscribe ); } } /// <summary> /// Toggle on and off subscription to /// automatic cloud updates. 
/// </summary> public static void ToggleSubscription( EventHandler<IdlingEventArgs> handler ) { if( Subscribed ) { _uiapp.Idling -= handler; _buttons[3].ItemText = _subscribe; } else { _uiapp.Idling += handler; _buttons[3].ItemText = _unsubscribe; } } Main Entry Points OnStartup and OnShutdown All that remains to do for the external application is initialise the _uiapp variable and add the custom ribbon panel on start-up, and remove the Idling event handler if it is still active on shutdown: public Result OnStartup( UIControlledApplication a ) { _uiapp = a; AddRibbonPanel( a ); return Result.Succeeded; } public Result OnShutdown( UIControlledApplication a ) { if( Subscribed ) { _uiapp.Idling -= new EventHandler<IdlingEventArgs>( ( sender, ea ) => { } ); } return Result.Succeeded; } This is probably my most complex external application to date. I hope you appreciate its simplicity in spite of all the requirements it fulfils, and that this presentation helps you keep your add-ins as simple as possible as well. Unresponsive Idling Before closing, let me mention that my tests of this application so far on Revit 2014 and Windows 7 show a decreased responsiveness of the Idling event compared to Revit 2013 and Windows XP. In Revit 2013, I was even calling the SetRaiseWithoutDelay method to get as many Idling calls as possible with no problem. Regardless of that setting, the system is currently much less responsive in Revit 2014. The task manager shows Revit.exe hogging almost 100% percent of the CPU as soon as I subscribe to the Idling event. Debugging this, I also note that my attempts to unsubscribe from the Idling event handler have no effect; surprisingly, the Idling event handler still gets called anyway. Something seems to have changed in the interaction between Revit 2014 and the Idling event. I added some debugging variables to count the number of Idling calls received, print a message now and then, and skip the database query for most of them. I also removed the exception wrapping the database query. The problem is somewhat alleviated but not yet solved. I don't know yet whether I have an issue with my virtual machine in Parallels in Mac, or my cloud database is acting differently on Windows 7 than it did on Windows XP, or some other suboptimal setting is causing this. Hopefully I can get it resolved soon, though. Any advice on this is much appreciated! Download This application lives in the RoomEditorApp GitHub repository and the version discussed above is release 2014.0.0.15.
https://thebuildingcoder.typepad.com/blog/2013/11/roomeditorapp-architecture-and-external-application.html
CC-MAIN-2019-04
en
refinedweb
This article describes in detail the steps I took in setting up Elasticsearch as the search provider for Pony Foo. I start by explaining what Elasticsearch is, how you can set it up to make useful searches through the Node.js API client, and how to deploy the solution onto a Debian or Ubuntu environment. A while back I started working at Elastic – the company behind Elasticsearch , a search engine & realtime analytics service powered by Lucene indexes. It’s an extremely exciting open-source company and I’m super happy here – and we’re hiring, drop me a note! Thrilled to announce I’ve started working at @elastic ! Working on Kibana (ES graphs) Great fun/team! Hiring! — Nicolás Bevacqua (@nzgb) March 29, 2016 Possible use cases for Elasticsearch range from indexing millions of HTTP log entries, analyzing public traffic incidents in real-time, streaming tweets, all the way to tracking and predicting earthquakes and back to providing search for a lowly blog like Pony Foo. We also build Kibana , a dashboard that sits in front of Elasticsearch and lets you perform and graph the most complex queries you can possibly imagine. Many use Kibana across those cool service status flat screens in hip offices across San Francisco. But enough about me and the cool things you can do with Elastic’s products. Let’s start by talking about Elasticsearch in more meaningful, technical terms. What is Elasticsearch, even? Elasticsearch is a REST HTTP service that wraps around Apache Lucene , a Java-based indexing and search technology that also features spellchecking, hit highlighting and advanced analysis/tokenization capabilities. On top of what Lucene already provides, Elasticsearch adds an HTTP interface, meaning you don’t need to build your application using Java anymore; and is distributed by default, meaning you won’t have any trouble scaling your operations to thousands of queries per second. Elasticsearch is great for setting up blog search because you could basically dump all your content into an index and have them deal with user’s queries, with very little effort or configuration. Here’s how I did it. Initial Setup I’m on a Mac, so – for development purposes – I just installed elasticsearch using Homebrew . brew install elasticsearch If you’re not on a Mac, just go to the download page and get the latest version , unzip it, run it in a shell, and you’re good to go. Once you have the elasticsearch executable, you can run it on your terminal. Make sure to leave the process running while you’re working with it. elasticsearch Querying the index is a matter of using curl , which is a great diagnostics tool to have a handle on; a web browser, by querying ( 9200 is the port Elasticsearch listens at by default) ; the Sense Chrome extension, which provides a simple interface into the Elasticsearch REST service, or the Console plugin for Kibana , which is similar to Sense. There are client libraries that consume the HTTP REST API available to several different languages. In our case, we’ll use the Node.js client: elasticsearch . npm install --save elasticsearch The elasticsearch API client is quite pleasant to work with, they provide both Promise -based and callback-based API through the same methods. First off, we’ll create a client. This will be used to talk to the REST service for our Elasticsearch instance. Creating an Elasticsearch Index We’ll start by importing the elasticsearch package and instantiating a REST client configured to print all logging statements. 
import elasticsearch from 'elasticsearch'; const client = new elasticsearch.Client({ host: '', log: 'debug' }); Now that we have a client we can start interacting with our Elasticsearch instance. We’ll need an index where we can store our data. You can think of an Elasticsearch index as the rough equivalent of a database instance. A huge difference, though, is that you can very easily query multiple Elasticsearch indices at once – something that’s not trivial with other database systems. I’ll create an index named 'ponyfoo' . Since client.indices.create returns a Promise , we can await on it for our code to stay easy to follow. If you need to brush up on async / await you may want to read “Understanding JavaScript’s async await” and thearticle on Promises as well. await client.indices.create({ index: 'ponyfoo' }); That’s all the setup that is required . Creating an Elasticsearch Mapping In addition to creating an index, you can optionally create an explicit type mapping . Type mappings aid Elasticsearch’s querying capabilities for your documents – avoiding issues when you are storing dates using their timestamps, among other things . If you don’t create an explicit mapping for a type, Elasticsearch will infer field types based on inserted documents and create a dynamic mapping. A timestamp is often represented in JSON as a long , but Elasticsearch will be unable to detect the field as a date field, preventing date filters and facets such as the date histogram facet from working properly. — Elasticsearch Documentation Let’s create a mapping for the type 'article' , which is the document type we’ll use when storing blog articles in our Elasticsearch index. Note how even though the tags property will be stored as an array, Elasticsearch takes care of that internally and we only need to specify that each tag is of type string. The created property will be a date , as hinted by the mapping, and everything else is stored as strings. await client.indices.putMapping({ index: 'ponyfoo', type: 'article', body: { properties: { created: { type: 'date' }, title: { type: 'string' }, slug: { type: 'string' }, teaser: { type: 'string' }, introduction: { type: 'string' }, body: { type: 'string' }, tags: { type: 'string' } } } }); The remainder of our initial setup involves two steps – both of them involving keeping the Elasticsearch index up to date, so that querying it yields meaningful results. - Importing all of the current articles into our Elasticsearch index - Updating the Elasticsearch index whenever an article is updated or a new article is created Keeping Elasticsearch Up-to-date These steps vary slightly depending on the storage engine you’re using for blog articles. For Pony Foo, I’m using MongoDB and the mongoose driver. The following piece of code will trigger a post-save hook whenever an article is saved – regardless of whether we’re dealing with an insert or an update. mongoose.model('Article').schema.post('save', updateIndex); The updateIndex method is largely independent of the storage engine: our goal is to update the Elasticsearch index with the updated document. We’ll be using the client.update method for an article of id equal to the _id we had in our MongoDB database, although that’s entirely up to you – I chose to reuse the MongoDB, as I found it most convenient. The provided doc should match the type mapping we created earlier, and as you can see I’m just forwarding part of my MongoDB document to the Elasticsearch index. 
Given that we are using the doc_as_upsert flag, a new document will be inserted if no document with the provided id exists, and otherwise the existing id document will be modified with the updated fields, again in a single HTTP request to the index. I could’ve done doc: article , but I prefer a whitelist approach where I explicitly name the fields that I want to copy over to the Elasticsearch index, which explains the toIndex function. const id = article._id.toString(); await client.update({ index: 'ponyfoo', type: 'article', id, body: { doc: toIndex(article), doc_as_upsert: true } }); function toIndex (article) { return { created: article.created, title: article.title, slug: article.slug, teaser: article.teaser, introduction: article.introduction, body: article.body, tags: article.tags }; } Whenever an article gets updated in our MongoDB database, the changes will be mirrored onto Elasticsearch. That’s great for new articles or changes to existing articles, but what about articles that existed before I started using Elasticsearch? Those wouldn’t be in the index unless I changed each of them and the post-save hook picks up the changes and forwards them to Elasticsearch. Wonders of the Bulk API, or Bootstrapping an Elasticsearch Index To bring your Elasticsearch index up to date with your blog articles, you will want to use the bulk operations API , which allows you to perform several operations against the Elasticsearch index in one fell swoop. The bulk API consumes operations from an array under the [cmd_1, data_1?, cmd_2, data_2?, ..., cmd_n, data_n?] format. The question marks note that the data component of operations is optional. Such is the case of delete commands, which don’t require any additional data beyond an object id . Provided an array of articles pulled from MongoDB or elsewhere, the following piece of code reduces articles into command/data pairs on a single array, and submits all of that to Elasticsearch as a single HTTP request through its bulk API. await client.bulk({ body: articles.reduce(toBulk, []) }); function toBulk (body, article) { body.push({ update: { _index: 'ponyfoo', _type: 'article', _id: article._id.toString() } }); body.push({ doc: toIndex(article), doc_as_upsert: true }); // toIndex from previous code block return body; } If JavaScript had .flatMap we could do away with .reduce and .push , but we’re not quite there yet. await client.bulk({ body: articles.flatMap(article => [{ update: { _index: 'ponyfoo', _type: 'article', _id: article._id.toString() } }, { doc: toIndex(article), doc_as_upsert: true }]) }); Great stuff! Up to this point we have: - Installed Elasticsearch and the elasticsearchnpm package - Created an Elasticsearch index for our blog - Created an Elasticsearch mapping for articles - Set up a hook that upserts articles when they’re inserted or updated in our source store - Used the bulk API to pull all articles that weren’t synchronized into Elasticsearch yet We’re still missing the awesome parts , though! - Set up a queryfunction that takes some options and returns the articles matching the user’s query - Set up a relatedfunction that takes an articleand returns similar articles - Create an automated deployment script for Elasticsearch Shall we? Querying the Elasticsearch Index While utilizing the results of querying the Elasticsearch index is out of the scope of this article, you probably still want to know how to write a function that can query the engine you so carefully set up with your blog’s amazing contents. 
A simple query(options) function looks like below. It returns a Promise and it uses async / await . The resulting search hits are mapped through a function that only exposes the fields we want. Again, we take a whitelisting approach as favored earlier when we inserted documents into the index. Elasticsearch offers a querying DSL you can leverage to build complex queries. For now, we’ll only use the match query to find articles whose title match the provided options.input . async function query (options) { const result = await client.search({ index: 'ponyfoo', type: 'article', body: { query: { match: { title: options.input } } } }); return result.hits.hits.map(searchHitToResult); } The searchHitToResult function receives the raw search hits from the REST Elasticsearch API and maps them to simple objects that contain only the _id , title , and slug fields. In addition, we’ll include the _score field, Elasticsearch’s way of telling us how confident we should be that the search hit reliably matches the human’s query. Typically more than enough for dealing with search results. function searchHitToResult (hit) { return { _score: hit._score, _id: hit._id, title: hit._source.title, slug: hit._source.slug }; } You could always query the MongoDB database for _id to pull in more data, such as the contents of an article. Even in the case of a simple blog, you wouldn’t consider a search solution sufficient if users could only find articles by matching their titles. You’d want to be able to filter by tags, and even though the article titles should be valued higher than their contents (due to their prominence) , you’d still want users to be able to search articles by querying their contents directly. You probably also want to be able to specify date ranges, and then expect to see results only within the provided date range. What’s more, you’d expect to be able to fit all of this in a single querying function. Building Complex Elasticsearch Queries As it turns out, we don’t have to drastically modify our query function to this end. Thanks to the rich querying DSL, our problem becomes finding out which types of queries we need to use, and figuring out how to stack the different parts of our query. To begin, we’ll add the ability to query several fields, and not just the title . To do that, we’ll use the multi_match query , adding 'teaser', 'introduction', 'content' to the title we were already querying about. async function query (options) { const result = await client.search({ index: 'ponyfoo', type: 'article', body: { query: { multi_match: { query: options.input, fields: ['title', 'teaser', 'introduction', 'content'] } } } }); return result.hits.hits.map(searchHitToResult); } Earlier, I brought up the fact that I want to rate the title field higher. In the context of search, this is usually referred to as giving a term more “weight”. To do this through the Elasticsearch DSL, we can use the ^ field modifier to boost the title field three times. { query: { multi_match: { query: options.input, fields: ['title^3', 'teaser', 'introduction', 'content'] } } } If we have additional filters to constrain a query, I’ve found that the most effective way to express that is using a bool query , moving the filter options into a function and placing our existing multi_match query under a must clause, within our bool query. Bool queries are a powerful querying DSL that allow for a recursive yet declarative and simple interface to defining complex queries. 
Even in the case of a simple blog, you wouldn't consider a search solution sufficient if users could only find articles by matching their titles. You'd want to be able to filter by tags, and even though the article titles should be valued higher than their contents (due to their prominence), you'd still want users to be able to search articles by querying their contents directly. You probably also want to be able to specify date ranges, and then expect to see results only within the provided date range. What's more, you'd expect to be able to fit all of this in a single querying function.

Building Complex Elasticsearch Queries

As it turns out, we don't have to drastically modify our query function to this end. Thanks to the rich querying DSL, our problem becomes finding out which types of queries we need to use, and figuring out how to stack the different parts of our query. To begin, we'll add the ability to query several fields, and not just the title. To do that, we'll use the multi_match query, adding 'teaser', 'introduction', 'content' to the title we were already querying about.

async function query (options) {
  const result = await client.search({
    index: 'ponyfoo',
    type: 'article',
    body: {
      query: {
        multi_match: {
          query: options.input,
          fields: ['title', 'teaser', 'introduction', 'content']
        }
      }
    }
  });
  return result.hits.hits.map(searchHitToResult);
}

Earlier, I brought up the fact that I want to rate the title field higher. In the context of search, this is usually referred to as giving a term more "weight". To do this through the Elasticsearch DSL, we can use the ^ field modifier to boost the title field three times.

{
  query: {
    multi_match: {
      query: options.input,
      fields: ['title^3', 'teaser', 'introduction', 'content']
    }
  }
}

If we have additional filters to constrain a query, I've found that the most effective way to express that is using a bool query, moving the filter options into a function and placing our existing multi_match query under a must clause, within our bool query. Bool queries are a powerful querying DSL that allow for a recursive yet declarative and simple interface to defining complex queries.

{
  query: {
    bool: {
      filter: filters(options),
      must: {
        multi_match: {
          query: options.input,
          fields: ['title^3', 'teaser', 'introduction', 'content']
        }
      }
    }
  }
}

In the simplest case, the applied filter does nothing at all, leaving the original query unmodified. Here we return an empty filter object.

function filters (options) {
  return {};
}

When the user-provided options object contains a since date, we can use that to define a range for our filter. For the range filter we can specify fields and a condition. In this case we specify that the created field must be gte (greater than or equal) the provided since date. Since we moved this logic to a filters function, we don't clutter the original query function with our (albeit simple) filter-building algorithm. We place our filters in a must clause within a bool query, so that we can filter on as many concerns as we have to.

function filters (options) {
  const clauses = [];
  if (options.since) {
    clauses.unshift(since(options.since));
  }
  return all(clauses);
}
function all (clauses) {
  return { bool: { must: clauses } };
}
function since (date) {
  return { range: { created: { gte: date } } };
}

When it comes to constraining a query to a set of user-provided tags, we can add a bool filter once again. Using the must clause, we can provide an array of term queries for the tags field, so that articles without one of the provided tags are filtered out. That's because we're specifying that the query must match each user-provided tag against the tags field in the article.

function filters (options) {
  const tags = Array.isArray(options.tags) ? options.tags : [];
  const clauses = tags.map(tagToFilter);
  if (options.since) {
    clauses.unshift(since(options.since));
  }
  return all(clauses);
}
function all (clauses) {
  return { bool: { must: clauses } };
}
function since (date) {
  return { range: { created: { gte: date } } };
}
function tagToFilter (tag) {
  return { term: { tags: tag } };
}

We could keep on piling condition clauses on top of our query function, but the bottom line is that we can easily construct a query using the Elasticsearch querying DSL, and it's most likely going to be able to perform the query we want within a single request to the index.

Finding Similar Documents

The API to find related documents is quite simple as well. Using the more_like_this query, we could specify the like parameter to look for articles related to a user-provided document – by default, a full text search is performed. We could reuse the filters function we just built, for extra customization. You could also specify that you want at most 6 articles in the response, by using the size property.

{
  query: {
    bool: {
      filter: filters(options),
      must: {
        more_like_this: {
          like: { _id: options.article._id.toString() }
        }
      }
    }
  },
  size: 6
}
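Only the query body is shown above; to actually use it you'd wrap it in a function shaped like the earlier query(options). The related name and the reuse of searchHitToResult below are assumptions of mine rather than something spelled out above, but the plumbing is the same:

async function related (options) {
  const result = await client.search({
    index: 'ponyfoo',
    type: 'article',
    body: {
      query: {
        bool: {
          filter: filters(options),
          must: {
            more_like_this: {
              like: { _id: options.article._id.toString() }
            }
          }
        }
      },
      size: 6 // at most six related articles, as discussed above
    }
  });
  return result.hits.hits.map(searchHitToResult);
}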
Using the more_like_this query we can quickly set up those coveted "related articles" that spring up on some blogging engines but feel so very hard to get working properly in your homebrew blogging enterprise. The best part is that Elasticsearch took care of all the details for you. I've barely had to explain any search concepts at all in this blog post, and you came out with a powerful query function that's easily augmented, as well as the body of a search query for related articles – nothing too shabby! To round things out, I'll detail the steps I took in making sure that my deployments went smoothly with my recently added Elasticsearch toys.

Rigging for Deployment

After figuring out the indexing and querying parts (even though I now work at Elastic I'm pretty far from becoming a search demigod), and setting up the existing parts of the blog so that search and related articles leverage the new Elasticsearch services I wrote for ponyfoo/ponyfoo, came deploying to production. It took a bit of research to get the deployment right for Pony Foo's Debian Jessie production environment. Interestingly, my biggest issue was figuring out how to install Java 8. The following chunk of code installs Java 8 in Debian Jessie and sets it as the default java runtime. Note that we'll need the cookie in wget so that Oracle validates the download.

echo "install java"
JAVA_PACK=jdk-8u92-linux-x64.tar.gz
JAVA_VERSION=jdk1.8.0_92
wget -nv --header "Cookie: oraclelicense=accept-securebackup-cookie"
sudo mkdir /opt/jdk
sudo tar -zxf $JAVA_PACK -C /opt/jdk
sudo update-alternatives --install /usr/bin/java java /opt/jdk/$JAVA_VERSION/bin/java 100
sudo update-alternatives --install /usr/bin/javac javac /opt/jdk/$JAVA_VERSION/bin/javac 100

Before coming to this piece of code, I tried using apt-get but nothing I did seemed to work. The oracle-java8-installer package some suggest you should install was nowhere to be found, and the default-jre package isn't all that well supported by elasticsearch. After installing Java 8, we have to install Elasticsearch. This step involved copying and pasting Elastic's installation instructions, for the most part.

echo "install elasticsearch"
wget -qO - | sudo apt-key add -
echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
sudo apt-get update
sudo apt-get -y install elasticsearch

Next up came setting up elasticsearch as a service that also relaunches itself across reboots.

echo "elasticsearch as a service"
sudo update-rc.d elasticsearch defaults 95 10
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

I deploy Pony Foo through a series of immutable deployments (that article had two parts!), building disk images along the way using Packer. For the most part, unless I'm setting up something like Elasticsearch, the deployment consists of installing the latest npm dependencies and updating the server to the latest version of the Node.js code base. More fundamental changes take longer, however, when I need to re-install parts of the system dependencies for example, but that doesn't occur as often. This leaves me with a decently automated deployment process while retaining tight control over the server infrastructure to use cron and friends as I see fit.

When I'm ready to fire up the elasticsearch service, I just run the following. The last command prints useful diagnostic information that comes in handy while debugging your setup.

echo "firing up elasticsearch"
sudo service elasticsearch restart || sudo service elasticsearch start || (sudo cat /var/log/elasticsearch/error.log && exit 1)
sudo service elasticsearch status

That's about it. If the whole deployment process feels too daunting for you, Elastic offers Elastic Cloud. Although, at $45/mo, it's mostly aimed at companies! If you're flying solo, you might just have to strap on your keyboards and start fiercely smashing those hot keys. There is one more step in my setup, which is that I hooked my application server up in such a way that the first search request creates the Elasticsearch index, type mapping, and bulk-inserts documents into the index.
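A bare-bones sketch of that lazy bootstrapping follows. The ensureIndex name, the memoized promise, and the Article model are assumptions of mine about how such a hook could be wired up, not code taken from Pony Foo:

let ready = null;
function ensureIndex () {
  if (!ready) {
    ready = bootstrap(); // every search request awaits the same one-time setup
  }
  return ready;
}
async function bootstrap () {
  const exists = await client.indices.exists({ index: 'ponyfoo' });
  if (exists) {
    return; // index already bootstrapped on a previous deployment
  }
  await client.indices.create({ index: 'ponyfoo' });
  await client.indices.putMapping({
    index: 'ponyfoo',
    type: 'article',
    body: articleMapping // the mapping object defined earlier, assumed to be in scope
  });
  const articles = await Article.find({}).exec(); // hypothetical data source, as before
  await client.bulk({ body: articles.reduce(toBulk, []) });
}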
This could alternatively be done before the Node.js application starts listening for requests, but since it's not a crucial component of Pony Foo, that'll do for now! Conclusions I had a ton of fun setting up Elasticsearch for the blog. Even though I already had a homebrew search solution, it performed very poorly and the results weren't anywhere close to accurate. With Elasticsearch the search results are much more on point, and hopefully will be more useful to my readers. Similarly, related articles should be more relevant now as well! I can't wait to hook Elasticsearch up with Logstash and start feeding nginx logs into my ES instance so that I can see some realtime HTTP request data – besides what Google Analytics has been telling me – for the first time since I started blogging back in late 2012. I might do this next, when I have some free time. Afterwards, I might set up some sort of public Kibana dashboard displaying realtime metrics for Pony Foo servers. That should be fun!
http://www.shellsec.com/news/23892.html
CC-MAIN-2017-13
en
refinedweb
09-29-2009 09:39 AM
I want to show the Label Field in Red color. How to change the Font Color in Blackberry? Thanks for any help. Solved! Go to Solution.

09-29-2009 09:42 AM
Did you even bother to search the forum? Anyway.. try this

public class ColorLabel extends LabelField {
    private int color;

    public ColorLabel(Object text, long style, int color) {
        super(text, style);
        this.color = color;
    }

    public void paint(Graphics g) {
        g.setColor(color);
        super.paint(g);
    }
}
https://supportforums.blackberry.com/t5/Java-Development/How-to-Change-the-Font-Color/m-p/344061/highlight/true
CC-MAIN-2017-13
en
refinedweb
Difference between revisions of "Talk:Dm-crypt" Revision as of 07:23, 20 January 2014 Contents Cleanup and Clarification) Splitting sections into separate pages Does anyone else feel that 11,305 words is too long for a single article? I'd like to propose splitting this article across multiple pages. If MediaWiki's Subpages feature is enabled, this might be a good time to use it. The article contains many sections that are not greatly related to one another. For example, does one really need to know how to (section 6) encrypt a loopback filesystem or (section 3.2) use a keyfile in order to (section 3.3) encrypt a swap partition? It's common to encrypt a swap partition without using a keyfile or an encrypted loopback filesystem, so why are they discussed in the same article? I acknowledge that all the sections are related to LUKS, but many of them are not dependent on each other. Having many vaguely related topics makes the article difficult to follow and maintain. I propose Subpages because subpages can show their relationship to LUKS (and other sections, just as an example: /LUKS/Configuration/Keyfiles). In the absence of Subpages, placing a general overview of LUKS in the main article -- and links to pages on more specific topics -- would also be an improvement. Separating sections into (sub)pages would also keep talk pages attuned to a specific subject. I have some suggestions for improvement of individual sections as well, but I think separating sections would be a good first step. EscapedNull (talk) 14:26, 29 September 2013 (UTC) - Hi, the article is among the longest, splitting it into subpages could help not feeling overwhelmed by it, however a lot of care should be taken in doing it, that's why I think you've been very wise to start a discussion first. We've had a number of users working hard on it, in particular I'd like to point you to a recent discussion I had with User:Indigo, #Encrypting_a_LVM_setup_.28ex_section_8.29, on which we agreed on keeping Dm-crypt_with_LUKS#Encrypting_a_LVM_setup here instead of merging it to Encrypted LVM: moving it to a subpage would somehow conflict with that decision, so I'll try to invite Indigo to discuss here with us on what to do now. - Finally, just to answer your doubts, this wiki doesn't have the subpage feature enabled on the Main namespace, nonetheless subpages (i.e. article names with slashes) are already commonly used to keep series of related articles together, so that would indeed be the way to split this article. - -- Kynikos (talk) 02:54, 30 September 2013 (UTC) - After reading the discussion, I see what you mean. However, I don't think splitting up the article would interfere too much with the decision to keep a brief overview of LUKS and LVM in addition to the Encrypted LVM page. The setup I had in mind was roughly giving each top-level section its own page (but don't quote me on that). The overview and the Encrypted LVM page seem to overlap, and I don't see much benefit to maintaining both, although Kynokos and Indigo might not agree and that's fine. Personally I find the Encrypted LVM page easier to follow and I think it gives the reader a better understanding of the subject, which is why my own edits on the subject have gone there rather than this page. Case and point, I'd propose replacing the overview with Encrypted LVM (as a pseudo-subpage, or just a link), but maintaining the overview is also okay, and perhaps it would just get its own article or pseudo-subpage. 
The main point I was trying to make is that I think LUKS/dm-crypt is too broad a topic for a single article. And as you said, I'd also be interested in hearing what Indigo has to say about this. EscapedNull (talk) 17:24, 30 September 2013 (UTC) - Hi, thanks for sharing your ideas here. Getting rid of not required content would be a preferable way, if you ask me. Particularly by (a) streamlining to LUKS and vacuuming for clarity. Then (b) splitting content by moving out sections to new pages can help and be a way forward. Yet I don't see a reason why (a) cannot be done while possibilites for (b) are figured out. If you look at the sections you quote in your first post, you will notice 2/3 have short introductory paras and would work as a subpage or even separate pages. Quite a number of edits were made to that respect and continuing with it should make it easier to re-structure the article, if that is the outcome of the discussion. If not, it is still more readable this way. - I don't grasp what you have in mind with replacing the "overview" (?) with Encrypted_LVM. I would rather merge LUKS#LVM:_Logical_Volume_Manager (the "overview"?) to there and link it from here. If you are instead referring to LUKS#Encrypting_a_LVM_setup (and hence the talk quoted by Kynikos above) as the "overview", it would be a great contribution to merge it into Encrypted_LVM. I am sure Kynikos will agree - he proposed to do that originally. In case you would like to approach the merge, please go forward with it. I'll make sure LUKS#Encrypting_a_system_partition regains the cross-linked content. - Back to your original topic: - For (a) maybe you want to re-consider to join in for editing in the suggestions you have in mind first. - For (b) another point that should be addressed along is how the new pages (plain dm-crypt and encrypted LVM) could benefit at the same time. If you ask me now, separating common content would be a preferable approach to using a subpage structure (e.g. like the multipage BG). Perhaps you can detail options you see for (b). How would you re-structure the top sections on this page? Which sections would you fork out from LUKS, ideally with perspective to the other encryption pages? --Indigo (talk) 05:03, 1 October 2013 (UTC) - By "overview" I was talking about section 7 "Encrypting an LVM setup." I didn't even notice that LVM was discussed twice (a testament to disorganization of the page as it is now). I see what you mean about the disadvantages of subpages. I mentioned subpages because, for example, an LVM can be encrypted using almost any block level encryption, and one could argue that setups using different underlying technologies should be separate pages (e.g. LUKS/Encrypted_LVM, Plain_dm-crypt/Encrypted_LVM, and cryptoloop/Encrypted_LVM) as the information is likely to be different (but this could lead to duplicated information, too). It was only an idea, and perhaps something like Category:Disk_Encryption would be more appropriate. After all, subpages are disabled for a reason. - I thought it would be a good idea to split the article first and edit second because it would be easier to focus on a single topic, and because it could save us from editing information twice in case it conflicts with the new structure. But if you think it would be best to edit first and restructure later, I'm fine with that I guess. Kynikos, do you have a preference? 
EscapedNull (talk) 13:44, 1 October 2013 (UTC) - My suggestion was that editing section content can be done in a way so that forking one out does not require major double edits. Meanwhile we gained another section LUKS#Encrypting_the_home_partition. With that it becomes easier to get rid of LUKS#Encrypting_a_LVM_setup here by finalizing Encrypted_LVM and double checking nothing is lost. Apart from that, anyone has a suggestion which section may be a first worthwhile candidate for a separate page? --Indigo (talk) 21:54, 1 November 2013 (UTC) - I'm sorry for losing sight of this discussion... I think managing to finalize the merge to Encrypted LVM would be a great way of starting to split this article. Then maybe Dm-crypt with LUKS#Specialties and I'd say also Dm-crypt with LUKS#Backup the cryptheader could be moved to a Dm-crypt with LUKS/Specialties article. Then... well, without those sections around it will be a little easier to understand the next steps I hope. -- Kynikos (talk) 04:36, 2 November 2013 (UTC) - Hm, since you mention subpages again: the reason I am unsure about them, as described above, is that I find them confusing to browse. The example I keep having in mind is the multipage BG guide. Reading that I have to scroll down to the page end in order to see links to subsequent subpages (e.g. Beginners'_Guide/Post-installation). If the master page had a TOC including sections of the subpages, that would be more transparent. But the TOC always starts with 1 per page and makes no reference back or forth. A reader not knowing the content will only find the subpages by coincidence, if at all. - Now, while writing the reply, I had the idea to leave in the section heading, but move the content to a subpage. This way the main LUKS article keeps at least a reference in the TOC and links out content not necessary for all readers. I just created two subpages to see and show how it works out: - 1. Dm-crypt_with_LUKS#Backup_the_cryptheader now leading to Dm-crypt_with_LUKS/Backup_the_cryptheader - 2. Dm-crypt_with_LUKS#Encrypting_a_loopback_filesystem now leading to Dm-crypt_with_LUKS/Encrypting_a_loopback_filesystem - (Feel free to revert the edits I did to test it out for 1 and 2. I thought it's important to see in context). - I would not want to do that with a section like Dm-crypt_with_LUKS#Specialties because that contains only short subsections (hence the main TOC would loose the references to them). But another candidate is surely Dm-crypt_with_LUKS#Using_Cryptsetup_with_a_Keyfile and of course the remaining LVM bits (until Encrypted LVM is complete). - Thoughts? --Indigo (talk) 20:26, 10 November 2013 (UTC) - Honestly I wasn't thinking of creating many subpages with just little content in each, my "idea" (not very clear yet) was to end up having just a bunch of big subpages (e.g. Dm-crypt with LUKS/Initial Setup, Dm-crypt with LUKS/Configuring LUKS, Dm-crypt with LUKS/Specialties...) and use Dm-crypt with LUKS as just a very short overview page, using a format similar to General Recommendations, with a section for each subpage that links there and briefly introduces its content, preferably with inline links to its various sections. 
- Quoting your last post, "A reader not knowing the content will only find the subpages by coincidence, if at all", I think this system would avoid that problem, both because of the little introductions with links, and because the shortness of the article would make the reader curious to open all the subpages to see what they talk about, none of them being seen as a subpage more important than the others. - I hope I've been able to express the idea clearly enough ^^' -- Kynikos (talk) 10:36, 11 November 2013 (UTC) - You develop it further and I see what you mean, yes. Yet the General Recommendations serve a totally different purpose. That article gives a guide across a wide variety of system setup topics. The LUKS page focusses on one kernel toolset and the various specialities for it (which is why I prefer a complete TOC as a reader totally). Nonetheless, I like your idea and (quickly - not attempting to change content but to show the case) tried to mod the test case 2 above accordingly: [1]. Now that results in us keeping the TOC of the main page complete but still forks the section out: Dm-crypt_with_LUKS#Encrypting_a_loopback_filesystem. Is this more something you would anticipate? - --Indigo (talk) 19:29, 11 November 2013 (UTC) - I will try to put my ideas together in User:Kynikos/Dm-crypt with LUKS first. -- Kynikos (talk) 08:51, 13 November 2013 (UTC) - Ok, a very rough draft is ready in: - How do you like it? If you agree with the general idea, I will apply it to the real article, but then would you be willing to help me finishing the job? Especially I'd like to still take some generic content off User:Kynikos/Dm-crypt with LUKS/Examples, filling Dm-crypt with LUKS#System configuration. -- Kynikos (talk) 11:04, 13 November 2013 (UTC) - Ah, of course we should also take care of updating all the reciprocal links among the sub-articles (all those containing a #Fragment). -- Kynikos (talk) 11:07, 13 November 2013 (UTC) - Kynikos, thank you for your time to set it out so clearly! - I still prefer the single page format myself really, but the point is more how other readers not familiar with the topic can cope with it in KISS style. (unfortunately just few raised opinions). All in all I agree now that this can be a good way forward to re-structure the article. I guess I could just not picture it earlier. Anyhow, it will be a pleasure to help you with it. I have left comments and questions in User_talk:Kynikos/Dm-crypt_with_LUKS. - --Indigo (talk) 22:27, 17 November 2013 (UTC) - Just to make it as clear as possible, User:Kynikos/Dm-crypt with LUKS has "moved" to Dm-crypt with LUKS/draftand User talk:Kynikos/Dm-crypt with LUKS has moved to Talk:Dm-crypt with LUKS/draft. -- Kynikos (talk) 14:00, 18 November 2013 (UTC) - The new links are Dm-crypt and Talk:Dm-crypt. -- Kynikos (talk) 04:18, 1 December 2013 (UTC) - So what have we decided, exactly? I see there's now a Dm-crypt page with subpages. Is that where sections from this page are going to be moved? What about the merge from section 7 to Encrypted LVM that User:Kynikos mentioned? Is that still happening, or are we going to make a Dm-crypt/Encrypted LVM subpage instead and merge Encrypted LVM into it? I'm willing to help, but I'm not sure as to what I should be doing. - Have a look at Talk:Dm-crypt#New_idea for your questions and the new plan, and then the Dm-crypt subpages. There's plenty of stuff to do to implement it to plan. If you are unsure how to help, look for the 'accuracy' and 'expansion' tags for example. 
Would be great, if you want to join in. --Indigo (talk) 19:35, 6 December 2013 (UTC) - I'd only like to add that Encrypted LVM is already merged into the new dm-crypt/Encrypting an Entire System, it's only a matter of properly moving duplicated content to the other subpages of dm-crypt. Of course any help is really welcome, as there's still a lot to do! -- Kynikos (talk) 02:50, 7 December 2013 (UTC) New idea The philosophy behind the current old structure was to try to generalize the various steps for encrypting an entire system or a device and managing it, however we've noticed it's kind of hard. A new idea for reducing duplication of content while maintaining, if not improving, readability, would be to rename the "/Examples" subpage to "/Common Scenarios" and move it to first place in Dm-crypt with LUKS/draft, so it's used use the dm-crypt#Common scenarios section as the starting point by the readers. It should contain the most common uses for encryption, which IMO are: - dm-crypt/Encrypting a Non-Root File System - partition - loopback - dm-crypt/Encrypting an Entire System - plain dm-crypt (merge Plain dm-crypt without LUKS, done) - dm-crypt + LUKS (no LVM) - LVM on LUKS (merge Encrypted LVM, done) - LUKS on LVM (merge Encrypted LVM, done) - (I think it would be really cool if we could also include an example with software RAID) Each of those scenarios should be mostly a stripped sequence of commands with short descriptions that should link to generic sections in the other subpages of Dm-crypt with LUKS dm-crypt, pointing out all the particular suggested adaptations that apply to that particular scenario. The idea is quite clear in my mind, I hope I've managed to explain it well enough, I'll try to put it into practice and see if it raises major problems. -- Kynikos (talk) 03:08, 23 November 2013 (UTC) EDIT: since Plain dm-crypt without LUKS would be merged here, the main article should be just renamed to dm-crypt. -- Kynikos (talk) 03:09, 23 November 2013 (UTC) EDIT: updated for current structure. -- Kynikos (talk) 04:31, 8 December 2013 (UTC) Scenario structure - Plenty of stuff to do, yet: Taking for granted we want an additional example with RAID sometime, it might be worth considering to split dm-crypt/Encrypting an Entire System into a subpage for (e.g.) dm-crypt/Encrypting a single disk system and dm-crypt/Encrypting a system across multiple disks scenarios. The latter covering "LUKS on LVM" and said RAID. Main reason: page length. If you agree, let's better do it now. --Indigo (talk) 12:11, 8 December 2013 (UTC) - Not a bad idea at all! However IMHO the proposed titles are a bit misleading: I would go for dm-crypt/Encrypting a System on Physical Devices and dm-crypt/Encrypting a System on Virtual Devices, in fact you can use multiple physical disks in every case if you want. -- Kynikos (talk) 03:48, 9 December 2013 (UTC) - EDIT: Note that the history of dm-crypt/Encrypting an Entire System should be preserved by moving it to one of the two titles, and then (or before) splitting the other page. -- Kynikos (talk) 03:49, 9 December 2013 (UTC) - +1 to your edit, I learned that from watching. The one letdown of this whole fun exercise is that the wiki engine does not seem to support basic content splits and joins preserving history. Anyhow, we might as well just keep it in mind and consider splitting it later (when there is something about RAID). 
Funnily, I find the use of "physical" (all blockdevices are on one) and "virtual" (suggests a qcow device) as differentiator not totally clear too. Let's meditate over it again until someone has another snappy idea. --Indigo (talk) 23:16, 9 December 2013 (UTC) Scenario intros I want to bring up another contexual point: You put in 'expansion' tags at the beginning of each section of the scenario page to "Compare to the other scenarios with advantages/disadvantages.". If I understand those tags correctly, you have Dm-crypt/Encrypting_an_Entire_System#Plain_dm-crypt in mind as an example. Yes, we want a common structured intro for each scenario, but I'd like it better to be just shortly descriptive regarding the scenario content. For example a para introducing the setup, followed by an ascii chart of the disk layout used (as per Dm-crypt/Encrypting_an_Entire_System#Preparing_the_logical_volumes example), followed by another sentence or two max. Remember the core of your scenario idea was to cut verbose in the scenario down as much as possible. A small comparison of the section scenarios may be suitable as the first subpage intro itself, anything more better be linked (pros/cons of disk layouts in Dm-crypt/Drive_Preparation#Partitioning, scenario specific pros/cons of encryption modes in Dm-crypt/Device_Encryption#Encryption_options_with_dm-crypt, general ones should be in: Disk_encryption#Comparison_table anyway, ..). --Indigo (talk) 23:16, 9 December 2013 (UTC) - I added intros on Dm-crypt/Encrypting_an_Entire_System and Dm-crypt/Encrypting_a_non-root_file_system. Is that similar to what you were going for? I really want to emphasize the importance of keeping these introductions concise. It's easy to write about use case after use case, but let's not forget why we refactored Dm-crypt with LUKS in the first place. Even Dm-crypt/Encrypting_an_Entire_System has gotten rather long and disorganized already, but that's a discussion for its own talk page. Dm-crypt has a nice layout so far, and it is definitely more readable than Dm-crypt with LUKS. --User:EscapedNull - @Indigo: I approve 100% what you said: a single, small comparison section is what we need! - @EscapedNull (please remember to sign your edits in talk pages, use ~~~~): I think you're referring to this edit on the dm-crypt page; honestly those intros, despite being very clear and well written, are indeed too long for that page, as you note yourself: the intended size of those intros was like the ones in Dm-crypt#Swap device encryption or Dm-crypt#Specialties, they should just sum up very briefly what's contained in each subpage. I wouldn't like to just throw your work away, I'd rather move it to a more suitable place in some of the existing subpages, what do you reckon? - Dm-crypt/Encrypting_an_Entire_System is still (not "already") "long and disorganized", if you read the discussions above you'll see that it's the result of merging some pre-existing articles, and our goal is indeed slimming it down by moving duplicated content to the other subpages. - -- Kynikos (talk) 15:13, 10 December 2013 (UTC) - Maybe they are a little too long for an introduction, but in comparison to what we were dealing with before, I'd say they're pretty succinct. I wrote them with the goal of educating the reader about the different scenarios enough to make a decision, but no more than that. If you'd like to strip them down further, however, I'm completely okay with that. - Re: Dm-crypt/Encrypting_an_Entire_System: You're right. 
Still long and disorganized is more accurate. I think there are some improvements still to be made (i.e. splitting sections into subpages even further), but I wasn't trying to criticize anyone's decisions about the merge. - About the scenario comparison: that's more or less what I was trying to accomplish with the introductions. Are you just suggesting that we follow the "advantages and disadvantages" bulleted lists format for all scenarios on Dm-crypt, or did you have different semantics in mind? Additionally, User:Indigo mentions adding a paragraph introducing the setup, and an ascii chart of the disk layout. I'd be strongly opposed to including any how-to or step-by-step information on the main Dm-crypt page. That's what the subpages are there for. The main page should serve to inform the reader about what options are available and what strengths and weaknesses each one has, not about the execution of those options. Besides that, I do feel that comparing and contrasting scenarios is highly beneficial, I'm just uncertain as to whether the introductions I wrote are what you had intended. - (Yes, I have been forgetting to sign my edits. That really should be automatic.) EscapedNull (talk) 20:39, 10 December 2013 (UTC) - Uh now I see where the confusion comes from: Indigo and I were discussing about Dm-crypt/Encrypting_an_Entire_System, but instead you understood we were talking about dm-crypt#Common scenarios, and that's why you've put the intros there :) Note the difference between "subpage", "section" and "subsection": subpages have a "/" in the article title, while sections and subsections are indicated by the link fragment ("#"). - >>EscapedNull: "The main page should serve to inform the reader about what options are available and what strengths and weaknesses each one has" - We have to be even stricter than that on the main page (dm-crypt): it should only "serve to inform the reader about what options are available"; the "what strengths and weaknesses each one has" part should be described in the subpage. - >>Indigo: "A small comparison of the section scenarios may be suitable as the first subpage intro itself" - He means that a unified comparison section should be put at the top of Dm-crypt/Encrypting_an_Entire_System instead of having comparisons at the start of each section of Dm-crypt/Encrypting_an_Entire_System, and that's what I agreed with. - -- Kynikos (talk) 05:48, 11 December 2013 (UTC) - Oops. We were talking about different things I guess. Since you moved the intros to the subpages, I see that does make a lot more sense now. I understand the difference between subpages and sections/subsections, but I guess I didn't read the discussion closely enough. - In addition, when User:Indigo said "comparison" I thought he or she meant Encrypting an Entire System versus Encrypting a Non-root Partition. You're saying the suggestion was to compare LUKS on LVM versus LVM on LUKS versus Plain dm-crypt without LUKS? EscapedNull (talk) 15:25, 11 December 2013 (UTC) - Yes, that was the suggestion he had. Just a small comparison comparing the scenarios (read: examples to employ dm-crypt for specific (not generic) setups) on that page, as Kynikos writes. Great you joined in. Let me add to your above discussion: In September we worked a bit on Disk_encryption to finalise it as the entry point comparing methods. References you wrote in [2] to ecryptfs et al are discussed there. 
If you feel you can add to it - cool, but generic encryption comparison and references leading away from the dm-crypt subpages are meant to belong there. --Indigo (talk) 21:13, 11 December 2013 (UTC) - +1 for merging those intros to Disk Encryption, e.g. Disk_Encryption#Data_encryption_vs_system_encryption deals with the same subject. -- Kynikos (talk) 08:48, 12 December 2013 (UTC)
https://wiki.archlinux.org/index.php?title=Talk:Dm-crypt&diff=293686&oldid=277159
CC-MAIN-2017-13
en
refinedweb
Opened 7 years ago Closed 7 years ago
#13029 closed (invalid): Exception Value is empty with a HTMLParser.HTMLParseError

Description
In the settings.DEBUG Traceback page, the Exception Value is empty if a HTMLParser.HTMLParseError was raised. e.g. put this in a view:

from HTMLParser import HTMLParseError
raise HTMLParseError("FooBar Error message", (1, 2))

Normally, the exception value should be: FooBar Error message, at line 1, column 3

Change History (3)

comment:1 Changed 7 years ago by
This appears to be a Python 2.6-only problem. 2.4, 2.5, and 2.7 (alpha 2) all display a non-empty Exception Value for these exceptions. For some reason I have not tracked down, on Python 2.6.4, unicode() applied to one of these exceptions produces an empty string. On earlier Pythons we don't attempt to apply unicode() to the exception since it doesn't have a __unicode__ attribute -- we display essentially unicode(str(e)) where e is the exception. On 2.7 alpha 2 some change has been made so that unicode(e) for one of these gets routed to the HTMLParseError __str__ override. Possibly whatever change in Python that did that will also appear in the next release of 2.6, but since I haven't tracked down what change in Python is responsible for the difference I can't say that for sure. I'm tempted to close this as invalid since it's really looking to me like a bug in Python, not Django. But these sorts of failures to display debug info are pretty annoying, so if there's something we could do to fix it in Django maybe we should. I'm just not sure what that would be.

comment:2 Changed 7 years ago by
I'll call this accepted on the basis of Karen's remarks.

comment:3 Changed 7 years ago by
I tried this with Python 2.6.5rc1 and the problem doesn't exist there either. Based on that, I don't think it's necessary for us to put a workaround for a Python bug into Django code for this. In a couple of weeks upgrading to the latest Python 2.6 will make the problem go away.
https://code.djangoproject.com/ticket/13029
CC-MAIN-2017-13
en
refinedweb
#include <deal.II/base/polynomial.h>

Legendre polynomials of arbitrary degree. When constructing a Legendre polynomial of degree p, the roots will be computed by the Gauss formula of the respective number of points, using a representation of the polynomial by its roots. Definition at line 389 of file polynomial.h.

Constructor for a polynomial of degree p. Definition at line 845 of file polynomial.cc.

Return a vector of Legendre polynomial objects of degrees zero through degree, which then spans the full space of polynomials up to the given degree. This function may be used to initialize the TensorProductPolynomials and PolynomialSpace classes. Definition at line 873 of file polynomial.cc.
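As background (this is standard material, not stated on the documentation page itself): the classical Legendre polynomials that give this class its name satisfy the three-term recurrence

\[ P_0(x) = 1, \qquad P_1(x) = x, \qquad (n+1)\,P_{n+1}(x) = (2n+1)\,x\,P_n(x) - n\,P_{n-1}(x), \]

so that, for example, $P_2(x) = \tfrac12(3x^2 - 1)$ and $P_3(x) = \tfrac12(5x^3 - 3x)$. The degree-$p$ polynomial has $p$ simple roots, which is what makes the representation by roots mentioned above possible. deal.II may scale or shift these to its reference interval, so treat this as general background rather than the library's exact convention.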
https://www.dealii.org/developer/doxygen/deal.II/classPolynomials_1_1Legendre.html
CC-MAIN-2017-13
en
refinedweb
Welcome to MIX! Following suit with many Microsoft web technologies, here's our product announcement….. It's been a long time coming – a year, in fact, since I presented a sneak peek of the Microsoft Virtual Earth Silverlight (VESL) Map Control at MIX 2008. Now, here we are a year later at MIX 2009 releasing the Microsoft Virtual Earth Silverlight Map Control as a CTP. We even got mentioned in Scott Guthrie's keynote (see pic below)! The bits will be available for download later this week. Over the coming days and weeks, I'll start getting more and more content and code samples out about how to leverage the Virtual Earth Silverlight Map Control, but for now here's a little overview of exactly what it's all about.

VESL changes the game when it comes to map control performance for loading tiles and rendering massive amounts of data onto a map. With Silverlight's multi-image scaling techniques (AKA Deep Zoom), Virtual Earth map tiles can be summoned from lower zoom levels while the current zoom transitions to that level, providing a fluid and engaging user experience. Here's a little video highlighting some of the features of multi-image scaling. Double-click to go full screen. Also, turn on speakers – thanks Paul!

So, I can't help but give you a sneak peek into how to add a Virtual Earth Silverlight map to your web site. First thing you'll do is download the control (a .dll) from Microsoft Connect (bits available later this week). Add the .dll to your Silverlight project as a resource. In your default XAML template, you'll add 2, count 'em, 2 lines of code to get a Virtual Earth Silverlight map into your site.

- Add a reference to the common language runtime namespace (Microsoft.VirtualEarth.MapControl) and the assembly of the same name:
  xmlns:m="clr-namespace:Microsoft.VirtualEarth.MapControl;assembly=Microsoft.VirtualEarth.MapControl"
- Add one line of XAML to your code in the grid element:
  <m:Map/>

That's it! You don't even need to touch the .NET code to get access to the Silverlight user experience, Deep Zoom, road map tiles, aerial imagery/photography, new navigation and zoom bar (yay!, zoom bar is back!). Can it BEEE any easier?

My presentation is Friday, March 20 @ 12:30 – 1:45. You can watch my session next week on the Live MIX Replay Site. If you're at MIX (OMG – we're SOOO gonna rage – meet me at the party at TAO!), I expect to see you there. I'm gonna try to "micro-blog" on Twitter (CP on Twitter) and I'm bringing the HD camera, so will try to upload raw video footage at the show. As with last year, I have a free collector's item for my presentation that will be THE must have item that everyone will be asking, "Where did you get that?!?!" At the conclusion of my session, the bits for the Virtual Earth Silverlight Map Control CTP will be made available on Microsoft Connect. I'll post another blog entry Friday to get you all the links you need. Get to MIX and party with me! CP

Neat. I wish the Live Search team would integrate this with Live Search Maps. Panning around has never been easier 🙂

So far so good on the "easy to use and integrate" front – this is the result of a 30 minute play so panning/scrolling are 'off' until I can sync everything up nicely; but it's definitely the CTP control in there…

Can this be used in a WPF application? Or do we have to wait for WPF 3.0 and its announced support for MSI controls?
https://blogs.msdn.microsoft.com/virtualearth/2009/03/18/introducing-the-virtual-earth-silverlight-map-control/
CC-MAIN-2017-13
en
refinedweb
Get-DAEntryPoint

Syntax

Detailed Description
The Get-DAEntryPoint cmdlet displays the settings for an entry point, including the entry point name, the global load balancing IP address, and a list of servers in the entry point.

-Name <String>
Specifies the name of the entry point for which information should be retrieved.

DAEntryPoint
The Microsoft.Management.Infrastructure.CimInstance object is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (#) provides the namespace and class name for the underlying WMI object. The DAEntryPoint object contains the following:
-- Entry point name.
-- Global load balancing server IP address.
-- List of servers in the entry point.

Examples
EXAMPLE 1
This example gets a specific entry point configuration named Entry Point 2.

Related topics
https://technet.microsoft.com/en-us/library/hh918439(v=wps.620).aspx
CC-MAIN-2017-13
en
refinedweb
How to import contacts to Thunderbird Many users would like import their contats from mobile phone to their Thunderbird. We prepared this easy instructions for you, how you can import contacts with PhoneCopy.com to your Thunderbird. List of support mobile phones find here Next help and advices are on page How to How to start? You need an PC with email client from Mozilla company, where you will have your imported contacts. import contacts to your Thunderbird When you sign in to your account on PhoneCopy.com go to folder "contacts" and click to „More actions" Choose "Export all to LDIF (Mozilla Thunderbird)" Save contacts and open your Thunderbird. Choose item in menu "Tools" and click to "Import" Choose Address Books in new window.. Next choose "Text file (LDIF, tab., scv., txt. )" and end your import with click to a button "finish", which is in next window Choose "Address book" and you see all contacts You will see your imported contacts Your contacts stay in your address book in PhoneCopy
https://www.phonecopy.com/en/pages/how_to_import_contacts_to_thunderbird
CC-MAIN-2017-13
en
refinedweb
A new set of Windows Azure enhancements were released today, one of which is support for ASP.Net Web API backend on Mobile Services. Prior to this update, all backend services were written in NodeJS which is a fantastic platform, but it can feel a bit alien to a lot of .Net developers to develop server-side code in JavaScript. This article is the first of a series I plan on writing on the new Web API backend feature, in a follow up to my book released last month on Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8. We’ll look at creating a new service using the new Web API backend, download the service template and see what’s in there. ASP.Net Web API Microsoft ASP.Net Web API is a framework for creating RESTful web services using the .Net Framework which can be hosted on a web server or self-hosted inside a process like a client application or Windows Service. Web API is similar to MVC where HTTP requests routed (Web API used convention-based routing where a URI is matched to a controller action and Web API 2 adds attribute routing) through to a controller which actions the request and returns a response. Creating a Web API Backend Mobile Service 1. If you haven’t got a Windows Azure Account, go and create one 2. In the portal, click on the Mobile Services tab down the left side, then click the ‘+ NEW’ button on the bottom toolbar. 3. Select COMPUTE | MOBILE SERVICE | CREATE: 4. Next, choose a name for your API, pick the subscription you want to use, the hosting region where the service will reside and most importantly select ‘.NET (PREVIEW)’ from the ‘BACKEND’ picker: 5. Now enter the details for the database server you want to use (I’m using a server I already have, if you haven’t got one yet, you will have the option to create one): 6. Finally we will see our newly created service listed in the ‘MOBILE SERVICES’ tab. Here you can see a Node.js and .NET (Web API) service listed: Backend Differences If we compare a Node.js and .Net services, we will see some differences due to the nature of these platforms: DATA and API Tabs First off, there are some tabs missing from the portal of the Web API service: We no longer have ‘DATA’ and ‘API’ tabs. In the Node.js (top) service, we can go into the ‘DATA’ tab, create tables and customise their REST API scripts; we can also go and create our own bespoke REST API scripts in the API tab. These customisations can be done in the portal directly with the JavaScript editor and become live immediately (they can also be pulled and modified locally using Git). Our new Web API doesn’t have these tabs as all these API customisations are done locally, compiled and then published. The big difference is the Node.js script is interpreted at runtime and the Web API code is compiled. Source Control If we look at the ‘CONFIGURE’ tab, we see ‘source control’ ‘dynamic schema’ and ‘cross-origin resource sharing (cors)’ is missing. Node.js services can use Git source control to manage service scripts, however this is not needed for Web API as we publish directly to the service from visual studio. Dynamic Schema This is a really nice feature of Node.js services; whereby you can create typed models in your apps, then as the services discover them through table APIs, it dynamically adjusts the database schema to match. 
Web API doesn’t do this because the controller methods are strongly-typed, so the models need pre-defining in the service; however, Entity Framework code-first migrations are used, which means the database schema can be adjusted to match these models (depending on the initialiser used). CORS Browser security stops pages making AJAX requests to hosts other than to its originating host. The Cross Origin Site Scripting (CORS) settings in the Node.js service allows you to add trusted domains to you service. Web API 2 allows you to control this in the code yourself. There’s a good example of doing this here:. Exploring the Template API From the Mobile Service portal under ‘CONNECT AN EXISTING WINDOWS STORE APP’ click ‘Download’: Unblock and unzip the file, then open the solution in Visual Studio 2013. The solution explorer should look something like this: DataObjects If we take a look at ‘TodoItem.cs’ we see we have two simple properties: using Microsoft.WindowsAzure.Mobile.Service; using Microsoft.WindowsAzure.Mobile.Service; namespace TileTapperWebAPIService.DataObjects { public class TodoItem : EntityData { public string Text { get; set; } public bool Complete { get; set; } } } You’ll notice though that it’s not just a POCO, it actually has an 'EntityData' base class which is an abstract implementation of ‘ITableData’ which enforces the default table requirements: public abstract class EntityData : ITableData { protected EntityData(); [DatabaseGenerated(DatabaseGeneratedOption.Identity)] [Index(IsClustered = true)] [TableColumn(TableColumnType.CreatedAt)] public DateTimeOffset? CreatedAt { get; set; } [TableColumn(TableColumnType.Deleted)] public bool Deleted { get; set; } [Key] [TableColumn(TableColumnType.Id)] public string Id { get; set; } [DatabaseGenerated(DatabaseGeneratedOption.Computed)] [TableColumn(TableColumnType.UpdatedAt)] public DateTimeOffset? UpdatedAt { get; set; } [TableColumn(TableColumnType.Version)] [Timestamp] public byte[] Version { get; set; } } The attributes on these properties help Entity framework build the table schema from the model. Database Context If we look at the ‘YourServiceWebAPIContext’ we see that it is a standard Entity Framework DbContext: namespace TileTapperWebAPIService.Models { public class TileTapperWebAPIContext : DbContext { // You can add custom code to this file. Changes will not be overwritten. // // If you want Entity Framework to alter your database // automatically whenever you change your model schema, please use data migrations. // For more information refer to the documentation: // private const string connectionStringName = "Name=MS_TableConnectionString"; public TileTapperWebAPIContext() : base(connectionStringName) { } // When using code first migrations, ensure you use this constructor // and you specify a schema, which is the same as your mobile service name. // You can do that by registering an instance of IDbContextFactory<T>. 
public TileTapperWebAPIContext(string schema) : base(connectionStringName) { Schema = schema; } public string Schema { get; set; } public DbSet<TodoItem> TodoItems { get; set; } protected override void OnModelCreating(DbModelBuilder modelBuilder) { if (Schema != null) { modelBuilder.HasDefaultSchema(Schema); } modelBuilder.Conventions.Add( new AttributeToColumnAnnotationConvention<TableColumnAttribute, string>( "ServiceTableColumn", (property, attributes) => attributes.Single().ColumnType.ToString())); } } } We have a ‘TodoItems’ property for accessing the ‘TodoItem’ table and an overridden ‘OnModelCreating’ method which tells EF how to build the model. The Controller The ‘TodoItemController’ has a set of HTTP methods which map onto database CRUD operations: namespace TileTapperWebAPIService.Controllers { public class TodoItemController : TableController<TodoItem> { protected override void Initialize(HttpControllerContext controllerContext) { base.Initialize(controllerContext); TileTapperWebAPIContext context = new TileTapperWebAPIContext(Services.Settings.Name.Replace('-', '_'));); } } } The controller implements the ‘TableController<T>’ base class which itself derived from TableController and APIController and enforces the methods required to make a Mobile Service Table API. The EntityDomainManager manages access to the EF database context. WebApiConfig This contains code to bootstrap the Web API service and Entity Framework. We can see by default database initialiser class derives from DropCreateDatabaseIfModelChanges<T> which is one to be careful of as your database will be dropped and recreated if the model changes, losing all your data! There are a number of other database initialisers available. Scheduled Jobs The ‘SampleJob’ class shows us how to create a job which can be run on demand or on a schedule. This class stands out from the other template code as it’s not a standard Web API project component, it does not derive from APIController, but can be called via a POST request: namespace TileTapperWebAPIService { // A simple scheduled job which can be invoked manually by submitting an HTTP // POST request to the path "/jobs/sample". public class SampleJob : ScheduledJob { public override Task ExecuteAsync() { Services.Log.Info("Hello from scheduled job!"); return Task.FromResult(true); } } } There is actually a 'JobsController' controller with a single POST method built into 'Microsoft.WindowsAzure.Mobile.Service.Controllers' which controls the jobs: Finally I think this is a really exciting addition to Windows Azure Mobile Services, it will greatly enhance the development experience for .Net developers by offering a familiar technology to build back-end services with. Because we now full control of the database using Entity Framework, we can easily create a relational database schema, which was previously unsupported.
http://geoffwebbercross.blogspot.co.uk/2014_02_01_archive.html
CC-MAIN-2017-13
en
refinedweb
Exploring the Code of the Survey Development Suite - Survey Repository - Survey Development Studio - PocketSurvey - Moving On Chapter 3: Exploring the Code of the Survey Development Suite At this point you're probably pretty eager to get into the code. We've covered the basics of what the Survey Development Suite does and how it can be used. We haven't covered how the application actually accomplishes all the things we've seen it doing. This chapter covers the following: Infrastructure services Error handling Survey Repository Survey Development Studio PocketSurvey Before taking a look at the application itself, we'll take a look at some of the core code that provides the basic foundation on which all the other code in the application is written. The Survey Development Suite is divided into three parts: Survey Repository, which functions as a Web service back end, Survey Development Studio, which is a Windows Forms application, and PocketSurvey, which is a Pocket PC application for conducting surveys in a mobile situation. Each of these three parts is a separate, fully functioning application that also communicates with other pieces of the software suite. In this regard, we consider the collection of the three applications to be a single, enterprise-class application suite. This chapter walks you through the process used to design the various pieces of the application and takes you on a tour of some of the most interesting highlights of the code that makes this application possible. To make things as easy to grasp as possible, we'll start at the back end, with Survey Repository, and then we'll cover the Survey Development Studio Windows application. We'll finish up with coverage of the Pocket PC application. This chapter takes a step-by-step approach to examining the code of the Survey Development Suite. People learn new technologies and techniques in very different ways. Some people prefer to be instructed without knowing anything about the new technology. Other people prefer to dive head-first into the code, gather a list of questions about the code, and then get more information. If the latter applies to you, you might want to open the Visual Studio .NET project that is on this book's CD and explore all the various projects within it. Spend an hour or so looking at the code and figuring out how everything fits together. When you're done, come back to this chapter and read it through from start to finish to fill in any gaps in your understanding of the code. Survey Repository As you know, Survey Repository is a Web service that provides a warehousing facility for survey profiles and survey runs. It is made up of two separate Web services (.asmx files): a login service and the repository service itself. This separation allows us to communicate with the login service via SSL and to keep the repository service communications clear for performance reasons. Take a look at the architectural diagram in Figure 3.1. Figure 3.1 The logical structure of the Survey Repository Web service. At the top level are the two service files Login.asmx and RepositoryService.asmx. These are the entry points into the Web service provided by Survey Repository. When these entry points are used, they in turn invoke any number of business classes represented by the second large box in Figure 3.1. These business components are used as an interface to the Object-Relational Mapping (ORM) mapper, which is contained in the lowest level, the infrastructure services. 
Infrastructure Services

Whenever I sit down to come up with a design for a new Web site, one of the first things I do is come up with a list of all the services that the pages are going to need. Invariably, I come up with the same set of services every time, as virtually every data-driven Web application requires the same things:

- Security
- Tracing, monitoring, and instrumentation
- Data access
- Application-specific services

If you build applications with an architecture-first model, you will find that not only will your applications be quicker and easier to code but you may be able to reuse a lot of your infrastructure code for the next application. For example, the data access services that I used for this application are an evolution of various data abstraction methods that I have been using for over two years. Each time I get to reuse the code, I find room for enhancements and improvements.

Code Tour: The SecurityServices Project

The SecurityServices project should be available within the SurveyV1 solution. The purpose of this library is to abstract access to the security system used by the application. When you do this, you are in a better position to grow or change the security model in the future without affecting the entire application. SecurityServices provides the classes listed in Table 3.1 (as well as some additional code that you'll see when we take a closer look).

Table 3.1 SAMS.Survey.SecurityServices Classes

Code Tour: The SecurityHelper Class

When designing a class that abstracts security tasks, you need to take the time to figure out exactly what tasks you will need to have performed and where those tasks should be performed. The following are two of the most interesting methods of the SecurityHelper class:

- SetIdentityContext: This method stores user information in the CallContext class, making it available to subsequent method calls.
- GetUserIdFromToken: This method takes a string that contains an authentication token and returns the corresponding user ID, if there is one.

For the moment, let's focus on the last method, GetUserIdFromToken. In order to understand what this method does, you need to understand how the security system works for the Web service. I needed a way to make sure that the password and username information remained secure, but I didn't want to incur the overhead of using SSL for every single transaction with the Web service. Knowing this, I couldn't very well pass the username and password with each and every request. I didn't want to enable session state because that could lead down a road from which I couldn't return: mostly because if someone left his or her Survey Development Studio application running overnight and then clicked somewhere the next morning, the request would fail with unpredictable results due to that user's session having expired. Although I could have used Microsoft Web Services Enhancements 2.0 to get access to some of the most robust security features that can be used within Web services, I didn't feel the need to use it. I wanted a very simple solution. The target audience for this application is a network site managed by a company that is involved in producing and conducting opinion surveys. For the most part, it didn't need all the extras included in Web Services Enhancements 2.0. The solution I ended up with was the concept of tokens. You may have seen this concept if you have looked at some of the early prototypes for the Favorites Service that was produced by Microsoft. It was a sample application produced for Cold Rooster Consulting.
You can find the documentation and a working demo of this sample at. A token, in our daily lives, is some piece of proof or evidence. In New York or Boston, a token might be proof that you are allowed to get onto the subway system. A token in the sense of a Web service is a piece of evidence that verifies that a given user is allowed to access the Web service. The way a token works is fairly simple, as illustrated in Figure 3.2. Figure 3.2 The login and token assignment process. As you can see in Figure 3.2, the client application first makes contact with the login Web service by providing a set of credentials that includes a username and password. If this were a more complex application, a client might be required to provide a CD or license key to prove that the client application itself is legitimate. After the credentials have been validated, a token (in this case, a GUID) is returned to the client. The client is then able to pass that token to the repository Web service to gain access to the various methods exposed by that service. If a call to that service doesn't contain a valid security token, the client performing the action receives an error. Behind the scenes, a lot is going on. First, when a set of credentials is received by Login.asmx (you will see the code for this later in this chapter), a call is made to the database to validate the username and password combination. If it is valid, a token (or GUID) is generated. That generated GUID is then stored in the ASP.NET application cache for some period (this application defaults to one hour), along with information on which user that GUID belongs to. When a request comes in to the repository Web service, a check is made against the ASP.NET application cache for the supplied token GUID. If the token exists in the cache, the call is allowed to proceed as normal. Otherwise, the call is rejected. Listing 3.1 shows the code that obtains the valid user ID by looking up the authentication token in the ASP.NET application cache. Listing 3.1 SurveyV1\SecurityServices\SecurityHelper.cs The GetUserIdFromToken and SetIdentityContext Methods public static int GetUserIdFromToken( string userToken ) { System.Web.Caching.Cache cache = System.Web.HttpContext.Current.Cache; if (cache != null) { string securityKey = "Security-" + userToken; if (cache[securityKey] == null) return -1; PermissionList pl = (PermissionList)cache[securityKey]; return pl.UserId; } else { return -1; } } public static void SetIdentityContext( int userId ) { IdentityContext ic = new IdentityContext(); ic.UserKey = userId.ToString(); // if we ever need to have more information about the user contained in the method // execution chain, we can just add it to the identity context. ic.DisplayName = userId.ToString(); CallContext.SetData("SAMS_SURVEY_IDENTITY", ic ); } You can see in Listing 3.1 that the key index used for the application cache contains the prefix Security. This is not completely necessary because GUIDs are guaranteed to be completely unique and never overlap any other code using the same cache. However, if someone is using an administrative tool to look at the contents of the application cache and that person sees hundreds of seemingly random GUIDs lying around, he or she might not know what they're for. With the method used in Listing 3.1, anyone examining the cache should immediately know the purpose of the GUIDs. Also worth noting is that we're not simply storing the user's ID in the cache. We're actually storing the list of that user's permissions. 
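Listing 3.1 shows only the consuming side of the cache. The producing side lives in Login.asmx, which you will see later in the chapter. As a rough sketch of what that half of the token scheme has to do, the login path looks something like the following; the method name and exact cache policy here are assumptions for illustration, not the code from the CD.

// Hypothetical sketch of issuing a token at login time. The real code is in
// Login.asmx; this just shows the kind of cache entry that GetUserIdFromToken
// expects to find.
public static string IssueToken(string userName, string password)
{
    // Validate the credentials through the User business object
    // (this ends up invoking the SVY_Validate_User stored procedure).
    User user = new User();
    user.UserName = userName;
    user.Password = password;
    if (user.Validate() == -1)
        return null; // invalid credentials, no token issued

    // Load the user's permissions so later calls can check them without
    // another database round trip.
    PermissionList permissions = new PermissionList();
    permissions.FetchPermissions(user.UserId);

    // Generate the token and cache the permission list under "Security-" + token.
    string token = Guid.NewGuid().ToString();
    System.Web.HttpContext.Current.Cache.Insert(
        "Security-" + token,
        permissions,
        null,                                   // no cache dependency
        DateTime.Now.AddHours(1),               // absolute expiration (default is one hour)
        System.Web.Caching.Cache.NoSlidingExpiration);

    return token;
}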
Whenever a user is authenticated against the system, a call is made that obtains all of the user's security privileges. Those privileges are then placed in the cache, to be associated with the authentication token. This enables every single method of any Web service within this AppDomain class to be able to know what a given user can and cannot do, without making additional database calls. Call Contexts In Listing 3.1, you might have noticed the SetIdentityContext method. This method creates an instance of the IdentityContext class and then places it in the call context with the following statement: CallContext.SetData("SAMS_SURVEY_IDENTITY", ic ); In order to get access to the CallContext class, you need to reference the System.Runtime.Remoting.Messaging namespace, which contains the code that makes call contexts work. What is a call context? You can think of it as a stack that sits on top of a chain of execution. The first time you invoke a method within the .NET Framework, a call context is created. This context is attached to, and available from, every subsequent method call made after the context is created. This allows the remoting infrastructure to pass additional information between process boundaries. However, it has a handy side effect of working on nonremote method execution as well. After you place data in a call context, the data becomes available to every method called thereafter, as well as to methods called by those methods, and so on throughout a deep level of recursion. To place data in a call context, you use the SetData method. This allows you to place an object of any data type into a named slot that is represented by a string. In the case of the code in Listing 3.1, the named slot is "SAMS_SURVEY_IDENTITY", but you are free to use any name you like. The only caveat is that you need to make sure there is a good chance that the name is unique. If your application is making use of another API that utilizes call contexts, the last thing you want to do is place your data in a slot expected by the API. To retrieve data from the call context, you use the GetData method. This method returns an instance of an object of varying data type. It is up to you, the programmer, to know ahead of time the type of data that you placed in the call context. Although using a call context can be handy, it can also have some serious drawbacks. The main drawback of call contexts is that their data is propagated each time a method is invoked. The more methods that are invoked, the more times data must be passed along the stack. If you rely too heavily on call contexts, you might end up degrading the performance of your applications. A good rule of thumb to use with call contexts is to use them only when you know that the information needs to be available to any method, and the information you are passing along the stack has a very small memory footprint, such as a single integer or a short string. Code Tour: The User Class The User class is a business object that serves as a container for user-related information. In addition to holding information about a given user, it provides various methods that are applicable to users, such as Validate, Create, Update, and Delete. This class makes use of the ORM tools contained within the data access layer (you will be seeing those later in this chapter). Listing 3.2 contains the User class. Before you see the code for the User class, take a look at Tables 3.2 and 3.3, which list the properties and methods of the class. 
User Class Methods User Class Properties Listing 3.2 SurveyV1\SecurityServices\User.cs The User Class using System; using System.Data; using SAMS.Survey.Core.MonitorServices; using SAMS.Survey.Core.ObjectRelationalDb; namespace SAMS.Survey.SecurityServices { public class User : IRelatable { private int userId; private string fullName; private string userName; private string password; public User() { } public int Validate() { SqlRelator sr = new SqlRelator(); sr.Relate( this, "Validate" ); return this.UserId; } public void Create() { SqlRelator sr = new SqlRelator(); sr.Relate( this, RelationType.Insert ); } public void Delete() { SqlRelator sr = new SqlRelator(); sr.Relate( this, RelationType.Delete ); } public void Update() { SqlRelator sr = new SqlRelator(); sr.Relate( this, RelationType.Update ); } } } Note that I've stripped from Listing 3.2 the code that contains the public property definitions for the private members, as it is just straightforward get and set accessors. A few things about the User class should stand out right away when you look at Listing 3.2. The first is that it implements a marker interface called IRelatable. This interface tells the ORM mapper that the class is eligible for interfacing with the database through an ORM. It is actually nothing more than an empty marker. While we could use an abstract base class or even a custom code attribute to perform such marking, the interface allows us to implement our own hierarchy if we chose while still maintaining the hierarchy. Also, testing for the implementation of an interface on a class instance is much faster than using reflection to query the list of custom attributes on a class. The other thing that stands out in Listing 3.2 is that the class makes no use of stored procedures. In fact, it has absolutely no built-in knowledge of how to persist itself. As I'll discuss later in this chapter, this is a key point in true object-ORM. Instead of invoking stored procedures directly, the object simply tells the object relator "relate me to the database, using this mapping." The mapping is indicated by the RelationType enumeration. Code Tour: The PermissionList Class Just like User, PermissionList is a business class that makes use of the ORM mapper to communicate with the database. Its specific purpose is to retrieve the list of permissions associated with a given user. Listing 3.3 contains the PermissionList class definition. 
Listing 3.3 SurveyV1\SecurityServices\PermissionList.cs The PermissionList Class using System; using System.Data; using System.Reflection; using SAMS.Survey.Core.MonitorServices; using SAMS.Survey.Core.ObjectRelationalDb; using Microsoft.ApplicationBlocks.ExceptionManagement; namespace SAMS.Survey.SecurityServices { [Serializable()] public class PermissionList : MarshalByRefObject, IRelatableSet { private DataSet internalData; private int userId; public PermissionList() { internalData = new DataSet(); } public DataSet ResultSet { get { return internalData; } set { internalData = value; } } public int UserId { get { return userId; } set { userId = value; } } public void FetchPermissions( int userId ) { this.userId = userId; SqlRelator sr = new SqlRelator(); sr.Relate( this, RelationType.Select ); SystemTrace.TraceVerbose("Selected user {0} permissions, Tables returned: {1}", userId, internalData.Tables.Count); } public bool HasPermission( int permissionId, int accessMode ) { SystemTrace.MethodStart( MethodBase.GetCurrentMethod() ); if (internalData.Tables.Count == 0) { ExceptionManager.Publish( new InvalidOperationException( "Cannot check for unfetched permissions.") ); } else { DataTable perms = internalData.Tables[0]; DataRow[] permission = perms.Select("PermissionId=" + permissionId.ToString()); if ((permission.Length ==0) || (permission == null)) { SystemTrace.TraceVerbose( "Permission check failed due to missing " + "permission {0}, total rows available : {1}", permissionId, perms.Rows.Count); return false; } else { return SecurityHelper.CheckAccess( (int)permission[0]["Access"], accessMode ); } } return false; } } } There is some code in the PermissionList class that you haven't yet seen. Some of the methods belong to the SystemTrace class that we'll be discussing in the next section of this chapter. Those methods are all about tracing and making the job of debugging the application easier. The FetchPermissions method works fairly simply. It relates the current instance of PermissionList to the database, using the Select ORM. This obtains all the permissions that the current user (indicated by the UserId property) has. The HasPermission method is a bit more complex than FetchPermissions. It uses the internalData object, which is a data set, to look up all the permissions available to the user. If one of those permissions is the permission indicated by the argument, then the user has that permission. There is a catch, however. Our system not only supports the notion of a yes/no type of permission, but it also supports the notion of access modes. For example, it is possible for a user to have a permission called Survey Profiles, but that user may only have the Read access mode. This person then has read-only access to the profiles contained within the repository. However, another person might have the same permission, but with a higher access level. With this system in place, administrators have the ability to fine-tune what each user can perform. Because our system is designed for role-based security, it is easy to manage as well as flexible. The MonitorServices Project The MonitorServices project is a project that contains classes that provide for unified tracing, easier debugging, and general monitoring-related utilities, such as an IdentityContext class. We will take a closer look at the MonitorServices project later in this chapter, when we take a tour of the unified tracing system in the application. 
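Before moving down into the data layer, it's worth seeing how the security pieces fit together from the point of view of a Web service method. The following sketch shows the kind of guard code that can sit at the top of a repository operation; the method name and the permission and access-mode constants are illustrative assumptions, and the real service code appears later in the chapter.

// Hypothetical guard at the top of a repository Web method. The
// SURVEY_PROFILES_PERMISSION and WRITE_ACCESS constants are placeholders.
[WebMethod]
public void CheckInProfile(string userToken, int profileId, string xmlSource)
{
    // Turn the token back into a user id via the cached permission list.
    int userId = SecurityHelper.GetUserIdFromToken(userToken);
    if (userId == -1)
        throw new SoapException("Invalid or expired token",
                                SoapException.ClientFaultCode);

    // Make the caller's identity available to everything further down the stack.
    SecurityHelper.SetIdentityContext(userId);

    // Check the cached permissions before doing any real work.
    PermissionList permissions = (PermissionList)
        HttpContext.Current.Cache["Security-" + userToken];
    if (!permissions.HasPermission(SURVEY_PROFILES_PERMISSION, WRITE_ACCESS))
        throw new SoapException("Access denied", SoapException.ClientFaultCode);

    // ... hand off to the business objects to perform the actual check-in ...
}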
The ObjectRelationalDb Project The ObjectRelationalDb project contains, as I'm sure you guessed, all the classes required to create ORM and to use these mappings to perform database operations in a seamless, transparent way that makes writing business objects a snap. We'll discuss this project in more detail in the code tour "A Look at the ORM Schema," after some discussion on the concepts surrounding ORM. Data Access with ORM In the following sections, we'll take a look at accessing data using an object-relational model. We'll compare this model to the standard procedural model for data access and talk about the benefits and drawbacks of using ORM. What Is ORM? When most programmers think about data access, they think about stored procedures, parameters, and SQL statements. They think about how to write code that wraps around a stored procedure or around SQL statements so that the tedium of accessing the database is taken away, leaving the programmers free to think about the overall business model of the application. One particular train of thought on the subject of data access deals with the idea of ORM. This concept, as illustrated in Figure 3.3, deals with the mapping of information contained in the world of classes, instances, and objects to information contained in the world of a relational database, consisting of tables, columns, and rows. Figure 3.3 The SurveyProfile typed data set. In its purest form, ORM implies that class instances are mapped to rows within database tables. Columns within those tables are mapped to public fields or properties on the class instance. When more than one row of data results from a query operation, the set of rows is then mapped into a collection of objects, and each object in the collection is an object that maps directly to one and only one row within the table. As with all good programming theories, with ORM there is often a balance between the pure theory behind the solution and the practicality of implementing the solution. Often, implementations of ORM make certain sacrifices in pure OOP design in order to achieve some gains in performance, usability, or flexibility. For example, an implementation from Microsoft that is part of a technical preview of a suite of tools called ObjectSpaces does an excellent job of mapping class instances to tables, columns, and rows. However, it only works with SQL Server Yukon (take a look at for an overview of Yukon and its impact on developers) and doesn't currently support stored procedures. This is one of the tradeoffs made in order to place an implementation as close to the pure theory of ORM as possible. Microsoft may indeed add stored procedure support for its ObjectSpaces library in the future; in this case, you'll be well versed in the concepts involved, having looked at the code contained in this section of the book. Why Use ORM? If there are inherent performance concerns with building a system that implements the pure vision of ORM, why should we bother using it? It has quite a few benefits, as described in the following sections. Provider Agnostic In a true implementation of ORM, it should be possible to create business objects that have absolutely no embedded information about how to persist themselves, other than the fact that they can be persisted. For example, a non-ORM business object might have a Delete method. This Delete method might create an instance of a Command object, invoke the command, and then return some result. 
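To see why that kind of coupling hurts, here is roughly what such a hard-wired Delete method tends to look like. The stored procedure name, parameter names, and private fields below are made up for illustration; they are an example of the anti-pattern being described, not code from the suite.

// A non-ORM business object that knows exactly how it is persisted.
public void Delete()
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("ACCT_Delete_Account", conn))
    {
        // The procedure name, parameter name, and type are all baked in here.
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@AccountId", SqlDbType.Int).Value = this.accountId;

        conn.Open();
        cmd.ExecuteNonQuery();
    }
}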
The problem with this is that if the database information changes, the Delete method could become completely invalid. If you decouple the business object from the means by which that object is persisted to some data source, the business objects can be freely versioned without worrying about the database. In addition, the database information can be changed easily without negatively affecting the entire collection of business objects in the application. Declarative Persistence Another of the incredible benefits of using an ORM model is that you can simply declare the mapping. Instead of writing 20 lines of code to create a set of command parameters, instantiate a connection, open the connection, and so on, you can simply declare, through some means, sufficient information to automatically map instance data to relational data. Some implementations use proprietary data formats (for example, J2EE's Container-Managed Persistence [CMP] uses meta-data files that are contained in .jar archives), and others use standard XML data to list the mapping information (for example, our implementation, Microsoft's ObjectSpaces implementation). Code Simplicity A side effect of storing all the persistence information in some meta-data mapping (XML, .jar file, and so on) is that the code required to actually perform a persistence operation is minimal. In general, the pattern is to create an instance of the mapper (or whatever tool you're using). When you have an instance of a mapper, you simply pass to the mapper the instance of the object you want to persist, along with some helper information, and the operation is performed for you. If you're an architect building a framework on which a team of junior programmers will be building an application, this kind of scenario can be a lifesaver. The simpler it is for your programmers to write fundamental, core code for the application, the less chance there is of bugs creeping up. Scalability If ORM is implemented properly, you can actually change everything about your back-end database without ever having to recompile your business or data-tier objects. You might be thinking, "I never change columns or tables after I release a product." That might be true, but you're in the minority. The reality is that things change. You might upgrade your database server from Oracle 8 to Oracle 9, from SQL 7 to SQL 2000. This upgrade might cause some subtle change that breaks one of your stored procedure invocations. If all you have to do is modify an XML file or just recompile the assembly that contains the affected object, your life will be a lot easier. The implementation of ORM that I've gone with for this book is a little bit different than the pure concept of what ORM is. Instead of mapping instance fields to table columns, I've decided to map instance fields to stored procedure parameters. This supports my ORM implementation as the code looks like any other ORM implementation, and I can still use stored procedures to give the application the most performance and flexibility possible. ORM Versus CMP Aside from being different acronyms, what exactly do ORM and CMP mean, and what is the difference between the two? ORM is pretty much exactly what it sounds like: You have an instance of an object, and the database access is performed by relating individual pieces of that object to the database in some fashion. Some implementations, such as the one used in this book, relate public class members to stored procedure parameters. 
Other implementations, such as Microsoft's ObjectSpaces, relate individual objects and their public members to SQL statements that are then executed on the database. CMP differs from ORM in some minor ways. The concept of CMP involves an object instance and a container. The container is an abstraction of the underlying physical data storage medium. This container could be an abstraction of a relational database, but it could also be an abstraction of an XML file, a folder containing multiple files on disk, a Microsoft message queue, or even a Web service. The two concepts ORM and CMP both have the same core idea: that the business or data object that is being persisted or related has no direct link to the underlying storage medium. The object instance doesn't know if it is going to be stored in a database, stored in a file on disk, or transmitted via SOAP to a Web service. Both CMP and ORM rely on this concept as a foundational aspect of their respective design patterns. Where the two ideas begin to diverge is in the concept of how communication with the data source takes place. The traditional ORM model maps a single instance of an object to stored procedure parameters or to a SQL statement that is then executed. With CMP, the "container" model is more prevalent; an object instance is placed into a container, and that's all that the programmer ever sees. The act of inserting an object into a container triggers some functionality on the container that will determine what kind of persistence operation to perform. The data contained on the object combined with meta-data stored somewhere provides information about how to complete the persistence operation. In reality, there are almost no pure implementations of either ORM or CMP. The Java implementation of CMP requires that the meta-data for persistence operations be stored in a .jar file on the server. Microsoft's ObjectSpaces uses attributes and meta-data to convert an object instance into a SQL statement that can then be executed against the database server. The implementation of ORM in this book uses XML meta-data stored embedded in assemblies; this meta-data is used to relate public member data to stored procedure parameters to interact with the database. Code Tour: A Look at the ORM Schema I've experimented with quite a few different variations on CMP and ORM. A previous version of CMP that I used had the mapping data stored in an XML file on disk. This file was opened upon application startup and was used to build an in-memory cache of mapping data. This cache was then used to dynamically create stored procedures, as needed by the application. The problem I found with this approach is that the single XML file could get extremely large, especially when I had dozens of different assemblies all using this file for their own persistence information. To make things easier to organize, I experimented with using one XML file per assembly. This made things easier to read, but I ended up with a stack of XML files sitting in my Web application's root directory. I didn't feel comfortable with the plain-text files sitting in the application directory. The version I've implemented for this book actually embeds the ORM XML file directly in the assembly as a resource. This resource is then read via reflection and used to create the appropriate stored procedure whenever the ORM mapper is invoked. Listing 3.4 contains a sample of an ORM that exists in the Survey Development Suite. 
Listing 3.4 SurveyV1\SecurityServices\ORM.xml The SecurityServices Assembly's ORM.xml File <relationalmapping> <type fullname="SAMS.Survey.SecurityServices.User"> <commandmap storedproc="SVY_Validate_User" multiple="false" type="Validate"> <propertymap member="UserName" dbtype="Varchar" dbsize="8" parameter="@UserName" direction="Input"></propertymap> <propertymap member="Password" dbtype="Varchar" dbsize="8" parameter="@Password" direction="Input"></propertymap> <propertymap member="UserId" dbtype="Int" dbsize="4" parameter="@UserId" direction="Output"></propertymap> <propertymap member="FullName" dbtype="Varchar" dbsize="40" parameter="@FullName" direction="Output"></propertymap> </commandmap> </type> </relationalmapping> The first important element here is the type element. This element is the root of a single ORM. It begins the mapping from the instance of a .NET Framework type to multiple stored procedures. For each type, there can be an unlimited number of stored procedures to invoke. By default, the system has an enumeration for the four CRUD (create, retrieve, update, delete) operations: Select, Insert, Update, and Delete. I've named them Select, Insert, Update, and Delete because these names closely resemble the SQL statements that represent the kinds of operation they perform. The mapping between a .NET Framework type (which can be any type that implements either IRelatable or any interface that inherits from it) and a stored procedure is defined by the <commandmap> element. The type attribute on the <commandmap> element indicates the kind of relational operation. It can be something custom, as in the preceding Validate command mapping, or it can be one of the pre-recognized keywords, such as Select or Update. Beneath the <commandmap> element is the <propertymap> element. This element declares a mapping between a particular field on the given type and a parameter on the stored procedure. The ORM system developed for this book supports both input and output parameters, but the property or field on the object instance must be public; otherwise, the attempt to reflect data from that property will fail and cause undesirable results when communicating with the database. Code Tour: The ObjectRelator Class The ObjectRelator class is an abstract class that provides the basic framework for building your own ObjectRelator class. It defines two methods: public virtual void Relate( IRelatable relatee, RelationType relationType ) public virtual void Relate( IRelatable relatee, string relationKey ) These two overloads provide the basis for all ORM in the entire system. Any class that wishes to be an ObjectRelator class must implement these two methods. The methods allow us to relate an instance object to the database either using one of the four CRUD operations or through some custom-defined operation that corresponds to the type attribute on the <commandMap> element in the ORM.xml embedded resource. The abstract base class ObjectRelator is key to implementing a provider-agnostic implementation of ORM. Code Tour: The SqlRelator Class SqlRelator is an implementation of the abstract class ObjectRelator. Although the implementation I wrote is specific to Microsoft SQL Server, the infrastructure doesn't limit the data support to just SQL. With very little extra code, SqlRelator could be adapted to an OracleRelator class (although some specific code regarding CLOBs would have to be written). 
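As a rough sketch of what that adaptation might look like, an OracleRelator would override the same two Relate methods and swap the ADO.NET provider types. This class does not exist in the suite; it is only here to show where the provider-specific code would live, and the enum-to-key translation shown is an assumption about how the two overloads relate.

// Hypothetical provider swap: the business objects and the ORM.xml mappings
// stay exactly the same; only the relator implementation changes.
public class OracleRelator : ObjectRelator
{
    public override void Relate(IRelatable relatee, RelationType relationType)
    {
        // Translate the enum into the mapping key ("Select", "Insert", ...).
        Relate(relatee, relationType.ToString());
    }

    public override void Relate(IRelatable relatee, string relationKey)
    {
        // 1. Load the ORMTypeMapping from the embedded ORM.xml, as SqlRelator does.
        // 2. Build an OracleCommand (instead of a SqlCommand) from the command map,
        //    handling Oracle-specific details such as CLOB parameters.
        // 3. Execute the command, fill the result set for IRelatableSet instances,
        //    and copy output parameters back onto the object's public properties.
    }
}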
The code for SqlRelator is arguably some of the most complex code in the entire Survey Development Suite, mostly because of its heavy reliance on reflection. If you haven't used reflection before or aren't all that familiar with it, you might want to take a moment to brush up on the basics of reflection with the tutorials at, or you can consult the MSDN documentation at. Listing 3.5 shows a protected helper method that is provided by the ObjectRelator class. This method is responsible for fetching a type map from the embedded ORM.xml file in a given type's assembly. Listing 3.5 SurveyV1\ObjectRelationalDb\ObjectRelator.cs The FetchTypeMap Method private ORMTypeMapping FetchTypeMap( IRelatable relatee ) { SystemTrace.MethodStart( MethodBase.GetCurrentMethod() ); Type t = relatee.GetType(); Assembly sourceAssembly = t.Assembly; string resourceName = t.Namespace + ".ORM.xml"; XmlDocument xDoc = new XmlDocument(); StreamReader xmlRaw = new StreamReader( sourceAssembly.GetManifestResourceStream( resourceName ) ); xDoc.Load( xmlRaw ); string query = "//type[@fullname='" + t.FullName + "']"; XmlNode typeMapNode = xDoc.DocumentElement.SelectSingleNode( query ); if (typeMapNode != null ) { ORMTypeMapping typeMap = new ORMTypeMapping( typeMapNode ); return typeMap; } else { SystemTrace.TraceError("Failed to load type map for {0}", t.FullName); ExceptionManager.Publish(new NullReferenceException("Unable to fetch type map for " + t.FullName)); } return null; } There are a couple interesting tricks going on here with reflection. Listing 3.5 all hinges on the basic fact that any given type within the .NET Framework knows from which assembly it was loaded. We use that information to get a handle on that assembly. With that, we can obtain resource streams from that assembly. The particular resource stream we're looking for is the ORM.xml file that is (we're hoping) embedded in the assembly. When we have an XmlDocument instance, created from the ORM.xml file that we loaded from the assembly, we can look for a type mapping that matches the name of the type passed to this function. Finally, when we have the XmlElement element that contains the entire type mapping, we pass that as a constructor argument to the ORMTypeMapping class and return the newly constructed instance to the Relate method, which is shown in Listing 3.6. Listing 3.6 SurveyV1\ObjectRelationalDb\SqlRelator.cs SqlRelator's Relate Method public override void Relate( IRelatable relatee, string relationKey ) { SystemTrace.MethodStart( MethodBase.GetCurrentMethod() ); ORMTypeMapping typeMap = FetchTypeMap( relatee ); ORMCommandMap cmdMap = typeMap.GetMapByName( relationKey ); SqlCommand cmd = BuildCommandFromTypeMap( relatee, typeMap , relationKey ); conn.Open(); if (cmdMap.ReturnsMultiple) { SqlDataAdapter da = new SqlDataAdapter( cmd ); IRelatableSet relateSet = (IRelatableSet)relatee; da.Fill( relateSet.ResultSet ); } else { cmd.ExecuteNonQuery(); } AssignOutputValuesToObject( cmd, relatee, typeMap, relationKey ); conn.Close(); } This method should be fairly easy to follow. The first thing it does is try to retrieve a type mapping for the object it is trying to relate via the FetchTypeMap method. When the map has been retrieved, we can then use the BuildCommandFromTypeMap method to create an instance of the SqlCommand class from the ORM data. Listing 3.7 shows the remainder of the methods for the SqlRelator implementation. 
Listing 3.7 SurveyV1\ObjectRelationalDb\SqlRelator.cs The AssignOutputValuesToObject Method and Other Helper Methods private void AssignOutputValuesToObject( SqlCommand cmd, IRelatable relatee, ORMTypeMapping typeMap, string relationKey ) { SystemTrace.MethodStart( MethodBase.GetCurrentMethod() ); ORMCommandMap ocm = typeMap.GetMapByName( relationKey ); foreach (object ob in ocm.PropertyMaps) { ORMPropertyMap propMap = (ORMPropertyMap)ob; if (( propMap.DataDirection == ParameterDirection.Output) || ( propMap.DataDirection == ParameterDirection.InputOutput ) ) { PropertyInfo prop; Type t = relatee.GetType(); prop = t.GetProperty( propMap.MemberName ); if (prop != null) { if ( cmd.Parameters[ propMap.Parameter ].Value != DBNull.Value) { prop.SetValue( relatee, cmd.Parameters[ propMap.Parameter ].Value, null ); } } else { ExceptionManager.Publish( new NullReferenceException( "Missing member " + t.FullName + "." + propMap.MemberName) ); } } } } private SqlCommand BuildCommandFromTypeMap( IRelatable relatee, ORMTypeMapping typeMap,); } } In the first method in Listing 3.7, we see some code that maps the output parameters from a SQL stored procedure onto object instance properties. This enables us to place values that will be used as input to a stored procedure on an object instance, and we can store output and return values from the stored procedure on the same object instance. For example, to validate a user, we might want to pass the username and password, invoke the stored procedure, and then have a user ID on the same object instance populated when the stored procedure has completed. The BuildCommandFromTypeMap method is a helper method that takes as input an ORMTypeMapping instance, a string indicating the type of relation being performed, and a reference to a relatable object (that is, an object implementing IRelatable). Similarly, the CreateParameterFromPropertyMap method helps out by taking an ORMPropertyMap instance and returning a complete and instantiated SqlParameter instance. SetParameterValue makes use of the PropertyInfo reflection class in order to set the value for a specific parameter on a given IRelatable instance. The PropertyInfo Class and Reflection The ability for the Survey Repository application to relate live, in-memory instances of objects to the database hinges on the fact that the .NET Framework allows you not only to write code that inspect data types at runtime but to write code that can examine various members of an object at runtime. Most of this work would not be possible without the use of the PropertyInfo class. The reflection process uses this class to obtain information about a particular class member. Not only can it query information about a class member, but it can be used to get and set the value of that member. This allows code to dynamically query and set properties at runtime. This dynamic query and set behavior allows the ObjectRelator class (and of course the SqlRelator class) to transfer information back and forth between the database and a class instance. Table 3.4 illustrates some of the properties of the PropertyInfo class. PropertyInfo Class Methods Table 3.5 lists some of the methods on the PropertyInfo class. PropertyInfo Class Methods As you can see, the PropertyInfo class provides a wealth of power and functionality for dealing with live, runtime information about a data type, its members, and the values of those members as they exist on object instances. 
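Outside the context of the relator classes, the pattern is easy to see in isolation. A minimal example of reading and writing a property through reflection, using the User class from earlier in the chapter, looks like this (the property value is arbitrary):

// Reflectively get and set the public UserName property of a User instance.
// This is the same GetProperty/SetValue/GetValue pattern the relator uses to
// move data between stored procedure parameters and object instances.
User user = new User();

PropertyInfo prop = typeof(User).GetProperty("UserName");
prop.SetValue(user, "jdoe", null);                 // null = no indexer arguments

string current = (string)prop.GetValue(user, null);
Console.WriteLine("UserName is now: {0}", current);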
Error Handling Survey Repository makes use of the Microsoft application block for exception management (you can find reference material at). This is one of the published recommended practices from Microsoft. Microsoft also has application building blocks for other common tasks, including data access. Even though the Microsoft application block is part of the solution that comes on this book's CD, the solution is still pointing to the Program Files directory for the application block. In other words, you need to have the Microsoft application block for the .NET Framework installed on your PC before you compile the application. I chose to use the Microsoft application block because with it, the method of throwing exceptions becomes completely decoupled from the method of storing the information contained in those exceptions. For example, if you were to write a standard try/catch block without the aid of an application block, it might look something like this: try { // perform some code that might fail } catch (Exception ex) { // do something with the exception } Although this might look elegant at first glance, it can become a maintenance nightmare. What happens if you want to store exceptions in a database? What do you do if you want to email the contents of certain high-priority failures (such as database failures) to a system administrator? Another possibility might even be to publish the contents of an exception to a system administrator's cellular phone via SMS messaging. We can use the Microsoft application block for publishing exceptions, as in the following example: try { // perform some code that might fail } catch (Exception ex) { ExceptionManager.Publish( ex ); } In this example, we simply call ExceptionManager.Publish. What information gets published and to where it gets published is all contained within the Web.config file (or an app.config or a machine.config file). The big savings here is in maintenance. Let's say you've written 10,000 lines of code for a Web application back end. You then decide that instead of writing all your trapped exceptions to the Windows event log, you want to write them to a database and email certain types of trapped exceptions to a system administrator. Instead of having to sift through all 10,000 lines of code and paste new code into every single location, all you have to do is modify the application's XML configuration file to add a new exception publisher. Survey Repository, as it comes on the CD that accompanies this book, doesn't make use of custom publishers. Out of the box, it actually doesn't set any of the application block's configuration parameters. Later on we'll tweak various settings with the application to see how they affect things. Code Tour: Unified Tracing with the SystemTrace Class Unified tracing is a concept that, until recently, most programmers didn't recognize the need for. If you're writing a Windows Forms application or a console application, you are probably familiar with the System.Diagnostics.Trace class. This class is used to write trace messages. You can store trace messages in text files if you like, or you can simply watch those messages appear in the output window while Visual Studio .NET is debugging the application. Those of you who have built and debugged ASP.NET applications know that there is a Trace class available to ASP.NET pages. The problem is that this class is not the same as the Trace class available to Windows applications, class libraries, and console applications. 
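The two APIs aren't even called the same way. In a quick sketch (the message text is arbitrary), the same trace statement has to be written differently in each environment:

// In a Windows Forms application, console application, or class library:
System.Diagnostics.Trace.WriteLine("Saving survey profile 42");

// In an ASP.NET page's code-behind, where Trace is a System.Web.TraceContext:
this.Trace.Write("Saving survey profile 42");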
In the MonitorServices project, there is a class called SystemTrace. This class creates a wrapper around the common task of writing messages to a trace log. This wrapper not only writes messages to the trace log provided by System.Diagnostics.Trace, but it writes trace messages to a System.Web.TraceContext class instance. This is the class that makes possible all the additional information you can see at the bottom of ASP.NET pages when tracing page output is enabled. Aside from a few overloads, all the code for SystemTrace eventually boils down to the code in Listing 3.8. Listing 3.8 SurveyV1\MonitorServices\SystemTrace.cs The Trace Method on the SystemTrace Class [Conditional("TRACE")] public static void Trace( TraceLevel messageLevel, string message, params Object[] paramData ) { if (messageLevel <= traceSwitch.Level) { message = ( message == null ? string.Empty : message ); message = MonitorUtilities.SafeFormat( message, paramData ); try { IdentityContext ic = (IdentityContext)CallContext.GetData("SAMS_SURVEY_IDENTITY"); string userId = (ic == null ? "--No User Context--" : ic.UserKey ); string userMessage = MonitorUtilities.SafeFormat("[{0}]{1}", userId, message ); System.Diagnostics.Trace.WriteLine( userMessage ); System.Web.HttpContext webContext = System.Web.HttpContext.Current; if (webContext != null ) { if ( messageLevel == TraceLevel.Error ) { webContext.Trace.Warn( userMessage ); } else webContext.Trace.Write( userMessage ); } } catch { // exceptions that occur during tracing should be absorbed to // avoid creating an infinite loop trying to trace an error // that occurs during tracing. } } } Despite the small amount of code in Listing 3.8, there is actually a fair bit of technology being employed here. The first thing you might notice is the use of TraceLevel. It is an enumeration that is defined by the .NET Framework that can be controlled by XML tags in an application's configuration file through the use of trace switches. A trace switch is aNET;trace switches> piece of functionality that the .NET Framework provides to all applications. To define a trace switch, you simply create a small subsection of an application's configuration file, such as the following: <system.diagnostics> <switches> <add name="SystemTrace" value="4"></add> </switches> </system.diagnostics> This code looks fairly simple. In the system.diagnostics section, you make use of the switches section. Within the switches section, a standard name/value pair collection is defined. In this case, you define a key called SystemTrace and a value. The key is arbitrary and completely up to the programmer. You can define multiple switches within a single application for multiple purposes if you like, or you can do as I've done here and create a single, central value that indicates the trace value. The trace value itself can be any of four different values, each corresponding to one of the TraceLevel enumeration values, which are listed in Table 3.6. TraceLevel Enumeration Values Another piece of the code that might stand out is the use of a class called IdentityContext. I wrote this class as part of the MonitorServices project. It is a serializable class that implements the ILogicalThreadAffinitive interface. Its sole purpose is to simply store information while being passed along inside the call context. If you're not familiar with call contexts, you might want to check out some samples that deal with remoting. In essence, you can think of a call context as a portable stack. 
The items in the stack are popped off each time a method is invoked, making those items available to the method body. When another method is called, that same stack is passed to the called method. In other words, by placing information in a call context, you guarantee that information will be available to all subsequent method calls, regardless of how nested those method calls are, or where those calls go, even across remoting and process boundaries. The use of call contexts does have some drawbacks. Information passed in a stack to each method call can be fairly expensive. To keep your application performing optimally and still take advantage of call contexts, the data you pass along on a context should be as small as possible. In the case of the Survey Repository application, we are passing an IdentityContext instance on the call context. This class simply contains the authorization token supplied by the user and the user's real name. By passing this on the call context, we can assure that anywhere in the back end of the application, our code knows who is invoking that code. If something goes wrong and we need to trace information about an exception, we can also trace information about which user invoked the method that had a problem. This becomes an invaluable troubleshooting tool. In Listing 3.8, once the code has built a suitable string to be traced, making use of an IdentityContext instance, if one is available, it performs the actual trace. This trace is performed by writing to the System.Diagnostics.Trace class as well as the System.Web.TraceContext class. By sending the text to both of those classes, we can be sure that all our trace messages will show up in anything that makes use of a trace writer, as well as on the output of an ASP.NET page if tracing is enabled. Without unified tracing, the tracing details on the ASP.NET page output are limited to only those events that take place within the code-behind class itself. By using unified tracing, code in the back end, as low as the database layer itself, can write information that will appear on the output of the ASP.NET page trace. The Survey Repository Database The Survey Repository database is pretty simple as far as databases go. Its sole purpose is to store and version survey profiles, survey runs, users, and associated user security settings, such as roles and permissions. One important thing to keep in mind is that we are not actually storing any information about the survey profiles or the runs themselves. The database doesn't store the list of questions or the list of respondents. Instead, the database simply stores the XML serialization of the typed data sets, along with some indexing and version information to make retrieval and browsing easier. Stored Procedures in Survey Repository Table 3.7 lists the stored procedures that have been developed to support the Survey Repository Web services. Stored Procedures in the Survey Repository SQL 2000 Database Tables in Survey Repository Table 3.8 briefly summarizes the tables contained within the Survey Repository database and the purpose of each. You'll see for yourself how these tables are used throughout the book, as you spend more time working with the application code. Tables in the Survey Repository Database Listing 3.9 shows some of the most interesting stored procedures found in the database, in no particular order. You will be seeing more of these and learning more about their purposes and functions later. 
To keep the SQL as portable as possible, there are very few fancy tricks in the stored procedures and there is very little, if any, SQL Serverspecific code. Listing 3.9 Selected Stored Procedures in the Survey Repository Database CREATE PROCEDURE SVY_Validate_User @UserName varchar(8), @Password varchar(8), @UserId int output, @FullName varchar(40) output AS SELECT @UserId = UserId, @FullName = FullName FROM SVY_Users WHERE UserName = @UserName AND Password = @Password IF @UserId IS NULL BEGIN SET @UserId = -1 SET @FullName = 'Invalid User' END GO CREATE PROCEDURE SVY_GetProfileHistory @ProfileId int AS SELECT h.ProfileID, h.RevisionId, h.RevisedBy, h.RevisedOn, h.XMLSource, h.RevisionComment, u.UserName, u.FullName FROM SVY_ProfileHistory h INNER JOIN SVY_Users u ON h.RevisedBy = u.UserId ORDER BY h.RevisionId ASC GO CREATE PROCEDURE SVY_Get_AllRevisions AS SELECT p.ProfileId, p.CreatedBy, p.CreatedOn, p.State, p.CheckedOutBy, p.CheckedOutOn, p.ShortDescription, u.FullName as CreatedByName, u2.FullName as CheckedOutByName, ph.RevisionId, ph.RevisedBy, ph.RevisedOn, ph.RevisionComment FROM SVY_SurveyProfiles p INNER JOIN SVY_Users u ON p.CreatedBy = u.UserId INNER JOIN SVY_ProfileHistory ph ON p.ProfileId = ph.ProfileId LEFT JOIN SVY_Users u2 ON p.CheckedOutBy = u2.UserId ORDER BY p.ShortDescription ASC GO CREATE PROCEDURE SVY_CreateProfile @ShortDescription varchar(50), @LongDescription varchar(4000), @CreatedBy int, @Private bit, @XMLSource text, @ProfileId int output AS BEGIN TRANSACTION INSERT INTO SVY_SurveyProfiles(ShortDescription, LongDescription, CreatedBy, Private, CreatedOn, State) VALUES(@ShortDescription, @LongDescription, @CreatedBy, @Private, getdate(), 0) SET @ProfileId = @@IDENTITY INSERT INTO SVY_ProfileHistory(ProfileId, RevisionId, RevisedBy, RevisedOn, RevisionComment, XMLSource) VALUES(@ProfileId, 1, @CreatedBy, getdate(), 'Created', @XMLSource) COMMIT TRANSACTION GO CREATE PROCEDURE SVY_Create_User @UserName varchar(8), @Password varchar(8), @FullName varchar(40), @UserId int output AS INSERT INTO SVY_Users(UserName, Password, FullName) VALUES(@UserName, @Password, @FullName) SET @UserId = @@IDENTITY GO As you can see, there really isn't anything particularly complex going on in the stored procedures. The code uses a lot of joins and filters. The only transaction that Survey Repository makes use of is a transaction used when creating a new survey profile. When the stored procedure to create a new profile is called, it creates a new profile as well as the first revision of that profile. Other than that, the stored procedures all perform pretty basic INSERT, UPDATE, DELETE, and SELECT operations.
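To close the loop on how these procedures are reached from managed code: SVY_Validate_User, for example, is never called directly. It is invoked through the Validate command map in ORM.xml when User.Validate() runs. The short usage sketch below ties the pieces together; the credential values are made up.

// The caller never sees a SqlCommand or a stored procedure name; it just
// works with the business object. The ORM layer turns this into a call to
// SVY_Validate_User using the "Validate" command map shown in Listing 3.4.
User user = new User();
user.UserName = "jdoe";       // maps to @UserName
user.Password = "secret";     // maps to @Password

int userId = user.Validate(); // output parameters populate UserId and FullName

if (userId == -1)
{
    // SVY_Validate_User sets @UserId to -1 and @FullName to 'Invalid User'
    // when the credentials don't match a row in SVY_Users.
}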
http://www.informit.com/articles/article.aspx?p=174096&amp;seqNum=4
CC-MAIN-2017-13
en
refinedweb
java.lang.Object org.ref_send.promise.Eventualorg.ref_send.promise.Eventual public class Eventual The eventual operator. This class decorates an event loop with methods implementing the core eventual control flow statements needed for defensive programming. The primary aim of these new control flow statements is preventing plan interference. The implementation of a public method can be thought of as a plan in which an object makes a series of state changes based on a list of invocation arguments and the object's own current state. As part of executing this plan, the object may need to notify other objects of the changes in progress. These other objects may have their own plans to execute, based on this notification. Plan interference occurs when execution of these other plans interferes with execution of the original plan. Interleaving plan execution is vulnerable to many kinds of interference. Each kind of interference is explained below, using the following example code: public final class Account { private int balance; private final ArrayList<Receiver<Integer>> observers; Account(final int initial) { balance = initial; observers = new ArrayList<Receiver<Integer>>(); } public void observe(final Receiver<Integer> observer) { if (null == observer) { throw new NullPointerException(); } observers.add(observer); } public int getBalance() { return balance; } public void setBalance(final int newBalance) { balance = newBalance; for (final Receiver<Integer> observer : observers) { observer.apply(newBalance); } } } A method can terminate execution of its plan by throwing an exception. The plan may be terminated because it would violate one of the object's invariants or because the request is malformed. Unfortunately, throwing an exception may terminate not just the current plan, but also any other currently executing plans. For example, if one of the observers throws a RuntimeException from its apply() implementation, the remaining observers are not notified of the new account balance. These other observers may then continue operating using stale data about the account balance. When a method implementation invokes a method on another object, it temporarily suspends progress on its own plan to let the called method execute its plan. When the called method returns, the calling method resumes its own plan where it left off. Unfortunately, the called method may have changed the application state in such a way that resuming the original plan no longer makes sense. For example, if one of the observers invokes setBalance() in its apply() implementation, the remaining observers will first be notified of the balance after the update, and then be notified of the balance before the update. Again, these other observers may then continue operating using stale data about the account balance. A called method may also initiate an unanticipated state transition in the calling object, while the current transition is still incomplete. For example, in the default state, an Account is always ready to accept a new observer; however, this constraint is temporarily not met when the observer list is being iterated over. An observer could catch the Account in this transitional state by invoking observe() in its apply() implementation. As a result, a ConcurrentModificationException will be thrown when iteration over the observer list resumes. Again, this exception prevents notification of the remaining observers. 
The above plan interference problems are only possible because execution of one plan is interleaved with execution of another. Interleaving plan execution can be prevented by scheduling other plans for future execution, instead of allowing them to preempt execution of the current plan. This class provides control flow statements for scheduling future execution and receiving its results. Since the control flow statements defined by this class schedule future execution, instead of immediate execution, they behave differently from the native control flow constructs in the Java language. To make the difference between eventual and immediate execution readily recognized by programmers when scanning code, some naming conventions are proposed. By convention, an instance of Eventual is held in a variable named " _". Additional ways of marking eventual operations with the ' _' character are specified in the documentation for the methods defined by this class. All of these conventions make eventual control flow statements distinguishable by the character sequence " _.". Example uses are also shown in the method documentation for this class. The ' _' character should only be used to identify eventual operations so that a programmer can readily identify operations that are expected to be eventual by looking for the _. pseudo-operator. public final Log log public final Receiver<?> destruct call like: destruct.apply(null) public Eventual(Receiver<Promise<?>> enqueue, java.lang.String here, Log log, Receiver<?> destruct) enqueue- raw event loop here- URI for the event loop log- log destruct- destruct public Eventual(Receiver<Promise<?>> enqueue) enqueue- raw event loop public int hashCode() hashCodein interface Selfless hashCodein class java.lang.Object public boolean equals(java.lang.Object x) equalsin class java.lang.Object public final <P,R> R when(Promise<P> promise, Do<P,R> conditional) The conditional code block will be notified of the promise's state at most once, in a future event loop turn. If there is no referent, the conditional's reject method will be called with the reason; otherwise, the fulfill method will be called with either an immediate reference for a local referent, or an eventual reference for a remote referent. For example: import static org.ref_send.promise.Eventual.ref; … final Promise<Account> mine = … final Promise<Integer> balance = _.when(mine, new Do<Account,Promise<Integer>>() { public Promise<Integer> fulfill(final Account x) { return ref(x.getBalance()); } }); A null promise argument is treated like a rejected promise with a reason of NullPointerException. The conditional in successive calls to this method with the same promise will be notified in the same order as the calls were made. This method will not throw an Exception. Neither the promise, nor the conditional, argument will be given the opportunity to execute in the current event loop turn. P- parameter type R- return type promise- observed promise conditional- conditional code block, MUST NOT be null conditional's return, or nullif the conditional's return type is Void public final <P,R> R when(P reference, Do<P,R> conditional) The implementation behavior is the same as that documented for the promise based when statement. P- parameter type R- return type reference- observed reference conditional- conditional code block, MUST NOT be null conditional's return, or nullif the conditional's return type is Void public final <T> Deferred<T> defer() The return from this method is a ( promise, resolver ) pair. 
The promise is initially in the unresolved state and can only be resolved by the resolver once. If the promise is fulfilled, the promise will forever refer to the provided referent. If the promise, is rejected, the promise will forever be in the rejected state, with the provided reason. If the promise is resolved, the promise will forever be in the same state as the provided promise. After this initial state transition, all subsequent invocations of either fulfill, reject or resolve are silently ignored. Any observer registered on the promise will only be notified after the promise is either fulfilled or rejected. T- referent type public final <T> T _(T referent) An eventual reference queues invocations, instead of processing them immediately. Each queued invocation will be processed, in order, in a future event loop turn. Use this method to vet received arguments. For example: import static org.joe_e.ConstArray.array; public final class Account { private final Eventual _; private int balance; private ConstArray<Receiver<Integer>> observer_s; public Account(final Eventual _, final int initial) { this._ = _; balance = initial; observer_s = array(); } public void observe(final Receiver<Integer> observer) { // Vet the received arguments. final Receiver<Integer> observer_ = _._(observer); // Use the vetted arguments. observer_s = observer_s.with(observer_); } public int getBalance() { return balance; } public void setBalance(final int newBalance) { balance = newBalance; for (final Receiver<Integer> observer_ : observer_s) { // Schedule future execution of notification. observer_.apply(newBalance); } } } By convention, the return from this method is held in a variable whose name is suffixed with an ' _' character. The main part of the variable name should use Java's camelCaseConvention. A list of eventual references is suffixed with " _s". This naming convention creates the appearance of a new operator in the Java language, the eventual operator: " _.". If this method returns successfully, the returned eventual reference will not throw an Exception on invocation of any of the methods defined by its type, provided the invoked method's return type is either void, an allowed proxy type or assignable from Promise. Invocations on the eventual reference will not give the referent, nor any of the invocation arguments, an opportunity to execute in the current event loop turn. Invocations of methods defined by Object are not queued, and so can cause plan interference, or throw an exception. T- referent type, MUST be an allowed proxy type referent- immediate or eventual reference, MUST be non- null java.lang.NullPointerException- null referent java.lang.ClassCastException- Tnot an allowed proxy type public final <T extends java.io.Serializable> void _(T x) If you encounter a compile error because your code is linking to this method, insert an explicit cast to the allowed proxy type. 
For example,_._(this).apply(null); becomes:_._((Receiver<?>)this).apply(null); x- ignored java.lang.AssertionError- always thrown public static <T> T cast(java.lang.Class<?> type, Promise<T> promise) throws java.lang.ClassCastException For example, final Channel<Receiver<Integer>> x = _.defer(); final Receiver<Integer> r_ = cast(Receiver.class, x.promise); T- referent type to implement type- referent type to implement promise- promise for the referent java.lang.ClassCastException- no cast to type public static <T> Promise<T> ref(T referent) This method is the inverse of cast; it gets the corresponding promise for a given reference. This method will not throw an Exception. T- referent type referent- immediate or eventual reference public static <T> T near(T reference) This method should only be used when the application knows the provided reference refers to a local object. Any other condition is treated as a fatal error. Use the call method to check the status of a promise. This method will not throw an Exception. T- referent type reference- possibly eventual reference for a local referent public static <T> T near(Promise<T> promise) This method should only be used when the application knows the provided promise refers to a local object. Any other condition is treated as a fatal error. Use the call method to check the status of a promise. This method will not throw an Exception. T- referent type promise- a promise public static <T> Promise<T> reject(java.lang.Exception reason) T- referent type reason- rejection reason public <R> Vat<R> spawn(java.lang.String label, java.lang.Class<?> maker, java.lang.Object... optional) All created vats will be destructed when this vat is destructed. The maker MUST be a public Joe-E class with a method of signature: static public R make( Eventual_, …) The ellipsis means the method can have any number of additional arguments. The Eventual parameter, if present, MUST be the first parameter. This method will not throw an Exception. None of the arguments will be given the opportunity to execute in the current event loop turn. R- return type, MUST be either an interface, or a Promise label- optional vat label, if nulla label will be generated maker- constructor class optional- more arguments for maker's make method maker Copyright 1998-2009 Waterken Inc. under the terms of the MIT X license.
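Editor's sketch (not part of the original javadoc): the pieces above can be combined roughly as follows. It assumes, as the defer and cast documentation suggests, that the returned pair exposes its two halves as promise and resolver fields, and that the resolver supports the fulfill operation named in the defer description; the variable names are invented for illustration.

import static org.ref_send.promise.Eventual.ref;

// _ is an Eventual instance, held by convention in a variable named "_".
final Deferred<Integer> balance = _.defer();  // unresolved ( promise, resolver ) pair

// Register a conditional; it runs at most once, in a later event-loop turn.
_.when(balance.promise, new Do<Integer,Promise<String>>() {
    public Promise<String> fulfill(final Integer x) {
        return ref("balance is " + x);
    }
});

// Elsewhere, something resolves the pair exactly once; subsequent calls to
// the resolver are silently ignored, per the contract described above.
balance.resolver.fulfill(42);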
http://waterken.sourceforge.net/javadoc/org/ref_send/promise/Eventual.html
CC-MAIN-2017-13
en
refinedweb
In the previous post I explained about configuring Flask-SocketIO, Nginx and Gunicorn. This post includes integrating Flask-SocketIO library to display notifications to users in real time. Flask Config For development we use the default web server that ships with Flask. For this, Flask-SocketIO fallsback to long-polling as its transport mechanism, instead of WebSockets. So to properly test SocketIO I wanted to work directly with Gunicorn (hence the previous post about configuring development environment). Also, not everyone needs to be bothered with the changes required to run it. class DevelopmentConfig(Config): DEVELOPMENT = True DEBUG = True # If Env Var `INTEGRATE_SOCKETIO` is set to 'true', then integrate SocketIO socketio_integration = os.environ.get('INTEGRATE_SOCKETIO') if socketio_integration == 'true': INTEGRATE_SOCKETIO = True else: INTEGRATE_SOCKETIO = False # Other stuff SocketIO is integrated (in development env) if the developer has set the INTEGRATE_SOCKETIO environment variable to “true”. In Production, our application runs on Gunicorn, and SocketIO integration must always be there. Flow To send message to a particular connection (or a set of connections) Flask-SocketIO provides Rooms. The connections are made to join a room and the message is sent in the room. So to send message to a particular user we need him to join a room, and then send the message in that room. The room name needs to be unique and related to just one user. The User database Ids could be used. I decided to keep user_{id} as the room name for a user with id {id}. This information (room name) would be needed when making the user join a room, so I stored it for every user that logged in. @expose('/login/', methods=('GET', 'POST')) def login_view(self): if request.method == 'GET': # Render template if request.method == 'POST': # Take email and password from form and check if # user exists. If he does, log him in. login.login_user(user) # Store user_id in session for socketio use session['user_id'] = login.current_user.id # Redirect After the user logs in, a connection request from the client is sent to the server. With this connection request the connection handler at server makes the user join a room (based on the user_id stored previously). @socketio.on('connect', namespace='/notifs') def connect_handler(): if current_user.is_authenticated(): user_room = 'user_{}'.format(session['user_id']) join_room(user_room) emit('response', {'meta': 'WS connected'}) The client side is somewhat similar to this: <script src="{{ url_for('static', filename='path/to/socket.io-client/socket.io.js') }}"></script> <script type="text/javascript"> $(document).ready(function() { var namespace = '/notifs'; var socket = io.connect(location.protocol + "//" + location.host + namespace, {reconnection: false}); socket.on('response', function(msg) { console.log(msg.meta); // If `msg` is a notification, display it to the user. }); }); </script> Namespaces helps when making multiple connections over the same socket. So now that the user has joined a room we can send him notifications. The notification data sent to the client should be standard, so the message always has the same format. I defined a get_unread_notifs method for the User class that fetches unread notifications. class User(db.Model): # Other stuff def get_unread_notifs(self, reverse=False): """Get unread notifications with titles, humanized receiving time and Mark-as-read links. 
""" notifs = [] unread_notifs = Notification.query.filter_by(user=self, has_read=False) for notif in unread_notifs: notifs.append({ 'title': notif.title, 'received_at': humanize.naturaltime(datetime.now() - notif.received_at), 'mark_read': url_for('profile.mark_notification_as_read', notification_id=notif.id) }) if reverse: return list(reversed(notifs)) else: return notifs This class method is used when a notification is added in the database and has to be pushed into the user SocketIO room. def create_user_notification(user, action, title, message): """ Create a User Notification :param user: User object to send the notification to :param action: Action being performed :param title: The message title :param message: Message """ notification = Notification(user=user, action=action, title=title, message=message, received_at=datetime.now()) saved = save_to_db(notification, 'User notification saved') if saved: push_user_notification(user) def push_user_notification(user): """ Push user notification to user socket connection. """ user_room = 'user_{}'.format(user.id) emit('response', {'meta': 'New notifications', 'notif_count': user.get_unread_notif_count(), 'notifs': user.get_unread_notifs()}, room=user_room, namespace='/notifs')
http://blog.fossasia.org/flask-socketio-notifications/
CC-MAIN-2017-13
en
refinedweb
If you are reading this in the future then it's possible that the state of the art has changed. We recommend you start by reading the perlootut document in the latest stable release of Perl, rather than this version. By default, Perl's built-in OO system is very minimal, leaving you to do most of the work. This minimalism made a lot of sense in 1994, but in the years since Perl 5.0 we've seen a number of common patterns emerge in Perl OO. Fortunately, Perl's flexibility has allowed a rich ecosystem of Perl OO systems to flourish. If you want to know how Perl OO works under the hood, the perlobj document explains the nitty gritty details. This document assumes that you already understand the basics of Perl syntax, variable types, operators, and subroutine calls. If you don't understand these concepts yet, please read perlintro first. You should also read the perlsyn, perlop, and perlsub documents. Perl's OO system is class-based. Class-based OO is fairly common. It's used by Java, C++, C#, Python, Ruby, and many other languages. There are other object orientation paradigms as well. JavaScript is the most popular language to use another paradigm. JavaScript's OO system is prototype-based. An object represents a single discrete thing. For example, an object might represent a file. The attributes for a file object might include its path, content, and last modification time. If we created an object to represent /etc/hostname on a machine named ``foo.example.com'', that object's path would be ``/etc/hostname'', its content would be ``foo\n'', and it's last modification time would be 1304974868 seconds since the beginning of the epoch. The methods associated with a file might include "rename()" and "write()". In Perl most objects are hashes, but the OO systems we recommend keep you from having to worry about this. In practice, it's best to consider an object's internal data structure opaque. All objects belong to a specific class. For example, our /etc/hostname object belongs to the "File" class. When we want to create a specific object, we start with its class, and construct or instantiate an object. A specific object is often referred to as an instance of a class. In Perl, any package can be a class. The difference between a package which is a class and one which isn't is based on how the package is used. Here's our ``class declaration'' for the "File" class: package File; In Perl, there is no special keyword for constructing an object. However, most OO modules on CPAN use a method named "new()" to construct a new object: my $hostname = File->new( path => '/etc/hostname', content => "foo\n", last_mod_time => 1304974868, ); (Don't worry about that "->" operator, it will be explained later.) Blessing As we said earlier, most Perl objects are hashes, but an object can be an instance of any Perl data type (scalar, array, etc.). Turning a plain data structure into an object is done by blessing that data structure using Perl's "bless" function. While we strongly suggest you don't build your objects from scratch, you should know the term bless. A blessed data structure (aka ``a referent'') is an object. We sometimes say that an object has been ``blessed into a class''. Once a referent has been blessed, the "blessed" function from the Scalar::Util core module can tell us its class name. This subroutine returns an object's class when passed an object, and false otherwise. 
use Scalar::Util 'blessed'; print blessed($hash); # undef print blessed($hostname); # File Constructor A constructor creates a new object. In Perl, a class's constructor is just another method, unlike some other languages, which provide syntax for constructors. Most Perl classes use "new" as the name for their constructor: my $file = File->new(...); In Perl, methods are simply subroutines that live in a class's package. Methods are always written to receive the object as their first argument: sub print_info { my $self = shift; print "This file is at ", $self->path, "\n"; } $file->print_info; # The file is at /etc/hostname What makes a method special is how it's called. The arrow operator ("->") tells Perl that we are calling a method. When we make a method call, Perl arranges for the method's invocant to be passed as the first argument. Invocant is a fancy name for the thing on the left side of the arrow. The invocant can either be a class name or an object. We can also pass additional arguments to the method: sub print_info { my $self = shift; my $prefix = shift // "This file is at "; print $prefix, ", ", $self->path, "\n"; } $file->print_info("The file is located at "); # The file is located at /etc/hostname Perl has no special syntax for attributes. Under the hood, attributes are often stored as keys in the object's underlying hash, but don't worry about this. We recommend that you only access attributes via accessor methods. These are methods that can get or set the value of each attribute. We saw this earlier in the "print_info()" example, which calls "$self->path". You might also see the terms getter and setter. These are two types of accessors. A getter gets the attribute's value, while a setter sets it. Another term for a setter is mutator Attributes are typically defined as read-only or read-write. Read-only attributes can only be set when the object is first created, while read-write attributes can be altered at any time. The value of an attribute may itself be another object. For example, instead of returning its last mod time as a number, the "File" class could return a DateTime object representing that value. It's possible to have a class that does not expose any publicly settable attributes. Not every class has attributes and methods. While the two classes may differ in many ways, when it comes to the "print_content()" method, they are the same. This means that we can try to call the "print_content()" method on an object of either class, and we don't have to know what class the object belongs to! Polymorphism is one of the key concepts of object-oriented design. For example, we could create an "File::MP3" class which inherits from "File". An "File::MP3" is-a more specific type of "File". All mp3 files are files, but not all files are mp3 files. We often refer to inheritance relationships as parent-child or "superclass/subclass" relationships. Sometimes we say that the child has an is-a relationship with its parent class. "File" is a superclass of "File::MP3", and "File::MP3" is a subclass of "File". package File::MP3; use parent 'File'; The parent module is one of several ways that Perl lets you define inheritance relationships. Perl allows multiple inheritance, which means that a class can inherit from multiple parents. While this is possible, we strongly recommend against it. Generally, you can use roles to do everything you can do with multiple inheritance, but in a cleaner way. Note that there's nothing wrong with defining multiple subclasses of a given class. 
This is both common and safe. For example, we might define "File::MP3::FixedBitrate" and "File::MP3::VariableBitrate" classes to distinguish between different types of mp3 file. Overriding methods and method resolution Inheritance allows two classes to share code. By default, every method in the parent class is also available in the child. The child can explicitly override a parent's method to provide its own implementation. For example, if we have an "File::MP3" object, it has the "print_info()" method from "File": my $cage = File::MP3->new( path => 'mp3s/My-Body-Is-a-Cage.mp3', content => $mp3_data, last_mod_time => 1304974868, title => 'My Body Is a Cage', ); $cage->print_info; # The file is at mp3s/My-Body-Is-a-Cage.mp3 If we wanted to include the mp3's title in the greeting, we could override the method: package File::MP3; use parent 'File'; sub print_info { my $self = shift; print "This file is at ", $self->path, "\n"; print "Its title is ", $self->title, "\n"; } $cage->print_info; # The file is at mp3s/My-Body-Is-a-Cage.mp3 # Its title is My Body Is a Cage The process of determining what method should be used is called method resolution. What Perl does is look at the object's class first ("File::MP3" in this case). If that class defines the method, then that class's version of the method is called. If not, Perl looks at each parent class in turn. For "File::MP3", its only parent is "File". If "File::MP3" does not define the method, but "File" does, then Perl calls the method in "File". If "File" inherited from "DataSource", which inherited from "Thing", then Perl would keep looking ``up the chain'' if necessary. It is possible to explicitly call a parent method from a child: package File::MP3; use parent 'File'; sub print_info { my $self = shift; $self->SUPER::print_info(); print "Its title is ", $self->title, "\n"; } The "SUPER::" bit tells Perl to look for the "print_info()" in the "File::MP3" class's inheritance chain. When it finds the parent class that implements this method, the method is called. We mentioned multiple inheritance earlier. The main problem with multiple inheritance is that it greatly complicates method resolution. See perlobj for more details. Encapsulation is important for several reasons. First, it allows you to separate the public API from the private implementation. This means you can change that implementation without breaking the API. Second, when classes are well encapsulated, they become easier to subclass. Ideally, a subclass uses the same APIs to access object data that its parent class uses. In reality, subclassing sometimes involves violating encapsulation, but a good API can minimize the need to do this. We mentioned earlier that most Perl objects are implemented as hashes under the hood. The principle of encapsulation tells us that we should not rely on this. Instead, we should use accessor methods to access the data in that hash. The object systems that we recommend below all automate the generation of accessor methods. If you use one of them, you should never have to access the object as a hash directly. Earlier, we mentioned that the "File" class's "last_mod_time" accessor could return a DateTime object. This is a perfect example of composition. We could go even further, and make the "path" and "content" accessors return objects as well. The "File" class would then be composed of several other objects. Roles are an alternative to inheritance for providing polymorphism. Let's assume we have two classes, "Radio" and "Computer". 
Both of these things have on/off switches. We want to model that in our class definitions. We could have both classes inherit from a common parent, like "Machine", but not all machines have on/off switches. We could create a parent class called "HasOnOffSwitch", but that is very artificial. Radios and computers are not specializations of this parent. This parent is really a rather ridiculous creation. This is where roles come in. It makes a lot of sense to create a "HasOnOffSwitch" role and apply it to both classes. This role would define a known API like providing "turn_on()" and "turn_off()" methods. Perl does not have any built-in way to express roles. In the past, people just bit the bullet and used multiple inheritance. Nowadays, there are several good choices on CPAN for using roles. We strongly recommend that you use one of these systems. Even the most minimal of them eliminates a lot of repetitive boilerplate. There's really no good reason to write your classes from scratch in Perl. If you are interested in the guts underlying these systems, check out perlobj. "Moose" provides a complete, modern OO system. Its biggest influence is the Common Lisp Object System, but it also borrows ideas from Smalltalk and several other languages. "Moose" was created by Stevan Little, and draws heavily from his work on the Perl 6 OO design. Here is our "File" class using "Moose": package File; use Moose; has path => ( is => 'ro' ); has content => ( is => 'ro' ); has last_mod_time => ( is => 'ro' ); sub print_info { my $self = shift; print "This file is at ", $self->path, "\n"; } "Moose" provides a number of features: "Moose" provides a layer of declarative ``sugar'' for defining classes. That sugar is just a set of exported functions that make declaring how your class works simpler and more palatable. This lets you describe what your class is, rather than having to tell Perl how to implement your class. The "has()" subroutine declares an attribute, and "Moose" automatically creates accessors for these attributes. It also takes care of creating a "new()" method for you. This constructor knows about the attributes you declared, so you can set them when creating a new "File". "Moose" lets you define roles the same way you define classes: package HasOnOfSwitch; use Moose::Role; has is_on => ( is => 'rw', isa => 'Bool', ); sub turn_on { my $self = shift; $self->is_on(1); } sub turn_off { my $self = shift; $self->is_on(0); } In the example above, you can see that we passed "isa => 'Bool'" to "has()" when creating our "is_on" attribute. This tells "Moose" that this attribute must be a boolean value. If we try to set it to an invalid value, our code will throw an error. Perl's built-in introspection features are fairly minimal. "Moose" builds on top of them and creates a full introspection layer for your classes. This lets you ask questions like ``what methods does the File class implement?'' It also lets you modify your classes programmatically. "Moose" describes itself using its own introspection API. Besides being a cool trick, this means that you can extend "Moose" using "Moose" itself. There is a rich ecosystem of "Moose" extensions on CPAN under the MooseX <> namespace. In addition, many modules on CPAN already use "Moose", providing you with lots of examples to learn from. "Moose" is a very powerful tool, and we can't cover all of its features here. We encourage you to learn more by reading the "Moose" documentation, starting with Moose::Manual <>. Of course, "Moose" isn't perfect. 
"Moose" can make your code slower to load. "Moose" itself is not small, and it does a lot of code generation when you define your class. This code generation means that your runtime code is as fast as it can be, but you pay for this when your modules are first loaded. This load time hit can be a problem when startup speed is important, such as with a command-line script or a ``plain vanilla'' CGI script that must be loaded each time it is executed. Before you panic, know that many people do use "Moose" for command-line tools and other startup-sensitive code. We encourage you to try "Moose" out first before worrying about startup speed. "Moose" also has several dependencies on other modules. Most of these are small stand-alone modules, a number of which have been spun off from "Moose". "Moose" itself, and some of its dependencies, require a compiler. If you need to install your software on a system without a compiler, or if having any dependencies is a problem, then "Moose" may not be right for you. Moo If you try "Moose" and find that one of these issues is preventing you from using "Moose", we encourage you to consider Moo next. "Moo" implements a subset of "Moose"'s functionality in a simpler package. For most features that it does implement, the end-user API is identical to "Moose", meaning you can switch from "Moo" to "Moose" quite easily. "Moo" does not implement most of "Moose"'s introspection API, so it's often faster when loading your modules. Additionally, none of its dependencies require XS, so it can be installed on machines without a compiler. One of "Moo"'s most compelling features is its interoperability with "Moose". When someone tries to use "Moose"'s introspection API on a "Moo" class or role, it is transparently inflated into a "Moose" class or role. This makes it easier to incorporate "Moo"-using code into a "Moose" code base and vice versa. For example, a "Moose" class can subclass a "Moo" class using "extends" or consume a "Moo" role using "with". The "Moose" authors hope that one day "Moo" can be made obsolete by improving "Moose" enough, but for now it provides a worthwhile alternative to "Moose". It is, however, very simple, pure Perl, and it has no non-core dependencies. It also provides a ``Moose-like'' API on demand for the features it supports. Even though it doesn't do much, it is still preferable to writing your own classes from scratch. Here's our "File" class with "Class::Accessor": package File; use Class::Accessor 'antlers'; has path => ( is => 'ro' ); has content => ( is => 'ro' ); has last_mod_time => ( is => 'ro' ); sub print_info { my $self = shift; print "This file is at ", $self->path, "\n"; } The "antlers" import flag tells "Class::Accessor" that you want to define your attributes using "Moose"-like syntax. The only parameter that you can pass to "has" is "is". We recommend that you use this Moose-like syntax if you choose "Class::Accessor" since it means you will have a smoother upgrade path if you later decide to move to "Moose". Like "Moose", "Class::Accessor" generates accessor methods and a constructor for your class. Here's our "File" class once more: package File; use Class::Tiny qw( path content last_mod_time ); sub print_info { my $self = shift; print "This file is at ", $self->path, "\n"; } That's it! With "Class::Tiny", all accessors are read-write. It generates a constructor for you, as well as the accessors you define. You can also use Class::Tiny::Antlers for "Moose"-like syntax. 
"Role::Tiny" provides some of the same features as Moose's role system, but in a much smaller package. Most notably, it doesn't support any sort of attribute declaration, so you have to do that by hand. Still, it's useful, and works well with "Class::Accessor" and "Class::Tiny" "Moose" is the maximal option. It has a lot of features, a big ecosystem, and a thriving user base. We also covered Moo briefly. "Moo" is "Moose" lite, and a reasonable alternative when Moose doesn't work for your application. "Class::Accessor" does a lot less than "Moose", and is a nice alternative if you find "Moose" overwhelming. It's been around a long time and is well battle-tested. It also has a minimal "Moose" compatibility mode which makes moving from "Class::Accessor" to "Moose" easy. "Class::Tiny" is the absolute minimal option. It has no dependencies, and almost no syntax to learn. It's a good option for a super minimal environment and for throwing something together quickly without having to worry about details. Use "Role::Tiny" with "Class::Accessor" or "Class::Tiny" if you find yourself considering multiple inheritance. If you go with "Moose", it comes with its own role implementation. In addition, plenty of code in the wild does all of its OO ``by hand'', using just the Perl built-in OO features. If you need to maintain such code, you should read perlobj to understand exactly how Perl's built-in OO works. For small systems, Class::Tiny and Class::Accessor both provide minimal object systems that take care of basic boilerplate for you. For bigger projects, Moose provides a rich set of features that will let you focus on implementing your business logic. We encourage you to play with and evaluate Moose, Class::Accessor, and Class::Tiny to see which OO system is right for you.
http://linuxhowtos.org/manpages/1/perlootut.htm
CC-MAIN-2017-13
en
refinedweb
A highly specialized feature of SQL Server 2008, managed user-defined aggregates (UDAs) provide the capability to aggregate column data based on user-defined criteria built in to .NET code. You can now extend the (somewhat small) list of aggregate functions usable inside SQL Server to include those you custom-define. The implementation contract for a UDA requires the following: A static method called Init(), used to initialize any data fields in the struct, particularly the field that contains the aggregated value. A static method called Terminate(), used to return the aggregated value to the UDA’s caller. A static method called Aggregate(), used to add the value in the current row to the growing value. A static method called Merge(), used when SQL Server breaks an aggregation task into multiple threads of execution (SQL Server actually uses a thread abstraction called a task), each of which needs to merge the value stored in its instance of the UDA with the growing value. UDAs cannot do any data access, nor can they have any side-effects—meaning they cannot change the state of the database. They take only a single input parameter, of any type. You can also add public methods or properties other than those required by the contract (such as the IsPrime() method used in the following example). Like UDTs, UDAs are structs. They are decorated with the SqlUserDefinedAggregate attribute, which has the following parameters for its constructor: Format— Tells SQL Server how serialization (and its complement, deserialization) of the struct should be done. This has the same possible values and meaning as described earlier for SqlUserDefinedType. A named parameter list— This list contains the following: IsInvariantToDuplicates—Tells SQL Server whether the UDA behaves differently with respect to duplicate values passed in from multiple rows. IsInvariantToNulls—Tells SQL Server whether the UDA behaves differently when null values are passed to it. IsInvariantToOrder—Tells SQL Server whether the UDA cares about the order in which column values are fed to it. IsNullIfEmpty—Tells SQL Server that the UDA will return null if its aggregated value is empty (that is, if its value is 0, or the empty string "", and so on). Name—Tells the deployment routine what to call the UDA when it is created in the database. MaxByteSize—Tells SQL Server not to allow more than the specified number of bytes to be held in an instance of the UDA. You must specify this when using Format.UserDefined. For this example, you implement a very simple UDA that sums values in an integer column, but only if they are prime. Listing 8 shows the code to do this. using System;using System.Data;using System.Data.Sql;using System.Data.SqlTypes;using Microsoft.SqlServer.Server;[Serializable][Microsoft.SqlServer.Server.SqlUserDefinedAggregate( Format.Native, IsInvariantToDuplicates=false, IsInvariantToNulls=true, IsInvariantToOrder=true, IsNullIfEmpty=true)]public struct SumPrime{ SqlInt64 Sum; private bool IsPrime(SqlInt64 Number) { for (int i = 2; i < Number; i++) { if (Number % i == 0) { return false; } } return true; } public void Init() { Sum = 0; } public void Accumulate(SqlInt64 Value) { if (!Value.IsNull && IsPrime(Value) && Value > 1) Sum += Value; } public void Merge(SumPrime Prime) { Sum += Prime.Sum; } public SqlInt64 Terminate() { return Sum; }} In this code, SQL Server first calls Init(), initializing the private Sum data field to 0. 
For each column value passed to the aggregate, the Accumulate() method is called, wherein Sum is increased by the value of the column, if it is prime. When multiple threads converge, Merge() is called, adding the values stored in each instance (as the Prime parameter) to Sum. When the rowset has been completely parsed, SQL Server calls Terminate(), wherein the accumulated value Sum is returned. Following are the results of testing SumPrime on Production.Product (an existing AdventureWorks2008 table): SELECT TOP 10 dbo.SumPrime(p.ProductId) AS PrimeSum, p.NameFROM Production.Product pJOIN Production.WorkOrder o ONo.ProductId = p.ProductIdWHERE Name LIKE '%Frame%'GROUP BY p.ProductId, p.NameORDER BY PrimeSum DESCgoPrimeSum Name--------------------------------------------360355 HL Mountain Frame - Black, 42338462 HL Mountain Frame - Silver, 42266030 HL Road Frame - Red, 48214784 HL Road Frame - Black, 48133937 HL Touring Frame - Yellow, 4668338 LL Road Frame - Red, 5254221 LL Mountain Frame - Silver, 4815393 ML Road Frame - Red, 520 HL Mountain Frame - Black, 380 HL Road Frame - Black, 44(10 row(s) affected.) Following is the DDL syntax for this UDA: CREATE AGGREGATE SumPrime(@Number bigint)RETURNS bigintEXTERNAL NAME SQLCLR.SumPrime As with UDTs, with UDAs there is no ALTER AGGREGATE, but you can use DROP AGGREGATE to drop them.
http://mscerts.programming4.us/sql_server/sql%20server%202008%20%20%20developing%20custom%20managed%20database%20objects%20(part%205)%20-%20developing%20managed%20user-defined%20aggregates.aspx
CC-MAIN-2014-49
en
refinedweb
basic_filebuf::close

Closes a file.

close calls fclose(fp). If that function returns a nonzero value, the function returns a null pointer. Otherwise, it returns this to indicate that the file was successfully closed. For a wide stream, if any insertions have occurred since the stream was opened, or since the last call to streampos, the function calls overflow. It also inserts any sequence needed to restore the initial conversion state, by using the file conversion facet fac to call fac.unshift as needed. Each element byte of type char thus produced is written to the associated stream designated by the file pointer fp as if by successive calls of the form fputc(byte, fp). If the call to fac.unshift or any write fails, the function does not succeed.

The following sample assumes two files in the current directory: basic_filebuf_close.txt (contents is "testing") and iotest.txt (contents is "ssss").

// basic_filebuf_close.cpp
// compile with: /EHsc
#include <fstream>
#include <iostream>

int main()
{
    using namespace std;
    ifstream file;
    basic_ifstream <wchar_t> wfile;
    char c;

    // Open and close with a basic_filebuf
    file.rdbuf()->open( "basic_filebuf_close.txt", ios::in );
    file >> c;
    cout << c << endl;
    file.rdbuf( )->close( );

    // Open/close directly
    file.open( "iotest.txt" );
    file >> c;
    cout << c << endl;
    file.close( );

    // open a file with a wide character name
    wfile.open( L"iotest.txt" );

    // Open and close a nonexistent file with a basic_filebuf
    file.rdbuf()->open( "ziotest.txt", ios::in );
    cout << file.fail() << endl;
    file.rdbuf( )->close( );

    // Open/close directly
    file.open( "ziotest.txt" );
    cout << file.fail() << endl;
    file.close( );
}

Output
t
s
0
1

Reference: basic_filebuf Class, iostream Programming, iostreams Conventions
http://msdn.microsoft.com/en-us/library/2eaet6xc(v=vs.80)
CC-MAIN-2014-49
en
refinedweb
10 September 2012 04:22 [Source: ICIS news] ?xml:namespace> Honam bought two cargoes for delivery to Daesan at a premium of $18.50/tonne (€14.43/tonne) to Honam previously purchased two cargoes totalling 50,000 tonnes for delivery in the second half of October. The naphtha cargoes fetched a premium of around $11/tonne to The company has so far bought 275,000 tonnes of spot naphtha for October delivery, compared with 250,000 tonnes of spot purchase made for September delivery because of a lower usage of alternative feedstock, liquefied petroleum gas (LPG), traders said. “They have a bigger spot requirement for October because of less LPG usage,” one trader said. Firm spot buying from South Korean crackers have helped drive up premiums amid a tightly supplied market, traders said. Honam runs a 1m tonne/year cracker in Yeosu and a separate 1.07m tonne/year cracker in Daesan, according to ICIS data. Honam is operating both crackers at 100% capacity, a company source had
http://www.icis.com/Articles/2012/09/10/9593948/s.koreas-honam-buys-75000-tonnes-naphtha-for-second-half-oct.html
CC-MAIN-2014-49
en
refinedweb
Not applicable App performance Want to get the best possible performance out of your Cascades application? The first thing to do is make sure that your application complies with the following performance check list. While there are many things that you can do to increase the performance of your application, these are the major guidelines that you should try to follow if you need to improve your app's performance. Compile resources Compiling application resources is a way that you can decrease your app's loading times without making changes to your code. In an application that doesn't compile resources, QML files are stored within the app's assets folder in the file system. Accessing these QML files can be very inefficient for applications that contain a lot of QML. To help speed things up, you can compile your QML files as a resource that's built in to your application's binary. For more information about compiling QML, see Compile resources for Cascades apps. Use QML efficiently If your application is experiencing slow start times or sluggish response times when rendering the UI, there might be some optimizations that you can make to load QML more efficiently. The most straightforward way of creating a UI in Cascades is to define one large, hierarchical structure in QML. Unfortunately, this approach can have a negative effect on start time and memory consumption. In addition, having too many controls linked to the scene graph at the same time can affect the overall UI rendering performance. A component is linked to the scene graph once it's added to the node (or a sub-node) that is currently set as the root of the application's scene (set by Application::instance()->setScene()). To avoid these problems, there are a few strategies that you should consider when you're developing your UI: Keep attachedObjects simple If you are using attachedObjects to expose C++ objects to QML, don't run time-consuming processes in their constructors (for example, SQL queries, blocking calls to services, and so on) because these processes are run when the QML is loaded. Instead of running these processes in the constructor, create a Q_INVOKABLE function containing those processes that you can call from QML. See Using C++ classes in QML for more details on exposing C++ classes to QML. Use deferred loading At any given time, only the minimum amount of required QML should be loaded into memory. When your app starts, only load what the user sees first. After start-up, asynchronously load the content that is required for the next possible interactions, and continue this pattern for each subsequent interaction. When components are no longer needed, destroy them. For example, if your application contains a TabbedPane or a NavigationPane, you should load only the QML that's required for the first page, with empty stubs for the subsequent pages. After the first page loads, you can load additional pages asynchronously. Another approach is to load content on demand. This approach works well if you're not loading extremely large amounts of content. If possible, try to load components to prepare for a possible interaction so that the user isn't left waiting for it. One way to handle deferred loading in Cascades is to use the ComponentDefinition and ControlDelegate classes. ComponentDefinition represents an imperative method of creating dynamic QML components, while ControlDelegate is the declarative method. For more information about ControlDelegate and ComponentDefinition, see Dynamic QML components. 
Using the visible property on controls isn't an efficient way to handle deferred loading. Even though the visible property for a control might be set to false, the control is still loaded into memory, affecting both start time and page creation time. Generally, the visible property should only be used when controls are temporarily hidden. If there's a good chance that a control will not be shown at all, it's better to defer its creation altogether. Create similar items using ComponentDefinition If you have multiple items that are very similar to create, you should create them dynamically from a ComponentDefinition instead of reusing the QML code throughout the file. In this example, a Container is created multiple times and added to the scene. The index property is used to identify the different versions of the container and apply the appropriate background color. import bb.cascades 1.0 Page { Container { id: rootContainer attachedObjects: [ ComponentDefinition { id: component // Simple container without a background color Container { property int index; preferredWidth: 100 preferredHeight: 100 // Colors the background based on the index background: switch (index) { case 0: Color.Black; break; case 1: Color.DarkGray; break; case 2: Color.Gray; break; case 3: Color.LightGray; break; } } } ] } onCreationCompleted: { for (var i = 0; i < 4; i++) { // Creates the container and sets its index var item = component.createObject(); item.index = i; rootContainer.add(item); } } } Reuse text styles when possible Although text style settings are automatically cached by the application, you should still try to reuse text styles when possible. Creating multiple TextStyleDefinition objects in QML can increase load time, though it's still sometimes necessary for the maintainability of your code. Do this: Container { attachedObjects: [ TextStyleDefinition { id: myStyle base: SystemDefaults.TextStyles.BigText } ] Label { text: "This is a label." textStyle { base: myStyle.style color: Color.Red } } Label { text: "This is another label." textStyle { base: myStyle.style color: Color.Black } } } Avoid doing this: Container { Label { text: "This is a label." textStyle { base: SystemDefaults.TextStyles.BigText color: Color.Red } } Label { text: "This is another label." textStyle { base: SystemDefaults.TextStyles.BigText color: Color.Black } } } For more information about text styles, see Text styles. Avoid declaring variant properties The variant type in QML is quite useful since it allows you to store values from any of the basic Qt types. Although it's versatile, the versatility comes with a price. Using variant when you can use a more specific type means that you get less compile-time help (the compiler might not recognize certain type-related errors) and you pay a performance penalty from converting the value in to and from a QVariant each time. If the variable is always a number, you should use the int or real types. If the variable is always text, use a string. Do this: property int aNumber : 100 property bool aBool : true property string aString : "Hello!" Avoid doing this: property variant aNumber : 100 property variant aBool : true property variant aString : "Hello!" The variant type can also be used to store more complex objects, such as controls, or any other class type that is exposed to QML. Here's an example of how to declare a property of the Container type. When you initialize the object, you must use null instead of 0, since 0 represents a literal integer. 
property Container myContainer : null However, there are still some instances when you must use the variant type to store object references. For example, if your application passes the object reference within a signal handler you still must use a variant to store the object reference. Avoid using JavaScript files to store properties Using JavaScript files to store properties is inefficient and unsafe. When you import a JavaScript file into QML, property types are lost, so you must reassign your properties in QML before you can use them. Changing the value of a JavaScript property can also cause issues. When you load a JavaScript file into QML, a new object is created for each QML file that you import the JavaScript file into. If you change any of the JavaScript property values, your data might become out of sync across your app. To create safer and more efficient code, you should define JavaScript properties in QML instead. Do this: // Constants.qml // QtObject is the most basic non-visual type QtObject { property int a: 123; property real b: 4.5; property string c: "6"; } // QML file where the Constants.qml properties are used import bb.cascades 1.0 import "../common" Container { attachedObjects: [ Constants { id: constants } ] property int a: constants.a property real b: constants.b property string c: constants.c } Avoid doing this: // Constants.js var a = 123; var b = 4.5; var c = "6"; // QML file where the Constants.js properties are used import bb.cascades 1.0 import "../common/Constants.js" as Constants // Reassigning the property types Container { property int a: Constants.a property real b: Constants.b property string c: Constants.c } Don't block the UI thread Does your overall application performance seem sluggish or unresponsive? The poor performance might be due to running too many resource-intensive processes on the UI thread. Here are a few things that you should consider when loading data or running other resource intensive operations: Load data asynchronously You should always try to avoid loading data in response to a user's request for it. If possible, large amounts of data should be preloaded to prepare for a request. Loading data asynchronously is especially important when using a ListView. For small sets of data, loading data preemptively might not be as important, but when handling large sets of data, you need a DataModel that is able to continually provide data to a scrolling ListView. For more information about lists and asynchronous data providing, see Asynchronous data providing. Run resource-intensive processes on a separate thread Many resource-intensive processes in Qt are asynchronous in nature and don't require the creation of separate threads. For example, many of the QtNetwork APIs automatically process networking requests on a separate thread. When you do have resource-intensive processes that are affecting UI performance, you can use QThread to run these processes on a separate thread. And if you don't want to set up a separate thread, you can use QtConcurrent::run to run functions asynchronously. For more information about QThread, see Thread support. Start the event loop right away For apps that request a large amount of data over the network or perform heavy operations when they start, you should always start the event loop before these tasks are initiated. If you don't start the event loop by calling Application::exec() the user might experience a noticeable lag between starting the app and seeing the UI. 
Here's an example of how to start the event loop before you initiate any other operations. // Declare a class with an invokable init() method class MyClass : public QObject { public: MyClass(); ... private: Q_INVOKABLE void init(); } // Create an instance of the class in main int main(int argc, char **argv) { Application app(argc, argv); MyClass c; return Application::exec(); } // In the constructor of the instantiated class, invoke init() and // do nothing else. MyClass::MyClass() : QObject() { QMetaObject::invokeMethod(this, "init", Qt::QueuedConnection); // The Qt::QueuedConnection param specifies that the event // loop must start before init() can be executed. } // Invoked after the event loop starts MyClass::init() { // Time consuming operation such as a network request or // loading a really large UI } Starting the event loop immediately is even more important with headless apps. If the event loop for a headless operation doesn't start within 6 seconds from when the app starts, the OS automatically terminates the app. Optimize your lists Although they're very powerful and flexible, lists can also be very intensive on resources if used incorrectly. There are a number of things you can do to ensure that you're getting the best possible performance out of your lists. Define multiple ListItemComponent objects If your ListView features different types of list items, always create a specialized ListItemComponent for each variation. You should avoid using a single ListItemComponent and changing the visibility of the content depending on the data. The ListView creates every node in a list item, regardless of whether they're visible. Keep list items simple In general, it's best to keep list items simple. Extraneous controls within a ListItemComponent can slow down performance. For very large list items, consider using Dynamic QML components. Load images asynchronously In a list, always load images asynchronously so that the list can scroll while images are still loading. To load an image asynchronously, you must use the prefix with the absolute path to the image. For more information about loading images asynchronously, see Images. Use assets and colors efficiently There are several best practices that you should follow to ensure that your application uses assets efficiently. Avoid using duplicate images Using duplicate image assets in your application might seem harmless, but it can have a negative impact on performance, memory, and disk space. After an image is loaded into memory, Cascades stores it in a texture cache. When the same image is loaded again, it loads faster and doesn't take up any extra memory. However, if you have a duplicate image in your assets folder, it's treated as a new image and has to be created and stored in memory again. Unused images don't affect memory consumption at all, but they do affect disk space. Nine-slice scale your images Nine-slice scaling is typically used for background and border images since it allows an image to scale to virtually any size while keeping its corners sharp and borders uniform. Using nine-slice scaling can help minimize the size and number of image assets that you bundle with your application. For more information about nine-slice scaling, see Nine-slice scaling. Remove transparent pixels If your images contain transparent pixels to provide padding, trim the transparent pixels and create the space in your app using padding or margins instead. Trimming the transparent pixels can reduce the size of the image. 
Do this: ImageView { imageSource: "asset:///tatLogo.png" leftMargin: 10 rightMargin: 10 topMargin: 10 bottomMargin: 10 } Avoid doing this: ImageView { imageSource: "asset:///tatLogo.png" } Use tiling For backgrounds, use the tiling functionality in ImagePaintDefinition to repeat textures within a Container. Do this: // checkerboard_40x40.amd #RimCascadesAssetMetaData version=1.0 source: "checkerboard_40x40.png" repeatable: true Container { preferredWidth: 200 preferredHeight: 100 background: tile.imagePaint attachedObjects: [ ImagePaintDefinition { id: tile repeatPattern: RepeatPattern.XY imageSource: "asset:///images/checkerboard_40x40.amd" } ] } Avoid doing this: ImageView { imageSource: "checkerboard_200x100.png" } Use images of the correct size Always make sure that you're using images that are the correct size for your application. If you have an image that is 100 px wide and you want to render it at 50 px, scale the actual image instead of scaling it in the application. In addition, consider using a 3rd party application to reduce the size of PNG images that you use in your application. Smaller file sizes result in quicker loading times and use less disk space. Even if your compressed PNG is only a couple of kilobytes, it will take up at least (width x height x 4) bytes of graphics memory. Gradients are usually very well compressed in PNG format but still use the same amount of graphics memory. Replace images with colors Are you using any single-color images in your application? If possible, you should replace these with containers that have their background colors set. Do this: Container { background: Color.Red } Avoid doing this: Container { ImageView { imageSource: "asset:///bigRedImage.png" } } Reuse colors Every time the Color.create() function is called, new memory for the color is allocated. The simple solution for this problem is to use the predefined color constants whenever possible. Do this: Container { background: Color.White } Avoid doing this: Container { background: Color.create("#ffffff") } This is a very efficient solution, except that there isn't a predefined constant for every possible color. In these cases, you can define the colors as properties and reference them multiple times. property variant myColor : Color.create("#8072ff"); Container { background: myColor } But what about when you need to reference the same color in multiple QML files? You can define the color in C++ and expose it to all your QML files. Declare the color: Q_PROPERTY(bb::cascades::Color niceColor READ niceColor CONSTANT) public: // Getter that retrieves the color bb::cascades::Color niceColor(); private: // Variable for the color bb::cascades::Color m_nicecolor; Create the color and expose it to QML: m_nicecolor = Color::fromARGB(0xff00a8df); qml->setContextProperty("MyApp", this); Reference the color in QML: Container { background: MyApp.niceColor } Consider these additional optimizations If you're still experiencing some performance issues, here are a few more strategies that you can follow to improve performance. Create the UI using C++ instead of QML When creating an application, it's almost always recommended that you use QML for the UI. However, creating UI components using C++ is slightly faster. The time it takes to parse and process a QML file is slightly more than that for compiled C++ code. If you absolutely need to squeeze that last bit of performance out of your application, then you might want to create your UI with C++ instead of QML. 
Remove usage of stderr and stdout

Although stderr and stdout are very useful for debugging your application, usage of these output streams should be removed before you package your application for release. These operations are very costly to run and can negatively impact the performance of your application.
http://developer.blackberry.com/native/documentation/cascades/best_practices/performance/performance.html
CC-MAIN-2014-49
en
refinedweb
Tricks and Tips with NIO part V: SSL and NIO, friend or foe?

Over the last couple of months, I was investigating how I could add SSL support to GlassFish without having to redesign its HTTP Connector, called Grizzly. I have to admit that every time I've looked at the example included with the JDK, it felt like trying to bike on a frozen lake. This is doable, but very hard (I'm the one near the person using skis (one scary person :-)!). Maybe that's only me, but I've found it very difficult to implement properly, mostly because it didn't fit well in the Grizzly framework. Of course I could implement it as a special case in Grizzly, but I was trying to integrate SSL support and still be able to use all the tricks I've previously discussed (OP_WRITE, SelectionKey, temporary Selectors). I was aware that other NIO-based frameworks had successfully implemented it, but I wanted to come up with something before looking at their implementations. The excellent MINA framework has a very interesting way of supporting this, but I wasn't able to figure out how to re-initiate a handshake when the endpoint (most probably a Servlet) wants to look at the client certificate (CLIENT_CERT). If someone has figured out how to do it, please let me know! I would have liked to get their implementation into Grizzly. Anyway, I finally decided to implement it from scratch so I could reuse the tricks I've already described. The good news is that you can see the implementation here. Now the details.

The entry point when using SSL is the SSLEngine. The SSLEngine is associated with the lifetime of the SocketChannel, so you need to take care of reusing the same SSLEngine between registrations of the SelectionKey. Rrrr, for HTTP, that means you will most probably use SelectionKey.attach() to do it. I don't like SelectionKey.attach(..) (see here why). My problem with this is that when you implement your SelectionKey registration code (e.g. the HTTP keep-alive support), you need to create a data structure that contains the SSLEngine and a long (or worse, a ByteBuffer), and then attach it to the SelectionKey. Your data structure will most likely look like:

public class SSLAttachment {
    protected SSLEngine sslEngine;
    protected long keepAliveTime;
    protected ByteBuffer byteBuffer;
    ....
}

and you will pool them to avoid creating one every time you need to register the SelectionKey. Naaaaa, I don't like that, mostly because Grizzly, by default, doesn't attach data structures to the SelectionKey. Not to mention that we did a lot of benchmarks, and having to pool the data structure (or worse, create a new one every time you need to register) is not optimal. OK, I understand some protocols really need to do that (see all the comments here about this). Fortunately, I dug into the SSLEngine API and was able to use the SSLSession attached to an SSLEngine (SSLEngine.getSession()). Hence there is no need for an extra data structure; you just do something like:

((SSLEngine)selectionKey.attachment()).getSession().putValue(
    EXPIRE_TIME, System.currentTimeMillis());

So, when using NIO + SSL, I recommend you use SSLEngine.getSession() to store that kind of per-connection data. This way you don't have to synchronize on a pool and/or create your own data structure.
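To make the idea concrete, here is a minimal sketch of the pattern (an editor's illustration, not Grizzly's actual code): the SelectionKey attachment is just the SSLEngine itself, and the per-connection bookkeeping, such as a keep-alive expiration time, lives inside the engine's SSLSession. Only standard JDK APIs are used; the class name and the EXPIRE_TIME key are invented for illustration.

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLSession;

final class SslKeyRegistrar {

    private static final String EXPIRE_TIME = "expireTime";

    // The channel is assumed to already be in non-blocking mode.
    static SelectionKey register(SocketChannel channel, Selector selector,
                                 SSLEngine sslEngine) throws IOException {
        // The engine itself is the attachment; nothing to pool.
        SelectionKey key =
            channel.register(selector, SelectionKey.OP_READ, sslEngine);
        // Keep-alive bookkeeping rides along inside the SSLSession.
        sslEngine.getSession().putValue(EXPIRE_TIME, System.currentTimeMillis());
        return key;
    }

    // Later, a keep-alive reaper can ask whether the connection has expired.
    static boolean isExpired(SelectionKey key, long timeoutMillis) {
        SSLSession session = ((SSLEngine) key.attachment()).getSession();
        Long registeredAt = (Long) session.getValue(EXPIRE_TIME);
        return registeredAt != null
            && System.currentTimeMillis() - registeredAt > timeoutMillis;
    }
}

The point is the one made above: the engine already travels with the connection, so the bookkeeping travels with it for free.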
Another thing I wanted to support with Grizzly SSL is the temporary Selectors trick I've described here. The way Grizzly handles an HTTP request by default is to try to read all the header bytes without having to register the SelectionKey back to the main Selector. When the main Selector cannot read more bytes, I always register the SocketChannel with a temporary Selector and try to do more reads. This approach gives very good results, and I implemented the same approach for SSL. The way I did it is by:

- Read available bytes
- Invoke SSLEngine.unwrap(..)
- If all the bytes required for the handshake have been read, flush all the response bytes (using a temporary Selector trick to make sure all the bytes are flushed to the client)
- Once the handshake is done and successful, read more bytes
- If the client is still alive (it might have closed the connection), call SSLEngine.unwrap()
- Try to parse the headers. If more bytes are required, read them using a pool of temporary Selectors
- Once the headers have been read, execute the request, attach the SSLEngine to the SelectionKey and return to the main Selector

But wait… one thing I don't get with the SSLEngine is this part of completing the handshake operation:

Runnable runnable;
while ((runnable = sslEngine.getDelegatedTask()) != null) {
    runnable.run();
}

In which situation would you execute the delegated task on a separate thread? It might be protocol specific, but for HTTP it doesn't make much sense to execute it using a Thread (maybe I should try it and see what I get). Another observation: when the Servlet wants to see the certificate during its execution, you need a way to re-initiate the handshake. Something like:

protected Object[] doPeerCertificateChain(boolean force)
        throws IOException {
    Logger logger = SSLSelectorThread.logger();
    sslEngine.setNeedClientAuth(true);
    javax.security.cert.X509Certificate[] jsseCerts = null;
    try {
        jsseCerts = sslEngine.getSession().getPeerCertificateChain();
    } catch(Exception ex) {
        ;
    }
    if (jsseCerts == null)
        jsseCerts = new javax.security.cert.X509Certificate[0];

    /**
     * We need to initiate a new handshake.
     */
    if (jsseCerts.length <= 0 && force) {
        sslEngine.getSession().invalidate();
        sslEngine.beginHandshake();
        handshake = true;
        if (!doHandshake()){
            throw new IOException("Handshake failed");
        }
    }

    Certificate[] certs = null;
    try {
        certs = sslEngine.getSession().getPeerCertificates();
    } catch( Throwable t ) {
        if ( logger.isLoggable(Level.FINE))
            logger.log(Level.FINE, "Error getting client certs", t);
        return null;
    }

Here the doHandshake() implementation is the same one used for the initial handshake. This is where it gets complicated in terms of design, because the endpoint (here, the Servlet) needs a way to retrieve the SSLEngine and execute the handshake. Since the doHandshake() code is far from simple, you don't want it duplicated in several classes. For that, I've created the SSLUtil class. If you're planning to use SSL with NIO, take a look at it (and find bugs :-). The good news is that we have benchmarked this implementation and the performance is much better than when using blocking sockets. Now be aware that when you work with SSLEngine, you cannot use the X509KeyManager interface, because SSLEngine requires the use of X509ExtendedKeyManager instead.
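To illustrate the fix (again an editor's sketch, not the Grizzly code): rather than implementing javax.net.ssl.X509KeyManager directly, extend the abstract X509ExtendedKeyManager so that the SSLEngine-based alias-selection callbacks exist. The wrapper below delegates to an existing key manager; passing null for the Socket argument is a simplification that the stock SunX509 key manager tolerates, but a stricter delegate might not.

import java.net.Socket;
import java.security.Principal;
import java.security.PrivateKey;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.X509ExtendedKeyManager;
import javax.net.ssl.X509KeyManager;

// Wrap a plain X509KeyManager so SSLEngine-driven handshakes can pick a key.
final class EngineFriendlyKeyManager extends X509ExtendedKeyManager {

    private final X509KeyManager delegate;

    EngineFriendlyKeyManager(X509KeyManager delegate) {
        this.delegate = delegate;
    }

    // The two engine-aware callbacks are what SSLEngine actually needs.
    public String chooseEngineServerAlias(String keyType, Principal[] issuers,
                                          SSLEngine engine) {
        return delegate.chooseServerAlias(keyType, issuers, null);
    }

    public String chooseEngineClientAlias(String[] keyType, Principal[] issuers,
                                          SSLEngine engine) {
        return delegate.chooseClientAlias(keyType, issuers, null);
    }

    // The plain X509KeyManager methods simply delegate.
    public String chooseServerAlias(String keyType, Principal[] issuers, Socket s) {
        return delegate.chooseServerAlias(keyType, issuers, s);
    }

    public String chooseClientAlias(String[] keyType, Principal[] issuers, Socket s) {
        return delegate.chooseClientAlias(keyType, issuers, s);
    }

    public String[] getServerAliases(String keyType, Principal[] issuers) {
        return delegate.getServerAliases(keyType, issuers);
    }

    public String[] getClientAliases(String keyType, Principal[] issuers) {
        return delegate.getClientAliases(keyType, issuers);
    }

    public X509Certificate[] getCertificateChain(String alias) {
        return delegate.getCertificateChain(alias);
    }

    public PrivateKey getPrivateKey(String alias) {
        return delegate.getPrivateKey(alias);
    }
}

If you skip this and hand your SSLContext a plain X509KeyManager implementation, the engine cannot select a key and the server aborts the handshake with the error shown next.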
Now be aware that when you work with the SSLEngine, you cannot use the X509KeyManager interface, because the SSLEngine requires the use of X509ExtendedKeyManager instead. This fact is hidden in the documentation, and the exception you get is quite confusing if you fail to implement it properly:

Caused by: javax.net.ssl.SSLHandshakeException: no cipher suites in common
at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:150)
at com.sun.net.ssl.internal.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1352)
at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:176)
at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:164)
at com.sun.net.ssl.internal.ssl.ServerHandshaker.chooseCipherSuite(ServerHandshaker.java:639)
at com.sun.net.ssl.internal.ssl.ServerHandshaker.clientHello(ServerHandshaker.java:450)

I was puzzled because the blocking case was working perfectly. Anyway, just make sure you aren't using X509KeyManager. I've filed a bug against the JDK, as I suspect I will not be the only one to face this problem.

OK, this is it! As usual, feedback is more than welcome, and many thanks for all the great feedback I got on the previous blogs.

technorati: grizzly nio ssl glassfish
http://jfarcand.wordpress.com/2006/09/21/tricks-and-tips-with-nio-part-v-ssl-and-nio-friend-or-foe/
CC-MAIN-2014-49
en
refinedweb
23 October 2009 17:17 [Source: ICIS news]

By Nigel Davis

LONDON (ICIS news)--If you have sought direction from the first flush of third-quarter financial results from the sector majors or, recently, from chemicals markets themselves, then you may not have been disappointed. The talk has been of recovery, although globally the picture remains depressed.

DuPont on Tuesday said it was almost sold out of the white pigment titanium dioxide but continued to struggle in its high-strength materials businesses. Dow on Thursday talked of “early cycle growth in emerging geographies”. CEO Andrew Liveris identified Greater China. This is welcome, sequential growth although chemicals demand remains well down on last year.

Plant operating rates have been forced down. Dow’s average in the quarter was 78%, three percentage points higher than in the second quarter of the year. The story is still that of coming off the bottom of the trough with demand growth driven principally by

But, while “Challenges certainly remain,” Liveris said in a conference call, “particularly in the mature economies such as those of the

Such a message is likely to be repeated as this earnings season progresses. It is not so much a question of chemical industry executives being conservative in outlook – as suggested by one investment analyst this month. They see very little in the way of ‘green shoots’ on the economic horizon in either the

DuPont’s European businesses were dragged down in the third quarter by a poor showing from operations in c The

In sharp contrast, statistics showed this week that China’s GDP grew by 8.9% in the third quarter compared with growth of 7.9% in the second quarter. For the big western chemicals producers and their counterparts in

DuPont makes a great deal of its new product introductions. Dow is focusing more on closer to end-use market businesses following its takeover of specialty materials maker Rohm and Haas. The drive to specialise, or rather to push the higher margin businesses, and to get closer to the customer, has been accelerated by the recession. It is changing the faces of the latest chemicals producers.

Lower margin, and in broad terms, basic commodity businesses have been more starkly exposed by the recession. Some are struggling to make a comeback. Dow’s chloralkali business, for instance, remains under pressure, both, Liveris says, on the vinyl chloride monomer (VCM) side related to construction and on caustic soda because of weak fundamentals in the alumina and pulp and paper industries.

In such businesses companies have to move fast to match their manufacturing capabilities with expected ongoing levels of market demand. This is where it really hurts as operating rates remain low and plants eventually are shuttered or closed.

Dow and other companies clearly feel that the global economy is now on firmer ground. Trade is beginning to pick up, Liveris said this week. The good news is that exports from

“Market stability has improved,” he added, “but we continue to remain cautious about the ability of some economies to sustain growth. This is especially true of the

Liveris says that Dow’s 2009/10 operating plans still “do not count on material improvements in market conditions”.

For more on Dow Chemical and DuPont visit ICIS company intelligence
http://www.icis.com/Articles/2009/10/23/9257741/INSIGHT-Dont-expect-much-from-US-and-Europe.html
CC-MAIN-2014-49
en
refinedweb
Talk:How to start a grassroots group

"OLPC xxx" namespace usage

There are many pages and categories in the pattern of "OLPC country", which is perhaps best read as "OLPC volunteers in country", although sometimes it may be confused as being more narrowly defined as the local chapter organized with the name "OLPC country". The narrower construction emphasizes a specific organization over cooperative and hopefully coordinated efforts in a given country. OLPC Nepal and OLE_Nepal offer an example where there are two local organizations working on OLPC deployment in Nepal. Tagging an OLE Nepal-developed page with the category "OLPC Nepal" would hopefully not be seen as improperly crediting OLE Nepal's worthy efforts, but rather it should be seen as a way to allow newcomers to more easily navigate (using category tags) to all of the pages related to the efforts (of all parties) to help children in Nepal by deploying XO laptops and XS servers and related content efforts. See Category:Countries for a discussion of conventions on establishing a new "OLPC country" page. Further discussion of emerging standards on the use of the "OLPC country" and "OLPC language" namespaces is just getting underway with the goal of establishing some consensus to guide wiki users.

Proyecto Ceibal (also sometimes known as OLPC Ceibal or on this wiki as OLPC_Uruguay) offers an example where a local organization does not choose the "English-standard" "OLPC country" naming convention. Through its efforts, OLPC embraces localization into many languages and local organizations are, of course, free to name themselves as they choose; however, it is to be expected that English-language users on the wiki will generally look for the English-language pages related to a given country's OLPC efforts at the "OLPC country" page.

To add to the confusion, there are also many wiki pages employing the naming pattern of "OLPC city/region". This is seen mostly in North America where the G1G1 program has resulted in many XO laptops in the hands of donors in various metro areas, where they have organized themselves under "OLPC city/region" banners for meet-ups to explore the XO together and also to coordinate contributions to the broader OLPC effort. These pages generally have the "XO User Group" category tag. Cjl 02:21, 11 April 2008 (EDT)
http://wiki.laptop.org/go/Talk:How_to_start_a_grassroots_group
CC-MAIN-2014-49
en
refinedweb
12 March 2010 09:13 [Source: ICIS news]

LONDON (ICIS news)--LyondellBasell successfully restarted its 210,000 tonne/year polypropylene (PP) plant at Carrington in the UK on Friday after several failed attempts, a company source said. The restart was delayed for 12 days following a planned, month-long maintenance shutdown.

LyondellBasell had already reported tight availability, and product in the wider market was restricted, mainly due to the lack of propylene. “We will lose some business this month due to our availability restrictions,” said the source, “but it is getting easier to increase prices as the market realises that product is tight.”

Homopolymer injection PP prices were reported within a wide range in Europe, with some regional differences, but net prices were now well above €1,100/tonne ($1,507/tonne) FD (free delivered) NWE (northwest Europe).
http://www.icis.com/Articles/2010/03/12/9342151/lyondellbasell-restarts-carrington-uk-polypropylene-unit.html
CC-MAIN-2014-49
en
refinedweb
I was introduced to the open-source performance testing tool Gatling a few months ago by Dustin Barnes and fell in love with it. It has an easy-to-use DSL, and even though I don't know a lick of Scala, I was able to figure out how to use it. It creates pretty awesome graphics and takes care of a lot of work for you behind the scenes. The project has great documentation and a pretty active Google group where newbies and questions are welcomed. It ships with Scala, so all you need to do is create your tests and use the command line to execute them. I'll show you how to do a few basic things: test that you have everything working, then create nodes and relationships, and then query those nodes.

We start things off with the import statements:

import com.excilys.ebi.gatling.core.Predef._
import com.excilys.ebi.gatling.http.Predef._
import akka.util.duration._
import bootstrap._

Then we start right off with our simulation. For this first test, we are just going to get the root node via the REST API. We specify our Neo4j server (in this case I am testing on localhost, on the default port 7474; you'll want to run your test code and Neo4j server on different servers when doing this for real). Next we specify that we accept JSON in return. For our test scenario, for a duration of 10 seconds, we'll get "/db/data/node/0" and check that Neo4j returns the HTTP status code 200 (everything is OK). We'll pause between 0 and 5 milliseconds between calls to simulate actual users, and in our setup we'll specify that we want 100 users.

class GetRoot extends Simulation {

  val httpConf = httpConfig
    .baseURL("http://localhost:7474")
    .acceptHeader("application/json")

  val scn = scenario("Get Root")
    .during(10) {
      exec(
        http("get root node")
          .get("/db/data/node/0")
          .check(status.is(200)))
        .pause(0 milliseconds, 5 milliseconds)
    }

  setUp(
    scn.users(100).protocolConfig(httpConf)
  )
}

We'll call this file "GetRoot.scala" and put it in the user-files/simulations/neo4j directory:

gatling-charts-highcharts-1.4.0/user-files/simulations/neo4j/

We can run our code with:

~$ bin/gatling.sh

We'll get a prompt asking us which test we want to run:

GATLING_HOME is set to /Users/maxdemarzi/Projects/gatling-charts-highcharts-1.4.0
Choose a simulation number:
     [0] GetRoot
     [1] advanced.AdvancedExampleSimulation
     [2] basic.BasicExampleSimulation

Choose the number next to GetRoot and press enter. Next you'll get prompted for an id, or you can just go with the default by pressing enter again:

Select simulation id (default is 'getroot'). Accepted characters are a-z, A-Z, 0-9, - and _

If you want to add a description, you can:

Select run description (optional)

Finally it starts for real:

================================================================================
2013-02-14 17:18:03                                          10s elapsed
---- Get Root ------------------------------------------------------------------
Users  : [#################################################################]100%
          waiting:0 / running:0 / done:100
---- Requests ------------------------------------------------------------------
> get root node                                              OK=58457 KO=0
================================================================================

Simulation finished.
Simulation successful.
Generating reports...
Reports generated in 0s.
Please open the following file : /Users/maxdemarzi/Projects/gatling-charts-highcharts-1.4.0/results/getroot-20130214171753/index.html

The progress bar is a measure of the total number of users who have completed their task, not a measure of how much of the simulation is done, so don't worry if it stays at zero for a long while and then jumps quickly to 100%. You can also see the OK (test passed) and KO (tests failed) numbers. Lastly, it creates a great HTML-based report for us. Let's take a look:

Here you can see statistics about the response times as well as the requests per second. So that's great, we can get the root node, but that's not very interesting; let's create some nodes:

class CreateNodes extends Simulation {

  val httpConf = httpConfig
    .baseURL("http://localhost:7474")
    .acceptHeader("application/json")

  val createNode = """{"query": "create me"}"""

  val scn = scenario("Create Nodes")
    .repeat(1000) {
      exec(
        http("create node")
          .post("/db/data/cypher")
          .body(createNode)
          .asJSON
          .check(status.is(200)))
        .pause(0 milliseconds, 5 milliseconds)
    }

  setUp(
    scn.users(100).ramp(10).protocolConfig(httpConf)
  )
}

In this case, we are setting 100 users to create 1000 nodes each, with a ramp time of 10 seconds. We'll run this simulation just like before, but choose CreateNodes. Once it's done, take a look at the report, and scroll down a bit to see the chart of the Number of Requests per Second:

You can see the number of users ramp up over the first 10 seconds and fade at the end. Let's go ahead and connect some of these nodes together:

We'll add JSONObject to the import statements, and since I want to see which nodes we link together, we'll print the details of each request. I am randomly choosing two ids and passing them to a Cypher query to create the relationships:

import com.excilys.ebi.gatling.core.Predef._
import com.excilys.ebi.gatling.http.Predef._
import akka.util.duration._
import bootstrap._
import util.parsing.json.JSONObject

class CreateRelationships extends Simulation {

  val httpConf = httpConfig
    .baseURL("http://localhost:7474")
    .acceptHeader("application/json")
    .requestInfoExtractor(request => {
      println(request.getStringData)
      Nil
    })

  val rnd = new scala.util.Random

  val chooseRandomNodes = exec((session) => {
    session.setAttribute("params", JSONObject(Map("id1" -> rnd.nextInt(100000), "id2" -> rnd.nextInt(100000))).toString())
  })

  val createRelationship = """START node1=node({id1}), node2=node({id2}) CREATE UNIQUE node1-[:KNOWS]->node2"""

  val cypherQuery = """{"query": "%s", "params": %s }""".format(createRelationship, "${params}")

  val scn = scenario("Create Relationships")
    .during(30) {
      exec(chooseRandomNodes)
        .exec(
          http("create relationships")
            .post("/db/data/cypher")
            .header("X-Stream", "true")
            .body(cypherQuery)
            .asJSON
            .check(status.is(200)))
        .pause(0 milliseconds, 5 milliseconds)
    }

  setUp(
    scn.users(100).ramp(10).protocolConfig(httpConf)
  )
}

When you run this, you'll see a stream of the parameters we sent to our POST request:

{"query": "START node1=node({id1}), node2=node({id2}) CREATE UNIQUE node1-[:KNOWS]->node2", "params": {"id1" : 98468, "id2" : 20147} }
{"query": "START node1=node({id1}), node2=node({id2}) CREATE UNIQUE node1-[:KNOWS]->node2", "params": {"id1" : 83557, "id2" : 26633} }
{"query": "START node1=node({id1}), node2=node({id2}) CREATE UNIQUE node1-[:KNOWS]->node2", "params": {"id1" : 22386, "id2" : 99139} }

You can turn this off, but I just wanted to make sure the ids were random, and it helps when debugging. Now we can query the graph.
For this next simulation, I want to see the answers returned from Neo4j, and I want to see the nodes related to 10 random nodes passed in as a JSON array. Notice it's a bit different from before, and we are also checking that we got "data" back in our response.

import com.excilys.ebi.gatling.core.Predef._
import com.excilys.ebi.gatling.http.Predef._
import akka.util.duration._
import bootstrap._
import util.parsing.json.JSONArray

class QueryGraph extends Simulation {

  val httpConf = httpConfig
    .baseURL("http://localhost:7474")
    .acceptHeader("application/json")
    .responseInfoExtractor(response => {
      println(response.getResponseBody)
      Nil
    })
    .disableResponseChunksDiscarding

  val rnd = new scala.util.Random
  val nodeRange = 1 to 100000

  val chooseRandomNodes = exec((session) => {
    session.setAttribute("node_ids", JSONArray.apply(List.fill(10)(nodeRange(rnd.nextInt(nodeRange length)))).toString())
  })

  val getNodes = """START nodes=node({ids}) MATCH nodes -[:KNOWS]-> other_nodes RETURN ID(other_nodes)"""

  val cypherQuery = """{"query": "%s", "params": {"ids": %s}}""".format(getNodes, "${node_ids}")

  val scn = scenario("Query Graph")
    .during(30) {
      exec(chooseRandomNodes)
        .exec(
          http("query graph")
            .post("/db/data/cypher")
            .header("X-Stream", "true")
            .body(cypherQuery)
            .asJSON
            .check(status.is(200))
            .check(jsonPath("data")))
        .pause(0 milliseconds, 5 milliseconds)
    }

  setUp(
    scn.users(100).ramp(10).protocolConfig(httpConf)
  )
}

If we take a look at the details tab for this simulation, we see a small spike in the middle:

This is a tell-tale sign of a JVM garbage collection taking place, and we may want to look into that. Edit your neo4j/conf/neo4j-wrapper.conf file and uncomment the garbage collection logging, as well as add timestamps, to gain better visibility into the issue:

# Uncomment the following line to enable garbage collection logging
wrapper.java.additional.4=-Xloggc:data/log/neo4j-gc.log
wrapper.java.additional.5=-XX:+PrintGCDateStamps

Neo4j performance tuning deserves its own blog post, but at least now you have a great way of testing your performance as you tweak JVM, cache, hardware, load balancing, and other parameters. Don't forget that while testing Neo4j directly is pretty cool, you can also use Gatling to test your whole web application and measure end-to-end performance.
http://java.dzone.com/articles/neo4j-and-gatling-sitting-tree
CC-MAIN-2014-49
en
refinedweb
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;

public class Form1 : Form
{
    private ToolStripContainer toolStripContainer1;
    private ToolStrip toolStrip1;

    public Form1()
    {
        InitializeComponent();
    }

    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.Run(new Form1());
    }

    private void InitializeComponent()
    {
        toolStripContainer1 = new System.Windows.Forms.ToolStripContainer();
        toolStrip1 = new System.Windows.Forms.ToolStrip();

        // Add items to the ToolStrip.
        toolStrip1.Items.Add("One");
        toolStrip1.Items.Add("Two");
        toolStrip1.Items.Add("Three");

        // Add the ToolStrip to the top panel of the ToolStripContainer.
        toolStripContainer1.TopToolStripPanel.Controls.Add(toolStrip1);

        // Add the ToolStripContainer to the form.
        Controls.Add(toolStripContainer1);
    }
}
http://msdn.microsoft.com/en-us/library/system.windows.forms.toolstrip.aspx
CC-MAIN-2014-49
en
refinedweb
11 August 2010 05:59 [Source: ICIS news]

By John Richardson

SINGAPORE (ICIS)--China will account for around one-third of global polypropylene (PP) consumption by the middle of this decade, up from the current 25%, as domestic demand continues to grow at more than 10% a year, a consultant said on Wednesday.

Mike Smith, vice-president for propylene and derivatives with petrochemical consultancy DeWitt & Co, said the growing demand would be fed both by new capacities in

New capacities will outpace demand growth for the next few years, with global average operating rates below 90% up until 2014, he added. Twelve million tonnes per year of capacity is due on-stream in 2009-11 in the Middle East and Asia, comprising 4.2m tonne/year in the Middle East, 5m tonne/year in Northeast Asia, mostly in China, and 2.8m tonne/year in Southeast Asia, he said.

The US and Exports helped support a rapid recovery in the

Inventory rebuilding is being boosted by improved demand from the consumer electrical goods and automobile sectors.

The North America has already seen PP capacity reduced by a net 700,000 tonne/year due to closures in the

Further announcements of capacity closures were possible in both regions, he warned. The decline in the

There has been a lot of talk about the influence of Petrologistics’ 544,000 tonne/year propylene facility on

But Smith said that the plant will add only 3.5% to total US C3s supply.

European refinery propylene availability should improve as the economy picks up, but Smith warned that this could be offset by weaker gasoline exports to the

Steam cracker operating rates in Europe could also come under downward pressure from ethylene derivative imports from the
http://www.icis.com/Articles/2010/08/11/9383898/china-to-account-for-one-third-of-global-pp-consumption-by-2015.html
CC-MAIN-2014-49
en
refinedweb
I host an entire server to send content between two processes running on the same machine. That's what IPC is for... So why do people want to use it then????? Well, that's an easy question to answer: they like it, the API is simple, and it means they can make REST-based web apps inside Electron with an architecture they are used to.

So what's the alternative?

So I got bored one day and made a tool called electron-router. Basically it wraps some Electron APIs (primarily registerStringProtocol and registerStandardSchemes) and presents you with an API incredibly similar to how Express operates. Here is a super simple example of what you can do with electron-router.

// In the main process
import { Router } from '@marshallofsound/electron-router'

const api = new Router('myapp');

let me = { name: 'Samuel' };

api.get('me', (req, res) => {
  res.json(me);
});

api.post('me', (req, res) => {
  me = req.uploadData[0].json();
  res.json({ status: 'success' });
});

Now from the renderer process we can use our router API like any other HTTP endpoint, just with our custom scheme out the front. For example:

// In the renderer process
import { rendererPreload } from '@marshallofsound/electron-router';

rendererPreload(); // This has to be called once per renderer process

fetch('myapp://me')
  .then(resp => resp.json())
  .then(me => console.log(me.name)); // Samuel

fetch('myapp://me', {
  method: 'POST',
  body: JSON.stringify({ name: 'Jimmy' })
}).then(() => {
  fetch('myapp://me')
    .then(resp => resp.json())
    .then(me => console.log(me.name)); // Jimmy
});

Just like Express, you can use all the standard HTTP methods: router.get, router.post, router.put, router.delete and, of course, the Express-style router.use. This use method works in exactly the same way as in Express: the .use handlers are always called first, so you can use .use to pass information through to later .[method] listeners.

Just like Express, you can also call .use with extra routers. To do that with electron-router you simply import "MiniRouter" and use it just like the top-level router. E.g.

import { Router, MiniRouter } from '@marshallofsound/electron-router';

const api = new Router();
const subAPI = new MiniRouter();

api.use('sub/thing', subAPI);

subAPI.get('magic', (req, res) => res.send('Hello'));

I've personally started using this as a semi-replacement for the renderer -> main -> renderer IPC communication pathway that most people currently use. I've found it a lot closer to what people coming from web technology backgrounds are used to, and it has greatly improved both the speed at which I develop and the code readability.

Let me know what you think of this module in the comments below, totally not biased or anything. But I think it's pretty awesome :D
https://blog.samuelattard.com/using-express-inside-electron/
CC-MAIN-2021-31
en
refinedweb
Turtle is a render engine mostly used for preparing game assets and baking textures/lights. Although it may be useful for some people, a large part of the industry does not need it. The problem is, Turtle nodes are very persistent. Once the plugin is activated, it creates a couple of locked nodes which you cannot delete easily. Moreover, if that scene is opened on any other workstation, these nodes force the Turtle plugin to load. Once the plugin is loaded, it stays that way until you exit Maya. So even if you close the scene and open a new one, since you have already activated the plugin, it will create those persistent nodes in any other scene opened in that Maya session. To put it simply, once it's activated, Turtle nodes spread like a virus through a studio or work group 🙂

There are various ways to get rid of the Turtle. Some of them are permanent. The code below is a very simple solution to delete the locked Turtle nodes. After deleting the nodes it unloads the plugin too. Otherwise the plugin will continue to create these nodes on each save/open.

import pymel.core as pm

def killTurtle():
    try:
        pm.lockNode( 'TurtleDefaultBakeLayer', lock=False )
        pm.delete('TurtleDefaultBakeLayer')
    except:
        pass
    try:
        pm.lockNode( 'TurtleBakeLayerManager', lock=False )
        pm.delete('TurtleBakeLayerManager')
    except:
        pass
    try:
        pm.lockNode( 'TurtleRenderOptions', lock=False )
        pm.delete('TurtleRenderOptions')
    except:
        pass
    try:
        pm.lockNode( 'TurtleUIOptions', lock=False )
        pm.delete('TurtleUIOptions')
    except:
        pass
    pm.unloadPlugin("Turtle.mll")

killTurtle()
https://www.ardakutlu.com/maya-tips-tricks-kill-the-turtle/
CC-MAIN-2021-31
en
refinedweb