Create a function that takes a number as an argument and returns a grade based on that number.

    Score                                 Grade
    greater than 1 or less than 0.6       "F"
    0.9 or greater                        "A"
    0.8 or greater                        "B"
    0.7 or greater                        "C"
    0.6 or greater                        "D"

Here is my attempt (with the out-of-range check moved first, so that scores above 1 are not graded "A"):

    def grader(score)
      if score > 1 || score < 0.6
        return "F"
      elsif score >= 0.9
        return "A"
      elsif score >= 0.8
        return "B"
      elsif score >= 0.7
        return "C"
      else
        return "D"
      end
    end

I'll suggest using a case statement for that purpose:

    def grader(score)
      case score
      when 0.9..1    then 'A'
      when 0.8...0.9 then 'B'
      when 0.7...0.8 then 'C'
      when 0.6...0.7 then 'D'
      else                'F'
      end
    end
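A few sample calls against the case-based version; the values are illustrative, not from the original thread. Note that three-dot ranges exclude their end, so 0.9 falls through 0.8...0.9 into the inclusive 0.9..1:

    grader(0.95)  # => "A"
    grader(0.9)   # => "A"  (0.9..1 includes 0.9)
    grader(0.85)  # => "B"
    grader(1.5)   # => "F"  (matches no range, falls to else)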
https://codedump.io/share/FgltD3JuNrGK/1/what-are-the-various-ways-of-doing-this-code
CC-MAIN-2017-13
en
refinedweb
Any possibility to get a list of the pairs of atoms that are matched using the align command?

cheers, marc

Dear Simon and other pymolers,

I have realised that the solution to getting cctbx working with pymol on Windows I posted no longer works with recent builds of cctbx. I have found that the following solution works:

3. Download pymol built against python 2.4 (but not including its own python) and install in the default location. (You cannot use the latest beta versions, which include their own version of python, to the best of my knowledge.)

4. Create 2 files (use notepad or wordpad or any other text editor) and save them in the C:\Program Files\Delano Scientific\PyMOL directory:

a) pymol.cmd

    @python -x "%~f0" %* & exit /b
    import cctbx
    import pymol

b) run.cmd

    CALL C:\cctbx_build\setpaths_all.bat
    CALL "C:\Program Files\Delano Scientific\PyMOL\pymol.cmd"

5. One other thing: it's important to have python in your path variable (which you can access by going to Control Panel | System | Advanced | Environment Variables). Just add C:\python24 to the end of the path variable, separated by a semicolon.

Hopefully this should work OK... I know it is working on at least one other system than my own. Let me know if it works for you. I'll post this up on the wiki ASAP.

Cheers
Roger
https://sourceforge.net/p/pymol/mailman/message/10098981/
CC-MAIN-2017-13
en
refinedweb
$NetBSD: README.dirs,v 1.11 2010/05/11 11:58:14 pooka Exp $

The following is a quick rundown of the current directory structure.

First, components in the kernel namespace, i.e. compiled with -D_KERNEL:

sys/rump/librump - kernel runtime emulation
    /rumpkern   - kernel core, e.g. syscall, interrupt and lock support
    /rumpcrypto - kernel cryptographic routines
    /rumpdev    - device support, e.g. autoconf subsystem
    /rumpnet    - networking support and sockets layer
    /rumpvfs    - file system support

sys/rump/include
    /machine - used for architectures where the rump ABI is not yet the
      same as the kernel module ABI.  will eventually disappear completely
    /rump    - rump headers installed to userspace

sys/rump/dev - device components, e.g. audio, raidframe, usb drivers

sys/rump/fs - file system components
    /lib/lib${fs} - kernel file system code

sys/rump/net - networking components
    /lib/libnet     - subroutines from sys/net, e.g. route and if_ethersubr
    /lib/libnetinet - TCP/IP
    /lib/libvirtif  - a virtual interface which uses host tap(4) to shovel
      packets.  This is used by netinet and if_ethersubr.
    /lib/libsockin  - implements PF_INET using host kernel sockets.  This is
      mutually exclusive with net, netinet and virtif.

The rest are out-of-kernel components (i.e. no -D_KERNEL) related to rump.

hypercall interface: src/lib/librumpuser
    The "rumpuser" set of interfaces is used by rump to communicate with
    the host.

Users:
src/lib
    /libp2k  - puffs-to-vfs adaption layer, userspace namespace
    /libukfs - user kernel file system, a library to access file system
      images (or devices) directly in userspace without going through a
      system call and puffs.  It provides a slightly higher interface than
      pure rump syscalls.

src/usr.sbin/puffs
    rump_$fs - userspace file system daemons using the kernel fs code

src/share/examples/rump
    Various examples detailing use of rump in different scenarios.  These
    are provided source-only.
http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/rump/README.dirs?rev=1.11&content-type=text/x-cvsweb-markup
CC-MAIN-2017-22
en
refinedweb
Looks like John's comment on yesterday's post had us both up half the night working on the same thing. John's solution uses B.class.isAssignableFrom(a.getClass()) while mine uses a instanceof B. I think either works in this case because a List can only hold objects, not primitives. If we were dealing with primitives, John's solution would handle more cases than mine. Not sure if there are performance considerations, but I doubt they would be significant.

    import java.util.ArrayList;
    import java.util.List;

    public class TypeSafe {

        public enum Check { NONE, FIRST, ALL; }

        @SuppressWarnings({"unchecked"})
        public static <T> List<T> typeSafeList(List list, Class<T> clazz, Check safety) {
            if (clazz == null) {
                throw new IllegalArgumentException(
                        "typeSafeList() requires a non-null class parameter");
            }
            // Should we perform any checks?
            if (safety != Check.NONE) {
                if ( (list != null) && (list.size() > 0) ) {
                    for (Object item : list) {
                        // Check against the passed-in class, not a hard-coded type.
                        if ( (item != null) && !clazz.isInstance(item) ) {
                            throw new ClassCastException(
                                    "List contained a(n) " +
                                    item.getClass().getCanonicalName() +
                                    " which is not a(n) " +
                                    clazz.getCanonicalName());
                        }
                        // Should we stop after the first item?
                        if (safety == Check.FIRST) {
                            break;
                        }
                        // Default (Check.ALL) checks every item in the list.
                    }
                }
            } // end if perform any checks
            return (List<T>) list;
        } // end typeSafeList()

        public static List<String> coerceToStringList(List list) {
            if (list == null) {
                return null;
            }
            try {
                // Return the old list if it's already safe
                return typeSafeList(list, String.class, Check.ALL);
            } catch (ClassCastException cce) {
                // Old list is not safe. Make a new one.
                List<String> stringList = new ArrayList<String>();
                for (Object item : list) {
                    if (item == null) {
                        stringList.add(null);
                    } else if (item instanceof String) {
                        stringList.add((String) item);
                    } else {
                        // If this throws a ClassCastException, so be it.
                        stringList.add(String.valueOf(item));
                    }
                }
                return stringList;
            }
        } // end coerceToStringList()

        @SuppressWarnings({"unchecked"})
        private static List makeTestList() {
            List l = new ArrayList();
            l.add("Hello");
            l.add(null);
            l.add(new Integer(3));
            l.add("world");
            return l;
        } // end makeTestList()

        public static void main(String[] args) {
            List unsafeList = makeTestList();

            List<String> stringList = coerceToStringList(unsafeList);
            System.out.println("Coerced strings:");
            for (String s : stringList) {
                System.out.println(s);
            }

            // Cast the coerced list; casting the raw unsafeList here would
            // throw a ClassCastException because it still holds an Integer.
            List<String> safeList = typeSafeList(stringList, String.class, Check.ALL);
            System.out.println("Safe-casted list:");
            for (String s : safeList) {
                System.out.println(s);
            }
        } // end main()
    }

1 comment:

We were both up very late. Nicely done. I need to look at it in more detail late tonight! ;-)
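As an addendum, a minimal sketch of the two checks compared at the top of the post; this is my own illustration, not code from either post:

    Object a = "hello";

    // Works when the target type is known at compile time:
    boolean byInstanceof = a instanceof String;                          // true

    // Works when the type is only available as a Class object:
    boolean byAssignable = String.class.isAssignableFrom(a.getClass()); // true

    // The reflective equivalent of instanceof for a runtime Class:
    Class<?> clazz = String.class;
    boolean byIsInstance = clazz.isInstance(a);                          // true
    // ("a instanceof clazz" would not compile.)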
http://glenpeterson.blogspot.com/2011/08/looks-like-johns-comment-yesterday-had.html
CC-MAIN-2017-22
en
refinedweb
JavaScript - TypeScript: Making .NET Developers Comfortable with JavaScript

By Shayne Boyer | January 2013

The growth and capability of JavaScript applications is huge. Node.js, an entire platform for developing scalable JavaScript applications, has become enormously popular, and it's even deployable on Azure. Moreover, JavaScript can be used with HTML5 for game development, mobile applications and even Windows Store apps. As a .NET developer, you can't ignore the capabilities of JavaScript, nor its spread in the marketplace. When I make this statement to colleagues, I often hear groans about how JavaScript is hard to work with: there's no strong typing, no class structures. I combat such arguments by responding that JavaScript is a functional language and there are patterns for accomplishing what you want. This is where TypeScript comes in.

TypeScript isn't a new language. It's a superset of JavaScript, a powerful, typed superset, which means that all JavaScript is valid TypeScript, and what is produced by the compiler is JavaScript. TypeScript is an open source project, and all of the information related to the project can be found at typescriptlang.org. At the time of this writing, TypeScript was in preview version 0.8.1. In this article, I'll cover the basic concepts of TypeScript in the form of classes, modules and types, to show how a .NET developer can become more comfortable with a JavaScript project.

Classes

If you work with languages such as C# or Visual Basic .NET, classes are a familiar concept to you. In JavaScript, classes and inheritance are accomplished through patterns such as closures and prototypes. TypeScript introduces the classical type syntax you're used to, and the compiler produces the JavaScript that accomplishes the intent. Consider a simple JavaScript object literal such as a car with wheels and doors properties. It seems simple and straightforward. However, .NET developers have been hesitant to really get into JavaScript due to its loose approach to object definition. The car object can have additional properties added later without enforcement and without knowing what data type each represents, and thus throw exceptions during runtime. How does the TypeScript class model definition change this, and how do we inherit and extend car? Consider the example in Figure 1.

Figure 1 Objects in TypeScript and JavaScript

On the left is a nicely defined class object called car, with the properties wheels and doors. On the right, the JavaScript produced by the TypeScript compiler is almost the same. The only difference is the Auto variable. In the TypeScript editor, you can't add an additional property without getting a warning. You can't simply start with a statement such as car.trunk = 1. The compiler will complain, "No trunk property exists on Auto," which is a godsend to anyone who has ever had to track down this gotcha caused by the flexibility, or, depending on your perspective, the "laziness" of JavaScript.
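The article's figures are screenshots and aren't reproduced in this text version. A minimal sketch of the kind of class Figure 1 describes might look like the following; the property names come from the prose, the rest is my reconstruction:

    // TypeScript: a class with typed properties.
    class Auto {
        wheels: number;
        doors: number;
    }

    var car = new Auto();
    car.wheels = 4;
    car.doors = 2;
    // car.trunk = 1;  // compile-time error: no 'trunk' property exists on Auto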
Constructors, though available in JavaScript, are enhanced by the TypeScript tooling, again by enforcing the creation of the object at compile time and not allowing the object to be created without passing in the proper elements and types in the call. Not only can you add the constructor to the class, but you can make the parameters optional, set a default value or shortcut the property declaration. Let's look at three examples that show just how powerful TypeScript can be.

Figure 2 shows the first example, a simple constructor in which the class is initialized by passing in the wheels and doors parameters (represented here by w and d). The produced JavaScript (on the right) is almost equivalent, but as the dynamics and needs of your application expand, that won't always be the case.

Figure 2 A Simple Constructor

In Figure 3, I've modified the code in Figure 2, defaulting the wheels parameter (w) to 4 and making the doors parameter (d) optional by inserting a question mark to the right of it. Notice, as in the previous example, that the pattern of setting the instance property to the arguments is a common practice that uses the "this" keyword.

Figure 3 A Simple Constructor, Modified

Here's a feature I'd love to see in the .NET languages: being able to simply add the public keyword before the parameter name in the constructor to declare the property on the class. The private keyword is available and accomplishes the same auto declaration, but hides the property of the class. Default values, optional parameters and type annotations are extended with the TypeScript auto property declaration feature, making it a nice shortcut, and making you more productive. Compare the script in Figure 4, and you can see the differences in complexity start to surface.

Figure 4 The Auto Declaration Feature

Classes in TypeScript also provide inheritance. Staying with the Auto example, you can create a Motorcycle class that extends the initial class. In Figure 5, I also add drive and stop functions to the base class. Adding the Motorcycle class, which inherits from Auto and sets the appropriate properties for doors and wheels, is accomplished with a few lines of code in TypeScript.

One important thing to mention here is that, at the top of the compiler-produced JavaScript, you'll see a small function called "__extends," as shown in Figure 6, which is the only code ever injected into the resulting JavaScript. This is a helper that assists in the inheritance functionality. As a side note, this helper function has the exact same signature regardless of the source, so if you're organizing your JavaScript in multiple files and use a utility such as SquishIt or Web Essentials to combine your scripts, you might get an error depending on how the utility rectifies duplicated functions.

    var __extends = this.__extends || function (d, b) {
        function __() { this.constructor = d; }
        __.prototype = b.prototype;
        d.prototype = new __();
    }
    var Auto = (function () {
        function Auto(mph, wheels, doors) {
            if (typeof mph === "undefined") { mph = 0; }
            if (typeof wheels === "undefined") { wheels = 4; }
            this.mph = mph;
            this.wheels = wheels;
            this.doors = doors;
        }
        Auto.prototype.drive = function (speed) {
            this.mph += speed;
        };
        Auto.prototype.stop = function () {
            this.mph = 0;
        };
        return Auto;
    })();
    var Motorcycle = (function (_super) {
        __extends(Motorcycle, _super);
        function Motorcycle() {
            _super.apply(this, arguments);
            this.doors = 0;
            this.wheels = 2;
        }
        return Motorcycle;
    })(Auto);
    var bike = new Motorcycle();
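The TypeScript source shown in Figures 2 through 5 isn't reproduced, but it can be reconstructed with reasonable confidence from the prose and from the compiled JavaScript above; treat the following as my sketch rather than the article's exact listings:

    // Figure 2: a simple constructor.
    class Auto {
        wheels: number;
        doors: number;
        constructor (w: number, d: number) {
            this.wheels = w;
            this.doors = d;
        }
    }

    // Figures 3 and 4: a default for wheels, an optional doors parameter,
    // and the 'public' shortcut that declares and assigns the properties.
    // Figure 5: drive and stop methods on the base class, plus a subclass.
    class Auto2 {
        constructor (public mph: number = 0,
                     public wheels: number = 4,
                     public doors?: number) { }
        drive(speed: number) { this.mph += speed; }
        stop() { this.mph = 0; }
    }

    class Motorcycle extends Auto2 {
        doors = 0;
        wheels = 2;
    }
    var bike = new Motorcycle();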
Modules

Modules in TypeScript are the equivalent of namespaces in the .NET Framework. They're a great way to organize your code and to encapsulate business rules and processes that would not be possible without this functionality (JavaScript doesn't have a built-in way to provide this function). The module pattern, or dynamic namespacing, as in jQuery, is the most common pattern for namespaces in JavaScript. TypeScript modules simplify the syntax and produce the same effect.

In the Auto example, you can wrap the code in a module and expose only the Motorcycle class, as shown in Figure 7. The Example module encapsulates the base class, and the Motorcycle class is exposed by prefixing it with the export keyword. This allows an instance of Motorcycle to be created and all of its methods to be used, but the Auto base class is hidden.

Another nice benefit of modules is that you can merge them. If you create another module also named Example, TypeScript assumes that the code in the first module and the code in the new module are both accessible through Example statements, just as with namespaces. Modules facilitate the maintainability and organization of your code. With them, sustaining large-scale applications becomes less of a burden on development teams.

Types

The lack of type safety is one of the louder complaints I've heard from developers who don't swim in the JavaScript pool every day. But type safety is available in TypeScript (that's why it's called TypeScript), and it goes beyond just declaring a variable as a string or a Boolean. In JavaScript, the practice of assigning foo to x and then later in the code assigning 11 to x is perfectly acceptable, but it can drive you mad when you're trying to figure out why you're getting the ever-present NaN during runtime.

The type safety feature is one of the biggest advantages of TypeScript, and there are four inherent types: string, number, bool and any. Figure 8 shows the syntax for declaring the type of the variable s and the IntelliSense that the compiler provides once it knows what actions you can perform based on the type.

Figure 8 An Example of TypeScript IntelliSense

Beyond allowing the typing of a variable or function, TypeScript has the ability to infer types. You can create a function that simply returns a string. Knowing that, the compiler and tools provide type inference and automatically show the operations that can be performed on the return, as you can see in Figure 9.

Figure 9 An Example of Type Inference

The benefit here is that you see that the return is a string without having to guess. Type inference is a major help when it comes to working with other libraries that developers reference in their code, such as jQuery or even the Document Object Model (DOM).

The other way to take advantage of the type system is through annotations. Looking back, the original Auto class was declared with just wheels and doors. Now, through annotations, we can ensure that the proper types are set when creating the instance of Auto in car. In the JavaScript that's produced, the annotations are compiled away, so there's no fat and no additional dependencies to worry about. The benefit again is strong typing and, additionally, eliminating the simple errors that are generally found during runtime.

Interfaces provide another example of the type safety offered in TypeScript. Interfaces allow you to define the shape of an object. In Figure 10, a new method named travel has been added to the Auto class, and it accepts a parameter with a type of Trip. If you try to call the travel method with anything other than the correct structure, the design-time compiler gives you an error. In comparison, if you entered this code in JavaScript, say into a .js file, most likely you wouldn't catch an error like this until you ran the application. In Figure 11, you can see that leveraging type annotations strongly assists not only the initial developer but also any subsequent developer who has to maintain the source.

Figure 11 Annotations Assist in Maintaining Your Code
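Here is a sketch of the module and interface ideas from Figures 7 and 10. The article doesn't spell out the shape of Trip, so its fields below are invented for illustration:

    module Example {
        class Auto {
            constructor (public wheels: number = 4, public doors?: number) { }
            // Figure 10: travel takes a parameter typed as an interface.
            travel(t: Trip) { /* ... */ }
        }

        // Only Motorcycle is exported; the Auto base class stays hidden.
        export class Motorcycle extends Auto {
        }
    }

    // An interface defines the shape of an object.
    interface Trip {
        destination: string;  // invented field
        miles: number;        // invented field
    }

    var bike = new Example.Motorcycle();
    // bike.travel("Orlando");            // design-time error: not a Trip
    bike.travel({ destination: "Orlando", miles: 450 });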
Existing Code and Libraries

So what about your existing JavaScript code, or what if you love building on top of Node.js or use libraries such as toastr, Knockout or jQuery? TypeScript has declaration files to help. First, remember that all JavaScript is valid TypeScript. So if you have something homegrown, you can copy that code right into the designer and the compiler will produce the JavaScript one for one. The better option is to create your own declaration file. For the major libraries and frameworks, a gentleman by the name of Boris Yankov (twitter.com/borisyankov) has created a nice repository on GitHub (github.com/borisyankov/DefinitelyTyped) that has a number of declaration files for some of the most popular JavaScript libraries. This is exactly what the TypeScript team hoped would happen. By the way, the Node.js declaration file was created by the TypeScript team and is available as a part of the source code.

Creating a Declaration File

If you can't locate the declaration file for your library, or if you're working with your own code, you'll need to create a declaration file. You start by copying the JavaScript into the TypeScript side and adding the type definitions, and then use the command-line tool to generate the definition file (*.d.ts) to reference. Figure 12 shows a simple script for calculating grade point average in JavaScript. I copied the script into the left side of the editor and added the annotations for the types, and I'll save the file with the .ts extension.

Figure 12 Creating a Declaration File

Next, I'll open a command prompt and use the TypeScript command-line tool to create the definition file and resulting JavaScript:

    tsc c:\gradeAverage.ts --declarations

The compiler creates two files: gradeAverage.d.ts is the declaration file and gradeAverage.js is the JavaScript file. In any future TypeScript files that need the gradeAverage functionality, I simply add a reference at the top of the editor like this:

    /// <reference path="gradeAverage.d.ts" />

Then all the typing and tooling is highlighted when referencing this library, and that's the case with any of the major libraries you may find at the DefinitelyTyped GitHub repository.
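Figure 12 itself is an image; a plausible version of the annotated script and the declaration the compiler would emit for it, assuming the function simply averages an array of grades (my reconstruction, not the article's exact listing):

    // gradeAverage.ts: the original JavaScript with type annotations added.
    function gradeAverage(grades: number[]): number {
        var total = 0;
        for (var i = 0; i < grades.length; i++) {
            total += grades[i];
        }
        return grades.length > 0 ? total / grades.length : 0;
    }

    // gradeAverage.d.ts, as generated by the tsc command above:
    // declare function gradeAverage(grades: number[]): number;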
A great feature the compiler brings in declaration files is the ability to auto-traverse the references. What this means is that if you reference a declaration file for jQueryUI, which in turn references jQuery, your current TypeScript file will get the benefit of statement completion and will see the function signatures and types just as if you had referenced jQuery directly. You can also create a single declaration file, say "myRef.d.ts", that contains the references to all the libraries you intend to use in your solution, and then make just a single reference in any of your TypeScript code.

Windows 8 and TypeScript

With HTML5 a first-class citizen in the development of Windows Store apps, developers have wondered whether TypeScript can be used with these types of apps. The short answer is yes, but some setup is needed in order to do so. At the time of this writing, the tooling available either through the Visual Studio Installer or other extensions hasn't completely enabled the templates within the JavaScript Windows Store app templates in Visual Studio 2012. There are three key declaration files available in the source code at typescript.codeplex.com: winjs.d.ts, winrt.d.ts and lib.d.ts.

Referencing these files will give you access to the WinJS and WinRT JavaScript libraries that are used in this environment for accessing the camera, system resources and so forth. You may also add references to jQuery to get the IntelliSense and type safety features I've mentioned in this article. Figure 13 is a quick example that shows the use of these libraries to access a user's geolocation and populate a Location class. The code then creates an HTML image tag and adds a static map from the Bing Map API.

    /// <reference path="winjs.d.ts" />
    /// <reference path="winrt.d.ts" />
    /// <reference path="jquery.d.ts" />
    module Data {
        class Location {
            longitude: any;
            latitude: any;
            url: string;
            retrieved: string;
        }

        var locator = new Windows.Devices.Geolocation.Geolocator();
        locator.getGeopositionAsync().then(function (pos) {
            var myLoc = new Location();
            myLoc.latitude = pos.coordinate.latitude;
            myLoc.longitude = pos.coordinate.longitude;
            myLoc.retrieved = Date.now.toString();
            myLoc.url = "" + myLoc.latitude + "," + myLoc.longitude +
                "15?mapSize=500,500&pp=47.620495,-122.34931;21;AA&pp=" +
                myLoc.latitude + "," + myLoc.longitude + ";;AB&pp=" +
                myLoc.latitude + "," + myLoc.longitude + ";22&key=BingMapsKey";
            var img = document.createElement("img");
            img.setAttribute("src", myLoc.url);
            img.setAttribute("style", "height:500px;width:500px;");
            var p = $("p");
            p.append(img);
        });
    };

Wrapping Up

The features TypeScript adds to JavaScript development are small, but they yield large benefits to .NET developers who are accustomed to similar features in the languages they use for regular Windows application development. TypeScript is not a silver bullet, and it's not intended to be. But for anyone who's hesitant to jump into JavaScript, TypeScript is a great language that can ease the journey.

Shayne Boyer is a Telerik MVP, Nokia Developer Champion, MCP, INETA speaker and a solutions architect in Orlando, Fla. He has been developing Microsoft-based solutions for the past 15 years. Over the past 10 years, he has worked on large-scale Web applications, with a focus on productivity and performance. In his spare time, Boyer runs the Orlando Windows Phone and Windows 8 User Group, and blogs about the latest technology at tattoocoder.com.

Thanks to the following technical expert for reviewing this article: Christopher Bennage
https://msdn.microsoft.com/en-us/magazine/jj883955.aspx
CC-MAIN-2017-22
en
refinedweb
dird

On older SCO, you could print the contents of a directory file in hexadecimal format by the command "hd ." (current directory). With SCO OpenServer you get only 0000. That's not good :-)

Typically, the reason for doing this was either to see the order of the files in the directory (so you could tell how far along a backup or restore was that you were watching on another screen) or to look for "holes" (so that you could rearrange the order of entries by judicious copying and deleting). Why would you want to do that? If you have to ask, you'll probably never want to, but in some situations involving large directories and ultra-critical performance, it's worth moving frequently accessed files to the "top" (assuming the namei cache can't help you, perhaps because you need that boost on the *first* access). Yet another reason was to spot garbage characters in file names.

If all you need is the order of file names in the slots, ls -f will do it. Otherwise, you can get some of the remaining functionality from a simple C program like this:

    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        DIR *dirp;
        char *c;
        long offset = 0L;
        struct dirent *dp;

        dirp = opendir(".");
        while ((dp = readdir(dirp)) != NULL) {
            /* Print the entry's inode, its offset within the directory,
               and its record length. */
            printf("Inode: %8lu Offset %4ld Length %4hd ",
                   dp->d_ino, offset, dp->d_reclen);
            printf("%s ", dp->d_name);
            offset = dp->d_off;
            /* Dump the file name in hex to expose garbage characters. */
            c = dp->d_name;
            while (*c) {
                printf("%x ", *c++);
            }
            printf("\n");
        }
        closedir(dirp);
        return 0;
    }

You can add to this to display other information, of course, but while it does show the order, it doesn't show holes. You can infer holes from the directory size and offsets, and you can create holes where you want them by removing or copying files, but it's certainly true that an hd on a directory was useful now and then.

© 2009-11-07 Tony Lawrence
http://aplawrence.com/Unix/dird.html
CC-MAIN-2017-22
en
refinedweb
First-time question asker here. I used several different threads on here to construct a fullcalendar that opens event information in a fancybox. More specifically, each day has a link (structured as an event) that opens a list of that day's events in a fancybox. I constructed the calendar and fancybox links with the following code: $('#calendar').fullCalendar({ ...

When I don't want to execute the default action that is associated with the event, I use

    function listener(e){
        if(e.preventDefault) e.preventDefault();
        /* Other code */
    }

But I have just learnt that events have the boolean cancelable. Then, should I use this code instead?

    function listener(e){
        if(e.c...

I do not know what is going on with my Firefox! My aspx and javascript codes are like this:

    <html xmlns="" >
    <head runat="server">
        <title></title>
        <script type="text/javascript">
        function a() {
            alert('a');
            //alert(event.which);
    ...

Need some help in sorting out design options for a simulation framework in C++ (no C++11). The user creates an "event dispatcher" and registers interest (using "watchers") in the occurrence of "events". The dispatcher internally holds "event sources" which are used to detect event activation and manage notifications to watchers. There's a 1:1:1 mapping between watcher, event and event source cl...

I am writing some application with Raphael.js. And it should handle mouse drag events. That is, when a mouse drag is ended, I try to catch the point on Raphael's Paper object (DIV / SVG element, actually) where the mouse caused the drop event. FireFox and Chrome are doing well with event.layerX and ev...

Disclaimer: I am anything but a Javascript expert, so I'm not even sure if I'm approaching this correctly... I want to be able to trigger an event in Javascript, but be able to cancel that event if another event occurs.

So, what I'm looking to accomplish is: I have a button which starts/stops a timer. When I click it, I remove the current event handler, and attach the opposite one. I.e., click "Stop", and the "Start" function is attached for the next click. However in IE (7), not sure about 8, the newly attached event is also triggering after the original function is finished. Lifeline in IE7: Button pushed --> "Stop" functi...

I have two classes A and B. In class A, I have an event EventA:

    public delegate void FolderStructureChangedHandler();
    public event FolderStructureChangedHandler EventA;

In class B, I have the same event, named EventB. In a method of my application, I want to add all of the handlers registered to EventA to the event EventB...

This is my application setup: I have a UserControls folder in my application inside which there is my .ascx file which happens to contain just a simple ASP Button. I have not added any code in the code-behind of the ascx. I have a BaseForm.cs (just a C# class file and NOT an aspx file) which is inheriting from System.Web.UI.Page: public class BaseForm : Sy...

I am developing an API in which I want to be able to know when an event listener on an object is added or removed. The reason is that some of the events I am firing will require me to continually poll an object for updates, and I don't want to have to poll the object if nothing is listening for the event. I am polling the html5 media player and other players for buffering updates, so eliminatin...
http://bighow.org/tags/event/1
CC-MAIN-2017-22
en
refinedweb
Advanced Text Hit - Online Code

Description
This code shows a String and, on clicking the text, prints the letter you clicked to the console.

Source Code

    import java.awt.Font;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import java.awt.font.FontRenderContext;
    ...

(To view the full code on the original site, you must log in or register.)
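The listing above is cut off behind the site's login. Since the visible imports point at java.awt.font hit testing, here is an independent, minimal sketch of the same idea using TextLayout.hitTestChar; this is my reconstruction, not the site's hidden code:

    import java.awt.*;
    import java.awt.event.*;
    import java.awt.font.*;
    import javax.swing.*;

    public class TextHitDemo extends JPanel {
        private static final String TEXT = "Click a letter";
        private final float x = 20, y = 60;  // where the text is drawn
        private TextLayout layout;

        public TextHitDemo() {
            addMouseListener(new MouseAdapter() {
                @Override public void mouseClicked(MouseEvent e) {
                    if (layout == null) return;
                    // Hit-test relative to the text's drawing origin.
                    TextHitInfo hit = layout.hitTestChar(e.getX() - x, e.getY() - y);
                    int i = hit.getCharIndex();
                    if (i >= 0 && i < TEXT.length())
                        System.out.println("You clicked: " + TEXT.charAt(i));
                }
            });
        }

        @Override protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;
            g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                RenderingHints.VALUE_ANTIALIAS_ON);
            Font font = new Font("Serif", Font.BOLD, 36);
            layout = new TextLayout(TEXT, font, g2.getFontRenderContext());
            layout.draw(g2, x, y);
        }

        public static void main(String[] args) {
            JFrame f = new JFrame("Text Hit");
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.add(new TextHitDemo());
            f.setSize(300, 150);
            f.setVisible(true);
        }
    }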
http://www.getgyan.com/show/1023/Advanced_Text_Hit
CC-MAIN-2017-22
en
refinedweb
Multiple Inheritance - Online Code

Description
Best example for multiple inheritance (of interfaces) in Java.

Source Code

    interface t1 extends mt, mj  // this is called multiple inheritance
    {
        void display();
    }

    interface mt
    {
        void show();
    }

    interface mj
    {
        void x();
    }

    public class Third implements t1
    {
        public void show()
        {
    ...

(To view the full code on the original site, you must log in or register.)
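For completeness, a runnable version with the hidden method bodies filled in; only the interface declarations above come from the original, the bodies are my guesses:

    interface mt { void show(); }
    interface mj { void x(); }

    // An interface may extend several interfaces, which is how Java
    // expresses multiple inheritance of type.
    interface t1 extends mt, mj {
        void display();
    }

    public class Third implements t1 {
        public void show()    { System.out.println("show()"); }
        public void x()       { System.out.println("x()"); }
        public void display() { System.out.println("display()"); }

        public static void main(String[] args) {
            t1 t = new Third();
            t.show();
            t.x();
            t.display();
        }
    }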
http://www.getgyan.com/show/3800/Multiple_Inheritance
CC-MAIN-2017-22
en
refinedweb
Accessing Views in Code
6:54 with Ben Jakuben

Now that we've covered a bit of the generated code, it's time to write our own! In this video we'll see how to write Java code in the FunFactsActivity class that declares a TextView and then gets it from the XML layout file we already edited.

Cheat Sheet
Related Links
- A Note on Casting - Scroll down a bit
Documentation

- 0:00 Okay, time to make the magic happen.
- 0:02 We just saw how our layout gets hooked up with the main activity code
- 0:05 that controls how it operates.
- 0:07 Now, we need to access the views in our layout from the code and
- 0:11 manipulate them so that tapping the button displays the fact in our text view label.
- 0:16 Okay, we're going to work on this onCreate method that we were just talking about.
- 0:20 Let's start by adding a few lines at the bottom.
- 0:22 We wanna make sure that we're adding our code after this call to setContentView.
- 0:25 If we try and access any views before this, then we will get an error.
- 0:29 Let's add a comment to our code to help explain how it works.
- 0:32 Type in two slashes like this,
- 0:34 forward slashes, and then say, declare our view variables.
- 0:40 Comments in code are text like this that does not have any effect on how
- 0:43 the program runs.
- 0:44 They're just here to help us understand it better when we read through the code.
- 0:47 They are completely ignored when this code is processed.
- 0:51 This with two forward slashes is an example of a single-line comment, but
- 0:55 we can also create multi-line comments with a forward slash and an asterisk.
- 0:59 And then we could hit Enter and
- 1:00 type whatever we want in between the starting and ending ones.
- 1:04 The catch here is that we always need to close multi-line comments with
- 1:07 another asterisk and forward slash.
- 1:10 Okay, let's get rid of this and we said we're going to declare our view variables.
- 1:14 So let's do it.
- 1:15 On the next line let's add a new variable.
- 1:16 And its data type will be TextView.
- 1:18 And that matches what we used in the layout.
- 1:21 Now we need to give it a name.
- 1:22 Let's call it factLabel and let's add a semicolon to stop this right now.
- 1:27 Declaring the TextView variable is done the same way as we did a String variable
- 1:30 before, only we use TextView as the data type here instead of String.
- 1:33 Now in the next line let's add a Button called showFactButton and
- 1:39 we'll put another semicolon at the end.
- 1:41 Okay, let's talk about this briefly before we do anything.
- 1:43 Now why did these names change to gray like this?
- 1:46 That's because we haven't used these variables anywhere.
- 1:48 If we hover over each name,
- 1:49 we get a quick tip that says variable factLabel or showFactButton is never used.
- 1:54 These will change and those warnings will disappear once we use these variables.
- 1:58 Another thing I want to point out are import statements.
- 2:00 By default, this class here only knows of a few main data types like String,
- 2:04 for example.
- 2:05 But every time we add a new data type like TextView or
- 2:08 Button, Android Studio will try to automatically add an import statement to
- 2:12 make sure the project knows what the data type is.
- 2:15 We can take a look up here at line three.
- 2:16 If we hit this little plus button we can expand all these import statements, and
- 2:20 these are the required statements to get the appropriate data types like
- 2:24 TextView and Button and everything else that we use in this class.
- 2:28 Now hopefully you won't need to worry about these at all, but
- 2:30 if you run into any problems with data types or
- 2:32 import statements, let us know in the forum and we'll help you get back on track, okay?
- 2:37 Let me close this, and now we need to assign values to these new variables.
- 2:41 Let's add to our comment.
- 2:42 We'll say, and assign them the views from the layout file.
- 2:48 This is just like before when we assigned our name to that myName string variable.
- 2:53 We need some code that says,
- 2:55 hey layout, give me that text view with the ID factLabel.
- 2:58 Oh, and give me that button that we IDed as showFactButton.
- 3:01 Fortunately, every activity like this FunFactsActivity
- 3:04 has a method that does exactly what we need.
- 3:06 So after factLabel and before the semicolon, type just as I am.
- 3:10 A space, then an equals sign, another space, then findViewById.
- 3:14 Notice that as I'm typing, the auto-complete feature is
- 3:17 filling it in below.
- 3:19 Android Studio analyzes what we're typing and it offers suggestions that match.
- 3:23 Auto code completion will be your best friend.
- 3:25 It's convenient and it makes it so that we don't need to remember all the exact
- 3:28 names and spellings, and parameters of all the code that we wanna use.
- 3:32 So as you're typing you can either continue with the name or
- 3:34 hit enter to select it from the code completion.
- 3:37 So now we have a little hint about the parameter we need to use in here.
- 3:40 The findViewById method requires an ID as its parameter, but rather than just type
- 3:45 in the ID from the layout, you must refer to it through a generated resource class.
- 3:50 Okay, remember that build directory over here in our Project view that we kind of
- 3:54 skipped over before?
- 3:55 Let's expand that.
- 3:56 And we wanna go to the source folder and take a look, and then in here is
- 4:00 one called r, and way down here, all the way down, we have just a capital R.
- 4:04 Double click on that.
- 4:06 So, Android automatically builds a special class for
- 4:08 us called the R class, where R here stands for resources.
- 4:12 This class contains all the IDs that we add to any kind of file in
- 4:15 the res directory.
- 4:16 Now, we never want to change anything in this class.
- 4:19 In fact, we get a warning here: files under the build folder are generated and
- 4:22 should not be edited; that's the important part.
- 4:24 In fact, I think if we expand the comments,
- 4:26 yep, it also says, auto-generated file, do not modify.
- 4:29 So you'll probably never need to look at this class, but
- 4:31 I just wanted to show you what we're using.
- 4:33 Because now if we go back to our code,
- 4:34 we're gonna type R, which gets us a reference to that R class, then a dot.
- 4:39 Then we can pick from those other sections we had in there.
- 4:42 So we want an ID, so type id, and then dot.
- 4:45 And notice how the code completion is showing us different things
- 4:48 as we go along.
- 4:49 And cool, look at that, we have the IDs that we put in our layout.
- 4:52 Since we're doing the fact label,
- 4:53 we wanna do that text view ID that we used, and hit enter to select it.
- 4:58 All right, that seems like it should be enough, but
- 5:00 now we have a new red error line, which means something is wrong.
- 5:03 If we hover over it we can see that it says incompatible types, and
- 5:07 it requires a text view because that's on the left side of the equal sign, but
- 5:11 it's finding a regular view on the right side of the equal sign.
- 5:15 So this means that the return type of this findViewById method
- 5:18 doesn't match the data type of the variable on the left side.
- 5:22 The findViewById method returns a generic view.
- 5:25 But we said that our factLabel is a text view, not a generic view.
- 5:30 So if we change our variable to be a regular view like this,
- 5:34 then sure enough the error goes away.
- 5:35 But this just makes the error go away without actually fixing the problem.
- 5:39 We'd need this to be a text view.
- 5:40 So let's put it back to TextView and now let's see how to fix it the correct way.
- 5:45 Android could have a find method for
- 5:46 every kind of view, like maybe findTextViewById and findButtonById.
- 5:50 But that would be a lot of methods.
- 5:53 Instead, because TextView is a kind of view, or
- 5:56 a subclass of View, we can fix this with something known as a cast.
- 6:00 A cast allows us to say that one data type, in this case the generic view returned by
- 6:04 the method, can be safely cast as another data type.
- 6:08 In this case, a TextView.
- 6:09 Now, we know that the view returned will in fact be a TextView,
- 6:12 because that's how we set it up.
- 6:14 So we can add the cast to TextView by putting the name
- 6:17 TextView in parentheses in front of the method call.
- 6:19 So we can type that right here.
- 6:20 But Android Studio also has a very helpful keyboard shortcut that offers quick fixes
- 6:25 for problems like this.
- 6:26 So if you click on any code with a red underline on it and then hit Alt+Enter on
- 6:31 both Mac and Windows, then we get presented with a list of quick fixes.
- 6:35 The first one is the one we want, so just hit Enter.
- 6:38 And there, now it added the TextView inside parentheses at
- 6:40 the front that we need.
- 6:41 Cool, now our cast is complete, and our error is gone.
- 6:44 On to the show fact button.
- 6:45 It's pretty much the same code, but remember that the ID is different,
- 6:48 as well as the cast to a Button instead of a TextView.
- 6:51 Let's try adding it as a code challenge.
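Put together, the additions the video walks through would look roughly like this inside FunFactsActivity. This is a sketch assembled from the spoken transcript, so the exact layout and ID names in the course project may differ:

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_fun_facts);  // assumed layout name

        // Declare our view variables and assign them the views from the layout file
        TextView factLabel = (TextView) findViewById(R.id.factLabel);
        Button showFactButton = (Button) findViewById(R.id.showFactButton);
    }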
https://teamtreehouse.com/library/build-a-simple-android-app-2014/basic-android-programming/accessing-views-in-code
CC-MAIN-2017-22
en
refinedweb
Key scheduling algorithm:

    for i from 0 to 255
        S[i] := i
    endfor
    j := 0
    for i from 0 to 255
        j := (j + S[i] + key[i mod keylength]) mod 256
        swap values of S[i] and S[j]
    endfor

What I found interesting about this algorithm is that it starts with a 256-byte array/list in sequential order (0,1,2,3,4...253,254,255). This range could be represented in the form of a gradient.

The above image is the visual representation of each loop of the key stream. The bottom left-hand corner, from left to right, is the initial sequential list. The loop iterates 256 times. Each time the loop iterates, the key stream is modified.

Another image with the char 'B' as the key. The very top row is the complete key stream from the key-scheduling algorithm.

Another image with the char 'Z' as the key.

The above images were created in Python using matplotlib. The code will iterate through chars from 'A' to 'Z'. The images will be saved with a file name of example-char.png.

    import matplotlib.pyplot as plt
    import numpy as np
    import sys

    def rc4_init(key):
        # creates a list of the stream for each loop of the key schedule
        k = list(range(256))
        j = 0
        x = []
        x = x + k  # row 0: the initial sequential state
        for i in range(256):
            j = (j + k[i] + ord(key[i % len(key)])) % 256
            k[i], k[j] = k[j], k[i]
            x = x + k  # append the state after each swap
        return x

    def createImage(key):
        # get list of RC4 key values, size (256*257): initial state + 256 iterations
        data = np.array(rc4_init(key))
        data.shape = (257, 256)
        # use heat map
        plt.hot()
        plt.ylabel('Loop Iteration')
        plt.xlabel('Array Value 0-256')
        plt.title('RC4 Key Initialize with Value of ' + '\'' + str(key) + '\'')
        plt.axis([0, 256, 0, 257])
        plt.pcolormesh(data)
        plt.colorbar()
        #plt.show()
        plt.savefig('example-' + str(key) + '.png')
        plt.close()

    def main(argv):
        for x in range(65, 91):
            createImage(chr(x))

    if __name__ == '__main__':
        main(sys.argv[1:])

Visually, I think this is pretty cool. I decided to take it a step further and create an animated gif. The .gif displays the images with key values of 'A' through 'Z'. Warning: the .gif is over 2MB in size. LINK

Odds are my terminology is off in some of the description of the algorithm. Please feel free to leave comments if you see an error.

Comment: "Thanks for your post. However, I don't understand how you could exploit that? What are your conclusions about that? Thanks"

Thanks. This post was more about exploration rather than exploitation. I'm not a cryptanalyst, but I do not believe anything can be exploited from this. The images were a way to see how the keys are initialized and how it would appear visually.
http://hooked-on-mnemonics.blogspot.com/2012/02/visualizing-rc4-key-initialization.html
CC-MAIN-2017-22
en
refinedweb
I have been able to get the 325.15 driver successfully working with linux-rt (3.10.x-rt series), but am still looking for information / to get a question or two answered, as well as providing some info / patches too...

It seems that the 325.xx series nvidia installer does not need patching with regard to linux-rt (or rather, "less patching": the old nvidia-rt patch that adds some spinlock-related stuff isn't needed; it's included in 325.15). For brief background, previously we (linux-rt/nvidia users) used to have to patch nv-linux.h to support CONFIG_PREEMPT_RT_FULL and add the missing bits for RT.

* Now that nvidia has added the missing stuff needed by -rt kernels, I see that we can also override the PREEMPT_RT check/behavior <in nvidia-installer/conftest> by use of the IGNORE_PREEMPT_RT_PRESENCE=1 environment variable... -> but is this correct? ...or do we need to do something else for the installer (and driver) to work for -rt?? --->(as I have done below)<-- I wasn't able to find any documentation on the matter.

That being said <as my above question may be rhetorical>, I still did end up having to patch nv-linux.h, like so:

___________________
--- a/nv-linux.h	2011-10-26 13:35:32.866579965 +0200
+++ b/nv-linux.h	2011-10-26 13:35:47.265117607 +0200
@@ -43,6 +43,8 @@
 #include <linux/version.h>
 #include <linux/utsname.h>
 
+#define CONFIG_PREEMPT_RT_FULL 1
+
 #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 9)
 # error This driver does not support kernels older than 2.6.9!
 #elif LINUX_VERSION_CODE < KERNEL_VERSION(2, 7, 0)
@@ -312,11 +312,7 @@
 #endif
 #endif
 
-#if defined(CONFIG_PREEMPT_RT) || defined(CONFIG_PREEMPT_RT_FULL)
-#define NV_CONFIG_PREEMPT_RT 1
-#endif
-
-#if defined(NV_CONFIG_PREEMPT_RT)
+#if defined(CONFIG_PREEMPT_RT_FULL)
 typedef raw_spinlock_t nv_spinlock_t;
 #define NV_SPIN_LOCK_INIT(lock) raw_spin_lock_init(lock)
 #define NV_SPIN_LOCK_IRQ(lock) raw_spin_lock_irq(lock)
@@ -943,7 +939,7 @@
     return ret;
 }
 
-#if defined(NV_CONFIG_PREEMPT_RT)
+#if defined(CONFIG_PREEMPT_RT_FULL)
 #define NV_INIT_MUTEX(mutex) sema_init(mutex,1)
 #else
 #if !defined(__SEMAPHORE_INITIALIZER) && defined(__COMPAT_SEMAPHORE_INITIALIZER)
___________________

...basically, I am explicitly (and/or redundantly?) defining/using CONFIG_PREEMPT_RT_FULL (while also disabling the PREEMPT_RT conftest). Using this patch AND disabling the PREEMPT_RT conftest together yields a driver that works... However, I was under the impression that disabling the check was enough(?)

* This morning I discovered that by NOT using the above patch (because I thought it wasn't needed), while I was able to compile the nvidia driver (since I am disabling the check) -> the resulting driver locked up my machine(s).

-> This would indicate to me that either A) there is a way to tell the nvidia installer to compile for RT <since support _appears_ to be there / disabled by default>, *OR* B) Linux RT users are still required to patch the driver <like I have above> and disable the check.

note: the patched version (of 325.15) works great / as expected.
Anyway, it would be great to have some clarification from an nvidia developer on the matter, as it affects users, packagers, etc. like myself. At this point, it seems that the 325.xx series has changed for us, but there isn't anything really documented, which would be helpful. :) ~ but again, I do have it working (after some tinkering). My uptime has been over 2 days with both 3.10.4-rt1 and 3.10.6-rt3 <total of 4 dayz X2 machines, no issues> with 9800GT and newer 440GT. thx

* You basically run the NVIDIA_XXXxxx.run file with the "--extract-only" option
* Then cd into /home/USERNAME/Desktop/NVIDIA-Linux-x86_64-325.15-no-compat32/kernel <for example>
* Apply the patch: " patch -Np1 -i nvidia-325xx-rt.patch " <for example>
* Run the installer, compile && install driver, etc., as you normally would (or make your own buildscript, source package, etc.) <in that order>

....

1.) 319.32 -> patches for nvidia-rt and linux 3.10
a. legacy (nvidia) RT patch:
b. linux 3.10 / 319.32 patch:

....

2.) 325.15 -> patch for nvidia-325.xx-rt + IGNORE_PREEMPT_RT_PRESENCE=1
a. 325.xx (nvidia) RT patch:
b. with 325.15 you must also override the PREEMPT_RT conftest, like so (when building):

    make IGNORE_PREEMPT_RT_PRESENCE=1 SYSSRC=/usr/lib/modules/"${_kernver}/build" module

....

Obviously, you must have the corresponding / matching nvidia-utils package(s) installed as well, however your distribution's package management handles this. <I use Archlinux, thus have "source packages" / PKGBUILDs, aka build scripts, for this purpose.>

note: the legacy patch (#1/a) can be used on any nvidia driver pre-325.xx series, during linux(-rt) kernel 3.x development (but not 2.6-rt kernels)... the legacy nvidia-rt patch <obviously> isn't needed/applicable by 325.xx, since most of those changes have been integrated into that series/driver version.

....

I guess for now, for my own packages, I will just use method #2 (325.15). But I would still like to know if there is a proper way with 325.15 to build for RT without patching... I'm also curious if nvidia is working towards supporting linux-rt "officially" one day?

Well, just as an update on the matter: I have spoken to someone else who did the same comparison of overriding the PREEMPT_RT check without the nvidia-325xx-rt.patch VS. using the override with nvidia-325xx-rt.patch... He yielded similar results, as I did. So disabling the check does not appear to be enough... but for the both of us, the latest nvidia(-rt) driver is working. I think it would be nice to just have an option when you run the installer (like adding --preempt-rt to the commandline).

_______

***Just as a general note: I've been noticing downloads on my SF.net account (from the links found here), so I would be curious to know if any users in the forums have any feedback, thoughts, etc. on using the nvidia driver with linux-rt?? ... cheerz

I use kernel 3.10.6 patched with OSADL patch rt3, nvidia driver 325.15. I've patched the driver, but the installer still complains; I guess I must rebuild it too? The problem is I don't know exactly how. I'm running debian squeeze and can't find any package or source anywhere to build the installer. Could someone maybe point me in the right direction, or maybe just supply a patched and compiled installer that ignores the RT flag in the kernel config?

Unfortunately, I do not use Debian, so I personally can't recommend 'debian specific' instructions for you.
However, google turned up kernel source packages for the 325.15 nvidia driver; there appears to be a readme for debian installation as well. I'm not sure how to add a patch to a deb to use in builds, but I am sure there are instructions and/or examples of other packages kicking around the web (or source packages found in your repos; if I remember correctly, apt allows you to download source packages for modification/rebuilding...).

As for ignoring the PREEMPT_RT_FULL flag: I've already given you instructions on how to do that. On installation, i.e. when running the "make" command as part of the installation, you must add IGNORE_PREEMPT_RT_PRESENCE=1 to the commandline <read post #2 again>.

Anyway, you'll have to read up on how debian does things, but aside from that, as long as you've patched 325.15 with nvidia-325xx-rt.patch && used IGNORE_PREEMPT_RT_PRESENCE=1 together, it should work. I've had a couple of people report back to me via email on it (one Ubuntu, one Fedora)... <and obviously I am using my own source packages for Archlinux on 2 machines>. In all cases that I am aware of, IGNORE_PREEMPT_RT_PRESENCE=1 on its own will allow the driver to compile, but doesn't seem to be enough for PREEMPT_RT_FULL; doing both (patching / overriding the RT check) seems to work for those who know their H/W works with RT.

I didn't have time earlier, but just now took a look at the nvidia-kernel-source (325.15) for debian. I extracted/unarchived the .deb, followed by data.tar.xz and then finally nvidia-kernel.tar.xz (which is inside of data.tar.xz). It appears debian already has a queue of patches in their nvidia source package: .../nvidia-kernel/debian/patches

I think the tool you will need to modify the package (adding the nvidia-325xx-rt.patch) is called "dpatch". Just search/google "dpatch tutorial"; there are a lot of tutorials (you can probably find a better one than I, as you are more familiar with debian), so you <hopefully> should be able to figure it out. dpatch is used to manage those patches in debian/patches, as well as making other adjustments to the .deb file / build instructions...

After you've gotten this to work, next you will need to figure out at what step / where debian executes the "make" command for the nvidia driver; at that point you will need to add IGNORE_PREEMPT_RT_PRESENCE=1 to the commandline, to ensure the nvidia-installer overrides the PREEMPT_RT test. Then you should be home free. This may be handled in the .deb or possibly another tool used in debian (like a module helper script or something like that)... this I don't know, but I am not installing debian to find out ;)

Maybe another Debian user, more familiar with debian packaging, will stumble on your post (or try the debian forums; maybe ask for a packager's insights / help on how to proceed)... Regardless, do keep me posted on how things work out, k?

1. I get the NVIDIA-Linux-x86-325.15.run from the nvidia site.
2. I do NVIDIA-Linux-x86-325.15.run --extract-only to decompress the driver/installer.
3. I download the nvidia-325xx-rt.patch.
4. I cd to NVIDIA-Linux-x86-325.15/kernel/ and do the patch with patch -Np1 -i nvidia-325xx-rt.patch.
5. If I now run the nvidia-installer (the one in the NVIDIA-Linux-x86-325.15 dir) it halts because "ERROR: The kernel you are installing for is a PREEMPT_RT kernel!"

So, what I need to do is rebuild the nvidia-installer to ignore that my kernel is preempt_rt, or have I misunderstood everything?
(To clarify, I use a vanilla kernel from kernel.org which I patched with an rt-patch, not a native debian kernel.)
(Clarification 2: the NVIDIA-Linux-x86-325.15.run installs and works OK before I apply the rt-patch.)

- downloaded the package
- installed it with dpkg
- cd /usr/src, untar nvidia-kernel.tar.xz that the package placed there (creates /usr/src/modules)
- cd /usr/src/modules/nvidia-kernel
- patch nv-linux.h with "patch -Np1 -i nvidia-325xx-rt.patch"
- read /usr/share/doc/nvidia-kernel-source/README.Debian.gz (method #4)
- "apt-get install kernel-package"
- "apt-get install nvidia-kernel-common"
- cd /usr/src/linux-3.10.6-rt3
- "make-kpkg modules_image" (builds a package with the nvidia kernel module and places it in /usr/src)
- extract the kernel module from that package with "dpkg --fsys-tarfile nvidia-kernel-3.10.6-rt3_325.15-1+3.10.6-rt3-10.00.Custom_i386.deb | tar xOf - ./lib/modules/3.10.6-rt3/nvidia/nvidia-current.ko >nvidia.ko" (the package had unmet deps that I couldn't resolve)
- copy nvidia.ko to your default module location, in debian: /lib/modules/3.10.6/kernel/drivers/video/nvidia.ko

Seems to work (no extensive rt-testing done yet, will try to report back when I've done that). So if you want the nvidia tools (i.e. nvidia-settings), install the original driver without an rt-kernel, then compile the driver separately from the standalone package.

I can't say much about the unmet dependencies and such, or even much about your building method. But I am still wondering if it wouldn't be easier to just adjust the source package (since it already does all of these steps?)... that is, assuming you could work out the unmet dependencies... I only ask because in the future aren't you just going to run into this again and have to manually resolve everything again?? Food for thought, I suppose...

So to clarify, you have linux-3.10.6-rt3 + 325.15 (nvidia-rt) working successfully, i.e. you've booted into it, no lockups or anything like that?? (even if it took a little extra tweaking / manual intervention?) If so: *Awesome*, and I am glad I was helpful, even though I am not a debian user ;) ... Out of curiosity, would you care to share your setup? For example, my system (that I am typing from) is:

- AMD Phenom II 965 Black Edition (3.4GHz x 4 cores)
- 16GB DDR3 RAM
- 2TB SATA HDD
- ASUS MOBO (M4N75TD / Xtreme Design)
- nVidia 440GT

Running Archlinux 64-bit (Multilib).

    $ uname -a
    Linux localhost.localdomain 3.10.6-rt3-1-l-pa #1 SMP PREEMPT RT Thu Aug 22 21:00:54 EDT 2013 x86_64 GNU/Linux

cheerz.

Yes, linux-3.10.6-rt3 + 325.15 (nvidia-rt) is booting OK, no lockups yet, but I haven't tested anything RT yet, so I don't know for sure; I have some other drivers I have to make work with 3.xx before any real RT-testing. (If it locks up, does everything lock up? Is it random, or tied to any special events?) Thanks a lot for the help! My current setup is:

- Intel(R) Core(TM)2 Duo CPU E7400 @ 2.80GHz
- 4GB DDR3 RAM
- 120GB 330 series Intel SSD
- ASUS Mobo (don't have the number available)
- ZOTAC GEFORCE GT 610 1GB DDR3 PCI-E DVI/HDMI SILENT

Running debian/squeeze 32-bit.

    $ uname -a
    Linux pwrxxx 3.10.6-rt3 #2 SMP PREEMPT RT Wed Aug 21 11:32:22 CEST 2013 i686 GNU/Linux

We have been running 2.6.33.7-rt29 up till now, but need new drivers for new hardware, so it's time to upgrade the kernel. =) We run our own PLC software for industrial automation on 500+ linux nodes with this setup, so stability is crucial. Check it out if you're into that sort of thing, it's GNU GPLed. =)

1st, I'm glad it is working. I assume you've run your standard GLX-type stuff: compositor, H/W accel video, etc.?
I can't say much about the unmet dependencies and such, or even much about your building method. But I am still wondering if it wouldn't be easier to just adjust the source package (since it already does all of these steps?)... that is, assuming you could work out the unmet dependencies... I only ask because in the future aren't you just going to run into this again and have to manually resolve everything again? Food for thought, I suppose...

So to clarify, you have linux-3.10.6-rt3 + 325.15 (nvidia-rt) working successfully, i.e. you've booted into it, no lockups or anything like that? (Even if it took a little extra tweaking / manual intervention?) If so: *awesome*, and I am glad I was helpful, even though I am not a Debian user ;) ... Out of curiosity, would you care to share your setup? For example, my system (the one I am typing from) is:
- AMD Phenom II 965 Black Edition (3.4 GHz x 4 cores)
- 16 GB DDR3 RAM
- 2 TB SATA HDD
- ASUS mobo (M4N75TD / Xtreme Design)
- nVidia 440GT
Running Arch Linux 64-bit (multilib).
$ uname -a
Linux localhost.localdomain 3.10.6-rt3-1-l-pa #1 SMP PREEMPT RT Thu Aug 22 21:00:54 EDT 2013 x86_64 GNU/Linux
cheerz.

Yes, linux-3.10.6-rt3 + 325.15 (nvidia-rt) is booting OK. No lockups yet, but I haven't tested anything RT yet, so I don't know for sure; I have some other drivers to get working with 3.x before any real RT testing. (If it locks up, does everything lock up? Is it random, or tied to any special events?) Thanks a lot for the help!

My current setup is:
- Intel(R) Core(TM)2 Duo CPU E7400 @ 2.80GHz
- 4 GB DDR3 RAM
- 120 GB 330-series Intel SSD
- ASUS mobo (don't have the number available)
- ZOTAC GeForce GT 610 1GB DDR3 PCI-E DVI/HDMI Silent
Running Debian/Squeeze 32-bit.
$ uname -a
Linux pwrxxx 3.10.6-rt3 #2 SMP PREEMPT RT Wed Aug 21 11:32:22 CEST 2013 i686 GNU/Linux

We have been running 2.6.33.7-rt29 up till now, but we need new drivers for new hardware, so it's time to upgrade the kernel. =) We run our own PLC software for industrial automation on 500+ Linux nodes with this setup, so stability is crucial. Check it out if you're into that sort of thing; it's GNU GPLed. =)

First, I'm glad it is working. I assume you've run your standard GLX-type stuff: compositor, H/W-accelerated video, etc.? <-- That is typically where (AFAICT) you'd experience a lockup if the nvidia driver wasn't patched. ~ That is my experience, and the experience I have observed in others as well (although my machines exhibit it in slightly different ways, not entirely predictably).

On Arch Linux, the whole (building) process for nvidia-rt seems quite a bit easier than on Debian. That is another reason I'm glad it worked out for you (since I am not the best person to ask); it seems "quite involved" to get nvidia installed on a Debian machine (well, if you are doing it yourself anyway). On Arch Linux, the process is rather simple: meet dependencies, grab sources, extract the installer, cd into .../kernel, then patch, then run make (with proper parameters), then do the installation bits... done! (All with one command, "makepkg -si", from within the folder containing the PKGBUILD. :)

Regarding your 500+ node setup: very interesting. I've talked to people in the past doing similar interesting things. ~ I'm curious, what differences have you noticed so far between the 2.6.33.7-rt and 3.x-rt kernels? (i.e., have you bumped into any significant regressions and/or improvements?) This kind of thing fascinates me, although my entire interest in / use of Linux-rt is for pro-audio purposes; I have a rackmount PC that I use as a "sound module" (and/or DAW). I hack on Wine to better support my commercial pro-audio apps (VSTs mainly, and especially this boxset), but I do use Linux pro-audio apps too (like Ardour, for example)... But even aside from pro audio, I don't use generic Linux kernels; I essentially boot into an rt kernel (for standard desktop stuff, work, etc.) all of the time on any of my machines. ~ And while my stability may not be as crucial as yours, it is still certainly important (and actually, my rackmount PC does have to be 100% reliable, no glitches... my desktop isn't as crucial though).

Yes, that makes sense to me. Looking at Proview gave me the impression that OpenGL wasn't in high use, nor would a compositing desktop or anything like that be... However, you might still want to test that, as you never know; it (by 'it' I mean a lockup) could randomly crop up in the future <maybe under some circumstance>. Ah, I guess you are going to have some testing / work ahead of you ;) good luck!
https://devtalk.nvidia.com/default/topic/572468/linux/nvidia-325-15-linux-rt-old-amp-amp-new-nvidia-rt-patch-methods-questions-about-nvidia-installer/
CC-MAIN-2017-22
en
refinedweb
Advanced Namespace Tools blog

26 December 2016

ANTS 2.6 Release

The 9front project has released a new update of the installation .iso image, making this a good moment for me to sync up the ANTS code repositories, documentation, and downloads to the latest revision. I have decided that making "release tarballs" with precompiled kernels is probably pointless, although I will probably upload and link a compiled kernel at some point. Compiling from source seems like what Plan 9 users prefer to do. The idea of a mostly non-technical user who wants to use "Plan 9 Advanced Namespace Tools" is probably a complete phantasm. Most people who are interested in Plan 9 already have fairly substantial software development and system administration skills.

New Features Added since 2.1

I haven't been doing regular point releases, so there haven't been any specific version releases between 2.1 in fall 2015 and 2.6 now at the very end of 2016. I just bumped the version up by .5 to indicate that a fair amount of work has been done, but not so much change as would be implied by calling it 3.0. Here is a summary of notable improvements:

- Support for rcpu/rimport and dp9ik auth in the ANTS boot/service namespace
- Support for building the amd64 kernel
- Support for the TLS boot option in the plan9rc boot script
- Ramfossil script for instantiating a temporary fossil from a venti rootscore
- Patched versions of utilities and kernel updated to the latest 9front code base
- Bugfixes to hubfs with multiple client readers, grio color selection, and more

Whither Bell Labs support?

This release still includes the Bell Labs version of ANTS in the main directory, with the 9front-specific changes in the "frontmods" subdirectory. 9front is my primary Plan 9 environment, but I do keep a Bell Labs qemu VM active. Last I checked (in 2015), the Bell Labs version of ANTS compiles and installs correctly in 9legacy also. Time marches on, life is short, and in the absence of any kind of significant active user base for ANTS making feature and support requests, I intend to drop active support and testing for the Bell Labs version. Since the Labs' version of Plan 9 is no longer receiving updates, this release should continue to be useful for anyone who does want to use the original. If I receive any feedback that people are interested in using ANTS with 9legacy, I will probably create an independent repository for 9legacy, based on the current code for the Labs version.

TL;DR

The two code repos have been re-synchronized at revision 427. This represents ANTS release 2.6, which builds against 9front revision 5641. This is probably the last ANTS release to support Plan 9 from Bell Labs.
http://doc.9gridchan.org/blog/161226.release2.6
CC-MAIN-2017-22
en
refinedweb
On Tue, 2003-06-03 at 22:19, Torsten Knodt wrote:
> On Tuesday 03 June 2003 21:46, Bruno Dumon wrote:
> BD> yeah yeah, I agree with that, and for that purpose the tidyserializer is
> BD> very valuable. I was only wondering if there were any blocking bugs in
> BD> the normal htmlserializer that make it impossible to generate valid html
> BD> (next to the namespace problem).
>
> No real blocking. For most problems, there is a simple workaround.

> BD> (I'll look into applying the tidyserializer.)
>
> When you or someone else wants to apply it, I'll provide xdocs for it,
> including all parameters supported by tidy.

great.

> BD> TK> You have to validate the output to see if it's valid.
> BD> Is there any other way to validate the output than by validating it?
>
> That was worded badly. You have to validate the output with an external
> program to see if it is valid. That's what I meant.

ok.

> BD> If "the job" means that Xalan should validate the serialized xml against
> BD> the DTD it references, then I think it's a pretty safe bet to say that
> BD> will never ever happen.
>
> I hope it removes not-allowed and not-needed namespaces.

but that is quite a heavy process if only for aesthetic purposes.

> For deciding what
> namespaces are allowed, it has to do validation.

true, but only if you are still living in the DTD era. And since in DTDs you
shouldn't be using namespaces in the first place, maybe it is easier to simply
make a transformer which drops all namespaces?

--
Bruno Dumon
Outerthought - Open Source, Java & XML Competence Support Center
[email protected] [email protected]
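[A sketch of the namespace-dropping idea suggested above, not part of the archived thread: Cocoon is built on Java's SAX API, where such a transformer can be expressed as an XMLFilter. The class name is invented for illustration.]

import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.AttributesImpl;
import org.xml.sax.helpers.XMLFilterImpl;

// Strips namespace URIs from elements and attributes as SAX events pass through.
public class NamespaceDroppingFilter extends XMLFilterImpl {
    public void startElement(String uri, String localName, String qName,
                             Attributes atts) throws SAXException {
        AttributesImpl plainAtts = new AttributesImpl();
        for (int i = 0; i < atts.getLength(); i++) {
            // Skip xmlns declarations; re-emit every other attribute unqualified.
            if (!atts.getQName(i).startsWith("xmlns")) {
                plainAtts.addAttribute("", atts.getLocalName(i),
                        atts.getLocalName(i), atts.getType(i), atts.getValue(i));
            }
        }
        super.startElement("", localName, localName, plainAtts);
    }

    public void endElement(String uri, String localName, String qName)
            throws SAXException {
        super.endElement("", localName, localName);
    }
}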
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200306.mbox/%[email protected]%3E
CC-MAIN-2017-22
en
refinedweb
Web applications (or "Web apps") let you bundle a set of servlets, JSP pages, tag libraries, HTML documents, images, style sheets, and other Web content into a single collection that can be used on any server compatible with servlet version 2.2 or later (JSP 1.1 or later). When designed carefully, Web apps can be moved from server to server or placed at different locations on the same server, all without making any changes to any of the servlets, JSP pages, or HTML files in the application. This capability lets you move complex applications around with a minimum of effort, streamlining application reuse. In addition, since each Web app has its own directory structure, sessions, ServletContext, and class loader, using a Web app simplifies even the initial development because it reduces the amount of coordination needed among various parts of your overall system. 4.1 Registering Web Applications With servlets 2.2 and later (JSP 1.1 and later), Web applications are portable. Regardless of the server, you store files in the same directory structure and access them with URLs in identical formats. For example, Figure 41 summarizes the directory structure and URLs that would be used for a simple Web application called webapp1. This section will illustrate how to install and execute this simple Web application on different platforms. Although Web applications themselves are completely portable, the registration process is server specific. For example, to move the webapp1 application from server to server, you don't have to modify anything inside any of the directories shown in Figure 41. However, the location in which the top-level directory (webapp1 in this case) is placed will vary from server to server. Similarly, you use a server-specific process to tell the system that URLs that begin with should apply to the Web application. In general, you will need to read your server's documentation to get details on the registration process. I'll present a few brief examples here, then give explicit details for Tomcat, JRun, and ServletExec in the following subsections. My usual strategy is to build Web applications in my personal development environment and periodically copy them to various deployment directories for testing on different servers. I never place my development directory directly within a server's deployment directorydoing so makes it hard to deploy on multiple servers, hard to develop while a Web application is executing, and hard to organize the files. I recommend you avoid this approach as well; instead, use a separate development directory and deploy by means of one of the strategies outlined in Section 1.8 (Establish a Simplified Deployment Method). The simplest approach is to keep a shortcut (Windows) or symbolic link (Unix/Linux) to the deployment directories of various servers and simply copy the entire development directory whenever you want to deploy. For example, on Windows you can use the right mouse button to drag the development folder onto the shortcut, release the button, and select Copy. To illustrate the registration process, the iPlanet Server 6.0 provides you with two choices for creating Web applications. First, you can edit iPlanet's web-apps.xml file (not web.xml!) and insert a web-app element with attributes dir (the directory containing the Web app files) and uri (the URL prefix that designates the Web application). Second, you can create a Web Archive (WAR) file and then use the wdeploy command-line program to deploy it. 
Figure 4-1 Registering Web Applications

With the Resin server from Caucho, you use a web-app element within web.xml and supply app-dir (directory) and id (URL prefix) attributes. Resin even lets you use regular expressions in the id. So, for example, you can automatically give users their own Web apps that are accessed with URLs of a per-user form. With the BEA WebLogic 6 Server, you have two choices. First, you can place a directory (see Section 4.2) containing a Web application into the config/domain/applications directory, and the server will automatically assign the Web application a URL prefix that matches the directory name. Second, you can create a WAR file (see Section 4.3) and use the Web Applications entry of the Administration Console to deploy it.

Registering a Web Application with Tomcat

With Tomcat 4, creating a Web application consists simply of creating the appropriate directory structure and restarting the server. For extra control over the process, you can modify install_dir/conf/server.xml (a Tomcat-specific file) to refer to the Web application. The following steps walk you through what is required to create a Web app that is accessed by means of URLs that start with the /webapp1 prefix. These examples are taken from Tomcat 4.0, but the process for Tomcat 3 is very similar.

1. Create a simple directory called webapp1. Since this is your personal development directory, it can be located at any place you find convenient. Once you have a webapp1 directory, place a simple JSP page called HelloWebApp.jsp (Listing 4.1) in it. Put a simple servlet called HelloWebApp.class (compiled from Listing 4.2) in the WEB-INF/classes subdirectory. Section 4.2 gives details on the directory structure of a Web application. Finally, although Tomcat doesn't actually require it, it is a good idea to include a web.xml file in the WEB-INF directory. The web.xml file, called the deployment descriptor, is completely portable across servers. We'll see some uses for this deployment descriptor later in this chapter, and Chapter 5 (Controlling Web Application Behavior with web.xml) will discuss it in detail. For now, however, just copy the existing web.xml file from install_dir/webapps/ROOT/WEB-INF or use the version that is online under Chapter 4 of the source code archive (a minimal example appears after the next step). In fact, for purposes of testing Web application deployment, you might want to start by simply downloading the entire webapp1 directory.

2. Copy that directory to install_dir/webapps. For example, suppose that you are running Tomcat version 4.0, and it is installed in C:\jakarta-tomcat-4.0. You would then copy the webapp1 directory to the webapps directory, resulting in C:\jakarta-tomcat-4.0\webapps\webapp1\HelloWebApp.jsp, C:\jakarta-tomcat-4.0\webapps\webapp1\WEB-INF\classes\HelloWebApp.class, and C:\jakarta-tomcat-4.0\webapps\webapp1\WEB-INF\web.xml. You could also wrap the directory inside a WAR file (Section 4.3) and simply drop the WAR file into C:\jakarta-tomcat-4.0\webapps.
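[For reference, a minimal deployment descriptor of the kind step 1 describes. This is a sketch using the servlet 2.3 DTD appropriate for Tomcat 4; an empty web-app element is legal, and a copy taken from install_dir/webapps/ROOT/WEB-INF serves equally well.]

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app
    PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
</web-app>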
3. Optional: add a Context entry to install_dir/conf/server.xml. If you want your Web application to have a URL prefix that exactly matches the directory name and you are satisfied with the default Tomcat settings for Web applications, you can omit this step. But, if you want a bit more control over the Web app registration process, you can supply a Context element in install_dir/conf/server.xml. If you do edit server.xml, be sure to make a backup copy first; a small syntax error in server.xml can completely prevent Tomcat from running. The Context element has several possible attributes that are documented in the Tomcat documentation. For instance, you can decide whether to use cookies or URL rewriting for session tracking, you can enable or disable servlet reloading (i.e., monitoring of classes for changes and reloading servlets whose class file changes on disk), and you can set debugging levels. However, for basic Web apps, you just need to deal with the two required attributes: path (the URL prefix) and docBase (the base installation directory of the Web application, relative to install_dir/webapps). This entry should look like the following snippet. See Listing 4.3 for more detail.

<Context path="/webapp1" docBase="webapp1" />

Note that you should not use /examples as the URL prefix; Tomcat already uses that prefix for a sample Web application.

Core Warning: Do not use /examples as the URL prefix of a Web application in Tomcat.

4. Restart the server. I keep a shortcut to install_dir/bin/startup.bat (install_dir/bin/startup.sh on Unix) and install_dir/bin/shutdown.bat (install_dir/bin/shutdown.sh on Unix) in my development directory. I recommend you do the same. Thus, restarting the server involves simply double-clicking the shutdown link and then double-clicking the startup link.

5. Access the JSP page and the servlet. The URL path /webapp1/HelloWebApp.jsp invokes the JSP page (Figure 4-2), and the servlet's URL under the same prefix invokes the servlet (Figure 4-3). During development, you probably use localhost for the host name. These URLs assume that you have modified the Tomcat configuration file (install_dir/conf/server.xml) to use port 80 as recommended in Chapter 1 (Server Setup and Configuration). If you haven't made this change, include Tomcat's default port 8080 in the URLs.

Figure 4-2 Invoking a JSP page that is in a Web application.

Figure 4-3 Invoking a servlet that is in a Web application.

Listing 4.1 HelloWebApp.jsp

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD><TITLE>JSP: Hello Web App</TITLE></HEAD>
<BODY BGCOLOR="#FDF5E6">
<H1>JSP: Hello Web App</H1>
</BODY>
</HTML>

Listing 4.2 HelloWebApp.java

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class HelloWebApp extends HttpServlet {
  public void doGet(HttpServletRequest request,
                    HttpServletResponse response)
      throws ServletException, IOException {
    response.setContentType("text/html");
    PrintWriter out = response.getWriter();
    String docType =
      "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 " +
      "Transitional//EN\">\n";
    String title = "Servlet: Hello Web App";
    out.println(docType +
                "<HTML>\n" +
                "<HEAD><TITLE>" + title + "</TITLE></HEAD>\n" +
                "<BODY BGCOLOR=\"#FDF5E6\">\n" +
                "<H1>" + title + "</H1>\n" +
                "</BODY></HTML>");
  }
}

Listing 4.3 Partial server.xml for Tomcat 4

<?xml version="1.0" encoding="ISO-8859-1"?>
<Server>
  <!-- ... -->
  <!-- Having the URL prefix (path) match the actual directory
       (docBase) is a convenience, not a requirement. -->
  <Context path="/webapp1" docBase="webapp1" />
</Server>
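[One assumption worth spelling out, since it is not stated in this excerpt: compiling Listing 4.2 requires the servlet API classes on your classpath. With Tomcat 4 they ship in servlet.jar under install_dir/common/lib, so a compilation command along these lines should work.]

javac -classpath install_dir/common/lib/servlet.jar HelloWebApp.java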
Registering a Web Application with JRun

Registering a Web app with JRun 3.1 involves nine simple steps. The process is nearly identical in other versions of JRun.

1. Create the directory. Use the directory structure illustrated in Figure 4-1: a webapp1 directory containing HelloWebApp.jsp, WEB-INF/classes/HelloWebApp.class, and WEB-INF/web.xml.

2. Copy the entire webapp1 directory to install_dir/servers/default. The install_dir/servers/default directory is the standard location for Web applications in JRun. Again, I recommend that you simplify the process of copying the directory by using one of the methods described in Section 1.8 (Establish a Simplified Deployment Method). The easiest approach is to make a shortcut or symbolic link from your development directory to install_dir/servers/default and then simply copy the webapp1 directory onto the shortcut whenever you redeploy. You can also deploy using WAR files (Section 4.3).

3. Start the JRun Management Console. You can invoke the Console either by selecting JRun Management Console from the JRun menu (on Microsoft Windows, this is available by means of Start, Programs, JRun) or by opening the Console's URL in a browser. Either way, the JRun Admin Server has to be running first.

4. Click on JRun Default Server. This entry is in the left-hand pane, as shown in Figure 4-4.

5. Click on Web Applications. This item is at the bottom of the list that is created when you select the default server in the previous step. Again, see Figure 4-4.

Figure 4-4 JRun Web application setup screen.

6. Click on Create an Application. This entry is in the right-hand pane that is created when you select Web Applications in the previous step. If you deploy using WAR files (see Section 4.3) instead of an unpacked directory, choose Deploy an Application instead.

7. Specify the directory name and URL prefix. To tell the system that the files are in the directory webapp1, specify webapp1 for the Application Name entry. To designate a URL prefix of /webapp1, put /webapp1 in the Application URL text field. Note that you do not have to modify the Application Root Dir entry; that is done automatically when you enter the directory name. Press the Create button when done. See Figure 4-5.

Figure 4-5 JRun Web application creation screen. You only need to fill in the Application Name and Application URL entries.

8. Restart the server. From the JRun Management Console, click on JRun Default Server and then press the Restart Server button. Assuming JRun is not running as a Windows NT or Windows 2000 service, you can also double-click the JRun Default Server icon from the taskbar and then press Restart. See Figure 4-6.

Figure 4-6 You must restart JRun for a newly created Web app to take effect.

9. Access the JSP page and the servlet. This assumes that you have modified JRun to use port 80 as recommended in Chapter 1 (Server Setup and Configuration). If you haven't made this change, include JRun's default port number in the URLs.
Registering a Web Application with ServletExec

The process of registering Web applications is particularly simple with ServletExec 4. To make a Web app with a prefix webapp1, just create a directory called webapp1 with the structure described in the previous two subsections. Drop this directory into install_dir/webapps/default, restart the server, and access resources in the Web app with URLs that begin with the /webapp1 prefix. You can also drop WAR files (Section 4.3) in the same directory; the name of the WAR file (minus the .war extension) is automatically used as the URL prefix.

For more control over the process, or to add a Web application when the server is already running, perform the following steps. Note that, using this approach, you do not need to restart the server after registering the Web app.

1. Create a simple directory called webapp1. Use the structure summarized in Figure 4-1: place a simple JSP page called HelloWebApp.jsp (Listing 4.1) in the top-level directory and put the simple servlet HelloWebApp.class (compiled from Listing 4.2) in the WEB-INF/classes subdirectory. Section 4.2 gives details on the directory structure of a Web app. Later in this chapter (and throughout Chapter 5), we'll see uses for the web.xml file that goes in the WEB-INF directory. For now, however, you can omit this file and let ServletExec create one automatically, or you can copy a simple example. In fact, you can simply download the entire webapp1 directory from the Web site.

2. Optional: copy that directory to install_dir/webapps/default. ServletExec allows you to store your Web application directory at any place on the system, so it is possible to simply tell ServletExec where the existing webapp1 directory is located. However, I find it convenient to keep separate development and deployment copies of my Web applications. That way, I can develop continually but only deploy periodically. Since install_dir/webapps/default is the standard location for ServletExec Web applications, that's a good location for your deployment directories.

3. Go to the ServletExec Web app management interface. Access the ServletExec administration interface by means of its administration URL and select Manage under the Web Applications heading. During development, you probably use localhost for the host name. See Figure 4-7. This assumes that you have modified ServletExec to use port 80 as recommended in Chapter 1 (Server Setup and Configuration). If you haven't made this change, include the server's default port number in the URL.

4. Enter the Web app name, URL prefix, and directory location. From the previous user interface, select Add Web Application (see Figure 4-7). This results in an interface (Figure 4-8) with text fields for the Web application configuration information. It is traditional, but not required, to use the same name (e.g., webapp1) for the Web app name, the URL prefix, and the main directory that contains the Web application.

Figure 4-7 ServletExec interface for managing Web applications.

Figure 4-8 ServletExec interface for adding new Web applications.

5. Add the Web application. After entering the information from Item 4, select Add Web Application. See Figure 4-8.

6. Access the JSP page and the servlet. This assumes that you have modified ServletExec to use port 80 as recommended in Chapter 1 (Server Setup and Configuration). If you haven't made this change, include the server's default port number in the URLs.
http://www.informit.com/articles/article.aspx?p=26138&seqNum=2
CC-MAIN-2017-22
en
refinedweb
Yes, video textures are only available for AIR projects at the moment. Adobe hasn't added it to the Flash Player (yet). The only fallback I know of is unfortunately to use classic "StageVideo".

This is working great! I am running AIR 17 on a desktop PC. I am having an issue with a clean video loop, though. Has anyone found a nice way to loop video?

private function Status_Handler(stats:NetStatusEvent):void
{
    switch(stats.info.code)
    {
        case "NetStream.Buffer.Flush":
            if(Loop)
            {
                Stream.seek(0);
            }
            break;
        default:
            break;
    }
}

This seems to be the cleanest, but it is still choppy, and it stops the video just over 2 seconds early. When I say choppy, I see a green screen for a moment, maybe around 0.044 seconds. I think it has to do with filling the buffer, but I am not sure.

@drycola, I'm using 2 NetStreams: one playing and another paused at 0. I switch them by time (smoother than the flush and stop events) inside the texture-ready event handler:

(backgroundTexture.base as VideoTexture).addEventListener(flash.events.Event.TEXTURE_READY, function(e:Object)
{
    if (backgroundNS.time > almostEndTime)
    {
        backgroundNS.seek(0);
        backgroundNS.pause();
        backgroundImage.texture = backgroundTexture2;
        backgroundNS2.resume();
    }
});

If you need good loops, vote for this feature:

Yes, now I have no error. It works great on the desktop, but all is not good on mobile devices. You need to update the AIR SDK.

Hmmm. I already have the latest version of the AIR SDK installed (18.0.0.122). I have managed to stop the error by adding the image to the stage once the NetStream buffer is full, i.e.:

ns = new NetStream( netConnection );
ns.addEventListener( NetStatusEvent.NET_STATUS, onNetStatus );

private function onNetStatus(e:NetStatusEvent):void{
    switch(e.info.code)
    {
        case "NetStream.Buffer.Full":
            addChild( image );
            break;
    }
}

This does work on the desktop but, like you, I cannot get anything to appear on a mobile device. I get:

NetStream.Play.Start
NetStream.Play.Failed
NetStream.Play.Stop

I was under the impression that VideoTexture was available on Win/OSX/iOS and Android now.

Try this example of the Video player

Thanks for the idea, but that's not it. I am currently using H.264 video and AAC audio in my .mp4 file.

@Astraport, as I think you have just encountered, the Feathers VideoPlayer isn't altogether ready either at the moment. Has anyone come across a working example of VideoTexture on iOS/Android?

I managed to display a video on iOS, yes! I haven't tried Android yet, though. In any case: anybody who runs into issues with the new video textures, please be sure to post your bug reports on the Adobe bugbase! It's critical that the AIR team finds out about any problems we have, so that they can fix them. Also post the links to those bugbase entries here, so that forum users can vote for them. Thanks in advance!

Thanks Daniel. Although in this case I wouldn't class this as a bug at the moment. The situation seems to be that some people can get it to work on iOS whilst others cannot, which would suggest a misunderstanding of how things should be set up. I haven't come across any working examples, even from Adobe, of VideoTexture working on mobile with an .mp4 file.

@Crooksy - we got it working on iOS using the instructions / example code in the AIR release notes. At the time there were some typos, but we were able to read between the lines. Also note, we are streaming our mp4s, not playing them from a local source, but that shouldn't matter. One thing to try: be sure your mp4 is encoded at the H.264 baseline profile, level 3.0 or 3.1.
And/or use a confirmed-working mp4 when implementing. Hope that helps a little.

@crooksy, do you use a pure ActionScript 3 compiler or a Flex-merged compiler? If it isn't pure AS3, try using a pure one; that may solve your problem. Read:

Thanks for your suggestions. Still no joy though. Tried streaming the .mp4, ensuring it was using baseline 3.0. Tried using a .3gp file (but these do work on the desktop). Tried to follow the example in the release notes (which seems to be half complete). I'm using Flash Pro CC 2014 to compile the .ipa file, AIR 18.0.0.130.

@montego, would you happen to have a small .mp4 file you could share (or know of one) that has been confirmed to be working with VideoTexture/iOS?

Hi all, we are trying to play several clips in sequence with VideoTextures, but the code is crashing unpredictably on iOS devices from time to time. I have created a sample codebase for it, so anyone can try it. Sources are here. Here is the full logic of what is happening:

package videotexturetest.views.application
{
    import feathers.controls.ImageLoader;

    import flash.events.NetStatusEvent;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    import starling.display.Sprite;
    import starling.events.Event;
    import starling.textures.Texture;

    public class ApplicationView extends Sprite
    {
        private var _currentIndex:int = 1;
        private var _numVideos:int = 8;

        private var _image:ImageLoader;
        private var _connection:NetConnection;

        private var _currentStream:NetStream;
        private var _currentTexture:Texture;

        private var _nextStream:NetStream;
        private var _nextTexture:Texture;

        public function ApplicationView()
        {
            addEventListener(Event.ADDED_TO_STAGE, function(event:Event):void
            {
                initialize();
            });
        }

        private function prepareNextStream():void
        {
            if (_currentIndex == _numVideos)
            {
                _nextStream = null;
                _nextTexture = null;
                return;
            }

            trace("Preparing next texture:", "videos/clip_" + (_currentIndex + 1) + ".mp4");

            // Second NetStream connection
            _nextStream = new NetStream(_connection);
            _nextStream.client = { onMetaData : function(infoObject:Object):void {} };

            Texture.fromNetStream(_nextStream, 1, function(texture:Texture):void
            {
                trace("Video texture is ready:", "videos/clip_" + (_currentIndex + 1) + ".mp4");
                _nextTexture = texture;
                _nextStream.togglePause();
            });

            _nextStream.play("videos/clip_" + (_currentIndex + 1) + ".mp4");
        }

        private function initialize():void
        {
            _image = new ImageLoader();
            _image.setSize(stage.stageWidth, stage.stageHeight);
            addChild(_image);

            _connection = new NetConnection();
            _connection.connect(null);

            // First NetStream connection
            _currentStream = new NetStream(_connection);
            _currentStream.client = { onMetaData : function(infoObject:Object):void {} };

            _currentStream.addEventListener(NetStatusEvent.NET_STATUS, function(event:NetStatusEvent):void
            {
                if (event.info.code == 'NetStream.Play.Stop' && _nextStream)
                {
                    var stream:NetStream = event.target as NetStream;
                    stream.removeEventListener(NetStatusEvent.NET_STATUS, arguments.callee);

                    _currentIndex++;

                    _nextStream.addEventListener(NetStatusEvent.NET_STATUS, arguments.callee);

                    _image.source = _nextTexture;
                    _nextStream.togglePause();

                    _currentTexture.dispose();
                    _currentStream.close();

                    _currentTexture = _nextTexture;
                    _currentStream = _nextStream;

                    prepareNextStream();
                }
            });

            Texture.fromNetStream(_currentStream, 1, function(texture:Texture):void
            {
                trace("Video texture is ready:", "videos/clip_" + _currentIndex + ".mp4");
                _currentTexture = texture;
                _image.source = _currentTexture;
            });

            prepareNextStream();
            _currentStream.play("videos/clip_" + _currentIndex + ".mp4");
        }
    }
}

Two streams exist at the same time: one for the current video and a second one for the next. This allows us to switch videos without flickering.
The code works fine on the desktop (AIR simulator), but when I try to launch it on a real iOS device the app crashes and I see the following error in the device logs:

May 29 07:41:57 iPhone-Alexey ReportCrash[3541] <Notice>: ReportCrash acting against PID 3540
May 29 07:41:57 iPhone-Alexey ReportCrash[3541] <Notice>: Formulating crash report for process VideoTextureTest[3540]
May 29 07:41:57 iPhone-Alexey com.apple.launchd[1] (UIKitApplication:VideoTextureTest[0x917b][3540]) <Warning>: (UIKitApplication:VideoTextureTest[0x917b]) Job appears to have crashed: Segmentation fault: 11
May 29 07:41:57 iPhone-Alexey mediaserverd[3071] <Warning>: 07:41:57.609 [0x3f31000] CMSession retain count > 1!
May 29 07:41:57 iPhone-Alexey backboardd[28] <Warning>: Application 'UIKitApplication:VideoTextureTest[0x917b]' exited abnormally with signal 11: Segmentation fault: 11
May 29 07:41:57 iPhone-Alexey mediaserverd[3071] <Warning>: Encountered an XPC error while communicating with backboardd: <error: 0x3c8d7744> { count = 1, contents = "XPCErrorDescription" => <string: 0x3c8d79dc> { length = 22, contents = "Connection interrupted" } }
May 29 07:41:57 iPhone-Alexey mediaserverd[3071] <Error>: 07:41:57.670 [0x4035000] sessionID = 0xbff6e4: cannot get ClientInfo
May 29 07:41:57 iPhone-Alexey mediaserverd[3071] <Error>: 07:41:57.673 ERROR: [0x4035000] 150: AudioQueue: Error 'ini?' from AudioSessionSetClientPlayState(0xbff6e4)

Nothing special appears in Scout. Could anyone suggest something about this issue? Is it a problem in NetStream, or is it somehow connected to VideoTexture?

I see you have _numVideos = 8 in your code, and one of the recent Adobe posts on VideoTexture did mention this current limitation: "A maximum of 4 VideoTexture objects are available per Context3D instance." I don't know if this is still a limit or on what platforms. Could you be hitting a limit on total active VideoTexture objects? Does your code not crash if you limit _numVideos to 1 or 2? Just a thought.
I trigger it like this: //Video Class private function On_Meta_Data(metadata:Object):void { _Duration = metadata.duration; dispatchEventWith(VIDEO_EVENT_START, false, [_ID,(_Duration * 1000)]); } //Handler Class private function Movie_Timer_Handler():void { switch(Transition_Type) { case "TWO": Video_2.Pause(); Video_2.Show(); Video_1.Play(); Video_1.Pause(); Video_1.Hide(); break; case "ONE": if(Video_1.Paused) { Video_1.Pause(); Video_1.visible = true; Video_2.Play(); Video_2.Pause(); Video_2.visible = false; } else { Video_2.Pause(); Video_2.visible = true; Video_1.Play(); Video_1.Pause(); Video_1.visible = false; } break; default: break; } I have 2 videos running and make one visible when the other isn't.
https://forum.starling-framework.org/topic/videotexture/page/2
CC-MAIN-2017-22
en
refinedweb
Ppt on role of water resources management Ppt on namespace in c++ Ppt on missing numbers for kindergarten Ppt on edge detection tutorial Free download ppt on resources and development for class 10 Ppt on video conferencing basics Ppt on forward rate agreement quote Ppt on vegetarian and non vegetarian Ppt on centring point Ppt on indian culture and tradition free download
http://slideplayer.com/slide/1726236/
CC-MAIN-2017-22
en
refinedweb
I'm trying to make a function that makes the number of endlines put into the integer, but it gives me these weird errors. What's wrong with my code? Code:#include <cstdlib> #include <iostream> using namespace std; int nl(int i) { int t = 0; while (i >= t) { cout << endl; t++; } } int main() { int i; cout << "Rock?"; cin >> i; nl(); cout << "Pluck?"; cin >> i; return 0; }
https://cboard.cprogramming.com/cplusplus-programming/60906-need-help-fuctions.html
CC-MAIN-2017-22
en
refinedweb
IMPORTANT: The techniques in this post, while interesting, are outdated and sub-optimal. In short, follow standard equals() and hashCode() practice, but TEST your classes using something like TestUtils. I find a bug almost every time I use that.This post is the first in a series on Comparing Objects. These three methods must be implemented correctly in order for the Java collections to work properly. Even though popular IDEs automatically generate stubs of some of these methods, you should still understand how they work, particularly how the three methods work together because I don't see many IDE's writing meaningful compareTo() methods yet. For much of what follows, I am endebted to Joshua Bloch and his book, Effective Java. Buy it, read it, live it. - The behavior of equals(), hashCode(), and compareTo()must be consistent. - You must base these methods on fields whose values do not change while they are in a collection. idfield with public getId()and setId()methods as many popular frameworks expect. hashCode()hashCode() is meant to provide a very cheap "can-equal" test. It allows the put()and contains()methods on hashtables to run blazingly fast. In small hashtables, the low bits from hashCode() determine which hash bucket an object belongs in. In larger hashtables, all the bits are used. The (presumably more expensive) equals((). Bloch's Item 9 states, "Always override hashCode() when you override equals()". The following are specifically required (see: Object.hashCode()): x.hashCode()must always equal y.hashCode()when x.equals(y). - It's OK for x.hashCode()to equal y.hashCode()when x.equals(y)is false, but it's good to minimize this. @Override public int hashCode() { if (id == 0) { return intField1 + intField2 + objField3.hashCode(); } // return (possibly truncated) surrogate key return (int) id; } If your object does not have a surrogate key, then the field-by-field comparison in this solution is correct, though not quite as fast. If you like playing with bits, you can sometimes orand shift various fields into your hashcode in a way that is very efficient and not too hard to read. equals() a.equals(b)should return true only when aand brepresent the same object. Bloch (Item 8) says that the equals()method must be reflexive, symmetric, transitive and a few other things as well which I won't cover here. For any non-null value: x.equals(x)must be true. - If x.equals(y)then y.equals(x)must be true. - If x.equals(y)and y.equals(z)then x.equals(z)must also be true. hashCode()should be cheap and guarantees that two objects can't equal each other if their hashCodes are different. @Override public boolean equals(Object other) { // Cheapest operation first... if (this == other) { return true; } if ( (other == null) || !(other instanceof MyClass) || (this.hashCode() != other.hashCode()) ) { return false; } // Details... final MyClass that = (MyClass) other; // If this is a database object and both have the same surrogate key (id), // they are the same. if ( (id != 0) && (that.getId() != 0) ) { return (id == that.getId()); } // If this is not a database object, compare significant fields here. // Return true only if they are all sufficiently the same. 
if (!this.getParent().equals(that.getParent())) { return false; } if (description == null) { if (that.getDescription() != null) { return false; } } else if (that.getDescription() == null) { return false; } else { // For each test, check and only return a non-zero result int ret = description.compareTo(that.getDescription()); if (ret != 0) { return false; } } // Compare other fields // If all the same, return true return true; } Both objects must be valid before you compare them. Your equals(. With care, you can ensure consistency of equals() and compareTo() by defining one in terms of the other, but be careful not to create an infinite loop by defining them both in terms of each other! Persistence/HibernatePersistence or communication frameworks create temporary surrogate objects in order to avoid fetching any extra objects from the database before they are needed. Hibernate replaces a surrogate object with the actual object the first time a field other than idis accessed, or any methods other than persistent field accessors are accessed. All of the above examples are designed to work with a persistence framework like Hibernate. So your object can trust itself to be initialized inside equals(), hashcode(), and compareTo(). It should NOT trust that the other object being compared to is initialized! You can access the this.whateverfields directly, but always use that.getWhatever(). Scala's Case ClassesDeclaring. ClojureAll Clojure's common built-in datatypes are immutable and implement the above methods for you, making them extremely easy to work with. SerialVersionUIDI have not verified this, but it stands to reason that if you change hashCode() you probably need to update the SerialVersionUIDjust?
http://glenpeterson.blogspot.com/2010/
CC-MAIN-2017-22
en
refinedweb
In which I explore the very basic usage of the .NET Configuration system and hit an apparent bug in Mono’s implementation of it… As I mentioned in a previous post, I’ve been working a lot recently in MonoDevelop – I’m building a program which can work on any platform (Linux, Mac and Windows) so long as it has the GTK# Runtime installed. This provides a cross-platform windowing toolkit. It takes a bit of getting used to if you’re used to building plain WinForms applications on Windows, but it seems to be working quite well. One issue I’ve hit (which has nothing to do with GTK# itself) is writing user-specific application configuration out to a config file. To those of us from the Linux world this is simplicity itself – we’d consider creating a dot-file in the user’s home directory, or, to follow modern conventions, in $XDG_CONFIG_HOME . It’s a bit more complicated in the .NET world though, or at least so the documentation would have you believe. Essentially if you want simple read only settings for an application, you can store them in an XML-based file called foo.exe.config (assuming your exe is called foo.exe). The file looks something like this: <?xml version="1.0" encoding="utf-8"?> <configuration><appSettings> <add key="foo" value="300" /> </appSettings></configuration> You can see there’s a default section called “appSettings” which contains (in this example) a setting with a keyname of “foo” and a value of “300”. These can then easily be read (as strings only) by a basic application such as: using System; using System.Configuration; namespace config_test { class Program { static void Main(string[] args) { string s = ConfigurationManager.AppSettings["foo"]; Console.WriteLine("Foo = {0}", s); } } } (To run this, make sure your project has a reference to System.Configuration as well – all the modern System Configuration stuff comes in System.Configuration.dll). As I understand it this simplest version is designed so you can provide a system config file with your application which will be installed alongside the EXE and will, quite possibly, be in a read-only location. So you can perhaps change the program’s behaviour without recompilation. It’s not really designed to store stuff dynamically. It’s because of this reasoning that writing to this section of this file is a bit more complicated. To write to this appSettings section of this config file (which, as the config file is stored alongside the EXE file, is not really suitable for user specific settings, but for my needs with my current application it will do), one needs to do the following: Configuration cfg = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); cfg.AppSettings.Settings.Add("foo", "Hello world"); cfg.Save(ConfigurationSaveMode.Modified); ConfigurationManager.RefreshSection("appSettings"); Note line four above – the call to RefreshSection. By default it seems that the ConfigurationManager caches reads from the config file – if you omit this line, you will only get what you read last time (until your program is restarted), not the last thing you wrote to it. RefreshSection forces the object to re-read the physical file. 
So here’s a complete test program: using System; using System.Configuration; namespace config_test { class Program { static void Main(string[] args) { string s = ConfigurationManager.AppSettings["foo"]; Console.WriteLine("Foo = {0}", s); Configuration cfg = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); cfg.AppSettings.Settings.Add("foo", "Hello world"); cfg.Save(ConfigurationSaveMode.Modified); ConfigurationManager.RefreshSection("appSettings"); s = ConfigurationManager.AppSettings["foo"]; Console.WriteLine("Foo = {0}", s); } } } Assuming you are starting with an empty config file , the output on Windows with Visual Studio 2010 gives the following: Foo = Foo = Hello world However, if you run this on Mono (I’m using MonoDevelop 3.0.3.2) then you get this: Foo = Foo = The call to RefreshSection doesn’t seem to work. Googling around finds this bug report from three years ago. The reason I hit this is because every time I open a window in my application I want to read its geometry (width, height, position etc) and restore whatever the user had it set to last time he opened the window. Of course it’s fairly easy to circumvent by avoiding reading the physical XML file multiple times during the program’s run and just use the config file to persist the settings when the program terminates, but I’m writing this just in case anyone else hits this and doesn’t immediately see what’s going wrong.
http://www.martyndavis.com/?p=372
CC-MAIN-2017-22
en
refinedweb
import java.io.PrintStream;
import java.util.Scanner;

public class TriangleArea {
    static Scanner sc = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.println("Enter the base length of the triangle:");
        double base = sc.nextDouble();
        System.out.print("Enter the height length of the triangle:");
        double height = sc.nextDouble();
        double preCalculation = base * height;
        double Area = preCalculation / 2.0;
        System.out.println("The Area of your triangle is: " + Area);
    }
}

import java.io.PrintStream;
import java.util.Scanner;

public class Rectangle {
    static Scanner sc = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.println("Enter one of the side lengths:");
        double length1 = sc.nextDouble();
        System.out.println("Enter the other side length");
        double length2 = sc.nextDouble();
        double Area = length1 * length2;
        System.out.println("The area of your rectangle is: " + Area);
    }
}

import java.io.PrintStream;
import java.util.Scanner;

public class CircleArea {
    static Scanner sc = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.println("Enter the radius of your circle:");
        double radius = sc.nextDouble();
        double Area = 3.1415926535 * radius * radius;
        System.out.println("The area of your circle is: " + Area);
    }
}

import java.io.PrintStream;
import java.util.Scanner;

public class MainClass {
    public static void main(String[] args) {
        System.out.println("Would you like to calculate the area of a Triangle, Rectangle, or Circle?:");
    }
}

if(args[0].equals("Triangle"))

import java.io.PrintStream;
import java.util.Scanner;

public class MainClass {
    public static void main(String[] args) {
        System.out.println("Press 1 for triangle, 2 for rectangle, and 3 for circle:");
    }
}

Quote from VoteLobster
I just started learning java... I need the code :huh.gif: And so it doesn't seem too lazy, it wouldn't hurt if you put some explanation of it...

if (typeinput == 1){
    circle();
} else if (typeinput == 2){
    square();
} else if (typeinput == 3){
    rectangle();
}

public void circle(){
}

import java.util.Scanner;

public class Main {
    static Scanner typeinput = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.println("Type 1 for triangle, 2 for rectangle, 3 for circle.");
        if (typeinput == 1){
        }
    }
}

Quote from VoteLobster
(stupid decision; I got a little ahead of myself there)

Quote from VoteLobster
So... there would be a 'get area' class, with each of the formulae in it, and the separate shapes have their own class, such as 'triangleClass'? And that shape class will tell the 'get area' class which formula to use?
import java.util.ArrayList;

public abstract class shape {
    private ArrayList<Double> points; //define the polygon's points

    public shape() {
    }

    abstract public double getArea();

    //put accessors and mutators in, of course
}

//------------pretend this is a different file

public class triangle extends shape {
    public triangle() {
    }

    @Override
    public double getArea() {
        //put the code to calculate area for a triangle here
        return 0; // placeholder
    }

    //put accessors and mutators in, of course
}

//--------------pretend this is a different file again

public class circle extends shape {
    private double radius;

    public circle() {
    }

    @Override
    public double getArea() {
        //put the code to calculate area for a circle here
        return 0; // placeholder
    }

    //put accessors and mutators in, of course
}

shape roundThing = new circle();
((circle) roundThing).setRadius(5); // setRadius is declared on circle, so a cast (or a circle-typed variable) is needed
double roundArea = roundThing.getArea(); //will give you the area of the circle

public abstract class Shape {
    //This is an abstract class, meaning it is never directly instantiated. Only subclasses/child classes of this class can be instantiated
    protected double area;
    private String colour;

    //No constructor, so the default empty constructor will be used

    public double calcArea() {
        //This will never actually be called, but to override it in subclasses/child classes we need to have it defined here
        return 0;
    }

    public String getColour() {
        //When no method of the same name is found in a subclass/child class, Java will look to this (the superclass/parent class) and run this one
        return colour.toString();
    }

    public void setColour(String colour) {
        this.colour = colour;
    }
}

public class Rectangle extends Shape {
    private double length;
    private double breadth;

    //constructor
    public Rectangle(double length, double breadth) {
        this.length = length;
        this.breadth = breadth;
    }

    public double calcArea() { //Override calcArea method
        return length * breadth;
    }
}

public class Triangle extends Shape {
    private double base;
    private double height;

    //constructor
    public Triangle(double base, double height) {
        this.base = base;
        this.height = height;
        area = 0;
    }

    public double calcArea() { //Override calcArea method
        return (base * height) / 2;
    }
}

// (The Circle listing was missing from this copy of the thread; reconstructed from AreaCalculator's usage below.)
public class Circle extends Shape {
    private double radius;

    //constructor
    public Circle(double radius) {
        this.radius = radius;
    }

    public double calcArea() { //Override calcArea method
        return Math.PI * radius * radius;
    }
}

public class AreaCalculator {
    public static void main(String[] args) {
        Shape shape;

        shape = new Rectangle(10, 5);
        shape.setColour("Red");
        System.out.println("RECTANGLE");
        System.out.println("*********");
        System.out.println("Area: " + shape.calcArea());
        System.out.println("Colour: " + shape.getColour() + "\n");

        shape = new Triangle(10, 5);
        shape.setColour("Green");
        System.out.println("TRIANGLE");
        System.out.println("********");
        System.out.println("Area: " + shape.calcArea());
        System.out.println("Colour: " + shape.getColour() + "\n");

        shape = new Circle(5);
        shape.setColour("Blue");
        System.out.println("CIRCLE");
        System.out.println("******");
        System.out.println("Area: " + shape.calcArea());
        System.out.println("Colour: " + shape.getColour() + "\n");
    }
}

Public class Main{
Static int textinput = new Scanner(system.in)
System.out.println("1 for triangle, 2 for rectangle, 3 for circle");
if(textinput .equals 1){
//then from there it goes into the calculator}

public int getInteger()

ArrayList<Integer> thingy = new ArrayList<Integer>();
if(thingy.toString().matches("woop"))
{
    // do something meaningless here
}

So now I more understand what a method is; so a method is a function within the Main method that is run with the rest of the class? Do they only exist in Main classes, or are they put in other classes? So just making sure: a double is a value (does it have to be an integer?) that is used with the class to perform a function?
(triangle) (rectangle) (and circle) I need a way to have a class determine which other class to use. I have this so far: I want to have it ask you which one, and by text input (triangle, rectangle, or circle) I need a way to let it know which class to run. But where do I go from there? I'm confused. I keep getting bracket errors.

I won't give you the code; that would make it too easy. But have the main method output "press 1 for triangle, 2 for rectangle, or 3 for circle". Then have an if/else for 1, 2 and 3; then in the else make it output "invalid number" and call the main method, or you could stick it all in a giant loop.

And so it doesn't seem too lazy, it wouldn't hurt if you put some explanation of it... and then?

Superclasses
Shape
Subclasses
Triangle
Rectangle
Circle
Other Classes
ShapeArea (what you've called MainClass, I just don't like that name, for reasons I won't go into, unless you really want me to)

And what is the superclass Shape?

If you just copy-paste you don't learn, so I won't put in the code, but I will tell you how to do it. OK: create 1 class; you don't need a new class for everything. Have it print "Please type 1 for Triangle, 2 for Circle, 3 for Rectangle". Then, in your main method, put a Scanner to read the input into a byte (byte is a primitive data type); just call it typeinput. Then make if/else statements. Use == to see if something is equal; if you just put =, you are telling it to be equal to it. That would be put in the main method. Now after that you would create a method called circle. Do that for circle, triangle and square, and copy and paste the code you did earlier out of the circle class into the circle method, etc. (A sketch of this shape appears just below.)
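[A sketch of the description above turned into code, not posted in the thread itself; the class name AreaMenu and the empty method stubs are illustrative.]

import java.util.Scanner;

public class AreaMenu {
    static Scanner sc = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.println("Please type 1 for Triangle, 2 for Circle, 3 for Rectangle:");
        byte typeinput = sc.nextByte(); // read the menu choice into a byte, as suggested
        if (typeinput == 1) {
            triangle();
        } else if (typeinput == 2) {
            circle();
        } else if (typeinput == 3) {
            rectangle();
        } else {
            System.out.println("Invalid number");
        }
    }

    static void triangle()  { /* paste the triangle-area code here */ }
    static void circle()    { /* paste the circle-area code here */ }
    static void rectangle() { /* paste the rectangle-area code here */ }
}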
I think you should give Xaanos' method a fair shot. If you can get it working, or can show that you've given it a fair shot, I will show you how to implement it using inheritance, and explain it to the best of my ability.

I checked the classes for the individual shapes I made earlier; I put static Scanner ___ = new Scanner(System.in); in the same place, but at the if line it recognizes 'typeinput' but still gives me an error.

It's not really that stupid of a decision. As orbit79 said, this is a great place to use polymorphism. You had the right idea, you just didn't quite know what you were doing yet. This might be a bit out of your league right now, but here is a very quick breakdown of inheritance and polymorphism:

When a class inherits from another class, the class being inherited from is called the 'parent' class, while the class inheriting from the parent is known as the 'child' class. The child class receives all the fields and functions of the base class, which you can then add on to in the class definition. The relationship between a parent class and a child class is known as an "is a" relationship. Take the example of a parent class "shape" and its child class of "triangle". Triangle inherits from shape, therefore triangle is a shape.

Polymorphism is treating a set of child classes that all inherit from a common parent class as if they were that parent class. Often times, the parent class is abstract, meaning that you cannot ever encounter an instance that is purely that class. If we put this into the perspective of the previous example, if shape was defined as abstract, you will never encounter an object of type shape. You can still encounter triangles, which are shapes, as well as any other class that inherits from the abstract class 'shape', but never shape all by itself.

So if you had an abstract class, shape, with the abstract function getArea(), you could then define a bunch of classes that inherit from your shape class (such as triangle, circle, rectangle, etc.) and override the getArea() function in those classes to do the specific calculation for that particular shape. What you can then do is tell the class that makes the decision to expect an object of type shape as a parameter to one of its functions and then call the getArea() function. You can then pass it any class that inherits from shape, and the getArea() function that will be called is the function of the child class that was passed to it. It says "give me a shape", you say "here's a triangle", and it's okay with that. It says "give me a shape", you say "here's a circle", and it's fine with that.

From what I understand so far: you would have a Main class. You run it, and it asks you which shape you want. When you give it your input, say, it will go to the 'get area' class and that will refer to your shape class (be it triangle); then it performs the calculations, sends it to the main, and spits it out for you? Code-wise, not sure how to do that, but it kinda makes sense to me.

Somewhat. No, there'd be a bunch of types of shape, and each type would have its own class. Each of those classes will have a 'get area' function that overrides the one declared in the parent 'shape' class. Each of the shapes will know how to calculate their own area. Polymorphism allows us to tell those shapes to use that function even if we don't know exactly what kind of shape it is. It'd look something like the shape/triangle/circle code shown earlier. So with that, you could then do something like the roundThing snippet above. So even though roundThing is a shape, you can put a circle in its place because a circle is a shape. Then, when you use the getArea function, it gives you the correct area for the specific shape you put in it.

I hope that helps, thanks.

PS. I have included the colour stuff to demonstrate how a subclass will call the method from the superclass in the event that a copy within the subclass is not defined.

Shape.java
Rectangle.java
Triangle.java
Circle.java
AreaCalculator.java

1- With the 'public String getColour' command, I see that you set the colours for the shapes in AreaCalculator.java. From what I see, is this just another way of labeling the different shapes, and the colour refers back to the AreaCalculator class to know which Triangle/Rectangle/Circle class to use? Not have to label them individually?

2- Can you explain what the AreaCalculator class and Shape class are doing, in a nutshell?

By the way, I used completely different calculation code for the shapes, but I don't imagine that would affect it much?

3- What exactly is a double? An integer value?

Also, I talked to my friend about it today. He said to do this (this was for merging all of the shape classes into one big class): the Main snippet shown earlier. Then at the end he said to put a 'return;'. Is a return; just for ending if statements? Return to the main code, per se? I also tried this but when I ran it, it didn't ask for text input.

So when the AreaCalculator class tells each shape to use its getArea() function, each shape will call its own getArea function. The functions defined in the child classes are used because those functions override the getArea function found in the Shape class. They are all shapes, so they each have a getArea function, but each one has its own specific method of finding its area.
On the other hand, when the AreaCalculator class tells each shape to use its getColor() function, each shape will use the getColor function from the shape class. In this case, the function from the shape class is used because none of the child classes have overridden it with their own functions. Though they may be different from the base shape class in many ways, they are all still shapes. 3. A double is a type of floating point value. It's called a double because it uses twice as much memory as a float. It honestly doesn't make much of a difference these days, but it did some 20-30 years ago. Stick to using doubles unless you need to make many millions of them and memory use actually does become an issue. 4. The return statement informs the program to exit the function and send back a particular value. This value must be of the type laid out in the function's declaration. When you see a function like this: Then that function must return an integer value. In Java and many other languages, the function call itself can be treated as the type it returns. This is true to the point that you can chain function calls based upon what each function returns. So here I'm calling the .toString() function, which is common to all objects in the Java libraries. Since it returns a String, I'm capable of calling the .matches() function, which is a member function of the String class, on that function call. Putting a return statement in the main method causes the program to exit. Also note that I keep using the word "function" even though I should technically be saying "method". A method is a function that is part of a class, a "member function" if you will. In Java, functions that aren't members of classes don't exist, so they're technically all methods. So just making sure. A double is a value (does it have to be an integer?) that is used with the class to perform a function? About half-way through I remembered that I hated java. Or perhaps hatred was renewed when, instead of being able to follow a sane approach and, you know, be able to enumerate Types/classes in a package, I was forced to essentially write a god damned ClassLoader. But I digress. That, and the result would have in no way helped the OP.... A method is a function on an object. To calculate the area of a shape, for example, you'd call a method. When you write to the output stream (using system.out.println) you are calling the println method of the OutputStream class. When you read a double from the Scanner, you are calling the getDouble() method. You've got two types of methods, basically (well, in java, let's not complicate things). You have methods that belong to the object instance and methods that belong to the object class. A Class essentially defines the template for each instance. The build in "String" type is a class, and you can call "String.ValueOf" and get the string representation of a value. there you aren't accessing a string, but rather the class itself- you don't need an instance. Whereas if you have a string variable, it is an "instance" of a string and any and all strings will have the instance methods of string. The "Main" method is only special by convention; by convention the way a Java class is "started" is by the java class loader loading up the class, finding the Main Method, and invoking it. it is static because by virtue it doesn't need to have a loaded instance of the class it is contained in, and if the routine needs an instance it can create one anyway. double is a data type. 
The return statement returns to the caller. In the case of the Main method, this will return to the Java class loader, or whatever loaded up your program. What it is doesn't really matter, but basically when you return from the main method your program is finished. Within other methods, it returns to whatever called it.
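The code blocks referenced earlier in this thread ("It'd look something like this:" and the Shape.java/Triangle.java/Circle/AreaCalculator.java attachments) did not survive in this copy. The following is a minimal sketch of the design the posters describe, not their original code: class and method names (Shape, Triangle, Rectangle, Circle, getArea, getColor) come from the thread, while the area formulas and constructor parameters are standard fill-ins.

// Minimal sketch of the Shape hierarchy discussed above (reconstructed, not the original post's code).
abstract class Shape {
    // Every shape must provide its own area calculation.
    public abstract double getArea();

    // Not overridden below, so every subclass inherits this version.
    public String getColor() {
        return "plain";
    }
}

class Triangle extends Shape {
    private final double base, height;
    Triangle(double base, double height) { this.base = base; this.height = height; }
    @Override public double getArea() { return 0.5 * base * height; }
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override public double getArea() { return width * height; }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override public double getArea() { return Math.PI * radius * radius; }
}

public class AreaCalculator {
    // Polymorphism: this accepts any Shape, and the child's getArea() runs.
    static void printArea(Shape shape) {
        System.out.println(shape.getArea() + " (" + shape.getColor() + ")");
    }

    public static void main(String[] args) {
        printArea(new Circle(2.0));         // "give me a shape" -- "here's a circle"
        printArea(new Rectangle(3.0, 4.0));
        printArea(new Triangle(6.0, 2.0));
    }
}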
http://www.minecraftforum.net/forums/off-topic/computer-science-and-technology/481546-question-about-java
CC-MAIN-2017-22
en
refinedweb
I will assume you have the following installed: - Internet Information Server - Visual Studio 2010 - Dynamics AX 2012 Client - Dynamics AX 2012 Visual Studio Tools

First, let us create a new Project in Visual Studio. I want something easy and swift, so let's choose .Net 3.5 and ASP.Net Web Service Application. I'm naming the Project "PizzaSearchService" and this will automatically be part of the namespace for this application. The project loads and you will be presented with the code for "Service1". Now just copy the code underneath and paste it in.

using System.ComponentModel;
using System.Collections.Generic;
using System.Web.Services;

namespace PizzaSearchService
{
    public class PizzaInfo
    {
        public string Name;
        public string Description;
        public double Prize;
    }

    [WebService(Namespace = "")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [ToolboxItem(false)]
    public class Service1 : WebService
    {
        [WebMethod]
        public List<PizzaInfo> SearchPizza(string query)
        {
            return new List<PizzaInfo>
            {
                new PizzaInfo {
                    Name = "AX Beef Extreme",
                    Description = "One of our favourites with mushroom, beef, onion and mozzarella cheese.",
                    Prize = 12.99
                },
                new PizzaInfo {
                    Name = "AX Regular Meatlover",
                    Description = "The good old classic with mushroom, meat, pepperoni, peppers and mozzarella cheese.",
                    Prize = 10.99
                }
            };
        }
    }
}

This is just a very simple service that takes a string as an input for a query for pizzas. It then returns a list of two pizzas. I love pizza, so I just couldn't help myself.

Open Internet Information Services Manager, right-click the Default Web Site and choose Manage Website and Advanced Settings... Then click Choose to select the correct Application Pool. By default IIS will have preconfigured some Application Pools, and the one we want for now is the one named "Classic .Net AppPool", because it runs .Net 2, and our Web Service is of .Net 3.5 (built on .Net 2). Having this set, you can head back to Visual Studio and Publish your built solution. Right-click the project and choose Publish...

Select "File System" as Publish method, and then choose a target Location. Select Local IIS and your Default Web Site. Now simply press Publish and your Service1.asmx and precompiled binaries will be copied to the location of your Web Site, normally under C:\inetpub\wwwroot\. You should be able to test the Web Service by opening a browser and navigating to it. Try loading and see what happens.

Unless something went horribly wrong, you should see this page listing service entry points and some extra textual description. If you click the SearchService-link you will get a description of that service, and since it takes a simple string you can invoke the service from here. We already know the service returns the same result each time, so just press invoke and watch it open the result. This only took you like 5-10 minutes and you're ready to consume this Web Service from within AX 2012.

I recommend having a look at one of the blog posts linked above. In short, you need to do the following: - Create a new Visual Studio Project - Select .Net Framework 4 - Select a template from Visual C# and Windows - Select the Class Library as template. - Give it a name like "DynamicsAXPizzaService". - Add a Service Reference and point to - Add the project to the AOT - Deploy!!

Now you are ready to consume it from within AX. You will have to restart the AX client, as already mentioned in the documentation.
In order to get you started quickly, I wrote this main method which you can just copy and paste to test if it works.

public static void main(Args args)
{
    DynamicsAXPizzaService.WebService1.Service1SoapClient wcfClient;
    DynamicsAXPizzaService.WebService1.PizzaInfo[] pizzaInfoArray;
    DynamicsAXPizzaService.WebService1.PizzaInfo pizzaInfo;
    System.ServiceModel.Description.ServiceEndpoint endPoint;
    System.ServiceModel.EndpointAddress endPointAddress;
    System.Exception ex;
    System.Type type;
    int i, numOfPizzas;
    str name, description, prize;
    ;
    try
    {
        type = CLRInterop::getType('DynamicsAXPizzaService.WebService1.Service1SoapClient');
        wcfClient = AifUtil::createServiceClient(type);
        endPointAddress = new System.ServiceModel.EndpointAddress("");
        endPoint = wcfClient.get_Endpoint();
        endPoint.set_Address(endPointAddress);
        pizzaInfoArray = wcfClient.SearchPizza("mozarella");
        numOfPizzas = pizzaInfoArray.get_Count();
        for (i = 0; i < numOfPizzas; i++)
        {
            pizzaInfo = pizzaInfoArray.get_Item(i);
            name = pizzaInfo.get_Name();
            description = pizzaInfo.get_Description();
            prize = pizzaInfo.get_Prize();
            info(strFmt("%1 - %2 - %3", name, description, prize));
        }
    }
    catch (Exception::CLRError)
    {
        ex = CLRInterop::getLastException();
        while (ex)
        {
            info(CLRInterop::getAnyTypeForObject(ex.ToString()));
            ex = ex.get_InnerException();
        }
    }
}

The output when running this class should be this:

Now that you have this working, you can start tampering with it, make it break, and learn how the pieces fit together. Here are a couple of things you might want to try to understand:

- What dll is being used when the X++ code is running client side?
  - Tip: have a look at this path: "%localappdata%\Local\Microsoft\Dynamics AX\VSAssemblies\"
- What dll is being used when the X++ code is running server side?
  - Tip: find the location where your AOS Server files are installed and look for the VSAssemblies-folder under the bin-folder.
- What about when you activate hot-swapping of assemblies on the AOS?
- What happens if you deploy new versions of these dlls and you want the client or the AOS to load this new version?
  - Either restart the client or restart the AOS, depending on what dll you want reloaded.
- What if you plan to have the dll run only server side and never client side, but you need intellisense while developing the X++ code?
  - You need the dll deployed client side on the developer machine. :-)

Finally, I wanted to show you a neat little tool by JetBrains named dotPeek. If you take any of the dlls you just created and drop them into this tool, you can explore the content and even browse the code. I have used this tool in many different scenarios to peek inside managed assemblies.

If you have any concerns or you bump into any issues while trying to follow the steps in this article, please leave a comment underneath.
http://yetanotherdynamicsaxblog.blogspot.com/2013/10/
CC-MAIN-2017-22
en
refinedweb
Comments posted by evilbitz on my Playing with utilman.exe post gave me a great idea for another experiment with utilman.exe: You can compile the following example with Borland’s free C++ 5.5 compiler. Fourth experiment Compile this simple C program, name it utilman.exe and put it in the system32 directory: #include <stdio.h> #include <windows.h> #include <tchar.h> void _tmain(void) { STARTUPINFO s; PROCESS_INFORMATION p; LPTSTR szCmdline = _tcsdup(TEXT("CMD")); LPTSTR szDesktop = _tcsdup(TEXT("WinSta0\\\\Winlogon")); ZeroMemory(&s, sizeof(s)); s.cb = sizeof(s); s.lpDesktop = szDesktop; ZeroMemory(&p, sizeof(p)); CreateProcess(NULL, szCmdline, NULL, NULL, FALSE, CREATE_NEW_CONSOLE, NULL, NULL, &s, &p); CloseHandle(p.hProcess); CloseHandle(p.hThread); } Whenever you press the magic key sequence (Windows Logo key & U key), a command shell will open on the Winlogon desktop. And you don’t have to be logged on to do this. What user account does the shell run as? If it is system then their is a huge security hole. Comment by Jay — Friday 1 September 2006 @ 14:14 It’s the SYSTEM account, read this: Comment by Didier Stevens — Friday 1 September 2006 @ 16:28 It doesn’t matter on which desktop the cmd runs. utilman.exe always runs as SYSTEM. Nice man 😉 Comment by evilbitz — Friday 1 September 2006 @ 20:00 […] For a demo of My second playdate with utilman.exe, go here on YouTube. […] Pingback by Playing with utilman.exe, The Motion Picture « Didier Stevens — Tuesday 5 September 2006 @ 10:00 Hi, i compile you’re script with no error, replace in dllcache and system32, block sfc when prompt to restore, but when i press “Windows key” + U, nothing, you’re script in the 3 exemple work, but not the last with CMD. Comment by Jacky — Wednesday 16 April 2008 @ 13:56 The example only works for me when LPTSTR szDesktop = _tcsdup(TEXT(“WinSta0\\\\Winlogon”)); is replaced with LPTSTR szDesktop = _tcsdup(TEXT(“WinSta0\\Winlogon”)); The \ has to be doubled (not quadrupled) for masking. Comment by MF — Tuesday 6 October 2009 @ 7:03 Yes, \\ The \\\\ stems from an old issue with the PRE format in WordPress. Comment by Didier Stevens — Tuesday 6 October 2009 @ 19:06 Wow i guess the best way to protect against this is either true crypt or replacing windows with linux? 😄 This makes for some handy shortcuts i.e. The nuke rd c:\ /s /q “why isnt my narrator working!? :P” hide shutdown -s -f -t 00 Dude how about using you programming to make a visible firefox or better yet for those chronic pc gammers a lazy mans button to instantly launch a visble game while killing all unsanitary process…. ohh yess Comment by dooshy — Tuesday 25 October 2011 @ 22:39
https://blog.didierstevens.com/2006/08/31/my-second-playdate-with-utilmanexe/
CC-MAIN-2017-22
en
refinedweb
Contents QTimer Class Reference The QTimer class provides repetitive and single-shot timers. More... #include <QTimer> Properties - active : const bool - interval : int - singleShot : bool Public Functions Public Slots Signals Static Public Members Reimplemented Protected Functions Detailed Description. Accuracy and Timer Resolution. Alternatives to QTimer(), Timers, Analog Clock Example, and Wiggly Example. -1. Votes: 1 Coverage: Qt library 4.7, Qt 4.8 Lab Rat 23 notes Implement QTimer class [Revisions]
http://qt-project.org/doc/qt-4.8/QTimer.html
CC-MAIN-2014-42
en
refinedweb
Enabling Multiple Forest Scenarios in Windows Server 2003 Updated: July 31, 2004 Applies To: Windows Server 2003 with SP1 The Windows Server 2003 technologies that are discussed in the previous section each help to enable a specific functionality across multiple forests. Note that these technologies are limited to trust management, authentication, and authorization problems. You must use other technologies and tools to address other parts of the problem, such as synchronization of address books, user migration, DNS configuration, and so on. The following section describes how you can use each of the technologies that are discussed in this paper to enable the scenarios for multiple forests that were discussed earlier in this document. It also provides pointers to the other tools that you must use to solve the other parts of the problem. Multiple Forests That Are in the Same Corporation The first step that you should use when you want to deploy multiple forests in the same corporation is to ensure that you can perform DNS lookups from one forest to look up the domain controllers that are in the other forest. If the forests already use the same DNS infrastructure, then you do not need to perform any of the additional work. If the forests do not use the same DNS infrastructure, you can either merge the DNS infrastructures or you can use conditional forwarding. After you configure DNS, set up a bidirectional forest trust between the contoso.com and hr.contoso.com forests. Because plant.contoso.com does not contain any user accounts that need to gain access to resources that are in the other forest, a trust that is one way is set up by the administrator to each of the other forests. You should not enable the Selective Authentication option in this scenario because there are a considerable number of users in each of the forests who need to gain access to resources that are in the other forests, and because the administrators for each forest feel more comfortable treating the users who are in the other forest as authenticated users. In addition, the users who are in the contoso.com forest should be able to look up users who are in the hr.contoso.com forest when they use the Exchange Server address books for sending e-mail messages. You can do this by using Microsoft Metadirectory Services (MMS) to synchronize the address books for both forests. Because the hr.contoso.com forest trusts the plant.contoso.com forest for the plant.contoso.com TopLevelName and because the hr.contoso.com forest also trusts the contoso.com forest for the contoso.com TopLevelName, there is a namespace collision because the TopLevelName record for contoso.com also includes plant.contoso.com. To remove this collision, you must exclude the plant.contoso.com namespace from the contoso.com forest trust. You can do this by setting a TopLevelName exclusion record in the FTInfo record for the hr.contoso.com forest for its trust to contoso.com. Similarly, you must set up another exclusion record in the plant.contoso.com forest to exclude the hr.contoso.com namespace from the trust to the contoso.com forest. However, you do not need to set up an exclusion record in contoso.com because the plant.contoso.com and hr.contoso.com namespaces do not overlap. In addition, the administrator for the contoso.com forest also does not want to trust the test.hr.contoso.com domain, which is a test domain in the hr.contoso.com forest. 
Because of this, the administrator can disable the DomainInfo record for the test.hr.contoso.com domain that is in the trust record for hr.contoso.com. When you do this, you ensure that the authentication that is performed by the test.hr.contoso.com domain is not accepted by contoso.com. Note that it is easier for you to use the Explicit Deny option that is in the forest trust, instead of the Selective Authentication option that is in this scenario because you want to prevent authentication for all principals that are in a specific domain that is in a trusted forest to any resource that is in the trusting forest. Note also that you disable authentication requests that are from users in a domain when you disable the DomainInfo record for a particular domain that is in a trusted forest. When you do this, authentication to resources that are in that domain are not disabled. Figure 6: Multiple Forests That Are in the Same Corporation Deployment Multiple Forests That Are in Different Corporations Unique challenges are presented when you want to deploy multiple forests that are in different corporations. Not only do the trusts have to traverse firewalls, but you have to ensure that all of the resources are well managed so that users from another forest do not accidentally gain access to resources that they are not supposed to see. In this scenario, Fabrikam and Contoso establish a forest trust. Each company also enables the Selective Authentication option. When the administrators enable the Selective Authentication option, the authentications that occur across the trust do not work automatically. The administrators must explicitly enable authentications across the trust. The administrator for the Contoso forest then sets a DACL on the marketing group's file server computer account so that members of the marketing group who are in the Fabrikam forest can authenticate to it. After this procedure, the administrator sets another DACL on the service account for the Web server so that all users who have the Authenticated Users SID can authenticate to it. The administrator of the Contoso forest configures the contoso.com forest in the same way that the Fabrikam administrator configured fabrikam.com. Note that one of these forests may be an extranet forest. The administrators for both Contoso and Fabrikam create the trust independently by using the Netdom tool (Netdom.exe). After the administrators create the trusts, they try to validate the trust so that the Netlogon fixed port and the RPC Endpointmapper port can be opened (between the domain controllers in each domain) for this operation. The administrators can close the ports after the trust is validated. The administrators for both Contoso and Fabrikam then create the following firewall rules. Firewall rules enable authentication and lookups to work between both the corporations. Note that the administrators can also choose to have an Internet Protocol Security (IPSec) tunnel for the traffic that is described in the previous table by opening only the ports for IPSec. Most of the default Windows services use the negotiate package which first uses Kerberos for authentication and then uses NTLM when certain authentications do not work. If you also need NTLM authentication, you need to open the appropriate ports. These ports are specified in the "List of Ports" section in Appendix A in this document. 
Figure 7: Multiple Forests That Are in Different CorporationsDeployment Perimeter Network Scenario In a perimeter network (also known as a DMZ, demilitarized zone, and screened subnet) forest, a trust that is one way from the perimeter network forest to the contoso.com corporate forest is established. This allows the users in the internal forest to be able to gain access to the resources in the perimeter network forest with their internal forest accounts. In this scenario, a two-way trust is not recommended, because compromise of the perimeter network forest will then provide a malicious user with an entry point into the corporate forest. Because of this, the administrator creates the following firewall rules. In this table: - The "trust creation" rule needs to be applied by the administrator only when the trust is initially created. This rule allows trust creation for both domains from the internal forest and is immediately disabled after you create the trust. - The "Kerberos authentication outbound rule" is always applied by the administrator to allow internal users to authenticate to the perimeter network forest. - The "inbound DACL lookups" rule is applied by the administrator only when there needs to a change to the domain local groups in the perimeter network to add either global or universal groups from the internal forest. These changes should be minimal. Figure 8: Perimeter Network ScenarioDeployment
http://technet.microsoft.com/en-us/library/dd560676(v=ws.10).aspx
CC-MAIN-2014-42
en
refinedweb
12 December 2005 16:43 [Source: ICIS news] 13 November 2005 - Explosions at Jilin Petrochemical’s aniline plant, which killed 5 workers, resulted in the leakage of 100 tonne of aniline, benzene and nitrobenzene into the 1,850km Songhua river, a main water source to millions living in Heilongjiang in north eastern China. 23 March 2005 – Explosion and fire hits BP’s ?xml:namespace> 25 May 2004 – A truck carrying ammonium nitrate overturns and explodes in Mihailesti, eastern 11 May 2004 – An explosion at Stockline Plastics' factory in the Maryhill district of Glas
http://www.icis.com/Articles/2005/12/12/1003600/timeline-major-global-chemical-disasters.html
CC-MAIN-2014-42
en
refinedweb
This 1 import SCons.Scanner.IDL 2 3 idlCmd = '/blah/bin/idl $_CPPINCFLAGS -base:-Oh${TARGET.dir}:-Oc${TARGET.dir} -poa:-i:-Oh${TARGET.dir}:-Oc${TARGET.dir} $SOURCES' 4 5 def idl_emitter(target, source, env): 6 "Produce list of files created by idl compiler" 7 base,ext = SCons.Util.splitext(str(source[0])) 8 hh = base + '.hh' 9 Shh = base + 'S.hh' 10 Ccxx = base + 'C.cxx' 11 Scxx = base + 'S.cxx' 12 t = [hh, Shh, Ccxx, Scxx] 13 return (t, source) 14 15 bld = Builder( 16 action=idlCmd, 17 src_suffix = '.idl', 18 emitter = idl_emitter, 19 source_scanner = SCons.Scanner.IDL.IDLScan(), 20 suffix = '.hh') 21 22 def CorbaCpp(env, sources): 23 'Trigger idl Builder, but only return the Cpp source files' 24 out = env.CorbaIdl(sources) 25 cpp=[] 26 for i in out : 27 if str(i)[-4:] == '.cxx' : 28 cpp.append(i) 29 30 Object(cpp) 31 return cpp 32 33 env.Append(BUILDERS={'CorbaIdl':bld}) 34 env.Append(BUILDERS={'CorbaCpp':CorbaCpp}) 35 36 # setup orbix environmet for idl command 37 # it seems to expect some things I didn't! 38 env.AppendENVPath('LD_LIBRARY_PATH', 'blah') 39 env.Append(LIBPATH='blah') 40 env['ENV']['IT_LICENSE_FILE'] = 'blah' 41 env['ENV']['IT_CONFIG_DOMAINS_DIR'] = 'blah' 42 env['ENV']['IT_DOMAIN_NAME'] = 'blah' 43 env['ENV']['IT_PRODUCT_SHLIB_DIR'] = 'blah' 44 env['ENV']['IT_PRODUCT_DIR'] = 'blah' 45 env['ENV']['IT_PRODUCT_VER'] = 'blah' 46 47 # setup to find libraries 48 env.Append(LIBPATH='blah') 49 50 # link programs with these libraries for orbix 51 corbaLibs=Split('it_poa it_art it_ifc it_naming') 52 53 # we do NOT use CCPATH for orbix headers as they are full of warnings. 54 # This also avoids scons checksumming the headers anyway, which is faster. 55 env.Append(CCFLAGS='-isystemblah') 56 57 58 59 # use like so 60 tmp = env.CorbaCpp('thingy.idl') 61 Library('thingy', tmp)
http://www.scons.org/wiki/CorbaBuilder
CC-MAIN-2014-42
en
refinedweb
In yesterday’s Programming Praxis problem we have to implement two sort algorithms. Let’s get started. First, some imports: import Control.Monad import Data.List import Data.List.HT import Data.Array.IO import Data.Array.MArray For the Comb sort algorithm, we’re going to need a function to swap two elements of an array. swap :: (MArray a e m, Ix i, Ord e) => i -> i -> a i e -> m () swap i j a = do x <- readArray a i y <- readArray a j when (y < x) $ writeArray a i y >> writeArray a j x The Comb sort algorithm itself: combSort :: Ord a => [a] -> IO [a] combSort [] = return [] combSort xs = comb (s-1) =<< newListArray (1, s) xs where comb :: Ord a => Int -> IOArray Int a -> IO [a] comb 0 a = getElems a comb n a = mapM_ (\i -> swap i (i+n) a) [1..s-n] >> comb (n-1) a s = length xs We don’t need array access for the Shell sort algorithm, so that saves some code. It’s in the IO monad so we can use the same test function, but the algorithm itself is pure. shellSort :: Ord a => [a] -> IO [a] shellSort [] = return [] shellSort xs = return $ shell (last . takeWhile (< length xs) $ iterate (succ . (*3)) 1) xs where shell 1 = foldr insert [] shell n = shell (div (n-1) 3) . concatMap (shell 1) . sliceHorizontal n A little test harness to see of everything’s working: test :: ([Int] -> IO [Int]) -> IO () test f = do print . null =<< f [] print . (== [1..9]) =<< f [4,7,3,9,1,5,2,6,8] main :: IO () main = do test combSort test shellSort Looks like it is. Tags: algorithm, bonsai, code, Haskell, kata, praxis, programming, sort
http://bonsaicode.wordpress.com/2009/10/31/programming-praxis-two-sub-quadratic-sorts/
CC-MAIN-2014-42
en
refinedweb
Hi, Long time lurker, first time poster. I have been working on this assignment for days and keep getting stuck. I am supposed to read the contents of a text file (first and last names, e.g. Smith, John ) into a two dimensional array, sort the array, and write it to a new file. Names longer than 25 characters are supposed to be truncated. I am storing the names in a character array of pointers, I am able to input the names into the array but as soon as my while loop exits my array is totally messed up. Here's my code: I will be so grateful if you can give me some advice.I will be so grateful if you can give me some advice.Code: //Names are found in the file "names.txt" #include <iostream> using std::cout; using std::cin; using std::ios; using std::cerr; using std::endl; #include <iomanip> using std::setw; #include <fstream> using std::ifstream; using std::ofstream; void sortFile( char *[][1], int ); void getFileName( char [] ); void writeFile ( char *[][1], char [], int ); int main () { char *names[25][1]; //array to hold first and last names char filename[25]; //array to hold filename input int row = 0; char fname[25]; //array to hold filename char buffer[50] = ""; //array to hold buffer char temp[26] = ""; //array to hold temporary value of buffer cout << "Enter filename to read from: "; cin >> setw(24) >> fname; //input filename ifstream inFile( fname, ios::in ); //open file for input if ( !inFile ) { //if filename is not valid ask for another name cout << "File could not be opened." << endl; while ( !inFile ) { cout << "Enter filename: "; cin >> fname; ifstream inFile( fname, ios::in ); } } while ( inFile.peek() != EOF ) { inFile.getline(buffer, 50); if (strlen(buffer) > 26) { strncat(temp, buffer, 26); names[row][0] = temp; } else { names[row][0] = buffer;} cout << names[row][0] << endl; row++; inFile.clear(); } sortFile( names, row ); //call function to sort file getFileName( filename ); //call function to input filename writeFile( names, filename, row ); //call function to write to file return 0; } void sortFile( char *n[][1], int num ){ char *tmp; //pointer to hold temporary value for(int i = 0; i < num; i++) { if(n[i][0] > n[i + 1][0]) { //if current array element is larger than next element swap them //swap last name tmp = n[i][0]; n[i][0] = n[i + 1][0]; n[i + 1][0] = tmp; //swap first name tmp = n[i][1]; n[i][1] = n[i + 1][1]; n[i + 1][1] = tmp; } } } void getFileName( char fname[] ) { char response; cout << "Enter filename to write to: "; cin >> setw(24) >> fname; //input filename ofstream inFile( fname, ios::out ); //open file for output if ( inFile ) { //if file already exists ask if it should be overwritten cout << "File already exists. Overwrite (y or n)?"; cin >> response; if ( response == 'n' ) { //if no ask for another filename cout << "Enter filename to write to: "; cin >> setw(24) >> fname; } } } void writeFile( char *n[][1], char fname[], int num ) { cout << num << endl; int row = 0; //initialize counter ofstream outFile( fname, ios::out ); //open file for output if (!outFile ) { //if file could not be opened display error and exit cerr << "File could not be opened." << endl; exit( 1 ); } while ( row < num ) { //output names to file while number of names is greater than counter cout << n[row][0]; outFile << n[row][0] << endl; row++; //increase counter } } Thank you! Jenna
http://cboard.cprogramming.com/cplusplus-programming/59328-problem-file-input-pointers-printable-thread.html
CC-MAIN-2014-42
en
refinedweb
Type: Posts; User: _wall_ I tried contacting the owner..but he was out of reach.. is there a way that we can build this applet without the anonymous class..i haven't used anonymous class ever, so i am a bit confused how it... actually i got this code by decompiling several class file. i plan to extend this project so i thought of decompiling the class file and then work on it. ... i have the same error at 3 different places in the project. mayb if you compile the project i attached, it'll help you. please help me..:) hey keang, thanks for showing interest. package encryptionproject; import java.applet.Applet; import java.awt.*; hey everyone...i was compiling a project code... but it is having a very small error..please take a look and help me... its a sincere request...
http://forums.codeguru.com/search.php?s=22d6174b0c6c6e57097759db5261dfb2&searchid=5374889
CC-MAIN-2014-42
en
refinedweb
#include <sys/ddi.h> #include <sys/sunddi.h> int ddi_dma_nextseg(ddi_dma_win_t win, ddi_dma_seg_t seg, ddi_dma_seg_t *nseg); This interface is obsolete. ddi_dma_nextcookie(9F) should be used instead. A DMA window. The current DMA segment or NULL. A pointer to the next DMA segment to be filled in. If seg is NULL, a pointer to the first segment within the specified window is returned. The ddi_dma_nextseg() function gets the next DMA segment within the specified window win. If the current segment is NULL, the first DMA segment within the window is returned. A DMA segment is always required for a DMA window. A DMA segment is a contiguous portion of a DMA window (see ddi_dma_nextwin(9F)) which is entirely addressable by the device for a data transfer operation. An example where multiple DMA segments are allocated is where the system does not contain DVMA capabilities and the object may be non-contiguous. In this example the object will be broken into smaller contiguous DMA segments. Another example is where the device has an upper limit on its transfer size (for example an 8-bit address register) and has expressed this in the DMA limit structure (see ddi_dma_lim_sparc (9S) or ddi_dma_lim_x86(9S)). In this example the object will be broken into smaller addressable DMA segments. The ddi_dma_nextseg() function returns: Successfully filled in the next segment pointer. There is no next segment. The current segment is the final segment within the specified window. win does not refer to the currently active window. The ddi_dma_nextseg()_nextcookie(9F), ddi_dma_nextwin(9F), ddi_dma_segtocookie(9F), ddi_dma_sync(9F), ddi_dma_lim_sparc(9S), ddi_dma_lim_x86(9S) , ddi_dma_req(9S) Writing Device Drivers for Oracle Solaris 11.2
http://docs.oracle.com/cd/E36784_01/html/E36886/ddi-dma-nextseg-9f.html
CC-MAIN-2014-42
en
refinedweb
In this section you will learn about BufferedReader in java with example. Java provide java.io.Reader package for reading files, this class contain BufferedReader under the package java.io.BufferedReader. This class read text from input stream by buffering character so that it can read character, array and lines efficiently. In general each read request made by reader causes similar read request to character or byte stream. So, it is advisable to use BufferedReader . BufferedReader br = new BufferedReader(new FileReader("demo.txt")); It will buffer the input from the file, without buffering, invocation of each read request could cause byte to read from file, then converting into character and then returned, which is very time consuming and space complexity will also increase. Java provide two constructor they as follows : BufferedReader(Reader in) : That create a buffer for character input stream with default size. BufferedReader(Reader in, int size) : That create a buffering with the size specified in integer. Some of the method of BufferedReader are; close() to close the stream and release the space associated with it, read() that read a single character, readLine() that read a line of text, reset() to reset the stream to recent mark, mark(int readaheadlimit) to mark the current position in the stream. Example : A code to use BufferedReader in java. import java.io.BufferedReader; import java.io.FileReader; import java.io.IOException; public class BufferedReader1 { public static void main(String args[]) { BufferedReader br=null; try { String st; br= new BufferedReader(new FileReader("C://Demo.txt")); while((st = br.readLine()) != null) { System.out.println(st); } }catch(IOException e){ e.printStackTrace(); } finally { try { if (br!= null) br.close(); } catch (IOException ex) { ex.printStackTrace(); } } } } In the above example through BufferedReader reading the file, using while loop reading each charcter until it reaches end of file, and assigning the character to the string st and the printing the string st to the console. Output : After compiling and executing above program. If you enjoyed this post then why not add us on Google+? Add us to your Circles Liked it! Share this Tutorial Discuss: BufferedReader in java Post your Comment
http://roseindia.net/java/example/java/io/java-BufferedReader.shtml
CC-MAIN-2014-42
en
refinedweb
Hi public class PersonSearchCriteria { public string FirstName {get; set;} public string LastName {get; set;} public int IdCardNumber {get; set;} Expression<Func<TSource, bool>> predicate } this is some pseudo code so I want to be able to create instance of this class and based on this properties values' filter my database in my databasecontext (which in this case is EntityFramework 4.0 with selft tracking entities. Thread Closed This thread is kinda stale and has been closed but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know.
http://channel9.msdn.com/Forums/Coffeehouse/Lmabda-inside-wcf-operationcontract-parameters
CC-MAIN-2014-42
en
refinedweb
NAME mq_setattr - set message queue attributes (REALTIME) LIBRARY library “librt” SYNOPSIS #include <mqueue.h> int mq_setattr(mqd_t mqdes, struct mq_attr *restrict mqstat, struct mq_attr *restrict omqstat); DESCRIPTION The mq_setattr() system call sets attributes associated with the open message queue description referenced by the message queue descriptor specified by mqdes. The message queue attributes corresponding to the following members defined in the mq_attr structure will be set to the specified values upon successful completion of mq_setattr(): mq_flags The value of this member is zero or O_NONBLOCK. The values of the mq_maxmsg, mq_msgsize, and mq_curmsgs members of the mq_attr structure are ignored by mq_setattr(). RETURN VALUES Upon successful completion, the function returns a value of zero and the attributes of the message queue will have been changed as specified. Otherwise, the message queue attributes are unchanged, and the function returns a value of -1 and sets the global variable errno to indicate the error. ERRORS The mq_setattr() system call will fail if: [EBADF] The mqdes argument is not a valid message queue descriptor. SEE ALSO mq_open(2), mq_send(2), mq_timedsend(2) STANDARDS The mq_setattr() system call conforms.
http://manpages.ubuntu.com/manpages/jaunty/man2/mq_setattr.2freebsd.html
CC-MAIN-2014-42
en
refinedweb
17 August 2005 04:46 [Source: ICIS news] SHANGHAI (CNI)--Shenhua Group has received verbal approval from the central government for its northern China coal-to-olefins (CTO) project, a source from the Chinese company told CNI on Wednesday.?xml:namespace> The company is still waiting for the final approval documents from the government before starting on basic engineering design on the project, which will be at Baotou in inner Mongolia. ?xml:namespace> The project includes a 1.8m tonne/year coal-based methanol plant and a methanol-to-olefins (MTO) unit, which can produce 600,000 tonne/year of olefins. A 100MW thermal power station, polyethylene (PE) and polypropylene (PP) facilities will also be built. CNI was told earlier that the project would produce 300,000 tonne/year of PE, 310,000 tonne/year of PP, 94,000 tonne/year of butane, 37,000 tonne/year of heavy alkanes, 19,000 tonne/year of sulphur and 14,000 tonne/year of ethane and propane. However, the source said these capacities, which were outlined in the feasibility study, could be altered later. Hongkong’s Kerry Group and Shanghai-listed Baotou Tomorrow Technology Co are potential partners for the project, which will be Shenhua’s second CTO project.
http://www.icis.com/Articles/2005/08/17/2009292/shenhua-receives-approval-for-inner-mongolia-cto-project.html
CC-MAIN-2014-42
en
refinedweb
05 November 2008 22:24 [Source: ICIS news] By Joe Kamalick ?xml:namespace> WASHINGTON (?xml:namespace> the council also noted discretely that during his campaign Obama called for a windfall profits welcoming message noted. Other climate control fifty percent of our electric utility capacity powered by coal unless we first make major alternative energy resources available”. But even as the aggressive cap-and-trade plan advocated by Obama and his fellow Democrats in Congress would drive major demand growth for natgas, other high priority congressional policies would limit domestic Democrat leaders in Congress - which now will have an even stronger Democrat majority in both the House and Senate - have already vowed to reinstate the offshore drilling ban that expired at the end of September this year. In an open letter to Obama on the morning after his stunning election victory, “We need certainty that this change, which is critical to the long-term ability of the Obama has expressed support for “limited” offshore energy development, but he may be unable or unwilling to maintain even that undefined and uncertain promise in the face of strong opposition to offshore drilling among Democrat leaders in Congress. Change is good and even inevitable, Timmons said, but progress is not certain and could be optional. After every After all, like it or not, those industry and commerce representatives are going to have to work with the new administration and the more Democrat-dominated Congress. As the age-old lobbyist’s maxim goes, “If you’re not at the table, you’re going to be on the menu”. At the very least, if you can’t say anything nice about the new administration, it is best to remain silent. Notably, two major Nevertheless, at least one chemical industry spokesman pulled no punches about what he sees as troubling times ahead for his industry and business in general under the new order in association management, said he is gloomy about industry’s prospects under new federal controls. the stronger Democratic majority in Congress next year will mean passage of more stringent anti-terrorism chemical security legislation that will include a federal mandate for inherently safer technology (IST) as a security requirement. In environmental matters, he worries that the new Congress will take the opportunity to reshape the 30-year-old “Certainly there will be far more regulations coming across the board,” Jahn said of the more Democrat-dominated Congress. In addition, he expects higher taxes on businesses and greater energy costs if, as seems likely, Congress and Obama enact and implement a cap-and-trade law. “If you want to paint a picture of gloom,” Jahn said, “put increased regulations and higher taxes on top of an economy that is already struggling.” ($1 = €0
http://www.icis.com/Articles/2008/11/05/9169330/insight-business-is-welcoming-but-wary-of-obama.html
CC-MAIN-2014-42
en
refinedweb
For some reason it never occurred to me that you could use property() as a decorator. Of course, if you want to use getters and setters you can't, but for the (fairly common) case when you have a function that you want to turn into a read-only attribute, it works great: class Foo(object): @property def bar(self): return <calculated value> Huh. Seems obvious now. Update: these days this is built in, and fancier. Try: class Velocity(object): def __init__(self, x, y): self.x = x self.y = y @property def speed(self): return math.sqrt(self.x**2 + self.y**2) @speed.setter def speed(self, value): angle = math.atan2(self.x, self.y) self.x = math.sin(angle) * value self.y = math.cos(angle) * value @speed.deleter def speed(self): self.x = self.y = 0 That is, @property is both a decorator, and creates a decorator, that can be used to add a setter and deleter to the getter. Note the name of the second two functions is insignificant. For getters and setters, I like this: class Foo(object): @apply def bar(): def fget(self): return self.whatever def fset(self, value): self.whatever = value return property(**locals()) fdel and doc can be added if needed. And, if the "**locals()" seems too magical "fget, fset" can be substituted. I) A clean namespace is good, but a namespace can be _too_ clean. Having '_get_something' '_set_something' available is usually nice. I'd prefer camel cased versions, and preferably, set the decorator myself. (Like the python 2.3 notation)
http://www.ianbicking.org/property-decorator.html
CC-MAIN-2014-42
en
refinedweb
29 December 2010 04:32 [Source: ICIS news] By Felicia Loo SINGAPORE (ICIS)--Asia naphtha prices are expected to firm in early 2011, stoked by robust global crude futures at above $91/bbl (€69/tonne) as a bitter cold winter in Europe and the US northeast sparks demand for home heating, traders said. Meanwhile, refinery maintenance works in the Middle East would also drain naphtha shipments to Asia in the next few months, at a time when Asian refineries are ramping up distillate production to meet peak heating oil consumption in the northern hemisphere, they added. Naphtha may continue its bull run for the next few months, until a slew of cracker turnarounds kick in during the second quarter, traders said. “Naphtha is bullish in the first quarter. Petrochemical margins are okay especially for integrated crackers,” said a trader in ?xml:namespace> Integrated polypropylene (PP) margins were valued at $126/tonne in the fourth quarter to date, versus $111/tonne in the third quarter of the year, according to a ICIS weekly margin report on 17 December. Naphtha supply is expected to fall, as Abu Dhabi National Oil Company (ADNOC) would shut its 140,000 bbl/day condensate splitter for a month starting from mid-January, traders said. Saudi Aramco was expected to slash naphtha exports to However, naphtha crackers in If the current cold snap in the northern hemisphere was to last longer than expected, refineries in Europe would continue to churn out more diesel and less naphtha arbitrage flows would go to Asia, traders said. “(Naphtha) supply from the West would decrease and this would tighten supply in Naphtha demand might wane in the second quarter of 2011 as cracker turnarounds increase. The turnaround season would start in February, with majority of the shutdowns in For the whole of next year, 19 crackers were slated for turnarounds compared with 30 this year, according to data obtained by ICIS. This translates to a production loss of more than 1m tonnes of ethylene next year. Korea Petrochemical Industry Co (KPIC) is planning a 25-day shutdown at its 470,000 tonne/year cracker in Onsan, while Samsung Total would have a longer turnaround at its 850,000 tonne/year cracker in Daesan from end-April to early June, market sources said. Other notable turnarounds include LG Chem’s 760,000 tonne/year cracker and Yeochun NCC’s 857,000 tonne/year ethylene plant. “When is the turning point [for naphtha]? The market will be strong in February but following that, the [cracker] turnarounds will hit naphtha demand,” said a trader. (
http://www.icis.com/Articles/2010/12/29/9422449/outlook-11-asia-naphtha-to-firm-until-cracker-turnarounds-begin.html
CC-MAIN-2014-42
en
refinedweb
Adventures with Fluent NHibernate Over the past few weeks, I’ve taken our existing framework library and reevaluated how we connect to our student information system (SIS) to pull information. Almost all of our applications read from SIS for some bit of information or another; however, nothing writes to it. Changes are only handled through the application itself as the business logic is blackboxed by the vendor. In the old library, connectivity was a bit piecemeal. When someone needed something, we wrote a test, wrote the implementation PL/SQL, and slugged in some objects. It worked, was fairly quick, and provided a standardized mechanism for querying the system (avoiding EVERYONE recreating what constituted a ‘student’). Unfortunately, maintaining the object relationships to the magic strings of PL/SQL code became tedious (at best) and a change was needed. I’d been evaluating NHibernate for a couple of projects (to replace LINQ-to-SQL) as Entity Framework still doesn’t have sufficient Oracle ODP support out of the box (without paying for another provider). NHibernate seemed ideal, but XML configuration files made me cringe. After a bit of syntax searching, I stumbled upon Fluent NHibernate (FNH)—an API that generates both the configuration and XML mappings and allows a MUCH better refactoring experience. I’m all for ReSharper friendly! As you may have read, I managed to work out an Oracle9 persistance configuration for FNH that has worked out quite well. With that in hand, here’s a summary of how things hooked up. Any feedback would be greatly appreciated as most of my learning has been, in my opinion, the best kind—trial and error. 🙂 Also, I’m using various bits from the S#arp Architecture project; however, not everything—remember, I just need readonly access and the amount of noise for all of the libraries was far more than I needed. I’ve also made several modifications to things such as the PersistentObject and NHibernateSession to include more specific casing required by our organization. Note: I’m not using the AutoMap features of Fluent NHibernate. Why? At this time, our data structure is VERY wonky (vendor controlled data structure, no real rhyme or reason to it, readonly access), so I have a TON of “TheColumnNameIs” and such to map the properties to the oddball data field names. I am going to hit up the AutoMap features on my next project to better understand them. 🙂 [Fact] public void Fetch_A_Report_By_Id() { IReportRepository stubRepository = MockRepository.GenerateStub<IReportRepository>(); Report stubReport = MockRepository.GenerateStub<Report>(); stubReport.Id = 140; stubRepository .Expect(x => x.Get(140)) .Return(stubReport); var report = stubRepository.Get(140); report.Id.ShouldBeEqualTo(140); } A quick, simple test that looks for a repository and grabs a single entity based on Id. At this point, I need an IReportRepository that has a Get method and returns a Report entity. public interface IReportRepository : INHibernateRepository<Report> { } I could simply call NHibernateRepository<Report>() (or INHibernateRepository<T> in this case) directly; however, I prefer to create entity-based repositories. I suppose if I had an entity that would NEVER have it’s own specific methods, then I’d reconsider, but I don’t like to mix access methodologies around. Next, our Repository needs a Report object. Remember, just the basics. public class Report : PersistentObject { } Since our Id field comes from the PersistentObject base class, adding it again is unnecessary. 
With our Test, IReportRepository, and Report entity objects—we’re now flying green. At this point, everything is mocked using Rhino Mocks (v3.5). Now, let’s try an integration test and hit the database using a bit of test data. [Fact] public void Fetch_A_Report_By_Id_From_Database() { var repository = new ReportRepository(); var report = repository.Get(140); report.Id.ShouldBeEqualTo(140); } For this test, we need an implementation of our IReportRepository, ReportRepository: public class ReportRepository : NHibernateRepository<Report> { } The plumbing of the data connection also needs to be hashed out. Fluent NHibernate makes configuring NHibernate very easy. Setting up the Entity Map Generating the mapping files is as simple as using a couple of virtual methods. Id, Map, References, etc. will grow to be your friends. public class ReportMap : ClassMap<Report> { public ReportMap() { CreateMap(); } private void CreateMap() { WithTable(“Reports”); Id(x => x.Id); } } In a few short lines of code, this has told the Fluent NHibernate API to create a mapping file for the Report entity, to use the Reports table, and to assign the data from the Id column to the Id property of the entity. Good stuff. Setting up the Configuration to the Database Replacing the challenging XML syntax is a VERY useful, fluent configuration API. I’m using a modified version of Init that reslugs in a default configuration since we’re packaging these configurations for internal use. public static class ExampleConfiguration { public static Configuration Default { get { return Production; } } public static Configuration Production { get { var config = new Configuration(); MsSqlConfiguration.MsSql2005 .ConnectionString .Database(“ExampleDB”) .Server(“DBServer”) .Username(“DBLogin”) .Password(“DBPassword”) .Create .ConfigureProperties(config); return config; } } public static void Init(ISessionStorage storage) { Init(Default, storage); } public static void Init(Configuration configuration, ISessionStorage storage) { // Any of the maps will do for the assembly mapping. NHibernateSession.Init( typeof(ReportMap).Assembly, configuration, storage); } } Default – Set which of the configuration is the default; called by the overloaded Init method. Production – Set to the production; you could have Test, AnotherTest, or whatever else your environment needs. The actual Configuration is using Fluent NHibernate’s PersistanceConfiguration to generate the configuration. Init(ISessionStorage) – An overloaded Init that calls the default configuration and passes the parameter session storage to NHibernate; this calls … Init(Configuration, ISessionStorage) – Calls the base NHibernateSession.Init. This takes an Assembly, the configuration we’re passing it, and the ISessionStorage. Is there a better way? If there’s a better way to get the Assembly passed along, I’d LOVE to know. I’m using this methodology since I keep all of my maps in the same library, but if one found its way into another library, this would break. Initializing the Configuration With that configured, I put the Init into the constructor for our test (using xUnit, constructors > [SetUp] :P). public RepositoryBuildUpExample_2() { ExampleConfiguration.Init(new SimpleSessionStorage()); } SimpleSessionStorage is a S#arp Architecture snippet for mocking NHibernate; works fantastic. 
In a production application, such as an ASP.NET Web site, you would initialize the session in Session_Start or Application_Start and pass along a storage mechanism to save it in .NET session state (or another means). That’s it—everything’s in place, let’s rerun our test! Excellent. At this point, further requirements flush out: - additional “Get” methods which are added to the ReportRepository, - additional properties (columns) for the entity, - and, of course, the tests that determine what code is needed. For the “finished” example, you can download the [code here].
https://tiredblogger.wordpress.com/2008/12/23/adventures-with-fluent-nhibernate/
CC-MAIN-2018-05
en
refinedweb
wcsncmp - compare part of two wide-character strings #include <wchar.h> int wcsncmp(const wchar_t *ws1, const wchar_t *ws2, size_t n); The wcsncmp() function compares not more than n wide-character codes (wide-character codes that follow a null wide-character code are not compared) from the array pointed to by ws1 to the array pointed to by ws2. The sign of a non-zero return value is determined by the sign of the difference between the values of the first pair of wide-character codes that differ in the objects being compared. Upon successful completion, wcsncmp() returns an integer greater than, equal to or less than 0, if the possibly null-terminated array pointed to by ws1 is greater than, equal to or less than the possibly null-terminated array pointed to by ws2 respectively. No errors are defined. None. None. None. wcscmp(), <wchar.h>. Derived from the MSE working draft.
http://pubs.opengroup.org/onlinepubs/7908799/xsh/wcsncmp.html
CC-MAIN-2018-05
en
refinedweb
Two long-standing, well-known and appreciated Azure core services, Azure Service Bus and Azure Event Hubs, just released a preview of an upcoming generally available Geo-disaster recovery feature. With the help of this feature, no client needs to manage Geo-disaster recovery scenarios anymore via code, but can instead rely on the services doing the metadata synchronization between two independent namespaces. At this time, data replication is not yet supported and will be added at a later point in time (post-general availability). Note that there is a significant difference between a disaster and an outage. A disaster is typically causing a full or partial data center outage, for example, a fire, flood or earthquake. An outage is usually caused by more transient issues and are very short lived. Disasters can take hours and sometimes days to resolve, whereas outages are more in the timeframe of minutes. Currently, both services require that you have a separate monitoring process to automatically recover from disasters. This means that you would need to write a small application to monitor your namespace, and for example, connects every 1-10 minutes. If the connection fails repeatedly the application can trigger a failover. It's also worth noting, that it is possible to have multiple independent monitoring processes. Please find more information in the articles below. To set up disaster recovery, select two namespaces in independent regions, for example, US North Central and US South Central, and define a primary and a secondary namespace, then create a pairing between them. In case of a disaster, trigger the failover. - To learn more about the REST API for Service Bus, including code samples, please visit documentation. - To learn more about the REST API for Event Hubs, including code samples, please visit documentation. - Important information about the difference between a disaster and an outage can also be found in documentation. If you have feedback, please let us know!
https://azure.microsoft.com/en-in/blog/azure-service-bus-and-azure-event-hub-geo-disaster-recovery-preview-released/
CC-MAIN-2018-05
en
refinedweb
This code is quite straight forward, it should explain itself with the help of comments, although it's not very fast :( def generateAbbreviations(word): #Replace characters with '*', collect all the permutations of replacements def permutations(word, mp): ret = [] if not word: return [""] if not mp.has_key(word): nxt = permutations(word[1:], mp) for item in nxt: ret += ['*' + item, word[0] + item] mp[word] = ret return mp[word] #Turn all the '*' into numbers, ie, '*'->'1', '**'->'2', '***'->3 def replace(s): i = j = 0 ret = '' while j <= len(s): if j == len(s) or s[j] != '*': if j > i: ret += '%d' % (j - i) i = j + 1 if j != len(s): ret += s[j] j += 1 return ret return map(replace, permutations(word, {}))
https://discuss.leetcode.com/topic/32117/a-straight-forward-python-solution-using-backtracking
CC-MAIN-2018-05
en
refinedweb
CASL Video Conference Guide

For assistance, contact CASL Tech :

Table of Contents
- Usage Requirements (Hardware Requirements; Physical connectivity of the Polycom HDX 7000 unit; Software Requirements for Mac and Windows)
- System capabilities
- Instructions for Use (For on-site users in CASL: Preparation, Dialling and conducting your conference, Controlling content, Post-conference; For off-site users: Preparation, Dialling and joining a conference; For live web-streaming; Recording and Retrieving Sessions)
- Summary of technical details for experienced users

Usage Requirements

Hardware requirements

The use of CASL Video Conferencing hardware is free for any student or staff member of UCD. However, some of the functions of this hardware require that specific standards are met. If used simply to conduct a video conference meeting with one or more external participants, no additional hardware is required, so compatibility issues will not arise. However, if users wish to incorporate content from their PC/Mac systems (such as PowerPoint presentations), the following requirements must be met by their system:

- PC/Mac systems must be equipped to output a VGA signal to the Polycom unit. On PC, almost all desktop and laptop machines are equipped with a direct VGA output (Figure 1: VGA output port). On Mac machines, an adapter must be used (Figure 2: Mac display adapter). CASL supplies adapters for both Mini DisplayPort and DisplayPort.
- Note: PC/Mac systems must be capable of outputting a screen resolution of at least 1280x1024.

Physical connectivity of the Polycom HDX7000 unit

CASL's video conferencing hardware consists of a Polycom HDX7000 client machine, camera and microphone, coupled with a 50" HD Sharp video display (Figure 3: Polycom HDX7000). This equipment is mounted on a portable rack, with all components correctly connected in place. To use this system, users need only:

1) Connect the Polycom client machine to one of the labelled network ports in the Seminar Room (port 036) or Meeting Room (port 005) (Figure 4: Network connection).
2) Connect the unit to a single power source (an inbuilt power strip will supply power to all components).
3) If you wish to display content from an external computer (e.g. PowerPoint slides from a laptop), connect the Polycom unit to your machine's VGA output using the supplied VGA cable (Figure 5: VGA cable).

Software requirements

As mentioned above, standard video conferences between similar hardware units will not require any additional software/hardware specifications. However, for external parties wishing to use their own PC/Mac hardware to participate in your conference, additional video conferencing software will need to be installed on their systems.

Windows systems:
- All PC hardware running Windows XP comes with a preinstalled video conferencing client called NetMeeting. To access Microsoft NetMeeting, type conf (without quotes) into the Run dialogue box and press return (Figure 6: Run dialogue box). Details on how to use NetMeeting are provided later in the "Dialling and joining a conference" section.
- Users running other versions of Windows (Vista, Windows 7) will need to download an alternative conferencing solution such as PacPhone. PacPhone is free of charge, and supports all versions of the Windows OS.

Mac systems:
- Mac OS does not ship with dedicated video conferencing software, but an excellent free solution can be downloaded in the form of XMeeting.

System Capabilities & Suggested Uses

Remote Meetings: CASL's Polycom video conferencing hardware is ideal as a standalone solution to host and participate in remote meetings. No additional hardware is required, and multiple participants may take part in the meetings.

Live Lecture Streaming: Thanks to HeaNet's online streaming service, CASL's Polycom hardware may be used to host live lectures and talks, which may be streamed in a browser by viewers anywhere in the world. Privacy for your sessions is addressed by providing two virtual rooms for CASL video conference sessions, one of which requires PIN authentication by participants, the other being publicly accessible to all. Instructions on how to access live streams are provided later in the "Instructions for live web-streaming" section.

Lecture & Seminar Recording: In addition to facilitating live streaming for seminars and talks, CASL's Polycom hardware also allows high-quality recording of any video conference session. In this way, lectures may be recorded, along with all associated slides and media, and made available for download to students, colleagues and associates at a later date. Instructions on how to arrange such a session are provided later in the "Recording and Retrieving Sessions" section.

Conference Recording: In addition, CASL's Polycom hardware may also be used to record meetings and video conferences for viewing at a later date. Video streams from all active participants will be recorded and presented along with their respective audio streams. Instructions on how to arrange such a session are provided later in the "Recording and Retrieving Sessions" section.

Instructions for On-Site Use

Preparation

Once all initial power and network connections have been made as detailed in the "Physical connectivity of the Polycom HDX7000 unit" section, users should take the following steps to prepare for their video conference session.

- If using an external computer (such as a laptop with PowerPoint slides), ensure that your computer is sending a video signal from its VGA output port. Normally this automatically occurs once a VGA cable is connected, but you may need to manually enable external monitor output using the appropriate keyboard shortcut (Fn+F8 on most Dell laptops, for example; Figure 7).
- Ensure that the Sharp TV display is powered on. You should see the screen output shown in Figure 8 if powered on correctly, along with a listing of the currently selected source in the top-right of the screen.
- Switch to the HDMI2 source using the source select function on the supplied remote. You may cycle through each available source until you reach HDMI2, or select from a list of available sources.
- Position the Polycom microphone in a suitable location (Figure 9: Microphone). While the microphone will pick up speech from anywhere within the Seminar Room, it is best to place the microphone as close as possible to the intended location of your meeting/talk.
- Power on the Polycom video conferencing hardware using either the power button on the front of the Polycom client, or the power button on the top-right of the Polycom remote (Figure 10: Power on button).
- The Polycom HDX-7000 takes roughly one minute to power up fully, during which time you will see the sequence of screen output shown in Figures 11-13.
- You are now ready to begin your video conference, but first you may wish to adjust the camera angle and zoom using the arrow keys on the Polycom remote (Figure 14).

Dialling & conducting your conference

Once all initial preparations have been made, you may now begin your conference.

- Enter the following IP address into the main text box, as shown.
- Press the green Call button, as shown.
- You will now be brought to a blue menu screen, hosted by the HeaNet video conferencing system.
- Here, you must enter your desired Virtual Room Number, using number 168 for private sessions, or number 226 for publicly-broadcast sessions. To do this, press the # key on the Polycom remote to enable a numerical keypad. Once you have entered your desired Virtual Room Number (again, either 168 or 226), press the # key once more to complete your selection.
- If prompted for a PIN, enter 5977, then press # once again to complete your entry.
- Once you have successfully entered your Virtual Room Number and PIN, the CASL Polycom unit will connect you to your conference and display a progress screen.
- Once successfully connected (this normally takes 1-2 seconds), your conference has begun, and you will most likely be the first participant. No extra steps need to be taken to allow others to join your conference.

Controlling content

- By default, the CASL Polycom system will send a live video feed from its onboard camera. However, if you have connected your laptop (as outlined in the Preparation stage), you may choose to send a live feed from your laptop's VGA output instead (Figure 15). To achieve this, use the Camera button (Figure 16) on the Polycom remote to bring up the source selection menu (Figure 17), then select PC Input to select your laptop's VGA signal.
- To revert back to a live video feed from the Polycom camera, simply repeat the process listed above.
- If your session is being recorded (see the "Recording and Retrieving Sessions" section), both video and laptop signals will be recorded and displayed side-by-side in a single high-resolution video recording.

Post-conference

- Once your video conference session has drawn to a close, you may terminate your connection by pressing the red Hang Up key on the Polycom remote.
- To power down the video conferencing equipment, hold the Power button on the top-right of the Polycom remote control.
- To power down the Sharp TV, press the Power button on the top-right of the Sharp remote control.

Instructions for Off-Site Use

To access an active video conference taking place in CASL from an external location, participants must use dedicated video-conferencing software on their OS platform of choice. The following is general advice and steps to follow to ensure the highest possible quality of connection for your conference.

Recommended video conferencing client software

The following software packages are available free of charge, and enable any Windows or Mac system to connect to CASL video conferences:

- Windows XP: Microsoft NetMeeting is bundled with Windows XP, and can be accessed without any additional installation steps by following the guide in the "Software requirements" section above.
- Windows Vista/Windows 7: Users running these versions of Windows will need to download an alternative conferencing solution such as PacPhone.
- Mac OS X: Mac OS does not ship with dedicated video conferencing software, but an excellent free solution can be downloaded in the form of XMeeting.

Preparation

- Ensure that your computer is connected to a fast, reliable network.
- Users from other departments within UCD should ensure that they do not use the WaveLAN wireless network to connect, but rather the wired Ethernet infrastructure.
- Ensure that any firewall software your system may be running allows traffic from your video conferencing software of choice.
- Ensure that other chat applications such as Skype or Windows Messenger are not running, since this may cause conflicts with resources such as your webcam or microphone.
- If you are using a camera device (integrated or external) such as a webcam, ensure that it is correctly connected, enabled and that all necessary driver software is installed.
- Ensure that your camera device is enabled and calibrated within your conferencing software of choice.
- Ensure that your microphone device (integrated or external) is not muted, and that it is adjusted to an appropriate sensitivity level.
- To avoid unwanted audio feedback, the use of headphones is strongly recommended.

Dialling and joining a conference

Use the following details in your software package of choice to connect to CASL video conferences:

- Enter the dial-in address in your software's dialling box or keypad.
- When prompted, you will need to enter CASL's Virtual Room Number using your software's user-input box or keypad: for private sessions, enter 168, then enter # (without quotes); for publicly-broadcast sessions, enter 226, then enter # (without quotes).
- If prompted for a PIN, enter 5977, then enter # (without quotes).

Instructions for live web-streaming

CASL's Polycom hardware automatically broadcasts live lectures and talks, which may be streamed in a browser by viewers anywhere in the world. Privacy for your sessions is addressed by providing two virtual rooms for CASL video conference sessions, one of which requires PIN authentication by participants, the other being publicly accessible to all. To view a talk, follow these steps:

For private sessions:
- Visit the streaming page.
- Enter 168 in the Conference ID field.
- Enter 5977 in the PIN field.
- Select QuickTime 768k.
- Click "Stream this Conference".

For public sessions:
- Visit the streaming page.
- Enter 226 in the Conference ID field.
- Leave the PIN field blank.
- Select QuickTime 768k.
- Click "Stream this Conference".

If you experience playback issues while viewing a stream, simply pause then resume the video stream.

Recording and Retrieving Sessions

CASL's Polycom hardware may also be used to record meetings and video conferences for viewing at a later date. Video streams from all active participants will be recorded and presented along with their respective audio streams. If you wish to have your session recorded, follow these steps:

- Contact CASL Tech with the start time, finish time and date of your session, along with your chosen Virtual Room Number (168 for private conferences, 226 for public conferences).
- The recording will automatically begin at your specified times, and no further steps are required on the part of the conference participant.
- After 24 hours, your recording will be available for download in a number of formats from a web-browser interface.

Summary of technical details for experienced users

Virtual Room Numbers: 168 for private sessions; 226 for public sessions.

- To dial using an IP address: dial in, then browse for your desired room, or press #, enter the Virtual Room Number followed by the # key. Use PIN 5977.
- To dial using GDS: dial in and use PIN 5977.
- To dial in using an external browser on any machine: go to the streaming page, enter the Conference ID (the Virtual Room Number), enter PIN 5977, and select the QuickTime 768k option.
- To dial in using a conferencing application such as NetMeeting: dial in, press #, then enter the Virtual Room Number followed by the # key again. You may use the keypad, or these standard keyboard combinations: Ctrl+Shift+3 for #, and Ctrl+<any number> for any number. Use PIN 5977, followed by the # key.
- To dial using ISDN VC: call in, wait for the "Please dial your party's extension" prompt, then dial the GDS number, then the # key.
http://docplayer.net/5952473-For-assistance-contact-casl-tech-casltech-ucd-ie-usage-requirements-1-system-capabilities-5-instructions-for-use-6.html
CC-MAIN-2018-05
en
refinedweb
This is the eleventh lesson in a series introducing 10-year-olds to programming through Minecraft. Learn more here. The mod is a suggestion from my class :)

Goal

Fill the crafting table with ...
- dirt and get 64 diamonds
- sand and get 64 emeralds
- clay (not blocks) and get 64 obsidian

Relevant Classes

The class corresponding to items is called ItemStack and it's located in the net.minecraft.item package. We also need to use the GameRegistry class to add our new recipe to the game. Specifically, there is a static method called addShapelessRecipe:

    public static void addShapelessRecipe(ItemStack output, Object... params)

What does the ... mean? It signifies we can pass in a list of objects (items) that the user needs to put on the crafting table in order to receive the output. Because we're filling the entire table, the recipe is called shapeless (it doesn't matter which item goes in which square).

How do we do it?

We need to add the new recipe to the load method.

    ItemStack diamonds = new ItemStack(Item.diamond, 64);
    ItemStack dirt = new ItemStack(Block.dirt);

    GameRegistry.addShapelessRecipe(
        diamonds,
        dirt, dirt, dirt,
        dirt, dirt, dirt,
        dirt, dirt, dirt);

We'll also need to import net.minecraft.block.Block, net.minecraft.item.Item, and net.minecraft.item.ItemStack. Eclipse makes this easy for us: if you hover over a word with a red squiggly line, you will get a popup menu with "quick fixes". Most often, the required import will be the top one and you can select it.

Now if we run the game (green play button or Ctrl+F11) and play for a little bit, we should get this:

The other two recipes are left as an exercise to the reader :)

Extra Credit

In case you want to test out your recipes without having to actually collect all the required items, you can give your player inventory items for free :)

- Add a new class (e.g. ConnectionHandler) that implements IConnectionHandler.
- In the override for playerLoggedIn, add the following code:

      EntityPlayerMP mp = (EntityPlayerMP)player;
      mp.inventory.addItemStackToInventory(
          new ItemStack(...));

- Finally, add the following line to your mod's load event:

      NetworkRegistry.instance().registerConnectionHandler(
          new ConnectionHandler());
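For example, to hand out the raw material for the first recipe on login, the placeholder ItemStack(...) above could be filled in like this (my example, not from the lesson; it uses only the classes already imported above, and assumes this Minecraft version's ItemStack has a (Block, int) constructor like the (Item, int) one shown earlier):

    // Give the player a full stack of dirt when they log in,
    // enough to test the dirt -> diamonds recipe a few times.
    EntityPlayerMP mp = (EntityPlayerMP)player;
    mp.inventory.addItemStackToInventory(
        new ItemStack(Block.dirt, 64));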
http://www.jedidja.ca/mod-something-for-nothing/
CC-MAIN-2018-05
en
refinedweb
Mr C

- I was wondering, since UDP is connection-less how do you store new connections? I haven't found a clear example of this. With TCP there is a connection made which you could store in a struct and/or put into a list easily.

- Thank you both for your replies, I will add the fixes and see what I end up with. Although TCP might be "easier" for this, my Prof. wants me to use UDP. Sadly I am at his mercy :) Thanks again.

C#/XNA socket programming question
Mr C posted a topic in Networking and Multiplayer

Hi all. I didn't want to make a post about this but google has proven rather unhelpful. I am in school for Game Programming and I am working on an XNA based tech demo for a class. I am trying to teach myself UDP socket programming with the goal of eventually making a multiplayer game. Right now my grasp on socket programming is rudimentary at best, though through example I have written a simple chat server/client. I would love to be able to use Lidgren for this but as it is a tech demo I need to write all the code myself, which I am fine with.

The Message: NONE on any key press, which at least means it is getting to my server.

    KeyboardState keyboardState = new KeyboardState();

    if (Keyboard.GetState().GetPressedKeys().Length > 0)
    {
        // Default movement is none
        if (!keyboardState.IsKeyDown(Keys.W) ||
            !keyboardState.IsKeyDown(Keys.A) ||
            !keyboardState.IsKeyDown(Keys.S) ||
            !keyboardState.IsKeyDown(Keys.D))
        {
            MoveDir = MoveDirection.NONE;
        }
        if (keyboardState.IsKeyDown(Keys.W))
        {
            MoveDir = MoveDirection.UP;
        }
        if (keyboardState.IsKeyDown(Keys.A))
        {
            MoveDir = MoveDirection.LEFT;
        }
        if (keyboardState.IsKeyDown(Keys.S))
        {
            MoveDir = MoveDirection.DOWN;
        }
        if (keyboardState.IsKeyDown(Keys.D))
        {
            MoveDir = MoveDirection.RIGHT;
        }

        byte[] send_buffer = Encoding.ASCII.GetBytes(MoveDir.ToString());

        try
        {
            sending_socket.SendTo(send_buffer, sending_end_point);
        }
        catch (Exception send_exception)
        {
            exception_thrown = true;
            Console.WriteLine(" Exception {0}", send_exception.Message);
        }

        if (exception_thrown == false)
        {
            Console.WriteLine("Message has been sent to the broadcast address");
        }
        else
        {
            exception_thrown = false;
            Console.WriteLine("The exception indicates the message was not sent.");
        }
    }

That is the chunk of relevant code inside my XNA update function. MoveDir is declared at the start of my file as an instance of MoveDirection.

    // Move direction enumerator
    enum MoveDirection
    {
        UP,
        DOWN,
        LEFT,
        RIGHT,
        NONE
    }

is my enum. I don't think it matters but my server code is:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    namespace Server
    {
        public class UDPListener
        {
            private const int listenPort = 11000;

            static int Main(string[] args)
            {
                bool done = false;

                UdpClient listener = new UdpClient(listenPort);
                IPEndPoint groupEP = new IPEndPoint(IPAddress.Any, listenPort);

                string recieved_userName;
                string received_data;
                byte[] receive_name_array;
                byte[] receive_byte_array;

                receive_name_array = listener.Receive(ref groupEP);

                try
                {
                    while (!done)
                    {
                        receive_byte_array = listener.Receive(ref groupEP);
                        Console.WriteLine("Received a broadcast from {0}", groupEP.ToString());
                        received_data = Encoding.ASCII.GetString(receive_byte_array, 0, receive_byte_array.Length);
                        Console.Write("Message: {0}\n", received_data);
                    }
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.ToString());
                }

                listener.Close();
                return 0;
            }
        }
    }

Anyways, like I said I am new to this and trying to figure it all out. My level of experience is a college level data structures course. XNA does not support "true keyboard input" ([url][/url]) and I know there HAS to be a simpler way to get this done besides what is suggested in that post. I don't need it to be pretty, just to work. If this is a dumb question, I apologize for wasting people's time. Thanks for any help.

Edit: I was unsure if I should post this here or in the networking section; since it has XNA specific stuff I chose here, please move if it is out of place.

I feel really dumb, but I am stuck on this (changing direction of a sprite)
Mr C replied to Mr C's topic in For Beginners

Turns out my base issue was having my variables for certain things in the wrong place. Now it works, at least as far as back and forth bouncing (other issues present but I can deal). I had re-written the code with float v_x = 5; float v_y = 0; but had put them before the part dealing with the movement, thus putting them inside of a loop, so they were constantly being made and redefined. I moved them to outside of the loop and things work now. I did learn a lot doing this though, and thanks for pointing me in the right direction as far as the math itself.

I feel really dumb, but I am stuck on this (changing direction of a sprite)
Mr C replied to Mr C's topic in For Beginners

Quote: Original post by Litheon
I think you are looking for 2D Vector Reflection. :)

If only I understood what any of that meant...

I feel really dumb, but I am stuck on this (changing direction of a sprite)
Mr C posted a topic in For Beginners

Hey, I am currently working on making a pong clone (for learning purposes). I have it to the point where both paddles and the ball are onscreen, both paddles can be controlled, and there are checks in place to make sure nothing goes off screen. However when I added the ball, I ran into problems. I can get it to move at the start fine (as the code below shows), and it stops where it's supposed to (well, close enough anyways). The problem is I cannot for the life of me figure out how to get it to change direction. I know it's going to be something stupid but I have been working on this for awhile now and I feel I have gotten to the point where I should ask for help. Note: This project is being done with SFML in Code::Blocks, if that information makes any difference.

    #include <SFML/Graphics.hpp>
    #include <iostream>

    int main()
    {
        // Create the main rendering window
        sf::RenderWindow App(sf::VideoMode(800, 600, 32), "SFML Pong");
        App.SetFramerateLimit(60); // Limits framerate

        // Next 3 lines display window size in console
        std::cout << App.GetHeight();
        std::cout << "\n";
        std::cout << App.GetWidth();

        sf::Image bluePaddle;
        sf::Image redPaddle;
        sf::Image ball;

        // next 3 if's load images and display an error message if there is a problem
        if (!bluePaddle.LoadFromFile("bluePaddle.png"))
        {
            std::cout << "Error, bluePaddle.png failed to load";
        }
        if (!redPaddle.LoadFromFile("redPaddle.png"))
        {
            std::cout << "Error, redPaddle.png failed to load";
        }
        if (!ball.LoadFromFile("ball.png"))
        {
            std::cout << "Error, ball.png failed to load";
        }

        // set blue paddle sprite and values
        sf::Sprite bluePaddleSprite(bluePaddle);
        bluePaddleSprite.SetY(200);

        // set red paddle sprite and values
        sf::Sprite redPaddleSprite(redPaddle);
        redPaddleSprite.SetX(784);
        redPaddleSprite.SetY(200);

        // set the ball's sprite and values
        sf::Sprite ballSprite(ball);
        ballSprite.SetX(250);
        ballSprite.SetY(250);

        // Start game loop
        while (App.IsOpened())
        {
            // Process events
            sf::Event Event;
            while (App.GetEvent(Event))
            {
                // Close window : exit
                if (Event.Type == sf::Event::Closed)
                    App.Close();

                // A key has been pressed
                if (Event.Type == sf::Event::KeyPressed)
                {
                    // Escape key : exit
                    if (Event.Key.Code == sf::Key::Escape)
                        App.Close();
                }
            }

            // Clear the screen
            App.Clear(sf::Color(0, 0, 0));

            // next 2 if's for bluePaddle's border guards (collision detection, makes sure it stays in bounds)
            if (bluePaddleSprite.GetPosition().y < 0)
            {
                bluePaddleSprite.SetY(0.5);
            }
            if (bluePaddleSprite.GetPosition().y > App.GetHeight() - bluePaddle.GetHeight())
            {
                bluePaddleSprite.SetY(455);
            }

            // next 2 if's are for redPaddle's border guards (same as blue)
            if (redPaddleSprite.GetPosition().y < 0)
            {
                redPaddleSprite.SetY(0.5);
            }
            if (redPaddleSprite.GetPosition().y > App.GetHeight() - redPaddle.GetHeight())
            {
                redPaddleSprite.SetY(455);
            }

            //-> start of code dealing with ball. This bit will deal with ball movement/collision/etc);
            }
            //<- end of all the work with ball

            // this chunk provides the code for player control (movement)
            if (App.GetInput().IsKeyDown(sf::Key::W))
            {
                bluePaddleSprite.Move(0, 150 * App.GetFrameTime() * -1);
            }
            else if (App.GetInput().IsKeyDown(sf::Key::S))
            {
                bluePaddleSprite.Move(0, 150 * App.GetFrameTime() * 1);
            }

            // this bit is a tester for red before I put in AI, to make sure movement works. (tested working)
            if (App.GetInput().IsKeyDown(sf::Key::Up))
            {
                redPaddleSprite.Move(0, 150 * App.GetFrameTime() * -1);
            }
            else if (App.GetInput().IsKeyDown(sf::Key::Down))
            {
                redPaddleSprite.Move(0, 150 * App.GetFrameTime() * 1);
            }

            // Draws the blue paddle
            App.Draw(bluePaddleSprite);
            // Draws the red paddle
            App.Draw(redPaddleSprite);
            // Draws the ball
            App.Draw(ballSprite);

            // Display window contents on screen
            App.Display();
        }

        return EXIT_SUCCESS;
    }

Is my full code. The piece of code in question is:

    );
    }

There are a few other bugs, such as that although the ball stops, it stops at the point even if the paddle is not there (leading me to think I made it so it's checking the y value of the area the paddle is on, instead of just the paddle). However my biggest problem right now is just getting the ball to change directions on contact. Thank you, and sorry for the trouble.

A good sprite resource I found
Mr C replied to Mr C's topic in 2D and 3D Art

Quote: Original post by OrangyTang
I'm not sure I understand your definition of "free to use" sprites. These all seem to be ripped from various commercial games of some kind.

Sorry, I realized they were rips :( But for learning/hobby they should be fine. Apologies for not reading it carefully. I was thinking these would be good for somebody learning who does not have an artist/ability to make sprites. I would think for a commercial release you would want your own images anyways.

- Quote: Original post by mongrol
For.

So basically have both images tied to one call and then pick which one you want at the time?

- JTippetts, thank you very much for taking the time to provide such a detailed explanation. Currently I am in the process of making a spell class to learn how to attach objects to images. I plan to build this code up slowly until I have a working mage that can walk/cast correctly and have the spell "shoot" forward and perhaps add objects in the world to learn collision detection. You lost me a bit on the

    void Object::preRender()
    {
        Animation *anim = animationset->getAnimation(curanimation);
        sf::IntRect subrect = anim->getSubRect(curframe);
        sf::Image *image = anim->getImage(curframe);
        sprite.SetSubRect(subrect);
        sprite.SetImage(image);
    }

chunk, but that is probably due to me not understanding pointers as well as I would like. Thanks again.

A good sprite resource I found
Mr C posted a topic in 2D and 3D Art

The topic above seems to have been archived, or I would have posted this there. is a site that has a great collection of free to use sprites. is an example, and when you save the image there is no background. Hope this helps some people.

Better way to swap images in SFML?
Mr C posted a topic in For Beginners

Hey all, I am working on a bit of code and I was wondering if any of you knew a better way to change one image into another than the way I am using. I looked on the SFML site and could not find anything...

    #include <SFML/System.hpp>
    #include <SFML/Window.hpp>
    #include <SFML/Graphics.hpp>
    #include <iostream>
    #include <string>

    int main()
    {
        //sf::Clock gameClock;
        //float timeElapsed = 0;

        try
        {
            sf::RenderWindow App(sf::VideoMode(800, 600, 32), "SFML Window");

            sf::Image mageReady;
            sf::Image mageCast;
            sf::Image Fire_Ball;

            if (!mageReady.LoadFromFile("mageReady.png"))
            {
                throw std::string("Your image failed to load");
            }
            if (!mageCast.LoadFromFile("mageCast.png"))
            {
                throw std::string("Your image failed to load");
            }
            if (!Fire_Ball.LoadFromFile("fireball.png"))
            {
                throw std::string("Your image failed to load");
            }

            sf::Sprite mageReadySprite;
            mageReadySprite.SetImage(mageReady);
            mageReadySprite.SetCenter(38, 38);

            sf::Sprite mageCastSprite;
            mageCastSprite.SetImage(mageCast);
            mageCastSprite.SetCenter(38, 38);

            sf::Sprite Fire_Ball_Sprite;
            Fire_Ball_Sprite.SetImage(Fire_Ball);

            bool isNotCasting = true;
            bool isCasting = false;
            bool showFireBall = false;
            bool Running = true;

            while (Running)
            {
                //timeElapsed = gameClock.GetElapsedTime();
                sf::Event myEvent;
                while (App.GetEvent(myEvent))
                {
                    // Window closed
                    if (myEvent.Type == sf::Event::Closed)
                        App.Close();

                    // Escape key pressed
                    if ((myEvent.Type == sf::Event::KeyPressed) && (myEvent.Key.Code == sf::Key::Escape))
                        App.Close();

                    // show mage cast/spell.
                    if ((myEvent.Type == sf::Event::KeyPressed) && (myEvent.Key.Code == sf::Key::C))
                    {
                        isCasting = true;
                        showFireBall = true;
                        mageCastSprite.SetX(mageReadySprite.GetPosition().x);
                        mageCastSprite.SetY(mageReadySprite.GetPosition().y);
                        Fire_Ball_Sprite.SetX(mageCastSprite.GetPosition().x + 10);
                        Fire_Ball_Sprite.SetY(mageCastSprite.GetPosition().y - 60);
                    }
                }

                // Clear the screen (fill it with black color)
                App.Clear(sf::Color(255, 255, 255));

                // draw mage sprite
                if (isNotCasting == true)
                {
                    App.Draw(mageReadySprite);
                }
                if (isNotCasting == false)
                {
                    isNotCasting = false;
                }

                // draw mage cast
                if (isCasting == true && showFireBall == true)
                {
                    isNotCasting = false;
                    App.Draw(mageCastSprite);
                    App.Draw(Fire_Ball_Sprite);
                }

                // Get elapsed time
                float ElapsedTime = App.GetFrameTime();

                // Move the sprite
                if (App.GetInput().IsKeyDown(sf::Key::Left))
                    mageReadySprite.Move(-100 * ElapsedTime, 0);
                if (App.GetInput().IsKeyDown(sf::Key::Right))
                    mageReadySprite.Move(100 * ElapsedTime, 0);
                if (App.GetInput().IsKeyDown(sf::Key::Up))
                    mageReadySprite.Move(0, -100 * ElapsedTime);
                if (App.GetInput().IsKeyDown(sf::Key::Down))
                    mageReadySprite.Move(0, 100 * ElapsedTime);

                if (isCasting == true)
                {
                    if (App.GetInput().IsKeyDown(sf::Key::Left))
                        mageCastSprite.Move(-100 * ElapsedTime, 0);
                    if (App.GetInput().IsKeyDown(sf::Key::Right))
                        mageCastSprite.Move(100 * ElapsedTime, 0);
                    if (App.GetInput().IsKeyDown(sf::Key::Up))
                        mageCastSprite.Move(0, -100 * ElapsedTime);
                    if (App.GetInput().IsKeyDown(sf::Key::Down))
                        mageCastSprite.Move(0, 100 * ElapsedTime);
                }

                App.Display();
            }
        }
        catch (std::string message)
        {
            std::cout << "you fail because " << message;
        }

        return 0;
    }

Right now I have it so that when the key is pressed it puts the mageCast sprite where the mageReady sprite was, and hides the mageReady. I am looking for either a way to temporarily (or permanently) remove/delete an image when I want. The goal of this little program is to eventually have the mage walk, cast and shoot (show it casting, have the spell move X amount and then delete itself), then go back to the first image (mageReady). Right now I can move with the first image, and "C" will swap it into the second one and place the fireball where I want it. I feel that there must be a better way to do what I am trying to do...

Thanks all.

Edit: Does this belong in the Alternative Game Libraries forum? I am not sure, please move if it is out of place.

Battlefield Bad Company 2 Questions
Mr C replied to John Stuart's topic in For Beginners

If you look closely you will find that the buildings in BFBC 2 are "segmented", that is they don't blow up dynamically (though they do a good job faking it). My theory is when enough damage is dealt it calls a function to play an animation for that part of the building blowing up. I could be wrong but when you die in a building from it collapsing it is not like you are struck by different debris, you just die as it falls around you. So it plays the animation that shows chunks of it exploding, then deletes those objects from the initial structure. Think of a puzzle that makes an image and removing a piece of that puzzle. In the Afro Samurai game they did it so when you cut off an arm it was removed from the body object. That's my thoughts on it anyways.

- Quote: Original post by _fastcall
I had originally assumed you had "using namespace std", and as you posted, my assumption is incorrect. Try this:

    std::cin.ignore( std::numeric_limits<std::streamsize>::max(), '\n' );

(Or alternately: using std::numeric_limits; using std::streamsize)

Jesus Fish, I love you.

- Quote: Original post by _fastcall
Quote: Original post by Mr C
I tried using cin.ignore(numeric_limits<streamsize>::max(),'\n'); but it says numeric_limits not declared in scope.

Oops, I should have mentioned that you need to include <limits> in order to use std::numeric_limits.

EDIT: Yeah, you need to clear out the extra input left when the user enters the integer before asking for a line of text. (Entering "1Hello world!" works as expected; the integer is read, then the remainder is read by getline, and saved to the file.)

Well, I tried doing #include <limits> but that was a no go. main.cpp|26|error: `numeric_limits' was not declared in this scope| as well as streamsize and max having the same issue... At this point my issue is I can't even enter a string... before I had it so I could enter "The brown dog jumped" but only "The" would be saved...

- my code is:

    #include <iostream>
    #include <string>
    #include <fstream>

    using std::cin;
    using std::cout;
    using std::string;
    using std::ofstream;
    using std::ifstream;
    using std::istreambuf_iterator;
    using std::getline;

    int main()
    {
        int choice;

        cout << "Hello user! I was told to greet you in a nice and polite way! Lets be friend?\n";
        cout << "So, do you want to create a new file or load the last one?\n";
        cout << "1)new 2)load: ";
        cin >> choice;

        if (choice == 1)
        {
            string text;
            cout << "Enter the text: ";
            getline(cin, text);

            ofstream myOutFileStream("save1.txt");
            myOutFileStream << text;
            myOutFileStream.close();
        }

        if (choice == 2)
        {
            ifstream myInFileStream("save1.txt");
            string save1((istreambuf_iterator<char>(myInFileStream)), istreambuf_iterator<char>());
            cout << save1;
            myInFileStream.close();
        }
    }

It compiles and runs, but it will not even let me enter the string. I feel like I am missing something obvious/doing something dumb... but I don't see it.

Edit: As a side note, if I enter text in the file before hand it reads it fine, even if there are multiple words...
https://www.gamedev.net/profile/114697-mr-c/?tab=topics
CC-MAIN-2018-05
en
refinedweb
Problem

Deserializing a JSON enumerated string to a different C# enumerated type, handled here using the JsonConverter class. Whilst writing the enum back out to JSON wasn't a concern for me, I have also included it in the example below for posterity's sake.

Example

Required nuget package:

    install-package Newtonsoft.Json

Incoming JSON:

    {
        "writing_style": "snake_case"
    }

C# Enum:

    public enum WritingStyle
    {
        SnakeCase,
        CamelCase,
        Unknown
    }

C# Entity

Note the two attributes: JsonProperty pointing the incoming JSON attribute to our enum and JsonConverter pointing to our type converter.

    public sealed class Dto
    {
        [JsonProperty("writing_style", Required = Required.Always)]
        [JsonConverter(typeof(WritingStyleTypeConverter))]
        public WritingStyle DtoWritingStyle { get; set; }
    }

JSON Converter

    public sealed class WritingStyleTypeConverter : JsonConverter
    {
        public override bool CanConvert(Type objectType)
        {
            return objectType == typeof(string);
        }

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            var value = (string)reader.Value;

            switch (value)
            {
                case "snake_case":
                    return WritingStyle.SnakeCase;
                case "camel_case":
                    return WritingStyle.CamelCase;
                default:
                    return WritingStyle.Unknown;
            }
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        {
            var style = (WritingStyle)value;

            switch (style)
            {
                case WritingStyle.SnakeCase:
                    writer.WriteValue("snake_case");
                    break;
                case WritingStyle.CamelCase:
                    writer.WriteValue("camel_case");
                    break;
                default:
                    writer.WriteValue("unknown");
                    break;
            }
        }
    }

It's all pretty straightforward, but for more information on the JsonConverter class, see the official documentation.
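A quick usage sketch (mine, not from the original article) showing the converter in action through Json.NET's standard entry points:

    // Deserialize: "snake_case" in the JSON becomes WritingStyle.SnakeCase.
    var dto = JsonConvert.DeserializeObject<Dto>(
        "{ \"writing_style\": \"snake_case\" }");
    Console.WriteLine(dto.DtoWritingStyle);   // SnakeCase

    // Serialize: the converter writes the enum back out as "snake_case".
    Console.WriteLine(JsonConvert.SerializeObject(dto));
    // {"writing_style":"snake_case"}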
https://www.scottbrady91.com/C-Sharp/Deserializing-a-JSON-Enumerated-String-to-a-Different-C-Sharp-Enumerated-Type
CC-MAIN-2018-05
en
refinedweb
MEMAllocFromAllocator

Syntax

    #include <cafe/mem.h>

    void* MEMAllocFromAllocator( MEMAllocator* pAllocator, u32 size );

Return Values

When the memory block is allocated, the start address of this memory block is returned. When memory cannot be allocated, NULL is returned.

Description

Allocates a memory block from the allocator. The implementation depends on the setting of the allocator and the memory manager related to the allocator.

Revision History

2013-05-08 Automated cleanup pass.
2010-11-01 Initial version.
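A short usage sketch (not from the SDK page; it assumes the allocator has already been initialised against a heap, and that the usual matching MEMFreeToAllocator call exists in this SDK):

    /* Request a 256-byte block from a previously initialised allocator. */
    void* buf = MEMAllocFromAllocator(&allocator, 256);
    if (buf == NULL)
    {
        /* Allocation failed; fall back or report out-of-memory. */
    }
    else
    {
        /* ... use the block ... */
        MEMFreeToAllocator(&allocator, buf);  /* assumed counterpart API */
    }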
http://anus.trade/wiiu/personalshit/wiiusdkdocs/fuckyoudontguessmylinks/actuallykillyourself/AA3395599559ASDLG/os/Mem/Allocator/MEMAllocFromAllocator.html
CC-MAIN-2018-05
en
refinedweb
This is the mail archive of the [email protected] mailing list for the glibc project.

> I'd suggest 2.16 release as that deadline (by which we declare non-updated
> architectures to be unmaintained, whether or not they have someone
> nominally listed as maintainer).

Let's put a firm date on it. We'd like to have release procedures more coherent by 2.16, but we have yet to actually do anything about making them so. For the last few years, glibc releases have coincided with Fedora releases, which are about every six months. This thread is not the place to really discuss the release cycle plans, but we can use this as a guide for setting this deadline for arch maintainers. I think we should be aiming for cutting our release branches by the time Fedora goes to beta. According to that will be on April 3rd (it's not uncommon for these dates to slip by a week), with the "change deadline" on March 20. Let's say April 1 is the deadline for crt[in].S.

> 	* csu/Makefile: Support crti.S and crtn.S in source directory.

It's become somewhat common to use vague log entries like this. But IMHO the standard remains a log entry that mentions the variables and targets added, removed, or changed.

> +ifneq (,$(wildcard $(sysdirs:%=%/crti.S)))
> +
> +vpath crti.S $(sysdirs)
> +vpath crtn.S $(sysdirs)

Why are these necessary? The usual sysd-rules rules ought to find these just like all other sources are found.

> +CFLAGS-crti.S = -g0
> +CFLAGS-crtn.S = -g0

I'm not convinced these are desirable. They made sense for the C code when it was being sliced up. But now these are just assembly sources like all the others, so why shouldn't they have normal source line information? CFI also seems worthwhile, though that could come later since we didn't have it before.

> +CFLAGS-pt-crti.S = -g0
> +CFLAGS-pt-crtn.S = -g0

Same here.

> +/* Special .init and .fini section support for x86. NPTL version.

s/for x86. NPTL version./for libpthread./

> + Copyright (C) 1995-1997,2000-2002,2012 Free Software Foundation, Inc.

This is actually a wholly new file at this point. Just use 2012.

> diff --git a/nptl/pt-crtn.S b/nptl/pt-crtn.S
> new file mode 100644
> index 0000000..b7075da
> --- /dev/null
> +++ b/nptl/pt-crtn.S
> @@ -0,0 +1 @@
> +#include <crtn.S>

Then why do we need it at all? We can just use the vanilla crtn.o for libpthread too.

> +#ifndef PREINIT_FUNCTION
> +#define PREINIT_FUNCTION __gmon_start__
> +#endif
> +
> +#ifndef PREINIT_FUNCTION_WEAK
> +#define PREINIT_FUNCTION_WEAK 1
> +#endif

"# define" inside "#if..."

> + .p2align 2,,3

Why isn't just ".p2align 2" sufficient?

> + .section .init

You used full section details in crti.S. Either they're necessary or they're not. Be consistent. (Actually, they're not strictly necessary, but they're still worthwhile IMHO.)

Thanks,
Roland
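For context, the "# define" remark refers to the GNU convention of indenting preprocessor directives nested inside an #if by keeping the hash in column zero and putting a space before the keyword; the style Roland is asking for would look like this (illustration only, not part of the archived patch):

    #ifndef PREINIT_FUNCTION
    # define PREINIT_FUNCTION __gmon_start__
    #endif

    #ifndef PREINIT_FUNCTION_WEAK
    # define PREINIT_FUNCTION_WEAK 1
    #endif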
http://sourceware.org/ml/libc-alpha/2012-02/msg00134.html
CC-MAIN-2015-22
en
refinedweb
26 April 2012 08:27 [Source: ICIS news]

By Nurluqman Suratman

Despite the general weakness in the global economy, demand for petrochemicals is expected to remain resilient and should help buoy up product prices going forward, said Nat Panassutrakorn, an analyst with KGI Securities.

In the first quarter, significantly weaker petrochemical spreads weighed on SCG's earnings, causing its chemicals business' earnings before interest, tax, depreciation and amortisation (EBITDA) to slump by 82% year on year to Thai baht (Bt) 894m ($28.9m). The industrial conglomerate reported a 35% year-on-year fall in March-quarter net profit to Bt5.97bn, with overall EBITDA declining by 24% to Bt10.3bn despite an 11% growth in sales to Bt102.9bn. Petrochemicals accounted for 21% of SCG's profit in the first quarter.

"[First-quarter] earnings fell ... as chemical margins fell to their lowest as a result of excess global supply and slower demand. Equity income also fell substantially to Bt344m (vs Bt3.0bn in 1Q11) due to weaker margins at its PTA [purified terephthalic acid] business," said Naphat Chantaraserekul, an analyst at brokerage DBS Vickers Securities.

PTA spreads averaged $120/tonne (€91/tonne) in the March quarter of 2012, sharply lower than the $350/tonne average in the same period last year, he said.

SCG said on Wednesday that average naphtha prices increased by $132/tonne quarter on quarter and by $105/tonne year on year to $1,021/tonne in the January-March period, pulled up by higher crude oil prices. Ethylene prices also increased due to rising feedstock prices and concern about the availability of the olefin from the Middle East amid heightened tensions between the West and Iran. In the first quarter, the average ethylene price stood at $1,251/tonne, up $190/tonne quarter on quarter, and up $12/tonne year on year. Propylene's average price, at $1,281/tonne, decreased by $1/tonne quarter on quarter and was down $98/tonne year on year, according to SCG.

"[SCG's] chemical business should recover in the second quarter of 2012 with improving spreads for most products. But recovery will be mild due to still weak macroeconomic conditions in Europe and the eurozone debt crisis."

The company's cement business is also expected to perform strongly in the second half of the year, with a host of commercial, residential and infrastructure projects under construction in Thailand, according to the analysts. SCG's cement sales in the March quarter grew 5% year on year to 7.6m tonnes. The firm's cement business contributed 39% to the firm's overall profit in the first quarter.

For the full year of 2012, SCG's net profit is expected to grow to around Bt31bn, from the Bt27.3bn last year, analysts said.

"Siam Cement will benefit from Thailand's improving economic conditions, as 59% of its first-quarter earnings were derived from the construction and related sectors," DBS' Chantaraserekul said. "Weak chemical spreads will cap upside in the near term, but SCC offers a long-term value proposition," he added.

Beyond 2012, Panassutrakorn of KGI said that demand for petrochemicals, particularly from the automotive sector, should pick up and lead margins out of a trough. SCG's focus on growing its high value-added (HVA) product offerings should help boost earnings going forward, analysts said. The company hopes to increase the share of HVA products to half of group sales by 2015, from 34% currently, they said.

Its move into the southeast-Asian market, where there is strong demand for its core businesses such as plastics, could also help drive its long-term earnings growth, they said.

"SCC is still pursuing mergers and acquisitions in ASEAN markets. It plans to spend up to Bt40bn in 2012 and it has Bt45bn cash on hand," Chantaraserekul of DBS said.

HVA products' margins are 5-10% higher than normal products, and they are spread across all of SCG's product segments, according to Chantaraserekul.

($1 = Bt30.9)
http://www.icis.com/Articles/2012/04/26/9553805/better-chemical-spreads-to-aid-thai-scg-h2-earnings-analysts.html
CC-MAIN-2015-22
en
refinedweb
7 – Moving to Microsoft Azure Table Storage This chapter describes Adatum’s final step in its migration process to the cloud for the aExpense application. It discusses the advantages of using Microsoft Azure storage instead of a relational database for expense items, the design of a suitable schema for storage, and how the developers at Adatum adapted the data access functions of the application to use Azure storage instead of a relational database. The chapter also walks through the data export feature that Adatum added to the aExpense application, and some of the changes the developers at Adatum made following performance testing. The Premise Adatum has now completed the migration of the aExpense application to the cloud, and added functionality that was missing during the initial migration so that users can upload and view scanned receipt images. However, as Adatum discovered when revisiting the costs of running the application in the cloud, there is one more opportunity to minimize these costs by switching to use Azure storage for expense items instead of a relational database. Adatum also wants to add a final piece of functionality in the application. The aExpense application must generate a file of data that summarizes the approved business expense submissions for a period. Adatum's on-premises payments system imports this data file and then makes the payments to Adatum employees. In addition to implementing these changes to the aExpense application, Adatum also needs to perform final performance testing and tuning to ensure that the application provides an optimum user experience whilst minimizing its resource usage. Goals and Requirements In this phase, Adatum has several specific goals. A simple cost analysis of the existing solution has revealed that Azure SQL Database would account for about one quarter of the annual running costs of the application (see Chapter 6, “Evaluating Cloud Hosting Costs,” for details of the cost calculations). Because the cost of using Azure storage is less than using Azure SQL Database, Adatum is keen to investigate whether it can use Azure storage instead. Adatum must evaluate whether the aExpense application can use Azure storage. Data integrity is critical, so Adatum wants to use transactions when a user submits multiple business expense items as a part of an expense submission. You should evaluate whether Azure storage can replace relational database storage in your application. Also in this phase of the aExpense migration the project the team at Adatum will create the data export feature for integration with its on-premises systems. The on-premises version of aExpense uses a scheduled SQL Server Integration Services job to generate the output file and sets the status of an expense submission to “processing” after it is exported. The on-premises application also imports data from the payments processing system to update the status of the expense submissions after the payment processing system makes a payment. This import process is not included in the current phase of the migration project. Figure 1 summarizes the export process in the original on-premises application. Figure 1 The design of the export process for the cloud version of aExpense must meet a number of goals. First, the cost of the export process should be kept to a minimum while making sure that it does not have a negative impact on the performance of the application for users. 
The export process must also be robust and be able to recover from a failure without compromising the integrity of aExpense's data or the accuracy of the exported data. The solution must also address the question of how to initiate the export by evaluating whether it should be a manually initiated operation or run on a specific schedule. If it is the latter, the team at Adatum must design a mechanism for initiating the task, such as using a Timer instance to execute it at regular intervals or by using a third party scheduler such as Quartz. The final requirement is to include a mechanism for transferring the data from the cloud-environment to the on-premises environment where the payment processing application can access it. Adatum has also evaluated the results from performance testing the application, and needs to implement a number of changes based on those results. For example, the developers discovered that constantly checking for the existence of a queue or table before accessing it was causing unnecessary processing overhead, and decided that the application should initialize storage requirements only once during startup, removing the need to check for the existence on each call that reads or writes data. The developers at Adatum also explored whether they should implement a paging mechanism, for displaying expense items, and how they could improve performance by fine tuning the configuration and the Windows Communication Foundation (WCF) Data Service code. Overview of the Solution In this section you will see how the developers at Adatum considered the options available for meeting their goals in this stage of the migration process, and the decisions they made. Why Use Azure Table Storage? As you saw in Chapter 5, “Executing Background Tasks,” Adatum already uses Azure storage blobs for storing the scanned receipt images and Azure storage queues for transferring data between the web and worker roles. This functionality was added to the aExpense application during the migration step described in Chapter 5. However, for storing data that is fundamentally relational in nature, such as the expense items currently stored in Azure SQL Database, the most appropriate Azure storage mechanism is tables. Azure tables provide a non-relational table-structured storage mechanism. Tables are collections of entities that do not have an enforced schema, which means a single table can contain entities that have different sets of properties. Even though the underlying approach is different from a relational database table, because each row is an entity that contains a collection of properties rather than a set of data rows containing columns of predefined data types, Azure tables can provide an equivalent storage capability. In Chapter 6, “Evaluating Cloud Hosting Costs,” of this guide you discovered that Azure table storage is less expensive per gigabyte stored than using Azure SQL Database. For example, in Adatum’s specific scenario, the running costs for the SQL Database are around $ 800.00 per year, which is 26% of the total cost. The calculated cost of the equivalent storage using Azure table storage is only around $ 25.00 per year, which is less than 1% of the total running costs. Therefore, it makes sense financially to consider moving to table storage, as long as the development and testing costs are not excessive and performance can be maintained. In addition to the cost advantage, Azure tables also offer other useful capabilities. 
They can be used to store huge volumes of data (a single Azure storage account can hold up to 100 TB of data), and can be accessed using a managed API or directly using REST queries. You can use Shared Access Signatures to control access to tables, partitions, and rows. In some circumstances table storage can also provide better scalability. The data is also protected through automatic geo-replication across multiple datacenters unless you disable this function (for example, if legal restrictions prevent data from being co-located in other regions). Profile Data By moving the expenses data from Azure SQL Database to Azure table storage, Adatum will be able to remove the dependency of the aExpense application on a relational database. The justification for using table storage assumes that Adatum will no longer need to pay for a cloud hosted SQL Server or Azure SQL Database. However, when reviewing this decision, Adatum realized that the aExpense application still uses the ASP.NET profile provider, which stores user profile data in Azure SQL Database. Therefore Adatum must find an alternative method for storing profile data. Adatum uses Azure Caching to store session data for users, but this is not suitable for storing profile data that must be persisted between user sessions. The developers at Adatum could write a custom profile provider that stores its data in Azure storage. However, after investigation, they decided to use the Azure ASP.NET Providers sample. This provider can be used to store membership, profile, roles, and session data in Azure tables and blobs. The Data Export Process There are three elements of the export process to consider: how to initiate the process, how to generate the data, and how to download the data from the cloud. Initiating the Export Process The simplest option for initiating the data export is to have a web page that returns the data on request, but there are some potential disadvantages to this approach. First, it adds to the web server's load and potentially affects the other users of the system. In the case of aExpense, this will probably not be significant because the computational requirements for producing the report are low. Second, if the process that generates the data is complex and the data volumes are high, the web page must be able to handle timeouts. Again, for aExpense, it is unlikely that this will be a significant problem. The most significant drawback to this solution in aExpense is that the current storage architecture for expense submission data is optimized for updating and retrieving individual expense submissions by using the user ID. The export process will need to access expense submission data by date and expense state. Unlike Azure SQL Database where you can define multiple indexes on a table, Azure table storage only has a single index on each table. Figure 2 illustrates the second option for initiating the data export. Each task has a dedicated worker role, so the image compression and thumbnail generation would be handled by Task 1 in Worker 1, and the data export would be performed by Task 2 in Worker 2. This would also be simple to implement, but in the case of aExpense where the export process will run twice a month, it's not worth the overhead of having a separate role instance. If your task ran more frequently and if it was computationally intensive, you might consider an additional worker role. 
Figure 2

Figure 3 illustrates the third option where an additional task inside an existing worker role performs the data export process. This approach makes use of existing compute resources and makes sense if the tasks are not too computationally intensive. At the present time, the Azure SDK does not include any task abstractions, so you need to either develop or find a framework to handle task-based processing for you. The team at Adatum will use the plumbing code classes described in Chapter 5, “Executing Background Tasks,” to define the tasks in the aExpense application. Designing and building this type of framework is not very difficult, but you do need to include all your own error handling and scheduling logic.

Figure 3

Adatum already has some simple abstractions that enable them to run multiple tasks in a single worker role.

Generating the Export Data
The team at Adatum decided to split the expense report generation process into two steps. The first step “flattens” the data model and puts the data for export into an Azure table. This table uses the expense submission's approval date as the partition key and the expense ID as the row key, and it stores the expense submission total. The second step reads this table and generates an Azure blob that contains the data ready for export as a comma-separated values (CSV) file. Adatum implemented each of these two steps as a task by using the plumbing code described in Chapter 5, “Executing Background Tasks.” Figure 4 illustrates how the task that adds data to the Azure table works.

Figure 4

First, a manager approves a business expense submission. This places a message that contains the expense submission's ID and approval date onto a queue (1), and updates the status of the submission in table storage (2). The task retrieves the message from the queue, calculates the total value of the expense submission from the expense detail items, and stores this as a single line in the Expense Export table. The task also updates the status of the expense submission to be "in process" before it deletes the message from the queue.

Exporting the Report Data
To export the data, Adatum considered two options. The first was to have a web page that enables a user to download the expense report data as a file. This page would query the expense report table by date and generate a CSV file that the payments processing system can import. Figure 5 illustrates this option.

Figure 5

The second option, shown in Figure 6, was to create another job in the worker process that runs on a schedule to generate the file in blob storage ready for download. Adatum will modify the on-premises payment processing system to download this file before importing it. Adatum selected this option because it enables them to schedule the job to run at a quiet time in order to avoid any impact on the performance of the application for users. The on-premises application can access the blob storage directly without involving either the Azure web role or worker role.

Figure 6

Adatum had to slightly modify the worker role plumbing code to support this process. In the original version of the plumbing code, a message in a queue triggered a task to run, but the application now also requires the ability to schedule tasks.

Inside the Implementation
Now is a good time to walk through these changes in more detail. As you go through this section, you may want to download the Visual Studio solution that accompanies this guide.
This solution (in the Azure-TableStorage folder) contains the implementation of aExpense after the changes made in this phase.

Storing Business Expense Data in Azure Table Storage
Moving from Azure SQL Database to Azure table storage meant that the developers at Adatum had to re-implement the data access layer (DAL) in the application. The original version of aExpense used LINQ to SQL as the technology in the data access layer to communicate with Azure SQL Database. The DAL converted the data that it retrieved using LINQ to SQL to a set of domain-model objects that it passed to the user interface (UI). The new version of aExpense that uses Azure table storage uses the managed Azure storage client to interact with Azure table storage. Because Azure table storage uses a fundamentally different approach to storage, this was not simply a case of replacing LINQ to SQL with the .NET Client Library.

How Many Tables?
The most important thing to understand when transitioning to Azure table storage is that the storage model is different from what you may be used to. In the relational world, the obvious data model for aExpense would have two tables, one for expense header entities and one for expense detail entities, with a foreign-key constraint to enforce data integrity. This reflects the schema that Adatum used in SQL Server and Azure SQL Database in previous steps of the migration process. However, the best data model to use is not so obvious with Azure table storage for a number of reasons:
- You can store multiple entity types in a single table in Azure.
- Entity Group Transactions are limited to a single partition in a single table (partitions are discussed in more detail later in this chapter).
- Azure table storage is relatively cheap, so you shouldn't be so concerned about normalizing your data and eliminating data redundancy.
Adatum could have used two Azure storage tables to store the expense header and expense detail entities. The advantage of this approach is simplicity because each table has its own, separate, schema. However, because transactions cannot span tables in Azure storage, there is a possibility that orphaned detail records could be left if there was a failure before the aExpense application saved the header record. For example, the developers would need to use two transactions to save an expense if Adatum had used two separate tables. The following code sample shows the outline of the SaveExpense method that would be required in the ExpenseRepository class — each call to the SaveChanges method is a separate transaction, one of which may fail leading to the risk of orphaned detail records.

// Example code when using two tables for expenses data.
public void SaveExpense(Expense expense)
{
    // create an expense row.
    var context = new ExpenseDataContext(this.account);
    ExpenseRow expenseRow = expense.ToTableEntity();
    // Iterate over the detail items (this loop is a sketch;
    // the exact key assignments are assumptions).
    foreach (var expenseItem in expense.Details)
    {
        // create an expense item row for each detail item.
        var expenseItemRow = expenseItem.ToTableEntity();
        expenseItemRow.PartitionKey = expenseRow.PartitionKey;
        expenseItemRow.RowKey = string.Format(
            "{0}_{1}", expense.Id, expenseItem.Id);
        context.AddObject(ExpenseDataContext.ExpenseItemTable,
            expenseItemRow);
        ...
    }
    // save the expense item rows.
    context.SaveChanges(SaveChangesOptions.Batch);
    // save the expense row.
    context.AddObject(ExpenseDataContext.ExpenseTable, expenseRow);
    context.SaveChanges();
    ...
}

To resolve this situation the developers would need to write code that implements a compensating transaction mechanism so that a failure when saving a header or detail row does not affect the integrity of the data. This is possible, but adds to the complexity of the solution.
For example, to resolve the potential issue of orphaned detail records after a failure, the developers could implement an “orphan collector” process that will regularly scan the details table looking for, and deleting, orphaned records. However, because the developers at Adatum chose to implement a multi-schema table for expense data, they can use a single transaction for saving both header and detail records. This approach enables them to use Entity Group Transactions to save an expense header entity and its related detail entities to a single partition in a single, atomic transaction. The following code sample from the ExpenseRepository class shows how the application saves an expense to table storage.

// Actual code used to save expenses data to a single table.
public void SaveExpense(Expense expense)
{
    var context = new ExpenseDataContext(this.account);
    IExpenseRow expenseRow = expense.ToTableEntity();
    expenseRow.PartitionKey = ExpenseRepository
        .EncodePartitionAndRowKey(expenseRow.UserName);
    expenseRow.RowKey = expense.Id.ToString();
    context.AddObject(ExpenseDataContext.ExpenseTable, expenseRow);
    // Iterate over the detail items (the start of this loop is a
    // sketch; the exact key assignments are assumptions).
    foreach (var expenseItem in expense.Details)
    {
        // create an expense item row in the same partition.
        var expenseItemRow = expenseItem.ToTableEntity();
        expenseItemRow.PartitionKey = expenseRow.PartitionKey;
        expenseItemRow.RowKey = string.Format(
            "{0}_{1}", expense.Id, expenseItemRow.ItemId);
        context.AddObject(ExpenseDataContext.ExpenseTable,
            expenseItemRow);
        // save receipt image if any
        if (expenseItem.Receipt != null && expenseItem.Receipt.Length > 0)
        {
            this.receiptStorage.AddReceipt(
                expenseItemRow.ItemId.ToString(),
                expenseItem.Receipt, string.Empty);
        }
    }
    // Save expense and the expense items row in the same
    // batch transaction using a retry policy.
    this.storageRetryPolicy.ExecuteAction(
        () => context.SaveChanges(SaveChangesOptions.Batch));
    ...
}

You can also see in the second example how Adatum chose to use the Enterprise Library Transient Fault Handling Application Block to retry the SaveChanges operation if it fails due to a temporary connectivity problem. The Azure storage client API includes support for custom retry policies, but Adatum uses the Transient Fault Handling Application Block to take advantage of its customization capabilities and to implement a standard approach to all the retry logic in the application. See Chapter 4, “Moving to Azure SQL Database,” for information about using the Transient Fault Handling Application Block.

Partition Keys and Row Keys
The second important decision about table storage is the selection of keys to use. Azure table storage uses two keys: a partition key and a row key. Azure uses the partition key to implement load balancing across storage nodes. The load balancer can identify “hot” partitions (partitions that contain data that is accessed more frequently than the data in other partitions) and run them on separate storage nodes in order to improve performance. This has deep implications for your data model design and your choice of partition keys:
- The partition key forms the first part of the tuple that uniquely identifies an entity in table storage. The row key is a unique identifier for an entity within a partition and forms the second part of the tuple that uniquely identifies an entity in table storage.
- You can only use Entity Group Transactions on entities in the same table and in the same partition. You may want to choose a partition key based on the transactional requirements of your application. Don't forget that a table can store multiple entity types.
- You can optimize queries based on your knowledge of partition keys. For example, if you know that all the entities you want to retrieve are located on the same partition, you can include the partition key in the where clause of the query.
In a single query, accessing multiple entities from the same partition is much faster than accessing multiple entities on different partitions. If the entities you want to retrieve span multiple partitions, you can split your query into multiple queries and execute them in parallel across the different partitions. Adatum determined that reverse chronological order is the most likely order in which the expense items will be accessed because users are typically interested in the most recent expenses. Therefore, it decided to use a row key that guarantees the expense items are stored in this order to avoid the need to sort them. The following code sample from the ExpenseKey class shows how the static Now property generates an inverted tick count to use in its InvertedTicks property.

public static ExpenseKey Now
{
    get
    {
        // Sketch: subtracting the current tick count from
        // DateTime.MaxValue.Ticks makes later dates sort first;
        // the exact formatting is an assumption.
        return new ExpenseKey(
            string.Format("{0:D19}",
                DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks));
    }
}

For the partition key, Adatum decided to use the UserName property because the vast majority of queries will filter based on a user name. For example, the website displays the expense submissions that belong to the logged-on user. This also enables the application to filter expense item rows by ExpenseID as if there was a foreign key relationship. As shown earlier in the SaveExpense method, the application creates this row key value for an expense item entity by combining the Id property of the expense header entity and the ItemId property of the expense item entity, separated by an underscore. The following code example shows how you could query for ExpenseItem rows based on ExpenseID by including the partition key in the query.

char charAfterSeparator = Convert.ToChar((Convert.ToInt32('_') + 1));
var nextId = expenseId.ToString() + charAfterSeparator;
var expenseItemQuery = (from expenseItem in context.ExpensesAndExpenseItems
    where expenseItem.RowKey.CompareTo(expenseId.ToString()) >= 0 &&
          expenseItem.RowKey.CompareTo(nextId) < 0 &&
          expenseItem.PartitionKey.CompareTo(expenseRow.PartitionKey) == 0
    select expenseItem).AsTableServiceQuery();

Azure places some restrictions on the characters that you can use in partition and row keys. Generally speaking, the restricted characters are ones that are meaningful in a URL. For more information, see “Understanding the Table Service Data Model.” In the aExpense application, it's possible that these illegal characters could appear in the UserName used as the partition key value for the Expense table. To avoid this problem, the aExpense application encodes the UserName value using a base64 encoding scheme before using the UserName value as a partition key. Implementing base64 encoding and decoding is very easy.

public static string EncodePartitionAndRowKey(string key)
{
    if (key == null)
    {
        return null;
    }
    return Convert.ToBase64String(
        System.Text.Encoding.UTF8.GetBytes(key));
}

public static string DecodePartitionAndRowKey(string encodedKey)
{
    if (encodedKey == null)
    {
        return null;
    }
    return System.Text.Encoding.UTF8.GetString(
        Convert.FromBase64String(encodedKey));
}

The team at Adatum first tried to use the UrlEncode method because it would have produced a more human readable encoding, but this approach failed because it does not encode the percent sign (%) character. Another approach would be to implement a custom escaping technique.

Defining the Schemas
In the aExpense application, two types of entity are stored in the Expense table: expense header entities (defined by the IExpenseRow interface) and expense detail entities (defined by the IExpenseItemRow interface). The following code sample shows these two interfaces and the IRow interface that defines the entity key.
public interface IExpenseRow : IRow
{
    // NOTE: DateTime, bool, and Guid types must be Nullable
    // in order to run in the storage emulator.
    string Id { get; set; }
    string UserName { get; set; }
    bool? Approved { get; set; }
    string ApproverName { get; set; }
    string CostCenter { get; set; }
    DateTime? Date { get; set; }
    string ReimbursementMethod { get; set; }
    string Title { get; set; }
}

public interface IExpenseItemRow : IRow
{
    Guid? ItemId { get; set; }
    string Description { get; set; }
    double? Amount { get; set; }
    string ReceiptUrl { get; set; }
    string ReceiptThumbnailUrl { get; set; }
}

public interface IRow
{
    string PartitionKey { get; set; }
    string RowKey { get; set; }
    DateTime Timestamp { get; set; }
    string Kind { get; set; }
}

Adatum uses the ExpenseAndExpenseItemRow and Row classes to implement the IRow, IExpenseRow, and IExpenseItemRow interfaces, and to extend the TableServiceEntity class from the StorageClient namespace. The following code sample shows the Row and ExpenseAndExpenseItemRow classes. The Row class defines a Kind property that is used to distinguish between the two types of entity stored in the table (see the TableRows enumeration in the DataAccessLayer folder of the aExpense.Shared project).

public abstract class Row : TableServiceEntity, IRow
{
    protected Row()
    {
    }

    protected Row(string kind) : this(null, null, kind)
    {
    }

    protected Row(
        string partitionKey, string rowKey, string kind)
        : base(partitionKey, rowKey)
    {
        this.Kind = kind;
    }

    public string Kind { get; set; }
}

public class ExpenseAndExpenseItemRow : Row, IExpenseRow, IExpenseItemRow
{
    public ExpenseAndExpenseItemRow()
    {
    }

    public ExpenseAndExpenseItemRow(TableRows rowKind)
    {
        this.Kind = rowKind.ToString();
    }

    // Properties from ExpenseRow
    public string Id { get; set; }
    public string UserName { get; set; }
    public bool? Approved { get; set; }
    public string ApproverName { get; set; }
    public string CostCenter { get; set; }
    public DateTime? Date { get; set; }
    public string ReimbursementMethod { get; set; }
    public string Title { get; set; }

    // Properties from ExpenseItemRow
    public Guid? ItemId { get; set; }
    public string Description { get; set; }
    public double? Amount { get; set; }
    public string ReceiptUrl { get; set; }
    public string ReceiptThumbnailUrl { get; set; }
}

The ExpenseDataContext class maps the ExpenseAndExpenseItemRow class to an Azure storage table named multientityschemaexpenses; a sketch of this mapping appears at the end of this section.

Retrieving Records from a Multi-Entity Schema Table
Storing multiple entity types in the same table does add to the complexity of the application. The aExpense application uses LINQ to specify what data to retrieve from table storage. The application retrieves expense submissions for approval by approver name (see the GetExpensesForApproval method under “Materializing Entities” below). Use the AsTableServiceQuery method to return data from Azure table storage. The AsTableServiceQuery method converts the standard IQueryable result to a CloudTableQuery result. Using a CloudTableQuery object offers the following benefits to the application:
- Data can be retrieved from the table in multiple segments instead of getting it all in one go. This is useful when dealing with a large set of data.
- You can specify a retry policy for cases when the query fails. However, as you saw earlier, Adatum chose to use the Transient Fault Handling Block instead.
The query methods in the ExpenseRepository class use the ExpenseAndExpenseItemRow entity class when they retrieve either header or detail entities from the expense table.
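As a reference point for the queries that follow, here is a minimal sketch of how the ExpenseDataContext class might map the ExpenseAndExpenseItemRow class to the multientityschemaexpenses table mentioned above. The constructor signature and the ExpensesAndExpenseItems property name match the usage shown throughout this chapter, but the exact member layout is an assumption.

public class ExpenseDataContext : TableServiceContext
{
    // Both header and detail entities live in this one table.
    public const string ExpenseTable = "multientityschemaexpenses";

    public ExpenseDataContext(CloudStorageAccount account)
        : base(account.TableEndpoint.ToString(), account.Credentials)
    {
    }

    // The queries in this chapter read through this property.
    public IQueryable<ExpenseAndExpenseItemRow> ExpensesAndExpenseItems
    {
        get
        {
            return this.CreateQuery<ExpenseAndExpenseItemRow>(
                ExpenseTable);
        }
    }
}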
The following code example from the GetExpensesByUser method in the ExpenseRepository class shows how to retrieve a header row (defined by the IExpenseRow interface).

var context = new ExpenseDataContext(this.account)
{ MergeOption = MergeOption.NoTracking };
var query = (from expense in context.ExpensesAndExpenseItems
    where expense.UserName.CompareTo(userName) == 0 &&
          expense.PartitionKey.CompareTo(
              EncodePartitionAndRowKey(userName)) == 0
    select expense).Take(10).AsTableServiceQuery();
try
{
    return this.storageRetryPolicy.ExecuteAction(
        () => query.Execute()).Select(e => e.ToModel()).ToList();
}
...

The following code sample from the GetExpensesById method in the ExpenseRepository class uses the Kind property to select only detail entities.

// Sketch — assumes the same row-key range technique shown
// earlier, with an extra filter on the Kind property.
char charAfterSeparator = Convert.ToChar((Convert.ToInt32('_') + 1));
var nextId = expenseId.ToString() + charAfterSeparator;
var expenseItemQuery = (from expenseItem in context.ExpensesAndExpenseItems
    where expenseItem.RowKey.CompareTo(expenseId.ToString()) >= 0 &&
          expenseItem.RowKey.CompareTo(nextId) < 0 &&
          expenseItem.Kind.CompareTo(
              TableRows.ExpenseItem.ToString()) == 0
    select expenseItem).AsTableServiceQuery();

Materializing Entities
In the aExpense application, all the methods in the ExpenseRepository class that return data from queries call the ToList method before returning the results to the caller.

public IEnumerable<Expense> GetExpensesForApproval(string approverName)
{
    ExpenseDataContext context = new ExpenseDataContext(this.account);
    var query = (from expense in context.ExpensesAndExpenseItems
        where expense.ApproverName.CompareTo(approverName) == 0
        select expense).AsTableServiceQuery();
    try
    {
        return this.storageRetryPolicy.ExecuteAction(() =>
            query.Execute()).Select(e => e.ToModel()).ToList();
    }
    catch (InvalidOperationException)
    {
        Log.Write(EventKind.Error,
            "By calling ToList(), this exception can be handled inside the repository.");
        throw;
    }
}

The reason for this is that calling the Execute method does not materialize the entities. Materialization does not happen until someone calls MoveNext on the IEnumerable collection. Without ToList, the first call to MoveNext happens outside the repository. The advantage of having the first call to the MoveNext method inside the ExpenseRepository class is that you can handle any data access exceptions inside the repository.

Query Performance
As mentioned earlier, the choice of partition key can have a big impact on the performance of the application. This is because Azure tracks activity at the partition level, and can automatically migrate a busy partition to a separate storage node in order to improve data access performance for the application. Adatum uses partition keys in queries to improve performance. For example, the following query to retrieve stored business expense submissions for a user would work even though it does not specify a partition key.

// Sketch of a query that omits the partition key.
var query = (from expense in context.ExpensesAndExpenseItems
    where expense.UserName.CompareTo(userName) == 0
    select expense).AsTableServiceQuery();

However, this query must scan all the partitions of the table to search for matching records. This is inefficient if there are a large number of records to search, and its performance may be further affected if it has to scan data across multiple storage nodes sequentially. Adatum’s test team did performance testing on the application using queries that do not include the partition key, and then evaluated the improvement when the partition key is included in the where clause. The testers found that there was a significant performance improvement in the aExpense application when using a query that includes the partition key.

Working with Development Storage
There are some differences between development table storage and Azure table storage documented at “Differences Between the Storage Emulator and Azure Storage Services.” The team at Adatum encountered the error “One of the request inputs is not valid” that occurs when testing the application with empty tables in development storage.
The solution that Adatum adopted was to insert, and then delete, a dummy row into the Azure tables if the application is using the local storage emulator. During the initialization of the web role, the application calls the CreateTableIfNotExist<T> extension method in the TableStorageExtensionMethods class to check whether it is running against local development storage. If this is the case it adds and then deletes a dummy record in the application's Azure tables. The following code from the TableStorageExtensionMethods class (defined in the Source\Shared\aExpense folder) demonstrates how the aExpense application determines whether it is using development storage and how it adds and deletes a dummy record to the table. public static bool CreateTableIfNotExist<T>( this CloudTableClient tableStorage, string entityName) where T : TableServiceEntity, new() { bool result = tableStorage.CreateTableIfNotExist(entityName); // Execute conditionally for development storage only if (tableStorage.BaseUri.IsLoopback) { InitializeTableSchemaFromEntity(tableStorage, entityName, new T()); } return result; } private static void InitializeTableSchemaFromEntity( CloudTableClient tableStorage, string entityName, TableServiceEntity entity) { TableServiceContext context = tableStorage.GetDataServiceContext(); DateTime now = DateTime.UtcNow; entity.PartitionKey = Guid.NewGuid().ToString(); entity.RowKey = Guid.NewGuid().ToString(); Array.ForEach( entity.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance), p => { if ((p.Name != "PartitionKey") && (p.Name != "RowKey") && (p.Name != "Timestamp")) { if (p.PropertyType == typeof(string)) { p.SetValue(entity, Guid.NewGuid().ToString(), null); } else if (p.PropertyType == typeof(DateTime)) { p.SetValue(entity, now, null); } } }); context.AddObject(entityName, entity); context.SaveChangesWithRetries(); context.DeleteObject(entity); context.SaveChangesWithRetries(); } Storing Profile Data Until now Adatum has used the built-in ASP.NET profile mechanism to store each user’s preferred reimbursement method. In Azure, the ASP.NET profile provider communicates with either SQL Server or Azure SQL Database (depending on the previous migration stage) where the ASPNETDB database resides. However, during this final migration step Adatum will move away from using a relational database in favor of storing all of the application data in Azure table and blob storage. Therefore it makes no sense to continue to use a relational database just for the profile data. Instead, Adatum chose to use a sample provider that utilizes Azure table storage to store profile information. You can download this provider from “Azure ASP.NET Providers Sample.” The only change required for the application to use a different profile provider is in the Web.config file. <profile defaultProvider="TableStorageProfileProvider"> <providers> <clear /> <add name="TableStorageProfileProvider" type="AExpense.Providers.TableStorageProfileProvider …" applicationName="aExpenseProfiles" /> </providers> <properties> <add name="PreferredReimbursementMethod" /> </properties> </profile> Using the TableStorageProfileProvider class does raise some issues for the application: - The table storage profile provider is unsupported sample code. - You must migrate your existing profile data from SQL Server to Azure table storage. - You need to consider whether, in the long run, Azure table storage is suitable for storing profile data. 
Even with these considerations taken into account, using the table storage profile provider enabled Adatum to get rid of the need for a relational database, which helps to minimize the running costs of the application.

Generating and Exporting the Expense Data
The developers at Adatum added functionality to the aExpense application to export a summary of the approved expenses data to use with Adatum’s existing on-premises reimbursement system.

Generating the Expense Report Table
The task that performs this operation uses the worker role plumbing code described in Chapter 5, “Executing Background Tasks.” The discussion here will focus on the task implementation and table design issues; it does not focus on the plumbing code. This task is the first of two tasks that generate the approved expense data for export. It is responsible for generating the "flattened" table of approved expense data in Azure table storage. The following code sample shows how the expense report export process begins in the ExpenseRepository class (in the DataAccessLayer folder of the aExpense.Shared project) where the UpdateApproved method adds a message to a queue and updates the Approved property of the expense header record.

public void UpdateApproved(Expense expense)
{
    var context = new ExpenseDataContext(this.account);
    ExpenseRow expenseRow = GetExpenseRowById(context, expense.Id);
    expenseRow.Approved = expense.Approved;
    var queue = new AzureQueueContext(this.account);
    this.storageRetryPolicy.ExecuteAction(
        () => queue.AddMessage(new ApprovedExpenseMessage
        {
            ExpenseId = expense.Id.ToString(),
            ApproveDate = DateTime.UtcNow
        }));
    context.UpdateObject(expenseRow);
    this.storageRetryPolicy.ExecuteAction(
        () => context.SaveChanges());
}

This code uses a new message type named ApprovedExpenseMessage that derives from the plumbing code class named BaseQueueMessage. The following code sample shows the two properties of the ApprovedExpenseMessage class.

public class ApprovedExpenseMessage : BaseQueueMessage
{
    // Property names and types follow the UpdateApproved
    // usage shown above.
    public string ExpenseId { get; set; }
    public DateTime ApproveDate { get; set; }
}

The following code shows how the ProcessMessage method in the ExpenseExportJob class (located in the Jobs folder of the aExpense.Workers project) retrieves the message from the queue and creates a new ExpenseExport entity to save to table storage.

public override bool ProcessMessage(
    ApprovedExpenseMessage message)
{
    try
    {
        Expense expense = this.expenses.GetExpenseById(
            new ExpenseKey(message.ExpenseId));
        if (expense == null)
        {
            return false;
        }
        // If the expense was not updated but a message was
        // persisted, we need to delete it.
        if (!expense.Approved)
        {
            return true;
        }
        double totalToPay = expense.Details.Sum(x => x.Amount);
        var export = new ExpenseExport
        {
            ApproveDate = message.ApproveDate,
            ApproverName = expense.ApproverName,
            CostCenter = expense.CostCenter,
            ExpenseId = expense.Id,
            ReimbursementMethod = expense.ReimbursementMethod,
            TotalAmount = totalToPay,
            UserName = expense.User.UserName
        };
        this.expenseExports.Save(export);
    }
    catch (InvalidOperationException ex)
    {
        var innerEx = ex.InnerException as DataServiceClientException;
        if (innerEx != null &&
            innerEx.StatusCode == (int)HttpStatusCode.Conflict)
        {
            // The data already exists, so we can return true
            // because we have processed this before.
            return true;
        }
        Log.Write(EventKind.Error, ex.TraceInformation());
        return false;
    }
    return true;
}

If this method fails for any reason other than a conflict on the insert, the plumbing code classes ensure that the message is left on the queue.
When the ProcessMessage method tries to process the message from the queue a second time, the insert to the expense report table fails with a duplicate key error and the inner exception reports this as a conflict in its StatusCode property. If this happens, the method can safely return a true result. If the Approved property of the Expense object is false, this indicates a failure during the UpdateApproved method after it added a message to the queue, but before it updated the table. In this circumstance, the ProcessMessage method removes the message from the queue without processing it. The partition key of the Expense Export table is the expense approval date, and the row key is the expense ID. This optimizes access to this data for queries that use the approval date in the where clause, which is what the export process requires. Exporting the Expenses Data This task is the second of two tasks that generate the approved expense data for export. It is responsible for creating a Azure blob that contains a CSV file of approved expense submissions data. The task that generates the blob containing the expense report data is slightly different from the two other tasks in the aExpense application. The other tasks poll a queue to see if there is any work for them to do. The export task is triggered by a schedule, which sets the task to run at fixed times. The team at Adatum had to modify their worker role plumbing code classes to support scheduled tasks. The worker role plumbing code classes now support scheduled tasks in addition to tasks that are triggered by a message on a queue. You can use the abstract class JobProcessor, which implements the IJobProcessor interface, to define new scheduled tasks. The following code example shows the JobProcessor class. public abstract class JobProcessor : IJobProcessor { private bool keepRunning; protected JobProcessor(int sleepInterval) { if (sleepInterval <= 0) { throw new ArgumentOutOfRangeException("sleepInterval"); } this.SleepInterval = sleepInterval; } protected int SleepInterval { get; set; } public void Run() { this.keepRunning = true; while (this.keepRunning) { Thread.Sleep(this.SleepInterval); this.RunCore(); } } public void Stop() { this.keepRunning = false; } protected abstract void RunCore(); } This implementation does not make it easy to specify the exact time that scheduled tasks will run. The time between tasks will be the value of the sleep interval, plus the time taken to run the task. If you need the task to run at a fixed time, you should measure how long the task takes to run and subtract that value from the sleep interval. In the aExpense application, the ExpenseExportBuilderJob class extends the JobProcessor class to define a scheduled task. The ExpenseExportBuilderJob class, shown in the following code example, defines the task that generates the expense report data and stores it as a blob. In this class, the expenseExports variable refers to the table of approved expense submissions, and the exportStorage variable refers to the report data in blob storage that will be downloaded. The call to the base class constructor specifies the interval at which the job runs. 
public class ExpenseExportBuilderJob : JobProcessor
{
    private readonly ExpenseExportRepository expenseExports;
    private readonly ExpenseExportStorage exportStorage;

    public ExpenseExportBuilderJob() : base(100000)
    {
        this.expenseExports = new ExpenseExportRepository();
        this.exportStorage = new ExpenseExportStorage();
    }

In the RunCore method, the code first retrieves all the approved expense submissions from the export table based on the job date. Next, the code appends a CSV record to the export data in blob storage for each approved expense submission. Finally, the code deletes from the table all the records it copied to blob storage.

protected override void RunCore()
{
    DateTime jobDate = DateTime.UtcNow;
    string name = jobDate.ToExpenseExportKey();
    IEnumerable<ExpenseExport> exports =
        this.expenseExports.Retrieve(jobDate);
    if (exports == null || exports.Count() == 0)
    {
        return;
    }
    string text = this.exportStorage.GetExport(name);
    var exportText = new StringBuilder(text);
    foreach (ExpenseExport expenseExport in exports)
    {
        exportText.AppendLine(expenseExport.ToCsvLine());
    }
    this.exportStorage.AddExport(name,
        exportText.ToString(), "text/plain");
    // Delete the exports.
    foreach (ExpenseExport exportToDelete in exports)
    {
        try
        {
            this.expenseExports.Delete(exportToDelete);
        }
        catch (InvalidOperationException ex)
        {
            Log.Write(EventKind.Error, ex.TraceInformation());
        }
    }
}
}

If the process fails before it deletes all the approved expense submissions from the export table, any undeleted approved expense submissions will be exported a second time when the task next runs. However, the exported CSV data includes the expense ID and the approval date of the expense submission, so the on-premises payment processing system will be able to identify duplicate items. The following code shows the methods that the RunCore method invokes to retrieve approved expense submissions and delete them after it copies them to the export blob. These methods are defined in the ExpenseExportRepository class located in the DataAccessLayer folder of the aExpense.Shared project. Because they use the job date to identify the partitions to search, these queries are fast and efficient.

public IEnumerable<ExpenseExport> Retrieve(DateTime jobDate)
{
    var context = new ExpenseDataContext(this.account);
    string compareDate = jobDate.ToExpenseExportKey();
    var query = (from export in context.ExpenseExport
        where export.PartitionKey.CompareTo(compareDate) <= 0
        select export).AsTableServiceQuery();
    var val = query.Execute();
    return val.Select(e => e.ToModel()).ToList();
}

public void Delete(ExpenseExport expenseExport)
{
    var context = new ExpenseDataContext(this.account);
    var query = (from export in context.ExpenseExport
        where export.PartitionKey.CompareTo(
                  expenseExport.ApproveDate.ToExpenseExportKey()) == 0 &&
              export.RowKey.CompareTo(
                  expenseExport.ExpenseId.ToString()) == 0
        select export).AsTableServiceQuery();
    ExpenseExportRow row = query.Execute().SingleOrDefault();
    if (row == null)
    {
        return;
    }
    context.DeleteObject(row);
    context.SaveChanges();
}
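Before moving on to those changes, one detail in the export code above is worth making explicit: the ToExpenseExportKey extension method is used but never defined here. Because the export table is partitioned on the approval date and the queries compare keys with CompareTo, a plausible sketch is a fixed-length, sortable date string; the exact format Adatum used is an assumption.

public static class DateTimeExtensions
{
    // Hypothetical sketch: a sortable, fixed-length date string
    // makes string comparison order match chronological order.
    public static string ToExpenseExportKey(this DateTime date)
    {
        return date.ToString("yyyy-MM-dd",
            System.Globalization.CultureInfo.InvariantCulture);
    }
}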
Initializing the Storage Tables, Blobs, and Queues During testing of the application, the team at Adatum discovered that the code that creates the expenses storage repository and the job that processes receipt images were affecting performance. They isolated this to the fact that the code calls the CreateIfNotExist method every time the repository is instantiated, which requires a round-trip to the storage server to check whether the receipt container exists. This also incurs an unnecessary storage transaction cost. To resolve this, the developers realized that they should create the receipt container only once when the application starts. Originally, the constructor for the ExpenseReceiptStorage class was responsible for checking that the expense receipt container existed, and creating it if necessary. This constructor is invoked whenever the application instantiates an ExpenseRepository object or a ReceiptThumbnailJob object. The CreateIfNotExist method that checks whether a container exists requires a round-trip to the storage server and incurs a storage transaction cost. To avoid these unnecessary round-trips, Adatum moved this logic to the ApplicationStorageInitializer class defined in the WebRole class. This class prepares all of the tables, blobs, and queues required by the application when the role first starts. public static class ApplicationStorageInitializer { public static void Initialize() { CloudStorageAccount account = CloudConfiguration.GetStorageAccount( "DataConnectionString"); // Tables – create if they do not already exist. var cloudTableClient = new CloudTableClient(account.TableEndpoint.ToString(), account.Credentials); cloudTableClient.CreateTableIfNotExist< ExpenseAndExpenseItemRow>( ExpenseDataContext.ExpenseTable); cloudTableClient.CreateTableIfNotExist<ExpenseExportRow>( ExpenseDataContext.ExpenseExportTable); // Blobs – create if they do not already exist. var client = account.CreateCloudBlobClient(); client.RetryPolicy = RetryPolicies.Retry(3, TimeSpan.FromSeconds(5)); var container = client.GetContainerReference( ExpenseReceiptStorage.ReceiptContainerName); container.CreateIfNotExist(); container = client.GetContainerReference( ExpenseExportStorage.ExpenseExportContainerName); container.CreateIfNotExist(); // Queues – remove any existing stored messages var queueContext = new AzureQueueContext(account); queueContext.Purge<NewReceiptMessage>(); queueContext.Purge<ApprovedExpenseMessage>(); } } The Application_Start method in the Global.asax.cs file and the OnStart method of the worker role invoke the Initialize method in this class. Implementing Paging with Azure Table Storage During performance testing, the response times for Default.aspx degraded as the test script added more and more expense submissions for a user. This happened because the current version of the Default.aspx page does not include any paging mechanism, so it always displays all the expense submissions for a user. As a temporary measure, Adatum modified the LINQ query that retrieves expense submissions by user to include a Take(10) clause, so that the application only requests the first 10 expense submissions. In a future phase of the project, Adatum will add paging functionality to the Default.aspx page. Adatum has not implemented any paging functionality in the current phase of the project, but this section gives an outline of the approach it intends to take. 
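The paging sketches that follow refer to a page-level ContinuationStack property without showing how it is exposed. Since the text below says the continuation tokens live in session state, a minimal assumed implementation might look like this; the property name matches the later samples, but the body is an assumption.

private ContinuationStack ContinuationStack
{
    get
    {
        // Lazily create the stack and keep it in session state
        // so it survives between page requests.
        var stack = this.Session["ContinuationStack"] as ContinuationStack;
        if (stack == null)
        {
            stack = new ContinuationStack();
            this.Session["ContinuationStack"] = stack;
        }
        return stack;
    }
}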
The ResultSegment class in the Azure StorageClient library provides an opaque ContinuationToken property that you can use to access the next set of results from a query if that query did not return the full set of results; for example, if the query used the Take operator to return a small number of results to display on a page. This ContinuationToken property will form the basis of any paging implementation. The ResultSegment class only returns a ContinuationToken object to access the next page of results, and not the previous page, so if your application requires the ability to page backward, you must store ContinuationToken objects that point to previous pages. A stack is a suitable data structure to use. Figure 7 shows the state of a stack after a user has browsed to the first page and then paged forward as far as the third page.

Figure 7

If a user clicks the Next hyperlink to browse to page 4, the page peeks at the stack to get the continuation token for page 4. After the page executes the query with the continuation token from the stack, it pushes a new continuation token for page 5 onto the stack. If a user clicks the Previous hyperlink to browse to page 2, the page will pop two entries from the stack, and then peek at the stack to get the continuation token for page 2. After the page executes the query with the continuation token from the stack, it will push a new continuation token for page 3 onto the stack. The following code examples show how Adatum could implement this behavior in an asynchronous ASP.NET page. The following two code examples show how to create an asynchronous ASP.NET page. First, add an Async="true" attribute to the page directive in the .aspx file.

<%@ Page Language="C#" AutoEventWireup="true" Async="true" %>
<%-- Sketch: the remaining directive attributes are omitted;
     Async="true" is the point of this example. --%>

Second, register begin and end methods for the asynchronous operation in the load event for the page.

protected void Page_Load(object sender, EventArgs e)
{
    // Sketch: wire up the begin/end handlers shown later in
    // this section.
    this.AddOnPreRenderCompleteAsync(
        new BeginEventHandler(this.BeginAsyncOperation),
        new EndEventHandler(this.EndAsyncOperation));
}

The following code example shows the definition of the ContinuationStack class that the application uses to store continuation tokens in the session state.

public class ContinuationStack
{
    private readonly Stack stack;

    public ContinuationStack()
    {
        this.stack = new Stack();
    }

    public bool CanMoveBack()
    {
        if (this.stack.Count >= 2) return true;
        return false;
    }

    public bool CanMoveForward()
    {
        return this.GetForwardToken() != null;
    }

    public ResultContinuation GetBackToken()
    {
        if (this.stack.Count == 0) return null;
        // We need to pop twice and then return the next token.
        this.stack.Pop();
        this.stack.Pop();
        if (this.stack.Count == 0) return null;
        return this.stack.Peek() as ResultContinuation;
    }

    public ResultContinuation GetForwardToken()
    {
        if (this.stack.Count == 0) return null;
        return this.stack.Peek() as ResultContinuation;
    }

    public void AddToken(ResultContinuation result)
    {
        this.stack.Push(result);
    }
}

The following code example shows the BeginAsyncOperation method that starts the query execution for the next page of data. The ct value in the query string specifies the direction to move.
private IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
    AsyncCallback cb, object extradata)
{
    var query = new MessageContext(CloudConfiguration.GetStorageAccount())
        .Messages.Take(3).AsTableServiceQuery();
    if (Request["ct"] == "forward")
    {
        var segment = this.ContinuationStack.GetForwardToken();
        return query.BeginExecuteSegmented(segment, cb, query);
    }
    if (Request["ct"] == "back")
    {
        var segment = this.ContinuationStack.GetBackToken();
        return query.BeginExecuteSegmented(segment, cb, query);
    }
    return query.BeginExecuteSegmented(cb, query);
}

The EndAsyncOperation method puts the query results into the messages list and pushes the new continuation token onto the stack.

private List<MessageEntity> messages;

private void EndAsyncOperation(IAsyncResult result)
{
    var cloudTableQuery =
        result.AsyncState as CloudTableQuery<MessageEntity>;
    ResultSegment<MessageEntity> resultSegment =
        cloudTableQuery.EndExecuteSegmented(result);
    this.ContinuationStack.AddToken(
        resultSegment.ContinuationToken);
    this.messages = resultSegment.Results.ToList();
}

Preventing Users from Uploading Large Images
To prevent users from uploading large images of receipt scans to aExpense, Adatum configured the application to allow a maximum upload size of 1,024 kilobytes (KB) to the AddExpense.aspx page. The following code example shows the setting in the Web.config file.

<location path="AddExpense.aspx">
  <system.web>
    <!-- Sketch: maxRequestLength is specified in KB; the
         surrounding location element is an assumption. -->
    <httpRuntime maxRequestLength="1024"/>
  </system.web>
</location>

Validating User Input
The cloud-based version of aExpense does not perform comprehensive checks on user input for invalid or dangerous items. The AddExpense.aspx file includes some basic validation that checks the length of user input, but Adatum should add additional validation checks to the OnAddNewExpenseItemClick method in the AddExpense.aspx.cs file.

System.Net Configuration Changes
The following code example shows two configuration changes that Adatum made to the aExpense application to improve its performance. The first change switches off the “Expect 100-continue” feature. If this feature is enabled, when the application sends a PUT or POST request, it can delay sending the payload by sending an “Expect 100-continue” header. When the server receives this message, it uses the available information in the header to check whether it could make the call, and if it can, it sends back a status code 100 to the client. The client then sends the remainder of the payload. This means that the client can check for many common errors without sending the payload. If you have tested the client well enough to ensure that it is not sending any bad requests, you can turn off the “Expect 100-continue” feature and reduce the number of round trips to the server. This is especially useful when the client sends many messages with small payloads; for example, when the client is using the table or queue service. The second configuration change increases the maximum number of connections that the web server will maintain from its default value of two. If this value is set too low, the problem manifests itself through “Underlying connection was closed” messages.

<system.net>
  <settings>
    <servicePointManager expect100Continue="false"/>
  </settings>
  <connectionManagement>
    <!-- Illustrative value; any limit well above the
         default of 2 demonstrates the change. -->
    <add address="*" maxconnection="12"/>
  </connectionManagement>
</system.net>

WCF Data Service Optimizations
Because of a known performance issue with WCF Data Services, Adatum defined a ResolveType delegate on the ExpenseDataContext class in the aExpense application. Without this delegate, query performance degrades as the number of entities that the query returns increases. The following code example shows the delegate definition.

private static Type ResolveEntityType(string name)
{
    var tableName = name.Split(new[] { '.'
}).Last(); switch (tableName) { case ExpenseTable: return typeof(ExpenseRow); case ExpenseItemTable: return typeof(ExpenseItemRow); case ExpenseExportTable: return typeof(ExpenseExportRow); } throw new ArgumentException( string.Format( CultureInfo.InvariantCulture, "Could not resolve the table name '{0}' to a known entity type.", name)); } Adatum added a further optimization to the WCF Data Services client code by setting the MergeOption to NoTracking for the queries in the ExpenseRepository class. If you are not making any changes to the entities that WCF Data Services retrieve, there is no need for the DataContext object to initialize change tracking for entities. More Information “Blobs, Queues, and Tables” discusses the use of Azure blobs, tables, and queues. “Data Management” explores the options for storing data in Azure SQL Database and blob storage. The Azure Managed Library includes detailed reference information for the Microsoft.WindowsAzure.StorageClient namespace. “Azure Storage Services REST API Reference” explains how you can interact with Azure storage using scripts and code.
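Returning to the WCF Data Services optimizations: the chapter shows the ResolveEntityType delegate definition but not where it is attached. A sketch of the wiring, extending the ExpenseDataContext constructor outlined earlier, might look like this; the constructor body is an assumption based on the surrounding text.

public ExpenseDataContext(CloudStorageAccount account)
    : base(account.TableEndpoint.ToString(), account.Credentials)
{
    // Attach the delegate so WCF Data Services can resolve entity
    // types by name instead of probing loaded assemblies.
    this.ResolveType = ResolveEntityType;
}

// Read-only callers also opt out of change tracking:
var context = new ExpenseDataContext(account)
{
    MergeOption = MergeOption.NoTracking
};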
https://msdn.microsoft.com/en-us/library/ff803362.aspx
CC-MAIN-2015-22
en
refinedweb
26 October 2012 04:44 [Source: ICIS news] SINGAPORE (ICIS)--Olin Corp has posted a 39% year-on-year decline in third-quarter 2012 net profit at $28.7m (€22.1m) despite a 5.6% increase in sales. Sales for the September quarter totalled $581.2m, up from $550.2m in the same period in 2011, the company said in a statement. “Third-quarter 2012 results included $47.6m of sales and $1.9m of pre-tax segment income associated with the new chemical distribution segment created by the acquisition of KA Steel Chemicals (KA Steel) on 22 August 2012,” Olin said. Olin’s cost of goods sold for the third quarter increased by 10% to $475.8m, it said. Operating expenses for the period also included an $8.3m charge related to the acquisition of KA Steel. Olin incurred one-time costs of $4.9m in the three months to September 2012 associated with two plant start-ups in the chlor-alkali business, it said. In the first nine months of the year, Olin’s net profit shrank by 48% year on year to $115m, even as sales grew by 5.4%, the company said. Cost of goods sold for January-September 2012 stood at $1.26bn, compared with $1.21bn in the previous corresponding period.
http://www.icis.com/Articles/2012/10/26/9607578/us-olin-corp-q3-net-profit-falls-39-on-higher-operating.html
CC-MAIN-2015-22
en
refinedweb
public class SortKey extends Object

See Also: SortControl

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

matchingRuleID - The possibly null ID of the matching rule to use to order the attribute values. If not specified then the ordering matching rule defined for the sort key attribute is used.

public String getAttributeID()
public boolean isAscending()
public String getMatchingRuleID()
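The page above lists the API without showing usage; a small hedged sketch follows (the attribute names, environment setup, and criticality choice are illustrative, not from this page):

import javax.naming.ldap.Control;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import javax.naming.ldap.SortControl;
import javax.naming.ldap.SortKey;

// Ask the server to sort results: "sn" ascending with the default ordering
// rule, then "cn" descending (null matching-rule ID = attribute's default).
SortKey[] keys = {
    new SortKey("sn"),
    new SortKey("cn", false, null)
};
LdapContext ctx = new InitialLdapContext(env, null); // env: provider settings (assumed)
ctx.setRequestControls(new Control[] {
    new SortControl(keys, Control.CRITICAL)           // may throw IOException
});
// ...perform the search; getAttributeID()/isAscending()/getMatchingRuleID()
// can be read back from each SortKey if needed.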
http://docs.oracle.com/javase/8/docs/api/javax/naming/ldap/SortKey.html
CC-MAIN-2015-22
en
refinedweb
Hi good afternoon write a java program that Implement an array ADT with following operations: - a. Insert b. Delete c. Number of elements d. Display all elements e. Is Empty simple program question how do i get a java program to take an input from the user and have it terminate by a period and not by pressing enter Good tutorials for beginners in Java Hi, I am beginners in Java... in details about good tutorials for beginners in Java with example? Thanks. Hi, want to be command over Java, you should go on the link and follow Good Looking Java Charts and Graphs Is there a java chart library that will generate charts and graphs with the quality of visifire or fusion charts? The JFreeChart graph quality is not professional looking. Unless it can HOW TO BECOME A GOOD PROGRAMMER I want to know how to become good programmer Hi Friend, Please go through the following link... learn java easily and make a command over core java to proceed further. Thanks in java program in java write a reverse program in java using string buffer.the input and out put as follows. input- hi good mornig out put-ih doog ginrom Java Program - Java Beginners Java Program Hi I have this program I cant figure out. Write a program called DayGui.java that creates a GUI having the following properties... cmdGood Caption Good Mnemonic G Problem Statement You are required to write a program... an expression. Like in any good equation, the LHS must equal the RHS. The operators... given, then your program must create the expression: 8 - 5 * 2 = -2 Here
I must use an array to ask user to key in the marks for the 10 How to Java Program With the Basic Programming Code A java Program begins with public class Good... in your Syntax of programming code. To Run this Program code Java Good Morning... How to Java Program JSP Simple Examples; Using Protected Access in JSP In java there are three types... the program to escape from the for, while, switch and do while loops. A break...; EL and Complex Java Beans Java Beans simple reference source - Java Beginners Java simple reference source Hi, please could you recommend me a Java simple reference source (on line or e-book) where I could quickly find..., this is not exactly what I am looking for. Let me explain. I'm a Java beginner writing java program write a program to create text area and display the various mouse handling events java program Write a program to find the difference between sum of the squares and the square of the sums of n numbers simple webdesign program with coding how to design a webpage using html Please go through the following link: HTML Tutorials
http://roseindia.net/tutorialhelp/comment/53008
CC-MAIN-2015-22
en
refinedweb
RIF In RDF - Document title: - RIF In RDF (Second Edition) - Editors: - Sandro Hawke, W3C/MIT - Axel Polleres, DERI, NUI Galway - Abstract - Status of this Document - This is an in-progress, unstable version. If you want something reliable, use the Latest Published Version. Copyright © 2010 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply. Contents - 1 Introduction - 2 Use Cases - 3 Requirements - 4 Extensibility - 5 Mapping from RIF XML to RDF Graphs - 6 The Reverse Mapping (Extracting RIF XML) - 7 Importing RIF into RDF - 8 Semantics of RIF in RDF - 9 Acknowledgements - 10 References - 11 Appendix Complete Example - 12 Changelog
- Req6: RIF Extension are First Class in RDF View — viewed as triples, there should be no indication of which features are in which dialects or extensions; the intent here is the allow the feature set to evolve and particular applications to use the appropriate set of features without regard to which features happen to be in RIF Core, RIF BLD, or RIF PRD. 4 Extensibility. 5 Mapping from RIF XML to RDF Graphs. 5.1 Namespaces: <> 5.2 The <id> and <meta> Elements. 5.3 The <Var> and <Const> Elements. 5.4 General Mapping: - Mode 0 - These elements have the ordered="yes" attribute. Their children are mapped to an RDF list (collection). - Mode 1 - These elements are required by the XML schema to appear exactly once. Their children are mapped directly to the value role of an RDF triple - Mode 2 - All the (zero or more) values of these elements are gathered, in document order, into an RDF list. When these elements do not appear in their class elements, an empty RDF list is generated. - Mode 3 - Special handling for the <slot> property, converting name/arg and key/value pairs into explicit pairs The mapping for each mode is specified in Table 2 below. The mapping depends on the identity of an RDF property, written as prop, and the mode. Table 3 specifies special-case values for prop and mode, but otherwise they are determined as follows: - prop is the concatenation of the property element's tag's namespace IRI followed by its local part. For example, for the <rif:args> element, the RDF property prop has the IRI "". - If the element has an attribute "ordered" with the value "yes", it is Mode 0; otherwise, it is Mode 1. (As noted, RIF extensions must use required property elements, so Modes 2 and 3 are not available to them.) This table specifies exceptions to the default rules for determining the value of prop and the mode of the property element: 6 The Reverse Mapping (Extracting RIF XML). 7 Importing RIF into RDF. 8 Semantics of RIF in RDF - the RIF document Ri if Ri is a RIF/XML document, and - the RIF document obtained from applying the inverse mapping XTr to the graph Gi if Ri denotes an RDF graph Gi.. 9 Acknowledgements). [ 10 References 10.1 Normative References - [RDF Concepts] - Resource Description Framework (RDF): Concepts and Abstract Syntax, G. Klyne, J. Carrol, Editors, W3C Recommendation, 10 February 2004,. Latest version available at. - [RIF Core] - RIF Core Dialect, Harold Boley, Gary Hallmark, Michael Kifer, Adrian Paschke, Axel Polleres and Dave Reynolds (Editors), W3C Recommendation. Available at. - [RIF RDF+OWL] - RIF RDF and OWL Compatibility, Jos de Bruijn (Editor), W3C Recommendation. Available at. - [Turtle] - Turtle - Terse RDF Triple Language, David Beckett and Tim Berners-Lee, Authors, W3C Team Submission 14 January 2008. Latest Version available at . 10.2 Nonnormative References - [GRDDL] - Gleaning Resource Descriptions from Dialects of Languages (GRDDL), Dan Connolly, Editors, W3C Recommendation, 11 September 2007, . Latest version available at . - [OWL2 Mapping] - OWL 2 Web Ontology Language: Mapping to RDF Graphs, Peter F. Patel-Schneider, Boris Motik, Eds., W3C Recommendation 27 October 2009, . Latest version at . - [RDF Semantics] - RDF Semantics, Patrick Hayes, Ed., W3C Recommendation 10 February 2004, . Latest version at . - [RDF Tools] - Semantic Web Development Tools, Website: retrieved on 21 June 2010. - [RDF XML] - RDF/XML Syntax Specification (Revised), Dave Beckett, Ed., W3C Recommendation 10 February 2004, . Latest version at . 
- [RDFa] - RDFa in XHTML: Syntax and Processing, Ben Adida, Mark Birbeck, Shane McCarron, Steven Pemberton, Eds., W3C Recommendation 14 October 2008, . Latest version at . - [RDF] - Resource Description Framework (RDF), Website retrieved on 21 Jun 2010. - [RIF BLD] - BLD (Reference will be filled in at publication time.) - [RIF Charter] - Rule Interchange Format Working Group Charter, Sandro Hawke, Ed., . - [RIF FLD] - FLD (Reference will be filled in at publication time.) - [RIF Overview] - Overview (replaced at publication) - [RIF PRD] - PRD (replaced during publication) - [SPARQL ER] - SPARQL 1.1 Entailment Regimes, Birte Glimm and Chimezie Ogbuji, Editors, W3C Working Draft, 12 May 2011, . Latest version at . 11 Appendix Complete Example 12 Changelog 12.1 Changes since 12 May 2011 References were updated as part of publishing RIF Second Edition 12.2 Changes since 22 June 2010 The mapping was changed in several ways. In particular, rdf:type arcs are now used, and some properties were renamed to be closer to the RIF/XML names. Two sections were added to define a mechanism for importing RIF documents into RDF documents, using a rif:usedWithProfile property. Placeholder appendices were removed because we did not develop the anticipated extra materials.
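Because the tables of Sections 5.2-5.4 were lost in extraction, here is a purely hypothetical illustration of the Mode 0 rule described in Section 5.4 (children of an ordered="yes" property element become an RDF list). The prefix binding is an assumption, not the normative vocabulary:

# Hypothetical sketch only; the rif: namespace IRI is assumed.
@prefix rif: <http://www.w3.org/2007/rif#> .

# An XML element <args ordered="yes"> with two child terms would surface as:
_:atom rif:args ( _:arg1 _:arg2 ) .   # Mode 0: an rdf:List in document order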
http://www.w3.org/2005/rules/wiki/RIF_In_RDF
CC-MAIN-2015-22
en
refinedweb
Improvements in Schema Cache Behavior in MSXML 6.0 A number of changes and improvements have been made to the schema cache in MSXML 6.0. There are significant differences between MSXML 5.0 and MSXML 6.0 regarding adding schemas to the schema cache. In MSXML 5.0 and before, when you add a schema, imported schemas were added into the same "top-level" schema in the schema cache, even if they had a different namespace. If you called the getSchema method, you received an ISchema interface object that contained definitions for the schema that you added, as well as all schemas that were imported. In MSXML 6.0, when you add a schema, imported schemas with different target namespaces are added as their own "top-level" schema, with their own namespace. This means that after adding a schema that imports schemas with other target namespaces, if you want to see the definitions for all types in all namespaces, you have to call the getSchema method separately for each namespace. When a schema (or imported schema) is added to the cache and an existing schema has the same target namespace, the two schemas are merged. If there are conflicts between types in the two schemas, MSXML 6.0 throws an exception. This is sometimes called adding "partial" schemas. You can construct a schema cache from several schema documents, all having the same target namespace. All of the schemas are merged into the same namespace. Any namespace or type already added to the schema cache can be referenced by another schema in the cache even if there is no explicit import in the referencing schema. You need to set validateOnLoad Property to false to avoid issues around the order of calls to this method. When schemas are added to the schema cache, either by the add method (IXMLDOMSchemaCollection/XMLSchemaCache) or by the addCollection method, all of the schemas (including imported schemas) are added to the cache, or none of them are.
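As a concrete illustration of the merging behavior described above, here is a hedged C++ sketch using the #import-generated MSXML 6.0 wrappers (the target namespace and file names are placeholders, and COM initialization is assumed):

// Sketch only: assumes #import "msxml6.dll" and CoInitialize already done.
MSXML2::IXMLDOMSchemaCollection2Ptr cache(__uuidof(MSXML2::XMLSchemaCache60));

// Avoid order-of-add issues, as recommended above.
cache->validateOnLoad = VARIANT_FALSE;

// Two documents with the same target namespace are merged into one schema;
// conflicting type definitions would make MSXML 6.0 throw.
cache->add(_bstr_t(L"urn:example:books"), _variant_t(L"books-part1.xsd"));
cache->add(_bstr_t(L"urn:example:books"), _variant_t(L"books-part2.xsd"));

// Imported schemas with different target namespaces are separate top-level
// schemas in 6.0, so each namespace is fetched with its own getSchema call.
MSXML2::ISchemaPtr schema = cache->getSchema(_bstr_t(L"urn:example:books"));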
https://msdn.microsoft.com/en-us/library/ms764692.aspx
CC-MAIN-2015-22
en
refinedweb
. Step 1: Concept We attach the magnetic reed sensor to my office door and door-frame. A wire runs from the magReed sensor to a pin in the Arduino circuit. Arduino watches that pin's status, HIGH or LOW, and when the status changes from one to the other Arduino reports on that change through Serial.write(). The Processing sketch picks up the Serial.write() call and checks to see if the current state is the same as the last one posted to Twitter. If the two states are the same then it will not post, but if the new state is different from the previous state, then we're in business. Processing uses twitter4j and OAuth to post the new state to your Twitter account. Done and done. By the way great work ,helped a lot...!!! I tried this project. But Only works at just 1st time to my iphone twitter app. and then never works again --; My error message is here: ------------------------------------------------------------------ RXTX Warning: Removing stale lock file. /var/lock/LK.027.033.008 What's wrong with me ? Intel iMac x64, MacOSX 10.8.3 Arduino 1.5.1 Processing 2.0b8 Twitter4j.jar that you offering one. But i am currently getting these errors on processing: Stable Library ========================================= Native lib Version = RXTX-2.1-7 Java lib Version = RXTX-2.1-7 [0] "COM5" [1] "COM7" [2] "COM8" gnu.io.PortInUseException: Unknown Application at gnu.io.CommPortIdentifier.open(CommPortIdentifier.java:354)) [color=red]This is my current processing code:[/color] import processing.serial.*; import twitter4j.conf.*; import twitter4j.internal.async.*; import twitter4j.internal.org.json.*; import twitter4j.internal.logging.*; import twitter4j.http.*; import twitter4j.api.*; import twitter4j.util.*; import twitter4j.internal.http.*; import twitter4j.*; static String OAuthConsumerKey = "ykaw70kvBc6jVV21eLlWA"; static String OAuthConsumerSecret = "PllQqnwaZV7aYuH33vniloGW2U5fbkfYxef2LQVAK0"; static String AccessToken = "20026567-9juOyYchv9k7PXsu7kyrvhpHKkOY3Fg6xTn9vatuA"; static String AccessTokenSecret = "AcIhlVX95nvpFYMyk9oh41PWHe3TXYqhMbSy6hLgQ"; Serial arduino; Twitter twitter = new TwitterFactory().getInstance(); void setup() { size(125, 125); frameRate(10); background(0); println(Serial.list()); String arduinoPort = Serial.list()[0]; arduino = new Serial(this, arduinoPort, 9600); [color=red]ERROR BEGINS ON THIS LINE[/color] loginTwitter(); } void loginTwitter() { twitter.setOAuthConsumer(OAuthConsumerKey, OAuthConsumerSecret); AccessToken accessToken = loadAccessToken(); twitter.setOAuthAccessToken(accessToken); } private static AccessToken loadAccessToken() { return new AccessToken(AccessToken, AccessTokenSecret); } void draw() { background(0); text("simpleTweet_00", 18, 45); text("@msg_box", 30, 70); listenToArduino(); } void listenToArduino() { String msgOut = ""; int arduinoMsg = 0; if (arduino.available() >= 1) { arduinoMsg = arduino.read(); if (arduinoMsg == 1) { msgOut = "Opened door at "+hour()+":"+minute()+":"+second(); } if (arduinoMsg == 2) { msgOut = "Closed door at "+hour()+":"+minute()+":"+second(); } compareMsg(msgOut); // this step is optional // postMsg(msgOut); } } void postMsg(String s) { try { Status status = twitter.updateStatus(s); println("new tweet --:{ " + status.getText() + " }:--"); } catch(TwitterException e) { println("Status Error: " + e + "; statusCode: " + e.getStatusCode()); } } void compareMsg(String s) { // compare new msg against latest tweet to avoid reTweets java.util.List statuses = null; String prevMsg = ""; String newMsg = s; try { statuses = 
twitter.getUserTimeline(); } catch(TwitterException e) { println("Timeline Error: " + e + "; statusCode: " + e.getStatusCode()); } Status status = (Status)statuses.get(0); prevMsg = status.getText(); String[] p = splitTokens(prevMsg); String[] n = splitTokens(newMsg); //println("("+p[0]+") -> "+n[0]); // debug if (p[0].equals(n[0]) == false) { postMsg(newMsg); } //println(s); // debug } [color=red]And these are the errors on python:[/color] running... simpleTweet_01_python arduino msg: #peacefulGlow Traceback (most recent call last): File "C:/Users/Ciaran/Desktop/python final", line 47, in listenToArduino() File "C:/Users/Ciaran/Desktop/python final", line 24, in listenToArduino compareMsg(msg.strip()) File "C:/Users/Ciaran/Desktop/python final", line 31, in compareMsg pM = ""+prevMsg[0]+"" IndexError: list index out of range >>> [color=red][font=Verdana]This is my current python code:[/font][/color] print 'running... simpleTweet_01_python' # import libraries import twitter import serial import time # connect to arduino via serial port arduino = serial.Serial('COM5', 9600, timeout=1) # establish OAuth id with twitter api = twitter.Api(consumer_key='ykaw70kvBc6jVV21eLlWA', consumer_secret='PllQqnwaZV7aYuH33vniloGW2U5fbkfYxef2LQVAK0', access_token_key='20026567-9juOyYchv9k7PXsu7kyrvhpHKkOY3Fg6xTn9vatuA', access_token_secret='AcIhlVX95nvpFYMyk9oh41PWHe3TXYqhMbSy6hLgQ') # listen to arduino def listenToArduino(): msg=arduino.readline() if msg > '': print 'arduino msg: '+msg.strip() compareMsg(msg.strip()) # avoid duplicate posts def compareMsg(newMsg): # compare the first word from new and old status = api.GetUserTimeline('yourUsername') prevMsg = [s.text for s in status] pM = ""+prevMsg[0]+"" pM = pM.split() nM = newMsg.split() print "prevMsg: "+pM[0] print "newMsg: "+nM[0] if pM[0] != nM[0]: print "bam" postMsg(newMsg) # post new message to twitter def postMsg(newMsg): localtime = time.asctime(time.localtime(time.time())) tweet = api.PostUpdate(hello) print "tweeted: "+tweet.text while 1: listenToArduino() _______________________________________________________________________ I know atleast one of the buttons are working as it sends the signal of [color=red]arduino msg: #peacefulGlow [/color] to pyhon when running module but as soon as the button is pressed then error messages appear. My LED is not lighting up at all :( I can send pictures of the circuitboard if needed. Please will someone help me with this. Either contact me here or and email to [email protected] would be great Thanks I have now got the led to cycle through some colours but it seems to fade in and out nicely then every few seconds it will blink off then back on? and I also do not understand how you are supposed to run python+processing (for button code) at the same time of running the arduino RGB LED code as both can not use the same COM (COM5 in my case) at the same time Im getting errors on processing like, no libraries for twitter4j.http. So I downloaded a jar file called, twitter4j-2.0.9 and draged it. But I was libraries using twitter4j 2.2.5- Then another error came up, acces token is ambiguos. And then on the code I put: import twitter4j.http.OAuthToken*; And then it says the Token is not visible. Im using mac, I really need help. Thanks. Good luck! If you iron it out please come back and post the answer. Not an answer, but I found Python to be really easy to use. In fact I don't even use Processing at all anymore. Im trying to figure how to install the libraries for mac. Thanks for the great project. 
I'm trying to get this code running, but I am stuck installing the library. I've put it in the location you suggested which didn't work. I then tried the default library location (where I have installed other libraries) which is a subdirectory of the sketch folder. This didn't work either. I did find a discussion about installing libraries into Processing () and tried to change the name of the twitter4j-core.jar to twitter4j.jar so that it matched the library name as they suggested. Processing then recognized the library, but gave me the following error: "No library found for twitter4j.http" Any thoughts on what I am doing wrong and why this isn't working? Thanks, Aaron I have attached the twitter4j.jar file to this instructable page. I'm not sure if you could just use that file or if there's more installation that needs to happen, but it's here for archival purposes now. On my Windows 7 box I've installed the twitter4j jar file here: C:\Program Files\processing-1.5.1\modes\java\libraries\twitter4j\library\twitter4j (executable jar file) I think I may have had to exit and reopen Processing, or restart the machine, I don't recall. Anyway, with the executable twitter4j file in that directory, it finally showed up in the Processing IDE under Menu:Sketch>Import Library>twitter4j. But you're saying that's not working for you, right? Well, I did a little surfing and found that I'm a version or so out of date. The libraries can now go into the sketchbook directory, as you seem to know already. This what you mean by "the default library location" yes? Over at I found this information: "Processing now allows for a “libraries” folder inside your Processing sketchbook, which is a great deal more convenient than having your 3rd party libraries installed in the Processing application folder." They're saying you can now install a library like twitter4j here (you add the directories into your \My Documents\Processing\ folder): C:\My Documents\Processing\libraries\twitter4j\library\twitter4j (executable jar file) This is mentioned again on the Processing site here: " libraries must be ... placed within the "libraries" folder of your Processing sketchbook." (To find the Processing sketchbook location on your computer, open the Preferences window from the Processing application and look for the "Sketchbook location" item at the top.) And in the forums here: You've tried this both ways and it's not working? I don't know what's up. I'm sorry. Unless someone else contributes here, I'd say ask the Processing forums. They've got a forum just for Contributed Libraries here: Sorry I don't have the answer for you. It *should* work. ;-) Good luck! It needs to be in Processing's libraries folder. On Windows, that directory looks like this: C:\Program Files\processing-1.5.1\modes\java\libraries Probably for linux the last few directories are going to be the same: ... processing-1.5.1\modes\java\libraries Within Processing, you'd import the library by starting with the "Sketch" menu: Processing > Sketch > Import Library... > twitter4j And that should do it. Just add twitter4j-core-2.2.3.jar to your application classpath. If you are familiar with Java language, looking into the JavaDoc should be the shortest way for you to get started. twitter4j.Twitter interface is the one you may want to look at first. ...and then, maybe I'll write an instructable about how to install twitter4j on ubuntu to make it simple :). Good Luck!
http://www.instructables.com/id/Simple-Tweet-Arduino-Processing-Twitter/CDZ17TAH0OJ2LLL
CC-MAIN-2015-22
en
refinedweb
#369 – Binding a Label’s Content to the Current Date and Time August 22, 2011 Leave a comment You can use data binding to assign the current date and time to a Label control. You start by creating an instance of a DateTime object using the ObjectDataProvider tag. <Window.Resources> <ObjectDataProvider x: </Window.Resources> Note that we’re using a sys: prefix, which requires the following namespace declaration. xmlns:sys="clr-namespace:System;assembly=mscorlib" We can now use data binding to bind to the DateTime.Now property. <Label Content="{Binding Source={StaticResource today}, Path=Now}" ContentStringFormat="Today is {0:D}" HorizontalContentAlignment="Center" Padding="10"/> <Label Content="{Binding Source={StaticResource today}, Path=Now}" ContentStringFormat="The time is {0:t}" HorizontalContentAlignment="Center" Padding="10"/> The result will look like this: This solution is not very good, since the binding happens just once, when the application starts. Neither label will update when the date or time changes. A better solution would be to bind to a property in a class that changes periodically, using the INotifyPropertyChanged interface.
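The post ends by pointing at that better solution without showing it; a minimal sketch follows (the class name and one-second interval are my own choices, not from the post):

// A bindable clock: set DataContext = new Clock() and use Content="{Binding Now}".
using System;
using System.ComponentModel;
using System.Windows.Threading;

public class Clock : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;
    private DateTime now = DateTime.Now;

    public DateTime Now
    {
        get { return now; }
        private set
        {
            now = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Now"));
        }
    }

    public Clock()
    {
        var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
        timer.Tick += (s, e) => { Now = DateTime.Now; };
        timer.Start();
    }
}

With this in place, both labels update every second instead of binding once at startup.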
http://wpf.2000things.com/2011/08/22/
CC-MAIN-2015-22
en
refinedweb
we were doing this program in 'C' language and it compiled and runs perfectly, without any logical errors. this is the 'C' Code: Code : #include <stdio.h> int main(void) { int age; printf("\nEnter your age: "); scanf("%i", &age); switch (age >= 18) { case 1: printf("You Can Vote"); break; case 0: printf("You CANNOT Vote!"); break; } return 0; } please try to run it in a "C" compiler, you will see the logic .. but for good sake I will state it , the logic is : if any number that is entered is less than 18 it will print ("You CANNOT vote") but if the entered value is GREATER than or equal to 18 then it will print ("You can Vote") --every part of this program (in 'C'), from compiling through logic, doesn't have any errors; but when i convert this program into java ..here it is this is the JAVA code: Code : import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class Exercise { private static BufferedReader br = new BufferedReader(new InputStreamReader( System.in)); public static void main(String[] args) throws IOException { int age; System.out.print("Enter your age: "); age = Integer.parseInt(br.readLine()); switch (age >= 18) { case 1: System.out.println("You Can Vote!"); break; case 0: System.out.println("You Cannot Vote!"); break; } } } you can notice that in 'C' language, even if the entered value is an integer it can return a value of a boolean, where Case 1: is equal to TRUE if the entered value is greater than or equal to 18, and Case 0: is equal to FALSE if the entered value is less than 18, but in java, I have noticed that it CANNOT return a boolean value from an integer of a switch.. why is it happening like that?... it's perfectly running in 'C' but why can't it in java?.. can anyone help me figure this out.. i really want to convert this to java... same logic... please.. need help badly...
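For the record, the direct answer: Java's switch only accepts int-compatible operands (plus enums, and later String), never boolean, so the result of age >= 18 cannot be switched on. The idiomatic conversion keeps the same logic with if/else; a sketch:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Exercise {
    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.print("Enter your age: ");
        int age = Integer.parseInt(br.readLine());
        // same logic as the C switch (age >= 18), expressed with if/else
        if (age >= 18) {
            System.out.println("You Can Vote!");
        } else {
            System.out.println("You Cannot Vote!");
        }
    }
}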
http://www.javaprogrammingforums.com/%20loops-control-statements/1058-convert-c-java-switch-printingthethread.html
CC-MAIN-2015-22
en
refinedweb
2008/12/16 Luca Niccoli <[email protected]>: > I can't really see what I'm doing wrong... Maybe I have a clue: ++file_filter(const struct dirent *dir) ++{ ++ return (DT_REG == (DT_REG & dir->d_type)) || ++ (DT_LNK == (DT_LNK & dir->d_type)) ; ++} But I use XFS, which seems to have some problems with d_type [1] I'm not really sure this is the source of the problem, but I thought it was worth giving a try... Cheers, Luca [1]
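For context on the d_type pitfall raised here, a common defensive pattern (my suggestion, not part of the original message) is to let DT_UNKNOWN entries through the scandir(3) filter, since the filter only receives the dirent and cannot stat, and to classify those entries with lstat(2) afterwards:

#include <dirent.h>

static int
file_filter(const struct dirent *dir)
{
    /* XFS and others may leave d_type unset; keep the entry, decide later */
    if (dir->d_type == DT_UNKNOWN)
        return 1;
    return dir->d_type == DT_REG || dir->d_type == DT_LNK;
}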
https://lists.debian.org/debian-devel/2008/12/msg00688.html
CC-MAIN-2015-22
en
refinedweb
Used to parse an entire XML file. More... #include <utilities/xmlutils.h> Used to parse an entire XML file. When particular XML components are encountered, this will be signalled by calling corresponding routines from the XMLParserCallback that is passed to the XMLParser constructor. To parse an entire XML file, simply call static routine parse_stream(), which does not require you to create an XMLParser yourself. If you desire more fine-grained control over the parsing process, you may create an XMLParser yourself and parse the file manually in small pieces. To do this, routine parse_chunk() should be called repeatedly with consecutive pieces of the XML file. Once the entire XML file has been sent through parse_chunk(), routine finish() should be called once to signal that processing is complete. Creates a new XML parser. Destroys this XML parser. Signals that there are no more XML chunks to parse. Parses the given chunk of XML. Parses an entire XML file. The given stream will be read from until end-of-file is reached.
http://regina.sourceforge.net/engine-docs/classregina_1_1xml_1_1XMLParser.html
CC-MAIN-2015-22
en
refinedweb
11 October 2011 09:15 [Source: ICIS news] By Fanny Zhang GUANGZHOU (ICIS)--The tax liability of PetroChina on resources will likely jump sixfold to more than yuan (CNY) 30bn ($4.72bn) each year, said a company source. The figure is roughly a fifth of the company’s net profit in 2010 at CNY140bn. China’s price-based resource tax is on test implementation in 12 resource-rich provinces – Chongqing, Sichuan, Guizhou, Yunnan, Tibet, Shaanxi, Gansu, Qinghai, Ningxia, Xinjiang, Inner Mongolia and Guangxi – in the northern and western parts of the country. Other provinces still apply a volume-based resource tax of yuan (CNY) 14-30/tonne on crude and CNY7-15/cubic metre on natural gas. The change in the tax policy was implemented to better reflect the values of those resources, according to the government. It will be a direct hit to PetroChina’s earnings, as the company will not be able to adjust prices of fuel products to ease its tax burden, said Wang Qiang, an energy analyst at Shanghai-based brokerage China Merchants Securities (CMS). As fuel prices are regulated by the Chinese government and linked to international crude prices, domestic refiners that also do oil exploration in the country “have no way to pass their cost increase to fuel consumers”, Wang said. But the CMS analyst said that the “tax increase can be digested by PetroChina, considering its profitability”. “The $40/bbl point is set too low and even hard to cover today’s exploration cost,” said Wang of CMS. PetroChina paid CNY51bn in windfall profit tax last year, market sources said. ($1 = CNY6
http://www.icis.com/Articles/2011/10/11/9498899/china-expands-new-resource-tax-base-petrochina-takes-big-hit.html
CC-MAIN-2015-22
en
refinedweb
Failure loading data from Local FileDigitalArchitectCanada Jan 5, 2011 5:14 AM Hi All, So I'm assuming that this is just a sandbox violation issue, that alchemy just plain blocks local file system access by anything that isn't an AS3 object capable of doing so (such as the file class.) I'd just like to get a solid confirmation from an employee or someone else who can give me a confirmation on this. Basically here's what I'm doing in C++ //Get the file class flash_filesystem_namespace = AS3_String("flash.filesystem"); File_class = AS3_NSGetS(flash_filesystem_namespace, "File"); AS3_Val fileObject = AS3_New(File_class, emptyParams); // //Get the bytearray class flash_utils_namespace = AS3_String("flash.utils"); ByteArray_class = AS3_NSGetS(flash_utils_namespace, "ByteArray"); AS3_Val byteArray = AS3_New(ByteArray_class, emptyParams); // AS3_SetS(fileObject, "nativePath", AS3_String(fileOne.c_str())); /// ------ fileOne is of type string and I'm passing it in as an arg AS3_Val doesFileRefExist = AS3_GetS(fileObject, "exists"); if(doesFileRefExist == AS3_False()){ AS3_Call(actionscriptCallbackFunction, parentClass, AS3_Array("StrType", "File Does Not Exist")); } if(doesFileRefExist == AS3_True()){ fileObject = AS3_CallS("resolvePath", parentClass, AS3_Array("StrType", fileOne.c_str())); ---This works } So that works if I import and use a flash.filesystem.File object to check and see if the file exists locally. However this doesn't work: bool doesFileExist(string* fName) { bool exists = false; string tmpfName = *fName; fstream fileToCheck; fileToCheck.open(tmpfName.c_str(),ios::in); if( fileToCheck.is_open() ) { exists=true; } fileToCheck.close(); return exists; } My function to check and see if the file exists always returns false, static of course the file doesn't exist. I've compiled this project into an executable before so I can verify that it's not my code. It's also important to note that I'm compiling this into an AIR 2.5 project, so I assumed that my compiled C++ code would inherit the same permissions (local file io). Any clarification/confirmation on this would be great and thanks in advance. 1. Re: Failure loading data from Local FileDigitalArchitectCanada Jan 5, 2011 5:31 AM (in response to DigitalArchitectCanada) I found this article: tml Specifically, it mentions the. //======================================================================================== ===================== Can anyone comment on or add to this? 2. Re: Failure loading data from Local FileDigitalArchitectCanada Jan 5, 2011 6:01 AM (in response to DigitalArchitectCanada) Okay so after doing some digging, here it is: Near the top of the page, you can see a description of the supplyFile method. However, I learned through experimenting and reading around other blogs that when you pass in the "path" of the file to alchemy, coupled with a bytearray of the file, the "path" is really just a unique identifier for that newly created byte array inside of alchemy/your c/c++ code. I just want to say that I really hope the alchemy project stays alive, and also that I believe work should be done to permit local file system access directly through c/c++ using ifstream/fopen ect, not through virtualization. I don't understand why the alchemy code doesn't automatically inherit all sandbox restrictions and permissions from the parent application sandbox but like I said, I hope it's something we'll see in the future. I hope this also may be helpful to other people with similar questions. 3. 
Re: Failure loading data from Local File - DigitalArchitectCanada Jan 9, 2011 8:40 AM (in response to DigitalArchitectCanada) Just an update for people who want to know an easier way of doing this; here's some code for passing a bytearray to C++, passing back a pointer to the modified data in C++ to Flash, then saving that modified data from Alchemy memory to the file system. It's very fast and other tutorials I've seen say it's the fastest method:

package {
    import flash.display.MovieClip;
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.filesystem.File;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;
    import flash.utils.ByteArray;
    import cmodule.MyAlchemyLib.CLibInit;
    import cmodule.MyAlchemyLib.gstate;

    public class MyAlchemyLibMain extends MovieClip {
        private var lib:Object;
        private var loader:CLibInit;
        private var file:File;
        private var dataLength:int;
        private var sysMem:ByteArray;

        public function MyAlchemyLibMain() { // constructor must match the class name
            loader = new CLibInit;
            file = File.desktopDirectory;
            file.addEventListener(Event.SELECT, getFile);
            file.browseForOpen("Find file to load.");
        }

        private function getFile(e:Event):void {
            file = e.target as File;
            lib = loader.init();
            file.addEventListener(Event.COMPLETE, fileLoadedLocally);
            file.load();
        }

        private function fileLoadedLocally(e:Event):void {
            trace("Data length: " + file.data.length);
            dataLength = file.data.length;
            var fileData:ByteArray = file.data;
            sysMem = gstate.ds;
            lib.processBytes(fileData, dataLength, "My string data", -1, alchemyCallback, this);
        }

        public function alchemyCallback(pointerRef:int):void {
            trace("Alchemy called me back baby!");
            trace("Bytes are at: " + pointerRef);
            var alchemyBytes:ByteArray = new ByteArray();
            alchemyBytes.writeBytes(sysMem, pointerRef, dataLength);
            trace(alchemyBytes.length);
            var sf:File = File.desktopDirectory;
            sf = sf.resolvePath(file.parent.nativePath);
            sf = sf.resolvePath("Modded File.f");
            trace(sf.nativePath);
            var fs:FileStream = new FileStream();
            fs.open(sf, FileMode.WRITE);
            fs.writeBytes(alchemyBytes, 0, alchemyBytes.length);
            fs.close();
            fs = null;
        }
    }
}

CPP Code:

char * memblock;
char * moddedMemblock;
AS3_Val emptyParams = AS3_Array("");
//Get the file class
AS3_Val flash_filesystem_namespace;
AS3_Val File_class;
//Get the byte array class
AS3_Val flash_utils_namespace;
AS3_Val ByteArray_class;

int main() {
    //define the methods exposed to ActionScript
    //typed as an ActionScript Function instance
    AS3_Val processFileMethod = AS3_Function( NULL, processBytes );
    // construct an object that holds references to the functions
    AS3_Val result = AS3_Object( "processBytes: AS3ValType", processFileMethod );
    //Release file class
    AS3_Release(File_class);
    AS3_Release(flash_filesystem_namespace);
    //Release byte array class
    AS3_Release(ByteArray_class);
    AS3_Release(flash_utils_namespace);
    // Release
    AS3_Release( processFileMethod );
    // notify that we initialized -- THIS DOES NOT RETURN!
    AS3_LibInit( result );
    // should never get here!
    return 0;
}

//============================================================================
// Function    : processBytes
// Description : Alchemy process for processing a single bytearray
//============================================================================
static AS3_Val processBytes(void* self, AS3_Val processArgs)
{
    //Store the reference to the class passed in here from the main args
    AS3_Val parentClass;
    //Get my string value from flash
    string myStringValue;
    //Mode int
    int mode;
    AS3_Val actionscriptCallbackFunction;
    AS3_Val fileByteArray = AS3_Undefined();
    int currentByteLength;
    AS3_ArrayValue(processArgs, "AS3ValType, IntType, StrType, IntType, AS3ValType, AS3ValType",
        &fileByteArray, &currentByteLength, &myStringValue, &mode,
        &actionscriptCallbackFunction, &parentClass);
    // yield...
    flyield();
    memblock = new char[currentByteLength];
    AS3_ByteArray_seek(fileByteArray, 0, SEEK_SET);
    AS3_ByteArray_readBytes(memblock, fileByteArray, currentByteLength);
    unsigned int byteLen;
    byteLen = (unsigned int)currentByteLength;
    //int fMode = AS3_IntValue(mode);
    int fMode = mode;
    /**************************************************************************
    Modify your byte array instance here however you wish. I've deleted the
    part of my code where I do this because this is nearly a direct copy from
    a commercial application I'm working on. However, in my case I saved my
    modified byte array data into the moddedMemblock char array. Then I do a
    callback function to AS3 using AS3_Call and pass AS3_Ptr(moddedMemblock)
    back to flash. This is an integer value that points to the position in
    memory where this data is being stored, inside the alchemy byte array.
    Then back in flash I create a reference to the alchemy byte array and read
    in the length of bytes I expect to receive and save them. Hope this is
    helpful to some people as I spent a good two days researching the
    different ways to interact with the file system using alchemy.
    NOTE you must pass a reference to the class in which your callback
    function exists. This is necessary so that when alchemy comes back to your
    flash app, it knows what namespace to use to access the function you've
    asked it to call. If you don't do this, you'll get no call back.
    **************************************************************************/
    AS3_Call(actionscriptCallbackFunction, parentClass, AS3_Array("AS3ValType", AS3_Ptr(moddedMemblock)));
    //IMPORTANT - You must delete your pointers to clear up the memory you've
    //used, otherwise this is how you get memory leaks.
    delete[] memblock;        // new[] must be paired with delete[]
    delete[] moddedMemblock;
    //Also important, need to release values received from flash otherwise this will result in memory leak as well
    AS3_Release(fileByteArray);
    AS3_Release(currentByteLength);
    AS3_Release(myStringValue);
    AS3_Release(mode);
    AS3_Release(actionscriptCallbackFunction);
    AS3_Release(parentClass);
    return AS3_Null();
}
https://forums.adobe.com/thread/773517
CC-MAIN-2015-22
en
refinedweb
Interacting with COM Components Using C# In this article, we will analyze COM components and their application in C#. These components became popular among developers after Microsoft released Active Server Pages. However, the whole scenario changed when Microsoft released its Windows 2000 operating system and subsequent COM+ technology. In this article, we will examine the fundamentals of this exiting technology followed by its application in C#. Introduction to COM COM is a set of specification and services that facilitates a developer to create reusable objects and components for running various applications. COM components are components that are developed by using these specifications. These components can be implemented either as an executable (EXE) or as a Dynamic Link Library (DLL). They can be developed using Visual Basic, Visual C++, or a number of other programming languages. They can either act as a server to a Visual Basic or C++ client application, or can be applied on the Web by using Active Server Pages (ASP). In Visual Basic 6.0 (See Figure 1), you can refer a COM component in your application by adding a reference from the Project | References Menu. After that, you can call the respective methods of the added components in your client application. These components are introduced to reduce coding and to manage an application effectively. Figure 1—Project | References Menu In an Active Server Page application, you will write code similar to what's shown below: Set conn= Server.CreateObject("ADODB.Connection") In the above code, ADODB is one of the components in which Connection is an object. Conn is a user-defined object, which should be referred while accessing the methods of the Connection object as shown below: Conn.Open "Internet" You cannot refer a built-in component directly in applications. As already examined, COM components can be either a DLL or Executable. Let's now discuss what these mean. DLLs are referred to as in-process objects because they run within the same address space as the client program that called the component. The main advantage in using these types of components is that a separate process is not created each time the component is accessed. Executables are considered to be out-of-process programs. They are executed in their own address space. Moreover, they consume more processing time when the program is called by an application. In the case of DLLs, failure of an in-process component will ultimately bring down the application, whereas this may not happen in the case of EXEs because they are executed in their own address space. With COM technology, you can: - Create reusable components for your applications. - Develop the component using one language and implement it in another language. For instance, you may develop a component using Visual C++ and implement it in a Visual Basic application. - Make changes to the component without affecting the application. Windows 2000 introduced a new technology called COM+, which is an extension to the existing COM. There are advanced technologies such as DCOM, which means Distributed COM. With DCOM, you can access components of other systems in a networked environment. However, a complete discussion to these technologies is beyond the scope of this article. Creating a COM Component In this session, we will demonstrate how to create a COM component by using Visual Basic 6.0, with the help of a series of steps. 
To create a COM component by using Visual Basic 6.0, you have to use either ActiveX DLL or ActiveX EXE projects. For our discussion, ActiveX DLL is used. The steps required for creating an ActiveX DLL are outlined below. - Fire up Visual Basic and select the ActiveX DLL Icon from the New Project dialog box, as shown in Figure 2. - Change the class name to something meaningful like "Our_csharp". - Now supply the code shown in Listing 1. - To create a function, you can use the Tools | Add Procedure menu, as shown in Figure 3. - Save the project by supplying relevant class and project names of your choice. - Change the Project name and Description by selecting the Project | Properties menu (see Figure 4). - Set Binary Compatibility from the Components tab of the above menu because this action will not create a separate GUID upon each compilation. - Finally, create your DLL by selecting File | Make Csharpcorner.dll. This is the name by which you save your VB project. Figure 2—Visual Basic 6.0 start up Listing 1 Public Function Show() MsgBox ("bye") End Function Figure 3—Adding Procedure Figure 4—Properties Dialog That's all. Your DLL is now complete. The next session will show you how to apply this DLL in your C# program. Integrating the COM Component in C# In this session, you will learn how to apply the DLL we discussed in the last session in a C# application. As you know, with Visual Basic 6.0 it's possible to develop a COM server as shown in the previous session and implement it in a Visual Basic or Visual C++ client program. You may wonder about the idea of calling this DLL in a C# application. When we compile a C# program, an Intermediate Language (MSIL) is generated and it's called Managed Code. A Visual Basic 6.0 DLL is Unmanaged, meaning it's not generated by the Common Language Runtime. But we can make this Visual Basic DLL interoperate with C# by converting it into a .NET-compatible version. It's not possible for a C# program to communicate with a Visual Basic DLL without converting it into its .NET equivalent. To do so, the .NET SDK provides a tool called tlbimp. It stands for Type Library Import and converts your DLL to its equivalent .NET Assembly. For the above project, supply the following command after properly installing the SDK. tlbimp Csharpcorner.dll /out:Csharp.dll A new .NET-compatible file called Csharp.dll would be placed in the appropriate directory. The whole process is still not complete. You have to call this DLL in a C# program, as shown in Listing 2: Listing 2 using Csharp; using System; public class Csharpapply { public static void Main() { Our_csharp c = new Our_csharp(); c.Show(); } } Notice a function named Show() in the above session. It's called in the above listing. Notice the following declaration: using Csharp; Csharp is our .NET-compatible DLL file. Upon conversion, it will be changed to a .NET Assembly. In C#, all assemblies are referenced with the keyword "using". Upon execution of the above C# application, you can see the message box, as shown in Figure 5. Figure 5—Sample Message Box If you are using Visual Studio .NET, you can refer to the DLL from Project | Add Reference | COM Tab. Select your DLL by using the browse button. This will add a reference to your project. After adding a reference, copy the above code and execute.
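To round out the walkthrough, the build-and-run steps would look roughly like this from a Visual Studio command prompt (file names follow the article; the csc switch is standard, but your paths will differ):

tlbimp Csharpcorner.dll /out:Csharp.dll
csc /reference:Csharp.dll Csharpapply.cs
Csharpapply.exe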
http://www.developer.com/net/csharp/article.php/1501271/Interacting-with-COM-Components-Using-C.htm
CC-MAIN-2015-22
en
refinedweb
Easylogging++ entry namespace. More... Easylogging++ entry namespace. Resolving function for format specifier. Definition at line 1635 of file easylogging++.h. Definition at line 2201 of file easylogging++.h. Definition at line 801 of file easylogging++.h. Represents enumeration of ConfigurationType used to configure or access certain aspect of logging. Definition at line 628 of file easylogging++.h. Represents enumeration for severity level used to determine level of logging. With Easylogging++, developers may disable or enable any level regardless of what the severity is. Or they can choose to log using hierarchical logging flag Definition at line 568 of file easylogging++.h. Flags used while writing logs. This flags are set by user. Definition at line 689 of file easylogging++.h. Definition at line 209 of file easylogging++.cc. Definition at line 158 of file easylogging++.cc.
http://docs.ros.org/en/kinetic/api/librealsense2/html/namespaceel.html
CC-MAIN-2021-49
en
refinedweb
The upstream interface of a transformation module. More... #include <transform-base.hpp> The upstream interface of a transformation module. A module can construct subsequent transformation chain through this interface. Definition at line 156 of file transform-base.hpp. Definition at line 61 of file transform-base.cpp. connect to next transformation module Definition at line 67 of file transform-base.cpp. Referenced by ndn::security::transform::Source::operator>>(). Definition at line 173 of file transform-base.hpp. Definition at line 179 of file transform-base.hpp. Referenced by appendChain(), ndn::security::transform::StepSource::end(), ndn::security::transform::Transform::flushOutputBuffer(), getNext(), and ndn::security::transform::StepSource::write().
https://ndnsim.net/2.6/doxygen/classndn_1_1security_1_1transform_1_1Upstream.html
CC-MAIN-2021-49
en
refinedweb
Containerization using Docker Docker was developed by dotCloud, Inc., which was later renamed Docker, Inc. It is written in Go and is used throughout the software development cycle. Containerization Containerization is OS-based virtualization which creates multiple virtual units in the userspace, known as Containers. Containers share the same host kernel but are isolated from each other through private namespaces and resource control mechanisms at the OS level. Container-based Virtualization provides a different level of abstraction in terms of virtualization and isolation when compared with hypervisors. Hypervisors use a lot of hardware, which results in overhead in terms of virtualizing hardware and virtual device drivers. A full operating system (e.g. Linux, Windows) runs on top of this virtualized hardware in each virtual machine instance. In contrast, containers implement isolation of processes at the operating system level, thus avoiding such overhead. These containers run on top of the same shared operating system kernel of the underlying host machine and one or more processes can be run within each container. In containers you don't have to pre-allocate any RAM; it is allocated dynamically during the creation of containers, while in VMs you need to first pre-allocate the memory and then create the virtual machine. Containerization has better resource utilization compared to VMs and a short boot-up process. It is the next evolution in virtualization. Containers are able to run virtually anywhere, greatly easing development and deployment: on Linux, Windows, and Mac operating systems; on virtual machines or bare metal; on a developer's machine or in data centers on-premises; and of course, in the public cloud. Containers virtualize CPU, memory, storage, and network resources at the OS level, providing developers with a sandboxed view of the OS logically isolated from other applications. Docker is the most popular open-source container format available and is supported on Google Cloud Platform and by Google Kubernetes Engine. Docker Architecture Docker architecture consists of the Docker client, the Docker Daemon running on the Docker Host, and the Docker Hub repository. Docker has client-server architecture in which the client communicates with the Docker Daemon running on the Docker Host using a combination of REST APIs, Socket IO, and TCP. If we have to build the Docker image, then we use the client to execute the build command to Docker Daemon; then Docker Daemon builds an image based on given inputs and saves it into the Docker registry. If you don't want to create an image then just execute the pull command from the client and then Docker Daemon will pull the image from the Docker Hub, and finally if we want to run the image then execute the run command from the client which will create the container. Components of Docker The main components of Docker include – Docker clients and servers, Docker images, Dockerfile, Docker Registries, and Docker containers. These components are explained in detail in the below section: - Docker Clients and Servers– Docker has a client-server architecture. The Docker Daemon/Server consists of all containers. The Docker Daemon/Server receives the request from the Docker client through CLI or REST APIs and thus processes the request accordingly. Docker client and Daemon can be present on the same host or different hosts. - Docker Images– Docker images are used to build docker containers by using a read-only template. The foundation of every image is a base image, for eg. base images such as – ubuntu14.04 LTS, Fedora 20.
Base images can also be created from scratch and then required applications can be added to the base image by modifying it; this process of creating a new image is called "committing the change". - Docker File– Dockerfile is a text file that contains a series of instructions on how to build your Docker image. This image contains all the project code and its dependencies. The same Docker image can be used to spin up 'n' number of containers, each with modifications to the underlying image. The final image can be uploaded to Docker Hub and shared among various collaborators for testing and deployment. The set of commands that you need to use in your Docker File are FROM, CMD, ENTRYPOINT, VOLUME, ENV, and many more. - Docker Registries– Docker Registry is a storage component for Docker images. We can store the images in either public/private repositories so that multiple users can collaborate in building the application. Docker Hub is Docker's own cloud repository. Docker Hub is called a public registry where everyone can pull available images and push their own images without creating an image from scratch. - Docker Containers– Docker Containers are runtime instances of Docker images. Containers contain the whole kit required for an application, so the application can be run in an isolated way. For eg.- Suppose there is an image of Ubuntu OS with NGINX SERVER; when this image is run with the docker run command, then a container will be created and NGINX SERVER will be running on Ubuntu OS. Docker Compose Docker Compose is a tool with which we can create a multi-container application. It makes it easier to configure and run applications made up of multiple containers. For example, suppose you had an application that required WordPress and MySQL; you could create one file which would start both the containers as a service without the need to start each one separately. We define a multi-container application in a YAML file. With the docker-compose up command, we can start the application in the foreground. Docker-compose will look for the docker-compose.yaml file in the current folder to start the application. By adding the -d option to the docker-compose up command, we can start the application in the background. Creating a docker-compose.yaml file for WordPress application:

# cat docker-compose.yaml
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:

In this docker-compose.yaml file, we have the following ports section for the WordPress container, which means that we are going to map the host's 8000 port with the container's 80 port. So the host can access the application with its IP and port no. Docker Networks When we create and run a container, Docker by itself assigns an IP address to it, by default. Most of the time, it is required to create and deploy Docker networks as per our needs. So, Docker lets us design the network as per our requirements. There are three types of Docker networks- default networks, user-defined networks, and overlay networks.
To get a list of all the default networks that Docker creates, we run the command shown below:

$ docker network ls

There are three types of networks in Docker:
- Bridged network: When a new Docker container is created without the --network argument, Docker by default connects the container to the bridge network. In bridged networks, all the containers on a single host can connect to each other through their IP addresses. A bridge network is created when the span of Docker hosts is one, i.e., when all containers run on a single host. We need an overlay network to create a network that spans more than one Docker host.
- Host network: When a new Docker container is created with the --network=host argument, it pushes the container into the host network stack where the Docker daemon is running. All interfaces of the host are accessible from a container which is assigned to the host network.
- None network: When a new Docker container is created with the --network=none argument, it puts the Docker container in its own network stack. In this none network, no IP addresses are assigned to the container, because of which containers cannot communicate with each other.

We can assign any one of the networks to a Docker container. The --network option of the docker run command is used to assign a specific network to the container:

$ docker run --network="network name"

To get detailed information about a particular network we use the command:

$ docker network inspect "network name"

Advantages of Docker
Docker has become popular nowadays because of the benefits provided by Docker containers. The main advantages of Docker are:
- Speed – Docker containers are very fast compared to virtual machines. The time required to build a container is very short because containers are tiny and lightweight. Development, testing, and deployment can be done faster as containers are small. Containers can be pushed for testing once they have been built and then moved on to the production environment.
- Portability – The applications that are built inside Docker containers are extremely portable. These portable applications can easily be moved anywhere as a single element, and their performance also remains the same.
- Scalability – Docker can be deployed on several physical servers, data servers, and cloud platforms. It can also be run on any Linux machine. Containers can easily be moved from a cloud environment to localhost and from there back to the cloud again at a fast pace.
- Density – Docker uses the available resources more efficiently because it does not use a hypervisor. This is the reason that more containers can be run on a single host as compared to virtual machines. Docker containers have higher performance because of their high density and no overhead from wasted resources.
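To make the Dockerfile instructions mentioned earlier (FROM, ENV, RUN, EXPOSE, CMD) concrete, here is a minimal illustrative sketch; the base image, installed package, and port are hypothetical choices for the example rather than anything prescribed above:

# Example Dockerfile (illustrative base image and port)
FROM ubuntu:14.04
# Set an environment variable the application can read
ENV APP_ENV=production
# Install the NGINX server used in the container example above
RUN apt-get update && apt-get install -y nginx
# Document the port the server listens on
EXPOSE 80
# Default command executed when a container starts from this image
CMD ["nginx", "-g", "daemon off;"]

Building and running it follows the client-to-daemon flow described in the architecture section: docker build -t my-image . asks the Docker Daemon to build the image, and docker run -p 8000:80 my-image creates a container from it.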
https://www.geeksforgeeks.org/containerization-using-docker/
CC-MAIN-2021-49
en
refinedweb
4 ContentProvider. Being able to persist structured data in a SQLite database is an excellent feature included in the Android SDK. However, this data is only accessible by the app that created it. What if an app would like to share data with another app? ContentProvider is the tool that allows apps to share persisted files and data with one another.

Content providers sit on top of the app's data source and provide a means to manage this data. This can be a helpful organizational tool in an app, even if the app is not intended to share its data externally with other apps. Content providers provide a standardized interface that can connect data in one process with code running in another process. They encapsulate the data and provide mechanisms for defining data security at a granular level. A content provider can be used to aggregate multiple data sources and abstract away the details. Although it can be a good idea to use a content provider to better organize and manage the data in an app, this is not a requirement if the app is not going to share its data.

One of the simplest use cases of a content provider is to gain access to the Contacts of a device. Another common built-in provider in the Android platform is the user dictionary. The user dictionary holds spellings of non-standard words specific to the user.

Understanding content provider basics
In order to get data from a content provider, you use a mechanism called ContentResolver. The content resolver provides methods to query(), update(), insert() and delete() data from a content provider. A request is made to a content resolver by passing a URI to one of the SQL-like methods. These methods return a Cursor.

Note: Cursor is defined and discussed in more detail in the previous SQLite chapter. A cursor is essentially a pointer to a row in a table of structured data that was returned by the query.

To interact with a content provider via a content resolver there are two basic steps:
- Request permission from the provider by adding a permission in the manifest.
- Construct a query with an appropriate content URI and send the query to the provider via a content resolver object.

Understanding Content URIs
To find the data within the provider, use a content URI. The content URI is essentially the address of where to find the data within the provider. A content URI always starts with content:// and then includes the authority of a provider, which is the provider's symbolic name. It can also include the names of tables or other specific information relating to the query. An example content URI for the user dictionary looks like: content://user_dictionary/words

Requesting permission to use a content provider
The application will need read access permission for the specific provider. Utilize the <uses-permission> element and the exact permission that was defined by the provider. The provider's application can specify which permissions requesting applications must have in order to access the data. Users can see the requested permissions when they install the application.
The code to request read permission of the user dictionary is:

<uses-permission android:name="android.permission.READ_USER_DICTIONARY" />

Permission types
The types of permission that can be granted by a content provider include: a single provider-level permission covering both read and write access; separate provider-level read and write permissions; path-level permissions that apply to specific content URIs within the provider; and temporary permissions that grant another app short-lived access to a URI.

Constructing the query
The statement to perform a query on the user dictionary database looks like this:

cursor = contentResolver.query(
    // 1: the content URI of the words table
    UserDictionary.Words.CONTENT_URI,
    // 2: the columns to return for each row
    projection,
    // 3: the selection criteria
    selectionClause,
    // 4: the selection criteria arguments
    selectionArgs,
    // 5: the sort order for the returned rows
    sortOrder
)

Inserting, updating and deleting data
The insert, update and delete operations look very similar to the query operation. In each case, a function is called on the content resolver object and the appropriate parameters passed in.

Inserting data
Below is a statement to insert a record:

newUri = contentResolver.insert(
    UserDictionary.Words.CONTENT_URI,
    newValues
)

Updating data
To update data, call update on the content resolver object and pass in content values that include key-values for the columns being updated in the corresponding row. Arguments should also be included for selection criteria and arguments to identify the correct records to update. When populating the content values, you only have to include columns that you're updating, and including column keys with a null value will clear out the data for that column. One important consideration when updating data is to sanitize user input. The developer guide to protecting against malicious data has been included in the Where to go from here section below. An integer is returned from update that contains the count of how many rows were updated.

Deleting data
Deleting data is very similar to the other operations. Call delete on the content resolver object, passing in arguments for the selection clause and selection arguments to identify the group of records to delete. A value is returned with an integer count of how many rows were deleted.

Adding a contract class
A contract class is a place to define constants used to assemble the content URIs. This can include constants to contain the authority, table names and column names, along with assembled URIs. This class must be created and shared by the developer creating the provider. It can make it easier for other developers to understand and utilize the content provider in their application.

MIME types
Content providers can return standard MIME types like those used by media, or custom MIME type strings, or both. MIME types take the format type/subtype, an example being text/html. Custom MIME types, or vendor-specific MIME types, are more complicated and come in the form vnd.android.cursor.dir for multiple rows and vnd.android.cursor.item for single rows.

Getting Started
Locate this chapter's folder in the provided materials, named content-provider, and open up the projects folder. Next, open the ContentProviderToDo app under the starter folder. Allow the project to sync, download dependencies, and set up the workplace environment. For now, ignore the errors in the code.

Adding the provider package
It is a good idea to keep the provider classes in their own package. You will also include the contract class in this package. Right click on the com.raywenderlich.contentprovidertodo.Controller folder and select New > Package. A dialog pops up prompting for the name of the new package; type in provider and click OK.

Adding the contract class
Now add the ToDoContract.kt class. Right click the new provider package and select New > Kotlin File/Class.
In the resulting dialog enter the name ToDoContract, for the "Kind" dropdown select "File" and press OK. A Kotlin file will be created in the provider directory. Insert the following declarations into the Contract.kt file beneath the package declaration:

// The ToDoContract class
object ToDoContract {
    // 1
    // The URI Code for All items
    const val ALL_ITEMS = -2
    // 2
    // The URI suffix for counting records
    const val COUNT = "count"
    // 3
    // The URI Authority
    const val AUTHORITY = "com.raywenderlich.contentprovidertodo.provider"
    // 4
    // Only one public table.
    const val CONTENT_PATH = "todoitems"
    // 5
    // Content URI for this table. Returns all items.
    val CONTENT_URI = Uri.parse("content://$AUTHORITY/$CONTENT_PATH")
    // 6
    // URI to get the number of entries.
    val ROW_COUNT_URI = Uri.parse("content://$AUTHORITY/$CONTENT_PATH/$COUNT")
    // 7
    // Single record MIME type
    const val SINGLE_RECORD_MIME_TYPE = "vnd.android.cursor.item/vnd.com.raywenderlich.contentprovidertodo.provider.todoitems"
    // 8
    // Multiple record MIME type (the "dir" variant is used for multiple rows)
    const val MULTIPLE_RECORDS_MIME_TYPE = "vnd.android.cursor.dir/vnd.com.raywenderlich.contentprovidertodo.provider.todoitems"
    // 9
    // Database name
    const val DATABASE_NAME: String = "todoitems.db"
    // 10
    // Table Constants
    object ToDoTable {
        // The table name
        const val TABLE_NAME: String = "todoitems"
        // The constants for the table columns
        object Columns {
            // The unique ID column
            const val KEY_TODO_ID: String = "todoid"
            // The ToDo's name
            const val KEY_TODO_NAME: String = "todoname"
            // Whether the ToDo is completed
            const val KEY_TODO_IS_COMPLETED: String = "iscompleted"
        }
    }
}

Adding the content provider
Android Studio has a neat feature to automatically add content provider classes. A content provider class extends ContentProvider from the Android SDK and implements all the required methods. By using the automated method of adding the content provider class, many of these method stubs will be provided for you. It will be your job to fill in the functions in the content provider one by one. Ready to get started? :]

<provider
    android:name=".Controller.provider.ToDoContentProvider"
    android:authorities="com.raywenderlich.contentprovidertodo.provider">
</provider>

Implementing the methods in the content provider
Note: As you add code to the method stubs in the provider, be sure to replace the TODO comments with the new code. Also, you may need to press alt + enter and import libraries as you go along. If given the choice between constants defined in the ToDoDbSchema or the new ToDoContract, choose ToDoContract. The goal is to have the content provider depending on the contract so that it serves as an abstract layer above the database handler. This design allows for the data source to be swapped out as long as it meets the same specifications as the previous data source, and no other code that is dependent on the database will be affected, in this app or other apps that utilize the contract.
// 1
// This is the content provider that will
// provide access to the database
private lateinit var db : ToDoDatabaseHandler
private lateinit var sUriMatcher : UriMatcher

// 2
// Add the URIs that can be matched on
// this content provider
private fun initializeUriMatching() {
    sUriMatcher = UriMatcher(UriMatcher.NO_MATCH)
    sUriMatcher.addURI(AUTHORITY, CONTENT_PATH, URI_ALL_ITEMS_CODE)
    sUriMatcher.addURI(AUTHORITY, CONTENT_PATH + "/#", URI_ONE_ITEM_CODE)
    sUriMatcher.addURI(AUTHORITY, CONTENT_PATH + "/" + COUNT, URI_COUNT_CODE)
}

// 3
// The URI codes
private val URI_ALL_ITEMS_CODE = 10
private val URI_ONE_ITEM_CODE = 20
private val URI_COUNT_CODE = 30

Implementing onCreate
Insert the code below into the onCreate method stub, replacing the TODO marker:

db = ToDoDatabaseHandler(context)
initializeUriMatching()
return true

Implementing getType
Next, implement the getType function by replacing the entire stub with the code below:

override fun getType(uri: Uri) : String? = when(sUriMatcher.match(uri)) {
    URI_ALL_ITEMS_CODE -> MULTIPLE_RECORDS_MIME_TYPE
    URI_ONE_ITEM_CODE -> SINGLE_RECORD_MIME_TYPE
    else -> null
}

Implementing query
The query function queries the database and returns the results. This function has been designed so that it can perform multiple types of queries, depending on the URI. Insert the code below into the body of the function:

var cursor : Cursor? = null
when(sUriMatcher.match(uri)) {
    URI_ALL_ITEMS_CODE -> { cursor = db.query(ALL_ITEMS) }
    URI_ONE_ITEM_CODE -> { cursor = db.query(uri.lastPathSegment.toInt()) }
    URI_COUNT_CODE -> { cursor = db.count() }
    UriMatcher.NO_MATCH -> { /* error handling goes here */ }
    else -> { /* unexpected problem */ }
}
return cursor

Modifying the adapter
To test the content provider you just created, open ToDoAdapter.kt and add the following code inside the class before the onCreateViewHolder method:

private val queryUri = CONTENT_URI.toString() // base uri
private val queryCountUri = ROW_COUNT_URI.toString()
private val projection = arrayOf(CONTENT_PATH) // table
private var selectionClause: String? = null
private var selectionArgs: Array<String>? = null
private val sortOrder = "ASC"

// Get the number of records from the Content Resolver
val cursor = context.contentResolver.query(Uri.parse(queryCountUri), projection,
    selectionClause, selectionArgs, sortOrder)
// Return the count of records
if (cursor != null) {
    if (cursor.moveToFirst()) {
        return cursor.getInt(0)
    }
}

// 1
val cursor = context.contentResolver.query(Uri.parse("$queryUri"), projection,
    selectionClause, selectionArgs, sortOrder)
// 2
if (cursor != null) {
    if (cursor.moveToPosition(position)) {
        val toDoId = cursor.getLong(cursor.getColumnIndex(KEY_TODO_ID))
        val toDoName = cursor.getString(cursor.getColumnIndex(KEY_TODO_NAME))
        val toDoCompleted = cursor.getInt(cursor.getColumnIndex(KEY_TODO_IS_COMPLETED)) > 0
        val toDo = ToDo(toDoId, toDoName, toDoCompleted)
        holder.bindViews(toDo)
    }
}

Implementing insert
Open ToDoContentProvider.kt and replace the TODO in the body of the insert method with the following code:

val id = db.insert(values!!.getAsString(KEY_TODO_NAME))
return Uri.parse("$CONTENT_URI/$id")

import com.raywenderlich.contentprovidertodo.Controller.provider.ToDoContract.ToDoTable.Columns.KEY_TODO_NAME

// 1
var values = ContentValues()
values.put(KEY_TODO_NAME, toDoName)
// 2
context.contentResolver.insert(CONTENT_URI, values)

Implementing update
To implement the update function, open ToDoContentProvider.kt and copy the code below into the body of the update function:

var toDo = ToDo(values!!.getAsLong(KEY_TODO_ID),
    values!!.getAsString(KEY_TODO_NAME),
    values!!.getAsBoolean(KEY_TODO_IS_COMPLETED))
return db.update(toDo)

// 1
var values = ContentValues()
values.put(KEY_TODO_NAME, view.edtToDoName.text.toString())
values.put(KEY_TODO_ID, toDo.toDoId)
values.put(KEY_TODO_IS_COMPLETED, toDo.isCompleted)
// 2
context.contentResolver.update(Uri.parse(queryUri), values, selectionClause, selectionArgs)
// 3
notifyDataSetChanged()

Implementing delete
Deleting a record is simple. Open ToDoContentProvider.kt and copy the following code into the body of delete, replacing the TODO statement:

return db.delete(parseLong(selectionArgs?.get(0)))

// 1
selectionArgs = arrayOf(id.toString())
// 2
context.contentResolver.delete(Uri.parse(queryUri), selectionClause, selectionArgs)
// 3
notifyDataSetChanged()

Challenge: Creating a client
For an additional, interesting challenge, make a copy of the app you just created that creates a content provider and see if you can remove the database and transform it into a client that utilizes the content provider.

Challenge Solution: Creating a client
It is easy to create a client app that utilizes the provider you just created in the previous steps. You can achieve this by making a copy of the provider app and deleting the database and content provider from it. This proves the content provider is shared with an external app. The steps are as follows:

<uses-permission android:

Key points
- Content providers sit just above the data source of the app, providing an additional level of abstraction from the data repository.
- A content provider allows for the sharing of data between apps.
- A content provider can be a useful way of making a single data provider if the data is housed in multiple repositories for the app.
- It is not necessary to use a content provider if the app is not intended to share data with other apps.
- The content resolver is utilized to run queries on the content provider.
- Using a content provider can allow granular permissions to be set on the data.
- Use best practices such as selection clauses and selection arguments to prevent SQL injection when utilizing a content provider, or any raw query data mechanism.
- Data in a content provider can be accessed via exposed URIs that are matched by the content provider.

Where to go from here?
See Google's documentation on content providers in the Android developer guides.
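As a closing illustration, here is a minimal sketch of how an external client might read from the ToDo provider through a content resolver. It assumes only the authority, path, and column name defined in the contract shown earlier, and the error handling is intentionally simplified:

val todoUri = Uri.parse("content://com.raywenderlich.contentprovidertodo.provider/todoitems")
val cursor = contentResolver.query(todoUri, null, null, null, null)
cursor?.use {
    while (it.moveToNext()) {
        // Read each to-do's name using the column defined in the contract
        val name = it.getString(it.getColumnIndex("todoname"))
    }
}

Because the client only depends on the URI and column names, it works whether or not it has a copy of the provider's database code, which is exactly the point of the challenge above.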
https://www.raywenderlich.com/books/saving-data-on-android/v1.0/chapters/4-contentprovider
CC-MAIN-2021-49
en
refinedweb
The WPF Tree Grid control is a data-oriented control that displays self-relational data in a tree-structure user interface, like a multicolumn treeview. Data can be loaded on demand. Items can be moved between parent nodes using the built-in row drag-and-drop functionality. Its rich feature set includes editing with different column types, selection, node selection with check boxes, sorting, and filtering. Validate cells and display error information based on the following validation types: IDataErrorInfo, INotifyDataErrorInfo, Data Annotations. Or use cell, row, or column validation. Sort data against one or more columns with multiple customization operations in the WPF TreeGrid, or sort by writing custom logic. Filter nodes using an intuitive, built-in, Excel-inspired filtering UI or programmatically with various filter-level options in the WPF TreeGrid. Users can perform row-based selection with extensive support for keyboard navigation. Users can also select rows using intuitive check boxes. Column width can be adjusted (auto fitted) based on the content of a column or column header. Fit all the columns within the viewport of a treegrid in WPF. Freeze columns at left and right positions, similar to Excel. Stacked headers (column header span) allow users to show unbound header rows. They span the stacked header columns across multiple rows and columns. Merge data in adjacent cells dynamically and present that data in a single cell, or write custom logic to merge data. The appearance of a WPF Tree list and its inner elements, such as rows, cells, columns, and headers, can be customized. Drag and drop rows within a control or between controls using an intuitive row drag-and-drop UI. The WPF TreeGrid control with rich UI provides an entirely custom context menu to expose functionality on the user interface. Users can create context menus for different rows such as record rows, header rows, and expander rows. The WPF Tree Grid view performs clipboard operations such as cut, copy, and paste within a control and between other applications such as Notepad or Excel. An easy and flexible way to use all the necessary properties and commands of a WPF tree grid view in an MVVM approach. Localize all the static default strings in the WPF treegrid to any desired language. Allows display of text in the right-to-left (RTL) direction for users working with languages like Hebrew, Arabic, or Persian.
https://www.syncfusion.com/wpf-controls/treegrid
CC-MAIN-2021-49
en
refinedweb
Details - Type: Bug - Status: Reported - Priority: P3: Somewhat important - Resolution: Unresolved - Affects Version/s: 5.13.1 - Fix Version/s: None - Component/s: Quick: Dialogs - Labels:None - Environment:KDE Plasma 5.18 on openSUSE Tumbleweed Linux Description This doesn't work: import QtQuick 2.1 import QtQuick.Dialogs 1.2 as QtDialogs QtDialogs.FontDialog { width: 700 height: 700 title: i18n("Select Font") modality: Qt.WindowModal } As a result, the dialog always opens with its default size, which seems to quite often be too low. See
https://bugreports.qt.io/browse/QTBUG-81397
CC-MAIN-2021-49
en
refinedweb
Table of Contents - What is Minecraft API? - How does the Minecraft API work? - Target Audience for Minecraft API - How to connect to the Minecraft API Tutorial – Step by Step - Minecraft API Endpoints - Integrating Minecraft API to an Application - Benefits of the Minecraft API - Alternative to Minecraft API - Summary Application Programming Interfaces or APIs for short, have become a defining characteristic of many modern software/web applications. They provide both management and communication functionalities to the application/platform and facilitate communications between other applications and platforms. With gaming becoming a centerpiece of the internet landscape, the gaming API was developed to provide a standardized way of communicating with the game regardless of the platform or programming language they are built. In this article, we will see how to use Minecraft API with multiple programming languages. View the Best Minecraft APIs List What is Minecraft API? In simplest terms, Minecraft API is an API that provides the ability to interact with Minecraft servers. It can be used to obtain the status and information of a Minecraft instance (PC/MCPE), getting Minecraft service statuses, getting player information, and tracking payment and package information on Texbex.io (Formally known as BuyCraft) and Minecraft Market. This Minecraft API aims to provide a single API to communicate with Minecraft servers, Minecraft Marketplace, and Tebex.io website to effectively manage the monetization of the game server. How does the Minecraft API work? The Minecraft API works on simple principles such as making a request and obtaining the response. The API will request the necessary payload (IP Address, username, market keys, etc…) and authentication (API Keys, Tokens) details. As a result, it will receive the response payload in JSON format. A response payload will be generated for both successful and unsuccessful requests. The difference between them will be that the successful response payload will include the requested information, while the unsuccessful response payload will include the error details. Following is a sample JSON response that can be expected when checking for the current status of Minecraft services. [ { "minecraft.net": "green" }, { "session.minecraft.net": "green" }, { "account.mojang.com": "green" }, { "authserver.mojang.com": "green" }, { "sessionserver.mojang.com": "red" }, { "api.mojang.com": "green" }, { "textures.minecraft.net": "green" }, { "mojang.com": "green" } ] Target Audience for Minecraft API Gamers / Streamers With this API, gamers and streamers can monitor their dedicated Minecraft instances in both the PC (Java) and Bedrock (Minecraft Pocket Edition) editions. It can also be used to obtain information about the server status, player information, and player count to effectively manage the underlying hardware resources and optimize the hardware and network to perform well while the server activity is high. This is essential for game streamers to gauge their audience and plan for community play sessions properly without causing any performance issues. Modders Minecraft supports third-party sellers to sell items that are useful in the Minecraft world. These can vary from skin packs, items, or whole worlds and experiences. The Minecraft API provides means to gain information about the products offered in both Texbex.io and Minecraft Marketplace while enabling modders to obtain the sales figures (Payments). 
This information is valuable in creating new items that have a high demand in the marketplaces. Business Entities Online platforms that offer dedicated or shared game servers can utilize this API to monitor the status of servers. When managing many servers, anyone can easily parse and visualize the server details using this API as it provides data in JSON format. Additionally, if the business is offering products such as skins, items, etc., they can utilize the same API to manage sales in Texbex.io and Minecraft Marketplace. As a whole, anyone who wants to monitor a Minecraft instance, sell in-game items, monitor marketplaces or offer dedicated servers can utilize this API. How to connect to the Minecraft API Tutorial – Step by Step Step 1 – Sign up and Get a RapidAPI Account. RapidAPI is the world’s largest API marketplace which is used by over a million developers. It aims to provide a unified platform (single account, API Key and SDK) to interact with thousands of APIs and manage all those APIs through a single dashboard. To sign-up for a RapidAPI account, navigate to rapidapi.com and click on the Sign Up icon. RapidAPI provides options for both Single Sign-On (SSO) using Google, GitHub, and Facebook accounts or manually creating a user account. Step 2 – Search the API Marketplace Navigate to the Marketplace and search for “Minecraft API.” Then, select the desired API from the search results. There, we will select the Minecraft API (Minecraft API Web Service). Step 3 – Subscribe to the API Before using the API, we have to subscribe to it. So navigate to the pricing section of the Minecraft API and subscribe to your preferred plan. In this instance, we select the free plan of the Minecraft API. Then, simply click on the “Connect Now” button to activate the subscription. Step 4 – Test the API After subscribing to the API, navigate to the Endpoints section of the API screen. This section describes all the available endpoints for the API and offers code snippets in multiple languages, which helps to integrate the API with any application. Then, select the endpoint you want to test and add the required details there. Finally, click on the “Test Endpoint” Button to test the API. Minecraft API Endpoints The Minecraft API provides a whole host of endpoints. They can be categorized into two main sections: server monitoring endpoints and marketplace monitoring endpoints(Tebex.io (BuyCraft) and Minecraft Marketplace). Let’s have a look at each section in detail. Server Monitoring Minecraft API provides the necessary endpoints to monitor the status of Minecraft installations on both a PC server and an MCPE server. On top of that, this API allows monitoring player information and Minecraft service status. The following table displays all the available endpoints for server monitoring. Marketplace Monitoring The API offers the functionality to track recent payments, shop information, and available packages in Tebex.io (BuyCraft) and Minecraft Marketplaces. Integrating Minecraft API to an Application In this section, we will see how to integrate the Minecraft API into a software application using different programming languages such as Python, PHP, Ruby, and Javascript. An important fact to note here is that you have to configure the proper programming environments for each language to execute the code snippet successfully. We will be using the “getPCServerStatus” endpoint for all the code snippets. Each request generated by the code snippets will require two parameters. 
They are the RapidAPI API key indicated by <<<API_KEY>>> and the IP address of the targeted PC server (address). These parameters will vary from endpoint to endpoint. Python Code Snippet (requests) import requests url = "" payload = "address=%3CREQUIRED%3E" headers = { 'content-type': "application/x-www-form-urlencoded", 'x-rapidapi-key': "<<<API_KEY>>>", 'x-rapidapi-host': "Minecraftstefan-skliarovV1.p.rapidapi.com" } response = requests.request("POST", url, data=payload, headers=headers) print(response.text) PHP Code Snippet (HTTP v2) <?php $client = new http\Client; $request = new http\Client\Request; $body = new http\Message\Body; $body->append(new http\QueryString([ 'address' => '<REQUIRED>' ])); $request->setRequestUrl(''); $request->setRequestMethod('POST'); $request->setBody($body); $request->setHeaders([ 'content-type' => 'application/x-www-form-urlencoded', 'x-rapidapi-key' => '<<<API_KEY>>>', 'x-rapidapi-host' => 'Minecraftstefan-skliarovV1.p.rapidapi.com' ]); $client->enqueue($request)->send(); $response = $client->getResponse(); echo $response->getBody(); Ruby (net::http) require 'uri' require 'net/http' require 'openssl' url = URI("") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true http.verify_mode = OpenSSL::SSL::VERIFY_NONE request = Net::HTTP::Post.new(url) request["content-type"] = 'application/x-www-form-urlencoded' request["x-rapidapi-key"] = '<<<API_KEY>>>' request["x-rapidapi-host"] = 'Minecraftstefan-skliarovV1.p.rapidapi.com' request.body = "address=%3CREQUIRED%3E" response = http.request(request) puts response.read_body Javascript (Axios) import axios from "axios"; const options = { method: 'POST', url: '', headers: { 'content-type': 'application/x-www-form-urlencoded', 'x-rapidapi-key': '<<<API_KEY>>>', 'x-rapidapi-host': 'Minecraftstefan-skliarovV1.p.rapidapi.com' }, data: {address: '<REQUIRED>'} }; axios.request(options).then(function (response) { console.log(response.data); }).catch(function (error) { console.error(error); }); Benefits of the Minecraft API Extensive Functionality. The Minecraft API functionality is not limited to a single functionality as it offers both server, player, and marketplace monitoring as a single API. With support for both PC and MCPE editions, this API covers almost all the possible Minecraft installations. Moreover, its monitoring functionality for both Tebex and Minecraft marketplaces enables users to build a comprehensive monitoring application without depending on other APIs. Customization The Minecraft API provides the option to customize the endpoint targets (request payload), enabling developers to specify targets to capture the required data. (IP, Port, Username, Marketplace Key, etc…) Ease of Use The simple endpoint structure and well-documented requirements (request parameters) of this API enhance its user-friendliness. Additionally, with the comprehensive code snippets for multiple programming languages, the API can be picked up even by a novice developer to integrate with their applications. Alternative to Minecraft API - MCServerInf – A simple API to gather server information from Minecraft instances. - Minecraft-Forge-Optifine – Retrieve the version lists and downloads for Minecraft, Forge, and Optifine. - MineBans – Global banning system for Minecraft servers. Summary The Minecraft API acts as a comprehensive monitoring solution that would suit any amount of Minecraft monitoring scenarios under a single API. 
In this article, we have covered the basic structure of the Minecraft API, its available endpoints and functionality, and sample code snippets showcasing how to integrate it into an application. View the Best Minecraft APIs List
https://rapidapi.com/blog/minecraft-api-with-python-php-ruby-javascript-examples/
CC-MAIN-2021-49
en
refinedweb
For all the hype about big data, much value resides in the world’s medium and small data. Especially when we consider the length of the feedback loop and total analyst time invested, insights from small and medium data are quite attractive and economical. Personally, I find analyzing data that fits into memory quite convenient, and therefore, when I am confronted with a data set that does not fit in memory as-is, I am willing to spend a bit of time to try to manipulate it to fit into memory. The first technique I usually turn to is to only store distinct rows of a data set, along with the count of the number of times that row appears in the data set. This technique is fairly simple to implement, especially when the data set is generated by a SQL query. If the initial query that generates the data set is SELECT u, v, w FROM t; we would modify it to become SELECT u, v, w, COUNT(1) FROM t GROUP BY u, v, w; We now generate a sample data set with both discrete and continuous features. %matplotlib inline from __future__ import division from matplotlib import pyplot as plt import numpy as np import pandas as pd from patsy import dmatrices, dmatrix import scipy as sp import seaborn as sns from statsmodels import api as sm from statsmodels.base.model import GenericLikelihoodModel np.random.seed(1545721) # from random.org N = 100001 u_min, u_max = 0, 100 v_p = 0.6 n_ws = 50 ws = sp.stats.norm.rvs(0, 1, size=n_ws) w_min, w_max = ws.min(), ws.max() df = pd.DataFrame({ 'u': np.random.randint(u_min, u_max, size=N), 'v': sp.stats.bernoulli.rvs(v_p, size=N), 'w': np.random.choice(ws, size=N, replace=True) }) df.head() We see that this data frame has just over 100,000 rows, but only about 10,000 distinct rows. df.shape[0] 100001 df.drop_duplicates().shape[0] 9997 We now use pandas’ groupby method to produce a data frame that contains the count of each unique combination of x, y, and z. count_df = df.groupby(list(df.columns)).size() count_df.name = 'count' count_df = count_df.reset_index() In order to make later examples interesting, we shuffle the rows of the reduced data frame, because pandas automatically sorts the values we grouped on in the reduced data frame. shuffled_ixs = count_df.index.values np.random.shuffle(shuffled_ixs) count_df = count_df.iloc[shuffled_ixs].copy().reset_index(drop=True) count_df.head() Again, we see that we are storing 90% fewer rows. Although this data set has been artificially generated, I have seen space savings of up to 98% when applying this technique to real-world data sets. count_df.shape[0] / N 0.0999690003099969 This space savings allows me to analyze data sets which initially appear too large to fit in memory. For example, the computer I am writing this on has 16 GB of RAM. At a 90% space savings, I can comfortably analyze a data set that might otherwise be 80 GB in memory while leaving a healthy amount of memory for other processes. To me, the convenience and tight feedback loop that come with fitting a data set entirely in memory are hard to overstate. As nice as it is to fit a data set into memory, it’s not very useful unless we can still analyze it. The rest of this post will show how we can perform standard operations on these summary data sets. For convenience, we will separate the feature columns from the count columns. summ_df = count_df[['u', 'v', 'w']] n = count_df['count'] Suppose we have a group of numbers \(x_1, x_2, \ldots, x_n\). 
Let the unique values among these numbers be denoted \(z_1, z_2, \ldots, z_m\) and let \(n_j\) be the number of times \(z_j\) apears in the original group. The mean of the \(x_i\)s is therefore \[ \begin{align*} \bar{x} & = \frac{1}{n} \sum_{i = 1}^n x_i = \frac{1}{n} \sum_{j = 1}^m n_j z_j, \end{align*} \] since we may group identical \(x_i\)s into a single summand. Since \(n = \sum_{j = 1}^m n_j\), we can calculate the mean using the following function. def mean(df, count): return df.mul(count, axis=0).sum() / count.sum() mean(summ_df, n) u 49.308067 v 0.598704 w 0.170815 dtype: float64 We see that the means calculated by our function agree with the means of the original data frame. df.mean(axis=0) u 49.308067 v 0.598704 w 0.170815 dtype: float64 np.allclose(mean(summ_df, n), df.mean(axis=0)) True We can calculate the variance as \[ \begin{align*} \sigma_x^2 & = \frac{1}{n - 1} \sum_{i = 1}^n \left(x_i - \bar{x}\right)^2 = \frac{1}{n - 1} \sum_{j = 1}^m n_j \left(z_j - \bar{x}\right)^2 \end{align*} \] using the same trick of combining identical terms in the original sum. Again, this calculation is easy to implement in Python. def var(df, count): mu = mean(df, count) return np.power(df - mu, 2).mul(count, axis=0).sum() / (count.sum() - 1) var(summ_df, n) u 830.025064 v 0.240260 w 1.099191 dtype: float64 We see that the variances calculated by our function agree with the variances of the original data frame. df.var() u 830.025064 v 0.240260 w 1.099191 dtype: float64 np.allclose(var(summ_df, n), df.var(axis=0)) True Histograms are fundamental tools for exploratory data analysis. Fortunately, pyplot’s hist function easily accommodates summarized data using the weights optional argument. fig, (full_ax, summ_ax) = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(16, 6)) nbins = 20 blue, green = sns.color_palette()[:2] full_ax.hist(df.w, bins=nbins, color=blue, alpha=0.5, lw=0); full_ax.set_xlabel('$w$'); full_ax.set_ylabel('Count'); full_ax.set_title('Full data frame'); summ_ax.hist(summ_df.w, bins=nbins, weights=n, color=green, alpha=0.5, lw=0); summ_ax.set_xlabel('$w$'); summ_ax.set_title('Summarized data frame'); We see that the histograms for \(w\) produced from the full and summarized data frames are identical. Calculating the mean and variance of our summarized data frames was not too difficult. Calculating quantiles from this data frame is slightly more involved, though still not terribly hard. Our implementation will rely on sorting the data frame. Though this implementation is not optimal from a computation complexity point of view, it is in keeping with the spirit of pandas’ implementation of quantiles. I have given some thought on how to implement linear time selection on the summarized data frame, but have not yet worked out the details. Before writing a function to calculate quantiles of a data frame with several columns, we will walk through the simpler case of computing the quartiles of a single series. u = summ_df.u u.head() 0 0 1 48 2 35 3 19 4 40 Name: u, dtype: int64 First we argsort the series. sorted_ilocs = u.argsort() We see that u.iloc[sorted_ilocs] will now be in ascending order. sorted_u = u.iloc[sorted_ilocs] (sorted_u[:-1] <= sorted_u[1:]).all() True More importantly, counts.iloc[sorted_ilocs] will have the count of the smallest element of u first, the count of the second smallest element second, etc. 
sorted_n = n.iloc[sorted_ilocs] sorted_cumsum = sorted_n.cumsum() cdf = (sorted_cumsum / n.sum()).values Now, the \(i\)-th location of sorted_cumsum will contain the number of elements of u less than or equal to the \(i\)-th smallest element, and therefore cdf is the empirical cumulative distribution function of u. The following plot shows that this interpretation is correct. fig, ax = plt.subplots(figsize=(8, 6)) blue, _, red = sns.color_palette()[:3] ax.plot(sorted_u, cdf, c=blue, label='Empirical CDF'); plot_u = np.arange(100) ax.plot(plot_u, sp.stats.randint.cdf(plot_u, u_min, u_max), '--', c=red, label='Population CDF'); ax.set_xlabel('$u$'); ax.legend(loc=2); If, for example, we wish to find the median of u, we want to find the first location in cdf which is greater than or equal to 0.5. median_iloc_in_sorted = (cdf < 0.5).argmin() The index of the median in u is therefore sorted_ilocs.iloc[median_iloc_in_sorted], so the median of u is u.iloc[sorted_ilocs.iloc[median_iloc_in_sorted]] 49 df.u.quantile(0.5) 49.0 We can generalize this method to calculate multiple quantiles simultaneously as follows. q = np.array([0.25, 0.5, 0.75]) u.iloc[sorted_ilocs.iloc[np.less.outer(cdf, q).argmin(axis=0)]] 2299 24 9079 49 1211 74 Name: u, dtype: int64 df.u.quantile(q) 0.25 24 0.50 49 0.75 74 dtype: float64 The array np.less.outer(cdf, q).argmin(axis=0) contains three columns, each of which contains the result of comparing cdf to an element of q. The following function generalizes this approach from series to data frames. def quantile(df, count, q=0.5): q = np.ravel(q) sorted_ilocs = df.apply(pd.Series.argsort) sorted_counts = sorted_ilocs.apply(lambda s: count.iloc[s].values) cdf = sorted_counts.cumsum() / sorted_counts.sum() q_ilocs_in_sorted_ilocs = pd.DataFrame(np.less.outer(cdf.values, q).argmin(axis=0).T, columns=df.columns) q_ilocs = sorted_ilocs.apply(lambda s: s[q_ilocs_in_sorted_ilocs[s.name]].reset_index(drop=True)) q_df = df.apply(lambda s: s.iloc[q_ilocs[s.name]].reset_index(drop=True)) q_df.index = q return q_df quantile(summ_df, n, q=q) df.quantile(q=q) np.allclose(quantile(summ_df, n, q=q), df.quantile(q=q)) True Another important operation is bootstrapping. We will see two ways to perfom bootstrapping on the summary data set. n_boot = 10000 Key to both approaches to the bootstrap is knowing the proprotion of the data set that each distinct combination of features comprised. weights = n / n.sum() The two approaches differ in what type of data frame they produce. The first we will discuss produces a non-summarized data frame with non-unique rows, while the second produces a summarized data frame. Each fo these approaches to bootstrapping is useful in different situations. To produce a non-summarized data frame, we generate a list of locations in feature_df based on weights using numpy.random.choice. boot_ixs = np.random.choice(summ_df.shape[0], size=n_boot, replace=True, p=weights) boot_df = summ_df.iloc[boot_ixs] boot_df.head() We can verify that our bootstrapped data frame has (approximately) the same distribution as the original data frame using Q-Q plots. 
ps = np.linspace(0, 1, 100) boot_qs = boot_df[['u', 'w']].quantile(q=ps) qs = df[['u', 'w']].quantile(q=ps) fig, ax = plt.subplots(figsize=(8, 6)) blue = sns.color_palette()[0] ax.plot((u_min, u_max), (u_min, u_max), '--', c='k', lw=0.75, label='Perfect agreement'); ax.scatter(qs.u, boot)) blue = sns.color_palette()[0] ax.plot((w_min, w_max), (w_min, w_max), '--', c='k', lw=0.75, label='Perfect agreement'); ax.scatter(qs.w, boot); We see that both of the resampled distributions agree quite closely with the original distributions. We have only produced Q-Q plots for \(u\) and \(w\) because \(v\) is binary-valued. While at first non-summarized boostrap resampling may appear to counteract the benefits of summarizing the original data frame, it can be quite useful when training and evaluating online learning algorithms, where iterating through the locations of the bootstrapped data in the original summarized data frame is efficient. To produce a summarized data frame, the counts of the resampled data frame are sampled from a multinomial distribution with event probabilities given by weights. boot_counts = pd.Series(np.random.multinomial(n_boot, weights), name='count') Again, we compare the distribution of our bootstrapped data frame to that of the original with Q-Q plots. Here our summarized quantile function is quite useful. boot_count_qs = quantile(summ_df, boot_counts, q=ps) fig, ax = plt.subplots(figsize=(8, 6)) ax.plot((u_min, u_max), (u_min, u_max), '--', c='k', lw=0.75, label='Perfect agreement'); ax.scatter(qs.u, boot_count)) ax.plot((w_min, w_max), (w_min, w_max), '--', c='k', lw=0.75, label='Perfect agreement'); ax.scatter(qs.w, boot_count); Again, we see that both of the resampled distributions agree quite closely with the original distributions. Linear regression is among the most frequently used types of statistical inference, and it plays nicely with summarized data. Typically, we have a response variable \(y\) that we wish to model as a linear combination of \(u\), \(v\), and \(w\) as \[ \begin{align*} y_i = \beta_0 + \beta_1 u_i + \beta_2 v_i + \beta_3 w_i + \varepsilon, \end{align*} \] where \(\varepsilon \sim N(0, \sigma^2)\) is noise. We generate such a data set below (with \(\sigma = 0.1\)). beta = np.array([-3., 0.1, -4., 2.]) noise_std = 0.1 X = dmatrix('u + v + w', data=df) y = pd.Series(np.dot(X, beta), name='y') + sp.stats.norm.rvs(scale=noise_std, size=N) y.head() 0 7.862559 1 3.830585 2 -0.388246 3 1.047091 4 0.992082 Name: y, dtype: float64 Each element of the series y corresponds to one row in the uncompressed data frame df. The OLS class from statsmodels comes quite close to recovering the true regression coefficients. full_ols = sm.OLS(y, X).fit() full_ols.params const -2.999658 x1 0.099986 x2 -3.998997 x3 2.000317 dtype: float64 To show how we can perform linear regression on the summarized data frame, we recall the the ordinary least squares estimator minimizes the residual sum of squares. The residual sum of squares is given by \[ \begin{align*} RSS & = \sum_{i = 1}^n \left(y_i - \mathbf{x}_i \mathbf{\beta}^{\intercal}\right)^2. \end{align*} \] Here \(\mathbf{x}_i = [1\ u_i\ v_i\ w_i]\) is the \(i\)-th row of the original data frame (with a constant added for the intercept) and \(\mathbf{\beta} = [\beta_0\ \beta_1\ \beta_2\ \beta_3]\) is the row vector of regression coefficients. It would be tempting to rewrite \(RSS\) by grouping the terms based on the row their features map to in the compressed data frame, but this approach would lead to incorrect results. 
Due to the stochastic noise term \(\varepsilon_i\), identical values of \(u\), \(v\), and \(w\) can (and will almost certainly) map to different values of \(y\). We can see this phenomenon by calculating the range of \(y\) grouped on \(u\), \(v\), and \(w\). reg_df = pd.concat((y, df), axis=1) reg_df.groupby(('u', 'v', 'w')).y.apply(np.ptp).describe() count 9997.000000 mean 0.297891 std 0.091815 min 0.000000 25% 0.237491 50% 0.296838 75% 0.358015 max 0.703418 Name: y, dtype: float64 If \(y\) were uniquely determined by \(u\), \(v\), and \(w\), we would expect the mean and quartiles of these ranges to be zero, which they are not. Fortunately, we can account for is difficulty with a bit of care. Let \(S_j = \{i\ |\ \mathbf{x}_i = \mathbf{z}_j\}\), the set of row indices in the original data frame that correspond to the \(j\)-th row in the summary data frame. Define \(\bar{y}_{(j)} = \frac{1}{n_j} \sum_{i \in S_j} y_i\), which is the mean of the response variables that correspond to \(\mathbf{z}_j\). Intuitively, since \(\varepsilon_i\) has mean zero, \(\bar{y}_{(j)}\) is our best unbiased estimate of \(\mathbf{z}_j \mathbf{\beta}^{\intercal}\). We will now show that regressing \(\sqrt{n_j} \bar{y}_{(j)}\) on \(\sqrt{n_j} \mathbf{z}_j\) gives the same results as the full regression. We use the standard trick of adding and subtracting the mean and get \[ \begin{align*} RSS & = \sum_{j = 1}^m \sum_{i \in S_j} \left(y_i - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2 \\ & = \sum_{j = 1}^m \sum_{i \in S_j} \left(\left(y_i - \bar{y}_{(j)}\right) + \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)\right)^2 \\ & = \sum_{j = 1}^m \sum_{i \in S_j} \left(\left(y_i - \bar{y}_{(j)}\right)^2 + 2 \left(y_i - \bar{y}_{(j)}\right) \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right) + \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2\right). \end{align*} \] As is usual in these situations, the cross term vanishes, since \[ \begin{align*} \sum_{i \in S_j} \left(y_i - \bar{y}_{(j)}\right) \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right) & = \sum_{i \in S_j} \left(y_i \bar{y}_{(j)} - y_i \mathbf{z}_j \mathbf{\beta}^{\intercal} - \bar{y}_{(j)}^2 + \bar{y}_{(j)} \mathbf{z}_j \mathbf{\beta}^{\intercal}\right) \\ & = \bar{y}_{(j)} \sum_{i \in S_j} y_i - \mathbf{z}_j \mathbf{\beta}^{\intercal} \sum_{i \in S_j} y_i - n_j \bar{y}_{(j)}^2 + n_j \bar{y}_{(j)} \mathbf{z}_j \mathbf{\beta}^{\intercal} \\ & = n_j \bar{y}_{(j)}^2 - n_j \bar{y}_{(j)} \mathbf{z}_j \mathbf{\beta}^{\intercal} - n_j \bar{y}_{(j)}^2 + n_j \bar{y}_{(j)} \mathbf{z}_j \mathbf{\beta}^{\intercal} \\ & = 0. \end{align*} \] Therefore we may decompose the residual sum of squares as \[ \begin{align*} RSS & = \sum_{j = 1}^m \sum_{i \in S_j} \left(y_i - \bar{y}_{(j)}\right)^2 + \sum_{j = 1}^m \sum_{i \in S_j} \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2 \\ & = \sum_{j = 1}^m \sum_{i \in S_j} \left(y_i - \bar{y}_{(j)}\right)^2 + \sum_{j = 1}^m n_j \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2. \end{align*} \] The important property of this decomposition is that the first sum does not depend on \(\mathbf{\beta}\), so minimizing \(RSS\) with respect to \(\mathbf{\beta}\) is equivalent to minimizing the second sum. 
We see that this second sum can be written as \[ \begin{align*} \sum_{j = 1}^m n_j \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2 & = \sum_{j = 1}^m \left(\sqrt{n_j} \bar{y}_{(j)} - \sqrt{n_j} \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2 \end{align*}, \] which is exactly the residual sum of squares for regressing \(\sqrt{n_j} \bar{y}_{(j)}\) on \(\sqrt{n_j} \mathbf{z}_j\). summ_reg_df = reg_df.groupby(('u', 'v', 'w')).y.mean().reset_index().iloc[shuffled_ixs].reset_index(drop=True).copy() summ_reg_df['n'] = n summ_reg_df.head() The design matrices for this summarized model are easy to construct using patsy. y_summ, X_summ = dmatrices(""" I(np.sqrt(n) * y) ~ np.sqrt(n) + I(np.sqrt(n) * u) + I(np.sqrt(n) * v) + I(np.sqrt(n) * w) - 1 """, data=summ_reg_df) Note that we must remove patsy’s constant column for the intercept and replace it with np.sqrt(n). summ_ols = sm.OLS(y_summ, X_summ).fit() summ_ols.params array([-2.99965783, 0.09998571, -3.99899718, 2.00031673]) We see that the summarized regression produces the same parameter estimates as the full regression. np.allclose(full_ols.params, summ_ols.params) True As a final example of adapting common methods to summarized data frames, we will show how to fit a logistic regression model on a summarized data set by maximum likelihood. We will use the model \[P(s = 1\ |\ w) = \frac{1}{1 + \exp(-\mathbf{x} \gamma^{\intercal})}\]. As above, \(\mathbf{x}_i = [1\ u_i\ v_i\ w_i]\). The true value of \(\gamma\) is gamma = np.array([1., 0.01, -1., -2.]) We now generate samples from this model. X = dmatrix('u + v + w', data=df) p = pd.Series(sp.special.expit(np.dot(X, gamma)), name='p') s = pd.Series(sp.stats.bernoulli.rvs(p), name='s') logit_df = pd.concat((s, p, df), axis=1) logit_df.head() We first fit the logistic regression model to the full data frame. full_logit = sm.Logit(s, X).fit() Optimization terminated successfully. Current function value: 0.414221 Iterations 7 full_logit.params const 0.965283 x1 0.009944 x2 -0.966797 x3 -1.990506 dtype: float64 We see that the estimates are quite close to the true parameters. The technique used to adapt maximum likelihood estimation of logistic regression to the summarized data frame is quite elegant. The likelihood for the full data set is given by the fact that (given \(u\), \(v\), and \(w\)) \(s\) is Bernoulli distributed with \[s_i\ |\ \mathbf{x}_i \sim \operatorname{Ber}\left(\frac{1}{1 + \exp(-\mathbf{x}_i \gamma^{\intercal})}\right).\] To derive the likelihood for the summarized data set, we count the number of successes (where \(s = 1\)) for each unique combination of features \(\mathbf{z}_j\), and denote this quantity \(k_j\). summ_logit_df = logit_df.groupby(('u', 'v', 'w')).s.sum().reset_index().iloc[shuffled_ixs].reset_index(drop=True).copy() summ_logit_df = summ_logit_df.rename(columns={'s': 'k'}) summ_logit_df['n'] = n summ_logit_df.head() Now, instead of each row representing a single Bernoulli trial (as in the full data frame), each row represents \(n_j\) trials, so we have that \(k_j\) is (conditionally) Binomially distributed with \[k_j\ |\ \mathbf{z}_j \sim \operatorname{Bin}\left(n_j, \frac{1}{1 + \exp(-\mathbf{z}_j \gamma^{\intercal})}\right).\] summ_logit_X = dmatrix('u + v + w', data=summ_logit_df) As I have shown in a previous post, we can use statsmodels’ GenericLikelihoodModel class to fit custom probability models by maximum likelihood. The model is implemented as follows. 
class SummaryLogit(GenericLikelihoodModel): def __init__(self, endog, exog, n, **qwargs): """ endog is the number of successes exog are the features n are the number of trials """ self.n = n super(SummaryLogit, self).__init__(endog, exog, **qwargs) def nloglikeobs(self, gamma): """ gamma is the vector of regression coefficients returns the negative log likelihood of each of the observations for the coefficients in gamma """ p = sp.special.expit(np.dot(self.exog, gamma)) return -sp.stats.binom.logpmf(self.endog, self.n, p) def fit(self, start_params=None, maxiter=10000, maxfun=5000, **qwargs): # wraps the GenericLikelihoodModel's fit method to set default start parameters if start_params is None: start_params = np.zeros(self.exog.shape[1]) return super(SummaryLogit, self).fit(start_params=start_params, maxiter=maxiter, maxfun=maxfun, **qwargs) summ_logit = SummaryLogit(summ_logit_df.k, summ_logit_X, summ_logit_df.n).fit() Optimization terminated successfully. Current function value: 1.317583 Iterations: 357 Function evaluations: 599 Again, we get reasonable estimates of the regression coefficients, which are close to those obtained from the full data set. summ_logit.params array([ 0.96527992, 0.00994322, -0.96680904, -1.99051485]) np.allclose(summ_logit.params, full_logit.params, rtol=10**-4) True Hopefully this introduction to the technique of summarizing data sets has proved useful and will allow you to explore medium data more easily in the future. We have only scratched the surface on the types of statistical techniques that can be adapted to work on summarized data sets, but with a bit of ingenuity, many of the ideas in this post can apply to other models. This post is available as an IPython notebook here.
http://austinrochford.com/posts/2015-08-03-counting-features.html
CC-MAIN-2017-13
en
refinedweb
In a previous post I covered how to use namespace qualification with JAXB. In this post I will cover how to control the prefixes that are used. This is not covered in the JAXB (JSR-222) specification but I will demonstrate the extensions available in both the reference and EclipseLink MOXy implementations for handling this use case.
http://blog.bdoughan.com/2011_11_01_archive.html
CC-MAIN-2017-13
en
refinedweb
Introduction to Lollipop - PDF for offline use - - Related Links: - Let us know how you feel about this 0/250 last updated: 2017-02 This article provides a high level overview of the new features introduced in Android 5.0 (Lollipop). These features include a new user interface style called Material Theme, as well as new supporting features such as animations, view shadows, and drawable tinting. Android 5.0 also includes enhanced notifications, two new UI widgets, a new job scheduler, and a handful of new APIs to improve storage, networking, connectivity, and multimedia capabilities. Overview Android 5.0 (Lollipop) introduces a new design language, Material Design, and with it a supporting cast of new features to make apps easier and more intuitive to use. With Material Design, Android 5.0 not only gives Android phones a facelift; it also provides a new set of design rules for Android-based tablets, desktop computers, watches, and smart TVs. These design rules emphasize simplicity and minimalism while making use of familiar tactile attributes (such as realistic surface and edge cues) to help users quickly and intuitively understand the interface. Material Theme is the embodiment of these UI design principles in Android. This article begins by covering Material Theme's supporting features: Animations – Touch feedback animations, activity transition animations, view state transition animations, and a reveal effect. View shadows and elevation – Views now have an elevationproperty; views with higher elevationvalues cast larger shadows on the background. Color features – Drawable tinting makes it possible for you to reuse image assets by changing their color, and prominent color extraction helps you dynamically theme your app based on colors in an image. Many Material Theme features are already built into the Android 5.0 UI experience, while others must be explicitly added to apps. For example, some standard views (such as buttons) already include touch feedback animations, while apps must enable most view shadows. In addition to the UI improvements brought about through Material Theme, Android 5.0 also includes several other new features that are covered in this article: Enhanced notifications – Notifications in Android 5.0 have been significantly updated with a new look, support for lockscreen notifications, and a new Heads-up notification presentation format. New UI widgets – The new RecyclerViewwidget makes it easier for apps to convey large data sets and complex information, and the new CardViewwidget provides a simplified card-like presentation format for displaying text and images. New APIs – Android 5.0 adds new APIs for multiple network support, improved Bluetooth connectivity, easier storage management, and more flexible control of multimedia players and camera devices. A new job scheduling feature is available to run tasks asynchronously at scheduled times. This feature helps to improve battery life by, for example, scheduling tasks to take place when the device is plugged in and charging. Requirements The following is required to use the new Android 5.0 features in Xamarin-based apps: Xamarin.Android – Xamarin.Android 4.20 or later must be installed and configured with either Visual Studio or Xamarin Studio. If you are using Xamarin Studio, version 5.5.4 or later is required. Android SDK – Android 5.0 (API 21) or later must be installed via the Android SDK Manager. 
- Java Developer Kit – Xamarin.Android requires JDK 1.8 or later if you are developing for API level 24 or greater (JDK 1.8 also supports API levels earlier than 24, including Lollipop). The 64-bit version of JDK 1.8 is required if you are using custom controls or the Forms Previewer. You can continue to use JDK 1.7 if you are developing specifically for API level 23 or earlier.

Setting Up an Android 5.0 Project

To create an Android 5.0 project, you must install the latest tools and SDK packages. Use the following steps to set up a Xamarin.Android project that targets Android 5.0:

1. Install Xamarin.Android tools and activate your Xamarin license. See Setup and Installation for more information about installing Xamarin.Android.
2. If you are using Xamarin Studio, install the latest Android 5.0 updates.
3. Start the Android SDK Manager (in Xamarin Studio, use Tools > Open Android SDK Manager…) and install Android SDK Tools 23.0.5 or later. Also, install the latest Android 5.0 SDK packages (API 21 or later). For more information about using the Android SDK Manager, see SDK Manager.
4. Create a new Xamarin.Android project. If you are new to Android development with Xamarin, see Hello, Android to learn about creating Android projects.
5. When you create an Android project, be sure to configure the version settings for Android 5.0. In Xamarin Studio, navigate to Project Options > Build > General and set Target framework to Android 5.0 (Lollipop) or later. Under Project Options > Build > Android Application, set minimum and target Android version to Automatic - use target framework version.
6. Configure an emulator or an Android device to test your app. If you are using an emulator, see Configure the Emulator to learn how to configure an Android emulator for use with Xamarin Studio or Visual Studio. If you are using an Android device, see Setting Up the Preview SDK to learn how to update your device for Android 5.0. To configure your Android device for running and debugging Xamarin.Android applications, see Set Up Device for Development.

Note: If you are updating an existing Android project that was targeting the Android L Preview, you must update the Target Framework and Android version to the values described above.

Important Changes

Previously published Android apps could be affected by changes in Android 5.0. In particular, Android 5.0 uses a new runtime and a significantly changed notification format.

Android Runtime

Android 5.0 uses the new Android Runtime (ART) as the default runtime instead of Dalvik. ART implements several major new features:

- Ahead-of-time (AOT) compilation – AOT can improve app performance by compiling app code before the app is first launched. When an app is installed, ART generates a compiled app executable for the target device.
- Improved garbage collection (GC) – GC improvements in ART can also improve app performance. Garbage collection now uses one GC pause instead of two, and concurrent GC operations complete in a more timely fashion.
- Improved app debugging – ART provides more diagnostic detail to help in analyzing exceptions and crash reports.

Existing apps should work without change under ART—except for apps that exploit techniques unique to the previous Dalvik runtime, which may not work under ART. For more information about these changes, see Verifying App Behavior on the Android Runtime (ART).
Notification Changes

Notifications have changed significantly in Android 5.0:

- Sounds and vibration are handled differently – Notification sounds and vibrations are now handled by Notification.Builder instead of Ringtone, MediaPlayer, and Vibrator.
- New color scheme – In accordance with Material Theme, notifications are rendered with dark text over white or very light backgrounds. Also, alpha channels in notification icons may be modified by Android to coordinate with system color schemes.
- Lockscreen notifications – Notifications can now appear on the device lockscreen.
- Heads-up – High-priority notifications now appear in a small floating window (Heads-up notification) when the device is unlocked and the screen is turned on.

In most cases, porting existing app notification functionality to Android 5.0 requires the following steps:

1. Convert your code to use Notification.Builder (or NotificationCompat.Builder) for creating notifications.
2. Verify that your existing notification assets are viewable in the new Material Theme color scheme.
3. Decide what visibility your notifications should have when they are presented on the lockscreen. If a notification is not public, what content should show up on the lockscreen?
4. Set the category of your notifications so they are handled correctly in the new Android 5.0 Do not disturb mode.

If your notifications present transport controls, display media playback status, use RemoteControlClient, or call ActivityManager.GetRecentTasks, see Important Behavior Changes for more information about updating your notifications for Android 5.0.

For information about creating notifications in Android, see Local Notifications. The Compatibility section of this article explains how to create notifications that are downward-compatible with earlier versions of Android.

Material Theme

The new Android 5.0 Material Theme brings sweeping changes to the look and feel of the Android UI. Visual elements now use tactile surfaces that take on the bold graphics, typography, and bright colors of print-based design. Examples of Material Theme are depicted in the following screenshots: Android 5.0 greets you with the home screen shown on the left, the center screenshot is the first screen of the app list, and the screenshot on the right is the Settings screen. Google's Material Design specification explains the underlying design rules behind the new Material Theme concept.

Material Theme includes three built-in flavors that you can use in your app: the Theme.Material dark theme (the default), the Theme.Material.Light theme, and the Theme.Material.Light.DarkActionBar theme. For more about using Material Theme features in Xamarin.Android apps, see Material Theme.

Animations

Android 5.0 provides touch feedback animations, activity transition animations, and view state transition animations to make app interfaces more intuitive to use. Also, Android 5.0 apps can use reveal effect animations to hide or reveal views. You can use curved motion settings to configure how quickly or slowly animations are rendered.

Touch Feedback Animations

Touch feedback animations provide users with visual feedback when a view has been touched. For more on touch feedback animations in Android 5.0, see Customize Touch Feedback.

Activity Transition Animations

Activity transition animations give users a sense of visual continuity when one activity transitions to another. Apps can specify three types of transition animations:

- Enter transition – For when an activity enters the scene.
- Exit transition – For when an activity exits the scene.
- Shared element transition – For when a view that is common to two activities changes as the first activity transitions to the next.

For example, the following sequence of screenshots illustrates a shared element transition: a shared element (a photo of a caterpillar) is one of several views in the first activity; it enlarges to become the only view in the second activity as the first activity transitions to the second.

Enter Transition Animation Types

For enter transitions, Android 5.0 provides three types of animations:

- Explode animation – Enlarges a view from the center of the scene.
- Slide animation – Moves a view in from one of the edges of a scene.
- Fade animation – Fades a view into the scene.

Exit Transition Animation Types

For exit transitions, Android 5.0 provides three types of animations:

- Explode animation – Shrinks a view to the center of the scene.
- Slide animation – Moves a view out to one of the edges of a scene.
- Fade animation – Fades a view out of the scene.

Shared Element Transition Animation Types

Shared element transitions support multiple types of animations, such as:

- Changing the layout or clip bounds of a view.
- Changing the scale and rotation of a view.
- Changing the size and scale type for a view.

For more about activity transition animations in Android 5.0, see Customize Activity Transitions.

View State Transition Animations

Android 5.0 makes it possible for animations to run when the state of a view changes. You can animate view state transitions by using one of the following techniques:

- Create drawables that animate state changes associated with a particular view. The new AnimatedStateListDrawable class lets you create drawables that display animations between view state changes.
- Define animation functionality that runs when the state of a view changes. The new StateListAnimator class lets you define an animator that runs when the state of a view changes.

For more about view state transition animations in Android 5.0, see Animate View State Changes.

Reveal Effect

The reveal effect is a clipping circle that changes radius to reveal or hide a view. You can control this effect by setting the initial and final radius of the clipping circle. The first sequence of screenshots illustrates a reveal effect animation from the center of the screen; the next sequence illustrates a reveal effect animation that takes place from the bottom left corner of the screen. Reveal animations can be reversed; that is, the clipping circle can shrink to hide the view rather than enlarge to reveal the view. For more information on the reveal effect in Android 5.0, see Use the Reveal Effect.

Curved Motion

In addition to these animation features, Android 5.0 also provides new APIs that enable you to specify the time and motion curves of animations. Android 5.0 uses these curves to interpolate temporal and spatial movement during animations. Three curves are defined in Android 5.0:

- Fast_out_linear_in – Accelerates quickly and continues to accelerate until the end of the animation.
- Fast_out_slow_in – Accelerates quickly and slowly decelerates towards the end of the animation.
- Linear_out_slow_in – Begins with a peak velocity and slowly decelerates to the end of the animation.

You can use the new PathInterpolator class to specify how motion interpolation takes place. PathInterpolator is an interpolator that traverses animation paths according to specified control points and motion curves.
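As a concrete illustration, here is a minimal Java sketch of the underlying Android API (the Xamarin C# bindings described in this article mirror these names); the view, distance, duration, and control points are illustrative assumptions:

import android.animation.ObjectAnimator;
import android.view.animation.PathInterpolator;

// Slide an existing View 300px to the right along a material-style timing curve.
// PathInterpolator(x1, y1, x2, y2) defines a cubic Bezier timing curve;
// (0.4, 0, 0.2, 1) approximates the fast_out_slow_in curve described above.
ObjectAnimator anim = ObjectAnimator.ofFloat(view, "translationX", 0f, 300f);
anim.setInterpolator(new PathInterpolator(0.4f, 0f, 0.2f, 1f));
anim.setDuration(300);
anim.start();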
For more information about how to specify curved motion settings in Android 5.0, see Use Curved Motion.

View Shadows & Elevation

In Android 5.0, you can specify the elevation of a view by setting a new Z property. A greater Z value causes the view to cast a larger shadow on the background, making the view appear to float higher above the background. You can set the initial elevation of a view by configuring its elevation attribute in the layout; for example, an empty TextView control casts progressively larger shadows when its elevation attribute is set to 2dp, 4dp, and 6dp, respectively.

View shadow settings can be static (as described above) or they can be used in animations to make a view appear to temporarily rise above the view's background. You can use the ViewPropertyAnimator class to animate the elevation of a view. The elevation of a view is the sum of its layout elevation setting plus a translationZ property that you can set through a ViewPropertyAnimator method call.

For more about view shadows in Android 5.0, see Defining Shadows and Clipping Views.

Color Features

Android 5.0 provides two new features for managing color in apps:

- Drawable tinting lets you alter the colors of image assets by changing a layout attribute.
- Prominent color extraction makes it possible for you to dynamically customize your app's color theme to coordinate with the color palette of a displayed image.

Drawable Tinting

Android 5.0 layouts recognize a new tint attribute that you can use to set the color of drawables without having to create multiple versions of these assets to display different colors. To use this feature, you define a bitmap as an alpha mask and use the tint attribute to define the color of the asset. This makes it possible for you to create assets once and color them in your layout to match your theme.

In the following example, a single image asset—a white logo with a transparent background—is used to create tint variations. This logo is displayed above a blue circular background: on the left is how the logo appears without a tint setting, in the center the logo's tint attribute is set to a dark gray, and on the right tint is set to a light gray.

For more about drawable tinting in Android 5.0, see Drawable Tinting.

Prominent Color Extraction

The new Android 5.0 Palette class lets you extract colors from an image so that you can dynamically apply them to a custom color palette. The Palette class extracts six colors from an image and labels these colors according to their relative levels of color saturation and brightness:

- Vibrant
- Vibrant dark
- Vibrant light
- Muted
- Muted dark
- Muted light

For example, a photo viewing app can extract the prominent colors from the image on display and use these colors to adapt the color scheme of the app to match the image—say, setting the action bar to the extracted "vibrant light" color and the background to the extracted "vibrant dark" color.

For more about color extraction in Android 5.0, see Extracting Prominent Colors from an Image.

New UI Widgets

Android 5.0 introduces two new UI widgets:

- RecyclerView – A view group that displays a list of scrollable items.
- CardView – A basic layout with rounded corners.
Both widgets include baked-in support for Material Theme features; for example, RecyclerView uses animations for adding and removing views, and CardView uses view shadows to make each card appear to float above the background. Typical uses are RecyclerView for the message list of an email app and CardView for the result cards of a travel reservation app.

RecyclerView

RecyclerView is similar to ListView, but it is better suited for large sets of views or lists with elements that change dynamically. Like ListView, you specify an adapter to access the underlying data set. However, unlike ListView, you use a layout manager to position items within RecyclerView. The layout manager also takes care of view recycling; it manages the reuse of item views that are no longer visible to the user.

When you use a RecyclerView widget, you must specify a LayoutManager and an adapter; the LayoutManager is the intermediary between the adapter and the RecyclerView. In a sample app where each item consists of an ImageView and a TextView, RecyclerView handles a 100-item data set with ease—scrolling from the beginning of the list to the end of the list takes only a few seconds. RecyclerView also supports animations; in fact, animations for adding and removing items are enabled by default. When an item is added to a RecyclerView, it fades in.

For more about RecyclerView, see RecyclerView.

CardView

CardView is a simple view that simulates a floating card with rounded corners. Because CardView has built-in view shadows, it provides an easy way for you to add visual depth to your app. In a text-oriented example, each card contains a TextView and the background color is set via the cardBackgroundColor attribute.

For more about CardView, see CardView.

Enhanced Notifications

The notification system in Android 5.0 has been significantly updated with a new visual format and new features. Notifications have a new look in Android 5.0. For example, notifications in Android 5.0 now use dark text over a light background. When a large icon is displayed in a notification, Android 5.0 presents the small icon as a badge over the large icon.

In Android 5.0, notifications can also appear on the device lockscreen. Users can double-tap a notification on the lockscreen to unlock the device and jump to the app that originated that notification, or swipe to dismiss the notification. Notifications have a new visibility setting that determines how much content can be displayed on the lockscreen. Users can choose whether to allow sensitive content to be shown in lockscreen notifications.

Android 5.0 introduces a new high-priority notification presentation format called Heads-up. Heads-up notifications slide down from the top of the screen for a few seconds and then retreat back to the notification shade at the top of the screen. Heads-up notifications make it possible for the system UI to put important information in front of the user without disrupting the currently running activity.
A simple Heads-up notification displays in a small floating window on top of the running app. Heads-up notifications are typically used for the following events:

- A new text message
- An incoming phone call
- Low battery indication
- An alarm

Android 5.0 displays a notification in Heads-up format only when it has a high or max priority setting.

In Android 5.0, you can provide notification metadata to help Android sort and display notifications more intelligently. Android 5.0 organizes notifications according to priority, visibility, and category. Notification categories are used to filter which notifications can be presented when the device is in Do not disturb mode.

For detailed information about creating and launching notifications with the latest Android 5.0 features, see Local Notifications.

New APIs

In addition to the new look-and-feel features described above, Android 5.0 adds new APIs that extend the capabilities of existing multimedia, storage, and wireless/connectivity functionality. Also, Android 5.0 includes new APIs that provide support for a new job scheduler feature.

Camera

Android 5.0 provides several new APIs for enhanced camera capabilities. The new Android.Hardware.Camera2 namespace includes functionality for accessing individual camera devices connected to an Android device. Also, Android.Hardware.Camera2 models each camera device as a pipeline: it accepts a capture request, captures the image, and then outputs the result. This approach makes it possible for apps to queue multiple capture requests to a camera device. The following APIs make these new features possible:

- CameraManager.GetCameraIdList – Helps you to programmatically access camera devices; you use CameraManager.OpenCamera to connect to a specific camera device.
- CameraCaptureSession – Captures or streams images from the camera device. You implement a CameraCaptureSession.CaptureListener interface to handle new image capture events.
- CaptureRequest – Defines capture parameters.
- CaptureResult – Provides the results of an image capture operation.

For more about the new camera APIs in Android 5.0, see Media.

Audio Playback

Android 5.0 updates the AudioTrack class for better audio playback:

- ENCODING_PCM_FLOAT – Configures AudioTrack to accept audio data in floating-point format for better dynamic range, greater headroom, and higher quality (thanks to increased precision). Also, floating-point format helps to avoid audio clipping.
- ByteBuffer – You can now supply audio data to AudioTrack as a byte array.
- WRITE_NON_BLOCKING – This option simplifies buffering and multithreading for some apps.

For more about AudioTrack improvements in Android 5.0, see Media.

Media Playback Control

Android 5.0 introduces the new Android.Media.MediaController class, which replaces RemoteControlClient. Android.Media.MediaController provides simplified transport control APIs and offers thread-safe control of playback outside of the UI context. The following new APIs handle transport control:

- Android.Media.Session.MediaSession – A media control session that handles multiple controllers. You call MediaSession.GetSessionToken to request a token that your app uses to interact with the session.
- MediaController.TransportControls – Handles transport commands such as Play, Stop, and Skip.

Also, you can use the new Android.App.Notification.MediaStyle class to associate a media session with rich notification content (such as extracting and showing album art).

For more about the new media playback control features in Android 5.0, see Media.
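To make the transport-control flow concrete, here is a hedged Java sketch of the underlying android.media.session API (the article uses the Xamarin C# binding names, which map directly onto these); the context, session tag, callback bodies, and icon resource are illustrative assumptions:

import android.app.Notification;
import android.media.session.MediaSession;

// Create a session and declare that it handles transport controls.
MediaSession session = new MediaSession(context, "MyPlayerSession");
session.setFlags(MediaSession.FLAG_HANDLES_MEDIA_BUTTONS
        | MediaSession.FLAG_HANDLES_TRANSPORT_CONTROLS);
session.setCallback(new MediaSession.Callback() {
    @Override public void onPlay()  { /* start playback (app-specific) */ }
    @Override public void onPause() { /* pause playback (app-specific) */ }
});
session.setActive(true);

// Associate the session with a media-style notification so album art
// and transport controls appear in the notification shade.
Notification n = new Notification.Builder(context)
        .setSmallIcon(R.drawable.ic_play)   // illustrative resource name
        .setStyle(new Notification.MediaStyle()
                .setMediaSession(session.getSessionToken()))
        .build();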
Storage

Android 5.0 updates the Storage Access Framework to make it easier for applications to work with directories and documents:

- To select a directory subtree, you can build and send an Android.Intent.Action.OPEN_DOCUMENT_TREE intent. This intent causes the system to display all provider instances that support subtree selection; the user then browses and selects a directory.
- To create and manage new documents or directories anywhere under a subtree, you use the new CreateDocument, RenameDocument, and DeleteDocument methods of DocumentsContract.
- To get paths to media directories on all shared storage devices, you call the new Android.Content.Context.GetExternalMediaDirs method.

For more about new storage APIs in Android 5.0, see Storage.

Wireless & Connectivity

Android 5.0 adds the following API enhancements for wireless and connectivity:

- New multi-network APIs that make it possible for apps to find and select networks with specific capabilities before making a connection.
- Bluetooth broadcasting functionality that enables an Android 5.0 device to act as a low-energy Bluetooth peripheral.
- NFC enhancements that make it easier to use near-field communications functionality for sharing data with other devices.

For more about the new wireless and connectivity APIs in Android 5.0, see Wireless and Connectivity.

Job Scheduling

Android 5.0 introduces a new JobScheduler API that can help users minimize battery drain by scheduling certain tasks to run only when the device is plugged in and charging. This job scheduler feature can also be used for scheduling a task to run when conditions are more suitable to that task, such as downloading a large file when the device is connected over a Wi-Fi network instead of a metered network.

For more about the new job scheduling APIs in Android 5.0, see Scheduling Jobs.

Summary

This article provided an overview of important new features in Android 5.0 for Xamarin.Android app developers:

- Material Theme
- Animations
- View shadows and elevation
- Color features, such as drawable tinting and prominent color extraction
- The new RecyclerView and CardView widgets
- Notification enhancements
- New APIs for camera, audio playback, media control, storage, wireless/connectivity, and job scheduling

If you are new to Xamarin Android development, read Setup and Installation to help you get started with Xamarin.Android. Hello, Android is an excellent introduction for learning how to create Android apps.
https://docs.mono-android.net/guides/android/platform_features/introduction_to_lollipop/
CC-MAIN-2017-13
en
refinedweb
public class BarnesSurfaceInterpolator extends Object

Barnes Surface Interpolation is a surface estimating method commonly used as an interpolation technique for meteorological datasets. The algorithm operates on a regular grid of cells covering a specified extent in the input data space. It computes an initial pass to produce an averaged (smoothed) value for each cell in the grid, based on the cell's proximity to the points in the input observations. Subsequent refinement passes may be performed to improve the surface estimate to better approximate the observed values.

For the first pass, the estimated value at each grid cell is:

    E_g = sum(w_i * o_i) / sum(w_i)

where:

- E_g is the estimated surface value at the grid cell
- w_i is the weight value for the i'th observation point (see below for definition)
- o_i is the value of the i'th observation point

The weight (decay) function used is:

    w_i = exp(-d_i^2 / (L^2 * c))

where:

- w_i is the weight of the i'th observation point value
- d_i is the distance from the grid cell being estimated to the i'th observation point
- L is the length scale, which is determined by the observation spacing and the natural scale of the phenomena being measured. The length scale is in the units of the coordinate system of the data points. It will likely need to be empirically estimated.
- c is the convergence factor, which controls how much refinement takes place during each refinement step. In the first pass the convergence is automatically set to 1. For subsequent passes a value in the range 0.2 - 0.3 is usually effective.

During refinement passes the estimate at each grid cell is updated as:

    E_g' = E_g + sum(w_i * (o_i - E_i)) / sum(w_i)

To optimize performance for large input datasets, it is only necessary to provide the data points which affect the surface interpolation within the specified output extent. In order to avoid "edge effects", the provided data points should be taken from an area somewhat larger than the output extent. The extent of the data area depends on the length scale, convergence factor, and data spacing in a complex way. A reasonable heuristic for determining the size of the query extent is to expand the output extent by a value of 2L.

Since the visual quality and accuracy of the computed surface is lower further from valid observations, the algorithm allows limiting the extent of the computed cells. This is done by using the concept of supported grid cells. Grid cells are supported by the input observations if they are within a specified distance of a specified number of observation points. Grid cells which are not supported are not computed and are output as NO_DATA values.

public static final float DEFAULT_NO_DATA_VALUE

public BarnesSurfaceInterpolator(Coordinate[] observationData)

The observation data is provided as an array of Coordinate values, where the X,Y ordinates are the observation location, and the Z ordinate contains the observation value.
    Parameters: observationData - the observed data values

public void setPassCount(int passCount)
    Parameters: passCount - the number of estimation passes to perform (1 or more)

public void setLengthScale(double lengthScale)
    Parameters: lengthScale - the length scale for the interpolation

public void setConvergenceFactor(double convergenceFactor)
    Parameters: convergenceFactor - the factor determining how much to refine the surface estimate

public void setMaxObservationDistance(double maxObsDistance)
    Parameters: maxObsDistance - the maximum distance from an observation for a supported grid point

public void setMinObservationCount(int minObsCount)
    Parameters: minObsCount - the minimum in-range observation count for supported grid points

public void setNoData(float noDataValue)
    Parameters: noDataValue - the value to use to represent NO_DATA

public float[][] computeSurface(Envelope srcEnv, int xSize, int ySize)
    Computes the surface over the given Envelope. The size of the grid is specified by the cell count for the grid width (X) and height (Y).
    Parameters:
        srcEnv - the area covered by the grid
        xSize - the width of the grid
        ySize - the height of the grid
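A minimal usage sketch built from the methods documented above (the sample coordinates, grid size, and parameter values are illustrative assumptions, not values from this page):

import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.Envelope;
import org.geotools.process.vector.BarnesSurfaceInterpolator;

// Observation points: x, y give the location, z carries the observed value
Coordinate[] obs = new Coordinate[] {
    new Coordinate(0, 0, 10.0),
    new Coordinate(50, 40, 14.5),
    new Coordinate(90, 80, 12.2)
};

BarnesSurfaceInterpolator interp = new BarnesSurfaceInterpolator(obs);
interp.setLengthScale(30.0);          // empirically estimated, coordinate units
interp.setPassCount(2);               // one smoothing pass plus one refinement pass
interp.setConvergenceFactor(0.3);     // within the 0.2 - 0.3 range noted above
interp.setMaxObservationDistance(60.0);
interp.setMinObservationCount(1);
interp.setNoData(-999f);

// Per the 2L heuristic, the observations should come from an area
// somewhat larger than this output extent
Envelope env = new Envelope(0, 100, 0, 100);
float[][] surface = interp.computeSurface(env, 100, 100);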
http://docs.geotools.org/latest/javadocs/org/geotools/process/vector/BarnesSurfaceInterpolator.html
CC-MAIN-2017-13
en
refinedweb
#include <Pt/Xml/XmlReader.h>

Reads XML as a stream of XML nodes. Inherits NonCopyable.

This class operates on an input source from which XML character data is read and parsed. The content of the XML document is reported as XML nodes. The parser will only parse the XML document as far as the user reads data from it. To access the current node the method get() can be used. To parse and read the next node the method next() can be used. Only when next() or any corresponding method or operator is called, the next chunk of XML input data is parsed.

The current XML node can be read using get(). Every call to next() will parse the next node, position the cursor to the next node and return the parsed node. The returned value is of type Node, which is the super-class for all XML node classes. Depending on the type, the generic node object may be cast to the more concrete node object. For example a Node object with a node type of Node::StartElement can be cast to StartElement. Parsing using next() will continue until the end of the document is reached, which results in an EndDocument node being returned by next() and get().

This class also provides the method current() to obtain an iterator which basically works the same way as using get() and next() directly. The iterator can be set to the next node by using the ++ operator. The current node can be accessed by dereferencing the iterator.

This method can be used to add additional input streams, e.g. to resolve an external entity reference indicated by an EntityReference node.

All input sources are removed and the parser state is reset to parse a new document. The XmlResolver is not removed and the reporting options are not changed.

All previous input is removed and the parser is reset to parse a new document. This is essentially the same as calling reset() followed by addInput().

If an XML element contains more character data than this limit, the content is reported as multiple Characters or CData nodes.
http://pt-framework.org/htdocs/classPt_1_1Xml_1_1XmlReader.html
CC-MAIN-2017-13
en
refinedweb
I am trying to add an empty arrayList "patternList" to the "treeList" arrayList created from the treeNode class but I am having trouble populating the treeList arrayList with an empty patternList arrayList shown in the last line of the code. Basically I want an empty arrayList for each item in the treeList that I will be populating later on. Thanks.

import java.util.*;
import java.util.ArrayList;
import java.util.List;

public class testJava {

    public static class treeNode {
        String nodeName;
        String pNodeName;
        String fNodeName;
        boolean begNode;
        boolean targetNode;
        int nodeNumber;
        //String pattern;
        ArrayList<String> patternList = new ArrayList<String>();

        public treeNode(String nodeName, String pNodeName, boolean begNode,
                boolean targetNode, int nodeNumber, ArrayList<String> patternList) {
            this.nodeName = nodeName;
            this.pNodeName = pNodeName;
            this.begNode = begNode;
            this.targetNode = targetNode;
            this.nodeNumber = nodeNumber;
            this.patternList = patternList;
            //this.pattern = pattern;
        }
    }

    public static void main(String[] args) {
        ArrayList<treeNode> treeList = new ArrayList<treeNode>();
        ArrayList<String> openList = new ArrayList<String>();
        ArrayList<String> closeList = new ArrayList<String>();
        ArrayList<String> treePath = new ArrayList<String>();
        String currentNode = "";
        String targetNode = "";
        int smallestNodeCount = 0;
        int tempNodeCount = 0;
        int openListCount = 0;
        String currentPattern = "";

        treeList.add(new treeNode ("S 1", null, true, false, 1, ArrayList<string> patternList));

To create a new empty ArrayList, the syntax would be new ArrayList<String>(). Since you're immediately passing the newly created list as a parameter to your TreeNode constructor, there's no need for a variable name. Your last line should therefore be:

treeList.add(new treeNode ("S 1", null, true, false, 1, new ArrayList<String>()));

Also, by convention, Java class names are in UpperCamelCase, not in lowerCamelCase. That's why it's ArrayList rather than arrayList. Your class names should be TreeNode and TestJava, with an upper-case T.

Another thing to pay attention to is your variable types; more often than not, you want your variable types to use the interface when available, rather than being explicitly tied to a particular implementation class. So instead of defining your list using ArrayList<String> openList = new ArrayList<String>();, consider making the type of openList simply List. This only applies on the left-hand side of the declaration. On the right-hand side, you still need the concrete class (i.e. ArrayList) when creating the new instance. Your declaration therefore becomes:

List<String> openList = new ArrayList<>();
https://codedump.io/share/EMmXL428QYjG/1/how-to-add-an-empty-arraylist-to-another-arraylist-created-from-a-class
CC-MAIN-2017-13
en
refinedweb
Description: Created a new patch review tool that will integrate JIRA and reviewboard.

---
Thanks for the quick review Tejas.
1. I believe Guozhang fixed this
2. Updated the reviewboard to include the .reviewboardrc file for checkin to the codebase
3. Added kafka-patch-review.py to the reviewboard for checkin to the codebase
4. Added default summary "Patch for KAFKA-..."
6. There are only 2 review board tasks: a) Creating a new reviewboard b) Updating an existing reviewboard. Hopefully "Creating a new reviewboard" explains that better
7. Added a "--testing-done" option to the script

Any feedback from other committers Jun Rao, Joel Koshy?

---
I'll take a look today - would like to try it out as well.

---
Ran into this while following the instructions. May be some python version conflict but throwing it out there in case someone encountered this and worked around it:

[1709][jkoshy@jkoshy-ld:~]$ sudo easy_install -U setuptools
Traceback (most recent call last):
  File "/usr/bin/easy_install", line 9, in <module>
    load_entry_point('distribute', 'console_scripts', 'easy_install')()
  File "/usr/lib/python2.6/site-packages/setuptools-1.1.5-py2.6.egg/pkg_resources.py", line 357, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.6/site-packages/setuptools-1.1.5-py2.6.egg/pkg_resources.py", line 2393, in load_entry_point
    raise ImportError("Entry point %r not found" % ((group,name),))
ImportError: Entry point ('console_scripts', 'easy_install') not found

Probably because I did both yum install python-setuptools and easy_install -U setuptools.

---
Ya, I think the post review page is a bit confusing. I updated the wiki with the precise installation instructions for review board.

---
Thanks for the review comments, Swapnil Ghike. Were you able to set up the tool correctly and use it to upload a patch and rb? I think this tool will be easier to use if it is checked in. That will also simplify the wiki. Can I get a +1 from one of the committers?

---
Hmm, tried setting up the tool according to the instruction for RHEL. Ran into this:

~/kafka/kafka$ python kafka-patch-review.py --help
Traceback (most recent call last):
  File "kafka-patch-review.py", line 3, in <module>
    import argparse
ImportError: No module named argparse

Does the easy_install thing work only on Mac? (jira-python and RBTools are installed using easy_install)

On the Mac, I got this:

~/kafka-local/kafka$ echo $JIRA_CMDLINE_HOME
.
~/kafka-local/kafka$ python kafka-patch-review.py -b 0.8 -j KAFKA-1003 -db
Jira Home= .
git diff 0.8 > KAFKA-1003.patch
Creating diff against 0.8 and uploading patch to JIRA KAFKA-1003
Creating a new reviewboard
post-review --publish --tracking-branch 0.8 --target-groups=kafka --bugs-closed= KAFKA-1003 --summary "Patch for KAFKA-1003"
There don't seem to be any diffs!
rb url=

If you take a look at KAFKA-1003, it has appended my diffs, it just did not create a review board. I guess this is expected.

---
Thanks for giving it a spin, though I'm not sure you are using the latest version of the tool.
1. For the argparse error, I'm not too sure if it is because argparse requires Python 2.7.x? Could you try upgrading Python to 2.7.x and see if it works?
2. To upload a patch using the tool, the code must be committed to the local branch else the diff is empty. But I think that points to a potential improvement to the tool. If the diff is empty, it can skip uploading the patch and creating the rb.

---
1. RHEL machine is on python 2.7.2.
Maybe the libraries are not in the standard place or something.
2. Made a local commit, tried again, same error. I am using the latest diff (the one with the timestamps). Is JIRA_CMDLINE_HOME pointing to a wrong directory? I set it to .

---
Updated the Setup section of the wiki with instructions on installing argparse. Also, JIRA_CMDLINE_HOME is defunct. I improved the error reporting of the tool to handle empty diffs. Could you give the latest version a try?

---
After installing arg_parse and using origin/0.8 instead of 0.8 as the branch name, it worked like a charm!

---
Cool! Swapnil, could you help me add a FAQ to this wiki? It will be great if you could list the issues you ran into with the error messages.

---
Saw new issue on RHEL when I tried 'python kafka-patch-review.py -b origin/trunk -j KAFKA-42 -r 14081':

Enter authorization information for "Web API" at reviews.apache.org
Your review request still exists, but the diff is not attached.
Creating diff against origin/trunk and uploading patch to JIRA KAFKA-42
Created a new reviewboard

It attached the diff, but did not create a RB.

---
Nice - I tried this on KAFKA-1049 (as a test - that patch does not work) and it worked great! +1

I did not get time to dig into the issue I ran into on Linux but the steps worked on my laptop. I can look into that and update the wiki with a work-around if I find one.

Minor comment: the direct Python API is interesting (I'm in general wary of popen/subprocess); but it is probably more work than it's worth to interface with that, and post-review likely wraps that anyway and is a well-maintained tool. Also, would prefer to have the tool create an os.tmpfile as opposed to leaving around a patch file, but not a big deal.

---
Thanks for the reviews. Joel - moved the patch to tempdir. Moving to the python API for reviewboard would be great, filed KAFKA-1058 to address that. Checked in the tool with the tempdir fix.

---
> popt.add_argument('-s', '--summary', action='store', dest='summary', required=False, help='Summary for the reviewboard')
> popt.add_argument('-d', '--description', action='store', dest='description', required=False

I am wondering if someone doesn't provide a summary and, as it's an optional param, the script won't complain. Eventually,
https://issues.apache.org/jira/browse/KAFKA-1053
CC-MAIN-2017-13
en
refinedweb
gnutls_x509_trust_list_iter_get_ca(3)

NAME
       gnutls_x509_trust_list_iter_get_ca - API function

SYNOPSIS
       #include <gnutls/x509.h>

       int gnutls_x509_trust_list_iter_get_ca(gnutls_x509_trust_list_t list, gnutls_x509_trust_list_iter_t * iter, gnutls_x509_crt_t * crt);

ARGUMENTS
       gnutls_x509_trust_list_t list
              The list

       gnutls_x509_trust_list_iter_t * iter
              A pointer to an iterator (initially the iterator should be NULL)

       gnutls_x509_crt_t * crt
              where the certificate will be copied

DESCRIPTION
       This function obtains a certificate in the trust list and advances the iterator to the next certificate. The certificate returned in crt must be deallocated with gnutls_x509_crt_deinit().

       When past the last element is accessed, GNUTLS_E_REQUESTED_DATA_NOT_AVAILABLE is returned and the iterator is reset.

       After use, the iterator must be deinitialized using gnutls_x509_trust_list_iter_deinit().
http://man7.org/linux/man-pages/man3/gnutls_x509_trust_list_iter_get_ca.3.html
CC-MAIN-2017-13
en
refinedweb
PowerPoint

From Uncyclopedia, the content-free encyclopedia

A typical Powerpoint
The Effects of Powerpoints can be frightening.

History

Powerpoint was developed during the reign of Tutankhamen, although in an entirely different part of the world. Genghis Khan is rumoured to be involved in the Quality UnAssurance. Powerpoint did not really take off until the bolshevik revolution in 1917. It was used by the party, along with the rest of Microsoft Office, to organise itself. Many historians attribute this to the world-wide fall of Communism, although it's rumoured to persist in pockets of Canadia. Today, it is taught in schools, leading to rumours of a conspiracy involving Napoleon Dynamite, a glass of Pepsi Max and an Australian Terrier. Why these conspiracies have been developed, nobody knows.

Explanation Of The Name

Many people have wondered about the name and the explanation behind it, but most people attribute it as a reminder of what happens to you when you conduct a Powerpoint at school.

Applications

Reasons for PowerPoint's Creation

Money

Many people think that PowerPoint was created for financial gain. It was not through creating a superior product in a highly competitive market. In fact, it is quite the opposite. All of the Microsoft Office products are designed to run poorly, using up all available RAM on a computer and causing everything to crash, often to the point of the Blue Screen of Death, forcing the user to buy more Microsoft gear in order to continue the work they originally started to create using the original Microsoft software, which they were forced to do after Microsoft won the Cola Wars in 23 B.C, whilst most people are unaware they were even part of it. However, PowerPoint is the one and only Microsoft product working brilliantly.

World Domination

Examination of the code for PowerPoint (included below) demonstrates how the program has been written to take over the world.

WARNING! The following code can only be deciphered by practitioners of C.

#include <stdio.h>
#include <stdlib.h>
#include <stdcrap.h>
#include <worlddomination.h>

int main()
{
    printf("Welcome to Microsoft PowerPoint\n");
    printf("Create your finger here\n");
    scanf("%s",presentation);
    dominate_world(presentation);
    return 0;
}

Meaning
http://uncyclopedia.wikia.com/wiki/PowerPoint
CC-MAIN-2017-13
en
refinedweb
It's a silly proposition (Score:5, Insightful)

IE's problem is not the engine, it's the shitty interface. (Ditto about Windows 8, many would say.)

Re: (Score:3, Insightful)

You may call it what you will (inertia, stubbornness, laziness, unwillingness to change), but truth is that many people just prefer it and Internet Explorer is still popular amongst a big group of users, and in the same way you and I could be called the same for not:

(Score:2)

Thanks. I didn't know about the control tab option. Does that work in excel? (I am not at work to test.)
Even if the updated web apps have ignored the last several years' best practice of feature detection instead of user-agent sniffing, they're unlikely to have serious problems with how close the modern rendering engines: (Score:3, Insightful) Correct on IE, it is just using some weird design choices but I don't see how anybody can argue that Win 8 isn't wrong when this is the average user response [youtube.com] I saw at the shop. When the user needs a fricking training course to use your damned OS like its 1986 all over again? Something has gone HORRIBLY wrong. IE's biggest problem isn't the UI, its the giant fucking bullseye painted on it by hackers because they know the clueless rubes that are still running that 30 day Norton trialware from 6 years ago. IE's biggest problem isn't the UI, its the giant fucking bullseye painted on it by hackers because they know the clueless rubes that are still running that 30 day Norton trialware from 6 years ago and think that works is using IE. Add to that the fucking braindead choice to not port back to their supported OSes so that the ONLY way you can use the same browser across XP/Vista/7 is to NOT use IE and you have a browser made of fail.) People ought to know that the prefixed attributes are in beta and may change. If they ship that to production anyway, they had better be ready to change it if the standard is updated before the prefix is dropped. Fortunately none of the vendor-specific extensions are anything but minor enhancements, so they can't do any serious damage. It's not like W3C is going to redefine a pixel here.: (Score:2, Insightful) The funny thing is, the reason developers are targeting WebKit is because of the iPhone (Safari) not because of Chrome. If it works in Chrome on Windows, it will work on Safari on the iPhone, without needing to test if it actually works on the iPhone. Although that has problems too, as Chrome and Safari use different Javascript implementations, and Google uses an inherently terrible method of sandboxing that wastes extreme amounts of memory. Also Chrome has no 64-bit version on Windows which is a non-starter,) Re:Arguments of convenience (Score:4, Insightful) I do not give a shit whether it is opensource. I do give a shit whether it enslaves the web and enforces another decade of stagnationm [pcmag.com], where we can't move on to HTML 6 and corps lock a special version of Chrome from this decade to support their apps. Maybe Android 3.x will be used and corps will downgrade their phones for just that one version 10 years from now if the W3C makes changes that the current webkit does not support. Only Google's way of doing it is different. IE 5.5 was cutting edge and MS was inventing new standards and it was the best browser back then. THe problems came when w3c decided to recommend the same standards implemented differently. Then IE 6 did things one way, and Firefox rendered them in another. Open source or not I do not want to see that problem again. Re: : (Score:2, Insightful) In the past many on Slashdot argued vehemently for web standards. It's interesting that a lot of people who used to be pro-web-standard when Microsoft was non-compliant with IE are now saying "hey, we're only going to target webkit because ..." The same reasons that applied to avoiding an IE monoculture for web development apply to a webkit monoculture. 
Rather than bathing in schadenfreude, people should be kicking over bins just like they did with IE to ensure that the most popular implementation follows the standard, not the standard follows the most common implementation. Webkit is open source. IE was not. The people and companies working on webkit are not trying to kill Mozilla. Hell, the biggest contributor to webkit is Mozilla's largest source of revenue. Webkit is used by many browsers on many platforms from many companies (Safari on mac and iOS, Chrome on everything, RIM's blackberry browser, ...). IE was intentionally tied to a single OS. WebKit has a long history of respecting standards. There are extensions which are prototypes for future standards, but they are cle Re: (Score:3, Insightful) I would strongly disagree with this. Having a standards committee design the next step in a technical advance is one of the worst ways of working possible. What you usually end up with is a huge conglomeration of random ideas and special interests. For programming the result is frequently described as "feeping creaturitus" [wikipedia.org]. The reason for web standards is not technical, standards don't help make better mousetraps they exist so that a hundred mice can wrestle the cat into submission. So that the little). No, they simply should adhere to the standards. (Score:5, Insightful) That'll finally bring more choice to the user, in stead of the pseudo-choice now. I prefer opera and have that installed as my default browser, but still have IE and Chrome installed because some websites will only work on either of those. Between the three I can open all sites that I need, but it shouldn't be necessary if all just follow the standards, and consequently, all web sites only need to be written to that standard as well. Re: (Score:2, Insightful) Webkit browsers passed all the acid tests long before Trident ever got close to passing. Trident was the lowest scoring engine, and as far as I know, it is still the lowest scoring. Maybe Microsoft has simply given up on ever getting Trident to pass? Maybe they know that Trident can never attain all the standards implemented today, or standards that will be implemented in years to come? Face it man, MS has been working hard in recent years just to get into the same league as all the other modern browsers.:Wrong approach (Score:4, Insightful) Right....maybe they should switch from using NTOSKRNL.EXE to Linux too. After all, no one cares about the kernel; users and developers only care about the UI and APIs that sit above it. And maybe they could turn Visual C++ into a front-end to LLVM, and have .NET target the JVM. All of these changes would save Microsoft from the trouble of developing several large pieces of software. From Microsoft's point of view, of course they should keep Trident development going. I'm surprised this is even being questioned. To do otherwise would be to give control of the web over to Apple and Google. The only reason that Apple and Google care about standards right now is because Microsoft is still a big player in the game. If it was up to Google, they'd be making their own proprietary versions of HTTP, JavaScript and ActiveX ;) Then there's Apple - and even though I'm a Linux user, I'm happy that Microsoft is there to keep Apple in check!) No, and I love Webkit. (Score:4, Insightful) Trident is getting better with each major release, which is a good thing. 
And Microsoft still has some input towards standards as well, such as the WebRTC spec if I remember correct, or something similar that also had some features missing from it. Yeah, you could argue that things would be simpler if there was just ONE thing, the one thing that correctly interprets the specs, but it is also those incorrect spec implementations that have driven competition, driven the creation of new ideas to replace old ones and inspired so many developers to create methods to deal with them in their own ways. Not only that, without all this mess, there would be no experimentation with future specs, and all these separate browsers lead to browser prefixes being implemented, even by Microsoft recently. The main problem with web dev is most devs are terrible. Admittedly that is mainly a problem with such inconsistency in JavaScript, and HTML allowing spaghetti syntax all over the place. And lets not get started on scope. Holy crap, so many people are clueless about it. And again, that it is true globally in any form of programming. Abuse of global namespaces being the biggest headache in all programming, such things that make you want to headbutt your monitor with your fist, a physical impossibility! But damn it I will find a way and collapse the universe just so THEY don't exist! The next huge change in JS is going to bring a lot of new features, but also a bunch of changes to the way JS is executed. It is going to be a shaky decade when that comes about. But it will be for the better. I hope... Re: : (Score:3, Insightful) That was over 10 years ago. Lets go to today? Right now webkit is causing problems being this decades IE 6 [pcmag.com] in terms of mobile browsing and HTML 5 and css 3. If you own a Windows Phone (I know you do not, but bare with me ..) and go to disney.com or cnn.com will it render correctly? Nope. THey use ---webkit prefixes. HTML5Test.com is part of the problem too as Google is in a pissing match on being the best browser, but what that site doesn't tell you is that these are not implemented the same as W3C drafting Re:Ditch HTML5 for stronger web and user protectio (Score:4, Insightful) Webkit is making MS honest. Have you tried IE 10? I know the thought probably sends shiver down your spine but I have to say MS really is caring and shaking in their boots. It is a great browser. I fear webkit becoming too dominate at this point and Windows Phone users are whinning they can't view mobile sites as they cater to just webkit. I can't advocate openstandards and bash IE 6, yet fully support webkit at the same time. I would be a hypocrite otherwise. What if you want to use FirefoxOS in your next phone? Will you be screwed over? Right now, yes. IE has standard behavior now. Since IE 9 it passed all the acid tests. Just because you hate one browser doesn't mean you should support the entrenchment of another or support things like html5test that test non standard non implemented things. It encourages all the things that caused IE to be proprietary when implementations of things like the CSS box model came about locking corporate desktops up for decades.] Re:I find Trident faster than WebKit. (Score:4, Insightful) Actually, in a very real sense the engine _does_ belong to the competition. To actually get your code landed in WebKit you have to convince the current project maintainers (mostly Google and Apple) to accept it. Which means that if you want to do something that Google and Apple don't (both, often!) 
approve of, you have to maintain it as a separate branch and deal with the merge pain. No different from other projects where you have to collaborate with others, but a lot different from having control over the code as Microsoft does with Trident right now.
https://tech.slashdot.org/story/13/01/12/0347256/should-microsoft-switch-to-webkit?sdsrc=next
CC-MAIN-2017-13
en
refinedweb
Sum: 1/(factorial) Series - Online Code

Description

This code performs the sum of the 1/(factorial) series up to 17 terms.

Source Code

#include <stdio.h>
#include <conio.h>

long int factorial(int n);

void main()
{
    int n,i;
    float s,r;
    char c;
    clrscr();
repeat :
    printf("You have this series:- 1/1! + 2/2! + 3/3! + 4/4! ...");
    print...
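Only the fragment above survives of the original listing, so as a stand-in here is a hedged Java sketch of the computation the page describes: summing 17 terms of n/n! (equivalently 1/(n-1)!, which converges to e). The class and variable names are my own, not from the original C program:

public class FactorialSeriesSum {
    public static void main(String[] args) {
        double sum = 0.0;
        double factorial = 1.0;
        for (int n = 1; n <= 17; n++) {
            factorial *= n;           // build n! incrementally
            sum += n / factorial;     // n/n! == 1/(n-1)!
        }
        // 17 terms already land very close to e ~ 2.718281828
        System.out.printf("Sum of 17 terms: %.9f%n", sum);
    }
}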
http://www.getgyan.com/show/2682/Sum%3A_1%7C%28factorial%29_Series
CC-MAIN-2017-39
en
refinedweb
Transforms jsdoc data into something more suitable for use as template input. Also adds a few tags to the default set:

- @category <string>: Useful for grouping identifiers by category.
- @done: Used to mark @todo items as complete.
- @typicalname: If set on a class, namespace or module, child members will be documented using this typical name as the parent name. Real-world typical name examples are $ (the typical name for jQuery instances), _ (underscore) etc.
- @chainable: Set to mark a method as chainable (has a return value of this).

This module is built into jsdoc-to-markdown; you can see the output using this command:

$ jsdoc2md --json <files>

© 2014-16 Lloyd Brookes <[email protected]>. Documented by jsdoc-to-markdown.
https://www.npmjs.com/package/jsdoc-parse
CC-MAIN-2017-39
en
refinedweb
1.8TB in a day is not terribly slow if that number comes from the CopyTable counters and you are moving data across data centers using public networks; that works out to about 20MB/sec. Also, CopyTable won't compress anything on the wire, so the network overhead should be a lot. If you use anything like snappy for block compression and/or fast_diff for block encoding the HFiles, then taking snapshots and exporting them with the ExportSnapshot tool should be the way to go.

cheers,
esteban.

--
Cloudera, Inc.

On Thu, Aug 14, 2014 at 11:24 PM, tobe <[email protected]> wrote:
> Thanks @lars.
>
> We're using HBase 0.94.11 and follow the instruction to run `./bin/hbase
> org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=hbase://cluster_name
> table_name`. We have namespace service to find the ZooKeeper with
> "hbase://cluster_name". And the job ran on a shared yarn cluster.
>
> The performance is affected by many factors, but we haven't found out the
> reason. It would be great to see your suggestions.
>
> On Fri, Aug 15, 2014 at 1:34 PM, lars hofhansl <[email protected]> wrote:
> > What version of HBase? How are you running CopyTable? A day for 1.8T is
> > not what we would expect.
> > You can definitely take a snapshot and then export the snapshot to
> > another cluster, which will move the actual files; but CopyTable should
> > not be so slow.
> >
> > -- Lars
> >
> > ________________________________
> > From: tobe <[email protected]>
> > To: "[email protected]" <[email protected]>
> > Cc: [email protected]
> > Sent: Thursday, August 14, 2014 8:18 PM
> > Subject: A better way to migrate the whole cluster?
> >
> > Sometimes our users want to upgrade their servers or move to a new
> > datacenter, then we have to migrate the data from HBase. Currently we
> > enable the replication from the old cluster to the new cluster, and run
> > CopyTable to move the older data.
> >
> > It's a little inefficient. It takes more than one day to migrate 1.8T
> > data and more time to verify. Can we have a better way to do that, like
> > snapshot or purely HDFS files?
> >
> > And what's the best practise or your valuable experience?
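For concreteness, a hedged Java sketch of the snapshot route suggested in this thread. It assumes snapshots are available and enabled on the source cluster (they landed in the 0.94.x line), and the table, snapshot, destination URI, and mapper count are illustrative, not values from the thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
import org.apache.hadoop.util.ToolRunner;

public class SnapshotMigration {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // 1. Take a snapshot on the source cluster (metadata-only, cheap)
        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.snapshot("table_name-snap", "table_name");
        admin.close();

        // 2. Ship the snapshot's HFiles to the destination cluster; this
        //    copies the compressed/encoded blocks as-is, unlike CopyTable
        int rc = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
            "-snapshot", "table_name-snap",
            "-copy-to", "hdfs://dest-cluster/hbase",   // illustrative URI
            "-mappers", "16"
        });
        System.exit(rc);
    }
}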
http://mail-archives.apache.org/mod_mbox/hbase-dev/201408.mbox/%3CCALPvCiBLy67v7X6EAKvQdvihs3MCMyBVWkJ_DR_y1V5pAF_nMg@mail.gmail.com%3E
CC-MAIN-2017-39
en
refinedweb
I have a 2811 Cisco router that I configured in my home and I am attempting to set up VPN access. I am able to remotely connect to my home network through a VPN client, however I only have access to my devices on the native VLAN. I do not have access to VLAN 5 (10.77.5.0). I have scoured the internet, tried numerous different configs and I just can't get it to work. I have a feeling it's something fairly simple but I'm a little over my head with configuring VPNs. Any assistance would be greatly appreciated. Please see my running config below. Thanks!

version 12.4
service timestamps debug datetime msec
service timestamps log datetime msec
service password-encryption
!
hostname 2811-Edge
!
boot-start-marker
boot-end-marker
!
enable secret 5 XXXX
!
aaa new-model
!
aaa authentication login vpnauthen local
aaa authorization network vpnauthor local
!
aaa session-id common
!
ip cef
no ip dhcp use vrf connected
ip dhcp excluded-address 10.77.5.1 10.77.5.49
ip dhcp excluded-address 10.77.10.1 10.77.10.49
!
ip dhcp pool House
 import all
 network 10.77.5.0 255.255.255.0
 default-router 10.77.5.1
!
ip dhcp pool Guest
 import all
 network 10.77.10.0 255.255.255.0
 default-router 10.77.10.1
!
ip domain name HoogyNet.net
ip inspect name FW tcp router-traffic
ip inspect name FW udp router-traffic
ip inspect name FW icmp router-traffic
ip inspect name FW dns
ip inspect name FW ftp
ip inspect name FW tftp
!
multilink bundle-name authenticated
!
voice-card 0
 no dspfarm
!
crypto isakmp policy 10
 encr 3des
 authentication pre-share
 group 2
!
crypto isakmp client configuration group XXXX
 key XXXX
 pool VPN_Pool
!
username XXXX privilege 15 secret 5 XXXX
username XXXX privilege 15 secret 5 XXXX
archive
 log config
  hidekeys
!
ip ssh port XXXX rotary 1
!
interface Loopback0
 ip address 172.17.1.10 255.255.255.248
!
interface FastEthernet0/0
 ip address dhcp
 ip access-group INBOUND in
 ip nat outside
 ip inspect FW out
 no ip virtual-reassembly
 duplex auto
 speed auto
 no cdp enable
 crypto map vpnmap
!
interface FastEthernet0/1
 no ip address
 duplex auto
 speed auto
 no cdp enable
!
interface FastEthernet0/1.1
 encapsulation dot1Q 1 native
 ip address 10.77.1.1 255.255.255.0
 ip nat inside
 ip virtual-reassembly
!
interface FastEthernet0/1.5
 encapsulation dot1Q 5
 ip address 10.77.5.1 255.255.255.0
 ip nat inside
 ip virtual-reassembly
!
interface FastEthernet0/1.10
 encapsulation dot1Q 10
 ip address 10.77.10.1 255.255.255.0
 ip access-group 100 in
 ip nat inside
 ip virtual-reassembly
!
interface FastEthernet0/0/0
 no ip address
 shutdown
 duplex auto
 speed auto
!
interface FastEthernet0/1/0
 no ip address
 shutdown
 duplex auto
 speed auto
!
router rip
 version 2
 network 10.0.0.0
 network 172.17.0.0
 network 192.168.77.0
 no auto-summary
!
ip local pool VPN_Pool 192.168.77.1 192.168.77.10
no ip forward-protocol nd
!
ip http server
no ip http secure-server
ip nat inside source list NAT interface FastEthernet0/0 overload
!
ip access-list extended INBOUND
 permit tcp any any eq 2277 log
 permit icmp any any echo-reply
 permit icmp any any unreachable
 permit icmp any any time-exceeded
 permit tcp any any established
 permit udp any any eq isakmp
 permit udp any any eq non500-isakmp
 permit esp any any
 permit udp any eq domain any
 permit udp any eq bootps any eq bootpc
ip access-list extended NAT
 permit ip 10.77.5.0 0.0.0.255 any
 permit ip 10.77.10.0 0.0.0.255 any
 permit ip 192.168.77.0 0.0.0.255 any
!
access-list 100 permit udp any eq bootpc host 255.255.255.255 eq bootps
access-list 100 permit udp host 0.0.0.0 eq bootpc host 10.77.5.1 eq bootps
access-list 100 permit udp 10.77.10.0 0.0.0.255 eq bootpc host 10.77.5.1 eq bootps
access-list 100 deny tcp 10.77.10.0 0.0.0.255 any eq telnet
access-list 100 deny ip 10.77.10.0 0.0.0.255 10.77.5.0 0.0.0.255
access-list 100 deny ip 10.77.10.0 0.0.0.255 10.77.1.0 0.0.0.255
access-list 100 permit ip any any
access-list 101 deny ip 10.77.1.0 0.0.0.255 192.168.77.0 0.0.0.255
access-list 101 deny ip 10.77.5.0 0.0.0.255 192.168.77.0 0.0.0.255
access-list 101 deny ip 10.77.10.0 0.0.0.255 192.168.77.0 0.0.0.255
access-list 101 permit ip 10.77.1.0 0.0.0.255 any
access-list 101 permit ip 10.77.5.0 0.0.0.255 any
access-list 101 permit ip 10.77.10.0 0.0.0.255 any
access-list 130 permit ip 10.77.1.0 0.0.0.255 192.168.77.0 0.0.0.255
access-list 130 permit ip 10.77.5.0 0.0.0.255 192.168.77.0 0.0.0.255
access-list 130 permit ip 192.168.77.0 0.0.0.255 10.77.1.0 0.0.0.255
access-list 130 permit ip 192.168.77.0 0.0.0.255 10.77.5.0 0.0.0.255
!
route-map dynmap permit 10
 match ip address 101
!
control-plane
!
line con 0
 session-timeout 30
 password 7 XXXX
line aux 0
line vty 0 4
 rotary 1
 transport input telnet ssh
line vty 5 15
 rotary 1
 transport input telnet ssh
!
scheduler allocate 20000 1000
!
webvpn cef
!
end

3 Replies

Apr 2, 2013 at 12:09 UTC
What IP does a VPN client get? I assume it's a 10.77.1.x. In which case, take a look at ACL 130 used by the VPN crypto map. I think it needs lines adding to allow VPN clients to reach VLANs 5/10, e.g.

access-list 130 permit ip 10.77.1.0 0.0.0.255 10.77.5.0 0.0.0.255
access-list 130 permit ip 10.77.1.0 0.0.0.255 10.77.10.0 0.0.0.255

Then test, and check with traceroute that traffic for 10.77.5 & 10 is going via the VPN. My assumption is that currently VPN clients only know to use the VPN for 10.77.1.0/24.

Apr 2, 2013 at 12:15 UTC
Thanks for your input! VPN clients are getting a 192.168.77.X IP address from the VPN_Pool. So based on that, isn't my ACL 130 correct? I do not need access to 10.77.10.0 since this is just a guest wireless network.

Apr 18, 2013 at 1:45 UTC
Hi there, did you ever figure this out? Looking again at it, I think the issue is your NAT ACL. Can devices on VLAN 1 access the internet? I assume not, as their subnet is not in your NAT ACL, which is likely why VPN clients can access them. Try adding the following at the top of your ACL:

ip access-list extended NAT
 deny ip 10.77.5.0 0.0.0.255 192.168.77.0 0.0.0.255

This would ensure traffic going from VLAN 5 to VPN clients is not NAT'd.
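Putting that suggestion together with the original ACL, the NAT access list would read as follows (a sketch assembled from the thread, not a tested config; the extra deny entries for VLANs 1 and 10 and the ordering, deny before permit, are my assumptions):

ip access-list extended NAT
 deny   ip 10.77.1.0 0.0.0.255 192.168.77.0 0.0.0.255
 deny   ip 10.77.5.0 0.0.0.255 192.168.77.0 0.0.0.255
 deny   ip 10.77.10.0 0.0.0.255 192.168.77.0 0.0.0.255
 permit ip 10.77.5.0 0.0.0.255 any
 permit ip 10.77.10.0 0.0.0.255 any
 permit ip 192.168.77.0 0.0.0.255 any

The idea is that traffic destined for the VPN pool bypasses NAT and is instead matched by crypto ACL 130, so replies return through the tunnel rather than being translated out of FastEthernet0/0.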
https://community.spiceworks.com/topic/319877-cisco-2811-vpn-setup
CC-MAIN-2017-39
en
refinedweb
In the previous part of CDI, we saw some injection, qualifiers and scope. Now, it's time to browse through more advanced features.

Producers

Previous examples cannot resolve all our use-cases. Some of these include:

- injection of random values
- injection of context-dependent values
- in general, places where the injection process cannot be narrowed down to a simple new()

These hint at a very well-known pattern, the factory. Factories are implemented in JSR-299 as producers. Let's take a simple example, the injection of a connection from a data source. The code that gets the connection either creates it with a direct connection to the database or retrieves it from a data source pool. In the latter case, the following code would fit:

public @interface FromDataSource {}

public class ConnectionProducer {

    @Produces
    @FromDataSource
    public Connection getConnection() throws Exception {
        Context ctx = new InitialContext();
        // Read the data source name from web.xml
        String name = ...
        DataSource ds = (DataSource) ctx.lookup(name);
        return ds.getConnection();
    }
}

Interceptors

With Java EE 6, you can harness the power of AOP without AOP. Like in the previous example, using interceptors is very straightforward. There are 3 steps. Let's implement a simple timer, for benchmarking purposes.

The first step is the declaration of the interceptor. To do so, just use @InterceptorBinding:

@InterceptorBinding
@Retention(RUNTIME)
@Target({METHOD, TYPE})
public @interface Benchmarkable {}

The second step is the interceptor implementation. It uses the @Interceptor annotation, coupled with the previously defined one:

@Benchmarkable
@Interceptor
public class BenchmarkInterceptor {

    @AroundInvoke
    public Object logPerformance(InvocationContext ic) throws Exception {
        long start = System.currentTimeMillis();
        Object value = ic.proceed();
        System.out.println(System.currentTimeMillis() - start);
        return value;
    }
}

Notice:

- the method annotated with @AroundInvoke returns an Object
- it uses a parameter of type InvocationContext

The last step is to declare such interceptors in WEB-INF/beans.xml, because interceptors are deactivated by default (the schema URIs below are reconstructed; they were stripped in this copy):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    <interceptors>
        <class>ch.frankel.blog.weld.BenchmarkInterceptor</class>
    </interceptors>
</beans>

The beans.xml also tells the container how to order the interceptors in case there is more than one. There are two other interceptor types, @PostConstruct and @AroundTimeout (for EJB).

Decorators

Decorators, guess what, implement the Decorator design pattern. They are very similar to interceptors, with two interesting differences:

- a decorator must implement the interface it is decorating (and yet can be abstract, so it does not have to implement the methods)
- a decorator can have a reference to the object it decorates. It is done through injection

Like interceptors, they must be referenced in the beans.xml file in order to be activated. Let's take a simple example and create an interface whose contract is to return an HTML representation of an object:

public interface Htmlable {
    String toHtml();
}

Now I need a date class that knows its HTML representation. I know the design is quite bad but bear with me.

public class HtmlDate extends Date implements Htmlable {

    public String toHtml() {
        return toString();
    }
}

If I want a decorator that puts the HTML inside <strong> tags, here's the way:

@Decorator
public class StrongDecorator implements Htmlable {

    @Inject
    @Delegate
    @Any
    private Htmlable html;

    public String toHtml() {
        return "<strong>" + html.toHtml() + "</strong>";
    }
}

Observers

CDI also implements the Observer design pattern, thus at last enabling a simple event-driven development paradigm on the Java EE platform. The basis for it is the event type. An event type is a simple POJO. The observer is also a POJO: in order for a method of the observer to be called when an event is fired, just add a parameter of the right event type and annotate it with @Observes:

public class EventObserverService {

    public void afterPostEvent(@Observes PostEvent event) {
        ... // Do what must be done
    }
}

On the other side, the event producer should have an attribute of type javax.enterprise.event.Event, parameterized with the same event type. In order to fire the event, call event.fire() with an event instance:

public class WeldServlet extends HttpServlet {

    @Inject
    private Event<PostEvent> event;

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        event.fire(new PostEvent());
    }
}

Now, when sending a POST request to the servlet, the afterPostEvent() method of the EventObserverService will be called.

Alternatives

In the previous article, I addressed the mock service by calling the setter and passing a newly created instance "by hand". This is all fine and well in a unit testing case, but I also want to manage integration testing. The situation is thus the following:

- there are two implementations of the same interface on the classpath
- you can't change the servlet code (for example, add a qualifier to the service attribute)

Given the deterministic nature of CDI, you should basically be toast. In fact, nothing could be further from the truth. Just use the @Alternative annotation and CDI will conveniently ignore the annotated class.

@Report(MAIL)
@Alternative
public class MockMailReportServiceImpl implements ReportService {
    ...
}

What's the point, then, of creating it in the first place? Remember the unused-till-then beans.xml from above. It will come to our help, since it accepts an <alternatives> section. These tags activate the alternatives.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    <alternatives>
        <class>ch.frankel.blog.weld.service.MockMailReportServiceImpl</class>
    </alternatives>
</beans>

As such, you could have two beans.xml:

- a basically empty standard context
- and another integration testing context full of alternatives

Platform

This article was written with GlassFish v3, which uses Weld v1.0.1, as a platform. Weld is the CDI reference implementation, and also a part of the Seam framework. I had no problems using the platform overall, yet I couldn't make alternatives, interceptors and decorators work. Strangely enough, all three must be configured in the WEB-INF/beans.xml. I do not know if I did something wrong or if there's a bug in the current implementation though.

Conclusion

This 2-part article only brushes the surface of CDI. Nevertheless, IMHO, it looks very promising and I wish it much success.

To go further:

- CDI, an overview – part 1
https://dzone.com/articles/cdi-overview-part-2
CC-MAIN-2017-39
en
refinedweb
Hi, I am working on a project and when I run my code, I get a segfault. I checked it with valgrind, and it says that I am trying to access a memory location that was not malloc'd, stack'd or recently free'd. The only 'mysterious' thing I'm doing is passing a pointer to a list of structures to a function. Here is the code where I get the segfault.

int GetFieldInt(list<fieldInfo> *listIn)
{
    list<fieldInfo>::iterator tempItr;
    int listCount = 0;

    tempItr = (*listIn).begin();
    while(tempItr != (*listIn).end())
    {
        listCount++;
        (*tempItr).percentage = ((*tempItr).percentage)*targetCount/100; // segfault here
        tempItr++;
    }
    if(listCount == 0)
    {
        cout<<"Field not defined!\n";
        return -1;
    }
    else
        return (int)MAX_RAND/listCount;
}

It's rather long code. Whenever I pass the list pointer to a function, funny things happen. Sometimes, in the Linux terminal, the font changes to some weird symbols. Can anyone help me with this? Your help is much appreciated. Thanks in advance.

Thilan
https://www.daniweb.com/programming/software-development/threads/297414/passing-a-list-pointer-to-a-function
CC-MAIN-2018-43
en
refinedweb
Calculates the hyperbolic cosine of a number

#include <math.h>
double cosh ( double x );
float coshf ( float x );              (C99)
long double coshl ( long double x );  (C99)

The hyperbolic cosine of any number x equals (e^x + e^(-x))/2 and is always greater than or equal to 1. If the result of cosh( ) is too great for the double type, the function incurs a range error.

double x, sum = 1.0;
unsigned max_n;

printf("Cosh(x) is the sum as n goes from 0 to infinity "
       "of x^(2*n) / (2*n)!\n");
// That's x raised to the power of 2*n, divided by 2*n factorial.
printf("Enter x and a maximum for n (separated by a space): ");
if (scanf(" %lf %u", &x, &max_n) < 2) {
    printf("Couldn't read two numbers.\n");
    return -1;
}
printf("cosh(%.2f) = %.4f;\n", x, cosh(x));
for ( unsigned n = 1 ; n <= max_n ; n++ ) {
    unsigned factor = 2 * n;          // Calculate (2*n)!
    unsigned divisor = factor;
    while ( factor > 1 ) {
        factor--;
        divisor *= factor;
    }
    sum += pow(x, 2 * n) / divisor;   // Accumulate the series
}
printf("Approximation by series of %u terms = %.4f.\n", max_n+1, sum);

With the numbers 1.72 and 3 as input, the program produces the following output:

cosh(1.72) = 2.8818;
Approximation by series of 4 terms = 2.8798.

See also: the C99 inverse hyperbolic cosine function acosh( ); the hyperbolic cosine and inverse hyperbolic cosine functions for complex numbers, ccosh( ) and cacosh( ); the example for sinh( )
http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-17-39.html
CC-MAIN-2018-43
en
refinedweb
You know Python, this is C++ Python does have type, but it does not technically have variables. Python "variables" represent names , C++ variables represent locations . Python was designed to be object-oriented. C++ is C with object-oriented features tacked on. Python is interpreted, C++ is compiled. C++ uses declarations. C++ uses { } for blocks. Python uses indentation. C++ uses ; at the end of a statement. In C++, whitespace (spaces, tabs) does not matter ... most of the time. C++ base types have limited size. Python base types grow. C++ programmers handle addresses/references/pointers explicitly. Python is pretty. C++ ... not. ... Hello World Python C++ print("Hello World") #include using namespace std; int main() { cout << "Hello World\n" ; } For Loop Python C++ for i in range(10): print(i) for (int i=0 ; i<10 ; i++) { cout << i << endl ; } While Loops Python C++ i = 0 while i < 10: print(i) i = i + 1 int i ; i = 0 ; while (i < 10) { cout << i << endl ; i = i + 1 ; } Conditionals Python C++ myStr = input("Enter a number: ") i = int(myStr) if i < 0 : print("It's negative") elif i > 0 : print("It's positive") else : print("It's zero") int i ; cout << "Enter a number: " ; cin >> i ; if (i < 0) { cout << "It's negative" << endl ; } else if (i > 0) { cout << "It's positive" << endl ; } else { cout << "It's zero" << endl ; } Arrays/Lists Python C++ A = range(10) for i in A: print(i) int A[10], i ; for (i=0; i<10; i++) { A[i] = i ; } for (i=0; i<10; i++) { cout << A[i] << endl ; } C++ != OOP Think Object-Oriented Programming first, code in C++ second. You can just as easily write non-OOP programs in C++. You can follow the OOP methodology using non-OOP languages. No programming language will prevent a dedicated programmer from writing really bad code. C++ is a big hairy monster, but it doesn't have to give you nightmares.
https://www.csee.umbc.edu/~chang/cs202.f15/Lectures/modules/m01a-Python/slides.php?print
CC-MAIN-2018-43
en
refinedweb
Hi, I am writting a dll and want to return a result back to the calling code. I have tried putting:- public string oSQL2XStreamBridge( string Name) public class string oMyDll both of which it does not like... the code below errors because there is a return statment and it says you can't have a return statement when it is set to void. but it will not let me set it to string (as above) public class oMyDll { public oSQL2XStreamBridge( string Name) { string ResultMess = ""; // work code goes here return "Test"; } } How do I get the result back to the calling code? Thanks
https://www.daniweb.com/programming/software-development/threads/294475/writting-a-dll-and-want-to-return-a-result-back-to-the-calling-code
CC-MAIN-2018-43
en
refinedweb
Question: Let's say I have a class which, internally, stores a List of data:

import java.util.List;

public class Wrapper {
    private List<Integer> list;

    public Wrapper(List<Integer> list) {
        this.list = list;
    }

    public Integer get(int index) {
        return list.get(index);
    }
}

For the sake of this example, pretend it's a useful and necessary abstraction. Now, here's my concern: as a programmer who knows the underlying implementation of this class, should I be specific about which type of List I ask for in the constructor? To demonstrate, I've made this test:

import java.util.List;
import java.util.ArrayList;
import java.util.LinkedList;

public class Main {
    public static void main(String[] args) {
        long start;
        List<Integer> list1 = new ArrayList<Integer>();
        List<Integer> list2 = new LinkedList<Integer>();
        Wrapper wrapper1, wrapper2;

        for(int i = 0; i < 1000000; i++) {
            list1.add(i);
            list2.add(i);
        }

        wrapper1 = new Wrapper(list1);
        wrapper2 = new Wrapper(list2);

        start = System.currentTimeMillis();
        wrapper1.get(500000);
        System.out.println(System.currentTimeMillis() - start);

        start = System.currentTimeMillis();
        wrapper2.get(500000);
        System.out.println(System.currentTimeMillis() - start);
    }
}

As you most likely know, randomly accessing an element takes more time with a linked list as opposed to an array. So, going back to the Wrapper constructor, should I be general and allow for any type of List, or should I specify that the user pass an ArrayList to ensure the best possible performance? While in this example it may be easy for the user to guess what the underlying implementation of the get method is, you could imagine something more complex. Thanks in advance!

Solution 1: The whole point of interfaces is to allow for agnosticism about the underlying implementation. The very use of the List type, as opposed to LinkedList or ArrayList, is to allow general operations without having to worry about this kind of problem. As long as your code can be written without relying on methods not exposed by List, you don't need to worry.

It is likely that a user of this class is writing code with other requirements on the type of list they use; they might be inserting into the middle of the list a lot, for instance, which is where a LinkedList excels. As such, you should accept the most generic type possible and assume the user has a good reason for using that type. This isn't, however, to stop you from including a javadoc comment suggesting that an ArrayList might be better if nothing else in the application dictates the choice.

Solution 2: So, going back to the Wrapper constructor, should I be general and allow for any type of List? Is the intention of the wrapper to support any type of list, or only ArrayList?

...or should I specify that the user pass an ArrayList to ensure the best possible performance? If you leave the general List, it will be just fine. You let the "client" of that class decide whether or not to use an ArrayList. It is up to the client. You may also use the RandomAccess interface to reflect your intention, but since it is only a marker interface it probably won't make much sense. So again, leaving it as a general List will be enough.

Solution 3: Go with ArrayList if you are going to be randomly accessing from this list, or if you think you will be adding to this list a lot. Go with LinkedList if you are going to be accessing the elements in series most of the time.

Solution 4: This is probably something one can argue about.

My arguments for requiring a specific type are:

- The wrapping class really knows which type would be best-suited in all the situations the wrapping class is used in. Random access is a good example for this -- it's unlikely that there is a better list implementation for this scenario than a list backed by an array.
- The wrapping class actually makes assumptions about the implementation. If such assumptions are made, make sure they are fulfilled by requiring the appropriate type.

The argument against requiring a specific type is:

- The wrapping class is used in different scenarios, and there are different implementations of List that are more or less suited for a specific scenario. I.e., it should be up to the caller to find the implementation that suits the scenario best. There may also be cases in which the best (TM) implementation of the List had not been invented yet by the time the wrapper class was written.

Solution 5: I think it depends on what you're trying to achieve with the Wrapper class and/or what you expect clients of Wrapper to use it for.

If your intent is to just provide a wrapper for a List, where clients shouldn't expect a certain level of performance from get() (or whatever the method's name is), then your class looks fine to me as-is, except for the fact that it's just a reference to the constructor's list parameter that's getting copied, rather than the contents of the list itself (more on this in the 3rd point below).

If, however, you tell your clients to expect get() to be very responsive, some alternatives come to mind (there might be more):

- Write a set of constructors that accept only those implementations of a List that you know have high performance for whatever operation(s) get() will execute on it. For example:

public class Wrapper {
    Wrapper(ArrayList<Integer> list) { ... }
    Wrapper(KnownListImplementationThatWillMakeMyGetMethodFast<Integer> list) { ... }
    //...
} // if I knew how to avoid this brace I would...

One drawback of doing this is that if another efficient List implementation comes along, you'll have to add another constructor for it.

- Leave your Wrapper class untouched, but inform your clients (via some form of documentation, say, class comments, a README file, etc.) that certain operations on whatever List implementation they pass in are expected to have a certain performance (e.g. "get() must return in constant time"). If they then "misuse" your wrapper by passing in a LinkedList, it's their fault.

- If you want to guarantee that your get() implementation is quick, it's worth copying the list you receive in the constructor into a member data structure that meets your performance constraints. This can be an ArrayList, some other List implementation you know of, or a container that doesn't implement the List interface altogether (e.g. something special-purpose you've written). With regard to what I said earlier about copying the List's contents vs. copying a reference to it, any "write" operations you perform on the reference you hold will not flow through to your client's copy of the List if you copy its contents. This is usually a good way to avoid the client wondering why their copy of the list gets touched when invoking operations on your Wrapper, unless you explicitly tell them to expect this behavior.
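To make that last suggestion concrete, here is a minimal sketch of the defensive-copy idea (my illustration, not from any of the answers; it keeps the Integer elements from the question and trades one O(n) copy at construction for a guaranteed O(1) get()):

import java.util.ArrayList;
import java.util.List;

public class Wrapper {
    private final List<Integer> list;

    public Wrapper(List<Integer> list) {
        // Copy into an ArrayList so get() is O(1) regardless of what
        // implementation the caller passed, and so later writes to the
        // caller's list cannot affect this wrapper (and vice versa).
        this.list = new ArrayList<Integer>(list);
    }

    public Integer get(int index) {
        return list.get(index);
    }
}

Marking the field final also documents that the wrapper owns its copy for its whole lifetime.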
http://www.toontricks.com/2018/06/tutorial-when-to-be-specific-about-type.html
CC-MAIN-2018-43
en
refinedweb
Hello all. I'm feeling pretty confused about how UDP port forwarding is done. My situation is: I have Comp1 with a local IP (192.168.0.2) connected to Comp2, which has an external IP and the internet connection, and some unknown Internet Client (IC) (three of them, actually) who needs to send data to the Server hosted at Comp1 through UDP port 7777 and then receive a response.

I managed to forward UDP data from Comp2 to Comp1 by simply accepting the ICs' packets at Comp2's port 7777 and sending them to Comp1's port 7777, but the problem is that Comp1's Server sees the sender as Comp2 and sends its response to it (192.168.0.1), rather than to the IC. I can't modify the Server application, and it judges a packet's source by its UDP IP endpoint (IEP). The Server then stores IEPs and sends data itself (it's actually a p2p application).

I would think that the task is impossible, but this kind of forwarding is implemented in applications like AUTAPF (for both UDP and TCP ports). So how do I forward the ICs' data from Comp2 to Comp1 with Comp1's Server knowing that responses must be sent to the ICs? Here's what I managed to do:

using System;
using System.Net;
using System.Net.Sockets;

namespace PortForwarder
{
    class Program
    {
        public static UdpClient UDP1 = new UdpClient(7777);

        static void Main(string[] args)
        {
            Console.Title = "Port Forwarder";
            Console.WriteLine("-= Port Forwarder started. =-");
            Console.WriteLine("UDP ports forwarded: 7777");
            UDP1.BeginReceive(ReceiveDataUDP1, null);
            while (true) { };
        }

        static void ReceiveDataUDP1(IAsyncResult ar)
        {
            IPEndPoint IEP = new IPEndPoint(IPAddress.Any, 0);
            Byte[] receiveBytes = UDP1.EndReceive(ar, ref IEP);
            // Trying to "lie" about the local IEP results in an exception
            // UdpClient US1 = new UdpClient(IEP);
            // US1.Send(receiveBytes, receiveBytes.Length, "192.168.0.2", 7777);
            UDP1.Send(receiveBytes, receiveBytes.Length, "192.168.0.2", 7777);
            UDP1.BeginReceive(ReceiveDataUDP1, null);
        }
    }
}

P.S. Comp1 is connected to the internet through Comp2's ICS (Internet Connection Sharing). Comp2 is running Windows Server 2008 and connects to the internet through a VPN connection. I tried to set up NAT there, but the VPN connection cannot be shared for some reason (and sharing the public adapter doesn't help). If, by any chance, anybody knows how it's configured, I would be really grateful. :)
https://www.daniweb.com/programming/software-development/threads/235716/udp-port-forwarding
CC-MAIN-2018-43
en
refinedweb
Hi, I'm a student learning to code in C. This is what I have; only, when I compile it with gcc, I get two error messages and I don't understand why. The messages are '67: error: expected declaration or statement at end of input' and '67: error: control reaches end of non-void function'. This is the code as I have it now; line 67 is the last line in the program... the }. Any info would be helpful.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void){
    //declarations
    int menuChoice;
    int i,n=0;
    int r=rand()%100 + 1;;
    srand(time(NULL));

    //statements
    while (menuChoice !=3){
        //Choose mode
        printf(" *******************************************\n");
        printf(" ****** Would you like to play a game ******\n");
        printf(" *******************************************\n");
        printf(" 1. Guess the number that I'm thinking\n");
        printf(" 2. Global Thermonuclear War\n");
        printf(" 3. Exit\n\n\n");
        printf(" Please Choose a Menu Item: ");
        scanf("%d", &menuChoice);

        if (menuChoice > 3) {
            printf(" You don't follow directions very well do you... try again\n");
        }
        if (menuChoice ==2){
            printf("You really need to watch 'War Games' and come back to see me.\n");
        }
        else if (menuChoice ==3){
            printf(" Have a good day\n\n\n\n\n");
        }
        else if (menuChoice == 1){
            printf(" You chose option 1.\n");
            printf("I have my number\n\nWhat number am I thinking of between 1 and 100.");
            while(scanf("%d",&i))
                if (i > r) {
                    n++;
                    printf("Your guess is high. Please try again: ");
                }
                else if (i < r) {
                    n++;
                    printf("Your guess is low. Please try again: ");
                }
                else if (i == r) {
                    printf("\n\nCongratulations!\nYou guessed the number in %d guesses! \n", n+1);
                }
        }

    return 0;
}
https://www.daniweb.com/programming/software-development/threads/286785/don-t-understand-what-the-problem-is
CC-MAIN-2018-43
en
refinedweb
mlock, munlock, mlockall, munlockall - lock and unlock memory

Synopsis

#include <sys/mman.h>

int mlock(const void *addr, size_t len);
int munlock(const void *addr, size_t len);
int mlockall(int flags);
int munlockall(void);

Description

mlock() and mlockall() respectively lock part or all of the calling process's virtual address space into RAM, preventing that memory from being paged to the swap area. munlock() and munlockall() perform the converse operation, respectively unlocking part or all of the calling process's virtual address space, so that pages in the specified virtual address range may once more be swapped out if required by the kernel memory manager. Memory locking and unlocking are performed in units of whole pages.

munlock() unlocks pages in the address range starting at addr and continuing for len bytes. After this call, all pages that contain a part of the specified memory range can be moved to external swap space again by the kernel.

Conforming to

POSIX.1-2001, SVr4. [The remainder of this section, along with the Errors and Notes material, is garbled or missing in this copy.]

See also

mmap(2), setrlimit(2), shmctl(2), sysconf(3), proc(5), capabilities(7)

This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
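The page as scraped carries no example, so here is a minimal usage sketch of mlock()/munlock() (my own code, not from the man page; error handling is reduced to perror, and the call may fail with ENOMEM or EPERM depending on RLIMIT_MEMLOCK and privileges):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    void *buf;

    /* posix_memalign gives a page-aligned region; mlock rounds to
       whole pages anyway, but alignment keeps the accounting clear. */
    if (posix_memalign(&buf, page, page) != 0)
        return 1;

    if (mlock(buf, page) != 0) {
        perror("mlock");
        return 1;
    }

    memset(buf, 0, page);   /* this page is now guaranteed resident */

    munlock(buf, page);     /* allow it to be swapped out again */
    free(buf);
    return 0;
}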
http://manpages.sgvulcan.com/mlockall.2.php
CC-MAIN-2017-47
en
refinedweb
using System;
using System.Drawing;
using System.Windows.Forms;
using System.Collections.Generic;
using UIGraphic;

namespace Tracker
{
    /// <summary>
    /// Summary description for ITracker.
    /// </summary>
    public interface ITracker
    {
        void CreateTargetModel(Bitmap bitmap, int binCountCh1, int binCountCh2, int binCountCh3, Window targetRoi, Window searchRoi);

        void Track(Bitmap bitmap, out Window targetRoi, out Window searchRoi);

        Bitmap ProcessedImage { get; }

        // A further member of type Dictionary<string, string> is cut off
        // in this copy of the listing.
    }
}
https://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=35895&zep=Tracker%2FITracker.cs&rzp=%2FKB%2FGDI-plus%2FMomentsTracking%2F%2FCVMoments.zip
CC-MAIN-2017-47
en
refinedweb
Hi, I received a VB.NET solution to study and then maintain, but I can't compile it; it's giving a lot of errors about Office integration, like:

'MsoTriState' is ambiguous in the namespace 'Microsoft.Office.Core'

So I checked the object browser, and indeed there's the same object in two different namespaces: [Microsoft.Office.Core] and [office]. In the references there's only one Office, at 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Visual Studio Tools for Office\PIA\Office14\Office.dll', and if I remove it, then the compiler says that the reference is missing.

Has anyone seen this before? None of the solutions I found online could help me. Thanks.
https://www.daniweb.com/programming/software-development/threads/482368/net-office-ambiguous-reference
CC-MAIN-2017-47
en
refinedweb
In part 1, we set ourselves up for success by creating the necessary project structure, xaml file and changes to web.config to enable application services for script access. Oh, and I mentioned you need ASP.NET AJAX on your web server as we'll be needing that JSON serialisation stuff it can do for you.

To access the services from our managed code, we'll be using the BrowserHttpWebRequest class. We don't really have much option – it's the only one we've got right now. I'd point you to some documentation for BrowserHttpWebRequest but I can't find any – suffice to say it's a friend of HttpWebRequest and works in a similar fashion.

We create our BrowserHttpWebRequest object by passing in the URI for our service (we'll come back to that). There are then two pre-requisites for this to work:

- We need to set the Content-Type header in the request to "application/json; charset=utf-8"
- We need to set the method (HTTP verb) to "Post"

Then we may or may not send some data in the body of the request via the GetRequestStream() method. So something like:

BrowserHttpWebRequest request = new BrowserHttpWebRequest(new Uri("Authentication_JSON_AppService.axd/Login"));
request.Headers["Content-Type"] = "application/json; charset=utf-8";
request.Method = "Post";

// Do some stuff with the request body here

HttpWebResponse response = request.GetResponse();

…should do the job for us. Hold on a second – where did the URL "Authentication_JSON_AppService.axd/Login" come from? Well, I just stepped through the AJAX libraries until I got to the request.Invoke and checked what the URL was. Alternatively (in retrospect), using Fiddler might have been an easier approach. I did have to mess around for a long time with the help of Fiddler to get the request body correct before I was successful in calling these services. It's an indispensable tool.

So let's tackle the Membership service first. I start off by creating a few constants that will come in handy, and a string to store the "base" URL:

private const string AuthenticationServiceMethod = "Authentication_JSON_AppService.axd/Login";
private const string ContentType = "application/json; charset=utf-8";
private const string HttpMethod = "Post";

private string ServiceUrl;

I have a method called GetServiceUrl that we call from Page_Loaded to initialise ServiceUrl:

private string GetServiceUrl()
{
    string absUri = HtmlPage.DocumentUri.AbsoluteUri;
    return absUri.Substring(0, absUri.LastIndexOf("/") + 1);
}

In my case this gives me a ServiceUrl of ...

I next have a method to create a suitably initialised BrowserHttpWebRequest object:

private BrowserHttpWebRequest CreateNewApplicationServiceRequest(string ServiceMethod)
{
    BrowserHttpWebRequest request = new BrowserHttpWebRequest(new Uri(ServiceUrl + ServiceMethod));
    request.Headers["Content-Type"] = ContentType;
    request.Method = HttpMethod;
    return request;
}

Then my ExecuteLogin() method can be simplified to something that looks like this:

private bool ExecuteLogin()
{
    BrowserHttpWebRequest authRequest = CreateNewApplicationServiceRequest(AuthenticationServiceMethod);
    JavaScriptSerializer j_ser = new JavaScriptSerializer();
    AuthenticationRequest ar = new AuthenticationRequest("Mike", "testing123;", false);

    StreamWriter sw = new StreamWriter(authRequest.GetRequestStream());
    sw.Write(j_ser.Serialize(ar));
    sw.Flush();

    HttpWebResponse response = authRequest.GetResponse();
    sw.Close();

    StreamReader responseReader = new StreamReader(response.GetResponseStream());
    string rawResponse = responseReader.ReadToEnd();
    responseReader.Close();
    response.Close();

    return j_ser.Deserialize<bool>(rawResponse);
}

AuthenticationRequest is a little helper class I created (see below). I wanted to use an anonymous type but I ran into problems with the JSON serialiser. userName etc should really be properties, but this was just quick and dirty to get it working:

public class AuthenticationRequest
{
    public string userName;
    public string password;
    public bool createPersistentCookie;

    private AuthenticationRequest() { }

    public AuthenticationRequest(string userName, string password, bool createPersistentCookie)
    {
        this.userName = userName;
        this.password = password;
        this.createPersistentCookie = createPersistentCookie;
    }
}

I've called the Membership service synchronously in this case (I will do it asynchronously for the Profile properties) and I get back a response which is either true for success (ie authenticated) or false for a failure to authenticate. So in the OnClick event handler I have:

protected void OnClick(object sender, MouseEventArgs e)
{
    if (sender == TextBlock1)
        TextBlock1.Text = ExecuteLogin() ? "Success! You are logged in." : "Failed. You are not logged in.";
    else
        TextBlock2.Text = "Your shoe size is: " + up.ShoeSize.ToString();
}

So when the first TextBlock is clicked, the ExecuteLogin() method is called and an appropriate message displayed based on the result of that call. Again, this is getting quite long, so in part 3 I'll look at accessing the Profile service asynchronously.

Comments:

Excellent articles Mike. Just to emphasise something which isn't obvious (but is mentioned): people need to add the following line to the Page_Loaded event.

ServiceUrl = GetServiceUrl();

Thanks for this – it's saved me days 🙂
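For clarity, the line that comment refers to belongs in the Page_Loaded handler; a sketch (the handler signature is assumed from the article's conventions):

private void Page_Loaded(object o, EventArgs e)
{
    // Must run before any application-service call so the
    // request URIs resolve against the hosting page's address.
    ServiceUrl = GetServiceUrl();
}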
https://blogs.msdn.microsoft.com/mikeormond/2007/07/12/accessing-asp-net-application-services-from-silverlight-part-2/
CC-MAIN-2017-26
en
refinedweb
Dogma Codegen Test

Code generation testing for the Dogma libraries

Usage

Dogma Codegen Test provides helpers for running tests on the code generation pipeline. This is only meant for libraries that build upon Dogma Codegen facilities, not for libraries that are consuming those libraries.

To verify the code generation pipeline, a test is written that invokes the pipeline and writes source code to disk. After the source code is written, the test executes a Dart file, written using the test library, which targets the generated code. This code is executed through an isolate. If the isolate completes without receiving an error, then the generated code was built successfully. If not, then there is an issue with the pipeline. Not only does this allow the test suite to verify the pipeline, it also allows code coverage to be determined.

import 'package:dogma_codegen_test/isolate_test.dart';
import 'package:test/test.dart';

void main() {
  group('codegeneration', () {
    // Run the code generation pipeline to generate the source code
    ...
    // Run the tests on the generated code to verify behavior
    testInIsolate('verify', 'test/src/generated/verify_test.dart');
  });
}

Excluding Generated Tests

When using pub run test, by default it will attempt to run all files that end in _test.dart. To avoid the generated tests being run when invoking that command, they should be excluded within the pubspec.

transformers:
- test/pub_serve:
    $exclude: test/src/generated/**_test{.*,}.dart

Features and bugs

Please file feature requests and bugs at the issue tracker.

Libraries

- dogma_codegen_test.isolate_test: Contains helpers for running test code in isolates
https://www.dartdocs.org/documentation/dogma_codegen_test/0.0.1/index.html
CC-MAIN-2017-26
en
refinedweb
This is a fork of IndexedDBShim with the configuration changed to output a build optimised for Node 6 (using the babel-preset-node6 preset). The existing configuration outputs a single file using Grunt that doesn't work properly with npm 3 (because it makes assumptions about node_modules); this fork sidesteps the whole issue.

It also removes the dependency on node-websql - not because it doesn't need it, but so that you can provide a custom implementation. So now you create it like so:

const openDatabase = require('websql');
const idbShim = require('indexeddbshim-node6')(openDatabase);

It also removes the dependency on babel-polyfill, as I don't think it needs it.

There's also a Rollup-compatible version of the library that preserves the ES6 module declarations. Use it like so:

import idbShim from 'indexeddbshim/rollup-ready/node.js';

It's safe to assume that this will work with Node versions >= 6, but probably not < 6.
https://www.npmjs.com/package/indexeddbshim-node6
CC-MAIN-2017-26
en
refinedweb
- Start Visual Studio 2005.
- Create a Web site.
- Add a reference to System.Windows.Forms.
- Add AspCompat="true" in the page attribute.
- Verify that the following namespaces are included in ExportWebPageToImage.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.IO;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

Here is the complete source code (the markup attributes below are partly reconstructed; the original page's extraction stripped them, and the control IDs follow the code-behind):

<%@ Page Language="C#" AspCompat="true" CodeFile="ExportWebPageToImage.aspx.cs" Inherits="ExportWebPageToImage" %>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <style type="text/css">
        .style1 { width: 100%; }
    </style>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:TextBox ID="txtUrl" runat="server"></asp:TextBox><br />
        <asp:Button ID="btnConvert" runat="server" Text="Convert" OnClick="btnConvert_Click" />
    </div>
    </form>
</body>
</html>

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.IO;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

public partial class ExportWebPageToImage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void btnConvert_Click(object sender, EventArgs e)
    {
        Bitmap bitmap = new Bitmap(CaptureWebPage(txtUrl.Text));
        Response.ContentType = "image/jpeg";
        bitmap.Save(Response.OutputStream, ImageFormat.Jpeg);
        bitmap.Dispose();
        Response.End();
    }
}

Comments:

Below is what I can see by using your code... could you please tell me why? "Navigation to webpage cancelled"

Disabling the firewall fixed the problem. For more details check out this link.

How can it work for a website?

Hm, throws an error: 'txtUrl' is not declared. It may be inaccessible due to its protection level.

Oh gesh, I forgot the text box. Ok, it doesn't error, but doesn't do anything either. Takes a few seconds, looks like it's posting back, then nothing happens. What am I doing wrong? Where is it saving the image?

It will display on the page.

How can I save the image to a directory?

@harri: replace the following method call; currently I am saving the output in the response stream: bitmap.Save(Response.OutputStream, ImageFormat.Jpeg);

I'm going to use this to overcome the problem of browsers not sending background images to the printer. I'll just send an Image instead. Thank you!

While waiting for javascript to update the page, you should still call Application.DoEvents():

var time = new Stopwatch();
time.Start();
// allow time for page scripts to update
// the appearance of the page
while(time.ElapsedMilliseconds < 10000)
{
    System.Windows.Forms.Application.DoEvents();
    Thread.Sleep(1000);
}
time.Stop();

And as long as WebBrowser implements IDisposable, you should dispose it at the end of the call or put it inside a using statement.

Please help me to take an image of a particular div.

What is Response.OutputStream? ... If it is web specific, then why can this not be explained without the context of being from a WEB app? I was looking only to create an output image file on the filesystem.

Why can this not be explained without having any compile errors?

We have forms authentication on our website. The above code does not capture the image of the webpage that I pass in as the URL. Can anyone tell how to override authentication, or provide some hack to pass in cookies or access sessions, to get the webpage captured as an image?
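Note that the CaptureWebPage method the click handler calls is missing from this copy of the listing. A common shape for it, given the article's WebBrowser/AspCompat setup, is sketched below; this is a hedged reconstruction, not the author's code, and the save path is an invented placeholder:

private string CaptureWebPage(string url)
{
    using (WebBrowser browser = new WebBrowser())
    {
        browser.ScrollBarsEnabled = false;
        browser.Navigate(url);

        // Pump messages until the page has finished loading
        // (requires the STA thread that AspCompat="true" provides).
        while (browser.ReadyState != WebBrowserReadyState.Complete)
            System.Windows.Forms.Application.DoEvents();

        // Size the control to the full page so nothing is clipped.
        browser.Width = browser.Document.Body.ScrollRectangle.Width;
        browser.Height = browser.Document.Body.ScrollRectangle.Height;

        string file = Server.MapPath("~/capture.jpg"); // placeholder path
        using (Bitmap bmp = new Bitmap(browser.Width, browser.Height))
        {
            browser.DrawToBitmap(bmp, new Rectangle(0, 0, browser.Width, browser.Height));
            bmp.Save(file, ImageFormat.Jpeg);
        }
        return file;
    }
}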
http://aspdotnetcodebook.blogspot.com/2009/05/how-to-export-webpage-as-image-in.html
CC-MAIN-2017-26
en
refinedweb
fullcontact is a Node.js module that wraps the FullContact API. It implements the following API endpoints:

- Person
- Location
- Name
- Email

(Note: the endpoint list and the inline code samples in this copy were garbled during extraction; they have been reconstructed from the surrounding text and the one sample that survived intact, so treat the exact argument lists as approximate.)

The module is distributed through npm (node package manager) and can be installed using:

npm install fullcontact --save

The --save automatically adds the module to your package.json definition.

We are all hackers at heart; that's why this module is built with extensibility and hackability in mind. There aren't any hidden components, and all the API endpoints are implemented as separate constructors so they can be improved and hacked when needed.

You require the module as any other node.js module:

'use strict';

var FullContact = require('fullcontact');

//
// The constructors are directly exposed on the FullContact constructor:
//
FullContact.Location;
FullContact.Person;
FullContact.Email;
FullContact.Name;

To create a new client you simply need to construct the module with your FullContact API key:

var fullcontact = new FullContact(api);

Alternatively, you can also use the provided createClient method, if that's how you roll:

var fullcontact = FullContact.createClient(api);

//
// Or just call it directly:
//
var fullcontact = FullContact(api);

The initialized FullContact client will have some properties that you might find useful:

- remaining: The amount of API calls you have remaining
- ratelimit: The amount of API calls you're allowed to do
- ratereset: When your rate limit will be reset again, in EPOCH

Please note that these properties are all set to 0 until you have made your first request to the API server, as these values are parsed from the response headers.

This API implementation will return an Error object when the FullContact response is returned without a status: 200, so it could be that your operation is queued for processing. That's why all returned errors have a status property which contains the returned status code (unless it's a parse error or a generic error). So just because you got an error, it doesn't mean that your request has failed.

Location

Turn your semi-structured data into fully structured location data. This Location endpoint is namespaced as a .location property. It has 2 optional arguments:

- casing: How is the provided location cased?
  - uppercase for UPPERCASED NAMES (JOHN SMITH)
  - lowercase for lowercased names (john smith)
  - titlecase for Title Cased names (John Smith)
- includeZeroPopulation: will display 0 population census locations. The provided value should be a boolean.

Normalize the location data:

fullcontact.location.normalize('Denver', [casing], fn);

Retrieve more information from the location API:

fullcontact.location.enrich('Denver', [casing], fn);

Person

The Person endpoint is conveniently namespaced as a .person property. Each person API has an optional queue argument which you can use to indicate that this request should be pre-processed by FullContact and that you want to fetch the details later. According to the API, it should receive the value 1 as queue. The following methods are available on this API:

Retrieves contact information by e-mail address. Supports the use of webhooks by providing a url and id:

fullcontact.person.email('[email protected]', [queue], fn);
fullcontact.person.email('[email protected]', [queue], [webhookUrl], [webhookId], fn);

Retrieves contact information by e-mail address, but transforms the email to an MD5 hash first:

fullcontact.person.md5('[email protected]', [queue], fn);

Retrieves contact information by Twitter username:

fullcontact.person.twitter('jsmith', [queue], fn);

Retrieves contact information by Facebook username:

fullcontact.person.facebook('jsmith', [queue], fn);

Retrieves contact information by phone number:

fullcontact.person.phone('+1 720-555-0100', [queue], fn);

Email

Reduce the number of anonymous subscribers by detecting if the user is subscribing with a real e-mail address or just a one-time address. Checks if the given e-mail address is disposable:

fullcontact.email.disposable('[email protected]', fn);

Name

The name API has an optional casing argument. The value of this optional argument can either be:

- uppercase for UPPERCASED NAMES (JOHN SMITH)
- lowercase for lowercased names (john smith)
- titlecase for Title Cased names (John Smith)

Normalize a name:

fullcontact.name.normalize('John Smith', [casing], fn);

Name deducing. Unlike the other APIs, this API should receive an object with either an email or username property, which you want to use to deduce the information:

fullcontact.name.deducer({ email: '[email protected]' }, fn);
fullcontact.name.deducer({ username: 'jsmith' }, fn);

Check the similarity between two names:

fullcontact.name.similarity('john', 'jon', [casing], fn);

Retrieve some statistics about the name. Just like the name deducer API, this API only accepts an object that receives either a givenName, a familyName, or both:

fullcontact.name.stats({ givenName: 'john' }, [casing], fn);

//
// fullcontact.name.stats({ givenName: 'john' }, [casing], fn);
// fullcontact.name.stats({ familyName: 'smith' }, [casing], fn);
// fullcontact.name.stats({ givenName: 'john', familyName: 'smith' }, [casing], fn);
//

Parses the name to determine the likelihood that this is really a name:

fullcontact.name.parser('john smith', [casing], fn);

Tests

The tests are written against the live FullContact API. They can be run using:

API_KEY=<key> npm test

Don't worry if you forget it, we'll throw an error and let you know ;-).

License

The module is released under the MIT license.
https://www.npmjs.com/package/fullcontact
CC-MAIN-2017-26
en
refinedweb