Columns: text (string, lengths 20 to 1.01M) | url (string, lengths 14 to 1.25k) | dump (string, lengths 9 to 15) | lang (string, 4 classes) | source (string, 4 classes)
From the Bash manual: "$! Expands to the process ID of the most recently executed background (asynchronous) command." I.e., use $!.

Spaces are not allowed in variable assignment in shell, so:
#!/bin/sh
P='/mnt/temp/'
echo "$P"
Q=$(echo "$P" | sed -e "s/^.*\(.\)$/\1/")
echo "Q is $Q"
echo "${P%?}"
And use more quotes.

SIGKILL cannot be trapped by the trap command, or by any process. It is a guaranteed kill signal that by its definition cannot be trapped. Thus upgrading your sh/bash will not work anyway.

The cronjobs you schedule in your settings are not actually added to the crontab until you run python manage.py crontab add (read the docs for details).

Going to sound stupid, but change your startup command from nohup ./toplog.sh to nohup ./toplog.sh &. The & makes it run as a background process, further removing it from the terminal stack.

I finally found a working solution using anonymous pipes and bash:
#!/bin/bash
# This executes a separate shell and opens a new pipe, where the
# reading endpoint is fd 3 in our shell and the writing endpoint is
# stdout of the other process. Note that you don't need the
# background operator (&) as exec starts a completely independent process.
exec 3< <(./a.sh 2>&1)
# ... do other stuff
# Write the contents of the pipe to a variable. If the other process
# hasn't already terminated, cat will block.
output=$(cat <&3)

Well, in my experience, I would say that Java is not very fond of being run by means of a script. I did a quick Google search. Try ProcessBuilder, it looks perfect for this situation, in my opinion. I hope this helps you!

You can use Magento's event/observer mechanism to do something after an order is placed; the event is sales_order_place_after. Just create one module that listens for the event. In your /app/code/local/{namespace}/{yourmodule}/etc/config.xml:
<config>
  ...
  <frontend>
    ...
    <events>
      <sales_order_place_after>
        <observers>
          <unique_event_name>
            <class>{{modulename}}/observer</class>
            <method>your function name</method>
          </unique_event_name>
        </observers>
      </sales_order_place_after>
    </events>
  </frontend>
</config>

PuTTY is an interactive command line. Try the below; bash variables can be used.
#!/bin/bash
su - mqm -c "echo 'DISPLAY QLOCAL (<QUEUENAME>) CURDEPTH'|runmqsc QUEUEMANAGER"

If you want both instructions to run in the same process you need to write them to a script:
$ cat foo.db2
connect to user01
describe indexes for table table_desc
and run that script in the db2 interpreter: db2 -f foo.db2
A Here Document might work as well:
db2 <<EOF
connect to user01
describe indexes for table table_desc
EOF
I can't test that, though, since I currently don't have a DB2 on Linux at hand.

This is a shell scripting question more than it is a Python one. However, I think your issue is "> test.txt": the ">" will start from a blank file each time instead of appending the results. Try ">> test.txt".

Try the commands module for py2.3.4; note that this module has been deprecated since py2.6 (a subprocess-based equivalent for newer Pythons is sketched after these answers). Use commands.getoutput:
import commands
answer = commands.getoutput('./a.sh')

It is because ps aux | grep SOMETHING also finds the grep SOMETHING process, because SOMETHING matches. After the execution that grep is finished, so it cannot be found. Add a line:
ps aux | grep -v grep | grep YOURSCRIPT
where -v means exclude. More in man grep.

First, you have odd indentation in the line with if __name__ == ... - you should check it in your script. Then, make sure what current directory your script runs with; AFAIK it is your $HOME - this is where the file would appear.
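Note that the commands module mentioned above was removed in Python 3; on Python 2.7+ or 3.x the standard subprocess module covers the same ground. A rough, hedged equivalent (assuming ./a.sh is executable):

import subprocess

# Roughly what commands.getoutput('./a.sh') did: capture stdout and stderr
# together and strip the trailing newline.
raw = subprocess.check_output(['./a.sh'], stderr=subprocess.STDOUT)
answer = raw.decode('utf-8').rstrip('\n')
print(answer)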
Use commandArgs like this:
args <- commandArgs(trailingOnly = TRUE)
arg1 <- args[1]
arg2 <- args[2]
[...your code...]
Also make sure that the Rscript executable is in your PATH.
http://www.w3hello.com/questions/how-to-execute-shell-script-in-the-same-process-in-python-closed-
CC-MAIN-2018-17
en
refinedweb
5 Reasons to Use the Kentico Cloud Boilerplate Project
Bryan Soltis — Apr 5, 2017
Tags: kentico cloud, open source, github, boilerplate

Innovative technology always presents a challenge for developers. Whether it's a new platform or language, starting from scratch can be a tough task, even for seasoned software engineers. Luckily, nearly every new library or service has some open source projects available to help you get going. For completely new platforms like Kentico Cloud, these projects could be a huge help to developers new to the Headless CMS concept. In this article, I'll give you 5 great reasons you should check out the Kentico Cloud Boilerplate project.

So you're giving Kentico Cloud a look and wondering where to get started? By now, you probably understand the API-driven approach to content delivery and centralized management, but may not be sure how to start building your application. Don't worry! We have a pretty awesome Boilerplate project ready to go.

Kentico Cloud Boilerplate
This new project is a great way to get started with the Kentico Cloud platform quickly, and ensure your application is implementing best practices. Need more convincing? Here are 5 reasons I think you should use it for your sites.

1. It's full of best practices from real sites
With any new platform, knowing the right way to implement features can be challenging. Because not many people have used it yet, it may be tough to know the best way to use a specific feature or functionality. The KC Boilerplate project is stocked full of these examples, to help you understand the best way to implement the service. From managing your configurations to handling errors, the project has working code from real-world applications included.

// Register the IConfiguration instance which ProjectOptions binds against.
services.Configure<ProjectOptions>(Configuration);
services.AddMvc();
services.AddSingleton<IDeliveryClient>(c => new CachedDeliveryClient(c.GetRequiredService<IOptions<ProjectOptions>>(), c.GetRequiredService<IMemoryCache>())
{
    CodeFirstModelProvider = { TypeProvider = new CustomTypeProvider() },
    ContentLinkUrlResolver = new CustomContentLinkUrlResolver()
});

2. It's built on .NET Core MVC
With the development community continuing its shift to modular, light-weight architectures, it makes sense that any Boilerplate be built with the same mindset. The KCB follows this pattern by being a .NET Core MVC site. This means that you can spin it up very quickly, and use MVC to build out your application using industry-standard methodologies. Because it's .NET Core, it opens your hosting options by allowing you to deploy your site to a Microsoft or Linux environment.

3. Built-in caching
One of the best parts of Kentico Cloud is its agility and flexibility. Because your content is centrally located, you can pull it into any channel quickly using the API. While the service itself is very robust and can handle production-level traffic, making calls to the API still needs to be done wisely. The KCB project has some great functionality to help you cache your DeliveryClient and responses. This can significantly decrease your calls to the service and speed up your site.

public async Task<DeliveryItemResponse> GetItemAsync(string codename, IEnumerable<IQueryParameter> parameters)
{
    string cacheKey = $"{nameof(GetItemAsync)}|{codename}|{Join(parameters?.Select(p => p.GetQueryStringParameter()).ToList())}";
    return await GetOrCreateAsync(cacheKey, () => _client.GetItemAsync(codename, parameters));
}

4.
It supports the dotnet new command
Helping developers build better apps quicker is what .NET Core is all about. It supplies only what is needed, without any overhead or extra processing. Another big benefit is how quickly you can get a project up and going using the dotnet new command. If the publisher supplies the proper templates, you can use this great new feature to create your projects quickly, all from the VS Developer Command Prompt. The KCB project supports this functionality and allows you to get going with the solution even faster. Even more awesome, it has some built-in capabilities to dynamically rename your projects and namespaces, to ensure everything is exactly how you need it.

5. It's open source
One of the best parts of the project is that it is completely open source. While we at Kentico have some great developers, everyone can benefit from real-world examples and best practices. Because the project is open to the community, this means that anyone can contribute and make it better. We've already had a few of our partners pitch in with their ideas and suggestions. As more and more companies build their applications using Kentico Cloud, the project will continue to improve as best practices are established and implemented. Speaking of contributors, I'd like to give a huge shoutout to Kentico Gold Partner Get Started for all of their help with the project. Their devs have been major contributors to the GitHub solution and we could not have launched it without them!

Moving Forward
So there you have it! The Kentico Cloud Boilerplate project is the perfect place to start your development with our new Headless CMS. It's full of some great examples and functionality that can simplify your architecture and give you a jump start on architecting your solutions. I encourage you to check it out for your next project. And if you have some improvements, feel free to contribute! We want this project to be the best it can be. Good luck!

Here are some resources to learn more about Kentico Cloud: Kentico Cloud Developer Hub, Kentico Cloud Forums.
https://devnet.kentico.com/articles/5-reasons-to-use-the-kentico-cloud-boilerplate-project?feed=ccaebdb2-fa45-4245-8590-3d04b730592e
CC-MAIN-2018-17
en
refinedweb
A REST Web API framework that's as dangerous as you want it to be.

Project Description
Eve allows you to effortlessly build and deploy a fully featured, REST-compliant, proprietary API.

Simple
Once Eve is installed this is what you will need to bring your glorified API online:
- A database
- A simple configuration file
- A minimal launch script
Support for MongoDB comes out of the box; extensions for other SQL/NoSQL databases can be developed with relative ease. API settings are stored in a standard Python module (defaults to settings.py); a small sketch of such a module appears at the end of this description. Most of the times the launch script will be as simple as:
from eve import Eve
app = Eve()
app.run()
Overall, you will find that configuring and fine-tuning your API is a very simple process.

Live demo and examples
Check out the live demo of an Eve-powered API at. It comes with source code and usage examples for all common use cases (GET, POST, PATCH, DELETE and more). There is also a sample client app available. Check it out at.

Features
Emphasis on the REST. The Eve project aims to provide the best possible REST-compliant API implementation. Basic REST principles like separation of concerns, stateless and layered system, cacheability, uniform interface, etc. have been (hopefully!) taken into consideration while designing the core API.
Full range of CRUD operations via HTTP verbs. APIs can support the full range of CRUD (Create, Read, Update, Delete) operations. You can have a read-only resource accessible at one endpoint along with a fully editable resource at another endpoint within the same API. The following table shows Eve's implementation of CRUD via REST.
Read-only by default. If all you need is a read-only API, then you can have it up and running real quick.
Customizable resource endpoints (persistent identifiers). By default Eve will make known database collections available as resource endpoints. A contacts collection in the database will be ready to be consumed at example.com/contacts/. You can customize the URIs of your resources so in our example the API endpoint could become, say, example.com/customers/.
Customizable, multiple item endpoints. Resources may or may not provide access to their own individual items. API consumers could get access to /contacts/, /contacts/<ObjectId>/ and /contacts/smith/, but only to /invoices/ if you so wish. When you do grant access to resource items, you can define up to two lookup endpoints, both defined via regex. The first will be the primary endpoint and will match your database primary key structure (i.e. an ObjectId in a MongoDB database). The second, which is optional, will match a field with unique values, since Eve will retrieve only the first match anyway.
Filtering and sorting. Resource endpoints allow consumers to retrieve multiple documents. Query strings are supported, allowing for filtering and sorting.
Two query formats. Currently two query formats are supported: the mongo query syntax (?where={"name": "john doe"}), and the native python syntax (?where=name=='john doe'). Both query formats allow for conditional and logical And/Or operators, however nested and combined. (A small client sketch using this syntax follows the feature list.)
Pagination. Resource pagination is enabled by default in order to improve performance and preserve bandwidth. When a consumer requests a resource, the first N items matching the query are served. Links to subsequent/previous pages are provided with the response. Default and maximum page size is customizable, and consumers can request specific pages via the query string (?page=10).
HATEOAS.
Hypermedia as the Engine of Application State is enabled by default. Each GET response includes a _links section. Links provide details on their relation relative to the resource being accessed, and a title. Titles and relations can be used by clients to dynamically update their UI, or to navigate the API without knowing its structure beforehand. An example:
{
  "_links": {
    "self": { "href": "localhost:5000/contatti/", "title": "contatti" },
    "parent": { "href": "localhost:5000", "title": "home" },
    "next": { "href": "localhost:5000/contatti/?page=2", "title": "next page" }
  }
}
In fact, a GET request to the API home page (the API entry point) will be served with a list of links to accessible resources. From there any consumer could navigate the API just by following the links.
JSON and XML. Eve responses are automatically rendered as JSON or XML depending on the requested Accept header. Inbound documents (for inserts and edits) are in JSON format.
Last-Modified and ETag (conditional requests). Each resource representation provides information on the last time it was updated along with a hash value computed on the representation itself (Last-Modified and ETag response headers). These allow consumers to only retrieve new or modified data via the If-Modified-Since and If-None-Match request headers.
Data integrity and concurrency control. API responses include an ETag header, which allows for proper concurrency control. An ETag is a hash value representing the current state of the resource on the server. Consumers are not allowed to edit or delete a resource unless they provide an up-to-date ETag for the resource they are attempting to edit.
Multiple inserts. Consumers can send a stream of multiple documents to be inserted for a given resource. The response will provide detailed state information about each item inserted (creation date, link to the item endpoint, primary key/id, etc.). Errors on one document won't prevent the insertion of other documents in the data stream.
Data validation. Data validation is provided out-of-the-box. Your configuration includes a schema definition for every resource managed by the API. Data sent to the API for insertion or editing will be validated against the schema, and a resource will be updated only if validation is passed. In case of multiple inserts the response will provide a success/error state for each individual item.
Extensible data validation. Data validation is based on the Cerberus validation system and therefore it is extensible, so you can adapt it to your specific use case. Say that your API can only accept odd numbers for a certain field's values: you can extend the validation class to validate that. Or say that you want to make sure that a VAT field actually matches your own country's VAT algorithm: you can do that too. As a matter of fact, Eve's MongoDB data-layer itself is extending Cerberus' standard validation, implementing the unique schema field constraint.
Resource-level cache control directives. You can set global and individual cache-control directives for each resource. Directives will be included in API response headers (Cache-Control, Expires). This will minimize load on the server since cache-enabled consumers will perform resource-intensive requests only when really needed.
Versioning. Define a default prefix and/or API version for all your endpoints. How about example.com/api/v1/<endpoint>? Both prefix and version are as easy to set up as setting a configuration variable.
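As a small illustration of the filtering and pagination features listed above, here is a hedged client-side sketch against a locally running Eve instance; the host, port and 'contacts' endpoint are assumptions borrowed from the examples in this description, not part of the package itself.

import requests

# Filter with the native Python where-syntax and ask for the second page.
resp = requests.get(
    "http://localhost:5000/contacts/",
    params={"where": "name=='john doe'", "page": 2},
)
data = resp.json()           # responses are rendered as JSON by default
print(data.get("_links"))    # the HATEOAS links section described above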
Installation
Eve is on PyPI so all you need to do is pip install eve

Testing
Just run python setup.py test
Eve has been tested successfully under Python 2.7 and Python 2.6.

Current state
Consider this a public preview (Alpha). The best way to be notified about its availability is by starring/following the project repo at GitHub. You can follow me on Twitter at.

A little context
At Gestionale Amica we had been working hard on a full featured, Python powered, RESTful Web API. We learned quite a few things on REST best patterns, and we got a chance to put Python's renowned web capabilities under review. Then, at EuroPython 2012, I got a chance to share what we learned and my talk sparked quite a bit of interest there. A few months have passed and still the slides are receiving a lot of hits each day, and I keep receiving emails about source code samples and whatnot. After all, a REST API lies in the future of every web-oriented developer, and who isn't these days? So I thought that perhaps I could take the proprietary, closed code (codenamed 'Adam') and refactor it "just a little bit", so that it could fit a much wider number of use cases. I could then release it as an open source project. Well it turned out to be slightly more complex than that but finally here it is, and of course it's called Eve. It still has a long way to go before it becomes the fully featured open source, out-of-the-box API solution I came to envision (see the Roadmap below), but I feel that at this point the codebase is ready enough for a public preview. This will hopefully allow for some constructive feedback and maybe, for some contributors to join the ranks. PS: the slides of my EuroPython REST API talk are available online. You might want to check them to understand why and how certain design decisions were made, especially with regards to REST implementation.

Roadmap
In no particular order, here's a partial list of the features that I plan/would like to add to Eve, provided that there is enough interest in the project.
- Documentation (coming soon!)
- Granular exception handling
- Journaling/error logging
- Server side caching
- Alternative sort syntax (?sort=name)
- Authorization (OAuth2?)
- Support for MySQL and/or other SQL/NoSQL databases
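The description says that API settings live in a standard Python module (settings.py by default) and that each resource carries a validation schema. As a rough sketch only (the setting names below follow later Eve documentation and may not match this 0.0.3 alpha exactly), such a module could look like:

# settings.py - illustrative sketch, not taken from the Eve 0.0.3 distribution.

# Out-of-the-box MongoDB connection details.
MONGO_HOST = 'localhost'
MONGO_PORT = 27017
MONGO_DBNAME = 'apitest'

# One resource endpoint ('contacts') with its Cerberus-style schema,
# including the 'unique' field constraint mentioned above.
DOMAIN = {
    'contacts': {
        'schema': {
            'name': {'type': 'string', 'required': True},
            'vat': {'type': 'string', 'unique': True},
        },
    },
}

Placed next to the three-line launch script shown earlier, a module like this would be expected to expose a /contacts/ endpoint as described.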
https://pypi.org/project/Eve/0.0.3/
CC-MAIN-2018-17
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

Area calculation for cartesian points. Calculates area using the Surveyor's formula, a well-known triangulation algorithm.

template<typename PointOfSegment, typename CalculationType>
class strategy::area::surveyor
{
  // ...
};

#include <boost/geometry/strategies/cartesian/area_surveyor.hpp>
https://www.boost.org/doc/libs/1_50_0/libs/geometry/doc/html/geometry/reference/strategies/strategy_area_surveyor.html
CC-MAIN-2018-17
en
refinedweb
In Confluence Questions is there any way of editing reputation thresholds? For example: By default users need 500 points to be able to edit a topic, if I wanted to change this to be 250 points is there any way of achieving this?

There's not yet an official, supported & documented way to do this. However, there is a secret REST API that allows this. You'll need to write and execute a script to make the change. Here's an example in Python using the Requests library:

import sys
import requests

confluence_url = ""
username = "admin"
password = "admin"

s = requests.session()

# Login with Admin username/password
r = s.post(confluence_url + "/dologin.action", data={"os_username": username, "os_password": password})
if r.status_code != 200:
    print "Login failed"
    print r.text
    sys.exit(1)

# Change the topic reputation threshold to 250 points
r = s.post(confluence_url + "/rest/questions/1.0/admin/permissionThreshold", data={"EDIT_TOPIC": "250"})
if r.status_code != 200:
    print "Failed to update permission threshold"
    print r.text
    sys.exit(1)

print "Permission Threshold updated successfully"

Edit: Here are the permissions which can be controlled by reputation limits, and their default thresholds. You can update them all in a single POST request to the REST API by specifying each one as an individual form parameter.

That's great - thank you for that. We will be wanting to customise all of the reputation settings, could you let me know the data tags for each one (vote down, delete, etc)? Thank you for your help.

One key question would be how to GET the current value of these settings? (So that we can verify that the POST works.) (It appears that the permissionThreshold resource only provides a POST method, no GET method.)

What I am trying to do is prevent non-admins from creating new topics. Raising the UPDATE_QUESTION_TOPICS threshold (using the Python + REST script) seems to partially accomplish this – it prevents someone from creating a new topic for a question asked by someone else. But none of these seem to affect the user's ability to create a new topic for a question they have asked. Is there a way.
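To illustrate the "update them all in a single POST request" remark above, the same authenticated session could send several form parameters at once. EDIT_TOPIC and UPDATE_QUESTION_TOPICS are the names that appear in this thread; the values below are only illustrative.

# Reusing the logged-in session `s` and `confluence_url` from the script above.
new_thresholds = {
    "EDIT_TOPIC": "250",
    "UPDATE_QUESTION_TOPICS": "100000",  # illustrative value: set very high to effectively restrict topic creation
}
r = s.post(confluence_url + "/rest/questions/1.0/admin/permissionThreshold",
           data=new_thresholds)
if r.status_code != 200:
    print "Failed to update permission thresholds"
    print r.text
else:
    print "Permission thresholds updated"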
https://community.atlassian.com/t5/Questions-for-Confluence/In-Confluence-Questions-is-there-any-way-of-editing-reputation/qaq-p/369825
CC-MAIN-2018-17
en
refinedweb
I'm not entirely sure what I need to do about this error. I assumed that it had to do with needing to add .encode('utf-8'). But I'm not entirely sure if that's what I need to do, nor where I should apply this. The error is:
line 40, in <module>
writer.writerows(list_of_rows)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 17: ordinal not in range(128)

import csv
import requests
from BeautifulSoup import BeautifulSoup

url = ''
response = requests.get(url)
html = response.content
soup = BeautifulSoup(html)
table = soup.find('table', {'class': 'table'})
list_of_rows = []
for row in table.findAll('tr')[1:]:
    list_of_cells = []
    for cell in row.findAll('td'):
        text = cell.text.replace('[','').replace(']','')
        list_of_cells.append(text)
    list_of_rows.append(list_of_cells)
outfile = open("./test.csv", "wb")
writer = csv.writer(outfile)
writer.writerow(["Name", "Location"])
writer.writerows(list_of_rows)

The Python 2.x csv library is broken for Unicode. You have three options, in order of complexity:
Use the fixed library (pip install unicodecsv) as a drop-in replacement. Example:
with open("myfile.csv", 'rb') as my_file:
    r = unicodecsv.DictReader(my_file, encoding='utf-8')
Read the csv manual regarding Unicode: (see examples at the bottom).
Manually encode each item as UTF-8:
for cell in row.findAll('td'):
    text = cell.text.replace('[','').replace(']','')
    list_of_cells.append(text.encode("utf-8"))
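For the first option (unicodecsv), here is a hedged sketch of how the writing part of the question's script could look; unicodecsv mirrors the csv module's API and accepts an encoding argument.

import unicodecsv

# list_of_rows is built exactly as in the scraping loop above.
with open("./test.csv", "wb") as outfile:
    writer = unicodecsv.writer(outfile, encoding="utf-8")
    writer.writerow([u"Name", u"Location"])
    writer.writerows(list_of_rows)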
https://codedump.io/share/wCXeYcIRQHaV/1/python-ascii-codec-can39t-encode-character-error-during-write-to-csv
CC-MAIN-2018-17
en
refinedweb
Quick way to store db connection details out of sources

Project Description
Quick way to store db connection details out of sources.

import configdb
import ooop
ooop.OOOP( configdb.configdb(required="user pwd dbname") )

By default, attributes are taken from a YAML file at the system-defined user configuration location, from the key (profile) 'default'. If the file is not there, it is created as a YAML file with null values. You have to fill them. To change the profile you can use the 'profile' keyword or the DBCONFIG_PROFILE environment variable.
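A short, hedged sketch of selecting a non-default profile, based only on the description above (the profile name is made up):

import os
import configdb

# Either export DBCONFIG_PROFILE before the program runs...
os.environ["DBCONFIG_PROFILE"] = "staging"   # hypothetical profile name
cfg = configdb.configdb(required="user pwd dbname")

# ...or, as the description mentions, pass the 'profile' keyword directly.
cfg = configdb.configdb(required="user pwd dbname", profile="staging")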
https://pypi.org/project/configdb/
CC-MAIN-2018-17
en
refinedweb
#region blocks are being put into my code by the VS.NET wizards, but I have no idea what they're for. Help me!

Don't panic. The VS.NET code editor window knows how to collapse and expand certain regions of your code with little plus and minus signs in the margins. This feature is called outlining and allows you to, for example, collapse the entire implementation of one class while you're concentrating on another one. However, only certain constructs can be outlined, like classes, namespaces and methods. For other things, like a group of event handler methods, you can define a custom group for outlining using a region, defined using the #region/#endregion directives. For example, the Implement Interface feature of VS.NET produces a region for the method implementations labeled "IDisposable Members":

class MyClass : IDisposable
{
    #region IDisposable Members
    // ...
    #endregion
}

There are a number of operations that can be performed on a region of outlined code, as shown in the Edit -> Outlining menu.
http://searchwindevelopment.techtarget.com/answer/region-blocks-are-being-put-into-my-code-by-the-VSNET-wizards-but-I-have-no-idea-what-theyre-for
CC-MAIN-2016-07
en
refinedweb
Copyright © 2004 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use rules apply.

This document describes use cases for evaluating the potential benefits of an alternate serialization for XML. The use cases are documented here to understand the constraints involved in environments for which XML employment may be problematic because of one or more characteristics of XML 1.x, as well as desirable properties of XML and alternative serializations. This document does not by itself recommend any particular format. Rather, after further development, review and refinement, it is intended to serve as input to that evaluation.

Contents (each use case has Description, Domain/Domain & Stakeholders, Justification, Analysis, Alternatives and References subsections; intermediate material is omitted below):
3.1 Binary XML in Broadcast Systems (video metadata and TV EPG/ESG data)
3.2 Floating Point Arrays in the Energy Industry
3.3 X3D Graphics Model Compression, Serialization and Transmission
3.4 Web Services for Small Devices
3.5 Web Services as an Alternative to CORBA
3.6 Embedding External Data in XML Documents
3.7 Electronic Documents
3.8 FIXML in the Securities Industry
3.9 Multimedia XML Documents for Mobile Handsets
3.10 PC-free Photo Printing
3.11 PC-free Photo Album Generation
3.12 Intra/Inter Business Communication
3.13 X3D CAD Files
3.14 Businesses Process with XML Documents
4 Summary

3.1 Binary XML in Broadcast Systems (video metadata and TV EPG/ESG data)
Broadcast metadata allows PVRs (Personal Video Recorders) to automatically pick up recording schedules for given programs, based on user-defined criteria that match against metadata broadcasted alongside the data. Similarly, broadcasters are also trying to make their offer more attractive by integrating TV with Web technologies or the Web at large. This includes notably using Web Services from PVRs that benefit from a return channel, using Web UI technologies such as SVG and XForms to define their applications' interfaces, making TV services available to mobile devices, and so forth. However, there are constraints that cause problems when trying to deploy such services, all of which rely on XML, to television sets:
Bandwidth. TV bandwidth is extremely expensive, and how much data you use for services directly constrains the number of channels that you are able to send.
In addition to the potential technical issues, there are strong economic motivations to reduce bandwidth usage as much as possible.
Processing Power. Most set-top boxes are cheap, and the low-end ones have roughly half the power (if not worse) of a low-end mobile device. Contrary to mobile devices, there are few limitations as to the processing power that can be embarked in a box the size of the average STB; notably the problems relating to heat and battery life are of little or no concern. However, on the one hand large-scale deployment of STBs and similar devices into households requires them to be extremely cheap and therefore as limited as possible, and on the other hand convergence with mobile devices remains a prime motivator for the television industry, and constraints applicable to mobile devices apply equally to broadcasted XML metadata.
Unidirectional Network. This being broadcast, there is not typically a way for TVs to request data. Instead, it is being continuously streamed and restreamed to them, a process which is called carouselling (the data itself being 'on a carousel'). Some set-top boxes do in fact have a return channel (notably the ones that support Web Services) but most don't. If the data were sent as an XML document, it would have to be fragmented so that STBs wouldn't have to wait for the end of the entire document to have been carouselled in order to start exploiting the data. XML has not shown to be easily fragmentable and currently the carouselling relies on the ability of the binary formats to be fragmented.
Television is a very large market that has a strong need for program metadata, and is increasingly converging with the Web at large (with a strong emphasis on mobile devices at first), notably using technologies such as XHTML, SVG, XForms, SMIL, and Web Services. Deployed systems already use binary XML, currently standardised as part of ISO MPEG and industry fora such as ARIB, DVB, or TV-Anytime. An alternative is to serve PCs, mobiles, and TVs using off-the-shelf or Open Source software.

X3D is an XML-enabled ISO Standard 3D file format to enable real-time communication of 3D data across all applications and network applications. It has a rich set of features for use in commercial applications, engineering and scientific visualization, medical imaging, training, modeling and simulation (M&S), multimedia, entertainment, educational, and more. [1][2] Computer-Aided Design (CAD) and architecture scenes are also supported, but are treated as a separate use case due to even-higher sizes and complexity. File sizes in this use case typically range from 1-1000 KB of data, often delivered over low-bandwidth links (56 Kbps or less). Binary serialization must be performed in concert with geometric compression (e.g. combining coplanar polygons, quantizing colors, etc.). Lossy geometric compression is sometimes acceptable and typically results in compression rates of 20:1. Due to interactivity requirements, the latency time associated with deserialization, decompression and parsing must be minimal. Digital signature and encryption compatibilities are also important for protecting digital content assets. Support Web-based interchange, rendering and interactivity for 3D graphics scenes. The X3D Compressed Binary Encoding Request For Proposals (RFP) [3] lists and justifies ten separate technical requirements. A related XML binary encoding development effort shows that simultaneous, successful composition of all these requirements is achievable.
The Web3D Consortium and X3D designers see great value in aligning this technical approach with a W3C-developed XML binary compression recommendation. Taken together, the following technical requirements for the X3D Compressed Binary Encoding perhaps provide a superset of most compressed binary XML requirements. [4] A further technical challenge is that these capabilities must coexist compatibly in a single document.
X3D Compatibility: The compressed binary encoding shall be able to encode all of the abstract functionality described in the X3D Abstract Specification.
Interoperability: The compressed binary encoding shall contain identical information to the other X3D encodings (XML and Classic VRML). It shall support an encoding of, at a minimum, the following kinds of data:
- X3D Geometry - polygons and surfaces, including NURBS
- Interpolation data - spline and animation data, including particularly long sequences such as motion capture (also see Streaming requirement)
- Textures - PixelTexture, other texture and multitexture formats (also see Bundling requirement)
- Array Datatypes - arrays of generic and geometric data types
- Tokens - tags, element and attribute descriptors, or field and node textual headers
Processing Performance: The compressed binary encoding shall be easy and efficient to process in a runtime environment.
Ease of Implementation: Binary compression algorithms shall be easy to implement, as demonstrated by the ongoing Web3D requirement for multiple implementations. Two (or more) implementations are needed for eventual advancement, including at least one open-source implementation.
Streaming: Compressed binary encoding will operate in a variety of network-streaming environments, including http and sockets, at various (high and low) bandwidths. Local file retrieval of such files shall remain feasible and practical.
Authorability: Compressed binary encoding shall consist of implementable compression and decompression algorithms that may be used during scene-authoring preparation, network delivery and run-time viewing.
Compression: Compressed binary encoding algorithms will together enable effective compression of diverse datatypes. At a minimum, such algorithms shall support lossless compression. Lossy compression alternatives may also be supported. When compression results are claimed by proposal submitters, both lossless and lossy characteristics must be described and quantified.
Security: Compressed binary encoding will optionally enable security, content protection, privacy preferences and metadata such as encryption, conditional access, and watermarking. Default solutions are those defined by the W3C Recommendations for XML Encryption and XML Signature.
Bundling: Mechanisms for bundling multiple files (e.g. X3D scene, Inlined subscenes, image files, audio file, etc.) into a single archive file will be considered.
Intellectual Property Rights (IPR): Technology submissions must meet the Web3D Consortium IPR policy. (Of note is that all such submissions and the forthcoming specification are further compatible with the W3C Patent Policy.)
GZIP is the specified compression scheme for the Virtual Reality Modeling Language (VRML 97) specification, the second-generation ISO predecessor to X3D. GZIP is not type-aware and does not compress large sets of floating-point numbers well. GZIP allows staged decompression of 64KB blocks, which might be used to support streaming capabilities. GZIP outputs are strings and require a second pass for any parsing, thus degrading parsing and loading performance.
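A small aside, not part of the use case text: the point that generic compression is not type-aware can be illustrated by serializing the same values as decimal text (as XML attribute content would be) and as packed 32-bit floats, then compressing both and comparing sizes.

import random
import struct
import zlib

values = [random.uniform(-1000, 1000) for _ in range(10000)]

as_text = " ".join("%g" % v for v in values).encode("ascii")   # text content, XML-style
as_binary = struct.pack("<%df" % len(values), *values)         # packed 32-bit floats (lossy, like the lossy options above)

for label, blob in (("text", as_text), ("binary", as_binary)):
    # Print raw and zlib-compressed sizes for each representation.
    print(label, len(blob), len(zlib.compress(blob)))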
Numerous piecemeal, incompatible proprietary solutions exist in the 3D graphics industry for Web-page plugins. None address the breadth of technical capabilities that might be enabled by binary XML compression. An X3D-specific binary compression and serialization algorithm for XML is certainly feasible and demonstrated. Compatibility with a general recommendation for XML compression is desirable in order to maximize interoperability with other XML technologies.

Mainstream devices limiting code size to 64K and heap size to 230K are the target platform of this use case. The transport packet size may vary from network to network, but it is typically measured in bytes (e.g. 128 bytes). (more to come ...)

The coupling of the systems can be adjusted if this helps achieve the desired performance goals. The main requirement for this use case is reducing XML processing time in order to achieve a level of performance comparable to the existing systems. Due to the available. Alternatives:
- Keep using existing technology without migrating to Web services.
- Re-design the system's interfaces to make them more coarse grained in order to reduce the number of messages exchanged.

Although the data in an XML document is encoded as text, it is often the case that portions of that text are in fact embedded documents in and of themselves. This frequently occurs when XML documents contain images, recordings, or other multimedia elements which have their own file formats -- JPEG, MP3, and so on. In order to embed these documents they are translated to a textual representation using base64-encoding or a similar scheme. It is worth noting that these embedded documents are often much larger than the encapsulating document. The case in which the embedded document is in fact an XML document (e.g., an SVG graphic embedded within an XSL-FO document) can be regarded as a special case in which no translation is required. For document-oriented applications, these embedded files are often part of the document, in the sense that they are intended to be rendered along with the text in the encapsulating XML document. Thus, the translation from text back to the original file format often occurs as the rest of the document is being parsed. For message-oriented applications, embedded documents are simply payload elements encapsulated within, e.g., a SOAP body element. In these cases, the payloads may be of arbitrary or even unknown formats. They are often translated to text during transmission and back to their original form upon reception, even if not otherwise immediately consumed, in order to reduce storage space. Because these embedded documents can be large, storage requirements are further reduced by streaming on input and output. In these cases the embedded document may also be XML but may contain either processing instructions or DTDs, both of which should not appear within a SOAP body element (WS-I Basic Profile, R1008 and R1009). Therefore, such files may be treated as if they were binary data and base64-encoded even if they are, in fact, valid XML files. XML was not designed to contain binary data, and other packaging mechanisms such as MIME exist and are in many ways suitable to the purpose at hand. On the other hand, the ability to treat embedded documents as part of the primary documents, and therefore make them accessible to XML-based standards and tools like XPath without resorting to additional standards like MIME, is useful in practice.
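To make the size cost of text-embedding concrete, here is a minimal sketch of what base64-encoding a binary payload (as an XML document must) does to its size:

import base64
import os

payload = os.urandom(300 * 1024)             # stand-in for a JPEG or MP3 payload
encoded = base64.b64encode(payload)          # what would sit inside the XML element

print(len(payload), len(encoded))            # base64 adds roughly one third in size
assert base64.b64decode(encoded) == payload  # the decode pass a consumer must pay for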
Thus, this use case is a good demonstration of why one might wish to extend XML with a binary encoding. This use case builds on applications of XML for documents and Web services that are already well established. Furthermore, those uses already involve the transmission of documents with binary data like integers and floats. The question is, for each given embeddable datum, should it be placed inside the document or should it be carried as an attachment? The drawbacks to embedding are the penalties in time due to the translation into text, and in space, due to the larger size of the translated data. The benefits include access to other XML-based technologies like XPath, XQuery, etc. and the avoidance of an additional dependency on a packaging technology such as MIME. To address this use case, a binary XML format must permit binary documents to be embedded within an XML document without requiring a translation to a text form; a binary XML format must also support the streaming of such XML documents, a desirable feature in Web service calls. SOAP with Attachments provides a MIME-based mechanism for packaging binary data with SOAP messages. It avoids the translation costs, but does not make the binary data part of the XML document itself. In that respect, it is not a streamable format. XOP describes how a MIME-based package can be used to encapsulate the binary data without a translation overhead by keeping it (at least conceptually) as part of the encapsulating XML document. Because of its use of MIME, this approach suffers from many of the same shortcomings of the SOAP with attachments case. [WS-I Attachments Profile]

Documents are the most basic form of recorded human communication, dating back thousands of years. Electronic documents are the transition of this invention to the online, computerized world. Books, forms, contracts, emails, assembly, repurposing. XML does not meet this requirement because it requires that such resources are transformed to a text encoding, which adds both time and space costs. The conversion of a document between different encodings must preserve all information in the document, including digital signatures. It must be possible to navigate to and render a specified location in better than linear time with respect to the size of the document (i.e., "random access"). The current de facto standard for interchange of electronic documents is Adobe's Portable Document Format, or PDF. PDF meets all of the requirements stated here.

The Securities industry has cooperated to define a standard protocol and a common messaging language called FIX which allows real-time, vendor/platform neutral electronic exchange of securities transactions between financial institutions. The original definition of FIX was as a tag-value pair format. Due to increased competition, by the year 1999 an XML representation, FIXML, was defined to better accommodate multiple parties sharing a common transport format. However, the bloated size of the XML instances resulted in artificial changes to the schemas, with the sole purpose of reducing the number of bytes on the wire. Clearly, XML Schema is not the right place to tackle this problem given that the syntax verbosity is a property exclusive to the XML serialization. Stated differently, XML Schema is the point of agreement in terms of vocabulary and structure, not in terms of syntax. Message size alone can be substantially reduced by standard compression methods. However, there is a study that shows compression of FIXML instances increases round trip time over a 10 Mbps network.
PictBridge is a standard used to directly connect digital cameras to print devices, supported by numerous vendors. Future products may require improved prints containing borders, metadata, etc. The ideal format for such display is an XML based presentation format such as SVG or XHTML. Cameras and printers both have limited CPU power and thus cannot afford to consume cycles to base64 encode and decode the image data. A binary packaged aggregate containing the XML document and its referenced image data is therefore desirable.

The next use case comes from the energy and banking industries. In the energy industry, the major upstream (exploration and production) operations of oil companies are largely in developing countries (e.g. Nigeria, Angola, Papua New Guinea). The data transferred totals 3 Megs, typically broken up into 12 documents (transmittal every 2 hours, referred to as "trickle feed"). Each document would then average 250 K. Currently the scope is for many thousands of sites connected to several regional back-office hubs. Connectivity ranges from VSAT to 32k analog connections. The 32k usually. Retailing operations of large companies, particularly those where the actual retail outlets are SME's (Small to Medium Size Enterprises), and large companies with various small business partners and/or branch offices. The belief is that the experience gained in the scenarios described above is likely to be directly applicable to a number of other scenarios in the energy, banking, and other industries. Note that the players in this use case have rather different situations and needs. The large company has significant sunk investment in complex back-office systems. This is a case where there is a need to compress the.

X3D is a 3D graphics standard developed by the Web3D Consortium designed to enable real-time communication of 3D data. It has a rich set of features for use in engineering and scientific visualization, CAD and architecture, medical visualization, training and simulation, multimedia, entertainment, education, and more. The CAD working group within the consortium focuses on how to deliver CAD data for downstream uses like visualization and training. These files are XML encoded and typically range from 10-1000 MBytes of data. A desired feature is to deliver these files utilizing multi-namespaces to embed 2D data using SVG and other CAD industry specific languages. Current XML parsing speeds for these files are a major hindrance to further usage. In addition compression schemes (such as gzip) do not deliver the compression rates needed for Internet delivery. For this reason, X3D allows for pluggable algorithms to support content-aware compression. In XML terms, this translates to registering specific algorithms for attribute types, elements and document fragments (i.e. an element and its children). Typically float intensive data formats have been deployed as custom binary formats. These formats will not mesh well with other XML specifications. 3D data cannot just live in its own island, it needs to be interleaved between other formats like XHTML and SVG to form a complete document. Using multiple XML specifications together makes using an X3D specific binary problematic. Basically it would need to duplicate a general XML binary compression scheme. [Extensible 3D (X3D) Graphics]

This use case pertains to small, medium, and large businesses that utilize XML to support intra and inter business process workflow. In addition, each business process requires a distinct and disjoint subset of the entire document to perform its task.
The document passed to each entity can be large, meaning that large amounts of potentially unused data is passed to each endpoint. GZIP is not an option as you would pay for the zip and unzip at each endpoint. In addition, each endpoint may require direct access into the document. If the document was compressed with GZIP, it would make this type of access impossible without first uncompressing it. To avoid the zip and unzip problem and the bandwidth problem, a binary encoding that represented the data in a more compact fashion could be used. The encoding would also need to allow each endpoint to quickly extract and process the subset it needs. The requirements include that the alternate form of the data be more compact than the original XML, and that it allow direct access into the document such that the entire document does not have to be processed only to access a small subset contained at some specified location (a small streaming sketch follows this section). Usage of XInclude is a possible alternative whereby the document sent only includes the relevant pieces of the original. This may not work however, as point A may send a subset of the document to point B, then point B needs to send the entire document to point C. Point B would not have the entire document if point A only sent it a subset. This section includes observations concerning overlaps and other patterns in use case requirements.
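As a rough illustration of the "process only a small subset" requirement: with plain text XML the best a consumer can do is stream and stop early, as in this hedged Python sketch (the element names and ids are invented); true random access into the document is what the proposed binary encoding would add.

import io
import xml.etree.ElementTree as ET

# Tiny stand-in for a large business-process document (structure invented).
doc = b"<workflow><order id='A-1'/><order id='A-42'><item/></order><order id='A-99'/></workflow>"

def first_matching_order(stream, wanted_id):
    # Stream the document and stop as soon as the interesting fragment has been
    # seen, instead of building (or first decompressing) the whole tree.
    for _, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "order" and elem.get("id") == wanted_id:
            return elem
        elem.clear()  # discard already-processed elements to keep memory flat
    return None

print(first_matching_order(io.BytesIO(doc), "A-42") is not None)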
http://www.w3.org/TR/2004/WD-xbc-use-cases-20041104/
CC-MAIN-2016-07
en
refinedweb
Basic Setup
Swift is designed to provide seamless compatibility with Cocoa and Objective-C. You can use Objective-C APIs in Swift, and you can use Swift APIs in Objective-C. This makes Swift an easy, convenient, and powerful tool to integrate into your development workflow. This guide covers three important aspects of working with Swift and Objective-C together: interoperability, mix and match, and migration. Migration from existing Objective-C code to Swift is made easy with interoperability and mix and match, making it possible to replace parts of your Objective-C apps with the latest Swift features. Before you get started learning about these features, you need a basic understanding of how to set up a Swift environment in which you can access Cocoa system frameworks.

Setting Up Your Swift Environment
To start experimenting with Cocoa app development using Swift, create a new Swift project from one of the provided Xcode templates.
To create a Swift project in Xcode
Choose File > New > Project > (iOS, watchOS, tvOS, or OS X) > Application > your template of choice.
Click the Language pop-up menu and choose Swift.
A Swift project's structure is nearly identical to an Objective-C project, with one important distinction: Swift has no header files. There is no explicit delineation between the implementation and the interface—all of the information about a class, function, or constant resides in a single .swift file. This is discussed in more detail in Swift and Objective-C in the Same Project. From here, you can start experimenting by writing Swift code in the app delegate or a new Swift file you create by choosing File > New > File > (iOS, watchOS, tvOS, or OS X) > Source > Swift.

Understanding the Swift Import Process
After you have your Xcode project set up, you can import any framework from Cocoa or Cocoa Touch to start working with Objective-C from Swift. Any Objective-C framework or C library that supports modules can be imported directly into Swift. This includes all of the Objective-C system frameworks—such as Foundation, UIKit, and SpriteKit—as well as common C libraries supplied with the system. For example, to use Foundation APIs from a Swift file, add the following import statement to the top of the file:
import Foundation
With this import statement, that Swift file can now access all of Foundation's classes, protocols, methods, properties, and constants. The import process is straightforward. Objective-C frameworks vend APIs in header files. In Swift, those header files are compiled down to Objective-C modules, which are then imported into Swift as Swift APIs. The importing process determines how functions, classes, methods, and types declared in Objective-C code appear in Swift. For functions and methods, this process affects the types of their arguments and return values. For types, the process of importing can have the following effects:
Remap certain Objective-C types to their equivalents in Swift, like id to AnyObject
Remap certain Objective-C core types to their alternatives in Swift, like NSString to String
Remap certain Objective-C concepts to matching concepts in Swift, like pointers to optionals
For more information on using Objective-C in Swift, see Interacting with Objective-C APIs. The model for importing Swift into Objective-C is similar to the one used for importing Objective-C into Swift. Swift vends its APIs—such as from a framework—as Swift modules. Alongside these Swift modules are generated Objective-C headers. These headers vend the APIs that can be mapped back to Objective-C.
Some Swift APIs do not map back to Objective-C because they leverage language features that are not available in Objective-C. For more information on using Swift in Objective-C, see Swift and Objective-C in the Same Project.
Copyright © 2016 Apple Inc. All rights reserved. Updated: 2016-01-25
https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/BuildingCocoaApps/
CC-MAIN-2016-07
en
refinedweb
WEB-9021 (Bug) Several plugins are incompatible in the current build RUBY-13850 (Bug) SCSS should auto-complete imports to paths with no extensions WEB-6019 (Usability Problem) CSS, LESS: 'Autoscroll from source" in Structure view doesn't work for selectors like <tag>.<class> IDEA-47151 (Usability Problem) Settings of "Result of method call ignored" inspection should have choosers and/or completion for class and method names IDEA-112125 (Bug) "Accept suggested Final modifier" action in inspection view does nothing IDEA-98897 (Bug) no coverage info on gutter IDEA-100392 (Usability Problem) Allow file template configuration from generate popup IDEA-109493 (Bug) Ctrl+Alt+B (Navigate to implementations) on equals shows Object instead of interface implementors. PY-10618 (Bug) Terminal: inconsistent action and tool-window button behavior during indexing IDEA-112134 (Usability Problem) $SELECTION$ variable doesn't work as true variable, but as $END$ IDEA-112315 (Bug) Smart code completion for 'do' is wrong in RM 6.0 IDEA-112373 (Bug) Rectangular selection paste bug IDEA-112194 (Bug) Incorrect type was displayed in autocomplete dialog for unicode character WEB-8180 (Feature) Allow usage of Path Variables (and Environment path variables ?) in File Watcher WEB-8965 (Feature) Stylus: provide predefine File Watcher IDEA-112377 (Usability Problem) Caret moving problem inside "Find" input IDEA-111918 (Bug) Find: comments / string literals only: just 1 entry is found in each comment or literal RUBY-14033 (Bug) 'Changes' tool window is unusable with multiple projects IDEA-70769 (Usability Problem) Settings panel: increase speed of scrollbars IDEA-101452 (Bug) A number of IPA symbols that are represented correctly as Enum items with the main editor pane are mangled and unrecognizable in the Code Completion popup list. 
IDEA-112106 (Bug) Good code red - unnecessary semicolon after imports IDEA-112133 (Bug) newline can be pasted into file rename dialog WEB-8382 (Usability Problem) function expression color unchangeably linked to variables instead of functions WEB-7910 (Bug) Bad code green: object literal property names are not arbitrary expressions WEB-6982 (Bug) JSDoc highlighting improvement WEB-8562 (Bug) JS incorrect "Unresolved variable" WEB-757 (Bug) JS: local variable is highlighted as global after splitting declaration and then merging back WEB-3688 (Bug) Wrong inspection is referenced in "Edit inspection profile settings" WEB-8968 (Bug) JSDoc: correctly treat optional parameters for functions defined with @name WEB-9049 (Bug) JSDoc: properties of a type defined with named @typedef not resolved-8536 (Bug) JS incorrect 'Invalid number of parameters, expected 2' WEB-8902 (Usability Problem) EJS Inspection settings should be placed under appropriate parent WEB-8805 (Bug) EJS - More strange commenting behaviour WEB-8644 (Bug) WEB-8904 (Bug) LESS formatting problem with ampersand - extra spaces WEB-6878 (Bug) LiveEdit CSS hot swap works incorrectly under windows (local file system) WEB-8042 (Bug) Live Edit JS/CSS - doesn't hot swap correctly WEB-8894 (Usability Problem) Nodejs Cloned (Copied) configurations can't have different environment variables WEB-8768 (Bug) NPM: Available Packages: provide more detailed description WEB-8770 (Bug) Available Packages: UI improvements WEB-8909 (Bug) NPM: provide sorting for the installed packages WEB-8864 (Bug) NPM: Install/Uninstall: refresh Project View WEB-1887 (Bug) Error run node from command window WEB-8791 (Bug) package.json: recognize npm generated fields WEB-8896 (Exception) NPM: WebStorm crashes on scrolling Available Packages list IDEA-112304 (Bug) Compile annotations.jar with lowest possible version of java IDEA-112462 (Feature) Crossplatform loading of native libraries by idea plugin. WI-18913 (Bug) "Apply" button is never disabled in Deploy - > Mappings panel WEB-8951 (Bug) Stylus: Red code: recognize rest parameters WEB-8950 (Bug) Stylus: Red code: recognize keyword arguments WEB-8953 (Bug) Stylus: Red code: recognize shorthand arithmetic operators WEB-8958 (Bug) Stylus: Red code: recognize literal CSS WEB-8937 (Bug) Stylus: Red code: recognize unary and ternary operators WEB-8938 (Bug) Stylus: Red code: recognize conditional assignment operators WEB-8989 (Bug) Stylus: Red code: recognize tilde(~) operator11 (Usability Problem) Task management: JIRA: JQL: code completion suggests nothing after closing parenthesis IDEA-111813 (Cosmetics) Task management: JIRA: JQL: Tab in code completion inside function name doubles parentheses WEB-2264 (Feature) TypeScript: warn user if a class doesn't include all members declared in interface implemented by it WEB-9053 (Bug) TypeScript generics wrong syntax check WEB-8959 (Bug) Major Issue with Typescript WEB-6868 (Bug) Typescript: Primitive type names aren't being syntax highlighted. 
WEB-6944 (Bug) Add 'Implement methods' quickfix in TypeScript
IDEA-112328 (Bug) IDEA consumes CPU during test run
WEB-8936 (Exception) Karma: read access throwable exception for Coverage
IDEA-96644 (Usability Problem) With clean config settings frames for new projects are opened at full-screen size
IDEA-99336 (Usability Problem) Trash icon in Event log would be helpful
IDEA-111753 (Usability Problem) Colors blend when I cmd+click on method name with blue breakpoint line
WI-19694 (Cosmetics) PHP-CGI console should force new lines after each output
IDEA-64561 (Feature) Provide navigation for XSD enum values
IDEA-60895 (Feature) No completion for enumerated and boolean values of xml tags
IDEA-112136 (Bug) "Optimize Imports" removes XML namespace declarations which are in use
http://confluence.jetbrains.com/display/WI/WebStorm+131.24+Release+Notes
CC-MAIN-2016-07
en
refinedweb
JavaScript class browser: once again with jQuery I’ve already posted twice about that little class browser application. The first iteration was mostly declarative and can be found here: The second one was entirely imperative and can be found here: This new version builds on top of the code for the imperative version and adds the jQuery dependency in an attempt to make the code leaner and simpler. I invite you to refer to the imperative code (included in the archive for this post) and compare it with the jQuery version, which shows a couple of ways the Microsoft Ajax Library lights up when jQuery is present. The first thing I want to do here is convert the plain function I was using to build the browser’s namespace and class tree into a jQuery plug-in: $.fn.classBrowserTreeView = function (options) { var opts = $.extend({}, $.fn.classBrowserTreeView.defaults, options); return this; }; That plug-in will have two options: the data to render (which will default to the root namespaces in the Microsoft Ajax Library), and the node template selector (which will default to “#nodeTemplate”): $.fn.classBrowserTreeView.defaults = { data: Type.getRootNamespaces(), nodeTemplate: "#nodeTemplate" }; For the moment, as you can see, this plug-in does nothing. We want it to create a DataView control on each of the elements of the current wrapped set. We will do this by calling into the dataView plug-in. You may be wondering where this plug-in might come from. Well, that’s the first kind of lighting up that the Microsoft Ajax Library’s script loader (start.js) will do in the presence of jQuery: every control and behavior will get surfaced as a jQuery plug-in, and components will get added as methods on the jQuery object. This is similar to what I had shown a while ago in this post, only much easier: For example, we can write this in our own plug-in to create DataView components over the current jQuery wrapped set: return this.dataView({ data: opts.data, itemTemplate: opts.nodeTemplate, }); Now we can wire up the itemRendered event of the data view and start enriching the markup that the DataView control rendered for each data item. First, let’s get hold of the nodes in the rendered template and wrap them: var elt = $(args.nodes); Then, if the current node is representing a namespace, we want to hook up the expansion button’s click event so that it toggles visibility of the list of children, and we want to “unhide” the button itself (it has a “hidden” class in the default markup): if (Type.isNamespace(args.dataItem)) { elt.find(".toggleButton").click(function (e) { e.preventDefault(); return toggleVisibility(this); }).removeClass("hidden"); } You can see here that we’re taking advantage of chaining. Next thing is to set-up the node link itself. We start by inhibiting the link’s default action. 
Then we set the text for the link, and finally we set the command that will bubble up to the DataView when the link gets clicked: elt.find(".treeNode").click( function (e) { e.preventDefault(); return false; }) .text(getSimpleName(args.dataItem.getName())) .setCommand("select"); Here, I’m using a small plug-in to set the command: $.fn.setCommand = function (options) { var opts = $.extend({}, $.fn.setCommand.defaults, options); return $(this).each(function () { $.setCommand(this, opts.commandName, opts.commandArgument, opts.commandTarget); }); } $.fn.setCommand.defaults = { commandName: "select", commandArgument: null, commandTarget: null }; I’m using $.setCommand here, which does get created by the framework for me, but I still need to create that small plug-in to make it work on a wrapped set instead of a static method off jQuery. I’ve sent feedback to the team that setCommand and bind should get created as plug-ins by the framework and hopefully it will happen in a future version. The last thing we need to do here is to recursively create the child branches of our tree: elt.find("ul").classBrowserTreeView({ data: getChildren(args.dataItem) }); This just finds the child UL element of the current branch and calls our plug-in on the results with the children namespaces and classes as the data. And this is it for the tree, we can now create it with this simple call: $("#tree").classBrowserTreeView(); The details view rendering will only differ in minor ways from the code we had in our previous imperative version. The only differences is the use of jQuery to traverse and manipulate the DOM instead of the mix of native DOM APIs and Sys.get that we were using before. For example, args.get("li").innerHTML = args.dataItem.getName ? args.dataItem.getName() : args.dataItem.name; becomes: $(args.nodes).filter("li").text( args.dataItem.getName ? args.dataItem.getName() : args.dataItem.name); Notice how jQuery’s text method makes things a little more secure than the innerHTML we had used before. Updating the details view with the data for the item selected in the tree is done by handling the select command of the tree from the following function: function onCommand(sender, args) { if (args.get_commandName() === "select") { var dataItem = sender.findContext( args.get_commandSource()).dataItem; var isClass = Type.isClass(dataItem) && !Type.isNamespace(dataItem); var childData = (isClass ? getMembers : getChildren)(dataItem); var detailsChild = Sys.Application.findComponent("detailsChild"); detailsChild.onItemRendering = isClass ? onClassMemberRendering : onNamespaceChildRendering; detailsChild.onItemRendered = onDetailsChildRendered; detailsChild.set_data(childData); $("#detailsTitle").text(dataItem.getName()); $(".namespace").css( "display", isClass ? "none" : "block"); $(".class").css( "display", isClass ? "block" : "none"); $("#details").css("display", "block"); } } Not much change here from the previous version, again, except for the use of jQuery and some chaining. And that is pretty much it. I’ve made other changes in the script to make use of the new script loader in the Microsoft Ajax Library but that will be the subject of a future post. Hopefully, this has shown you how the Microsoft Ajax Library can light up with jQuery. The automatic creation of plug-ins feels very much like native jQuery plug-ins and brings all the power of client templates to jQuery. 
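For readers who want to see the pieces above assembled, here is a consolidated sketch of the tree plug-in. It is my own reconstruction, not code from the original post: in particular, passing the handler through an itemRendered option of the dataView plug-in is an assumption, and toggleVisibility, getSimpleName and getChildren are the helper functions the post refers to elsewhere.

$.fn.classBrowserTreeView = function (options) {
    var opts = $.extend({}, $.fn.classBrowserTreeView.defaults, options);
    return this.dataView({
        data: opts.data,
        itemTemplate: opts.nodeTemplate,
        // Assumption: the dataView plug-in accepts the itemRendered handler
        // as an option; the post wires this event but does not show how.
        itemRendered: function (sender, args) {
            var elt = $(args.nodes);
            // Namespace nodes get a working expand/collapse button.
            if (Type.isNamespace(args.dataItem)) {
                elt.find(".toggleButton").click(function (e) {
                    e.preventDefault();
                    return toggleVisibility(this);
                }).removeClass("hidden");
            }
            // The node link raises the "select" command instead of navigating.
            elt.find(".treeNode").click(function (e) {
                e.preventDefault();
                return false;
            }).text(getSimpleName(args.dataItem.getName()))
              .setCommand("select");
            // Recursively build the child branches.
            elt.find("ul").classBrowserTreeView({ data: getChildren(args.dataItem) });
        }
    });
};
$.fn.classBrowserTreeView.defaults = {
    data: Type.getRootNamespaces(),
    nodeTemplate: "#nodeTemplate"
};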
Once we have bind and setCommand plug-ins as well, the Microsoft Ajax Library may become a very useful tool to jQuery programmers just as much as jQuery itself is a very useful tool to Microsoft Ajax programmers. The code can be found here: Update: fixed a problem in Firefox & Chrome.
http://weblogs.asp.net/bleroy/javascript-class-browser-once-again-with-jquery
CC-MAIN-2016-07
en
refinedweb
Summary This article shows how to extend SWT Canvasto. By Chengdong Li ([email protected]) Research in Computing for Humanities, University of Kentucky March 15, 2004 The following typographic conventions are used in this article: Italic: Used for references to articles. Courier New: The following terms are used in this article: client area The drawable area of canvas. Also called the paint area or canvas domain. source image The image constructed directly from the original image data with the same width and height. Unlike image data, it is device dependent. It is the sourceImage in source code. Also called the original image or image domain. The goal of this article is to show you how to implement an image viewer with scrolling and zooming operations using affine transforms. If you are new to graphics in SWT, please read Joe Winchester's article Taking a look at SWT Images. A screenshot of the image viewer is shown in Figure 1: Figure 1 - Image viewer The implementation here is different from the implementation of the Image Analyzer example that comes with Eclipse. This implementation uses affine transforms for scrolling and zooming. The advantages of this implementation are : In the following sections, we will first review the structure of the package and relationship of classes, then we will discuss how the image canvas works and how to implement a scrollable and zoom-able image canvas - SWTImageCanvas. We will use affine transforms to selectively render images and to generally simplify the implementation. After that, we will show briefly how to use the local toolbar to facilitate image manipulation. For the detailed steps on how to implement a view, Dave Springgay's article: Creating an Eclipse View is the most helpful one. You can compare this implementation with the Image Analyzer example by running both of them: To compile and run the image viewer from source code, unzip the imageviewersrc.zip file (inside imageviewer.zip), then import the existing project into the workspace, update the class path, compile, and run. The following diagram (Figure 2) shows all classes in this demo plug-in. The SWTImageCanvas is a subclass of org.eclipse.swt.widgets.Canvas; it implements image loading, rendering, scrolling, and zooming. The ImageView class is a subclass of org.eclipse.ui.part.ViewPart; it has an SWTImageCanvas instance. The helper class SWT2Dutil holds utility functions. PushActionDelegate implements org.eclipse.ui.IViewActionDelegate; it delegates the toolbar button actions, and has an ImageView instance. The ImageViewerPlugin extends org.eclipse.ui.plugin.AbstractUIPlugin (this class is PDE-generated). Figure 2 - Class diagram ( The classes without package prefix are implemented in our approach. ) A plug-in manifest file plugin.xml defines the runtime requirements and contributions (view and viewActions extensions) to Eclipse. Eclipse will create toolbar buttons for ImageView and delegate toolbar actions via PushActionDelegate. SWTImageCanvas handles image loading, painting, scrolling, and zooming. It saves a copy of the original SWT image (the sourceImage) in memory, and then translates and scales the image using java.awt.geom.AffineTransform. The rendering and transformation are applied only to the portion of the image visible on the screen, allowing it to operate on images of any size with good performance. The AffineTransform gets applied changes as the user scrolls the window and pushes the toolbar buttons. First, let's have a look at how to load an image into memory. 
There are several ways to load an image. In this simple implementation, we only allow the user to choose an image from the local file system. To improve this, you could contribute to the org.eclipse.ui.popupMenus of the Navigator view for image files; that way, whenever an image file is selected, the menu item will be available and the user can choose to load the image from the workspace (you need to add nameFilters, and you may also need to use the workspace API). To see how to load an image from a URL, please refer to the Image Analyzer of the SWT examples. The image loading process is as follows (Figure 3): Figure 3 - Image-loading diagram

Now let's take a look at the code for loading images. SWT provides ImageLoader to load an image into memory. Image loading is done by using the Image(Display, String) constructor. To facilitate image loading, we provide a dialog to locate all image files supported by the SWT ImageLoader.

public void onFileOpen() {
    FileDialog fileChooser = new FileDialog(getShell(), SWT.OPEN);
    fileChooser.setText("Open image file");
    // ... filter path and filter extensions are set here (omitted in the extracted text)
    String filename = fileChooser.open();
    if (filename != null) {
        loadImage(filename);
        currentDir = fileChooser.getFilterPath();
    }
}

public Image loadImage(String filename) {
    if (sourceImage != null && !sourceImage.isDisposed()) {
        sourceImage.dispose();
        sourceImage = null;
    }
    sourceImage = new Image(getDisplay(), filename);
    showOriginal();
    return sourceImage;
}

We use currentDir to remember the directory for the file open dialog, so that the user can later open other files in the same directory. The loadImage method (shown above) disposes the old sourceImage and creates a new sourceImage, then calls showOriginal() to notify the canvas to paint the new image. If loading fails, the canvas will clear the painting area and disable the scrollbars. Notice that we cannot see ImageLoader directly in the code above; however, when we call Image(Display, String), Eclipse will call ImageLoader.load() to load the image into memory. showOriginal() is used to show the image at its original size; we will discuss this in more detail later. In fact, the above two functions could be merged into one method. The reason why we separate them is that we may invoke them separately from other functions; for example, we may get the image file name from the database and then reload the image by only calling loadImage(). Now, let's see how to create a canvas to render the image and do some transformations.
This is done by setting the SWT.V_SCROLL and SWT.H_SCROLL style bits at the Canvas constructor: public SWTImageCanvas(final Composite parent, int style) { super(parent,style|SWT.BORDER|SWT.V_SCROLL|SWT.H_SCROLL |SWT.NO_BACKGROUND);super(parent,style|SWT.BORDER|SWT.V_SCROLL|SWT.H_SCROLL |SWT.NO_BACKGROUND); addControlListener(new ControlAdapter() { /* resize listener */ public void controlResized(ControlEvent event) { syncScrollBars(); } });addControlListener(new ControlAdapter() { /* resize listener */ public void controlResized(ControlEvent event) { syncScrollBars(); } }); addPaintListener(new PaintListener() { /* paint listener */ public void paintControl(PaintEvent event) { paint(event.gc); } });addPaintListener(new PaintListener() { /* paint listener */ public void paintControl(PaintEvent event) { paint(event.gc); } }); order to speed up the rendering process and reduce flicker, we set the style to SWT.NO_BACKGROUND in (and later we use double buffering to render) so that the background (client area) won't be cleared. The new image will be overlapped on the background. We need to fill the gaps between the new image and the background when the new image does not fully cover the background. registers a resize listener to synchronize the size and position of the scrollbar thumb to the image zoom scale and translation; registers a paint listener (here it does paint(GC gc)) to render the image whenever the PaintEvent is fired; registers the SelectionListener for each scrollbar, the SelectionListener will notify SWTImageCanvas to scroll and zoom the image based on the current selection of scrollbars; another function of the SelectionListener is to enable or disable the scrollbar based on the image size and zoom scale. Whenever the SWT PaintEvent is fired, the paint listener ( paint(GC gc)) will be called to paint the damaged area. In this article, we simply paint the whole client area of the canvas (see Figure 4). Since we support scrolling and zooming, we need to figure out which part of the original image should be drawn to which part of the client area. The painting process is as following: imageRectinside the source image (image domain); the image inside this rectangle will be drawn to the client area of canvas (canvas domain). imageRectto the client area and get destRect. imageRectto destRect(scaling it if the sizes are different). Figure 4 - Rendering scenarios 1) and 2) can be done based on AffineTransform, which we will discuss next. 3) draws a part of the source image to the client area using GC's drawImage: drawImage (Image image, int srcX, int srcY, int srcWidth, int srcHeight, int destX, int destY, int destWidth, int destHeight) which copies a rectangular area from the source image into a destination rectangular area and automatically scale the image if the source rectangular area has a different size from the destination rectangular area. If we draw the image directly on the screen, we need to calculate the gaps in 4) and fill them. Here we make use of double buffering, so the gaps will be filled automatically. We use the following approach to render the image: We save only the source image. When the canvas needs to update the visible area, it copies the corresponding image area from the source image to the destination area on the canvas. This approach can offer a very large zoom scale and save the memory, since it does not need to save the whole big zoomed image. The drawing process is also speeded up. 
If the size of canvas is very huge, we could divide the canvas into several small grids, and render each grid using our approach; so this approach is to some extent scalable. The Image Analyzer example that comes with Eclipse draws the whole zoomed image, and scrolls the zoomed image (which is saved by system) to the right place based on the scrollbar positions. This implementation works well for small images, or when the zoom scale is not large. However, for large-sized images and zoom scale greater than 1, the scrolling becomes very slow since it has to operate on a very large zoomed image. This can be shown in Image Analyzer. Now let's look at the code used to find out the corresponding rectangles in the source image and the client area: private void paint(GC gc) { 1 Rectangle clientRect = getClientArea(); /* canvas' painting area */ 2 if (sourceImage != null) { 3 Rectangle imageRect=SWT2Dutil.inverseTransformRect(transform, clientRect); 4 5 int gap = 2; /* find a better start point to render. */ 6 imageRect.x -= gap; imageRect.y -= gap; 7 imageRect.width += 2 * gap; imageRect.height += 2 * gap; 8 9 Rectangle imageBound=sourceImage.getBounds(); 10 imageRect = imageRect.intersection(imageBound); 11 Rectangle destRect = SWT2Dutil.transformRect(transform, imageRect); 12 13 if (screenImage != null){screenImage.dispose();} 14 screenImage = new Image( getDisplay(),clientRect.width, clientRect.height); 15 GC newGC = new GC(screenImage); 16 newGC.setClipping(clientRect); 17 newGC.drawImage( sourceImage, 18 imageRect.x, 19 imageRect.y, 20 imageRect.width, 21 imageRect.height, 22 destRect.x, 23 destRect.y, 24 destRect.width, 25 destRect.height); 26 newGC.dispose(); 27 28 gc.drawImage(screenImage, 0, 0); 29 } else { 30 gc.setClipping(clientRect); 31 gc.fillRectangle(clientRect); 32 initScrollBars(); 33 } } Line 3 to line 10 are used to find a rectangle ( imageRect) in the source image, the source image inside this rectangle will be drawn to the canvas. This is done by inverse transforming the canvas's client area to the image domain and intersecting it with the bounds of image. The imageRect of line 10 is the exact rectangle we need. Once we have got imageRect, we transform imageRect back to the canvas domain in line 11 and get a rectangle destRect inside the client area. The source image inside the imageRect will be drawn to the client area inside destRect. After we get the imageRect of the source image and the corresponding destRect of the client area, we can draw just the part of image to be shown, and draw it in the right place. For convenience, here we use double buffering to ease the drawing process: we first create a screenImage and draw image to the screenImage, then copy the screenImage to the canvas. Line 30 to line 32 are used to clear the canvas and reset the scrollbar whenever the source image is set to null. Line 5 to line 7 are used to find a better point to start drawing the rectangular image, since the transform may compress or enlarge the size of each pixel. To make the scrolling and zooming smoothly, we always draw the image from the beginning of a pixel. This also guarantee that the image always fills the canvas if it is larger than the canvas. The flowchart of rendering is as following (Figure 5): Figure 5 - Rendering flowchart In the code above, we use inverseTransformRect() in line 3 and transformRect() in line 11 for transforming rectangles between different domain. We will discuss them in detail in the next section. 
When we say scrolling in this section, we mean scrolling the image, not the scrollbar thumb (which actually moves in the opposite direction). Our primary goal is to develop a canvas with scrolling and zooming functions. To do that, we must solve the following problems: Scrolling and zooming entails two transformations: translation and scaling (see Figure 6). Translation is used to change the horizontal and vertical position of the image; scrolling involves translating the image in the horizontal or vertical directions. Scaling is used to change the size of image; scale with a rate greater than 1 to zoom in; scale with a rate less than 1 to zoom out. Figure 6 - Translation and scaling SWTImageCanvas uses an AffineTransform to save the parameters of both the translation and the scaling. In this implementation, only translation and scaling are used. The basic idea of AffineTransform is to represent the transform as a matrix and then merge several transforms into one by matrix multiplication. For example, a scaling S followed by a translation T can be merged into a transform like: T*S. By merging first and then transforming, we can reduce times for transforming and speed up the process. SWTImageCanvas has an AffineTransform instance transform: private AffineTransform transform; AffineTransform provides methods to access the translation and scaling parameters of an affine transform: public double getTranslateX(); public double getTranslateY(); public double getScaleX(); public double getScaleY(); To change the AffineTransform, we can either reconstruct an AffineTransform by merging itself and another transform, or start from scratch. AffineTransform provides preConcatenate() and concatenate() methods, which can merge two AffineTransforms into one. Using these two methods, each time the user scrolls or zooms the image, we can create a new transform based on the changes (scrolling changes translation and zooming changes scaling) and the transform itself. The merge operation is matrix multiplication. Since 2D AffineTransform uses a 3x3 matrix, so the computation is very cheap. For example, when the user scrolls the image by tx in the x direction and ty in the y direction: newTransform = oldTransform.preconcatenate(AffineTransform.getTranslateInstance(tx,ty)); To construct a scaling or translation transform from scratch: static AffineTransform getScaleInstance(sx, sy); static AffineTransform getTranslateInstance(tx,ty); Once you have an AffineTransform, the transformation can be easily done. To transform a point: public static Point transformPoint(AffineTransform af, Point pt) { Point2D src = new Point2D.Float(pt.x, pt.y); Point2D dest= af.transform(src, null); Point point=new Point((int)Math.floor(dest.getX()),(int)Math.floor(dest.getY())); return point; } To get the inverse transform of a point: static Point2D inverseTransform(Point2D ptSrc, Point2D ptDst); Since we use only translation and scaling in our implementation, transforming a rectangle can be done by first transforming the top-left point, and then scaling the width and height. To do that, we need to convert an arbitrary rectangle to a rectangle with positive width and length. 
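The helper that performs this conversion is the absRect() function used by the transform code below; it is referenced in the article but not listed here. The following is a sketch of one possible implementation (my reconstruction, not the article's own code), using SWT's Rectangle:

public static Rectangle absRect(Rectangle src) {
    Rectangle dest = new Rectangle(0, 0, 0, 0);
    // Flip a negative width or height so that the rectangle covers the same
    // area but has a positive size in both directions.
    if (src.width < 0) {
        dest.x = src.x + src.width;
        dest.width = -src.width;
    } else {
        dest.x = src.x;
        dest.width = src.width;
    }
    if (src.height < 0) {
        dest.y = src.y + src.height;
        dest.height = -src.height;
    } else {
        dest.y = src.y;
        dest.height = src.height;
    }
    return dest;
}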
The following code shows how to transform an arbitrary rectangle using AffineTransform (the inverse transform is almost the same): 1 public static Rectangle transformRect(AffineTransform af, Rectangle src){ 2 Rectangle dest= new Rectangle(0,0,0,0); 3 src=absRect(src); 4 Point p1=new Point(src.x,src.y); 5 p1=transformPoint(af,p1); 6 dest.x=p1.x; dest.y=p1.y; 7 dest.width=(int)(src.width*af.getScaleX()); 8 dest.height=(int)(src.height*af.getScaleY()); 9 return dest; 10 } The absRect() function in line 3 is used to convert an arbitrary rectangle to a rectangle with positive width and height. For more detail about AffineTransform, you can read the Java API document from SUN website. AffineTransform also supports shear and rotation. In this article, we only need translation and scaling. AffineTransform is widely used in the AWT's image packages, and it has no relation with UI event loop, so it can be used in SWT. (Even if AffineTransform were unavailable, we could easily replace or rewrite it since we only use the translation and scaling). We have seen how we save the scrolling and scaling parameters in AffineTransform, and how we can change them. But how do they control the image rendering? Figure 7 - Scrolling and zooming diagram The basic idea is shown in Figure 7. When the user interacts with GUI (scrollbars and toolbar buttons), her/his action will be caught by Eclipse, Eclipse will invoke the listeners (for scrollbars) or delegates (for toolbar buttons) to change the parameters in the transform, then the canvas will update the status of scrollbars based on the transform, and finally it will notify itself to repaint the image. The painter will consider the updated transform when it repaints the image. For example, it will use transform to find out the corresponding rectangle in the source image to the visible area on the canvas, and copy the source image inside the rectangle to the canvas with scaling. Let's take a look at some methods which use AffineTransform to translate and zoom images. First let's see how to show an image at its original size: public void showOriginal() { transform=new AffineTransform(); syncScrollBars(); }transform=new AffineTransform(); syncScrollBars(); } Here we first change transform in (defaults to a scaling rate of 1, and no translation), and then call syncScrollBars() to update the scrollbar and repaint the canvas. It's that simple. Now let's try another one - zooming. When we zoom the image, we will zoom it around the center of the client area (centered zooming). The procedure for centered zooming is: The syncScrollBars() (see next section) guarantees that the image will be centered in the client area if it is smaller than the client area. Steps 2-4 can be used to scale images around an arbitrary point (dx,dy). 
Since the same steps will be used by many other methods, we put them in the method centerZoom(dx,dy,scale,af): public void centerZoom(double dx,double dy,double scale,AffineTransform af) { af.preConcatenate(AffineTransform.getTranslateInstance(-dx, -dy)); af.preConcatenate(AffineTransform.getScaleInstance(scale, scale)); af.preConcatenate(AffineTransform.getTranslateInstance(dx, dy)); transform=af; syncScrollBars(); } Now the code for zoomIn is: public void zoomIn() { if (sourceImage == null) return; Rectangle rect = getClientArea(); int w = rect.width, h = rect.height; /* zooming center */ double dx = ((double) w) / 2; double dy = ((double) h) / 2; centerZoom(dx, dy, ZOOMIN_RATE, transform); } Here the ( dx, dy) is the zooming center, ZOOMIN_RATE is a constant for incremental zooming in. centerZoom() will also call syncScrollBars() to update the scrollbar and repaint the canvas. Each time user zooms or scrolls the image, the scrollbars need to update themselves to synchronize with the state of image. This includes adjusting the position and the size of the thumbs, enabling or disabling the scrollbars, changing the range of the scrollbars, and finally notifying the canvas to repaint the client area. We use syncScrollBars() to do this: public void syncScrollBars() { if (sourceImage == null){ redraw(); return; } AffineTransform af = transform; double sx = af.getScaleX(), sy = af.getScaleY(); double tx = af.getTranslateX(), ty = af.getTranslateY();if (sourceImage == null){ redraw(); return; } AffineTransform af = transform; double sx = af.getScaleX(), sy = af.getScaleY(); double tx = af.getTranslateX(), ty = af.getTranslateY();); tx = (cw - imageBound.width * sx) / 2; }tx = (cw - imageBound.width * sx) / 2; } horizontal.setSelection((int) (-tx));horizontal.setSelection((int) (-tx)); horizontal.setThumb((int)(getClientArea().width)); /* update vertical scrollbar, same as above. */ ScrollBar vertical = getVerticalBar(); .... /* update transform. */horizontal.setThumb((int)(getClientArea().width)); /* update vertical scrollbar, same as above. */ ScrollBar vertical = getVerticalBar(); .... /* update transform. */ af = AffineTransform.getScaleInstance(sx, sy); af.preConcatenate(AffineTransform.getTranslateInstance(tx, ty)); transform=af;af = AffineTransform.getScaleInstance(sx, sy); af.preConcatenate(AffineTransform.getTranslateInstance(tx, ty)); transform=af; redraw(); }redraw(); } If there is no image, the paint listener will be notified to clear the client area in . If there is an image to show, we correct the current translation to make sure it's legal (<=0). The point ( tx, ty) in corresponds to the bottom-left corner of the zoomed image (see the right-hand image in Figure 4), so it's reasonable to make it no larger than zero (the bottom-left corner of the canvas client area is (0,0)) except if the zoomed image is smaller than the client area. In such a situation, we correct the transform in so that it will translate the image to the center of client area. We change the selection in and the thumb size in , so that the horizontal scrollbar will show the relative position to the whole image exactly. The other lines between and set the GUI parameters for the horizontal scrollbar, you can change them to control the scrolling increment. The process for the vertical scrollbar is exactly the same, so we don't show it here. Lines between and create a new transform based on the corrected translation and the scaling and update the old transform. Finally, line notifies the canvas to repaint itself. 
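The paint() method shown earlier also relies on SWT2Dutil.inverseTransformRect(), which the article describes as "almost the same" as transformRect() but does not list. Here is a sketch of how it could look; this is a reconstruction under that description, not the article's own code. AffineTransform.inverseTransform() can throw NoninvertibleTransformException, which is simply swallowed here for brevity.

// Map a rectangle from the canvas domain back to the image domain.
public static Rectangle inverseTransformRect(AffineTransform af, Rectangle src) {
    Rectangle dest = new Rectangle(0, 0, 0, 0);
    src = absRect(src);
    Point p1 = new Point(src.x, src.y);
    p1 = inverseTransformPoint(af, p1);
    dest.x = p1.x;
    dest.y = p1.y;
    // Only translation and scaling are used, so width and height just divide by the scale.
    dest.width = (int) (src.width / af.getScaleX());
    dest.height = (int) (src.height / af.getScaleY());
    return dest;
}

// Companion helper: inverse-transform a single point.
public static Point inverseTransformPoint(AffineTransform af, Point pt) {
    Point2D src = new Point2D.Float(pt.x, pt.y);
    try {
        Point2D dest = af.inverseTransform(src, null);
        return new Point((int) Math.floor(dest.getX()), (int) Math.floor(dest.getY()));
    } catch (Exception e) { // NoninvertibleTransformException
        return new Point(0, 0);
    }
}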
Joe Winchester's Taking a look at SWT Images explains pixel manipulation in great detail. Here we will show how to rearrange the pixels to get a 900 counter-clockwise rotation. In order to demonstrate how other classes can interact with SWTImageCanvas, we put the implementation in the PushActionDelegate class. The basic steps for our rotation are: SWTImageCanvas. SWTImageCanvas. The code in PushActionDelegate for rotation is: ImageData src=imageCanvas.getImageData(); if(src==null) return; PaletteData srcPal=src.palette; PaletteData destPal; ImageData dest; /* construct a new ImageData */ if(srcPal.isDirect){ destPal=new PaletteData(srcPal.redMask,srcPal.greenMask,srcPal.blueMask); }else{destPal=new PaletteData(srcPal.redMask,srcPal.greenMask,srcPal.blueMask); }else{ destPal=new PaletteData(srcPal.getRGBs()); }destPal=new PaletteData(srcPal.getRGBs()); } dest=new ImageData(src.height,src.width,src.depth,destPal); /* rotate by rearranging the pixels */ for(int i=0;i<src.width;i++){ for(int j=0;j<src.height;j++){ int pixel=src.getPixel(i,j);dest=new ImageData(src.height,src.width,src.depth,destPal); /* rotate by rearranging the pixels */ for(int i=0;i<src.width;i++){ for(int j=0;j<src.height;j++){ int pixel=src.getPixel(i,j); dest.setPixel(j,src.width-1-i,pixel); } }dest.setPixel(j,src.width-1-i,pixel); } } imageCanvas.setImageData(dest);imageCanvas.setImageData(dest); The code for setImageData() is: public void setImageData(ImageData data) { if (sourceImage != null) sourceImage.dispose(); if(data!=null) sourceImage = new Image(getDisplay(), data); syncScrollBars(); } Since we won't change the pixel value, we needn't care about the RGB of each pixel. However, we must reconstruct a new ImageData object with different dimension. This needs different PaletteData in and for different image formats. creates a new ImageData and sets the value of each pixel. setImageData() in will dispose the previous sourceImage and reconstruct sourceImage based on the new ImageData, then update the scrollbars and repaint the canvas. We put setImageData() inside SWTImageCanvas so it could be used by other methods in the future. We have known how the image canvas works. Now, let's talk briefly about how to implement the plug-in. Follow the steps 1-3 in Creating an Eclipse View, we can create a plug-in with a single view. The plugin.xml is: <plugin id="uky.article.imageviewer" name="image viewer Plug-in" version="1.0.0" provider- <runtime> <library name="imageviewer.jar"/> </runtime> <requires> <import plugin="org.eclipse.ui"/> </requires> <extension point="org.eclipse.ui.views"> <category name="Sample Category" id="uky.article.imageviewer"> </category> <view name="Image Viewer" icon="icons/sample.gif" category="uky.article.imageviewer" class="uky.article.imageviewer.views.ImageView" id="uky.article.imageviewer.views.ImageView"> </view> </extension> </plugin> The ImageViewerPlugin and ImageView are as following: public classImageViewerPlugin extends AbstractUIPlugin { public ImageViewerPlugin(IPluginDescriptor descriptor) { super(descriptor); } } public class ImageView extends ViewPart { declares an instance variable imageCanvas to point to an instance of SWTImageCanvas, so that other methods can use it. creates an SWTImageCanvas to show the image. When the view gets focus, it will set focus to imageCanvas in . The dispose method of SWTImageCanvas will be automatically called in whenever the view is disposed. The image viewer view has five local toolbar buttons: . 
To take the advantage of Eclipse, we contribute to the org.eclipse.ui.viewActions extension point by adding the following lines to plugin.xml: <extension point="org.eclipse.ui.viewActions"> <viewContribution targetID="uky.article.imageviewer.views.ImageView" id="uky.article.imageviewer.views.ImageView.pushbutton"> <action label="open" icon="icons/Open16.gif" tooltip="Open image" class="uky.article.imageviewer.actions.PushActionDelegate" toolbarPath="push_group" enablesFor="*" id="toolbar.open"> </action> ..... </viewContribution> </action> ..... </viewContribution> The delegate class PushActionDelegate in will process all the actions from the toolbar buttons. It is defined as following: publicclass PushActionDelegate implements IViewActionDelegate { This class implements the IViewActionDelegate interface. It has a view instance in . It gets the view instance during the initialization period , and later in it uses the view instance to interact with the SWTImageCanvas. We have shown the detail on how to implement a simple image viewer plug-in for Eclipse. The SWTImageCanvas supports scrolling and zooming functions by using AWT's AffineTransform. It supports unlimited zoom scale and smooth scrolling even for large images. Compared with the implementation of Image Analyzer example, this implementation is both memory efficient and fast. The SWTImageCanvas can be used as a base class for rendering, scrolling, and scaling image. Shortcut keys are a must for usable image viewers. In the interest of space, we did not show this here; however, they can be easily added. I appreciate the help I received from the Eclipse mailing list; the Eclipse team's help in reviewing the article; Pengtao Li's photo picture; and finally the support from RCH lab at the University of Kentucky. Creating an Eclipse View. Dave Springgay, 2001 Taking a look at SWT Images. Joe Winchester, 2003 The Java Developer's Guide to ECLIPSE. Sherry Shavor, Jim D'Anjou, Scott Fairbrother, Dan Kehn, John Kellerman, Pat McCarthy. Addison-Wesley, 2003
http://www.eclipse.org/articles/Article-Image-Viewer/Image_viewer.html
CC-MAIN-2016-07
en
refinedweb
Pinned topic: Document on Automation Script

Could anyone help me with automation script documents? We need to use automation scripts for our project, but I couldn't find any documents on automation scripts. Thanks in advance.

Re: Document on Automation Script (tivoli-i lov it, 2011-12-27T04:58:14Z)
I'm looking for the same. Please help.

Re: Document on Automation Script (2011-12-27T05:08:00Z, in reply to tivoli-i lov it)
The following documents might be helpful for you: Jython Scripts, Service Catalog Jython Validation.

Re: Document on Automation Script (2011-12-27T18:07:13Z)
Attached is a guide provided by IBM that has been invaluable in understanding automation scripts. This was given in response to one of their webinars (though it is also available online). This is based on Maximo 7.5 (which I'm assuming is the version you were interested in). There were some slight differences (such as using scriptHome instead of mbo) in TSRM 7.2.

Re: Document on Automation Script (MichaelSmithson, 2012-02-19T06:38:15Z)
Hi all, I am trying to develop an automated script (using Python) in IBM Maximo 7.5 which adds a period of time (a number of days) to an existing date. I have 3 attribute fields (existing date, period of time and resultant date), am using an attribute launch point with the period of time as the implicit attribute, and existing date (type IN) and resultant date (type OUT) set as explicit variables. I only need help with how to develop the script itself to unravel and rebuild the dates, which I have in the following format: 31/01/2012 16:00 (i.e. 31 January 2012, 4pm). Please can anyone help?

Re: Document on Automation Script (2012-03-26T09:57:42Z, in reply to MichaelSmithson)

import time
import java.util.Date
p = java.util.Date.getTime(purchasedate)
import java.math.BigInteger
s = java.math.BigInteger.floatValue(p)
s = s / 1000
a = assetdesignlife * 86400
s = s + a
t = time.localtime(s)
designendoflifedate = time.strftime("%d/%m/%Y %H:%M", t)

If only we had had automated scripting in earlier versions of IBM Maximo.

Re: Document on Automation Script (2012-03-27T04:46:12Z)
That said, I think you should explore using Calendar classes in the Java API. It would make a much cleaner approach to doing the same thing. You can find the Java APIs here.

Re: Document on Automation Script (2012-04-26T14:01:44Z)
Here's an example script. This script assumes my Jython/lib directory on the AppServer is the one shipped within WebSphere; caution is suggested when using this embedded Jython in WebSphere, as I've found that it's kinda messed up for some modules, and I would recommend instead pointing to a full Jython installation lib directory. In the 7.5.0.3 version of Tpae and going forward, it appears that this directory is automatically included, though unfortunately in 7.5.0.2 Tpae (and SCCD 7.5) this workaround is still required.
import sys
print sys.path
foundJython = False
for path in sys.path:
    if (path.find("jython\Lib") != -1):
        foundJython = True
        print "already found jython in path"
if (foundJython == False):
    sys.path.append(r'C:\Progra~1\IBM\WebSphere\AppServer\optionalLibraries\jython\Lib')
print sys.path
import httplib

Thanks, Scott

Automation script loops around and updates all open records (2012-10-24T16:21:35Z)
Hi, I am trying to develop a script that will fire on an SR and update the work order with a set of data. It works, but instead of updating the work order created from the SR, it goes on and updates all the work orders. I am not able to figure out how to limit or control this to update only the work order that was created from the SR. I applied some conditions but that did not work completely. scriptHome does not work on 7.5? Is there a way to get around this problem? Here is a sample of the code.

woQuesAnsSet = mbo.getMboSet("G_ENVQUESANS");
woQuesSet = mbo.getMboSet("G_ENVQUEST");
for i in range(0, woQuesSet.count()):
    woQAS = woQuesSet.getMbo(i)
    myAns = woQuesAnsSet.addAtEnd()
    myAns.setValue("G_APP", woQAS.getString("G_APP"), 2L)
    myAns.setValue("G_CATEGORY", woQAS.getString("G_CATEGORY"), 2L)

Let me know if you have any suggestions or recommendations on what changes this needs to make it work. Thanks in advance, Kumar

Re: Automation script loops around and updates all open records (David Brawner, 2013-05-13T15:34:17Z)
mahato01, it doesn't appear from your example that you qualified the WO set, i.e. you have not identified the WO you wish to update, so the for statement loops through all returned WOs. Maybe I missed something in your example code.

Re: Document on Automation Script (2015-02-25T18:00:15Z)
We have both Windows and Linux servers, and we were able to use the above code with some modifications. It works fine for Windows. However, we are not able to get it working on the Linux servers. We get an import error that the module was not found. Following is the code we used:

import sys
foundJythonW = False
foundJythonL = False
for path in sys.path:
    if (path.find("C:\Program Files\IBM\WebSphere\AppServer\optionalLibraries\jython\Lib") != -1):
        foundJythonW = True
        print "Already found in jython path for Windows"
    if (path.find("/opt/IBM/WebSpere/AppServer/optionalLibraries/jython/Lib") != -1):
        foundJythonL = True
        print "Already found in jython path for Linux"
if (foundJythonW == False):
    sys.path.append(r"C:\Program Files\IBM\WebSphere\AppServer\optionalLibraries\jython\Lib")
if (foundJythonL == False):
    sys.path.append(r"/opt/IBM/WebSpere/AppServer/optionalLibraries/jython/Lib")
print "sys.path after: " + str(sys.path)
import ups_testLib

Hoping you can help so we can build libraries using jython. Thank you. Heidi
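One of the replies above suggests using the Java Calendar classes instead of epoch arithmetic for the add-days script. As a rough illustration only (not code from the thread), that approach could be sketched in a Maximo Jython script as below; the names purchasedate, assetdesignlife and designendoflifedate are taken from the earlier post, and whether they are bound exactly this way depends on the launch point configuration.

# Sketch of the Calendar-based approach suggested above. Assumes 'purchasedate'
# is a java.util.Date bound by the launch point and 'assetdesignlife' is a
# number of days, as in the earlier post.
from java.util import Calendar

cal = Calendar.getInstance()
cal.setTime(purchasedate)                            # start from the existing date
cal.add(Calendar.DAY_OF_YEAR, int(assetdesignlife))  # add the design life in days
designendoflifedate = cal.getTime()                  # result as a java.util.Date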
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014772398&ps=25
CC-MAIN-2016-07
en
refinedweb
05-16-2012 04:28 PM
I have read some blog posts and forum entries about trying to implement a QNXApplicationEvent.SWIPE_DOWN listener, but somehow none work for me. I have tried signals:

_swipedDown = new NativeSignal(QNXApplication.qnxApplication, QNXApplicationEvent.SWIPE_DOWN, QNXApplicationEvent);
_swipedDown.add(onSwipedDown);

And I have tried regular event handlers:

QNXApplication.qnxApplication.addEventListener(QNXApplicationEvent.SWIPE_DOWN, onSwipedDown);

But the handler never traces anything:

private function onSwipedDown(_:QNXApplicationEvent):void {
    trace("\n", this, "--- onSwipedDown ---");
}

The class holding this code extends a Sprite. Anything I should know about?

05-16-2012 04:36 PM
Hi, what version of the SDK are you using? I've seen the same thing with the BB10 SDK when trying to run on the PlayBook. I think it comes from the fact that some of the qnx packages are now implemented as ANEs...

05-16-2012 04:46 PM
I'm using these: Yea, there are some ANEs linked in there.

05-16-2012 04:52 PM
My code was working well when I built it against tablet SDK 2.0. But since I built it on tablet SDK 3.0 (I called it the BB10 SDK in the previous post), the event is not working anymore. I don't know if this is because SDK 3 is only supported on BB10, or if it is a bug... I only tested on the PlayBook as I don't have a BB10 device yet.

05-16-2012 04:54 PM
Yea, feels like a bug to me then. A functionality cannot be taken away in the hopes that users are necessarily up to date. Especially if an update is not available to their device.

05-17-2012 12:15 AM
I just tried again and this works for me:

import qnx.events.QNXApplicationEvent;
import qnx.system.QNXApplication;

QNXApplication.qnxApplication.addEventListener(QNXApplicationEvent.SWIPE_DOWN, showSwipeUI);

private function showSwipeUI(e:QNXApplicationEvent):void {
    trace("zomg swipez!");
}

05-17-2012 09:51 AM
Are you testing this on a PlayBook or using the BlackBerry Dev Alpha Simulator? Applications produced in the BB10 SDKs require BlackBerry 10+ to run. They will not run correctly on Tablet OS 2.x.

05-17-2012 09:54 AM
The code I pasted above Mark's post worked for me with the 2.0 SDK on the PlayBook and the 3.0 SDK on the Dev Alpha. I didn't try it with the 3.0 on the PlayBook for the reasons Mark stated. Just thought I'd clear that up quick.

05-17-2012 10:08 AM
I'm having the same problem. My existing app with SDK 2.0, swipe down works fine on the PlayBook. Updated the app to SDK 3.0, running on the BB10 simulator, swipe down won't work.

05-17-2012 10:18 AM
Thanks for the clarification Mark, let's hope a BB10 dev beta will soon be available on the PlayBook, as was the case with 2.0. I'm already upgrading all my apps to BB10 ;-)
http://supportforums.blackberry.com/t5/Adobe-AIR-Development/QNXApplicationEvent-SWIPE-DOWN-not-firing-at-all-in-an-AS3/m-p/1720639/highlight/true
CC-MAIN-2016-07
en
refinedweb
Results 1 to 5 of 5 - Join Date - Nov 2008 - 7 - Thanks - 5 - Thanked 0 Times in 0 Posts beginner problem - using methods does not detect variable. hello I have been taking a course in java and doing horrible. Well, not horrible, I do the assigments, but ask tons of questions. I am writing a program to return the conversion of meters to other units, I have the code part working, and was working on asthetics, I needed to break down into methods, thats where the problem came in. I am using meters (double) and when I try to pass it off to another method it wont recognize the variable. Can someone please help? thanks Code: import java.util.Scanner; //Needed for Scanner Class /** my name and infor here */ public class meterProblem { public static void main(String [] args) { double meters; // A number entered by the user int number; //selection by the user //Create Scanner object keyboard input Scanner keyboard = new Scanner(System.in); //Get number from user. System.out.println("Enter a distance in meters: "); meters = keyboard.nextDouble(); if (meters < 0) { System.out.println("Number has to be greater than 0, please enter meter: "); meters = keyboard.nextDouble(); } //Get Selection from user. System.out.print("Enter your choice: "); System.out.println("\n1. Convert to kilometers.\n" + "2. Convert to inches.\n" + "3. Convert to feet.\n" + "4. Quit the program."); number = keyboard.nextInt(); //Determine number entered. switch (number) { case 1: showKiloMeters(); break; case 2: showInches(); break; case 3: showFeet(); break; case 4: System.out.println("thanks, goodbye."); application.shutdown(); default: System.out.println("That's not 1,2,3, or 4."); break; } } public static void showKiloMeters() { double kiloMeters = 0.00; kiloMeters = (meters * 0.001); System.out.print(meters + " meters is " + kiloMeters + " Kilometers."); } public static void showInches() { double showInches = 0.00; showInches = (meters * 39.37); System.out.print(meters + " meters is " + showInches + " inches."); } public static void showFeet() { double feet = 0.00; feet = (meters * 3.281); System.out.print(meters + " meters is " + feet + " feet."); } } Last edited by jlopez3203; 11-07-2008 at 08:31 PM. - Join Date - May 2006 - Location - Ontario, Canada - 392 - Thanks - 2 - Thanked 20 Times in 20 Posts Could you make things a little easier on those wishing to help by showing us exactly what line(s) of code have an issue and the error(s), if any, that you receive? From the looks of things though, my guess is you're having issues with your showKiloMeters, showInches and showFeet methods. If this is correct it's because your "meters" variable is declared inside of your "main". This means that the "meters" variable is not visible to any code outside of your main method (i.e. showKiloMeters, showInches and showFeet). For further reading google something along the lines of "java variable scope". To fix your issue I see a couple possibilities. 1. Make meters a class variable so that all methods in the class can see and access it. Code: public class meterProblem { //If declared here, your show methods should be able to access meters double meters; // A number entered by the user public static void main(String [] args) { int number; //selection by the user ... Code: ... case 1: showKiloMeters(meters); break; case 2: showInches(meters); break; case 3: showFeet(meters); break; ... 
public static void showKiloMeters(double meters) {...} public static void showInches(double meters) {...} public static void showFeet(double meters) {...} Last edited by Gox; 11-07-2008 at 04:25 AM. Users who have thanked Gox for this post: Users who have thanked shyam for this post: - Join Date - Nov 2008 - 7 - Thanks - 5 - Thanked 0 Times in 0 Posts Its my first post, and Im not posting literate yet, I did try to read the rules and post guidelines and abide by them, but all you guys rock, thanks, I will try your suggestions and my posting will get cleaner with time. thanks again. Now i have separated them into separate methods, but Im suppose to create a menu loop that shows the options to chose from. 1. convert to kilometers 2. convert to inches 3. convert to feet 4. quit the program. Where would I add a loop like that into the problem, the beginning, the end, the middle? would it be considered an infinite loop? Code: import java.util.Scanner; //needed for scanner class /** Java Thursday Evening */ public class convertingMetersMethod { public static void main(String [] args) { int number; //number entered by user double meters; // create a scanner object for keyboard input Scanner keyboard = new Scanner(System.in); //enter meters System.out.println("Enter a distance in meters: "); meters = keyboard.nextDouble(); // get a selection from menu System.out.print("Enter your choice: "); System.out.println("\n1. Convert to kilometers\n2. Convert to inches\n" + "3. Convert to feet\n4. Quit the program"); number = keyboard.nextInt(); //determine number entered switch (number) { case 1: showKilometers(meters); break; case 2: showInches(meters); break; case 3: showFeet(meters); break; case 4: System.out.println("Thanks. Good day!"); System.exit(0); default: System.out.println("That's not a valid option, try again: "); number = keyboard.nextInt(); } } public static void showKilometers(double meters) { double kiloMeters; kiloMeters = meters * 0.001; System.out.println(meters + " meters is " + kiloMeters + " kilometers."); } public static void showInches(double meters) { double inches; inches = meters * 39.37; System.out.println(meters + " meters is " + inches + " inches."); } public static void showFeet(double meters) { double feet; feet = meters * 3.281; System.out.println(meters + " meters is " + feet + " feet."); } } Last edited by jlopez3203; 11-08-2008 at 12:58 AM.
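For reference, one way to add the menu loop asked about above is to wrap the prompt and the switch in a do-while that repeats until option 4 is chosen, so it is not an infinite loop. The sketch below is not from the thread; the class name and message strings are placeholders based on the code already posted.

import java.util.Scanner;

public class ConvertingMetersLoop {  // hypothetical class name, not from the thread
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        System.out.println("Enter a distance in meters: ");
        double meters = keyboard.nextDouble();

        int number;
        do {
            // The menu is printed on every pass, so it belongs inside the loop.
            System.out.println("1. Convert to kilometers\n2. Convert to inches\n"
                    + "3. Convert to feet\n4. Quit the program");
            System.out.print("Enter your choice: ");
            number = keyboard.nextInt();

            switch (number) {
                case 1: showKilometers(meters); break;
                case 2: showInches(meters);     break;
                case 3: showFeet(meters);       break;
                case 4: System.out.println("Thanks. Good day!"); break;
                default: System.out.println("That's not a valid option, try again.");
            }
        } while (number != 4);   // not infinite: choosing option 4 ends the loop
    }

    public static void showKilometers(double meters) {
        System.out.println(meters + " meters is " + (meters * 0.001) + " kilometers.");
    }
    public static void showInches(double meters) {
        System.out.println(meters + " meters is " + (meters * 39.37) + " inches.");
    }
    public static void showFeet(double meters) {
        System.out.println(meters + " meters is " + (meters * 3.281) + " feet.");
    }
}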
http://www.codingforums.com/java-and-jsp/151758-beginner-problem-using-methods-does-not-detect-variable.html
CC-MAIN-2016-07
en
refinedweb
Part II: Pointers
By NickDMax
Previous: Enumerations

Pointers

Although this topic is not directly in our path, I think a little side trip is in order since we will use pointers from here on out. Pointers are wonderful and powerful, and yet, misunderstood, they are a disaster waiting to happen. Many languages don't have pointers since they allow the programmer to get into all kinds of trouble. However, in most of these languages there are "workarounds" that have been developed to allow the adventurous to perform many of the techniques that use pointers.

So what are pointers? They are integer data types that hold an address to a location in memory. More than that, to the compiler they are associated with another data type (of a known size). The programmer can manipulate the pointer itself, for example incrementing it so that it points to the next element (not necessarily the next byte), or the programmer can manipulate the data that the pointer addresses.

Let's look at a specific example. The notation int *ptr; declares a pointer to an integer. The variable ptr is nothing more than an integer itself, but it is a special integer to the compiler. To the compiler the value of ptr is a memory address to a 4 byte (assuming a 32 bit integer) block of memory. When we tell the compiler ptr++ the compiler will move to the NEXT integer, which means that the value in ptr is increased by 4. When we tell the compiler ptr-- the value of ptr is decreased by 4, because an integer is 4 bytes and the compiler wants to move ptr to address adjacent integers (not overlapping integers). To access the data that the pointer addresses we would use the syntax *ptr. To increase the value that the pointer addresses, we may use (*ptr)++, which will increase *ptr by 1, but will leave the value of ptr unaffected. Should we code something like ptr += 8, this would increment ptr by 4*8 = 32 bytes (or 8 integers).

Pointers come with three special operators (*, [ ], and ->), and there are two associated operators we will want to discuss (& and sizeof()). The first operator (*) is usually called the dereference operator as it "dereferences" the pointer and returns the value or object that the pointer addresses. The next operator ([ ]) is the Array Index operator. This operator is a hybrid of the discussion in the last two paragraphs. That is to say that ptr[i] == *(ptr + i); it chooses an offset for ptr, and then dereferences that value. The next operator (->) is also a dereference operator that is used to dereference an element of a structure. Since we have not discussed such things yet, I will reserve the discussion of this operator until after I have introduced structures. Not exactly pointer operators, but important to the use of pointers, there is the (&) operator. This operator is known as the reference operator or, as I like to think of it, the "Address of" operator. It gives the address of a variable. The last operator I wish to discuss is the sizeof() operator. This operator acts more like a function and it returns the number of bytes needed to represent a type or variable.

---------- Pointer101.C: Demo Program ----------
#include <stdlib.h>
#include <stdio.h>

//Utility function to pause output until user presses enter.
void pause();

//My crash course in pointers.
int main()
{
    int *ptrToInt;
    int Array[] = {0, 10, 20, 30, 40, 50, 60};
    //C uses pointers to access arrays. The variable Array is a pointer
    // to a block of memory containing the integers |0|10|20|30|...|60|
    // All array variables are pointers, and all pointers can be array variables!!!
    int counter;

    //This assigns ptrToInt to the address of the block of memory that is Array[]
    ptrToInt = Array;
    puts("ptrToInt = Array, Same as ptrToInt = &Array[0]\n");

    //First lets use ptrToInt as though it were ptrToInt[]
    for (counter=0; counter<7; ++counter) {
        printf("Array[%d]==%d\tptrToInt[%d]==%d\n", counter, Array[counter], counter, ptrToInt[counter]);
    }

    //Now lets see if it is true that ptrToInt[i]==*(ptrToInt + i), What is *(ptrToInt)+i doing?
    for (counter=0; counter<7; ++counter) {
        printf("*(ptrToInt + %d)==%d\t*(ptrToInt)+%d==%d\n", counter, *(ptrToInt + counter), counter, (*(ptrToInt)+counter));
    }

    //Take a min or two...
    pause();

    // Lets see what happens when we do ptrToInt++.
    puts("\n*ptrToInt += counter");
    for (counter=0; counter<7; ++counter) {
        *ptrToInt += counter;
        printf("Array[%d]==%d\tptrToInt==%d\t*ptrToInt==%d\n", counter, Array[counter], ptrToInt, *ptrToInt);
        ptrToInt++; //Note that ptrToInt goes up by either 2 or 4.
    }

    puts("\nptrToInt=&counter\n");
    ptrToInt = &counter; //make ptrToInt point to our counter
    for (counter=0; counter<7; ++(*ptrToInt)) {
        printf("counter==%d\tptrToInt==%d\t*ptrToInt==%d\n", counter, ptrToInt, *ptrToInt);
        if (*ptrToInt==3) { *ptrToInt=7; } //*ptrToInt can affect the loop
    }
    return 0;
}

void pause()
{
    puts("Press ENTER to continue...");
    while (getchar()!='\n');
    return;
}
---------- END Pointer101.C ----------

The above program, although very dry, points out that pointers and arrays are intertwined in C/C++. Each array variable is actually a pointer, and each pointer can be used to access an array. In this tutorial it is the latter feature which will be of the most use, as pointers allow us to use dynamically allocated memory. In all my examples so far, the various variables and data structures that I have defined have all been "static" data that is either allocated when the program loads, or is made out of chunks of our stack space. This means that so far our data has essentially been very limited in size. This is fine for small amounts of data, but when our games have many large tables of data we risk running out of available stack space and thus "out of memory", or worse, putting tight limits on what our game can do. There is another way. Rather than using our data-segment and stack to store our data, we can ask to allocate memory in the heap during program execution (called dynamically allocating memory), and this allows us access to larger data structures, data structures that may have different sizes, and data structures that are not continuous blocks of memory. What makes this all possible? Pointers. As an example, the next program (too long to do in color) loads room descriptions into dynamically allocated memory. The room descriptions are stored in a file that is attached to this tutorial.
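Before the full room-loading listing, here is a minimal sketch of the malloc()/free() pattern it relies on. This example is not part of the original tutorial; it just shows heap allocation with sizeof(), the NULL check, and the matching free().

/* Minimal sketch of dynamic allocation -- not part of the original programs. */
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *numbers;
    int count = 7;
    int i;

    /* sizeof(int) is the size of one element, so this asks the heap for
       enough bytes to hold 'count' integers. */
    numbers = malloc(count * sizeof(int));
    if (numbers == NULL) {    /* malloc returns NULL when it fails */
        puts("Out of memory!");
        return 1;
    }
    for (i = 0; i < count; i++) {
        numbers[i] = i * 10;  /* the pointer is used exactly like an array */
    }
    printf("numbers[3] == %d\n", numbers[3]);
    free(numbers);            /* every malloc() needs a matching free() */
    return 0;
}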
//Rule #1 All rooms are entered in the order defined by the enum. //Rule #2 All room descriptions end with an <END> tag on a line by itself. //Rule #3 Each line should be no more than 80 characters wide //Rule #4 All line starting with a # will be ignored (allows you to // add comments to the file.) //Example: //# This line would be a comment //This is a description of the Forest! //<END> //# Yet another comment //This is a description of the Entrance! //<END> //etc. //<END> //There are FAR better file formats. This one has many failings // but it will work for an example. #include <stdio.h> #include <stdlib.h> #include <string.h> //Enum of our rooms. enum Room_Names { FOREST=0, ENTRANCE, //=1 PARLOR, //=2 KITCHEN, //=3 MAINHALL, //=4 GRANDSTAIRCASE, //=5 HIDDENPASSAGE1, //=6 NUM_OF_1ST_FLOOR_ROOMS, //=7 BEDROOM1 = NUM_OF_1ST_FLOOR_ROOMS, //=7 BEDROOM2, //=8 GRANDCLOSET, //=9 ALCHEMYLAB, //=10 NUMBER_OF_ROOMS, //=11 NUM_OF_2ND_FLOOR_ROOMS = NUMBER_OF_ROOMS - NUM_OF_1ST_FLOOR_ROOMS //= 11- 7 = 4 }; //Names of all of the rooms based upon the order of enumeration. const char Room_Names[][20] = {"Forest", "Enterance", "Parlor", "Kitchen", "Main Hall", "Grand Staircase", "North Passage","Master Bedroom", "Guest Bedroom", "Grand Closet", "Alchemy Lab" }; //Utility function to get input. char Get_Char(); int main() { //The variable we will declare is an array of pointers (that is right, // RoomDescriptions is an array, of pointers (to arrays)). char *roomDescriptions[NUMBER_OF_ROOMS]; // This is no different that delaring char RoomDescriptions[NUMBER_OF_ROOMS][1024] // Except that in this version the memory to store the information is unknown. // We will be able to use this variable just as we did before. //Since we have no idea how long the room descriptions may be, we will need to create // a temporary buffer to hold the data as it is loaded from the file. char *tempInputBuffer; int inputLength; //Will tell us how long our description is. int currentRoom; //Let us know which room we are working on. //Next we need a buffer to get each line. // Since computer screens have a maximum length of 80 chars // our text file should contain no more than about 80 or so // characters per line. We need to add an extra byte for the // zero to terminate the string. And 2 for EOL markers. char lineBuffer[83]; char tempChar; //This will hold a character so we can shorten lineBuffer to 5 chars. //---- These variables are used in the "display descriptions" routine char cInput; //Used to get input from the user int iRoomNumber=0; //Used to get the room number the user wants to see //Next we need to make a pointer to a file stream. FILE *fp; //A pointer to a file buffer. //How big should this buffer be? Well The room descriptions should be short, // the average word length in English is 5 letters (also used to determine // typing rate). Each line can be up to 80 bytes, about 16 words and assuming // an 80x25 screen, we don't want the user to have to look past about 12 lines. // so 12 * 80 = 960, bump that up to 1024 (1k) and we have about 200 words/description maximum. tempInputBuffer = malloc(1024); //return a NULL (0) if it fails. 
//We should check to make sure that the memory did get allocated //If the memory is not allocated then tempInputBuffer == NULL (which equates to a false in C) if (tempInputBuffer) { fp = fopen("rooms.txt", "r"); //Next we can see if the file was opened, if it was not // opened then fp is a NULL pointer (NULL Pointers point to 0) if (fp != NULL) //same as (fp != 0) same as (fp==true) same as (fp) { //Our file was opened successfully! puts("File open: Reading in descriptions...\n"); currentRoom=0; //This tells us which RoomDescriptions[] we are working on tempInputBuffer[0]= 0; //This sets tempInputBuffer = "" inputLength = 0; //Get inputs until either EOF or we have enough rooms... while (fgets(lineBuffer, 82, fp) != NULL && currentRoom < NUMBER_OF_ROOMS) { //We read a line in from the file. // Most lines will end in a '\n' char, but the very last line of the file will not // as it was terminated with a EOF marker. Originally I used strcmp(lineBuffer,"<END>\n")==0 // but this a little bug in it at the EOF. Not a big deal, just remember to press enter // after finishing the last description. BUT, what if we get lazy? // The next two lines ensure that we find just "<END>". tempChar=lineBuffer[5]; lineBuffer[5] = 0; //This will set lineBuffer = "?????" so we can do our compairison if (strcmp(lineBuffer,"<END>")==0) { printf("Done reading room #%d: %s\n", currentRoom,Room_Names[currentRoom]); inputLength = strlen(tempInputBuffer); roomDescriptions[currentRoom] = malloc(inputLength + 1); if (roomDescriptions[currentRoom]) { roomDescriptions[currentRoom][0] = 0; strcat(roomDescriptions[currentRoom], tempInputBuffer); } else { //We should never see this code run. But in case it does lets make sure // we do everything correctly. printf ("ERROR: Memory Allocation Failure at room #%d: %s\n", currentRoom, Room_Names[currentRoom]); //We need to ensure we free any memory that we allocated. puts("\n!!!Freeing all allocated memory!!!"); puts("*Freeing input buffer."); free(tempInputBuffer); puts("*Freeing room descriptions:\n--------------------------"); while (currentRoom) { currentRoom--; printf("*Freeing room discription #%d: %s\n", currentRoom, Room_Names[currentRoom]); free(roomDescriptions[currentRoom]); } puts("*Closing input file"); fclose(fp); puts("*** EXITING WITH ERROR CODE -1 ***"); exit(-1); } currentRoom++; tempInputBuffer[0]=0; //reset to null string "" inputLength = 0; } else if (lineBuffer[0]!='#') { //First restore missing char. lineBuffer[5] = tempChar; //Check the length. tempInputBuffer can only hold 1024 bytes (and this must include the 0) inputLength += strlen(lineBuffer); if (inputLength < 1023) { strcat(tempInputBuffer, lineBuffer); } //else this line of input is ignored. } else { //First restore missing char. lineBuffer[5] = tempChar; //Print out the comment line. printf("%s",lineBuffer); } } //We have our inputs, lets close the file. puts("Closing input file"); fclose(fp); printf("\nProgram read %d descriptions\n", currentRoom); //---- USER REVIEW ROUTINE ----- // here we will let the user review the rooms to ensure they are loaded correctly. do { do { puts("Would you like to view the description of a room? (Y/N)"); cInput=Get_Char(); cInput=toupper(cInput); //Lets only deal with upper case. } while(cInput!='Y' && cInput!='N'); if (cInput=='Y') { //User said they wanted to see, so let us print a menu... 
do { int i; for (i=0; i<currentRoom; ++i) { printf("%d) %s\n", i, Room_Names[i]); } puts("Enter number to see:"); scanf("%d",&iRoomNumber); //We must ensure that the user enters a valid menu item... } while (iRoomNumber <0 || iRoomNumber >= currentRoom); printf("\nRoom %d: %s\n%s\n", iRoomNumber,Room_Names[iRoomNumber], roomDescriptions[iRoomNumber]); } //Continue to ask the user until they get the answer right! } while (cInput != 'N'); //We must ALWAYS free the memory that we allocate. This little routine will do that. puts("\n!!!Freeing all allocated memory!!!"); puts("Freeing input buffer."); free(tempInputBuffer); //free our buffer... puts("Freeing room descriptions:\n--------------------------"); while (currentRoom) { currentRoom--; printf("*Freeing room description #%d: %s\n", currentRoom, Room_Names[currentRoom]); free(roomDescriptions[currentRoom]); } } else //Goes with if (fp != NULL) { puts("ERROR: Could not open file!"); } } else //Goes with if (tempInputBuffer) { puts("Could not allocate memory for a an input buffer!!!"); } return 0; } //Utility function to eat '\n' characters from getchar() char Get_Char() { char cIn; //Will ignore any char less then ESC (most non printable ones). while((cIn=getchar())<27); return cIn; } As useful as dynamic memory is, pointers can do something even more useful: They allow us to pass data by reference to functions. Rather than passing an entire array back and forth on the stack (which might use all the stack space quickly), I can pass my function a pointer to the array then it can read and manipulate that array. By using pointers as function parameters I can make functions that, manipulate arrays, return large amounts of data, return multiple values. ---------- BEGIN NameGen.C ---------- #include <stdlib.h> #include <stdio.h> #include <time.h> #include <string.h> //Needed for strcat() //Here I will create an array of prefixes to help generate names. // I am banking on multiplication to ensure a large number of names // by using 7 prefixes and 20 stems, and 16 suffixes I should be able to // create about 7 * 20 * 16 = 2240 names out of 312 bytes of data (In my earlier // example from the forum I used this code to generate male and female names, // but here I combined them). char NamePrefix[][5] = { "", //who said we need to add a prefix? "bel", //lets say that means "the good" "nar", //"The not so good as Bel" "xan", //"The evil" "bell", //"the good" "natr", //"the neutral/natral" "ev", //Man am I original }; char NameSuffix[][5] = { "", "us", "ix", "ox", "ith", "ath", "um", "ator", "or", "axia", "imus", "ais", "itur", "orex", "o", "y" }; const char NameStems[][10] = { "adur", "aes", "anim", "apoll", "imac", "educ", "equis", "extr", "guius", "hann", "equi", "amora", "hum", "iace", "ille", "inept", "iuv", "obe", "ocul", "orbis" }; //Declare the function up here so that we can use it // note that it does not return a value, rather it // edits the character passed to it by reference. void NameGen(char *PlayerName); char get_Char(); //little utility function to make getting char input easer... int main() { char Player1Name[21]; //Used to hold our character's name char cIn; //Used to get user answers to prompts. 
do { NameGen(Player1Name); printf("Generated Name: %s\n\n", Player1Name); puts("Would you like to generate another (Y,N): "); cIn = get_Char(); } while (cIn != 'n' && cIn != 'N'); return 0; } //Utility function for input char get_Char() { char cIn; while((cIn = getchar())<27); //ignore anything less than ESC return cIn; } //The return type is void because we use a pointer to the array holding // the characters of the name. void NameGen(char* PlayerName) { srand((long)time(NULL)); //Seed the random number generator... PlayerName[0]=0; //initialize the string to "" (zero length string). //add the prefix... strcat(PlayerName, NamePrefix[(rand() % 7)]); //add the stem... strcat(PlayerName, NameStems[(rand() % 20)]); //add the suffix... strcat(PlayerName, NameSuffix[(rand() % 16)]); //Make the first letter capital... PlayerName[0]=toupper(PlayerName[0]); return; } ---------- END NameGen.C ---------- I have to admit that some of the names sound like exotic psychological disorders (Natraduraxia, Xanineptaxia), or new drug names (Xanoculix, Narguiusum), or painful surgical procedures (Beloculaxia, Narapollus). In fact the addition of the suffix was probably a bad idea, as my tests turned up only a few choices acceptable for character names; however, the process is a lot of fun. By choosing different prefixes, stems, and suffixes you can change the feel/sound of the names generated. These names are loosely based on Latin stems and endings, which results in very scientific-sounding names. There is just one last thing I want to mention about pointers before I move along. What happens if the pointer points to address 0? A pointer that points to 0 is considered a NULL pointer. Very often this is used to mean “unassigned” or “unused”, but you should never ASSUME that a pointer is NULL just because it is unused/uninitialized, as the compiler will not initialize the pointer to NULL for you. Pointers are probably the one thing that sets C/C++ vastly apart from most other languages. Many languages share the same syntax, and some even feel very much like C/C++, but it is pointers that give C/C++ their low-level feel and untamed power. If you wish to master C/C++ you will need to master pointers. Next Section: Structures. References and Additional Information: Pointers [1] Banahan, Mike. Brady, Declan. Doran, Mark. The C Book, second edition, Ch 5. [2] Oualline, Steve. Practical C Programming, 3rd Edition, Ch 13. [3] Jensen, Ted. A Tutorial on Pointers and Arrays in C. [4] Hosey, Peter. Everything You Need to Know About Pointers in C. [5] Wikipedia. Pointer (computing)
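A quick recap example (my own addition, not part of NickDMax's original tutorial): the short program below pulls together the three ideas this section leans on most -- the address-of operator (&), passing a variable by reference through a pointer parameter, and using sizeof() to work out how many elements an array holds. All names in it (double_in_place, sum, table, score) are invented for illustration.

---------- PointerRecap.C: Added Example ----------
#include <stdio.h>

//Doubles the value it points at. The caller's variable changes because
//we receive its ADDRESS, not a copy of its value.
void double_in_place(int *value)
{
    *value *= 2;
}

//Sums an array through a pointer. The element count must be passed in,
//because inside this function the parameter is just a pointer.
int sum(const int *data, size_t count)
{
    size_t i;
    int total = 0;
    for (i = 0; i < count; i++)
    {
        total += *(data + i);   //same as data[i]
    }
    return total;
}

int main()
{
    int score = 21;
    int table[] = {0, 10, 20, 30, 40, 50, 60};

    double_in_place(&score);              //pass the ADDRESS of score
    printf("score is now %d\n", score);   //prints 42

    //sizeof(table) is the whole array in bytes; dividing by the size of
    //one element gives the element count (7 here).
    printf("sum is %d\n", sum(table, sizeof(table) / sizeof(table[0])));
    return 0;
}
---------- END PointerRecap.C ----------

Note that the sizeof(table)/sizeof(table[0]) trick only works where the array itself is in scope; inside sum() the array has decayed to a plain pointer, which is exactly why the count is passed as a separate parameter.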
http://www.dreamincode.net/forums/topic/27024-data-modeling-for-games-in-c-part-ii/page__pid__219953__st__0
CC-MAIN-2016-07
en
refinedweb
Three Little Hive UDFs: Part 3 By dan.mcclary on Apr 07, 2013 Introduction In the final installment in our series on Hive UDFs, we're going to tackle the least intuitive of the three types: the User Defined Aggregating Function. While they're challenging to implement, UDAFs are necessary if we want functions for which the distinction of map-side v. reduce-side operations are opaque to the user. If a user is writing a query, most would prefer to focus on the data they're trying to compute, not which part of the plan is running a given function. The UDAF also provides a valuable opportunity to consider some of the nuances of distributed programming and parallel database operations. Since each task in a MapReduce job operates in a bit of a vacuum (e.g. Map task A does not know what data Map task B has), a UDAF has to explicitly account for more operational states than a simple UDF. We'll return to the notion of a simple Moving Average function, but ask yourself: how do we compute a moving average if we don't have state or order around the data? As before, the code is available on github, but we'll excerpt the important parts here. Prefix Sum: Moving Average without State In order to compute a moving average without state, we're going to need a specialized parallel algorithm. For moving average, the "trick" is to use a prefix sum, effectively keeping a table of running totals for quick computation (and recomputation) of our moving average. A full discussion of prefix sums for moving averages is beyond length of a blog post, but John Jenq provides an excellent discussion of the technique as applied to CUDA implementations. What we'll cover here is the necessary implementation of a pair of classes to store and operate on our prefix sum entry within the UDAF. public class PrefixSumMovingAverage {static class PrefixSumEntry implements Comparable{int period;double value;double prefixSum;double subsequenceTotal;double movingAverage;public int compareTo(Object other){PrefixSumEntry o = (PrefixSumEntry)other;if (period < o.period)return -1;if (period > o.period)return 1;return 0;} } Here we have the definition of our moving average class and the static inner class which serves as an entry in our table. What's important here are some of the variables we define for each entry in the table: the time-index or period of the value (its order), the value itself, the prefix sum, the subsequence total, and the moving average itself. Every entry in our table requires not just the current value to compute the moving average, but also sum of entries in our moving average window. It's the pair of these two values which allows prefix sum methods to work their magic. //class variablesprivate int windowSize;private ArrayList<PrefixSumEntry> entries;public PrefixSumMovingAverage(){windowSize = 0;}public void reset(){windowSize = 0;entries = null;}public boolean isReady(){return (windowSize > 0);} The above are simple initialization routines: a constructor, a method to reset the table, and a boolean method on whether or not the object has a prefix sum table on which to operate. From here, there are 3 important methods to examine: add, merge, and serialize. The first is intuitive, as we scan rows in Hive we want to add them to our prefix sum table. The second are important because of partial aggregation. We cannot say ahead of time where this UDAF will run, and partial aggregation may be required. 
That is, it's entirely possible that some values may run through the UDAF during a map task, but then be passed to a reduce task to be combined with other values. The serialize method will allow Hive to pass the partial results from the map side to the reduce side. The merge method allows reducers to combine the results of partial aggregations from the map tasks. @SuppressWarnings("unchecked")public void add(int period, double v){//Add a new entry to the list and update tablePrefixSumEntry e = new PrefixSumEntry();e.period = period;e.value = v;entries.add(e);// do we need to ensure this is sorted?//if (needsSorting(entries))Collections.sort(entries);// update the table// prefixSums firstdouble prefixSum = 0;for(int i = 0; i < entries.size(); i++){PrefixSumEntry thisEntry = entries.get(i);prefixSum += thisEntry.value;thisEntry.prefixSum = prefixSum;entries.set(i, thisEntry);} The first part of the add task is simple: we add the element to the list and update our table's prefix sums. // now do the subsequence totals and moving averagesfor(int i = 0; i < entries.size(); i++){double subsequenceTotal;double movingAverage;PrefixSumEntry thisEntry = entries.get(i);PrefixSumEntry backEntry = null;if (i >= windowSize)backEntry = entries.get(i-windowSize);if (backEntry != null){subsequenceTotal = thisEntry.prefixSum - backEntry.prefixSum;}else{subsequenceTotal = thisEntry.prefixSum;}movingAverage = subsequenceTotal/(double)windowSize;thisEntry.subsequenceTotal = subsequenceTotal;thisEntry.movingAverage = movingAverage;entries.set(i, thisEntry); } In the second half of the add function, we compute our moving averages based on the prefix sums. It's here you can see the hinge on which the algorithm swings: thisEntry.prefixSum - backEntry.prefixSum -- that offset between the current table entry and it's nth predecessor makes the whole thing work. public ArrayList<DoubleWritable> serialize(){ArrayList<DoubleWritable> result = new ArrayList<DoubleWritable>();result.add(new DoubleWritable(windowSize));if (entries != null){for (PrefixSumEntry i : entries){result.add(new DoubleWritable(i.period));result.add(new DoubleWritable(i.value));}}return result; } The serialize method needs to package the results of our algorithm to pass to another instance of the same algorithm, and it needs to do so in a type that Hadoop can serialize. In the case of a method like sum, this would be relatively simple: we would only need to pass the sum up to this point. However, because we cannot be certain whether this instance of our algorithm has seen all the values, or seen them in the correct order, we actually need to serialize the whole table. To do this, we create a list of DoubleWritables, pack the window size at its head, and then each period and value. This gives us a structure that's easy to unpack and merge with other lists with the same construction. @SuppressWarnings("unchecked")public void merge(List<DoubleWritable> other){if (other == null)return;// if this is an empty buffer, just copy in other// but deserialize the listif (windowSize == 0){windowSize = (int)other.get(0).get();entries = new ArrayList<PrefixSumEntry>();// we're serialized as period, value, period, valuefor (int i = 1; i < other.size(); i+=2){PrefixSumEntry e = new PrefixSumEntry();e.period = (int)other.get(i).get();e.value = other.get(i+1).get();entries.add(e);} } Merging results is perhaps the most complicated thing we need to handle. First, we check the case in which there was no partial result passed -- just return and continue. 
Second, we check to see if this instance of PrefixSumMovingAverage already has a table. If it doesn't, we can simply unpack the serialized result and treat it as our window. // if we already have a buffer, we need to add these entrieselse{// we're serialized as period, value, period, valuefor (int i = 1; i < other.size(); i+=2){PrefixSumEntry e = new PrefixSumEntry();e.period = (int)other.get(i).get();e.value = other.get(i+1).get();entries.add(e);} } The third case is the non-trivial one: if this instance has a table and receives a serialized table, we must merge them together. Consider a Reduce task: as it receives outputs from multiple Map tasks, it needs to merge all of them together to form a larger table. Thus, merge will be called many times to add these results and reassemble a larger time series. // sort and recomputeCollections.sort(entries);// update the table// prefixSums firstdouble prefixSum = 0;for(int i = 0; i < entries.size(); i++){PrefixSumEntry thisEntry = entries.get(i);prefixSum += thisEntry.value;thisEntry.prefixSum = prefixSum;entries.set(i, thisEntry); } This part should look familiar, it's just like the add method. Now that we have new entries in our table, we need to sort by period and recompute the moving averages. In fact, the rest of the merge method is exactly like the add method, so we might consider putting sorting and recomputing in a separate method. Orchestrating Partial Aggregation We've got a clever little algorithm for computing moving average in parallel, but Hive can't do anything with it unless we create a UDAF that understands how to use our algorithm. At this point, we need to start writing some real UDAF code. As before, we extend a generic class, in this case GenericUDAFEvaluator. public static class GenericUDAFMovingAverageEvaluator extends GenericUDAFEvaluator {// input inspectors for PARTIAL1 and COMPLETEprivate PrimitiveObjectInspector periodOI;private PrimitiveObjectInspector inputOI;private PrimitiveObjectInspector windowSizeOI;// input inspectors for PARTIAL2 and FINAL// list for MAs and one for residualsprivate StandardListObjectInspector loi; As in the case of a UDTF, we create ObjectInspectors to handle type checking. However, notice that we have inspectors for different states: PARTIAL1, PARTIAL2, COMPLETE, and FINAL. These correspond to the different states in which our UDAF may be executing. Since our serialized prefix sum table isn't the same input type as the values our add method takes, we need different type checking for each. @Overridepublic ObjectInspector init(Mode m, ObjectInspector[] parameters) throws HiveException {super.init(m, parameters);// initialize input inspectorsif (m == Mode.PARTIAL1 || m == Mode.COMPLETE){assert(parameters.length == 3);periodOI = (PrimitiveObjectInspector) parameters[0];inputOI = (PrimitiveObjectInspector) parameters[1];windowSizeOI = (PrimitiveObjectInspector) parameters[2];} Here's the beginning of our overrided initialization function. We check the parameters for two modes, PARTIAL1 and COMPLETE. Here we assume that the arguments to our UDAF are the same as the user passes in a query: the period, the input, and the size of the window. If the UDAF instance is consuming the results of our partial aggregation, we need a different ObjectInspector. Specifically, this one: else{loi = (StandardListObjectInspector) parameters[0]; } Similar to the UDTF, we also need type checking on the output types -- but for both partial and full aggregation. 
In the case of partial aggregation, we're returning lists of DoubleWritables: // init output object inspectorsif (m == Mode.PARTIAL1 || m == Mode.PARTIAL2) {// The output of a partial aggregation is a list of doubles representing the// moving average being constructed.// the first element in the list will be the window size//return ObjectInspectorFactory.getStandardListObjectInspector(PrimitiveObjectInspectorFactory.writableDoubleObjectInspector); } But in the case of FINAL or COMPLETE, we're dealing with the types that will be returned to the Hive user, so we need to return a different output. We're going to return a list of structs that contain the period, moving average, and residuals (since they're cheap to compute). else {// The output of FINAL and COMPLETE is a full aggregation, which is a// list of DoubleWritable structs that represent the final histogram as// (x,y) pairs of bin centers and heights.ArrayList<ObjectInspector> foi = new ArrayList<ObjectInspector>();foi.add(PrimitiveObjectInspectorFactory.writableDoubleObjectInspector);foi.add(PrimitiveObjectInspectorFactory.writableDoubleObjectInspector);foi.add(PrimitiveObjectInspectorFactory.writableDoubleObjectInspector);ArrayList<String> fname = new ArrayList<String>();fname.add("period");fname.add("moving_average");fname.add("residual");return ObjectInspectorFactory.getStandardListObjectInspector(ObjectInspectorFactory.getStandardStructObjectInspector(fname, foi) ); } Next come methods to control what happens when a Map or Reduce task is finished with its data. In the case of partial aggregation, we need to serialize the data. In the case of full aggregation, we need to package the result for Hive users. @Overridepublic Object terminatePartial(AggregationBuffer agg) throws HiveException {// return an ArrayList where the first parameter is the window sizeMaAgg myagg = (MaAgg) agg;return myagg.prefixSum.serialize();}@Overridepublic Object terminate(AggregationBuffer agg) throws HiveException {// final return value goes hereMaAgg myagg = (MaAgg) agg;if (myagg.prefixSum.tableSize() < 1){return null;}else{ArrayList<DoubleWritable[]> result = new ArrayList<DoubleWritable[]>();for (int i = 0; i < myagg.prefixSum.tableSize(); i++){double residual = myagg.prefixSum.getEntry(i).value - myagg.prefixSum.getEntry(i).movingAverage;DoubleWritable[] entry = new DoubleWritable[3];entry[0] = new DoubleWritable(myagg.prefixSum.getEntry(i).period);entry[1] = new DoubleWritable(myagg.prefixSum.getEntry(i).movingAverage);entry[2] = new DoubleWritable(residual);result.add(entry);}return result;} } We also need to provide instruction on how Hive should merge the results of partial aggregation. Fortunately, we already handled this in our PrefixSumMovingAverage class, so we can just call that. @SuppressWarnings("unchecked")@Overridepublic void merge(AggregationBuffer agg, Object partial) throws HiveException {// if we're merging two separate sets we're creating one table that's doubly longif (partial != null){MaAgg myagg = (MaAgg) agg;List<DoubleWritable> partialMovingAverage = (List<DoubleWritable>) loi.getList(partial);myagg.prefixSum.merge(partialMovingAverage);} } Of course, merging and serializing isn't very useful unless the UDAF has logic for iterating over values. The iterate method handles this and -- as one would expect -- relies entirely on the PrefixSumMovingAverage class we created. 
@Overridepublic void iterate(AggregationBuffer agg, Object[] parameters) throws HiveException {assert (parameters.length == 3);if (parameters[0] == null || parameters[1] == null || parameters[2] == null){return;}MaAgg myagg = (MaAgg) agg;// Parse out the window size just once if we haven't done so before. We need a window of at least 1,// otherwise there's no window.if (!myagg.prefixSum.isReady()){int windowSize = PrimitiveObjectInspectorUtils.getInt(parameters[2], windowSizeOI);if (windowSize < 1){throw new HiveException(getClass().getSimpleName() + " needs a window size >= 1");}myagg.prefixSum.allocate(windowSize);}//Add the current data point and compute the averageint p = PrimitiveObjectInspectorUtils.getInt(parameters[0], inputOI);double v = PrimitiveObjectInspectorUtils.getDouble(parameters[1], inputOI);myagg.prefixSum.add(p,v); } Aggregation Buffers: Connecting Algorithms with Execution One might notice that the code for our UDAF references an object of type AggregationBuffer quite a lot. This is because the AggregationBuffer is the interface which allows us to connect our custom PrefixSumMovingAverage class to Hive's execution framework. While it doesn't constitute a great deal of code, it's glue that binds our logic to Hive's execution framework. We implement it as such: // Aggregation buffer definition and manipulation methodsstatic class MaAgg implements AggregationBuffer {PrefixSumMovingAverage prefixSum;};@Overridepublic AggregationBuffer getNewAggregationBuffer() throws HiveException {MaAgg result = new MaAgg();reset(result);return result; } Using the UDAF The goal of a good UDAF is that, no matter how complicated it was for us to implement, it's that it be simple for our users. For all that code and parallel thinking, usage of the UDAF is very straightforward: ADD JAR /mnt/shared/hive_udfs/dist/lib/moving_average_udf.jar;CREATE TEMPORARY FUNCTION moving_avg AS 'com.oracle.hadoop.hive.ql.udf.generic.GenericUDAFMovingAverage';#get the moving average for a single tail numberSELECT TailNum,moving_avg(timestring, delay, 4) FROM ts_example WHERE TailNum='N967CA' GROUP BY TailNum LIMIT 100; Here we're applying the UDAF to get the moving average of arrival delay from a particular flight. It's a really simple query for all that work we did underneath. We can do a bit more and leverage Hive's abilities to handle complex types as columns, here's a query which creates a table of timeseries as arrays. #create a set of moving averages for every plane starting with N#Note: this UDAF blows up unpleasantly in heap; there will be data volumes for which you need to throw#excessive amounts of memory at the problemCREATE TABLE moving_averages ASSELECT TailNum, moving_avg(timestring, delay, 4) as timeseries FROM ts_example WHERE TailNum LIKE 'N%' GROUP BY TailNum; Summary We've covered all manner of UDFs: from simple class extensions which can be written very easily, to very complicated UDAFs which require us to think about distributed execution and plan orchestration done by query engines. With any luck, the discussion has provided you with the confidence to go out and implement your own UDFs -- or at least pay some attention to the complexities of the ones in use every day.
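To see the prefix-sum trick on its own, stripped of the Hive and Hadoop plumbing, here is a small stand-alone sketch in plain C (my own illustration -- it is not taken from the post or from the accompanying github code, and the array values are made up). It mirrors the logic of the add/merge methods above: one pass builds running totals, and then every window sum is just the difference of two prefix sums; partial windows at the start are divided by the full window size, as in the post.

#include <stdio.h>

#define N 8          //number of observations, assumed already sorted by period
#define WINDOW 3     //moving-average window size

int main()
{
    double value[N] = {4, 7, 2, 9, 5, 6, 8, 3};
    double prefix[N];
    double running = 0.0;
    double window_total;
    int i;

    //Pass 1: running totals (the prefix sums).
    for (i = 0; i < N; i++)
    {
        running += value[i];
        prefix[i] = running;
    }

    //Pass 2: the sum of any window is the difference of two prefix sums,
    //so each moving average costs O(1) to compute or recompute.
    for (i = 0; i < N; i++)
    {
        if (i >= WINDOW)
            window_total = prefix[i] - prefix[i - WINDOW];
        else
            window_total = prefix[i];   //partial window at the start
        printf("period %d: value %.1f, moving average %.2f\n",
               i, value[i], window_total / WINDOW);
    }
    return 0;
}

This is why the UDAF serializes the whole table rather than a single running value: once entries are merged and re-sorted, a reducer can rebuild the prefix sums in one pass and recompute every window as a pair of subtractions, without replaying the series value by value.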
https://blogs.oracle.com/datawarehousing/entry/three_little_hive_udfs_part2
CC-MAIN-2016-07
en
refinedweb
Re: [PBML] basic question

On Oct 27, nt557 said:
> What this code do ?
>
> $tag::entry = 0;
>
> there was NO $tag before in this code, this is the first usage.

There's still no $tag. ;) When a variable name has a "::" in it, everything before the LAST "::" is the variable's package -- the namespace it resides in. $CGI::POST_MAX is the $POST_MAX variable in the CGI namespace, and $Data::Dumper::Indent is the $Indent variable in the Data::Dumper namespace.

> what does this mean ?
>
> $::group = 2;

That's a special case of the package::variable syntax; it means $group in the main namespace. It's the same as $main::group. See perldoc perldata for more details.

--
https://groups.yahoo.com/neo/groups/perl-beginner/conversations/topics/22113?o=1&d=-1
CC-MAIN-2016-07
en
refinedweb
You can use this module with the following in your ~/.xmonad/xmonad.hs: import XMonad.Layout.LayoutBuilder Then edit your layoutHook by adding something like: myLayouts = ( 0) $ (layoutR 0.1 0.5 (relBox (2/3) 0 1 1) Nothing $ Tall 0 0 0) $ (layoutAll (relBox 0 0 (1/3) 1) $ Tall 0 0Layouts }: XMonad.Doc.Extending You may wish to add the following keybindings: , ((modMask x .|. shiftMask, xK_h ), sendMessage $ IncLayoutN (-1)) , ((modMask x .|. shiftMask, xK_l ), sendMessage $ IncLayoutN 1) For detailed instruction on editing the key binding see: XMonad.Doc.Extending.
http://hackage.haskell.org/package/xmonad-contrib-bluetilebranch-0.8.1/docs/XMonad-Layout-LayoutBuilder.html
CC-MAIN-2016-07
en
refinedweb
WinRTError Object (JavaScript) When a Windows Runtime call returns an HRESULT that indicates a failure, JavaScript converts it to a special Windows Runtime error. It is available only in Windows 8.x Store apps, when the Windows Runtime is available, as part of the global JavaScript namespace. The following example shows how a WinRTError is thrown and caught. The WinRTError object has the same properties as the Error Object (JavaScript) object. Requirements The WinRTError object is supported only in Windows 8.x Store apps, not in Internet Explorer.
https://msdn.microsoft.com/en-us/library/hh699852(v=vs.94).aspx
CC-MAIN-2016-07
en
refinedweb
In order to run OrdDoc methods, you will need to include the following import statements in your Java file: import oracle.ord.im.OrdMediaUtil; import oracle.ord.im.OrdDoc; You may also need to import classes from the following Java packages: java.io. java.sql. oracle.jdbc. Before running OrdDoc methods, the following operations must have already been performed: A connection has been made to a table that contains a column of type OrdDoc. A local OrdDoc object has been created and populated with data. For examples of making a connection and populating a local object, see Oracle interMedia User's Guide.
http://docs.oracle.com/cd/B12037_01/appdev.101/b10830/im_docref001.htm
CC-MAIN-2016-07
en
refinedweb
Physicianlife MAY/JUNE 2011 health...wealth...lifestyle... whistleblowing in the australian healthcare system Ageing in Doctors & Cognitive Decline When is it time to hang up the stethoscope? RRP $12.95 fukushima burning Anatomy of a Nuclear Disaster borrow up to 100% and buy the home you want, why wouldn’t you? At Investec,. Medical Finance Asset Finance • Commercial Property Finance • Deposit Facilities • Goodwill & Practice Purchase Loans • Home Loans Income Protection & Life Insurance • Professional Overdraft Experien Investec Experien Pty Limited ABN 94 110 704 464 (Investec Experien) is a subsidiary of Investec Bank (Australia) Limited ABN 55 071 292 594 AFSL 234975. All finance is subject to our credit assessment criteria. Terms and conditions, fees and charges apply. We reserve the right to cease offering these products at any time without notice. Physicianlife health...wealth...lifestyle... Highlights 14 20 28 58 Fukushima Burning Anatomy of a nuclear disaster State to Federal Cost-Shifting Billing Medicare for services provided in public hospitals – what’s right and what’s wrong? Ageing in Doctors and Cognitive Decline When is it time to hang up the stethoscope? Whistleblowing Ethics, professionalism and healthcare management Departments 10 Features 34 Business & Finance 56 Risk Management 61 Careers 64 Alpha: Technology & Reviews 66 Lifestyle 68 Travel contents 10 FEATURES Medicine – Still a Calling or Just a Job? How those of you who have lost the passion can re-kindle that old flame Fukushima Burning Anatomy of a nuclear disaster State to Federal Cost-Shifting Billing Medicare for services provided in public hospitals – what’s right and what’s wrong? 10 14 20 Sexual Dynamics at Work Blunt tool or sharp instrument 24 Ageing in Doctors and Cognitive Decline 28 24 When is it time to hang up the stethoscope? BUSINESS & FINANCE Self Managed Super For Young Wealth Accumulators Why it’s becoming so popular Who Wins While Art Loses? The Government’s assault on art is damaging the entire art industry Smart Investors Target Carbon Tax Opportunities How will the proposed carbon tax affect your investments? Risk Issues Unique to Senior Physicians When age matters 34 38 43 46 38 MAY/JUNE 2011 64 End of Year Tax-Planning Checklist Top 5 tax-saving tips for physicians 50 RISK MANAGEMENT The Rise and Rise of Natural Medicine 68 Dilution of the words ‘doctor’ and ‘medicine’ 56 Whistleblowing in The Australian Healthcare System 58 Ethics, professionalism and healthcare management CAREERS So You Want To Build Your First Practice? The things you need to know and do 61 ALPHA iPad 2 The doctor's preferred tablet 64 LIFESTYLE Wine Rules! The matching game 66 TRAVEL Having a Whale (shark) of a Time Off the Coast of WA 68 editor’s note W elcome to the May/June Edition of Physician Life. This edition marks a very special occasion – our one year celebration. So here is wishing Physician Life a very Happy Birthday! Over the last year we have presented you with six editions of Physician Life and we hope that you have enjoyed reading these as much as we have enjoyed putting them together. Over this time we have learned more about your likes and dislikes and have taken on your feedback for future articles. I am pleased to have marked our first birthday with the huge success of the Part 3 Course event that was run in Melbourne on 16th April 2011. 
A special thanks to our speakers and delegates who made this event possible and proved that the ‘Business of Medicine’ really is an area that requires further nurturing. The response from attendees and those who missed the event has reinforced the message that there is a general lack of information provided to doctors on the business and financial aspects of medical practice. Both our magazine and future events aim to fill this void. Regards, Physicianlife health...wealth...lifestyle... MAY/JUNE 2011 Selina Vasdev Editor [email protected] Contributing Sources Dr. Stephen Bolsin Dr. Tony Blinde Dr. Lisa Ferrier-Brown Dr. Richard Cavell Dr. Mark Colson Hilary Doling Gillian Hyde Dr. Peter Karamoskos Dr. Michael Levitt Dr. James Nguyen Dev Sharma The Physician Life magazine is published bi-monthly by Medical Life Publishing Pty Ltd. Physician Selina Vasdev Editor Advertising Joe Korac Cover Image: "Japan Nuclear Radiation Suits" by ssoosay Images licensed under a Creative Commons Attribution 2.0 Generic Licence. Phone: 02 9872 7708 Fax: 02 9872 1002 Mobile: 0414 487 199 Email: [email protected] CAB Member LETTERS TO THE EDITOR O ur Letters to the Editor section encourages you to submit your comments and suggestions to Physician Life. As always, I would like to hear more about your opinions on our articles as well as your thoughts on the subjects we cover. At the same time, I am always open to hearing your compliments and criticisms about how we handled subjects and where improvements can be made. Please send your comments to [email protected] marked letters to the editor. Dear Editor, I agree with Dr T Blinde’s contention that “adulation of (medical) administration” could be detrimental to the provision of health care and the morale of clinicians (“Altruism in medicine…is it declining? Physician’s Life March/April 2011). Lauding health care managers for delivering cost savings and improved productivity has not been rigorously evaluated. Administrators are likely to favour self-preservation or enhanced self-importance, being tempted to look anywhere but their own territory for cost savings and to not critically appraise their own meaningful contribution. Has burgeoning and ever more layered senior health management, unlike proven medical treatment, been subjected to the rigour of evidence - based scrutiny for clinical - and cost-effectiveness? There is no direct evidence that senior health management, the often self-perceived fount of organisational strategy and vision, confers a health benefit at all. A Medline search I conducted in March 2011 combining “hospital administration” and “cost-effectiveness” yielded a mere 43 abstracts; when limited to more valid study types (Clinical Trial, Meta - Analysis, Randomized Controlled Trial, Clinical Trial, Phase I, Clinical Trial, Phase II, Clinical Trial, Phase III, Clinical Trial, Phase IV, Comparative Study, Controlled Clinical Trial, Evaluation Studies, Multicentre Study, Validation Studies) only three remained: a cross-sectional study of the complementary health care centre governance, review of the economic aspects of implantable defibrillators and a cost-effectiveness analysis of immunoglobulin use. Has burgeoning and ever more layered senior health management, unlike proven medical treatment, been subjected to the rigour of evidence - based scrutiny for clinical - and costeffectiveness? Furthermore, a search of the Cochrane Library yielded no citations on the health effects of medical and health care administration. 
Indeed, it could be argued that scarce health care funds are being diverted from patient care to costly medical administration. Dr. J.T., QLD Please send your comments to [email protected] marked letters to the editor. Physicianlife 07 LETTERS TO THE EDITOR Dear Editor, I would like to pass comment on the article featured in your March/April Edition about research fraud. As a professor I found it of great interest and thank you for exposing this topic. It is a frightening, yet all too common occurrence. Clearly, those involved in misconduct either in the form of plagiarism; falsification and fabrication lack ethics and have no place in research or medicine. The concept of research fraud is cheating; it is misguiding our profession and misleading the public. It undermines the public’s trust in medical research and ultimately doctors. Although there is greater response from journals to detect scientific fraud, this is still a slow process and there are still plenty of cases that are likely to go undetected or even unreported. Dear Editor, Thank you for sending me bi-monthly copies of your magazine. The articles are well written and you seem to cover topics that are relevant to our profession. There needs to be a stricter code of good practice and a body of independents to investigate allegations. You seem to raise issues in a way that clearly fills a gap. For me personally, the ones relevant to starting out in private practice have been especially valuable. They have helped reinforce ideas and strategies for my business, but have also identified areas of complacency and development (marketing). Sincerely, For this I thank you. Prof. F.R., VIC Dr. R.F., VIC Dear Editor, It’s not difficult to see why some doctors are guilty of research fraud, particularly as the incentives for plagiarism are inadvertently enhanced when career advancement and financial rewards are linked with producing papers – often with the emphasis on speed and volume rather than quality. Best, Dr. S.T., WA Please send your comments to [email protected] marked letters to the editor. 08 Physicianlife Dear Editor, I'd like to pass on some positive feedback to you and your team about the magazine you are producing. Just like every other doctor, I receive a tonne of glossy marketing material dressed up as news and information claiming to be of value to me as a doctor as well as enhancing the efficiency of my practice. Initially I thought your magazine fell into the same category. But I have been very pleasantly surprised to find the articles are actually quite 'meaty' and there are many of them in each edition (rather than just a couple amongst pages of ads). The ones that most stand out are from the business section along with some of the hard hitting subjects you cover in the features. Keep up the good work. Dr. L.C., NSW Dear Selina, I would like to congratulate both you and your team on a very modern approach to imparting information which may be of interest to the medical community. For me, one article really stood out in the March/ April edition. This was Altruism in Medicine ... Is it Declining? Part 2: The Reasons Why by Dr Tony Blinde. I have to agree that there are now issues between and within all of the competing interests in the health sector. Unfortunately, it has led to administrators in one camp and doctors usually in another. Of course moving forward from this presents an ongoing challenge. 
I would have also added that strong leadership is now needed in both camps to try and now move the whole health agenda forward, firstly by re-establishing appropriate dialogue. LETTERS TO THE EDITOR Dear Editor, I write in relation to an incorrect advertisement that appeared in the January – February issue of Physician Life. This marred what was otherwise a very helpful series of articles in the magazine. On Page 19 there was an advertisement to raise funds for a young Palestinian child, Malak, with a congenital abnormality (very small external ears, and deaf), and who needs to come to Australia for surgery to create new ears and to provide a specific type of hearing aid, because “it cannot be done anywhere in the Middle East”. There is no doubting that the Palestinian healthcare system has extremely high demands on its limited resources and that children in particular are innocent victims. Whilst the aim of the advertisement was humanitarian, it should be noted that both the surgery and provision of these types of bone anchored hearing aids (BAHA) are available in the Middle East. There at least 7 hospitals in Israel which provide these services, including Hadassah & Shaare Zedek Hospitals (Jerusalem), Bnei Zion Hospital (Haifa), Sheba & Ichilov Hospitals (Tel Aviv), Soroka Hospital (Beer Sheva) and Schneider Hospital (Petach Tikva). Sincerely, Palestinian children do have access to some services in Israel, with the Peres Center for Peace handling such humanitarian aid requests, and, I am advised, usually paying full costs of the procedures and providing the equipment needed. The funds come from humanitarian sources in Italy, Switzerland, the Netherlands and the USA. The “Saving Children” campaign was begun in 2003 and has had > 6500 referrals since, the majority being for neurosurgery, cardiac, and orthopaedic surgery; plastic surgery is also performed. Details can be found at. org/, together with an application form for a physician to complete requesting assistance for children like Malak. Dr. Deborah J Verran., NSW Sincerely I do not know if Dr Tony Blinde plans to write about how to move forward from the current position that he described but I do hope that someone plans to. Dr. Bernie Tuch, NSW Please send your comments to [email protected] marked letters to the editor. Physicianlife 09 F E AT U R E S Medicine Still a Calling or Just a Job? How those of you who have lost the passion can re-kindle that old flame 10 Physicianlife F E AT U R E S Can you honestly say you love being a doctor? Or like many GP’s and specialists who were recently surveyed would you too express an admixture of feelings - a combination of enjoyment, satisfaction and frustration? All mingled in with a constant fear of being sued? Would you encourage your children to seek a medical career or push them to train in finance or some other field which looks to hold a better future financially and that is less likely to suffer government interference? Do you feel like you have lost your medical “mojo”? If so, how do you get it back in the midst of long hours, never ending demands and not to mention the perception that doctors are called to their profession for the needs of their patients first, and themselves, and their families second? Psychiatrist Dr Lisa Ferrier-Brown takes a personal look at the practice of medicine. 
The always pithy great grand daddy of physicians William Osler wrote: “The practice of medicine is an art, not a trade; a calling, not a business; a calling in which your heart will be exercised equally with your head.” Is this aphorism still true several hundred years ahead or has our evidence-based platform replaced the best of what we do? The changing medical landscape There is no doubt how much the practice of medicine has changed, even over the short period of the last ten to twenty years. Mid career doctors still remember and bore their younger colleagues with tales of an era when hospital patients stayed as long as there was a clinical need. The option of a “social” admission was still possible - particularly in the Repatriation Hospital and where it was understood that Christmas was a difficult and lonely time for old veterans. As an intern I can still remember when patients other than those suffering with the most extreme forms of psychosis and depression had a similar right to a hospital bed. Nostalgic longings have gone for days when paperwork was clinically and not bureaucratically generated and when negligence claims were rarely made, and even more rarely won, unlikely to best fit us for the remainder of our careers in medicine. A 60 something surgical friend of mine often tells a story of how even a noticeably drunk general surgeon practised for years without complaint by staff or patients. Thank goodness those days are gone. But if you are hankering for the post-war and pre-Medicare glory days of medicine in Australia it’s worth stopping to think about why we still do what we do and what makes it worthwhile. Although surveys of doctors’ work satisfaction usually confirm the intellectual stimulation of dealing with the human body and its ills, less often do we tell our colleagues about, or even stop to review for ourselves, those little triumphs against illness and injury that must surely make the practice of medicine the most emotionally gratifying of all the professions. In an era of evidence-based medicine, meta-analyses and the decline of case reports as a form of teaching and expression, it’s easy to lose the threads of what binds us together despite our differing sub-specialities. Sharing the “moment” with patients There are for all of us who work in clinically intense evnironments, heartening moments of making a difference to a patient’s suffering. More William Osler wrote: “The practice of medicine is an art, not a trade; a calling, not a business; a calling in which your heart will be exercised equally with your head.” Physicianlife 11 F E AT U R E S often it is an objectively small win over the tidal wave of ills which befall those we care for; sometimes it isn’t an actual “win” but just a consultation moment when we and our patients share a sense of real connection. Influential American psychologist Carl Rogers dominated mid 20th Century thinking about self actualisation, a process whereby we are fully present in the moment and which is associated with a heightened sense of connection with ourselves and the world. The concept still holds good today; those “moments” of either shared understanding with a patient or their family over efforts rewarded, or at least appreciated, should not be glossed over. As doctors we are sometimes more comfortable with our professional boundaries remaining in place than we are with allowing patients to thank us and share with them with their triumphs and losses. 
The intense emotions of these particular “moments” are easily submerged by avoiding eye contact, changing the subject and looking through the notes or at the computer. If we become distracted by the ‘busyness’ of the consultation or feel awkward accepting heartfelt thanks we miss out on being in the moment with our patients. We also miss If we become distracted by the busyness of the consultation or feel awkward accepting heartfelt thanks we miss out being in the moment with our patients. out on the satisfaction “elixir” we all need to help us bear the inevitable difficulties of medical life. As a psychiatrist I know all too well how confronting even positive emotions expressed by patients can be. Allowing our patients the opportunity to connect with us does not have to lead to a loss of boundaries. Rather, both at the time and later when the working day is over it’s worth stopping just to savour the daily “moments” that make medicine worthwhile. Mostly no one will ever know about these interactions and some of the time even our patients don’t know how good their outcome has been. As well as stopping to appreciate these moments at the time, some of them will be worth sharing in peer review and others might be worth putting pen to paper about. Regardless, their appreciation either in a consultation or privately, is a sure antidote for “burnout”, cynicism and the treadmill of keeping up with the ever growing demands of paperwork and protocol. All of us have our own stories to tell about why our chosen area of medicine is gratifying. Non-psychiatrists, including doctors and the lay population often view psychiatry as being a “depressing” pursuit. There is a stereotype that the conditions psychiatrists treat are “chronic” and incurable. The truth is very different. I am constantly amazed at how well even very disturbed or 12 Physicianlife F E AT U R E S mentally ill patients do over time; there is nothing more satisfying than to watch an individual’s recovery evolve and to sometimes be made redundant in the process! I never learnt in medical school, and I’m assuming you didn’t either, how much I would learn from my patients; seeing their bravery and determination to get better and how they cope with the challenges of being sick, and the ultimate challenge of facing death. All this leaves me feeling privileged to share in their triumphs and losses. By the process of sharing our patient’s experiences, within our professional boundaries, we have an enormous opportunity to undergo personal growth “on the job”. latter sounds familiar it’s also worth thinking about how to regain passion about what you do and to restart a process of personal growth. Teaching or mentoring a new wave of doctors, whether they are generation Y or beyond is a great way to start. Opportunities exist for those outside the academic paradigm to volunteer for student practice visits which are not too time intensive or to become a more personal mentor to a junior colleague. Each of us underestimates the amount of real, not just textbook knowledge, accumulated even a decade into our careers. Osler recognised this still fresh truth in another of his often quoted pieces of wisdom: stymie our efforts to get on with the job. 
Avoiding “stagnation” by mentoring “No bubble is so iridescent or floats longer than that blown by a successful teacher.” Despite the worry about suicidal patients and the increasing demands of paperwork Dr Ferrier-Brown still loves being a psychiatrist - “nothing beats the satisfaction of knowing you have made a positive difference to a patient’s life; it’s a privilege to share the gains after the hard times”. Another famous psychologist of the 1900’s, Erik Erikson developed a theory of life stages. The challenge of mid adult years (45 to 65) as conceptualised by Erikson was to evolve into a life phase of generativity and creativity versus the potential trap of “stagnation”. If the Although none of us know what the medical landscape will look like in another twenty or thirty years, each of us needs to respond to the changes with flexibility, particularly when the so-called improvements in the system seem to In times like this viewing the practice of medicine as more than a minefield to negotiate is increasingly a challenge but one which remains within our grasp. Seize the (medical) moment and savour it, either privately or with colleagues and pass on the wisdom of your experience to the next generation of doctors. They will need to learn the same lessons themselves if medicine is to remain a “calling” rather than just a job. Dr Lisa Ferrier-Brown, forensic and general psychiatrist. Physicianlife 13 F E AT U R E S 14 Physicianlife Fukushima Burning Anatomy of a nuclear disaster “the [Fukushima] disaster has enormous implications for nuclear power and confronts all of us with a major challenge. The worries of millions of people throughout the world about whether nuclear energy is safe must be taken seriously" - Yukiya Amano, IAEA Director General, 5th Review meeting on the Convention on Nuclear Safety 4th April, 2011 The Fukushima nuclear disaster ranks as the second worst nuclear reactor accident in history. It also ranks as the worst multiple reactor accident in the world. What happened? How did it happen? And what are the implications for the people of the Fukushima district and the surrounding areas most affected? Why is ionising radiation a public health hazard? What happened? On 11th March, 2011 at 2:46 p.m. local time an earthquake of magnitude 9.0 occurred at a depth of 32km in the Pacific Ocean 130km east of the industrial city of Sendai on Honshu. This earthquake is the most powerful experienced in Japan, and was followed by many aftershocks of considerable magnitude. The effects of the earthquake in Pacific coastal regions of northeast Japan were greatly exacerbated by the tsunami generated by the earthquake which hit the coast some minutes later at heights of 10 metres or more. The Fukushima I reactor complex comprising of six nuclear reactors was most severely affected. At the time, only three reactors were operating (1, 2 and 3), and the active core fuel rods of reactor 4 had been placed in the spent fuel pond in the ceiling of its building. All reactors at the complex stored up to seven times the amount of fuel rods in their cores with spent fuel ponds in the ceiling of each building with minimal containment structures to protect them. Physicianlife 15 As far as human health is concerned comparisons therefore between Chernobyl and Fukushima disasters are valid. The earthquake and subsequent tsunami interrupted the AC power to the primary and secondary cooling systems of the complex. 
The backup diesel generators failed as they were inundated by flooding, having been placed below the level of the sea wall. Backup batteries to power the pumps were eventually depleted. Subsequently, the four active reactors’ cores overheated and sustained partial core melts resulting in explosions which severely damaged the buildings. The spent fuel rods of reactors 3 and 4 were exposed to air also resulting in overheating and a fire in the spent fuel pond of reactor 4. Containment structures of reactors 2 and 3, designed to contain highly radioactive active fuel were also damaged. Reactor 3 is fuelled by MOX (mixed oxide fuel which is a blend of uranium and plutonium). As of early April, there were significant amounts of ongoing radioactive fallout. This was made worse by the large volumes of seawater needed in an attempt to externally cool the reactors and spent fuel ponds. This resulted in extensive offshore and local contamination including the groundwater, exacerbated by the rupture of reactor 2, secondary containment which continues to leak the damaged core contents into the plant precinct. case scenario and still remains poorly controlled. Any pronouncements as to the eventual conclusion of this disaster are therefore currently speculative. However, the currently known facts are troubling enough. The International Atomic Energy Agency (IAEA) uses a 7 point INES (International Nuclear Event Scale)1, 2 to categorise nuclear incidents (2-3) and accidents (4-7).3 The Chernobyl disaster ranked as a 7/7 accident. The Japanese nuclear regulator (Nuclear and Industrial Safety Agency) initially ranked the Fukushima disaster as a 5/7 accident (comparable to the Sellafield, UK reactor fire in 1957, and Three Mile Island USA core melt in 1979). However, the French nuclear regulator (ASN) and the US Nuclear Regulatory Commission subsequently classified it as a 6/7 accident, representing a “serious accident” resulting in “a significant release of radioactive material likely to require implementation of planned countermeasures.” On 12th April, NISA upgraded its classification of the disaster as a 7/7 (“major release of radioactive material with widespread health and environmental effects requiring implementation of planned and extended countermeasures”),3 and thus on a par with Chernobyl. What are the consequences? What happens when a nuclear reactor overheats? We still don’t know what the full consequences of this disaster are or what they will be. We do know, however, that the nuclear accident has tracked closer to the worst case than the best When nuclear cores overheat due to a lack of water coolant, they ultimately melt. Remaining water quickly turns to steam preventing replenishment of the 16 Physicianlife in the air, contaminating all vegetation, clothing and any other surfaces including water sources. Those that pose the greatest health threat are Cesium-137 (half-life 30 years) and Iodine-131 (halflife4 in that hemisphere. There is effectively an 'air. Utilising CTBT monitoring data, the Austrian Central Institute for Meteorology and Geodynamics calculated that in the first three days, the activity of I-131 emitted was 20% and Cesium-137 20-60% of the entire Chernobyl emissions of these isotopes. Although Chernobyl emitted vastly more fallout than Fukushima has to date, it was the I-131 and Cs-137 that accounted for most of the terrestrial human and environmental hazard, and these are the main Fukushima fallout components. 
Also, the Fukushima plant has around 1760 tonnes of fresh and used nuclear fuel […].

FIGURE 1: Main transfer pathways of radionuclides in the terrestrial environment (UNSCEAR 2011).

What are the health impacts of ionising radiation?

Ionising radiation […] 10-50 (or more) years (latency), although this can be as short as 5 years for leukaemia. Ionising radiation is classified as a Class 1 carcinogen by the International Agency for Research in Cancer (IARC) of the World Health Organisation (WHO), the highest classification, consistent with certainty of its carcinogenicity.

Two types of IR health effects are recognised. […] blood production begins to be impaired.5 Deterministic effects which exceed around 1000mSv induce acute radiation sickness with vomiting, diarrhoea, […] the dose. The current risk coefficients for the development of cancer are approximately 8% per 1000 mSv (ie a 1:12 chance) and 5% for cancer fatality (1:20). The US National Academy of Sciences reviewed the effects of low level ionising radiation (defined as less than 100mSv) in their seminal report and concluded that:

“… there is a linear dose-response relationship between exposure to ionizing radiation and the development of solid cancers in humans. It is unlikely that there is a threshold below which cancers are not induced.” – US National Academy of Science, BEIR VII report, 2006

Emergency workers at the plant are likely to have developed deterministic effects, as their upper allowable occupational doses have been increased to 250 mSv (from the 100mSv total dose over five years allowable, and the 1mSv per annum allowable dose to the public). One incident induced radiation burns to two emergency workers’ legs from stepping in highly radioactive water in reactor 2, with a calculated total dose of 180mSv […] minimised since.

Conclusion

The second worst nuclear power disaster has brought tremendous devastation upon the Japanese communities affected. The lives that have been, and ultimately will be, lost are difficult to determine at this stage, although it is likely that they will not be on the scale of those killed by the earthquake and tsunami. This is due to prompt countermeasures having been implemented. The economic devastation and social dislocation are vast. 200,000 people have been evacuated from the exclusion zone, and 163,000 remain in shelters. If the area within 20km of the plant is as contaminated as predicted, most of these people will never return to their homes and the ground will lie fallow for hundreds of years. It is clear that there is no other man-made civilian technology that has the potential for such widespread devastation to public health, the environment or society.

Dr Peter Karamoskos is a Nuclear Radiologist in Melbourne. He is the public representative on the Radiation Health Committee of the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA), and Treasurer of the Medical Association for the Prevention of War.

References
1. […]
2. […]
3. A 1/7 INES event is termed an ‘anomaly’.
4. CTBT (Comprehensive Test Ban Treaty) monitoring sites. These are able to detect minute trace amounts of radioactivity.
5. Compared with a current per capita average of approximately 2mSv from natural background radiation.
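For readers who want to see how the risk coefficients quoted above translate into individual risk estimates, the short sketch below applies the linear no-threshold assumption described in the BEIR VII excerpt. The coefficient values are those given in the article; the function name and the example doses are illustrative assumptions only, not part of any standard dosimetry software.

# A minimal sketch of the linear no-threshold arithmetic described in the article.
# Coefficients: ~8% excess cancer incidence and ~5% excess cancer fatality per 1000 mSv.

INCIDENCE_PER_SV = 0.08   # ~8% per 1000 mSv (1 Sv)
FATALITY_PER_SV = 0.05    # ~5% per 1000 mSv

def excess_risk(dose_msv):
    """Return (excess cancer incidence, excess cancer fatality) for a dose in mSv,
    assuming a linear dose-response with no threshold."""
    dose_sv = dose_msv / 1000.0
    return INCIDENCE_PER_SV * dose_sv, FATALITY_PER_SV * dose_sv

# Public annual limit, low-dose cutoff, the incident dose mentioned above,
# and the raised emergency worker cap.
for dose in (1, 100, 180, 250):
    incidence, fatality = excess_risk(dose)
    print(f"{dose:>4} mSv: ~{incidence:.2%} excess cancer risk, ~{fatality:.2%} excess fatal cancer risk")

On these assumptions, the 250 mSv emergency worker cap corresponds to roughly a 2% excess lifetime cancer risk, which is why the raised occupational limit attracted so much attention.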
Cover Image: “Japan Nuclear Radiation Suits” by ssoosay; p14 & 15, “Japan Nuclear Explosions” by ssoosay; p16, “image-191637-panoV9free-vflj” by Oldmaison; p16 & 17, “Japan needs our help (Sendai <> Rennes, France)” by leopoSs; p17, “fukushima inside exploded plant with steam rising from spent fuel rods storage ponds” by daveeza. Images licensed under a Creative Commons Attribution 2.0 Generic Licence.

State to Federal Cost-Shifting
Billing Medicare for services provided in public hospitals: what’s right and what’s wrong?

What is Cost-Shifting?

Cost-shifting occurs ‘when service delivery is arranged so that responsibility for services can be transferred by one player in the health services sector to programs financed by other players, without the agreement of those other players’.1 Opportunities for cost-shifting exist as a result of complexity in the funding and delivery arrangements in the health system, most particularly the division of responsibilities between the Commonwealth and State Governments. Cost-shifting often results from ‘perverse incentives’ in the system that make it more financially beneficial to offload costs onto other jurisdictions, rather than work in the interests of the overall health system.

How Does Cost-Shifting Occur?

Classically, it occurs when clinical services which would ordinarily be provided as part of a public hospital’s suite of services are instead rebadged as being provided by ‘co-located’ external entities. This enables these ‘external’ services to be billed through Medicare, thereby cost-shifting to the Federal Government. Historically, this has happened most frequently with radiology, pathology, specialist clinics and, increasingly, perioperative procedural services.

The most obvious way is where State-Funded public hospitals don’t offer (or offer very limited) public outpatient services. Instead, they send the patients to the specialists’ own rooms, which are usually located in close proximity. All consults are then paid for by Medicare (Federally-Funded), with or without the addition of an out-of-pocket fee to the patient. This usually works out better for most specialists, as they would mostly receive a higher hourly rate than if they were paid a sessional rate at the corresponding public hospital. The benefit to the public hospital is obvious, as they have now avoided the need to fund public outpatient services.

The Australian Health System - a Brief Overview

The division of responsibilities for health care between the Commonwealth and States is complex: one commentator has described it as ‘one of the more mixed, disintegrated and confusing systems on earth’.2 There are many types and providers of services, and a range of funding and regulatory mechanisms.
Broadly, the Commonwealth Government’s Major Contributions to the Health System Include:
• the two national subsidy schemes, Medicare, which subsidises payments for services provided by doctors, and the Pharmaceutical Benefits Scheme (PBS), which subsidises prescription medicines
• shared responsibility for funding for public hospital services through the Australian Health Care Agreements with the State and Territory Governments: under these agreements, the Commonwealth provides funding assistance for the operation of public hospitals
• subsidisation of private health insurance through the 30 per cent rebate on the cost of private health insurance premiums
• funding for a range of other health and health-related services, including public health programs, residential aged care, and programs targeted at specific populations, and
• regulation of various aspects of the health system, including the safety and quality of pharmaceuticals and other therapeutic goods, and the private health insurance industry.

The State and Territory Governments’ Major Contributions to the Health System Include:
• management of and shared responsibility for funding public hospitals
• funding for and management of a range of community health services
• management of ambulance services.

Other examples of cost-shifting are MBS-funded clinics, radiology, pathology services, endoscopy and, perhaps the most sneaky, uninsured private-in-public patients. State Governments have been extremely keen for hospitals to cost-shift wherever possible, with perhaps the most blatant example of this encouragement being the Victorian DHS publishing a “Resource Kit for MBS-billed services in Public Hospitals”,4 which provides the blueprint for creation and implementation, and a detailed description, of the ‘legitimate’ billing arrangements to provide Federally-Funded services within a public hospital.

Section 19.2 of the Health Insurance Act explicitly prohibits charging Medicare for services that are fully financed by State-funded public hospitals.3 The grey area is when patients are classified as receiving treatment on an outpatient basis or as a private-in-public patient. In these cases, it may be possible for the specialist providing the medical service to bill Medicare directly and receive funds even though the services were provided using equipment, staff time and property fully financed by the State health departments. If the specialist accepts 75% of the Medicare Scheduled Fee as full payment, then it will even appear ‘invisible’ to the patient. It is an arrangement which is win-win for both specialists (who can now earn 75% of the MBS rate for procedures/services rather than a paltry public sessional rate) and public hospitals (they have their specialist’s wages funded by Medicare or, alternatively, they bill on behalf of the specialist and pool the earnings in a ‘special purpose fund’).

MBS-billed clinics and procedural services are growing in popularity Australia-wide, particularly in Victoria, NSW and Queensland. In fact, many public specialists routinely partake in these clinics, with the billing for these services performed on their behalf and 100% of all MBS funds being retained by the employing hospital.
The internet is littered with stories of disgruntled overseas-trained doctors who were employed as public staff specialists in Australian hospitals and had paperless Medicare billing done on their behalf for months or years before they even found out.

Perhaps the most vulgar example of State to Federal cost-shifting has to be the rebadging of public patients as uninsured private-in-public patients. Encouraging patients to opt for treatment as a private patient in a public hospital, where the specialist is happy to take the 75% MBS rebate alone as payment, is perhaps the sneakiest of all the tricks. The patient jumps the queue, the specialist is better remunerated and the hospital saves money. Win-Win-Win. The only loser is Medicare and, therefore, the Federal Government and ultimately the taxpayer.

With all the examples of cost-shifting seen, the only real risk posed is to the specialist who is accessing Medicare funds whilst on paid public time. The specialist is the only one who would be deemed to be in breach of Section 19.2 of the Health Insurance Act, and any penalties imposed would be on them and them alone. In cases where this action has been brought, the hospital, the State health departments and senior bureaucrats have escaped unscathed.

I advise any specialist who is asked to partake in an arrangement where their provider number is billed on their behalf for MBS-funded services provided within the confines of a public hospital to obtain independent specialist advice.

Dr James Nguyen

References
1. B. Ross, J. Snasdell-Taylor, Y. Cass and S. Azmi, Health financing in Australia: the objectives and players, Occasional Papers: Health Financing Series, Volume 1, 1999, p. 38.
2. S. Leeder, ‘We have come to raise Medicare, not to bury it’, Australian Health Review, vol. 21, no. 2, 1998, p. 30.
3. Specialist Clinics in Public Hospitals, outpatients/specialist-clinics0608.pdf

Sexual Dynamics at Work
Blunt tool or sharp instrument

There are broadly two categories of people who might have started to read this article: 1) those who will have turned to it first, in the hope that it features abundant pictures of svelte, slightly built nymphs wearing nothing more than a come-hither pout and/or strategically arranged limbs and shadows; or 2) those who were anticipating an interestingly complex socio-politico-gender-based discussion around the ongoing struggle for women to gain equal status with their mean-minded male colleagues. It is not my intention to implicitly cover either of these possible topics, but to open up commentary and refer to the way sexual allure/power are used/misused in the medical workplace and some of the consequences.

We have all seen it in action or may even have experienced its effects, good/bad, constructive/destructive, magnificent/mischievous. So despite its prevalence, why is it not recognised and why is it not taken into account more seriously? Failure to do so adds to the problems that we face in our daily work. (See the article ‘Altruism in Medicine... Is it Declining? The reasons why’ in the last issue.)

Consider the following scenarios, which are based upon events that have occurred in hospitals around Australia in the last 10 years. The details have, of course, been somewhat modified.

1) A young woman makes many attempts, most of which involve sexual relationships with numerous ‘eminent’ (and married) leaders of their specialties, to get into various training programmes.
She fails to allure them sufficiently, gives up and goes into an alternative career of pole dancing. 2) A senior surgeon who controls entry to training programmes is well known for serial relationships with attractive (much younger) would-be trainees. The trainees do not always get a post on the training programme but there is no shortage of applicants. 3) A senior surgeon attempts (in an unsurgeon like, ham fisted way) to establish a sexual relationship with an attractive trainee. The trainee encourages the attention until in post then sues for harassment. The trainee wins a large pay-out, but has to continue to train elsewhere. 4) A handsome young (fe)male GP is asked out by a handsome young (fe)male patient. She/he refuses because, being Physicianlife 25 F E AT U R E S young and a little shy, he/she does not want to discuss their personal life with interested lay people at a Medical Board hearing if the relationship ‘goes bad’. 5) A pleasant looking and personable young hospital doctor keeps to himself at work, does not engage in ‘flirty repartee’ and is not considered to be a ‘spunk’. He is in fact subjected to passive aggressive behaviour by nursing staff around him who expect to be seen as professional yet alluring. Under the stress of the bad atmosphere, a significant diagnosis is delayed. The doctor suffers badly from this mistake, but happily the patient survives. Questions 1) What is the common thread here? 2) Consider the various iterations possible in 4. Are they all the same? 3) In each case, who/what, if anybody/ thing, is ‘wrong’? 4) Why? 5) Who suffers? 6) Why? In my honest opinion, there are no clear answers to question 2 to 6, but the answer to question 1 is of course, sex. It is obvious why sex is so pervasive and persuasive. It is essential to continuation of life and thus has become an activity, It is generally understood that the female of the species is on the hunt for the most powerful male capable of giving her offspring the best genetic and physical chance of survival. mediated and modulated by all manner of neuro-chemico-psychologic-socioeconomico factors. However, in many animals as well as Homo sapiens, sexual activity can be and is used for a number of ‘purposes’, other than propagation of the species. I will refer to these in turn and talk about the way these purposes can impact upon us in our professional lives and in brief how we can protect ourselves. Re-Creation This well recognised behaviour is seen in most living organisms and of course, also in Homo sapiens. It is generally understood that the female of the species is on the hunt for the most powerful male capable of giving her offspring the best genetic and physical chance of survival i.e The Alpha Male. Once inseminated, the female usually takes on the role of rearing and teaching her young. worthy of his genes as possible, thus maximising the chance of his offspring dominating the ‘gene pool’. Once his part is played, he will often as not go and lay down and relax until another opportunity presents itself. Interestingly the males of many species are likely to kill the young sired by another male and mate with the female. This of course is in keeping with continuity of their genes. As young doctors (financial alpha people) clearly the field is largely open in the race to find, attract and keep a worthy mate. In-house competition, not just between the unmarried, can be the cause of much workplace tension and activity, professional and otherwise. 
Once the mate has been attracted, there is always the risk that the actual resources needed by said mate will exceed the ability of the provider. If this failing (and many others) are not met, be it the latest shiny SUV, designer hair clip or ‘trendy togs 4 tots’, then the provider may find themselves sidelined even as they continue providing. Recreation The male of the species is generally intent on inseminating as many females 26 Physicianlife Pleasure, be it the taste, smell or feel of F E AT U R E S something, is a strong behavioural driver. It seems clear that Homo sapiens are not the only species to engage in sexual activity for the pleasure of it. All manner of sexual activity has been observed in many other animals species, unrelated to oestrus. Apparently the first squid couple caught “in flagrante delicto” were males. Clearly Homo sapiens are not the only species with sex on the brain much of the time. Normally, of course, this is constrained to appropriate times and venues, but sometimes it spills over into the workplace. In short, if somebody seeking to ‘hook up’ is considered so ‘hot’ they are ‘cool’ then they are also a ‘spunk’ and liable to succeed. If not ‘hot’ they are simply a ‘sleaze’ and liable to a harassment claim. Further, if they are thought ‘hot’ or even ‘tepid’ but do not respond to the infallible charms of some around and just want to work, then they are likely to fall foul and seemingly increasing human activity of rape as a weapon. immune to the initial sexual wiles and later legal tactics, of their ‘chosen mate’. It is not uncommon for the career advancement of any half attractive professional to be dependent upon how many influential staff they can persuade with their alternate knowledge and abilities. There is no doubt in the collective professional mind (of many different professions, as well as ours) that career advancement can be accelerated and assured by sexual favour. The best way to avoid trouble is, of course, to be aware of the possibility, but explicitly: If demanded as expected, this is clearly harassment and all that implies. At least, (see the article on Bullying in the last issue) if such favours are offered, it is extremely unwise to accept. Blackmail and extortion can easily follow and that is only the beginning of the troubles. Most of us in medicine for any length of time will be well aware of one or more individuals who, if really scrutinised properly, would not stand up to the 1) As alpha types (at least financially) we are juicy targets and so must watch our backs 2) If you’re planning on getting married, then do get a prenuptial agreement... If you are in any doubt about this one, Give Sir Paul Mcartney a call and ask his opinion 3) Be open to the fact that not everybody who chats you up is enticed by your body or mind. It is wise for all to bear these short, simple comments in mind next time a passing fancy recurs and remember what the wise gorilla never does more than once, in his own nest. It is not uncommon for the career advancement of any half attractive professional to be dependent upon how many influential staff they can persuade with their alternate knowledge and abilities. of revengeful gossip that can damage their work and thus their professional prospects. This is very common but very hard to do anything about, and especially hard to prove. In a nutshell, all that can be done is to be aware and play the game. 
Regulation This is where the biologically useful and the universally pleasurable can collide and become of legal interest or worse still, grotesquely corrupted. Sexual activity as a control mechanism, a manifestation of one individual’s power over another, can range from the subservience displays in Baboons, where a submissive non-oestrus female allows a dominant male to mate, to the barbaric standards of propriety that they might attempt to exude. There are many sexualised skeletons bumping about in many personal and departmental closets. Granting or grabbing sexual favour in exchange for other benefits is not only practised by grunting gorillas. Dr Tony Blinde Dr Blinde believes science can easily explain all the many wonders we enjoy on this our only planetary home. Conclusion Sexual stimuli are around us all the time, in the different forms that attract different people. That is the way it has evolved to be. However, we are no longer uninhibited proto humans running about in the skins of others animals, grabbing the nearest female by the hair and running back to our cave with them. Neither are the alpha types amongst us References p26, "Male and Female Lion Playing" by wwarby. flickr.com/photos/wwarby/3302602311/ Images licensed under a Creative Commons Attribution 2.0 Generic Licence Physicianlife 27 F E AT U R E S 28 Physicianlife F E AT U R E S Ageing inDoctors Cognitive Decline and When is it time to hang up the stethoscope? T he Australian medical population is ageing and similarly, so is the medical workforce. Approximately one in six registered medical practitioners are over 60 years of age1 and already experiencing some decline in both physical and mental capabilities. Allowing doctors to work until advanced age can be beneficial in that it preserves highly experienced practitioners, teaching resources and skills in the community. However, the downside is that there is the potential for increased medical errors due to burnout, physical limitations (e.g. tremor, loss of dexterity, reduced tactile sensation, poor fine motor skills and coordination, visual impairment, auditory impairment) and cognitive impairment (poor judgement, impaired ability to make rapid decisions, memory impairment). What is the evidence? Firstly, there is plentiful evidence correlating cognitive decline with increased age. The ageing process significantly targets cognitive speed and short-term memory as well as ‘fluidity of thinking’ which can be thought of as the ability to solve new problems. However, there have not been any large scale studies correlating impaired ability to practice as a doctor with advanced age. The sparse evidence that may suggest a correlation between advanced age and cognitive decline includes: • A study of 109 doctors evaluated in the Peer Assessment program in Ontario, Canada found 10% needed significant assistance with their day-to-day medical practice due to impairment. 18% of the doctors needing assistance were aged over 70, while this demographic made up less than 5% of the total population size. A significant overrepresentation of this 70+ age cohort.2 • Morrison and Wickersham studied US state licensing boards and disciplinary action in doctors across all specialties over several decades. 
Their findings were a weakly positive association between age and disciplinary action.3 The dilemma The issue of ageing doctors with cognitive impairment creates a dilemma in that how do we ensure the maximal safety of patient care without infringing on the civil liberties and professional autonomy of elderly doctors? Currently, monitoring by AHPRA (formerly by State medical boards) for signs of cognitive decline in There is no regular dementia screening for ageing doctors in Australia. Physicianlife 29 F E AT U R E S senior doctors is only instigated after a complaint or patient injury.4 There is no regular dementia screening for ageing doctors in Australia. In other parts of the developed world, dementia screening has been accepted and integrated into the medical registration and renewal process. The screening itself creates a whole new set of problems revolving around assessing doctors and intervening when it is declared that their cognitive functioning is impaired. Why should we single out doctors for cognitive screening? Doctors are scrutinised for cognitive decline more than any other professional group simply because there are few professions where the welfare of others that free recall, encoding and retrieval, visuospatial abilities, abstraction, and mental flexibility decline with increasing age.6 Executive functioning deficits often precede memory loss in dementia. Ideally, it would be best to try to detect dementia at the mild cognitive impairment (MCI) stage to initiate closer scrutiny and monitoring for current behaviour which suggest incompetence, because medical errors due to cognitive impairment can occur long before memory lapses become obvious to supervisors and co-workers. Approximately 12% of cases with MCI convert to dementia annually, reaching 80% at six-year follow-up.7 However, the necessity of self-acknowledged memory doctor's family have noticed subtle cognitive decline, irritability, depression and driving problems. The normal systems in place which one would hope would alert the ageing doctor to his/her cognitive decline are self-awareness, recognition by medical colleagues, family members and other hospital staff (managers, nurses, medical administrators etc). Interestingly, with many senior doctors, it is extremely common for family, colleagues and institutions to inadvertently collude to protect the doctor’s feelings at the expense of patients . And so, the doctor unwittingly continues to practice as before without awareness of their newly acquired Even doctors who retain some insight may minimise or deny functional impairment, fearing social stigma, loss of registration and income, legal liability, and diminished self-esteem. is so intimately related to the high-level reasoning required of its competent practitioners. problems for the diagnosis of MCI makes early detection of those less insightful individuals especially problematic. limitations. Naturally, loss of insight and self-awareness are frequent sequelae of the dementia process. What happens as the dementia process unfolds? Before some of the more obvious signs of dementia emerge (i.e. memory problems) clues suggesting the onset of dementia in a doctor may include prescription errors, late payments, irrational business decisions, practice staff concerns, dissatisfied patients, patient injuries, and lawsuits. 
And very commonly, the Even doctors who retain some insight may minimise or deny functional impairment, fearing social stigma, loss of registration and income, legal liability, and diminished self-esteem. Also, those who have enjoyed highly distinguished or academic careers, commonly have their sense of identity and self-esteem Cognitive changes are a normal part of ageing, but the incidence of dementia increases exponentially with age.5 Although professional development and experience may have a positive effect on a doctor's abilities, it is undeniable 30 Physicianlife F E AT U R E S tied to their ability to practice as a doctor. Quoting a GP transitioning into retirement, “One minute you are a respected member of the community and the next you’re a nobody”. Families may recognise the problem but avoid discussing it for fear of upsetting the doctor, or causing loss of livelihood or community standing. Practice partners may fear financial loss or loss of goodwill value and so perpetuate the myth that the doctor is still ‘safe to practice’. Hospitals may ignore early warning signs and delay reporting of minor age-related dyscompetence because of the revenue the doctor generates. All these factors contribute to the fact, that relying on social and professional systems for recognition of dyscompetence has the potential for serious disaster. What happens overseas? A successful example of a systematic doctor screening programme is the ‘Physician's Achievement Review’ by the College of Physicians and Surgeons in Alberta, Canada.8 Every doctor is screened every five years using a 360-degree pre-screening survey which is completed by peers, patients and non-doctor colleagues where they rate the doctor’s knowledge and skills, communication skills, psychosocial management and office management. The lowest one-third ranked doctors are given an onsite assessment by senior medical staff from the medical board. The purpose behind this methodology is to be able to identify deficiencies including cognitive deficits in doctors before patient injury or errors occur. A novel screening procedure used to identify underperforming doctors is found in Ontario, Canada. The College of Physicians and Surgeons of Ontario, the province's medical licensing authority, sponsors the Physician Review Program (PREP) at McMaster University. The College of Physicians and Surgeons of Ontario quality assurance committees refer approximately 30 of the 26,000 Ontario doctors annually to PREP because of identified competency concerns. 9 The PREP assessment is an intensive oneday evaluation that includes structured oral and written examinations, simulated patient encounters with peer observation, and chart-stimulated recall. Results are combined into a summary competency score falling into one of six categories: diagnosis of cognitive impairment and retraining? I (no deficiencies) II (minor deficiencies) III (moderate deficiencies) IV (major deficiencies) V (unsafe to practice) VI (unsafe to practice in any setting) 18 doctors who performed poorly on PREP initially were reassessed with PREP one to three years later, after remedial education. Of the 12 doctors who remained unsatisfactory at PREP retesting, nine showed moderate to severe dysfunction on the neuropsychological battery. This suggests a strong likelihood of cognitive impairment in underperforming doctors who fail remediation by self-directed CME. 
There is minimal data available on what happens to doctors after the onset of cognitive decline as screening is performed in so few jurisdictions. However, in Ontario where it has been studied to a small extent, the prognosis appears to be quite poor. 75% of doctors who failed to correct their PREP score after remediation never reached a level of satisfactory cognitive function to rejoin the medical workforce. Doctors found to have dyscompetence (i.e., Categories III through VI) are offered the opportunity to remediate through self-directed CME and then to be reassessed. This has been extensively studied by Turnbull et al.9-11 to determine the relationship of the PREP competency score to cognitive impairment. 45 PREP participants were interrogated with a neuropsychological battery that evaluated five cognitive domains (verbal problem solving; visual-spatial problem solving; learning and memory; fluency “One minute you are a respected member of the community and the next you’re a nobody” and attention; and mental tracking) and produced a summary score of cognitive impairment (none, minimal, mild, moderate, or severe). Of the 14 doctors who had satisfactory PREP results (Category I or Category II), 13 (93%) showed no, minimal, or mild cognitive impairment as judged by age-independent norms on the neuropsychological battery. Of the 31 doctors with unsatisfactory PREP results, 17 (55%) showed moderate or severe cognitive impairment on the neuropsychological battery. Therefore, the incidence of moderate or severe cognitive impairment was 7% in the doctors who could be remediated versus 55% in the group who could not be remediated. What is the long term prospect of practicing medicine again after What should happen if concerns are reported about a doctor’s cognitive functioning? The ageing doctor with dementia must decide how long to continue practicing medicine. Unfortunately, assessment tools cannot yet predict to what extent cognitive impairment translates into dysfunction in professional duties. Doctors may display dyscompetence or underperformance and come to the attention of AHPRA by their patients, peers, or supervisors reporting them. The precipitating event may be a medical error, a malpractice suit or disciplinary action. Once dyscompetence is identified, the state medical board must decide whether the doctor's disability is remediable or permanent. If the doctor's cognitive disability is judged to be significant, then discussing social and financial issues with the Physicianlife 31 F E AT U R E S healthcare team and hospital administration may help the doctor and his family to feel more secure. The doctor will need to undergo assessment and to stop practicing. Hospitals should try to involve the doctor's family to safeguard the doctor's dignity and to respect their long service. Referrals to the Alzheimer's Australia support group may assist families with decisions regarding retirement and long-term care. Summary It should be remembered that whilst most older doctors in practice are not impaired, the prevalence of cognitive decline that impairs performance is greater amongst those over the age of 65. With such a significant increase in the likelihood of cognitive decline, there may be merit in screening to identify those ageing doctors who do not self-select out of the practice pool when deficits begin to surface. It is recognised that the main criterion that should be used regarding maintenance of registration and treating patients is current level of functioning. 
There should not be an age cut-off which determines when doctors should reduce their clinical exposure. However, the question that needs to be raised is whether advanced age should be considered a risk factor that merits automatic screening to assess for adequate functioning.

Selina Vasdev

References
1. Robert G Adler and Conn Constantinou: Knowing — or not knowing — when to stop: cognitive decline in ageing doctors. MJA 2008; 189 (11/12): 622-624
2. Norton PG, Faulkner D: A longitudinal study of performance of physician's office practices: data from the Peer Assessment Program in Ontario, Canada. Jt Commiss J Qual Improv 1999; 25: 252-258
3. Morrison J, Wickersham MS: Physicians disciplined by a state medical board. JAMA 1998; 279: 1889-1894
4. Leape LL, Fromson JA: Problem doctors: is there a system level solution? Ann Intern Med 2006; 144: 107-115
5. Jorm AF, Jolley D: The incidence of dementia: a meta-analysis. Neurol 1998; 51: 728-733
6. Peterson RC: Mild cognitive impairment: prevalence, prognosis, aetiology and treatment. Lancet Neurol 2005; 4: 576-579
7. McAuley RG, Paul WM, Morrison GH, et al: Five year results of the peer assessment program of the College of Physicians and Surgeons of Ontario. Can Med Assoc J 1990; 143: 1193-1199
8. College of Physicians and Surgeons of Alberta. Physician Achievement Review (PAR) program. Available at: […]. Accessed February 6, 2007
9. Turnbull J, Carbotte R, Hanna E, et al: Cognitive difficulty in doctors. Acad Medicine 2000; 75: 177-181
10. Turnbull J, Cunnington J, Unsal A, et al: Competence and cognitive difficulty in doctors: a follow-up study. Acad Medicine 2006; 81: 915-918
11. Hanna E, Premi J, Turnbull J: Results of remedial continuing medical education in dyscompetent doctors. Acad Med 2000; 75: 174-176

p28 & 29, “Rusty mechanics” by daoro; p30, “090804-20756-LX3” by hopeless128; p32, “Old hands” by daoro. Images licensed under a Creative Commons Attribution 2.0 Generic Licence.

Self Managed Super For Young Wealth Accumulators
Why it’s Becoming So Popular

Everyone seems to be jumping on the self managed super bandwagon. But is it appropriate for young investors? And if it is, what are the ground rules for building wealth successfully? We explain why self managed super funds are quickly becoming the number one choice of young wealth accumulators.

Self Managed Super is on The Rise

Self managed super is the fastest growing sector of the superannuation market, with about 428,000 SMSFs holding $390.9 billion – almost one-third of the total super pool in Australia, according to the Australian Prudential Regulation Authority’s latest super bulletin.
Once considered to be the exclusive domain of the over 50s, self managed super funds are slowly gaining popularity among young wealth accumulators.

Why Choose an SMSF Over a Traditional Super Fund?

Better returns. The average annual return over 10 years for funds with more than four members (in other words, any super fund other than an SMSF) is 3.3%. This dismal return, along with the allure of flexibility, control and the ability to link super assets to estate planning strategies, has led to the rise of SMSFs over traditional super funds. But without a doubt, the Government’s backflip on borrowing within SMSFs in 2007 has been the catalyst in bringing SMSFs onto the radar of young investors as a smart, long-term wealth-building vehicle.

Latest research suggests an additional 40% of SMSF trustees plan to use gearing in the coming 12 months. The Investment Trends 2010 SMSF Borrowing Report found 29,000 SMSFs used a gearing strategy in 2009 compared with 13,500 in 2008. This is a 115% jump in less than two years. And the most favoured assets are, not surprisingly, property (41%) and shares (30%).

So, What Are The Ground Rules?

Rule No.1 – Never lose money
Rule No.2 – Never forget Rule No.1

Legendary investor Warren Buffett passed on this gem – like many others – and it is as relevant to borrowing within an SMSF as it is to any other form of investment. Why would you borrow in your SMSF if the interest cost of the loan exceeds the return on the investment? For example, if you believe share markets return on average 8% a year, it makes no sense to pay 10% to borrow money to invest in shares. And with the cost of borrowing still relatively high, if you do decide to borrow, it’s vital you choose the right investment that will deliver both income and capital growth.

Case Study: Building Wealth in Your SMSF Without Sacrificing Your Lifestyle

One of the real benefits of gearing within an SMSF is the ability to repay debt more quickly, cheaply and tax effectively, without impacting your cash flow. Take the simple case of Mr and Mrs Smith. Mr Smith is a 32-year-old obstetrician operating in private practice and currently earning $150,000 per annum. Mrs Smith works part-time as a practice manager and earns $50,000 per annum. With a $200,000 super balance, Mr and Mrs Smith set up their own SMSF and borrow to invest in a quality $600,000 property (plus $40,000 stamp duty and legals), returning 4% rental income. They use $160,000 capital from their super balance and borrow $480,000.

Case Study
                                              Super balance     Super income
Mr Smith                                      $150,000          $13,500 (9% SG contributions)
Mrs Smith                                     $50,000           $4,500 (9% SG contributions)
Total super balance                           $200,000          $18,000

Add
Investment property                           $600,000          $24,000 (4% rental income)

Less
Deposit                                       -$120,000
Stamp duty & legal costs                      -$40,000
Non-recourse borrowing for
investment property                           -$480,000         -$38,400 (8% interest cost on borrowing of $480,000)

Rental income 4%                                                 $24,000
Return on remaining $40,000 cash 4%                              $1,600
Super contributions 9%                                           $18,000
Gross income                                                     $43,600
Less interest costs                                              -$38,400
Net income                                                       $5,200
Superannuation tax @ 15%                                         -$780
Net cash flow                                                    $4,420
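The arithmetic behind the case study table can be checked with a few lines of code. The sketch below simply reproduces the article’s illustrative figures (salaries, 9% contributions, 4% yields, 8% interest, 15% super tax) and the “doubles every 10 years” capital projection quoted in the Outcome that follows; it is a worked example under those stated assumptions, not financial advice or a modelling tool.

# A rough sketch reproducing the cash-flow arithmetic in the case study above and
# the capital projection in the Outcome that follows. All inputs are the article's
# illustrative figures.

def smsf_case_study():
    salaries = {"Mr Smith": 150_000, "Mrs Smith": 50_000}
    sg_rate = 0.09                      # superannuation guarantee contributions
    property_price = 600_000
    rental_yield = 0.04
    loan = 480_000
    interest_rate = 0.08
    residual_cash = 40_000              # super balance left after deposit and costs
    cash_yield = 0.04
    super_tax = 0.15

    contributions = sum(s * sg_rate for s in salaries.values())   # $18,000
    rent = property_price * rental_yield                          # $24,000
    cash_return = residual_cash * cash_yield                      # $1,600
    gross_income = contributions + rent + cash_return             # $43,600
    interest = loan * interest_rate                               # $38,400
    net_income = gross_income - interest                          # $5,200
    net_cash_flow = net_income * (1 - super_tax)                  # $4,420

    # "Assuming the asset doubles in value every 10 years" over 28 years:
    projected_value = property_price * 2 ** (28 / 10)             # ~$4,178,000
    return net_cash_flow, projected_value

cash_flow, value_at_60 = smsf_case_study()
print(f"Net cash flow within the SMSF: ${cash_flow:,.0f} per year")
print(f"Projected property value at age 60: ${value_at_60:,.0f}")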
Outcome

Income: No effect on lifestyle income.
Capital: Assuming the asset doubles in value every 10 years, by age 60 (28 years later) the asset should be worth $4,178,000.
Tax: Within super, 10% if sold in accumulation phase OR 0% in pension phase; outside super (assuming the asset is held for more than 1 year), 23.25%.

SMSFs Are Good For Family Succession Planning

Statistics show that most self managed funds are owned by married couples, who also act as trustees. As SMSF owners age, the risk posed by the onset of dementia and ultimately the death of trustees is another argument for allowing younger family members to join their parents’ SMSF and seamlessly transition wealth over generations.

Shifting Goal Posts

Self managed super law is continually evolving. Without admitting it had ‘let the genie out of the bottle’, the Government acknowledged the potential dangers of excessive gearing in SMSFs when it announced last December it would review gearing practices in two years’ time to see if ‘leverage posed a risk to superannuation fund assets … in SMSFs’. So, stay tuned for developments in this space.

If you are considering setting up a self managed super fund, it makes sense to consult a specialist adviser who can not only help you navigate complex super tax and estate planning laws, but also approach asset selection with financial rigour and independence.

Roger Wilson is a Wealth Management Partner and Eric Maillard is a Business Advisory and Accounting Partner at Lachlan Partners.

The case study does not take account of inflation or salary growth, nor does it consider the positive cash flow / tax impacts of depreciation deductions – all of which would most likely result in no tax being payable. Importantly, in the real world, investing in one asset alone would be considered a high risk strategy. Mr and Mrs Smith should allocate future super savings towards other asset classes to reduce risk and diversify their investment portfolio.

Disclaimer: This document is of a general nature and does not take into account your personal objectives, situation or needs. Before making a decision about setting up a self managed super fund, you should consider your financial requirements. Speaking with a qualified financial adviser may help.

Who wins while Art loses?
The Government’s assault on art is damaging the entire art industry

For more than a decade, I have been collecting Australian fine art within my self managed superannuation fund (SMSF). Many people will adhere to the view that this is, at best, unwise. Others, no doubt, dismiss the acquisition of art - whether or not for investment - as nothing more than ‘bourgeois’ self-indulgence. But I believe that art can be a valuable asset, and my investment in art has always been a serious one and totally within the “law”.

In May 2009, the then Rudd Government established a panel of experts to review and provide recommendations regarding Australia’s superannuation system. Chaired by Mr Jeremy Cooper, a former Deputy Chairman of ASIC, the Panel’s key preliminary recommendations were released in June 2010 and specifically sought to prohibit investment in collectables and personal-use assets such as artworks within SMSFs.
Had the Cooper Review’s recommendations been implemented, no SMSF in this country would have been allowed to invest in art and all art currently held within any SMSF would have to have been sold off within a tight time frame. An outcry from the art industry ensued, followed in July 2010 by a disastrous result for a prominent indigenous art auction conducted in Melbourne. As a result and in the lead-up to the August 2010 Federal election, all major political parties distanced themselves from the Cooper Review’s prohibition of art within SMSFs. More recently, however, the Labor Government, egged on by the Australian Taxation Office (ATO), has done a back-flip on this pre-election commitment and has re-stated its intention to apply “tighter restrictions” to SMSF investments in collectables “to ensure that they are made for retirement income purposes rather than current day benefit”. The detail of these intentions is yet to be made clear and “will be set out in regulations”. As it relates to art, this might yet mean that all works of art currently held in an SMSF will need to be sold within a five year period as advised by the Cooper Review. Even if art is permitted to remain in SMSFs, the Government clearly intends to enforce the so-called “Sole Purpose Test”. The practical impact of this test – an edict of the ATO - is that any art held within an SMSF must not be displayed where the owners, their friends or their family might be able to see it and thereby gain the benefit of seeing it. The Sole Purpose Test makes it clear that even if art can be owned by an SMSF and even if it is a recognised “masterpiece”, astonishingly, it cannot be viewed in a home or in a workplace. It must be hidden away. Angry ATO punishes art investors Art can appreciate in value just like any other asset. In well-informed hands, it is a sound investment. This might be a systematically accumulated collection (like mine) or it might be an entirely isolated but well-considered acquisition of a masterpiece. Either can offer excellent capital gains. Yet we all know that every investment vehicle – stocks, shares, property and “collectables” - require care and consideration to avoid poor long term returns. Bad judgment and/or bad advice can affect any investment vehicle, not just art. In seeking to exclude art from SMSFs, the Government makes it plain that SMSF investments should be made for the purpose of creating income after retirement rather than to deliver current day benefit. This is all well and good – but art clearly can be acquired for both and it is wrong and unfair to insist that art that has been purchased with the entirely reasonable expectation of long term capital growth is, inherently, unsuitable for inclusion in a SMSF just because it is enjoyable to look at now. The ATO approach is both clumsy and punitive. It is crude, even lazy, to lump all art together and to exclude from SMSFs art that has real potential for future capital growth. The quality of an investment certainly plays no part in the ATO’s thinking when it comes to other investment vehicles. There has never been any suggestion that the ATO intends to apply any restriction whatsoever upon the acquisition of even the most palpably useless stocks and shares. But they are anxious, almost desperate, to exclude all art regardless of investment merit. 
Physicianlife 39 In seeking to exclude art from SMSFs, the Government makes it plain that SMSF investments should be made for the purpose of creating income after retirement rather than to deliver current day benefit. Even insisting that art held in SMSFs not be hung at home or at work (as dictated by the Sole Benefit Test) is both inexplicable and odious. Hanging a valuable work of art, one with reasonable prospects of capital growth, only adds to its value; and it certainly makes it less likely to be damaged without that damage going undetected. The ATO is working very hard to prevent people from adding non-investment artworks to their SMSFs. The ATO resents – deeply – any action that misuses the tax concessions permissible for investments in superannuation funds. But this attitude has resulted in their blinkered implementation of the Sole Benefit Test, another strategy that disadvantages genuine art collectors like me. While it is undeniable that some SMSFs have acquired art entirely for the purpose of making decoration of their home or office less expensive, this cannot be said of me and others like me who have taken the whole art-as-an-investment issue seriously. As it turns out, regrettably, genuine art investors are insufficiently important to 40 Physicianlife warrant any particular consideration from their Government. The desire to eliminate every possible occasional art buyer from the SMSF scene is so important to the Government that they are ready to sacrifice my hard-earned asset as “collateral damage”. One of the uncomfortable undercurrents emanating from this angry alliance of the ATO and the Labor Government is the ease with which individual liberty and freedom of choice is being discarded. At the end of the day, Australians should retain the right to invest for their future – well or not - as they see fit. The truth is that most Australians will spend most of their investment money genuinely attempting to build for their future; I don’t think that anyone could hope for more than that. Some Australians will, however, spend some of their investment money on less rational options, including an occasional piece of art that might not appreciate in value over time and that is primarily (if not entirely) intended for current benefit and that is very much up to them. It might be a foolish investment for a particular SMSF but, in the vast majority of cases, it is only a tiny component of the value of that SMSF. And it is that individual who will, in time, bear the fruits of their own investment choice. So, why is the ATO so enraged by this? Who really suffers when an SMSF invests unwisely in a painting? And is that suffering any less when an SMSF invests equally unwisely on the stock market? Whatever else is true, the ATO will not be any better off whichever bad investment the SMSF selects. The tax concession that the ATO is obliged to apply is equal regardless of the category of superannuation investment. And how can it be fair that, in the wake of the ATO’s anger at these non-investment art purchases, my entirely appropriate and carefully constructed acquisitions are considered dispensable? This is an unequivocal infringement upon my rights and upon those of other SMSF directors. If I make a bad investment decision, I will suffer, as I should. But who exactly is suffering when an ill-informed (or illadvised) SMSF acquires a piece of art? By B U S I N E S S & F inance will drive art prices (especially the prices of new art) down. 
Art investors, art galleries and the artists themselves will all suffer. The indigenous art industry will suffer at least as badly as any other, on present evidence probably moreso. curtailing my SMSF, devaluing it forever, who actually benefits? For me, and for large parts of the art industry, this is not an academic matter but a very real and personal one. The value of my art collection – of my SMSF – is threatened by any decree that forces me to sell within a short time frame (in a fire sale environment as has been proposed by the Cooper Review) or that actively diminishes the pool of potential buyers of art. Without any doubt, if this sustained Government attack on art in SMSFs is maintained, including the shameful Sole Benefit Test, I will lose. But who will gain? And, if someone else does gain at my expense, is that fair? The tax incentives offered for superannuation investments have acted as a great stimulus to a number of industries, including the art industry. Dismantling those incentives for art will damage the value of collections like mine, will discourage potential art buyers, will reduce art sales and The attack on art in SMSFs is misguided and Government has been misinformed by advisors, like Jeremy Cooper, with skewed agendas or ideological axes to grind. I don’t think any Australian citizen will be better off as a result of the restrictions that have been and that are about to be imposed upon art in SMSFs, but some citizens – me, for one – will be much worse off. For what it is worth, I say: Get rid of the Sole Benefit Test. It is an embarrassing, boorish, culturally ignorant piece of law that reflects very poorly upon its sponsors and casts Australia in a poor light. Where else in the world is it mandated by Government that beautiful and valuable fine art must be stored and hidden precisely so it cannot be seen? Allow art to be included in SMSFs. It is a potentially good investment which supports an important Australian industry. And exclusion of art from SMSFs will damage real people whose livelihoods or whose investments are built upon art. through education. But, in the name of our open, civilised and egalitarian way of life, let’s not legislate against individual liberty and freedom of choice. The Government and the ATO need to learn to live with the tax concessions they themselves offer through superannuation investments. Tax concessions are just as much a part of Australian life as tax itself. And the Government should embrace the tremendous boost to industries associated with the trade in collectables that has been delivered by having them included in SMSFs; and relish the GST that this delivers. Dr Michael Levitt is a Colorectal Surgeon in Perth who has been collecting art for investment for over 10 years. He has been an invited speaker at art exhibitions and is a passionate advocate of art and the art industry. References p40 & 41, "Exhibition View 2 - Impressions, The Printed Image El Camino Art Gallery" by Marshall Astor - Food Pornographer p41, "JSOVT(MB): Musei Vaticani chair" by Yaisog Bonegnasher Images licensed under a Creative Commons Attribution 2.0 Generic Licence Promote sound investment strategies for all investment vehicles, including art, Physicianlife 41 Physicianlife B U S I N E S S & F inance FinanciaL Smart Investors Target Carbon Tax Opportunities How will the proposed carbon tax affect your investments? T he move is on to implement a carbon scheme in Australia. 
In late February 2011, the Gillard Government proposed a two-stage plan for a carbon price mechanism, to commence on 1st July 2012, subject to Parliamentary approval this year. It looks likely that the legislation will pass, but many of the most contentious areas remain under discussion, namely the level of the fixed price, the phasing in of sectors, and assistance for both households and industry. However, unlike the discussions in late 2009, which became bogged down, it is thought that the Green Party may wield more influence this time. Potentially this might result in less assistance to industry (electricity generators and trade-exposed sectors). It also means that certain sectors might benefit from increased investment if Australian households and companies are pushed to move to cleaner practices.

Opportunities
The carbon tax proposal incentivises individuals and businesses to be more mindful of their energy use. It is also designed to increase the competitiveness of sustainable energy technologies. Certainly, around the world a global clean energy economy is already mobilising, and re-insurers, superannuation funds and smart investors are beginning to realise this is the case. In 2010 clean energy investments hit record levels globally at $243 billion.1 The EU and China are leading the charge, with renewable energy and energy efficiency investment expected to grow to between $180 billion and $260 billion per annum by 2030 (Bloomberg).1 Australian investment is currently lagging behind the rest of the world.

Regional boost from renewable energy
The Climate Institute commissioned leading energy and industry specialists to model the opportunities and to talk to regional business and community leaders, not only to see what extra opportunities exist but also to see what else is necessary to turn this opportunity into reality. This research, from Sinclair Knight Merz-MMA and Ernst & Young, is titled Clean Energy Jobs: Regional Australia.2 This recently released research shows the largely untapped energy resources that Australia has in geothermal, large-scale solar, bio-energy, hydro, wind and natural gas. The modelling shows that by 2030, close to 43% of Australia's electricity could be produced from clean energy, up from around 12% today. Regional analysis shows that greater proportions of renewable electricity are attainable with extra policies and focus. Furthermore, the report commissioned by the Climate Institute states that reducing greenhouse gas emissions by 25% could boost regional Australian employment by 34,000 jobs.2 Climate Institute Chief Executive, John Connor, said "It shows that clean energy projects could provide an economic foundation to support strong regional populations".

With this potential change in legislation, smart investors should look at mitigating their investment risk by selecting companies and investments that are actively addressing their carbon use and climate change risks. Investors can benefit from the investment performance of companies that focus on sustainable business practices.

Investment opportunities
There are many listed companies that are actively reducing their carbon footprint and addressing carbon risk. A few examples of these companies are highlighted below. QBE Insurance is a signatory to ClimateWise in Europe, which provides a framework for insurance companies to build climate change into their business operations; it also participates in the US NAIC Insurer Climate Risk disclosure each year. Certain smaller companies, like Blackmores, also take carbon risk seriously: the company's headquarters, opened in 2009, has a carbon footprint one eighth of that of a comparable development, along with a range of additional environmental features. In addition, they actively reduce product packaging and run an LPG vehicle fleet. It will be increasingly important for all companies to proactively address their carbon footprint. As the Responsible Investment Association Australasia (RIAA) states, "…."

Investment risks
There are certainly a number of companies that stand to suffer reduced valuations in a low carbon environment. Deutsche Bank analysts recently reported that the hardest hit companies, in terms of net present value, will be Virgin Blue, Caltex and Alumina. BlueScope Steel, Qantas, Origin and OneSteel are also high on the list. A report by Morgan Stanley Smith Barney analysts also suggests that the cement, aluminium, oil refining and steel industries would be impacted under the proposed carbon tax.3

All prudent investors should review their portfolios in light of the changing carbon environment. Smart investors need their portfolios to be future ready.

Karen McLeod CFP® is an Authorised Representative (242000) of Ethical Investment Advisers (AFSL 276544).

References
1. Mercer, Climate Change Report, February 2011.
2. Sinclair Knight Merz-MMA and Ernst & Young, Clean Energy Jobs: Regional Australia, commissioned by the Climate Institute.
3. "Carbon Scheme Impacts on Heavy Industry: Valuation Scenarios (NPAT, EBITDA & NPV) for Miners & Materials", 2 March 2011, analysts Elaine Prior and Felipe Faria.

Image credits: p43, "Power Cables" by woodleywonderworks; p44, "Going Green" by ManoharD. Images licensed under a Creative Commons Attribution 2.0 Generic Licence.

Disclaimer: Ethical Investment Advisers (AFSL 276544) has been certified by RIAA according to the strict disclosure practices required under the Responsible Investment Certification Program.

Risk Issues Unique to Senior Physicians: When Age Matters

In the words of Mark Twain, 'Age is an issue of mind over matter. If you don't mind, it doesn't matter.' Unfortunately, your age matters a great deal to the actuaries of insurance companies, who determine the sorts of financial protection products that are available to you as well as what you will have to pay to obtain them. Similarly, the Australian Tax Office and the government authorities governing superannuation place great importance on your age, sometimes to your advantage. This article has been written to inform medical specialists about a number of pertinent risk insurance issues and opportunities unique to those aged 50 and over.

Focus Shifts Away from Baby Boomers
Bernard Salt places the Australian baby boom between 1946 and 1961. By his account, the youngest Baby Boomers are turning 50 this year. While pre-retirees tend to be the largest consumers of financial planning advice, risk insurance solutions for this demographic tend to get more restrictive and more expensive.
It would seem that insurers welcome those in their 30s and 40s with open arms, yet those above 50 are given the cold shoulder. This is seen in the forced expiry of policies, the inability to even apply for new cover above a certain age and the rapid increase of premiums, creating a risk protection maze that is extremely difficult to navigate.

Policy Restrictions
Insurers lawfully discriminate on the basis of age, as seen in the vastly different premiums and options available to, say, a 55-year-old compared to a 65-year-old. Industry data confirms that over 95% of income protection policies in force today are due to expire when the policy holder turns 65 or sooner. And yet, it is increasingly common for medical specialists to work well beyond this age. This may be due to financial necessity, but even where wealth is sufficient to provide for a comfortable retirement, protecting one's ability to earn an income remains a high priority for those choosing to work. While certain income protection policies can be continued to age 70, there is little awareness of this and they generally need to be put in place prior to age 60. To their dismay, many specialists over 60 discover this only once they have passed this age. As one medical specialist in his 60s recently told me, 'I'll be supporting my children for a long time to come and I can't possibly afford to stop working just yet'.

Finding Solutions
Aside from forward thinking and taking a pro-active approach, there are still a number of avenues to pursue for those wanting cover beyond age 60 or 65. A handful of insurers will offer new Income Protection policies to those over 60. Age 63 is currently the maximum entry age for Income Protection across the wider Australian market. We have also had success in negotiating for insurers to make special exceptions on a case-by-case basis. Another approach is to look at Critical Illness cover - also referred to as Trauma - as an alternative to Income Protection. The maximum entry ages for this sort of cover are generally more favourable than for Disability cover (including both TPD and Income Protection). Additionally, most new policies on today's market now expire at 70, compared to 65, which was the standard expiry age until quite recently. While Critical Illness cover won't generally pay benefits for short-term injuries (such as bone fractures) or disabling sicknesses such as pneumonia, lump sum benefits are payable on events such as cancer, heart attack and stroke. Not only is Critical Illness more accessible for those approaching (or over) 65, it is often a much more sensible way to maximise your benefits. To illustrate, take a medical specialist aged 62 who undergoes bypass surgery which results in being off work for two months post-op. Having an income protection policy of, say, $20,000 per month and a 30-day waiting period would typically result in a single payment of $20,000 (fully taxable). By comparison, a trauma policy may provide a tax-free payment of $500,000. Importantly, this payment is made regardless of whether the specialist is able to return to work, chooses to take time off or even commences an early retirement.

Policy Definitions Can Become More Restrictive With Age
Total & Permanent Disability cover (TPD) typically provides a payout when the person insured can never work again. It is a little known fact that policies typically have an 'Auto-Conversion' clause as standard.
Consequently, at age 65 (and with some policies, at age 60), the cover becomes much more restrictive as it converts from an 'occupation-specific' definition of total and permanent disability to the much harder to claim 'loss of independence' definition. Under most policies, a physician who had become blind would be unlikely to claim on TPD after age 65. Thankfully, the Auto-Conversion clause on some policies enables 'Own Occupation' cover to continue until age 70, rather than age 65 or 60. Another area in which cover can become 'diluted' pertains to the partial benefits payable under Critical Illness. Payouts for events such as loss of sight in one eye, loss of hearing in one ear, as well as early-stage prostate or breast cancer, are only available for those under age 60. Again, selecting the right insurers means that such events can be covered to age 70.

Benchmark of maximum ages for new cover and policy expiry:
Income Protection: maximum age for new cover 60 (industry standard) or 63 (latest on market); policy expiry age 65 (industry standard) or 70 (latest on market).
Total and Permanent Disability (TPD): maximum age for new cover 59 or 64; policy expiry age 65 or 70.
Critical Illness (Trauma): maximum age for new cover 60 or 67; policy expiry age 65 or 70.
Life Cover: maximum age for new cover 70 or 78; policy expiry age 99 in both cases.

Consider Owning Cover via Super (But Beware)
Australia's superannuation legislation sees many potential advantages arise from owning personal insurances within a super environment. These include tax-deductible premiums as well as the cash flow benefits of paying for covers with funds you are not reliant on for living expenses and debt management. At the same time, super provides a minefield of potential negative outcomes. These include insurance payouts becoming trapped within super and payouts being taxable in the hands of their recipients (as distinct from being tax-free to recipients outside of super). The various pros and cons will need to wait for another article but, in the meantime, it is worth double-checking that you are using super appropriately.

Reduce Your Covers When You No Longer Need Them
Financial vulnerability is often greatest when debts are high, children are young and you are still a long way off from meeting your financial goals. Thankfully, there comes a time in the lives of most professionals where financial freedom becomes more of a reality than a dream. While there may be an emotional attachment to keeping one's covers going, reducing covers may well be an appropriate action to take. With the compounding effect of accepting the CPI increases insurers offer each year, you may also find that your current covers are well more than you ever needed. Before taking a knife to your covers, quality personalised advice should help ensure that you do not 'cut off your nose to spite your face'. For example, you may find Critical Illness cover to be a sensible substitute for a soon-to-expire Income Protection policy. Similarly, you may find that reducing your life cover to an amount less than the other bundled covers (e.g. Critical Illness) results in you paying stamp duty and even higher premiums. Professional advice should also help prevent the tendency to take an 'all-or-nothing' approach, which is likely to see you either uninsured or over-insured down the track.
When Premiums Get Astronomically High
Premium increases of 14% to 16% are not uncommon for those over 60 (based on the actuarial assessment of the increased likelihood of claim as we get older). Premiums that may have started at several hundred dollars per month may have crept up on you over the years to be many thousands per month. Medical specialists with premiums of between $25,000 and $50,000 per annum are becoming increasingly common. Aside from reducing covers to levels that are more reflective of your needs, it is imperative that your covers represent excellent value for money. This is not a suggestion that you pursue cheap over quality but rather that you take a well informed and pro-active approach regarding the best offerings on today's market. The table below shows current annual premiums amongst the major life insurers for a 60-year-old, non-smoking male seeking $1m of cover.

Insurer: annual premium (and % more than the most competitive offering)
AIA: $6,290
Zurich: $6,702 (6.6% more)
Tower: $6,838 (8.7% more)
Macquarie Life: $7,049 (12.1% more)
AXA: $7,246 (15.2% more)
MLC: $7,329 (16.5% more)
Asteron Life: $7,482 (19.0% more)
OnePath (ex ING): $7,520 (19.6% more)
AMP: $7,608 (21.0% more)
CommInsure: $7,768 (23.5% more)
MetLife: $7,931 (26.1% more)

The price variation across the market is quite astounding, especially considering that life cover (paid on death) is the most commodity-like product of all the covers. The more expensive half of the market is on average 20% more than the most competitive offering available. As overpriced as some of the more expensive covers may appear, these may still be relative bargains compared to your current covers. As life expectancy has increased, insurers have been able to reduce their premiums and still remain highly profitable. You might naively think that your insurer will reward your loyalty by reducing your premiums accordingly. Sadly, most often this is not the case. Consequently, life cover premiums that are five or more years old are often 5% to 10% more expensive than the same insurer's offering on new policies. It is therefore of great benefit to revisit the options available to you (even with your current providers).

Aaron Zelman is a partner of specialist risk advisory firm Priority Life.

End of year tax-planning checklist
Top 5 tax-saving tips for physicians

Using the following checklist, most physicians should be able to reduce their taxable income by a substantial amount. We highly recommend systematically working your way down this list with your accountant or financial advisor and seeing which of the following five categories can benefit you.

Summary
1. Defer Income
2. Prepay Tax-deductible Expenses
a. Medical indemnity fees
b. RACP fees, AMA fees
c. Prepay all travel expenses and registration fees for next year's medical conferences
d. Prepay rent for rooms (if applicable) for the 12 months ahead
e. Prepay staff bonuses, superannuation (if applicable)
f. Purchase any education-related expenses (e.g. medical books)
g. Purchase any work-related equipment including laptops, software, mobile phones, PDAs
h. Prepay interest on business-related loans (e.g. car loans, equipment finance, rooms fitout finance)
i. Prepay marketing expenses including website development, printing, advertising costs
j. Prepay the medical group fees (if you are an associate in a group, it may be possible to pay an estimated 'service charge' in advance for the whole financial year ahead)
3. Scrap obsolete equipment to bring forward the unused depreciation value. For physicians this would include anything from old ultrasound machines, ECG machines and stethoscopes to old examination couches.
4. Personal and Spousal Superannuation Contributions
a. Up to $25,000 concessional (under 50 years of age)
b. Up to $50,000 concessional (over 50 years of age)
c. Non-concessional contribution of up to $450,000 in any three-year period
d. Spousal superannuation contributions
5. Prepay interest on investment loans used to purchase investment property or shares.

Physicians are an extremely variable bunch when it comes to income and expenses. The expenses are highly dependent on your subspecialty, the number of staff you employ and the location of your rooms. In addition, the income generated by physicians is highly variable both between subspecialties and within any single subspecialty. Below is a worked example of a physician in a metropolitan solo practice with a single full-time equivalent staff member (receptionist/practice manager) as an employee.

Work-related deduction: typical value (annualised)
Medical indemnity: $15,000
Annual rooms rental: $40,000
Conferences/CME (books, travel, conference registration, hotels, courses): $15,000
Work-related travel (car, petrol, insurance, maintenance, loans): $20,000
Income protection insurance premiums: $10,000
Employee-related costs (wages, superannuation, training): $60,000
Rooms running costs (stationery, lighting, heating): $10,000
Professional subscriptions and memberships (RACP, AMA) plus telephony and IT costs (internet, phones, practice software, hardware, website hosting): $8,000
Total: $178,000

The categories are more or less standard for most physicians, but the actual costs will be highly variable depending on your subspecialty, how the rooms are shared with others and the location of the rooms. Based on these typical figures, there comes a point in the financial year at which you have earned enough money to pay for all these professional expenses. Anything after this belongs to you (and the ATO). Then, later in the year, there comes a new point where you have earned enough to cover all the expenses plus a further $180,000, taking you to the top end of the 37% tax bracket. All income earned from this point onwards (i.e. above $180,000 of taxable income) will be taxed at the highest rate of 45% (plus the 1.5% Medicare levy).

The dilemma that now arises is that, after this point, the incentive to continue working hard is limited, because for every dollar you earn, almost half (46.5%) goes towards tax. One (short-term) solution is simply not to submit any more patient bills until the following financial year. However, this usually means you have merely deferred your tax liability until the next financial year, so it is not really a well thought out strategy.

In reality, the only way to circumvent this issue is to build your wealth (the asset value of the possessions you own) in a low-tax environment. So how do you achieve this? Here is a quick and simple summary. If a typical good investment generating 10% growth per annum is owned by an investor with a marginal tax rate of 46.5%, and all the return from the investment is taxable, then the after-tax rate of return will be 5.35% of the original investment (calculated as 10% x (100% - 46.5%) = 5.35%). If we assume inflation is approximately 3%, the real post-tax return (i.e. after inflation) will be 2.35%. That reward is not enough for most of us to consider taking on any investment risk. Why would you want to take any form of risk when a 'good' return will only deliver a meagre 2.35% after tax and inflation? Maybe it's better to simply take some time off work, work fewer hours, go on holiday or purchase a new car?

So how do you avoid your investment returns being destroyed by tax and inflation? You cannot do much about inflation, although staying in growth assets and avoiding non-growth assets is one way to overcome it. (Growth assets are items such as shares and property which provide investment returns usually in excess of inflation. Non-growth assets are items such as fixed-interest securities or cash deposits whose returns are eroded by inflation.) What you can do is reduce the tax applied to your investment returns by following these three simple rules:
- Choose investments where the return is either tax-free or concessionally taxed.
- Choose tax-efficient legal structures, or combinations of legal structures, i.e. legal structures that are taxed at a rate less than the top marginal individual tax rates: use companies, trusts and SMSFs (self-managed super funds) to hold your investments.
- Use concessional (i.e. deductible) super contributions and non-concessional (i.e. non-deductible) contributions.

Here are the typical investment strategies we would offer physicians to accumulate wealth through tax-savvy investment plans.

Superannuation fund
Building up wealth and assets in your SMSF is a great opportunity for nearly all physicians. This money is concessionally taxed when it enters the super fund, capital gains are taxed at a modest rate of 10% and income generated from it is taxed at 15%. Superannuation, especially an SMSF, because of the additional freedom to choose your asset classes, is an excellent way to accumulate wealth in a low-tax environment.

Principal place of residence
The appreciation in value of your personal home is 100% Capital Gains Tax free. Pay off your mortgage and any rise in value over the years is yours to keep, untouched by the tax-man. Make sure you buy a home, structure your debt so you can pay off your home as early as possible, and then reap the rewards of any rise in value of this asset being CGT free.
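To make the tax drag on returns concrete, here is a minimal, purely illustrative sketch in Python (an added example, not part of the original checklist and not financial advice). It simply applies the 46.5% marginal rate, the 15% super fund income rate and the 3% inflation assumption quoted above to the same 10% gross return; the function name and figures are illustrative assumptions only.

def real_after_tax_return(gross_return, tax_rate, inflation):
    # After-tax return first, e.g. 10% x (100% - 46.5%) = 5.35%,
    # then subtract inflation (the same simple subtraction used in the article).
    return gross_return * (1 - tax_rate) - inflation

gross, inflation = 0.10, 0.03
print(real_after_tax_return(gross, 0.465, inflation))  # about 0.0235, i.e. 2.35% at the top marginal rate
print(real_after_tax_return(gross, 0.15, inflation))   # about 0.0550, i.e. 5.5% on income held inside super

The gap between those two results is the whole point of the strategies that follow: the same gross return kept inside a concessionally taxed structure loses far less to tax each year.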
Negatively geared investment properties Buying investment property can be a highly tax-effective means of reducing your taxable income and building up your asset base. Prepaying interest on the loans and also having an accelerated depreciation schedule for the investment properties allow you to maximise your taxdeductions in the immediate years. The growth in value of the properties remain untaxed until you choose to sell them (which will often be after retirement) Physicianlife 53 B U S I N E S S & F inance Similar to buying negatively geared property, you can buy shares in your own name or an investment entity . or in financial year when you have lower earnings (e.g. a sabbatical year) or even in a financial year when you may incur a business loss (e.g. a share market crash). Even then, when you do dispose of the properties, they will be entitled to a 50% exemption on any CGT (if they have been held by you for over 12 months) making this a brilliant way to reduce your taxable income in the short-term and accumulate wealth over the long-term. Geared share investments Similar to buying negatively geared property, you can buy shares in your own name or an investment entity (we frequently set up an investment trust entity for many of my clients). The option is there to use margin lending to gear your share portfolio. The interest for these loans for the year ahead can be prepaid making this an excellent way of bringing forward tax deductions. In summary, as a physician, your income is likely to always be high and substantially above your costs of living. The problem with working harder to generate a higher income is that this now increases your tax burden and becomes a case of diminishing returns. your private practice is, but it depends on how much you save, invest and shield from tax. ďƒ¨James Clyne is the accounting partner and Adam Faulkner is the medical wealth strategist at MEDIQ Medical Financial Services. Your success in building up wealth during your working lifetime depends not on how hard you work, not even on how lucrative. 54 Physicianlife Financial Planning Home Loans Medical Indemnity Fit-out Finance Medical Practice Strategic Consulting MEDIQ Financial Planning is a Corporate Authorised Representative of Synchron AFSL 243313 SMSF The Rise and Rise of Natural Medicine Dilution of the words ‘doctor’ and ‘medicine’ In recent years, there has been widespread proliferation of books and courses relating to so-called ‘natural’ or ‘alternative’ medicine. Practitioners of these arts are everywhere. There are naturopaths, osteopaths, traditional Chinese medicine practitioners, chiropractors, iridologists, reflexologists, and many other kinds of –ologist, -path and -practor as well. 56 Physicianlife O ne type of natural medicine is homeopathy. Homeopathists believe that the activity of pharmacologically active substances is enhanced by dilution. A typical homeopathic remedy is diluted to one part in 10 to the power 60, which makes it extremely unlikely that any molecules of the original substance remain in the bottle. Who is allowed to call themselves ‘Doctor’? There have been changes to the law that have allowed some of these people to call themselves ‘Doctor’. Previously, the RISK MANAGEMENT Specialty Colleges Alternative practitioners have, of course, pounced on the opportunity to market themselves by putting the word ‘Doctor’ in front of their names. This cheapens the very word ‘Doctor’. 
There is a ‘College of Natural Health’ across the road from Melbourne Central train station, which hands outs ‘degrees’ and operates a ‘clinic’. The clinic has professional sign writing, glossy fullcolour pamphlets and cubicles wherein professionally dressed people dispense medicines of various types and usually at considerable expense. There is also an Australian College of Natural Therapies in Brisbane and Sydney. Lay people might have trouble telling the difference between the Australian College of Natural Medicine and, say, the Australian College of Emergency Medicine. They may not understand that natural medicine is outside the circle of traditional medical specialties. The first college is not from the same tradition as the second – it’s something completely different. Its foundations are not scientific, and its practitioners are proud of this. Yet they have named their college similarly to the scientific ones in order to give the appearance that they belong in the same category. Private patients and insurers are willing to spend money on alternative health care word ‘Doctor’ was restricted only to practitioners of conventional medicine and dentistry. Even within the medical profession, use of the word ‘Doctor’ is restricted. South of the Murray River, most surgeons call themselves ‘Mister’, or (for the handful of female surgeons) ‘Miss’ or ‘Ms’. Alternative health care has gained considerable legitimacy in the eyes of the public. Most private health insurance schemes offer their customers access to alternative health care in addition to conventional (for want of a better antonym to the word ‘alternative’) health care. Doctors have to compete with alternative health for the finite amount of money that private patients and their insurers are willing to spend on health care. Alternative practitioners have, of course, pounced on the opportunity to market themselves by putting the word ‘Doctor’ in front of their names. This cheapens the very word ‘Doctor’. An art such as osteopathy is to doctoring as Scientology is to science. Osteopathy is not medicine; it is a belief system. So far, Medicare and the state governments do not insure citizens for visits to an alternative health care practitioner, but it may happen one day. With all this in mind, it seems that the boundary between conventional medicine and alternative medicine is blurring. Some conventional medical practitioners are happy to refer to alternative health care as ‘complementary’, implying that it is compatible with conventional medicine. Conventional medicine practitioners should protect their territory! But alternative medicine is not compatible with conventional medicine. Conventional medicine is (or should be) based on empirical observation. It is possible to demonstrate conclusively, that antidepressants, beta-blockers, cholecystectomies and topical steroid creams work. The very basis of alternative medicine is that it rejects empirical observation. It rejects science. Just as the homeopaths would dilute medicine to the point where it is no longer effective, the reputation of conventional medicine is in danger of being diluted by its unwanted association with alternative medicine. The unique features that attach to being able to call oneself a ‘doctor’ will dissolve if doctors do not actively protect their territory. Doctors have to compete with alternative health for the finite amount of money that private patients and their insurers are willing to spend on health care. 
Dr Richard Cavell This article is based on Dr Cavell's “What does it mean to be a doctor?” (2007) Panacea. Physicianlife 57 RISK MANAGEMENT Whistleblowing in the australian heaLthcare system Ethics, professionalism and healthcare management 58 Physicianlife RISK MANAGEMENT T here have been four recent, memorable examples of whistleblowing in the Australian healthcare system, involving three States and one Territory1,2. Perhaps the most famous episode is the Bundaberg Hospital disaster in which Chief Surgeon, Jayant Patel, was accused and convicted of manslaughter for planning and then undertaking the operations on several patients treated at the Bundaberg Base hospital. In this event, a senior intensive care nurse, Toni Hofmann, had already raised concerns about the quality of surgical care within the hospital. Despite involvement of the senior general and medical management of the hospital little was done to curtail the dangerous operations until a State MP was contacted, who raised the concerns within Queensland Health and publicised the shortcomings of the surgical service through a Brisbane Newspaper. While there has been significant publicity of Jayant Patel’s subsequent flight to the United States and his fight to avoid trial in Queensland little publicity has followed the heroic whistleblower, Toni Hofmann. Despite her excellent qualifications and work record Toni Hofmann has not worked as a nurse since the affair was first made public and she must be identified as one of tragic victims of Dr Patel. Interestingly she was undertaking a Masters degree in Medical Ethics at Monash University when the Bundaberg tragedy was unfolding. Other whistleblowers in the Australian healthcare system have fared equally badly. Gerald McLaren, a conscientious rehabilitation physician at the Canberra Hospital, noticed a number of severe complications occurring in the work of one of his neurosurgical colleagues. When he tried to raise these issues with the neurosurgeons as a group and the hospital management he was initially unsuccessful and was forced to ask the ACT ombudsman to inquire into the issue. Unfortunately even under this independent inquiry the medical professionals felt unable to fully comply with the ombudsman and even ignored their statutory duty to cooperate with a lawful investigation, which the ombudsman duly noted in his report. Despite the fact that a subsequent inquiry by the ACT Medical Board found that the surgeon had failed in his ethical duties and removed his name from the medical register, Dr McLaren was forced to resign from the Canberra Hospital and has not been successful despite applications for permanent positions in other hospitals, representing a second, well-qualified victim of poor practice and whistleblowing in the Australian Healthcare system. The nurses that raised their concerns about clinical practice at the Camden and Campbelltown Hospitals have been completely vindicated by a subsequent inquiry that judged clinical standards to have been poor and the service provided to patients to have been below an acceptable standard. A redesign of health services in the area has followed and care for patients is now of a much higher standard. However, the failure to acknowledge the nurses as heroes or heroines of the event has led to allegations of harassment and criticism when they returned to work such that some of them do not now practice nursing and others have moved jobs and hospitals to avoid retaliation for their noble actions. 
Again, these are more victims of Australian healthcare: part of a group courageously trying to improve services to patients and prevent patient harm at all costs, even if that cost was their own career.

The final case, from Western Australia, involved a senior manager at the King Edward Hospital in Perth, who raised concerns about clinical standards in the hospital collected through the normal reporting systems. The response of the healthcare profession was vigorous and defensive. Even when a commissioned report identified improvements to the service that could be achieved, and recommended changes, the profession still chose to blame the CEO, who had raised the concerns in the first place. He resigned but obtained work as a healthcare manager in another State.

What, then, is the problem in Australia? It is the same problem as in the NHS, where a similar episode led to a reduction in mortality from 30% to <3% in about three years (see Figure 1), but the whistleblower had to leave the UK and settle in Australia; and the problem is as deeply rooted in Australian public hospitals as it is in the UK.3 I suspect the issue lies in the secrecy of the healthcare professions, particularly the medical profession, in not wanting to admit that complications or deaths may be attributable to poor care instead of patient-related factors. This fear is partly generated by the medical indemnity organisations, which encourage doctors not to criticise colleagues to patients, and certainly not to publicise high complication and mortality rates, for fear of the financial consequences of medical indemnity payments to those patients harmed. In the UK this has led medical students to achieve a <5% level of reporting of poor care when asked about reporting in clinical scenarios at the completion of medical training.4 The authors blamed the hidden curriculum for this transformation of reporting, but it was noteworthy that only 13% of students would have reported poor care at the commencement of training. The power of the profession to select and achieve the norms of behaviour that endorse secrecy seems to be unassailable.

The recognition in the United States, in the Institute of Medicine report 'To Err is Human: Building a Safer Health System', that systemic errors in healthcare were responsible for the expenditure of between US$17 billion and US$29 billion per annum has led to significant efforts to improve healthcare safety.5 (There is no evidence that Australian healthcare is any safer than the US. This means that the cost of systemic error in Australian healthcare is likely to be Au$1.5-2 billion each year we don't make it safer.) These efforts in the US have been coupled with the innovative 'open disclosure' policies developed by Steve Kramman and Ginnie Hamm in Kentucky and translated into significant savings in a Michigan public hospital.6 The problem with publishing this data was such that it was published with non-medical authors (Clinton H & Obama B) in the New England Journal of Medicine (Figure 2).

So the conclusion from this work is that reporting incidents requires improved ethical behaviour. However, this improved ethical behaviour improves healthcare safety. Open disclosure, also indicative of improved ethical standards, reduces legal payments in the US. Then, finally, there is evidence that Australian trainees will report 97% of the incidents in their work if provided with appropriate tools.7 Thus correct behaviours can be achieved, using technology that is attractive to generations X and Y, underpinned by improved ethical standards. I would like to think that the encouragement of improved ethical behaviour should be a key goal of medical education, in order to cement this improved capacity in future healthcare systems. The key requirement is that this behaviour must come from the medical profession, because it cannot and will not come from management and administration, whose goals are financially and organisationally aligned. The bottom line is that it is down to doctors, and the sooner we achieve it the less our healthcare system will cost us.

Figure 1. Mortality for paediatric cardiac surgery in Bristol and other UK centres, 1991-2001.3 [Chart showing mortality (%) at Bristol and for 11 UK centres combined, together with the total number of procedures for the 11 centres combined.]

Figure 2. University of Michigan Risk Management Program: legal costs, claim numbers and duration in the first four years after introduction of the 'Open Disclosure Program'.6 Annual litigation costs fell from $3 million to $1 million, the average time to resolution of claims and lawsuits from 20.7 to 9.5 months, and the number of claims and lawsuits from 262 to 114, between August 2001 and August 2005.

Dr Stephen Bolsin is an Adjunct Clinical Professor of Perioperative Medicine at Monash University and Specialist Anaesthetist at Geelong Hospital. Dr Mark Colson is a Specialist Anaesthetist at the Geelong Hospital.

References
1. Faunce T, Bolsin S. Three Australian whistleblowing sagas: lessons for internal and external regulation. MJA 2004;181:44-47.
2. Van Der Weyden MB. The Bundaberg hospital scandal: the need for reform in Queensland and beyond. Medical Journal of Australia 2005;183(6):284-285.
3. Spiegelhalter DJ. Mortality and volume of cases in paediatric cardiac surgery: retrospective study based on routinely collected data. BMJ 2002;324(7332):261-262.
4. Goldie J, Schwartz L, McConnachie A. Students' attitudes and potential behaviour with regard to whistle blowing as they pass through a modern medical curriculum. Medical Education 2003;37:368-375.
5. Kohn CT, Corrigan JM, Donaldson MS. To Err is Human: Building a Safer Health System. Washington: Institute of Medicine; 1999.
6. Clinton HR, Obama B. Making Patient Safety the Centrepiece of Medical Liability Reform. N Engl J Med 2006;354(21):2205-2208.
7. Freestone L, Bolsin S, Colson M, Patrick A, Creati B. Voluntary incident reporting by anaesthetic trainees in an Australian hospital. International Journal of Quality in Health Care 2006;18(6):452-7.

So you want to build your first practice? The things you need to know and do

Have you walked past the ideal location, or just got out of bed with a voice in the back of your head urging you to go out and start up on your own? Well, if you have, you are not alone. "So what do I do about it?" you ask. Essentially, no one project is the same. Therefore, it is paramount that you do a small amount of basic research before plunging into what should be both an exciting and pleasurable experience, and to ensure a smooth start to your practice. Depending on your area of speciality, two to four consulting room practices can easily be accommodated within 80 to 130 square metres of floor space. Before settling on a location there are a number of other factors that need to be considered.
The relevant local council should be your first point of contact and they will be able inform you of the various conditions which must be met in order to obtain the necessary planning approval. For instance, will they allow a medical practice to operate at this location? Does it require you to provide extra car parking before granting planning approval? A property professional such as an independent advocate can assist you through the buying or leasing process. You may already have a good idea of how you would like your practice to look and the space you need to work in. With all this in mind, designing your own practice may seem like an attractive option and for Physicianlife 61 CAREERS some of you with the time and expertise it may also be a good idea. However, there are many design factors you need to consider that you may be unaware of. Factors such as disability access, energy efficiency compliance, infection control and the level of documentation required by contractors are all areas which require investigation to ensure a good outcome and are usually best handled by someone with the inside know how. By ensuring that you have allocated a realistic budget and come up with a detailed brief of what you want and need will be an invaluable step if you’re planning on dedicating the job to a designer. Good design is the basis of a successful outcome. Your nominated designer will exercise due diligence through a thorough site assessment and space planning to demonstrate that the proposal will be able to satisfy any council conditions for planning approval, The adage "a failure to plan is a plan to fail" holds particular relevance to practice design. Combined design and construction firms have the advantage of being able to more accurately estimate the cost of a project at the concept design stage, before you embark on the next stage of the process documentation. These estimates are usually 90% correct and are provided at no extra cost. Ideally, as concept design is nearing completion you should finalise any specialist equipment requirements with your supplier. This is critical to the success of the documentation stage. Importantly this information allows the practice any apparent cost savings. If you choose to use a combined design and construct firm, once the documentation is completed, then you do not have to do much more than wait for the. and allow you to proceed with your purchase or lease. The adage "a failure to plan is a plan to fail" holds particular relevance to practice. 62 Physicianlife Be aware that taking the traditional approach of using one office for the design and contracting another to do the construction as it can introduce errors or cost. For instance, if it’s not on the drawings it may then be regarded as a variation, which you may have to end up paying for. With the design and construct option it is the responsibility of the firm to get it right the first time. Sure there may be variations in the future, but these more than likely will be items, introduced by you, during the course of construction, or factors beyond the control of everyone involved. So, do you still want to open your first practice? Of course you do! Nathan Reid is the marketing manager for Medifit, who offer comprehensive design and construction services to medical specialists throughout Australia. BEST PRACTICE As medical fitout specialists, best practice is something we take very seriously. 
Whether you require ground-up design and build project or a transformation of your existing surgery, Medifit follow industry best practices and have developed systems to ensure consistent excellent results, on time and on budget. Wherever you are in Australia, Medifit will bring your vision to life and create the operating environment that your patients and staff deserve. To join our large and ever growing portfolio of happy clients, call us today on (08) 9328 8349 or visit our website at SPECIALIST MEDICAL EXPERIENCE COMPLETE TURN KEY SOLUTIONS DESIGN EXCELLENCE EFFICIENT & OPEN COMMUNICATION EXPERT PROJECT MANAGEMENT A COMMITMENT TO BEST PRACTICES (08) 9328 8349 Physicianlife ALPHA The doctor's preferred tablet iPad 2 I t’s hard to imagine that little under a year ago, Apple unveiled the iPad to the world, with the tag-line “...a magical and revolutionary device”. The product launch was met with its usual collection of critique and praise from the technorati and fans alike. However within a year, the fastest selling consumer device ever to be sold has undergone a complete re-design, offering more and without a price increase – quite an achievement considering the closest competitors have yet to release their first device. The most common reaction to the launch of the original iPad was “I don’t really need one...” With the mass-migration to smartphones and laptops in the last five years, most users have all of their computing needs satisfied, without a need to introduce a third device. Despite this, the iPad went on to sell 15 million units worldwide (making it the fastest selling consumer device ever) and has stirred responses in industry from the likes of Google, Blackberry and Microsoft. So why has the interest in tablet computers grown, and how does the new iPad 2 fit in this new world? 64 Physicianlife Emotions of a device When you pick up the iPad 2, the first thing you notice is how thin it is. It’s easy to forget that you are holding the computing power of your desktop five years ago on a device smaller than a magazine. The iPad 2 is 33% thinner than its predecessor and puts it thinner than the iPhone 4. 33% may not seem much, but at the 3 to 5mm levels, this is quite noticeable. It’s also lighter, making the device more portable than before. Carrying a laptop seems obsolete in comparison. Yet the real charm of a tablet is the intimacy it brings. You don’t need to interface using a mouse or a keyboard, it’s all touch based. The ergonomics of touching a screen that you hold is natural and easy for anyone to pick up and use. And when sharing content on your screen with others, the iPad 2 makes it easy to show someone what you’re talking about or just pass the device over. Something that a laptop just can’t do. The very design of a laptop to lift up the screen, creates a ‘wall’ between you and the world – making it less intimate than before. This simple but important design shift will have implications when interacting with patients. The nuts and bolts With the new A5 processor from Apple, the iPad 2 is noticeably snappier than the previous model. Web pages load quicker than before, the screen flows are smooth and lag-free, and the apps load almost twice as fast. With over 65,000 apps designed specifically for the iPad, and access to a further 350,000 iPhone apps (that can run on the iPad, but at a lower resolution) – there is enough opportunity to download and install any conceivable need. 
As of when this article was written, there are currently 1,404 medical apps, written for the iPad in the Australian app store. There are countless apps for medical handbooks, tools to help with eye exams or even human atlases, just to name a few. The iPad 2 now supports a front facing camera, which is useful only if you wish to video conference on the device. Apple includes its FaceTime app pre-installed, however services from others like Skype should follow soon. Battery life remains ALPHA at the industry leading 10 hours (video) or 1 month standby, which again is an accomplishment given the size decrease and speed increase. Additionally, Apple now provides you the option to buy the new Smart Cover for the iPad 2. Coming in either polyurethane ($45) or leather ($79) in a variety of colours, the cover latches on through the use of clever magnets and can be bent in a variety ways to make the iPad stand, sit and go to sleep. A worthy investment especially, if your iPad goes travelling or is being used by others. The iPad 2 comes in the either WiFi or WiFi+3G, sporting sizes of 16 GB, 32 GB or 64GB. New to this release is the option of the white model (in addition to the black model, with both having a metallic back) brings a total of 12 available models. Choices, choices… “We are finding with the iPad, that doctors are spending more time with patients, in fact doctors are engaging patients by showing them images, showing them data on the screen,” he added. “So it is empowering doctors to be more productive. But it has also brought doctors and patients together...what is so exciting about the iPad is that it will change the way doctors practice medicine,” Halamka concluded. Although the iPad 2 has minimal updates compared to the previous model, it may be just enough to bring the would-be-buyers to the table. It still lacks some muchneeded features like the high-resolution retina display (currently on the iPhone 4), and a USB port, yet this may not be deal-breakers for some. For the current iPad owner, upgrading may not reap the benefit you’d expect, but for the first time buyer, the iPad 2 makes the whole tablet experience even more engaging. A tablet for doctors With a company like Apple trying to find new ways to improve the doctor-patient interaction, the future for the iPad as a tool in the health professions looks bright. Dev Sharma, technology enthusiast Apple introduced the iPad 2 with a special event to highlight its use in the medical sector. John Halamka, a spokesman for an American medical centre said: It’s clear from this event that Apple has the medical sector firmly within its sights. ...what is so exciting about the iPad is that it will change the way doctors practice medicine Physicianlife 65 Wine Rules! the matching game I have a funny relationship with rules - I’m far too mindful of them (I wish it were otherwise). I put it down to my upbringing… T Ihere used to be rules for everything – from when not to swim (within an hour of a meal), to what not to wear (red and green should not be seen without a colour in between; and of course busty girls should avoid horizontal stripes at all cost). But try telling that to your well-endowed Rabbitohs supporter. There were also rules for what not to eat (green apples – they give you the runs), and what not to drink (Adelaide tap water – same reason). Sorry Adelaide, I’m talking 50 years ago - I’ll bet it’s beaut now. 
66 Physicianlife LIFESTYLE No-one likes rules: they always start with words like Don’t and Never, and who wants to see those words ever applied to wine? The great news is that the number one rule of food and wine matching is: There are no rules. Woo hoo… wine anarchy!! Well… not quite. But it is time to throw out those old notions about white wine for seafood, and red wine for meat. In any case, this is the era of Master Chef – there’s a celebrity chef in every kindergarten in the country. With such an extraordinary level of culinary sophistication, it’s time to get a bit more savvy with our food and wine matches. The thing that distinguishes wine from all those other beverages that make us witty and attractive is its extraordinary ability to partner with food. When it comes to food and wine pairing, you don’t need a great wine, you need the right wine. But here’s the catch: it has to be the right wine for you. So your first step is to work out what you like. Because however dreamy your celebrity chef thinks the aged Riesling will be with his chargrilled eggplant, if you don’t enjoy Riesling (or eggplant for that matter) it won’t be a memorable meal. But even for a law-abiding kid like me, some rules were hard to swallow - especially the no swimming after a meal rule. It defied logic. On a blistering summer’s day, you’d pack up the Ford, drive to the coast (with the windows wound up in case a passing car kicked up a stone), search for a beach without a rip but with a picnic area; then unpack the car, throw the old Onkaparinga over the bindies, open the Tupperware and force down a limp salad with hard boiled eggs and warm beetroot. You’d think a kid would deserve a treat after that, but no – every time they’d pull the one hour rule. Torture. So let’s say you’ve put a few toes in the water, and you have a fair idea of the kinds of wine you like. Now it’s time to start experimenting. There are two - equally valid - ways to approach food and wine matching: Contrasts and Complements. The Complementary (like-for-like) approach balances the weight, texture, and flavour of the food with a complementary wine. Think: Sauvignon Blanc with Asparagus risotto; or a spicy aromatic Gewürztraminer with Thai red curry; and, OK –a full-bodied Aussie Shiraz with beef fillet. The Contrast method takes the reverse view, and pitches food and wine from opposite ends of the flavour spectrum, to create balance. Much like the delights of putting maple syrup on your pancakes with bacon, think: stinky blue cheese partnered with sweet, luscious Tokay; or a lively young Riesling with pan roasted scallops in a rich butter sauce. Whether you decide to opt for a complementary wine, or a contrast, what you are striving for is balance. Pay close attention to the key ingredients in your meal. Consider the weight and textures of your choices as much as the actual flavour, and then play on your hunch. And don’t let anyone tell you your choices are wrong. If you love them, you’ve made the perfect match! If you’re not up for experimentation, there are a number of classic food and wine matches, which will stand you in good stead. For example, Oysters with a squeeze of lemon and fresh cracked pepper are indecently good with Champagne or Sparkling wine. Pinot Noir is sensational with duck. Serve your great big full bodied Barossa Shiraz with Venison or Kangaroo, and bring out the vintage Port to serve with hard cheese and dried fruits. Finally, here are a few guidelines (NOT RULES!) 
to help you in your quest to find the perfect food and wine match: • Close your eyes. The colour of your wine is immaterial • All wines go better with company • There is no wine match for All Bran, and there is a reason for this. Gillian Hyde Ten years ago, she made a mid-life career change from Show Business to the wine industry, and today holds the position of Head of Membership at The Wine Society. Physicianlife 67 TRAVEL Having a Whale (shark) of a Time Off the Coast of WA 68 Physicianlife TRAVEL Out of the depths of an indigo ocean a shark the size of a small submarine swims towards me. Its eye the size of a soccer ball regards me with distain. And here I am face to face with the world’s largest sharks — eat your heart out Great White — with nothing but a snorkel for protection. L uckily for me this giant shark is benign. Nevertheless it is disturbingly large dwarfing the hull of the small boat above me and disconcertingly close. I’d come to Western Australia, off the coast of Exmouth because this is the only place on the planet where you can see whale sharks in such numbers — where you see them much at all in fact. Every year from April to July following the mass spawning of coral, the world’s biggest species of fish congregate in the Ningaloo Marine Park. Visitors from Europe and the US pay thousands of dollars to travel to this remote coast to see these extraordinary creatures, yet many Australians don’t even know they exist. The massive fish have a habit of swimming just below the surface so you don’t even have to be a scuba diver to see them. I swim alongside with no more equipment than a mask and my aforementioned snorkel. The sharks don’t necessarily appear on cue. You get in a boat and then wait while small spotter planes, circling like seagulls, search the water for whale sharks. Eight of us are Physicianlife 69 TRAVEL A huge shark, a monster shark, a colossal shark...the BIGGEST shark you will ever see. on the deck of a small boat, most of us are looking up at the plane, not down at the water but before we even have time to stow all our stuff, the radio began to crackle. The spotter plane had seen sharks and they are close, the boat’s engine coughed into life and we shot out through the foam. “Two minutes,” yells out boatman. Action is at fever pitch as we reach for fins for our feet and struggle with snaking snorkels. “OK, go, go GO,” he bellows. Masks snapped over our eyes like visors and with snorkels firmly in our mouths we flap towards the back of the boat and jump out in quick succession. “Get ready”, yells the captain at the wheel. “You’ve got five minutes”. Whale sharks don’t hang around and once they’re spotted you have to get there fast, get in the water and start swimming before they move on. I feel like part of a naval special operations force. “Move it,” the captain yells like the officer in charge. We jump into the water — and into another world. We struggle out of our shorts and into the black wetsuits, it’s not an easy task when you’re sweating in 35 degrees and the boat is listing. My suit suddenly seems to be two sizes too small and my mask is steaming up. From the deck of a boat swimming with the sharks doesn’t look like anything special. Their spotted backs are so well camouflaged that, despite their bulk, when the light reflects off the water you can hardly see them. All you see 70 Physicianlife is a line of fluoro-coloured snorkels bobbing raggedly in the water accompanied by an excited hum as people try to talk through their snorkels. 
From the swimmers’ perspective things are very different. As soon as you hit the water you forget all about the boat. Because there it is, right there with you — a fish the size of a submarine. A huge shark, a monster shark, a colossal shark...the BIGGEST shark you will ever see. And it’s less than 3m from your face. You are drawn instantly, irresistibly into the domain of the shark. All around you is the indigo ocean and whale sharks appearing like giant shadows out of the deep. TRAVEL Nobody knew much about the life of whale sharks until the l970s. Certainly nobody knew they were found off Australia’s shores in such numbers. And it is only in the last ten years that anyone has begun to study them seriously. The creatures are not thought to be migratory. The theory is that they live deep in the ocean only coming up at certain times of the year. Off Western Australia the continental shelf drops away only 20km offshore. This may be why these deep sea dwellers come up so close to land. In most other parts of the world the shelf extends much further out to sea. One possibility is that the sharks come into shallow water between April and the end of June to Their tail fin is twice as tall as your average basketball player (about 4m) and as I swim alongside its head, I swear that a 10-year-old child could play hide-andseek in its giant swaying gills. feed on the coral spawn which is released at that time of year. Wherever it came from, the shark I am seeing is hard to miss. The thing is immense. Whale sharks grow to up to 18m long, that’s longer than a city bus. Their tail fin is twice as tall as your average basketball player (about 4m) and as I swim alongside its head, I swear that a 10-year-old child could play hide-and-seek in its giant swaying gills. My overwhelming feeling is of being in the presence of an ancient, primitive and very alien creature. I have been lucky enough to get almost as close to humpback whales, which are similar in size. Both species are equally awesome but with the whales you get some sense of communication, some feeling that they have a curiosity about you. We swim close by the whale shark for around 15 minutes before it dived deeper but there isn’t a flicker of interest in us — not even as potential lunch. Whale sharks feed on plankton, krill and very small fish, although they do have 40 rows of hand grenade- sized teeth; which seems to me an excessively large number for a shark that only need to sieve plankton. However, Physicianlife 71 TRAVEL Whale sharks feed on plankton, krill and very small fish, although they do have 40 rows of hand grenade- sized teeth the last time they chewed on anything substantial is thought to have been several centuries ago. 72 Physicianlife I don’t see any teeth, just a yawning grey cavern. I pity the plankton. I have time to check out the theory on our next encounter. Waddling to the back of the boat like hysterical ducks — “go, go, go” — we get entangled in each others’ fins and I belly flop into the water; one fin still aboard, throat full of ocean. Hilary Doling is editor in Chief of The Luxury Travel Bible, the world's ultimate destination guide.. com. She has also been in the water with Great Whites -but that is another story. Right below me and rising to the surface fast is a mouth the size of an open garbage truck and there I am dressed in rubbish-bag black. I splash sideways just in time - but not before the fate of Jonah and a similarly large creature flashes through my mind. 
Details: Day tours to snorkel with whale sharks depart daily during the season from both Exmouth and Coral Bay. For bookings contact the Exmouth Visitor Centre on 1800 287 328.
Interested in advertising in this free publication sent to every Australian Physician? Contact Joe Korac on [email protected]. Visit our website and download the media kit for pricing and further information.
Physician Life is a free publication sent to all Physicians in Australia every two months. We hope you have enjoyed reading this edition and have found it contributes to your knowledge, practice and lifestyle. If you would like to continue to receive this publication, please email or fax back the form below. Return to: Fax: (03) 9923 6662, Email: [email protected], Address: PO Box 2471, Mount Waverley, VIC 3149.
Have your say: Physicianlife Readership Survey 2011. We're keen to know your thoughts on Physician Life magazine: what you think, what you like and what you want to see more of. At Physician Life magazine, we're committed to providing the most relevant features, views and analysis on life as a medical professional, and your insights will help us deliver exactly what you want from Physician Life magazine. The online readership survey should take approximately 10 minutes to complete.
http://issuu.com/medicallife/docs/physicianlife_final_v1
CC-MAIN-2016-07
en
refinedweb
Help installing YAFray with Blender (forum: Blender's renderer and external renderer export)
Hi, I currently am using Blender 2.32 from an RPM on RedHat 9.0. I tried installing Yafray 0.0.6 from an RPM and it worked fine. I have tried switching the render options in Blender to use Yafray, although all it comes up with is a black screen, and the console gives me this message: 'import site' failed; use -v for traceback Starting scene conversion. Scene conversion done. No export directory set in user defaults! Yafray found at : /usr/bin/ What does this mean, and how do I fix this?
Re: Help installing YAFray with Blender
jmja89 wrote: 'import site' failed; use -v for traceback
A full Python installation (2.2) is not found. This can be ignored.
jmja89 wrote: Starting scene conversion. Scene conversion done.
Pretty obvious, no?
jmja89 wrote: No export directory set in user defaults!
This means you need to set YFExport in the user defaults (pull down the upper window in Blender, and click on the button for paths).
jmja89 wrote: Yafray found at : /usr/bin/
YafRay is found. There is nothing to be done, except uninstalling (but that you probably don't want, right?)
/jesterKing
https://www.blender.org/forum/viewtopic.php?p=17676
CC-MAIN-2016-07
en
refinedweb
KTextEditor #include <templateinterface.h> Detailed Description This is an interface for inserting template strings with user editable fields into a document. Definition at line 39 of file templateinterface.h. Constructor & Destructor Documentation Definition at line 266 of file ktexteditor.cpp. Definition at line 270 of file ktexteditor.cpp. Member Function Documentation
Parses templateString for macros in the form [$%]{NAME} and finds the value corresponding to NAME if any. The NAME string may contain any non-whitespace character except '}'. - Parameters - - Returns - true if all macros were successfully expanded - See also - insertTemplateText for a list of supported macros. Definition at line 38 of file templateinterface.cpp.
Inserts an interactive editable template text at line "line", column "col". - Returns - true if inserting the string succeeded. Use insertTemplateText(lines(), ...) to append text at the end of the document. Template strings look like "for( int ${index}=0;${index}<10;${index}++) { ${cursor} };" or "%{date}". This syntax is somewhat similar to the one found in the Eclipse editor or TextMate. There are certain common placeholders (macros), which get assigned a default initialValue if the second parameter does not supply a value for them. For all others the initial value is the name of the placeholder. Placeholder names may only consist of a-zA-Z0-9_. - Since - 4.5
If a placeholder is a mirror, the placeholder name may contain additional information: ${something/regexp/replacement/} takes the value of the placeholder something and replaces the match with the replacement before inserting the mirrored value; ${something/regexp/replacement/g} is like the above, but for all occurrences; ${something/regexp/replacement/i} is like the above, but case insensitive. The syntax of the regexp and the replacement are the ones from katepart's regexp search/replace. Possible flags: g and i. Those flags can be combined too. If a literal / should appear in the regexp, it has to be escaped \/, and a literal \ has to be escaped too. If you have mirrored ranges and want another occurrence than the first one as the master, you can add @ directly after the placeholder name.
The interface2 version invokes the function specified by functionName within the script specified by the scriptToken, if a placeholder is specified with backticks like ${placeholder`functionName`}. The function has a global environment containing "view", "document" and "debug", at least in the katepart implementation. The function invocation is not allowed to be mixed with other replacements. If a / or ` replacement is done on a master, the initial value is modified and therefore all mirrored placeholders are affected too; later on, the replacement is not done anymore on master ranges. The parameters for invoked JavaScript functions will be the following: the value of the master (or initial value); still to be done: the placeholder name, and a small wrapper around the template handler (to do more sophisticated things, like adding additional placeholder points, attaching custom properties for state keeping, ...).
Specification of initial values: You can specify initial values which are different from the placeholder name; this makes sense only for $ placeholders, not for %: ${placeholder:some value} or ${placeholder@:some value}. It is not allowed to mix : and /; after the first colon, everything is interpreted as the default value, } in the default value has to be escaped (backslashes before } have to be escaped themselves), and regexp searches are ignored. The : has to be directly after the placeholder name or after an optional @ symbol. Common placeholders and values are - index: "i" - loginname: The current user's login name - firstname: The current user's first name retrieved from kabc - lastname: The current user's last name retrieved from kabc - fullname: The current user's first and last name retrieved from kabc - email: The current user's primary email address retrieved from kabc - date: current date - time: current time - year: current year - month: current month - day: current day - hostname: hostname of the computer - selection: The implementation should set this to the selected text, if any - cursor: at this position the cursor will be after editing of the template has finished; this has to be taken care of by the actual implementation. The placeholder gets a value of "|" assigned. If a macro is started with a % (percent sign) like "%{date}", it isn't added to the list of editable strings (for example, TAB key navigation) if a value differing from the macro name is found. If the editor supports some kind of smart indentation, the inserted code should be laid out by the indenter. Definition at line 248 of file templateinterface.cpp.
You must implement this; it is called by insertTemplateText after all default values are inserted. If you are implementing this interface, this method should work as described in the documentation for insertTemplateText above. - Returns - true if any text was inserted. Implemented in KTextEditor::TemplateInterface2.
DO NOT USE !!!! THIS IS USED INTERNALLY by the interface only !!!!!! Behaviour might change !!!!!!! Definition at line 118 of file templateinterface.cpp.
http://api.kde.org/4.x-api/kdelibs-apidocs/interfaces/ktexteditor/html/classKTextEditor_1_1TemplateInterface.html
CC-MAIN-2016-07
en
refinedweb
I've never seen this on 10.6.5. Can't reproduce. I'm running 10.6.5. Can the poster give more specific instructions? Better yet, can he reproduce it himself on a machine that he hasn't configured. Perhaps there's something that he tweaked that's triggering this. I haven't seen that behavior. Does it happen with all your applications or just a particular one? Never seen this, and I've worked on hundreds of 10.6.5 systems. In which app(s) is this happening? I just tried this with TextEdit and it works normally, or not as described as a bug. When the dialog is in list view, clicking the Save button will indeed open a selected folder and the next Save click (or return) will save in that folder. In column view the last selected folder is already open and clicking Save will save the file in that folder. However, with the dialog in list view, I never see a folder in the dialog being selected by itself. I need to click on a folder to select it. You can first use selecting and clicking the Save button to navigate to the intended folder and then click Save to save the file. I myself usually double-click folders for navigating. I'm on 10.6.5. I followed the recipe given. Hitting return (or clicking Save) saved the file into the file dialog's current folder; it did not open the selected folder. Note: I use Default Folder. Perhaps it does the right thing when the native OS X file dialog wouldn't. That's interesting that with Default Folder you don't see this bug. I wonder if that 'fixes' it. I don't have Default Folder or any open/save dialogue modifications. Just did the same in Safari. Absolutely no problem. Probably for *some* apps only. When I do a "Save as" in Safari, I still see the problematic behavior. When in list view, instead of saving in the opened folder, a selected folder will open when the Save button is clicked. And moving up in the hierarchy using command-arrow up will select the folder you came from, resulting in again opening the folder you came from when the Save button is clicked instead of saving in the folder that was opened. I see this in 10.6.5 and it is only a problem with the dialog in list view, not with the dialog in column view. I do not know if this is a recent change in Snow Leopard. I tried this in 10.4.11 on my PowerBook and there clicking Save will not open a selected folder but will save in the opened folder. In Mac OS 9, when a folder is selected, the Save button changes to 'Open' and one can open the selected folder by clicking that button. In fact the same behavior in Mac OS X 10.6.5 as in Mac OS 9, only now the button does not change from Save to Open when a folder is selected. The problem doesn't appear in 10.5.8 with Safari 4.0.5 nor Mail. I wonder if this has something to do with keyboard access to all controls? I have that enabled. I have just checked, and the bug occurs whether or not "tab access to all controls" is selected in the keyboard preference pane, and whether or not "enable access for assistive devices" is selected in the accessibility preference pane. DRM 1. I can confirm that tab access to all controls does not modify this behavior on 10.6.5, restarting included. 2. It is definitely annoying, especially when I have been working in one folder then need to save a file in the parent folder (so this tip is useful, but it's too bad that it's necessary). 3. The behavior SHOULD be that cmd-S saves in the current folder and cmd-O opens a folder. It is counterintuitive that cmd-S would open a folder. 
Or that pressing return on a save button would open a folder. It's just wrong. 4. Can someone confirm that this behavior was present on older systems? It seems like a recent problem -- I've only become aware of it in the past few weeks. 5. Can someone confirm that this behavior is not present on their 10.6.5 system? If so there must be a preference that controls it. I suppose that people who experience this problem use the Save dialog in list view, while those who do not see the problem use the dialog in column view. The problem (bug) is only manifest when the Save dialog is used in list view. In column view there is no problem because the selected folder is also the opened folder and clicking Save will save in that folder. I run into this all the time. I was able to reproduce it, but it's not really a bug ... if a folder is selected, it means you want to save the file in it. In that case, pressing enter just opens the selected folder and then you can press enter again to save it, exactly where you wanted to. This behavior seems logical to me, even practical.
http://hints.macworld.com/article.php?story=20101227120319896
CC-MAIN-2016-07
en
refinedweb
CheckBox is a fully implemented widget provided by Android. Checkboxes are used for selecting one or more options from a set. In this tutorial we would be discussing creating standard and customized CheckBoxes. CheckBox is a special type of button that has two states, checked and unchecked. As you can see in the picture above, a CheckBox in Android has a TextView whose properties can be used to format the widget. One must avoid using a single checkbox to turn an option off or on. The checked event of the CheckBox is handled by setting the OnCheckedChangeListener for the CheckBox. The callback for the listener is onCheckedChanged(), which is responsible for receiving the CheckBox whose state has changed and its new state.
Creating a CheckBox: You can create a CheckBox by declaring an instance of android.widget.CheckBox in your layout, for example:

<CheckBox
    android:id="@+id/checkBox1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="CheckBox: Unchecked" />

Within Java, you can manage the state of a checkbox by using the following: - isChecked(): used to obtain the state of the CheckBox. - toggle(): toggles the CheckBox state. - setChecked(): used to set the CheckBox state to checked or unchecked. Let us now create a sample application that shows the usage of CheckBoxes in Android.
Step 1: Set up the Android development environment. This topic has already been discussed in one of our previous posts, Environment. Please refer to that post for more queries. I would be using Android 4.0 for this example.
Step 2: Create an Android project. Create an Android project named "CheckBoxDemo" with the launcher activity CheckBoxDemoActivity.java. For more information on how to create a new project in Android, please refer to the post Create an android project.
Step 3: Create the required layouts. I would be using the main.xml layout file. To know more about layout XML files, please go through the post Layouts. Open the main.xml file and paste a layout like the one below; it is a RelativeLayout holding a TextView and the CheckBox that the activity code expects (the id checkBox1 must match):

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <TextView
        android:id="@+id/textView1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="CheckBox Demo" />

    <CheckBox
        android:id="@+id/checkBox1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/textView1"
        android:text="CheckBox: Unchecked" />

</RelativeLayout>

As you can see in the above code, main.xml has a RelativeLayout which in turn has a TextView and a CheckBox. The CheckBox tag is used to create the CheckBox widget.
Step 4: Create the required activities. The launcher activity CheckBoxDemoActivity.java has the below code.

package my.app;

import android.app.Activity;
import android.os.Bundle;
import android.widget.CheckBox;
import android.widget.CompoundButton;
import android.widget.CompoundButton.OnCheckedChangeListener;

public class CheckBoxDemoActivity extends Activity implements OnCheckedChangeListener {

    CheckBox ck1;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        ck1 = (CheckBox) findViewById(R.id.checkBox1);
        ck1.setOnCheckedChangeListener(this);
    }

    // Called when the checked state of a compound button has changed
    @Override
    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
        if (isChecked) {
            ck1.setText("CheckBox: Checked");
        } else {
            ck1.setText("CheckBox: Unchecked");
        }
    }
}

Each CheckBox is managed separately. As you can see, we have registered a listener which is to be notified when the state of the CheckBox changes. In this example, we would be changing the text of the checkbox when the user clicks on it. As you can notice, we have registered an OnCheckedChangeListener and not an OnClickListener as in the case of a normal button. The callback for the listener is onCheckedChanged(). This callback receives the CheckBox whose state has changed.
Step 5: Declare the activity in AndroidManifest.xml. The AndroidManifest.xml file looks like the below (version numbers and resource names here are representative). To know more about the manifest file, please refer to the post Manifest.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="my.app"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk android:minSdkVersion="14" />

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name" >
        <activity
            android:name=".CheckBoxDemoActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>

Step 6: Run your application. Select your project -> Right click -> Run As -> Android Application. Your launcher activity opens up as below. At first the CheckBox is in the unchecked state, as shown by its text. Clicking the CheckBox immediately updates its text: Unchecked is changed to Checked when the CheckBox state is checked.
Creating a custom CheckBox: Till now we have studied how to create a standard CheckBox in Android. Now you will learn how to create a custom CheckBox. Like all widgets, the standard CheckBox can be customized. You can change the color, shape and behaviors. This is done by creating an XML selector and assigning that selector to the CheckBox in your layout. Selectors are elements provided by Android which allow users to create a single reference to multiple images and the conditions under which they should be visible. Follow the steps below for creating custom CheckBoxes.
- Prepare two images for the CheckBox states. Here in this tutorial I would be using a red square for the CheckBox unchecked state and a green square for the checked state. Place these images in the res/drawable folder. checked.png is displayed when the CheckBox is in the checked state; unchecked.png is displayed when the CheckBox is in the unchecked state.
- Create a selector XML in the res/drawable folder. The content of the selector XML checkbox_selector.xml is as below:

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android" >
    <item android:state_checked="true" android:drawable="@drawable/checked" />
    <item android:state_checked="false" android:drawable="@drawable/unchecked" />
    <item android:drawable="@drawable/unchecked" />
</selector>

The project structure with the selector XML is as below.
- Assign the selector to your CheckBox in your layout. Unlike buttons, CheckBoxes have a slightly different mechanism for changing states. In the case of a CheckBox, the background is not associated with the state, so assigning the selector to the CheckBox is done through another attribute called android:button. Below is the main.xml for the custom CheckBox; compared with the earlier layout, only the android:button attribute is new:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <CheckBox
        android:id="@+id/checkBox1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:button="@drawable/checkbox_selector"
        android:text="CheckBox: Unchecked" />

</RelativeLayout>

Now run your code with the changes made to the project. The custom CheckBox is displayed as below. Change the state of the CheckBox and you can see the changed color of the checkbox drawable. This is how you can create and use the standard and custom CheckBox in Android. A CheckBox can also be added to an AlertDialog; to know more about how to add CheckBoxes to AlertDialogs, please refer to the post AlertDialogCheckBox. Hope this tutorial served your purpose. Feel free to post your comments.
CC-MAIN-2016-07
en
refinedweb
Creating a Splash Screen - PDF for offline use: - - Related Samples: - - Related Links: - Let us know how you feel about this. 0/250 last updated: 2016-01 An Android app takes some time to to start up, especially the first time the app is run on a device. A splash screen may display start up progress to the user or branding. Overview An Android app takes some time to to start up, especially the first time the app is run on a device (sometimes this is referred to as a cold start). The splash screen may display start up progress to the user or branding information to identify and promote the application. This guide will discuss one technique to implement a splash screen in an Android application. It will cover is targetting Android API level 15 (Android 4.0.3) or higher. The application must also have the Xamarin.Android.Support.v4 and the Xamarin.Android.Support.v7.AppCompat NuGet packages added to the project. All of the code and XML in this guide may be found in the sample project for this guide. Implementing A Splash Screen The quickest way to render and display the splash screen is not to use a layout file, but to create a custom theme and apply that to an Activity that is the splash screen. When the Activity is rendered, it will load the theme and apply the drawable resource referenced by the theme to the background of the activity. The splash screen is implemented as an Activity that will display the branded drawable and perform any initialization/start up tasks. Once the app has bootstrapped, the splash screen Activity will start the main Activity and remove itself from the application back stack. Creating a Drawable for the Splash Screen The splash screen will display a XML drawable in the background of the splash screen Activity. It is necesary to have a bitmapped image, such as a PNG or JPG, for the image to display. This guide will use a layer list to centre the splash screen image in the application. The following snippet is an example of a drawable resource using a layer-list: <?xml version="1.0" encoding="utf-8"?> <layer-list xmlns: <item> <color android: </item> <item> <bitmap android: </item> </layer-list> This layer-list will centre the splash screen image, a PNG name splash on a background specified by the @color/splash_background resource. After the splash screen drawable has been created, the next step is to create a theme for the splash screen. Implementing a Theme The next step is to create a custom theme for the splash screen Activity. Edit (or add) the file values/styles.xml, and create a new style element for the splash screen. A sample values/style.xml file is show below. With a style named MyTheme.Splash: <resources> <style name="MyTheme.Base" parent="Theme.AppCompat.Light"> </style> <style name="MyTheme" parent="MyTheme.Base"> </style> <style name="MyTheme.Splash" parent ="Theme.AppCompat.Light"> <item name="android:windowBackground">@drawable/splash_screen</item> <item name="android:windowNoTitle">true</item> </style> </resources> MyTheme.Splash is very spartan, it only declares the window background and explicitly removes the title bar from the window. Create a Splash Activity Now we need a new Activity for Android to launch that has our splash image and performs any startup tasks. 
The following code is an example of a complete splash screen: [Activity(Theme = "@style/MyTheme.Splash", MainLauncher = true, NoHistory = true)] public class SplashActivity : AppCompatActivity { static readonly string TAG = "X:" + typeof(SplashActivity).Name; public override void OnCreate(Bundle savedInstanceState, PersistableBundle persistentState) { base.OnCreate(savedInstanceState, persistentState); Log.Debug(TAG, "SplashActivity.OnCreate"); } protected override void OnResume() { base.OnResume(); Task startupWork = new Task(() => { Log.Debug(TAG, "Performing some startup work that takes a bit of time."); Task.Delay(5000); // Simulate a bit of startup work. Log.Debug(TAG, "Working in the background - important stuff."); }); startupWork.ContinueWith(t => { Log.Debug(TAG, "Work is finished - start Activity1."); StartActivity(new Intent(Application.Context, typeof(Activity1))); }, TaskScheduler.FromCurrentSynchronizationContext()); startupWork.Start(); } } This new Activity is set as the launcher activity for the application and explicitly uses the theme we created in the previous section, override the default theme of the application. Equally important is setting the NoHistory=true attribue as this will remove the Activity from the back stack. There is no need to load a layout in OnCreate as the theme declares a drawable as the background. The startup work is performed asynchronously in OnResume. This is so that the startup work does not slow down or delay the appearance of the launch screen. When the work has completed, SplashActivity will launch Activity and the user may begin interacting with the app. The final step is to edit Activity.cs, and remove the MainLauncher attribute: [Activity(Label = "@string/ApplicationName")] public class Activity1 : AppCompatActivity { // Code omitted for brevity } Summary This guide discussed one way to implement a splash screen in a Xamarin.Android application by applying a custom theme to the launch activity.
https://developer.xamarin.com/guides/android/user_interface/creating_a_splash_screen/
CC-MAIN-2016-07
en
refinedweb
I'm developing a free .NET obfuscator. It doesn't support very much functionality at the moment, though. It can: rename types and remove namespaces, rename methods and fields, remove properties and events declarations, rename (private) assembly files. Basically, my primary goal was to make it remove all meaningful information that can help hackers to analyze the program. It provides 2 non-standard features, that I couldn't find in existing products: 1. Fine grained control over obfuscation parameters of assemblies and their sub-items 2. It can obfuscate public types and members (useful for assemblies that are not exposed outside of your program) Feedback is welcome. Developing a .NET obfuscator. Need feedback.Page 1 of 1 0 Replies - 2742 Views - Last Post: 24 April 2014 - 09:22 AM #1 Developing a .NET obfuscator. Need feedback. Posted 24 April 2014 - 09:22 AM Page 1 of 1
http://www.dreamincode.net/forums/topic/345589-developing-a-net-obfuscator-need-feedback/page__pid__2002705__st__0
CC-MAIN-2016-07
en
refinedweb
Django 1.5 custom User model error when i try to custom User model with Django 1.5 , got an exception: AttributeError: Manager isn't available; User has been swapped for 'xxx.MyUser' and i noticed it was thrown from oauth_provider.models: from django.contrib.auth.models import User Thanks, I'll take a look at that. I think I know where is the problem, but at first I must automate testing in different django and python versions (django-oauth-plus wasn't tested against django 1.5 yet). fixed by 778bd49 Please sync your fork or use latest repository code until new release ( 2.0.1) is available. thank you for the fix
https://bitbucket.org/david/django-oauth-plus/issues/22/django-15-custom-user-model-error
CC-MAIN-2016-07
en
refinedweb
added libltdl (no integration yet) /* loader-dld_link.c -- dynamic linking with dld Copyright (C) 1998, 1999, 2000, 2004, 2006, 2007, 2008 Free Software Foundation, Inc. Written by Thomas Tanner, 1998 NOTE: The canonical source of this file is maintained with the GNU Libtool package. Report bugs to [email protected].. */ #include "lt__private.h" #include "lt_dlloader.h" /* Use the preprocessor to rename non-static symbols to avoid namespace collisions when the loader code is statically linked into libltdl. Use the "<module_name>_LTX_" prefix so that the symbol addresses can be fetched from the preloaded symbol list by lt_dlsym(): */ #define get_vtable dld_link_LTX_get_vtable LT_BEGIN_C_DECLS LT_SCOPE lt_dlvtable *get_vtable (lt_user_data loader_data); LT_END_C_DECLS /* Boilerplate code to set up the vtable for hooking this loader into libltdl's loader list: */ static int vl_exit (lt_user_data loader_data); static lt_module vm_open (lt_user_data loader_data, const char *filename, lt_dladvise advise); static int vm_close (lt_user_data loader_data, lt_module module); static void * vm_sym (lt_user_data loader_data, lt_module module, const char *symbolname); static lt_dlvtable *vtable = 0; /* Return the vtable for this loader, only the name and sym_prefix attributes (plus the virtual function implementations, obviously) change between loaders. */ lt_dlvtable * get_vtable (lt_user_data loader_data) { if (!vtable) { vtable = lt__zalloc (sizeof *vtable); } if (vtable && !vtable->name) { vtable->name = "lt_dld_link"; vtable->module_open = vm_open; vtable->module_close = vm_close; vtable->find_sym = vm_sym; vtable->dlloader_exit = vl_exit; vtable->dlloader_data = loader_data; vtable->priority = LT_DLLOADER_APPEND; } if (vtable && (vtable->dlloader_data != loader_data)) { LT__SETERROR (INIT_LOADER); return 0; } return vtable; } /* --- IMPLEMENTATION --- */ #if defined(HAVE_DLD_H) # include <dld.h> #endif /* A function called through the vtable when this loader is no longer needed by the application. */ static int vl_exit (lt_user_data LT__UNUSED loader_data) { vtable = NULL; return 0; } /* A function called through the vtable to open a module with this loader. Returns an opaque representation of the newly opened module for processing with this loader's other vtable functions. */ static lt_module vm_open (lt_user_data LT__UNUSED loader_data, const char *filename, lt_dladvise LT__UNUSED advise) { lt_module module = lt__strdup (filename); if (dld_link (filename) != 0) { LT__SETERROR (CANNOT_OPEN); FREE (module); } return module; } /* A function called through the vtable when a particular module should be unloaded. */ static int vm_close (lt_user_data LT__UNUSED loader_data, lt_module module) { int errors = 0; if (dld_unlink_by_file ((char*)(module), 1) != 0) { LT__SETERROR (CANNOT_CLOSE); ++errors; } else { FREE (module); } return errors; } /* A function called through the vtable to get the address of a symbol loaded from a particular module. */ static void * vm_sym (lt_user_data LT__UNUSED loader_data, lt_module LT__UNUSED module, const char *name) { void *address = dld_get_func (name); if (!address) { LT__SETERROR (SYMBOL_NOT_FOUND); } return address; }
http://www.complang.tuwien.ac.at/viewcvs/cgi-bin/viewcvs.cgi/gforth/libltdl/loaders/dld_link.c?rev=1.1&sortby=file&view=auto
CC-MAIN-2013-20
en
refinedweb
Hello World -- Your First Program (C# Programming Guide) The following console program is the C# version of the traditional "Hello World!" program, which displays the string Hello World!. Let us now look at the important parts of this program in turn. The first line contains a comment: The characters // convert the rest of the line to a comment. You can also comment a block of text by placing it between the characters /* and */, for example: The Main Method The C# program must contain a Main method, in which control starts and ends. The Main method is where you create objects and execute other methods. The Main method is a static method that resides inside a class or a struct. In the previous "Hello World!" example, it resides in a class called Hello. Declare the Main method in one of the following ways: It can return void: It can also return an int: With both of the return types, it can take arguments: -or- The parameter of the Main method is a string array that represents the command-line arguments used to invoke the program. Notice that, unlike C++, this array does not include the name of the executable (exe) file. For more information on using command-line arguments, see the example in Main() and Command Line Arguments (C# Programming Guide) and How to: Create and Use C# DLLs (C# Programming Guide). Input and Output C# programs generally use the input/output services provided by the run-time library of the .NET Framework. The statement, System.Console.WriteLine("Hello World!"); uses the WriteLine method, one of the output methods of the Console class in the run-time library. It displays its string parameter on the standard output stream followed by a new line. Other Console methods are used for different input and output operations. If you include the using System; directive at the beginning of the program, you can directly use the System classes and methods without fully qualifying them. For example, you can call Console.WriteLine instead, without specifying System.Console.Writeline: For more information on input/output methods, see System.IO. Compilation and Execution You can compile the "Hello World!" program either by creating a project in the Visual Studio IDE, or by using the command line. Use the Visual Studio Command Prompt or invoke vsvars32.bat to put the Visual C# tool set on the path in your command prompt. To compile the program from the command line: Create the source file using any text editor and save it using a name such as Hello.cs. C# source code files use the extension .cs. To invoke the compiler, enter the command: csc Hello.cs If your program does not contain any compilation errors, a Hello.exefile will be created. To run the program, enter the command: Hello For more information on the C# compiler and its options, see C# Compiler Options. See Also ReferenceInside a C# Program ConceptsC# Programming Guide Other ResourcesVisual C# Samples C# Reference
http://msdn.microsoft.com/en-US/library/k1sx6ed2(v=vs.80).aspx
CC-MAIN-2013-20
en
refinedweb
Feb 25, 2012 04:42 PM|LINK I have a handler that works fine in C# but I can't get it to work in VB. I don't know if maybe I just did something wrong in the translation? Can anyone look at it and see if anything jumps out? The site I'm working on currently, for various reasons needs to be in VB so I'm hoping to be able to get this code to work properly. The connection string in the web.config that the handler is referencing is exactly the same in both C and VB codes. using System; using System.Web; using System.Data; using System.Data.SqlClient; using System.Configuration; public class Handler : IHttpHandler { public void ProcessRequest (HttpContext context) { string strConnection = ConfigurationManager.ConnectionStrings["connectionString"].ToString(); SqlConnection conn = new SqlConnection(strConnection); conn.Open(); string sql = "SELECT Image FROM [Gallery] " + "WHERE [ImageID]=@ImageID"; SqlCommand cmd = new SqlCommand(sql, conn); cmd.Parameters.Add("@ImageID", SqlDbType.Int).Value = context.Request.QueryString["id"]; cmd.Prepare(); SqlDataReader dr = cmd.ExecuteReader(); if (dr.Read()) { context.Response.ContentType = dr["Image"].ToString(); context.Response.BinaryWrite((byte[])dr["Image"]); } conn.Close(); } public bool IsReusable { get { return false; } } } Imports System Imports System.Web Imports System.Data Imports System.Data.SqlClient Imports System.Configuration Public Class Handler Implements IHttpHandler Public Sub ProcessRequest(context As HttpContext) Implements IHttpHandler.ProcessRequest Dim strConnection As String = ConfigurationManager.ConnectionStrings("ConnectionString2").ToString() Dim conn As New SqlConnection(strConnection) conn.Open() Dim sql As String = "SELECT Image FROM [Gallery] " + "WHERE [ImageID]=@ImageID" Dim cmd As New SqlCommand(sql, conn) cmd.Parameters.Add("@ImageID", SqlDbType.Int).Value = context.Request.QueryString("id") cmd.Prepare() Dim dr As SqlDataReader = cmd.ExecuteReader() If dr.Read() Then context.Response.ContentType = dr("Image").ToString() context.Response.BinaryWrite(DirectCast(dr("Image"), Byte())) End If conn.Close() End Sub Public ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable Get Return False End Get End Property End Class Member 501 Points Feb 25, 2012 05:25 PM|LINK what error exactly you are getting? Feb 25, 2012 05:34 PM|LINK Well, it's not really giving an error. In C it shows the image properly with no trouble. In VB is just simply shows an X. The handler connection string shows "connectionString" in one and "connectionString2" in the other, but the actual string is the same. I just wonder if I messed something up in the VB code because it won't show the images and I don't understand why it does in C#. Feb 26, 2012 03:42 AM|LINK AsPxFrKIf dr.Read() Then context.Response.ContentType = dr("Image").ToString() context.Response.BinaryWrite(DirectCast(dr("Image"), Byte())) End If What happens when you use While dr.Read() // Do the stuff.. End While Regards Feb 26, 2012 07:26 PM|LINK Okay, update on this. I've got it to work using the While/End While statement. For some reason it only works if the page is in the root of the web. If I have the page in a subdirectory the handler apparently won't work or the page isn't communicating with it. Any thoughts on that problem? The tag I'm using is <asp:Image Feb 26, 2012 07:29 PM|LINK Guess I answered my own question. The handler has to be in the same directory as the page. I moved the handler to the folder I wanted the page to be in and it worked fine. 
Thanks for the advice on the While statement change, that fixed my issue! 6 replies Last post Feb 26, 2012 07:29 PM by AsPxFrK
http://forums.asp.net/t/1773760.aspx/1?Select+Image+works+in+C+but+not+in+VB
CC-MAIN-2013-20
en
refinedweb
Anatomy of a Unit Test [This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.] figure shows the first few lines of code, including the reference to the namespaces, the TestClassAttribute, and the TestContext class. See the walkthrough if you want code samples. figure shows the latter part of the code that is generated in the walkthrough, which includes the "Additional test attributes" section, the TestMethod attribute, and the logic of the method, which includes an Assert statement. this is to establish a known state for running your unit test. For example, you may use the [ClassInitialize()] or the [TestInitialize()] method to copy, alter, or create certain data files that your test will use. Create methods that are marked with either the [ClassCleanup()] or [TestCleanUp{}] attribute to return the environment to a known state after a test has run. This might mean the deletion of files in folders or the return of a database to a known state. An example of this is to reset an inventory database to an initial state after testing CreditTest unit test method as it was generated, including its TODO statements. However, we initialized the variables and replaced the Assert statement in the DebitTest test method. TODO statements act as reminders that you might want to initialize these lines of code. A note about naming conventions: The Visual Studio. Solution Items: Solution Items contains two files: Local.testsettings: These settings control how local tests that do not collect diagnostic data are run. Bank.vsmdi: This file contains information about test lists that are present in the solution and populates the Test List Editor window. TraceAndTestImpact.testsettings: These settings control how local tests that collect a specific set of diagnostic data are run..
http://msdn.microsoft.com/en-us/library/ms182517.aspx
CC-MAIN-2013-20
en
refinedweb
07 April 2010 20:26 [Source: ICIS news] WASHINGTON (ICIS news)--The US economy has stabilised and is beginning to grow again, but the mounting and unsustainable federal debt poses a crippling threat to the nation’s financial existence, Federal Reserve Chairman Ben Bernanke said on Wednesday. Speaking at a chamber of commerce meeting in ?xml:namespace> “Unless we as a nation demonstrate a strong commitment to fiscal responsibility, in the longer run we will have neither financial stability nor healthy economic growth,” Bernanke said. He quoted an economist’s dictum that “If something cannot go on forever, it will stop”. “That adage certainly applies to our nation’s fiscal situation,” the Fed chairman said. Bernanke noted that as the “baby boomer” generation begins to enter their retirement years and birth rates continue to decline, the ratio of working-age Americans to the elderly will fall, imposing dramatic financial obligations “which could hold back the long-run prospects for living standards in our country”. Baby boomers are those born between 1946 and 1964, and the oldest members of that major population bubble are already eligible for payments under the It is estimated that Social Security obligations to the “boomers” are approximately $7,700bn (€5,775bn) and Medicare costs for healthcare for the aging group could run to $38,000bn. Current US federal debt is approximately $12,700bn, having risen sharply since the 1970s when it had been more or less steady at around $3,000bn since the end of World War II. Federal debt is roughly equal to 90% of the “To avoid large and unsustainable budget deficits,” Bernanke said, “the nation will ultimately have to choose among higher taxes, modifications to entitlement programmes such as Social Security and Medicare, less spending on everything else from education to defence, or some combination of the above.” He said the challenge of reducing federal debt and budget deficits “are daunting indeed”, and he called on policymakers to begin immediately “to develop a credible plan for meeting our long-run fiscal challenges”. Bernanke also said that while the US financial crisis “looks to be mostly behind us”, he is still troubled by continuing high unemployment, the struggling housing sector and looming problems in commercial real estate that pose risks for communities and banks holding loans on such properties. ($1 = €0.75).
http://www.icis.com/Articles/2010/04/07/9348855/us-fed-chief-warns-of-crippling-federal-debt-and-deficits.html
CC-MAIN-2013-20
en
refinedweb
#include <protocol.hpp> Because each FastCGI request has a RequestID and a file descriptor associated with it, this class defines an ID value that encompasses both. The file descriptor is stored internally as a 16 bit unsigned integer in order to keep the data structures size at 32 bits for optimized indexing. Definition at line 48 of file protocol.hpp. Construct from a FastCGI RequestID and a file descriptor. The constructor builds upon a RequestID and the file descriptor it is communicating through. Definition at line 58 of file protocol.hpp. Definition at line 59 of file protocol.hpp. Definition at line 61 of file protocol.hpp. Referenced by Fastcgipp::Protocol::operator<(), Fastcgipp::Protocol::operator==(), and Fastcgipp::Protocol::operator>(). Associated File Descriptor. Definition at line 63 of file protocol.hpp.
http://www.nongnu.org/fastcgipp/doc/1.2/structFastcgipp_1_1Protocol_1_1FullId.html
CC-MAIN-2013-20
en
refinedweb
Customizing Team Foundation Server Project Portals Phil Hodgson, Microsoft Corporation May 2010 This article describes important concepts about project portals for Microsoft® Visual Studio® Team Foundation Server 2010 and provides guidance for process authors who customize process templates used to create project portals for team projects. Team Foundation Server supports project portals that are matched to the out-of-the-box process templates. Process templates are designed to be customized to a team’s or organization’s specific needs. Similarly Team Foundation Server project portals can be customized to provision custom content or features to the portal site. Examples of customizations that may be considered are: Changing the dashboards provisioned including adding new dashboards Adding new Excel workbooks Changing the Web Parts (or Web Part properties) on standard dashboards Changing the visual appearance of the portal site Activating custom SharePoint Features Overview of Project Portals Visual Studio Application Lifecycle Management encourages team collaboration by allowing team members to access team project data from a variety of tools and platforms. Team Foundation Server integrates with Microsoft SharePoint Products to enable SharePoint sites to present and modify team project data. A SharePoint site that is linked to a team project is referred to as the team project's project portal. Project portals present and link information from the Team Foundation Server data stores, the team project's SQL Server Reporting Services site, Team Web Access and process guidance. A project portal can be created in multiple ways. A common approach is to create the portal at the same time that the team project is created using the New Team Project Wizard from Visual Studio's Team Explorer window. Other creation options are available but may require more steps to complete the configuration. Each of the process templates that are shipped with the product have a default SharePoint site definition that is designed to present information relevant to the process template. In addition, the out-of-the-box project portals support a variety of deployment scenarios and will choose the best set of features based on the configuration of Team Foundation Server and the version of SharePoint Products that is being used. Project portals are intended to be customized to the team's or the organization's needs. Two types of customizations are available – changing individual portals and customizing the process template to affect portals created in the future. Customizing an existing portal is usually done through the user interface for SharePoint Products and requires less skills or access to complete. This document addresses the second type of customization because a deeper understanding of the components involved is needed. To deploy process template customizations the user needs to be a member of the Project Collection Administrators group (or have the Manage process template permission). To deploy customizations for SharePoint Products, the user needs to be a member of the farm administrators group in SharePoint Products. The intended audience of this document is process template authors who need to make changes to both the process template components and the project portal. Team Foundation Server project portals are SharePoint sites that are able to present and link to data for a team project. 
Once a SharePoint Web application has been configured for use with a Team Foundation Server application-tier, SharePoint sites within the Web application can begin to incorporate access to team project data. A project portal can present data from a team project using any or all of the following (depending on the capabilities of the environment): Excel Web Access Web Parts that display Excel workbooks which are retrieving data from the Server SQL Server Analysis Services database for Team Foundation Server, Team Web Access Web Parts that access data from the Team Foundation Server operational data store, TfsRedirect application page that enables access to other team project resources such as the associated SQL Server Reporting Services report folder, Team Web Access site or process guidance. Figure 1 Figure 1 shows the flow of data from the Operational data stores, through the Data Warehouse and into the Analysis Services cube. From there the data can be retrieved and shown on the dashboards through a variety of methods. Project Portal Artifacts The SharePoint site created through the New Team Project Wizard is a combination of artifacts and changes specified in the site definition configuration, the document libraries and content specified in the process template and the artifacts and changes made by any additional Features activated by the wizard (whose identifiers are specified in the process template). Dashboard Document Library and Dashboard Pages The "dashboard" pages in a project portal are Web Part Pages provisioned into a hidden document library (accessible from Quick Launch). The pages are provisioned by a number of SharePoint Features. The Features combine a custom ASPX layout page (which defines the Web Part Zones and dashboard toolbar) with details about the Web Parts on each dashboard. The ASPX layout page is called dashboardlayouts.aspx and is deployed with the files for each Team Foundation Server SharePoint Feature that provisions dashboard pages. If you want to use the same layout for your pages you can take a copy of the dashboardlayouts.aspx page and include it in your own Feature. The Team Foundation Server dashboards make use of the following Web Parts: Windows SharePoint Services 3.0 or SharePoint Foundation 2010 List View Web Part Content Editor Web Part Page Viewer Web Part Microsoft Office SharePoint Server 2007 or SharePoint Server 2010 Excel Web Access Team Foundation Server Extensions for SharePoint Products Query Results Web Part Work Item Summary Web Part Completed Builds Web Part Recent Check-ins Web Part Excel Reports Document Library The Excel Reports document library holds the workbooks that retrieve data from the SQL Server Analysis Servicesdatabase for Team Foundation Server. When the reports are first provisioned the data connection to the cube is set. If the workbook contains a PivotTable report with a Team Project Hierarchy filter then the filter is set to the team project for the project portal. A timer job for SharePoint Products periodically scans project portal sites to see if some aspects of the team project have changed that may affect the data connection or the Team Project Hierarchy filter. If it finds a change then it automatically updates all workbooks that are using the previous values. Quick Launch Navigation The project portal's Quick Launch makes use of the TfsRedirect application page to direct users to URLs specific to the associated team project. The application page is located at ~site/_layouts/TfsRedirect.aspx. 
The resulting redirect URL depends on the query parameters passed to the application page. The following navigation links use TfsRedirect to find resources:
- Team Web Access: Redirects to the team project's Team Web Access home page
- Reports: Redirects to the team project's SQL Server Reporting Services folder
- Process Guidance: Redirects to the team project's process guidance.

Other navigation entries include:
- Dashboards: The document library that holds the dashboard Web Part Pages. Individual child entries are added for each dashboard during site creation and when a dashboard is copied.
- Excel Reports: The document library that holds Excel workbooks that are shown on the dashboards via Excel Web Access Web Parts.
- Documents child entries: Entries are added for the Wiki Pages document library, and for each of the out-of-the-box document libraries created by the process templates.
- Lists child entry: The Calendar list.

Dashboard Toolbar
The toolbar that appears on each dashboard page contains controls that provide access to team project data or perform operations on the dashboard and reports. The toolbar is defined in the dashboardlayouts.aspx page.

Process Template Content
Project portals may contain document libraries and documents specified in a process template. When using the New Team Project Wizard, after creation of the project portal from the specified template, the document libraries are created and content is uploaded. See Defining the Project Portal Plug-in for a Process Template for further details.

Team Foundation Server SharePoint Features
The artifacts provisioned within an out-of-the-box project portal are controlled by a set of SharePoint Features. A number of these are visible site-scoped Features whose action is to inspect the capabilities of the environment, determine which of the other Features best suit the environment and to activate them in the required order. Project portals can be used with Team Foundation Server instances that have varying levels of features activated and versions of SharePoint Products that support different capabilities. For example, a Team Foundation Server instance may or may not be using the Reporting features. This would determine whether a data warehouse or SQL Server Analysis Services cube was available. As for SharePoint Products, the server may be running any of the following products:
- Windows SharePoint Services 3.0
- SharePoint Foundation 2010
- Microsoft Office SharePoint Server 2007
- SharePoint 2010 for Internet Sites, Standard
- SharePoint 2010 for Internet Sites, Enterprise

The project portals can also be used with different process templates that require specific content (such as MSF Agile vs. MSF CMMI). The Team Foundation Server Features recognize three levels of capability, and each level allows progressively more components to be provisioned. To provide users with a project portal experience that best utilizes their environment, the visible Features activate other hidden Features depending on the servers' configuration. The capability detection is done by a "top-level" Feature event receiver triggered from either the TfsDashboardAgileMoss (ID: 0D953EE4-B77D-485b-A43C-F5FBB9367207) or TfsDashboardCmmiMoss (ID: 3D0BA288-BF8E-47F0-9680-7556EDEF6318) Features. These are the ones referenced in the WssTasks.xml file of the out-of-the-box templates. The Features activated from the process templates can be changed to accommodate desired customizations.
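When planning such changes it can be useful to confirm which of these top-level Features an existing portal has activated. A minimal sketch using the SharePoint object model is shown below; the site URL is a placeholder, and the GUIDs are the two Feature IDs quoted above.

using System;
using Microsoft.SharePoint;

class DashboardFeatureCheck
{
    static void Main()
    {
        // Top-level dashboard Feature IDs listed in this white paper
        Guid agileMoss = new Guid("0D953EE4-B77D-485b-A43C-F5FBB9367207");
        Guid cmmiMoss = new Guid("3D0BA288-BF8E-47F0-9680-7556EDEF6318");

        // Placeholder URL - point this at the project portal being inspected
        using (SPSite site = new SPSite("http://sharepoint/sites/TeamProject"))
        using (SPWeb web = site.OpenWeb())
        {
            // The Features collection returns null for Features that are not
            // activated; check site.Features as well if a Feature turns out to
            // be site-collection scoped on your farm.
            Console.WriteLine("TfsDashboardAgileMoss active: {0}", web.Features[agileMoss] != null);
            Console.WriteLine("TfsDashboardCmmiMoss active:  {0}", web.Features[cmmiMoss] != null);
        }
    }
}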
It is not required that the out-of-the-box Features be used, however they are designed to provide a great experience with the out-of-the-box process templates. The section Customization Options describes customizations that are available for the project portals. The Team Foundation Server SharePoint Features can be grouped into the following categories: Activation: Determine which Features should be activated Infrastructure: Provisions content and makes site changes that other Features will rely upon Content: Provision content such as dashboards and reports. User Interface: Makes changes to the site's user interface The tables below show the Features used in Team Foundation Server 2010. Base Features (always activated) No Reporting Features (activated if SQL Server Reporting Services is not used) Reporting Features (activated if SQL Server Reporting Services is being used) Enterprise Features (activated if SQL Server Reporting Services and SharePoint 2010 for Internet Sites, Enterprise, with Excel Services is used) Team Foundation Server Site Definition Team Foundation Server 2010 installs a site definition with two configurations that are designed to work with the process templates that are shipped with the product: "TFS2010 Agile Dashboard" for the "MSF for Agile Software Development v5.0" process template, and "TFS2010 CMMI Dashboard" for the "MSF for CMMI Process Improvement v5.0" process template.. Portal Creation Scenarios Project portals and the default site definition configurations are designed to work in three different activation scenarios. When designing customizations for the project portals you should consider how these scenarios are impacted. Firstly, when creating a team project with the New Team Project Wizard a project portal can be created using the Team Foundation Server dashboard site definition. The Wizard will setup the connections between the team project and the project portal which are required for data to be available on the site. Without a connection to a team project many of the Team Foundation Server SharePoint components will not function as intended. Secondly, a project portal site can be created through the UI for SharePoint Products using either of the site definition configurations, and then manually have the Team Foundation Server Dashboard Features activated. The difference here is that document libraries and content in the process template are not added to the site. Because a project portal site is connected to a team project if any of its containing (ancestor) sites are connected, this method is a convenient way for a team to create their own portal that still retrieves data from the same team project but has its own dashboards and reports. Thirdly the Team Foundation Server SharePoint Features can be activated on a site created with a different site definition. The Features still require a connection to the team project before they can be activated. See Access a Team Project Portal and Process Guidance for more details. There are a number of ways to customize Team Foundation Server to effect changes in future project portals. With the enhanced portal features available in Team Foundation Server 2010 come additional customization requirements. Customizations can now be made to the process templates, SharePoint Features or SharePoint site templates (with some restrictions). Process Template Changes Process templates define key aspects of a team project that affect how a team works. 
If a project portal is created using the New Team Project Wizard then the process template can control the following aspects of the project portal: Which site definition configuration or global site template is used to create the site Document libraries that are created within the project portal site Documents that are uploaded to the document libraries SharePoint Features that are activated on the site after creation Customizations of the process template are covered in detail in the topic Customizing Team Projects and Processes. To upload process template changes to Team Foundation Server requires the Manage process template permission on the team project collection. See the online documentation for more details on uploading a process template. SharePoint Development Aside from the process template customizations above, portal customization is essentially a customization task for SharePoint Products. The recommended method for packaging customizations of SharePoint Products is the SharePoint Feature. Since the wizard can activate Features on the site it is recommended that customizations be made by packaging them into one or more SharePoint Features and modifying the set of activated Features in the process template. It is not recommended that modifications be made to any of the files deployed from the Team Foundation Server solution packages (including Feature files) since during upgrade these modifications may be lost. Features can provision new content along-side Team Foundation Server standard content and execute any code needed to complete the site's setup. One approach to Feature development is to identify a Team Foundation Server content Feature that is similar to your needs, clone it and modify it as required. Any customizations that rely on SharePoint Features need to be deployed to the SharePoint farm before they can be used. The Team Foundation Server or team project administrators may not have access to that farm so assistance from that farm's administrators may be required. One of the key locations to be aware of when you develop SharePoint Features is the installation directory for SharePoint Products - commonly called "the hive". The hive is usually located in the following directory on the server that is running SharePoint Products: This path assumes that the server is running Windows SharePoint Services 3.0 or Microsoft Office SharePoint Server 2007. The path for SharePoint 2010 Products will use the "14" subdirectory instead of "12". For the remainder of this document we will refer to this path as [SPHive]. In order to deploy customizations to SharePoint Products you will need to be a member of the farm administrator’s group in SharePoint Products. SharePoint Features Features are a convenient method of deploying content to a SharePoint farm. They also have the ability to execute code that runs on the server. While content can be added to a site through the API for SharePoint Products, some aspects of content creation are only available through Features – such as "ghosting" of files. Depending on the desired customizations you may wish to have the existing Team Foundation Server Features activated and then activate your own custom Features to add additional content or make API calls to modify the site. Additional Feature activations can be added to the process template. Alternatively you may wish to take greater control over which of the Team Foundation Server Features are activated. 
One advantage of this is that the default Team Foundation Server Features attempt to provision as much applicable content to a site as the environment will support. If you wish to use your own content in place of certain Team Foundation Server content then avoiding unwanted activations may be a simpler approach. When customizing which Features will be activated it is recommended that, at a minimum, you continue to activate the infrastructure Features listed in the section Team Foundation Server SharePoint Features.

Anatomy of Team Foundation Server SharePoint Features
The files that make up each Team Foundation Server Feature are structured in the following manner:
Figure 2

Feature.xml Files
The Feature.xml file details metadata for the Feature including its unique ID, title, visibility, scope and the code (if any) that should be run when the Feature is installed, activated, deactivated or uninstalled. It also specifies the list of element files or element manifests that define the Feature's content. For details on Feature.xml files see the online documentation.

Feature Element Files
The Feature element files are referenced from the Feature.xml file using the ElementManifest element. These files contain the definitions of the atomic units of functionality that are applied to a site. For the Team Foundation Server Features the Feature element files contain definitions for:
- dashboard pages with their Web Part specifications
- list/document library instances
- custom actions
- controls that are added to pages
- content type specifications

When creating your own Features you can start with a copy of an example Feature element file and customize it for your own purposes.

Alternate Site Definition Configuration
A site definition configuration defines the initial content of a SharePoint site. If a project portal site is created through the SharePoint web interface then it will contain the artifacts defined in the definition (e.g. menus, list templates) and the Features for that site definition configuration will be activated. If the customizations required can be achieved through a different site definition configuration, then once this has been deployed to the SharePoint server the configuration can be referenced in the process template. A custom site definition should at least activate the infrastructure Features listed in the section Team Foundation Server SharePoint Features.

Solution Packages
The preferred method of packaging SharePoint customizations for deployment is the solution package. This is a cabinet archive file with a ".wsp" extension. See Solutions Overview for more details.

Using Out-of-the-box Content as Your Starting Point
In many cases the out-of-the-box Team Foundation Server Features will provision content (e.g. dashboards or reports) that closely matches the types of artifacts that you want. Rather than starting Feature development from scratch it is often quicker to identify a Feature that is close to what you want and then use that as a base for development. For example, the XML for the My Dashboard page is stored with its Feature under the [SPHive]\TEMPLATE\FEATURES directory on a server that is running SharePoint Products. To use a Team Foundation Server Feature as your starting point:
1. Identify a candidate Feature and locate its definition in the [SPHive]\TEMPLATE\FEATURES directory on the server that is running SharePoint Products.
2. Copy the Feature's folder and edit details in the Feature.xml file to give the Feature a new ID and Title.
You may also wish to change the Hidden property to make the Feature visible in the UI for SharePoint Products. You should also remove the ReceiverAssembly and ReceiverClass properties unless you explicitly want the same code executed when the Feature is used.
3. Make your changes to the Feature content.
4. Deploy your Feature to the server and register it with the SharePoint farm using stsadm.exe (stsadm -o installfeature -name MyFeature).
5. Test it by activating the Feature on a site using stsadm.exe (stsadm -o activatefeature -name MyFeature -url <site url>) or through the UI for SharePoint Products.

Replacing Out-of-the-box Content
When using the Feature method of provisioning, replacing existing content is not supported. Suppose that within a Feature's element manifest file there is a <Module> element with a <File> element that uses the same URL as an existing file. Depending on the value of the IgnoreIfAlreadyExists attribute of the <File> element, during activation of the Feature, SharePoint Products will either fail the Feature activation (if the attribute value was FALSE), or not provision the new content. One method of achieving the desired result is to provision the new content to a temporary URL and then execute code to move the content to the desired location. The following class implements the SharePoint Features receiver base class and provides some utility methods that simplify replacing existing content; the two constant values shown are placeholders that must be replaced with the values used on your installation. It will be used later on in this white paper.

using System;
using System.Security.Permissions;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Navigation;
using Microsoft.SharePoint.Security;
using Microsoft.SharePoint.Utilities;

namespace Contoso.TfsPortals
{
    public class ContosoBaseFeatureReceiver : SPFeatureReceiver
    {
        // The fixed GUID for the TfsDashboardBaseContent Feature.
        // Placeholder value - substitute the ID of the TfsDashboardBaseContent
        // Feature installed on your SharePoint farm.
        const string TfsBaseContentGuid = "00000000-0000-0000-0000-000000000000";

        // Name of the Feature property that stores the default dashboard URL.
        // Placeholder value - substitute the property name used by the
        // TfsDashboardBaseContent Feature on your installation.
        const string DefaultDashboardProperty = "DefaultDashboard";

        /// <summary>The site that the Feature was activated on</summary>
        protected SPWeb Web { get; private set; }

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            // The dashboard Features are activated on the portal site (SPWeb)
            Web = (SPWeb)properties.Feature.Parent;
            OnActivate();
        }

        public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
        public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
        public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }

        /// <summary>Override in derived classes to perform activation work</summary>
        protected virtual void OnActivate() { }

        /// <summary>Replace an existing file with newly provisioned content</summary>
        /// <param name="source">The site relative URL for the file to be moved.</param>
        /// <param name="target">The site relative URL to move the file to</param>
        /// <remarks>Move an existing file to a new URL, overwriting any existing file
        /// at the target location</remarks>
        protected void ReplaceFile(string source, string target)
        {
            SPFile sourceFile = Web.GetFile(source);
            if (sourceFile != null && sourceFile.Exists)
                sourceFile.MoveTo(target, true);
        }

        /// <summary>
        /// Mark the site so that the TFS Dashboard timer job will process Excel
        /// workbook connections and pivot table filters
        /// </summary>
        protected void MarkSiteForTimerJobProcessing()
        {
            // Clear the properties cached by the Team Foundation Server
            // Timer Job so it re-processes the site
            SPPropertyBag properties = Web.Properties;
            properties["teamfoundation.dashboards.CubeConnectcion"] = null; // spell as shown
            properties["teamfoundation.dashboards.ProjectMdxId"] = null;
            properties["teamfoundation.dashboards.Sso"] = null;
            properties.Update();
        }

        /// <summary>
        /// Add a reference to the dashboard specified by <paramref name="file"/> to
        /// the Quick Launch under the Dashboards navigation item
        /// </summary>
        /// <param name="file">The file that references the dashboard</param>
        /// <exception cref="ArgumentNullException">If <paramref name="file"/> is
        /// <c>null</c></exception>
        /// <exception cref="ArgumentException">If the <paramref name="file"/> does not
        /// exist</exception>
        protected void AddDashboardNavigation(SPFile file)
        {
            if (file == null)
                throw new ArgumentNullException("file");
            if (!file.Exists)
                throw new ArgumentException("File does not exist. " + file.Url);

            SPNavigationNode node = Web.Navigation.GetNodeByUrl(file.Url);
            if (node == null)
            {
                // 1102 is the ID for the Dashboards navigation node
                SPNavigationNode parentNode = Web.Navigation.GetNodeById(1102);
                if (parentNode != null)
                {
                    node = new SPNavigationNode(file.Title, file.Url);
                    parentNode.Children.AddAsLast(node);
                }
            }
        }

        /// <summary>
        /// Set the default dashboard to be the <paramref name="file"/> passed in
        /// </summary>
        /// <param name="file">The file that will be the site's default dashboard</param>
        /// <exception cref="ArgumentNullException">If <paramref name="file"/>
        /// is <c>null</c></exception>
        /// <exception cref="ArgumentException">If the <paramref name="file"/> does not
        /// exist</exception>
        protected void SetDefaultDashboard(SPFile file)
        {
            if (file == null)
                throw new ArgumentNullException("file");
            if (!file.Exists)
                throw new ArgumentException("File does not exist. " + file.Url);

            SPFeature feature = Web.Features[new Guid(TfsBaseContentGuid)];
            if (feature == null)
                return;

            SPFeatureProperty property = feature.Properties[DefaultDashboardProperty];
            if (property == null)
            {
                property = new SPFeatureProperty(DefaultDashboardProperty, file.Url);
                feature.Properties.Add(property);
            }
            else
            {
                property.Value = file.Url;
            }
            feature.Properties.Update();
        }
    }
}

SharePoint Site Templates
SharePoint site templates were used for the project portal solution in Team Foundation Server for Visual Studio Team System 2008. Site templates can still be used as the base site for a project portal. If you are going to create a site template for use as a project portal, start by customizing a site but do not activate any of the Team Foundation Server Dashboard Features on the site. The Team Foundation Server Dashboard Features add properties and other artifacts to a site which may produce incorrect results if those changes are saved into the site template. Once you have saved the site as a site template, the STP archive can be added to the global site template gallery and then referenced from a process template. You can test how your site template will work with the Team Foundation Server Dashboard Features by using it to create a sub-site underneath an existing project portal site and then manually activating the desired Team Foundation Server Dashboard Features.

Dashboard and Process Template Customizations
The following customizations are described in the remainder of this whitepaper:
- Process Templates
- Excel Workbooks
- Dashboards
- SharePoint Sites
- Development of SharePoint Features

Having outlined the components that make up a project portal and discussed some of the customization options available, we'll now take a walk through some example customization scenarios. Meet Wesley. Wesley is the Team Foundation Server process author for Contoso and is responsible for designing and implementing the customizations required to support the company's development processes. Wesley is neither a Team Foundation Server administrator, nor a member of the farm administrator's group for SharePoint Products in the production environments. He works with the appropriate administrators to deploy customizations. He does have his own development servers where he is an administrator.

As a process author, I want to use my own site definition configuration so that my users have a familiar environment
"The IT department has developed a custom SharePoint site definition with a single configuration called "Contoso Standard Site" that is used for all new SharePoint sites in the company.
The site definition defines a customized look-and-feel for the site along with a number of standard lists and document libraries. Wesley needs to customize the creation of project portals for new team projects so that the project portals use the Contoso site definition configuration. He also needs to understand and communicate to his team how the portal experience will be different from other sites created with company's site definition so that the team members can utilize the Team Foundation Server dashboard functionality." To use the Contoso site definition configuration for a process template Wesley performs the following steps: Identifies that the Contoso site definition's configuration is titled "Contoso Standard Site" Follows the guidance for customizing the process template's portal plug-in to change the process template to use the site definition configuration. His portal task file (WssTasks.xml) now looks like: Uploads the modified process template to his development Team Foundation Server instance and confirms that the Contoso site definition has been installed on his development server for SharePoint Products. He creates a new team project using the modified template. After creation he confirms that the new site has used the correct Contoso site definition configuration and notes the differences compared to a standard SharePoint site for Contoso. He sends the modified process template to the administrator of the production Team Foundation Server instance who uploads the new template to each of the production team project collections. SharePoint Artifacts In addition to the artifacts (lists, document libraries, etc.) created by a site definition for SharePoint Products, the process templates create a number of Document Libraries and Lists. Some of these are defined in the process templates, and some are defined in SharePoint Features which are activated by the process templates. Wesley notes the following changes from the standard Contoso site. New Document Libraries or Lists: Dashboards Excel Reports Process Guidance Important Dates Samples and Templates Shared Documents New Quick Launch navigation entries: Documents (with child entries for each dashboard) Team Web Access Dashboards Excel Reports Reports Process Guidance As a process author I want to add a new state so that I can track in-progress work "The Contoso development managers decide that all future projects should have a convenient means of identifying which work is currently in-progress. Wesley works with the teams and they agree that adding an "In Progress" state to their Task work item, and reflecting this in appropriate reports and queries would be the best solution. Wesley reviews the team project artifacts to determine where changes will be required, and then identifies how those changes should be packaged for new projects." Many teams find that the out-of-the-box Team Foundation Server process templates provide a good starting point. It is also common that some aspects of the process templates need to be modified to provide the best fit for an organization. Many of the components in a process template revolve around the definitions of the work item types. In particular most artifacts that users interact with for a team project relate to or depend on the work item type definitions. 
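As a concrete illustration of that dependency, work item queries, reports and dashboard Web Parts all refer to work item type and state names, so adding a state ripples through each of them. The sketch below shows the kind of query that would rely on a new state; the collection URL and project name are placeholders, and the client API shown is the standard Team Foundation Server 2010 object model rather than code taken from this white paper.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class InProgressTasks
{
    static void Main()
    {
        // Placeholder collection URL and team project name
        TfsTeamProjectCollection collection = TfsTeamProjectCollectionFactory
            .GetTeamProjectCollection(new Uri("http://tfs:8080/tfs/DefaultCollection"));
        WorkItemStore store = collection.GetService<WorkItemStore>();

        // This WIQL only returns results once the Task work item type
        // definition actually contains the new In Progress state.
        WorkItemCollection results = store.Query(
            "SELECT [System.Id], [System.Title], [System.AssignedTo] " +
            "FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyTeamProject' " +
            "AND [System.WorkItemType] = 'Task' " +
            "AND [System.State] = 'In Progress'");

        foreach (WorkItem item in results)
            Console.WriteLine("{0}: {1}", item.Id, item.Title);
    }
}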
Figure 3: Relationship between artifacts created for a team project For our scenario we look at what is required to add a new "State" to the Task work item type from the MSF Agile process template, and the flow-on changes needed to make that visible and useful to team members. Changes are required to: The Task work item type definition SQL Server Reporting Services reports Work item Queries Project portal Excel workbooks Project portal Team Web Access Web Parts Updating the Work Item Type Definition Adding a new state to the Task work item type definition requires adding the new In Progress state and adding transitions that allow the work item's state to change between it and the existing states Active and Closed. Figure 4 Follow the online guidance for How to: Change the Workflow of a Work Item Type Updating SQL Server Reporting Services Reports The SQL Server Reporting Services reports for a team project are specified in the process template. The RDL source files are located in the Reports folder. "Wesley reviews the reports that reference the Task work item and determines that the only change required is to add a specific color to use for In Progress tasks in the Remaining Work report. He uses the SQL Server Business Intelligence Development Studio tool to open the Reports\Remaining Work.rdl file, makes the change and saves the results in place." For more details on managing reports in the process template see the online guidance for Uploading Reports Using the Reports Plug-in. For more details on customizing SQL Server Reporting Services reports for Team Foundation Server see Creating and Managing Reporting Services Reports for Visual Studio ALM. Updating Work Item Queries The work item queries that are provisioned under the "Team Queries" node for a team project are specified in the process template. The source files are located in the WorkItem Tracking\Queries folder. "Wesley creates a new "Work In Progress" query and saves it into his working folder as WorkItem Tracking\Queries\WorkInProgress.wiq. He edits WorkItems.xml and adds an entry that references the new query source file and provisions it as "Work In Progress". For more details on adding queries to process templates see Add a Query to a Process Template Updating Project Portal Excel Workbooks The Excel workbooks provisioned to a project portal using the out-of-the-box templates are located within two SharePoint Features, one for the MSF Agile template and one for the MSF Formal (CMMI) template. The files are deployed within a Feature because after deployment the data connections in the workbooks are updated to point to the SQL Server Analysis Services cube for the Team Foundation Server instance and pivot table filters are set to restrict results to the team project's data. Note that the out-of-the-box Excel workbooks are only included in a project portal if the Team Foundation Server instance has an associated data warehouse and SQL Server Analysis Services cube. "Wesley decides to create one new Excel workbook to show which users have "in-progress" work items assigned to them and an updated version of the Task Progress workbook. Because SharePoint does not allow existing content to be overwritten by files in a SharePoint Feature, Wesley needs to choose a unique URL for the updated Task Progress workbook and then copy it to its desired location." 
He creates a new Feature ContosoReports which includes the following files:
- feature.xml
- elements.xml
- In Progress Work.xlsx
- Task Progress.xlsx

Feature.xml File
"Wesley references the elements.xml file and the two reports. He wants the workbooks to have their details (data connection, authentication settings and team project pivot table filter) updated so he references code through the ReceiverAssembly and ReceiverClass attributes that will execute after the Feature is activated:"

<!-- The xmlns value, Scope and the receiver assembly/class names below are
     reconstructed placeholders; the original listing was not fully preserved. -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="5A8979D8-2540-4443-A354-974ADF17E6C8"
         Title="Contoso Project Portal Reports"
         Scope="Web"
         ReceiverAssembly="Contoso.TfsPortals, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000"
         ReceiverClass="Contoso.TfsPortals.ContosoReportsReceiver">
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
    <ElementFile Location="In Progress Work.xlsx" />
    <ElementFile Location="Task Progress.xlsx" />
  </ElementManifests>
</Feature>

Elements.xml File
"Wesley creates the elements.xml file to provision the workbooks into the correct location within the site."

Activating Features from the process template
To have the Feature activated during process creation, a reference to its unique identifier (specified in the Feature.xml file) is placed in the WssTasks.xml file.

<?xml version="1.0" encoding="utf-8"?>
<tasks>
  <task id="SharePointPortal" ... >
    <taskXml>
      <Portal>
        ...
        <activateFeatures>
          <!-- TfsDashboardAgileMoss -->
          <feature featureId="0D953EE4-B77D-485b-A43C-F5FBB9367207" />
          <!-- TfsDashboardAgileQuickLaunch -->
          <feature featureId="1D363A6D-D9BA-4498-AD1A-9874ACA5F827" />
          <!-- ContosoReports -->
          <feature featureId="5A8979D8-2540-4443-A354-974ADF17E6C8" />
        </activateFeatures>
      </Portal>
    </taskXml>
  </task>
</tasks>

Updating Workbook Details
During activation of the out-of-the-box Features that provision workbooks, code is executed to set:
- the connection to the Analysis Services database for Team Foundation Server
- the Excel Services authentication settings
- the Team Project Hierarchy filter for the PivotTable report

While this same code cannot be called from custom Features, you can use the Team Foundation Server timer job to do the same work. Periodically the timer job runs over sites looking for portals that need to have their connection details updated. Note that the name of the connection in the workbook must be "TfsOlapReport" if it is to be updated by the timer job. To have the timer job re-process a site you need to clear the following properties on the SPWeb for the site:
- teamfoundation.dashboards.CubeConnectcion [spelled exactly as shown]
- teamfoundation.dashboards.ProjectMdxId
- teamfoundation.dashboards.Sso

This can be done from a Feature event receiver within the FeatureActivated method. The code will need to be built into an assembly and provisioned to the global assembly cache for the server that is running SharePoint Products. You can provision this assembly either manually or through a solution package for SharePoint Products. Once the Task Progress workbook has been provisioned to "Contoso Task Progress.xlsx" it can be moved to overwrite the out-of-the-box version during Feature activation. "Wesley creates a class to handle the Feature activation events" (a sketch follows at the end of this section).

Updating Project Portal Team Web Access Web Parts
"Wesley wants to change the Project Work Items Web Part on the My Dashboard page to show in progress Tasks. He considers whether he should replace the entire dashboard page with another that has the desired Web Parts, or whether he should write code to update the existing dashboard page after it has been provisioned. He starts by creating a new Feature called ContosoDashboards."
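Before looking at the dashboard options, here is the sketch of the ContosoReports activation receiver referred to under "Updating Workbook Details" above. It reuses the ContosoBaseFeatureReceiver helpers shown earlier; the class name and the "Reports" library URL (based on the ~site/Reports paths used elsewhere in this white paper) are illustrative assumptions.

using Microsoft.SharePoint;

namespace Contoso.TfsPortals
{
    // Hypothetical activation receiver for the ContosoReports Feature.
    class ContosoReportsReceiver : ContosoBaseFeatureReceiver
    {
        protected override void OnActivate()
        {
            // Overwrite the out-of-the-box Task Progress workbook with the
            // customized copy that was provisioned under a temporary name.
            // The "Reports" library URL is an assumption.
            ReplaceFile("Reports/Contoso Task Progress.xlsx",
                        "Reports/Task Progress.xlsx");

            // Clear the cached dashboard properties so the Team Foundation
            // Server timer job re-processes the workbook connections and
            // PivotTable filters for this site.
            MarkSiteForTimerJobProcessing();
        }
    }
}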
Replacing Existing Dashboard Pages
One option is to provision his own version of the My Dashboard page and copy it over the out-of-the-box version from a Feature event receiver. To do this would require:
- creating a Feature that contains the definition for a dashboard file and provisions it to a temporary name,
- a reference to a Feature event receiver that moves the file to "MyDashboards.aspx" and adds a Quick Launch navigation entry for the updated dashboard.

The XML used to provision dashboard pages is stored in the Feature folders and could be used as a starting point when creating Features that provision customized dashboards. For example, the XML for the My Dashboard page is stored with its Feature under the [SPHive]\TEMPLATE\FEATURES directory on a server that is running SharePoint Products. The feature.xml for a possible Feature that achieves this is:

<!-- The xmlns value, Scope and the receiver assembly/class names below are
     reconstructed placeholders; the original listing was not fully preserved. -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="3BDED29A-F078-4E61-85F7-370A41B6DF08"
         Title="Contoso Custom Dashboard"
         Scope="Web"
         ReceiverAssembly="Contoso.TfsPortals, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000"
         ReceiverClass="Contoso.TfsPortals.ContosoDashboardsReceiver">
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
    <ElementFile Location="dashboardlayouts.aspx" />
  </ElementManifests>
</Feature>

and the corresponding Feature event receiver (not reproduced here) would move the provisioned page to "MyDashboards.aspx" and add the Quick Launch navigation entry, using the ContosoBaseFeatureReceiver helpers shown earlier. The GUID for the ContosoDashboards Feature should also be added to the WssTasks.xml file so that it is activated by the New Team Project Wizard.

Editing Existing Dashboard Pages
Another customization option is to manipulate existing pages through the SharePoint object model after they have been provisioned. A Feature does not need to provision any actual content in order to have a Feature event receiver execute custom code, so this is an easy way to execute code in the context of a specific site. In order to change a property on a Web Part on the My Dashboard page you would need to:
1. Get a reference to the SPFile object for the dashboard page
2. Get the Web Part Manager for the file
3. Iterate over the Web Parts in the manager and make any desired changes to the Web Parts
4. Save the changes.

A sample Feature event receiver that performs this is below:

using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebPartPages;

namespace Contoso.TfsPortals
{
    class ContosoUpdateRecentBuilds : ContosoBaseFeatureReceiver
    {
        protected override void OnActivate()
        {
            // Open the My Dashboard page provisioned into the Dashboards library
            SPFile dashboard = Web.GetFile("Dashboards/MyDashboards.aspx");
            SPLimitedWebPartManager manager;
            using (manager = dashboard.GetLimitedWebPartManager(PersonalizationScope.Shared))
            {
                foreach (System.Web.UI.WebControls.WebParts.WebPart part in manager.WebParts)
                {
                    using (part)
                    {
                        // ... make changes to the Web Part's properties here ...

                        // save changes
                        manager.SaveChanges(part);
                    }
                }
            }
        }
    }
}

"Wesley decides that replacing the My Dashboard page with his own definition is the best choice since it is simpler to code and makes future changes to the page easier."

Packaging and Deploying the Solution
"Wesley identifies all of his customizations. The process template changes include the modified work item type definitions, updated SQL Server Reporting Services reports, new work item queries and references to activate the two new SharePoint Features. He uploads the new process template to his team project collection. The changes to SharePoint Products include the two new workbooks, an assembly to execute activation actions and the Feature definition files. He packages these as the ContosoReports and ContosoDashboards Features."

As a process author I want to change the default set of dashboard pages for new projects so that I get my process dashboards instead of Agile or CMMI
The various teams have manually customized the dashboards in the project portals for their team projects and have come together for a review. The review identifies that there are a common set of dashboard changes which should be included in all future project portals. The changes include: A new Executives dashboard containing three new Excel charts and two Team Web Access Query Web Parts. This should be the default project portal page. Provision new versions of the standard Burndown, Quality and Test dashboards Removal of the Build dashboard Provisioning of the Project dashboard Wesley reviews the list of requirements and identifies that he could use one Feature to: Provision the new Executives dashboards Ensure that the TfsDashboardWssAgileContent Feature is activated (which provisions the Project dashboard) Execute code that will: Add Quick Launch links to the dashboard pages Set the default dashboard page to the Executives dashboard Because there are quite a few changes to the dashboards normally provisioned for the Agile template, Wesley identifies that he doesn't want the default Team Foundation Server Feature (TfsDashboardAgileMoss) to be activated from the Template. Instead he will activate his own Feature that contains modified copies of some dashboards and control which other Team Foundation Server Features are activated." Creating and Provisioning a Dashboard To provision a Custom Dashboard as part of a solution it is often easiest to start with an out-of-the-box Feature and make required modifications. See Section 3.2.4 Using Out-of-the-box Content as Your Starting Point for more details.. Each dashboard is provisioned as a File in a Module. For details on the syntax available, see the online documentation for Modules. To add Web Parts to a dashboard page you use the AllUserWebPart element. The basic structure of the element is: The content of the CDATA section defines a single Web Part that is added to the specified Web Part Zone. If multiple Web Parts are added to the Web Part zone then they are ordered based on the WebPartOrder property. The CDATA section defines the full class name of the Web Part and the property values that should be set for that instance of the Web Part. It can sometimes be tricky to get the property syntax correct, especially when dealing with non-trivial property types. One approach that works is to get SharePoint to do the work for you. First, manually create a Web Part Page in a site and add your desired Web Parts to the page. Ensure that the "Export Mode" Web Part property under the "Advanced" Web Part editor section is set to "Export all data". Configure the value of the properties to your liking then use the "Export..." option for the Web Part. Figure 5 This should allow you to save a .webpart file that can be copied directly into the CDATA section. The .webpart file contains the current value of all of the Web Part's exportable properties, many of which you can delete if you wish to use the default values when provisioning your dashboard page. 
Putting this together, to provision a custom dashboard titled "Executive Dashboard" that displays a single Excel report called "ExecutiveStatus.xlsx" you could create an elements file with the following: <Elements xmlns=""> <Module Name="ExecutivePage" List="101" Url="Dashboards"> <File Name="Executive.aspx" Type="GhostableInLibrary" Url="dashboardlayouts.aspx"> <Property Name="Title" Value="Executive Dashboard" /> <AllUsersWebPart WebPartZoneID="<ZoneID>" WebPartOrder="<Order>"> <![CDATA[ <webParts> <webPart xmlns=""> <metaData> <type name="Microsoft.Office.Excel.WebUI.ExcelWebRenderer, Microsoft.Office.Excel.WebUI, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" /> <importErrorMessage>Cannot import this Web Part.</importErrorMessage> </metaData> <data> <properties> <property name="Width" type="string">285pt</property> <property name="WorkbookUri" type="string">~site/Reports/ExecutiveStatus.xlsx</property> <property name="AllowInteractivity" type="bool">False</property> <property name="AutoGenerateDetailLink" type="bool">True</property> <property name="AutoGenerateTitle" type="bool">False</property> <property name="Title" type="string">Executive Status</property> <property name="AllowParameterModification" type="bool">True</property> <property name="AllowNavigation" type="bool">False</property> <property name="ShowWorkbookParameters" type="bool">False</property> <property name="Height" type="string">188pt</property> <property name="ToolbarStyle" type="Microsoft.Office.Excel.WebUI.ToolbarVisibilityStyle, Microsoft.Office.Excel.WebUI, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c">None</property> </properties> </data> </webPart> </webParts> ]]> </AllUsersWebPart> </File> </Module> </Elements> A detailed discussion on the properties of various Web Parts is not within this document's scope. It is however worth pointing out some specific property values: Excel Web Access Web Part If either AllowInteractivity or AllowNavigation are set to True then the Excel Web Access Web Part generates additional browser cookies in the HTTP response. These cookies can hold a substantial amount of data. While the EWAs manage the total amount of data taken up by these cookies we have encountered scenarios where the data exceeds the available space in the HTTP header and causes a "Bad Request" response from the web server. If you don't specifically need the interactivity we suggest you set both properties to False. The WorkbookUri property can include the "~site" token which is replaced by the path to your site during provisioning. To avoid additional whitespace or unwanted scroll bars, we suggest that you set the Width and Height properties to match the dimensions of your chart. Query Results Web Part The Query and ServerQuery properties work in tandem. If ServerQuery is True then Query is the name of the Team Query on the server that will be executed (where the path start with "Team Queries/"). If ServerQuery is False then Query is the text of the Work Item Query that will be executed. Rather than generating the WIQL text by hand it is usually easier to export the property values from a QueryResultsWebPart Web Part that has been configured to the correct values through the web UI. The PersistenceStorage property affects how the query results are displayed in the Web Part. Without specifying this property the query results will have the columns and row order defined in the WIQL query and the result columns will have widths based on default values for their data types. 
By specifying this property you can change which columns are returned from the query (including dropping all the columns defined in the WIQL), what the sort order for the rows is, and specific column widths. Again, the simplest approach to getting this property value correct is to export a previously configured Web Part and copy the property value. When working with a ServerQuery set to True the columns and sort order should be customized by manually resizing the column widths in the browser and using the dialog accessed from the "Column Options" toolbar button. After making the changes, select "Save Query Changes" from the Web Part's menu before exporting the .webpart file. Figure 6 When working with ServerQuery set to False the columns, widths and sort order should be customized through the Query Picker that is accessed from the Web Part's editor. After applying the changes you can export the .webpart file to get the property values. It will probably be worth specifying the Width and Height properties for the Web Part. By default the Height of the result area is set to 600 pixels and the Width property is set to 100% (which collapses the Web Part to only show the Web Part title and any toolbar specified). If the Width property is set and a toolbar is used (either Minimal or Full) then the Width property should be set wide enough to avoid clipping off the toolbar. Work Item Summary Web Part The WorkItemSummaryWebPart summarizes the work items returned from a query by grouping and counting the results based on the Work Item Type and State fields. The actual columns and sort order of the underlying query are not used in the calculations of the totals, only the filters for top level work items. The underlying query's columns and sort order are used if a user clicks on one of the summary totals which then displays the query results for that particular State and/or Work Item Type. "Wesley uses the approach above to generate the AllUsersWebPart entries for each of the five Web Parts on the Executive dashboard. The three Excel Web Access Web Parts display data from three new Excel workbooks which Wesley includes in the Feature. For the modified Burndown, Quality and Test dashboards Wesley copies their definitions from the elements.xml file in the TfsDashboardMossAgileContent Feature into his own Feature and modifies their definitions accordingly." Activating Other Features Activating Features can be done in a variety of ways: From the process template, by adding a new <feature> element Through the API for SharePoint Products By defining a Feature activation dependency "Firstly Wesley changes the process template to avoid activating the TfsDashboardAgileMoss Feature. He does this by removing the corresponding element (<feature featureId="0D953EE4-B77D-485b-A43C-F5FBB9367207" />) from WssTasks.xml. Wesley decides that activating the TfsDashboardWssAgileContent Feature should not be included as a direct dependency of his custom Feature so he scopes the activation to the process template by adding a new <feature> element He also adds a <feature> element for his custom Feature." Adding links to Quick Launch "To add the dashboards to the site's Quick Launch navigation Wesley defines a Feature event receiver that calls the ContosoBaseFeatureReceiver.AddDashboardNavigation method. 
using Microsoft.SharePoint;

namespace Contoso.TfsPortals
{
    class ContosoCustomDashboards : ContosoBaseFeatureReceiver
    {
        protected override void OnActivate()
        {
            // Add Quick Launch entries for each of the provisioned dashboards
            AddDashboardNavigation(Web.GetFile("Dashboards/Executive.aspx"));
            AddDashboardNavigation(Web.GetFile("Dashboards/Burndown.aspx"));
            AddDashboardNavigation(Web.GetFile("Dashboards/Quality.aspx"));
            AddDashboardNavigation(Web.GetFile("Dashboards/Test.aspx"));
        }
    }
}

Setting a Default Dashboard
The Team Foundation Server dashboards include a control that allows redirection to a "default dashboard". The control is included in the "default.aspx" page which is provisioned by the TfsDashboardBaseUI Feature when using one of the Team Foundation Server site definition configurations. When executing, the control looks to see if a default dashboard has been set. The default site-relative URL (if it exists) is stored in the Feature property bag for the TfsDashboardBaseContent Feature. If the property doesn't exist, then the control will look for the Dashboards document library and pick the first .aspx item in the list. To set the default dashboard you can either set the Feature property or make a call to the SetDefaultDashboardPage.aspx page.

Setting the default dashboard through the API
To set a default dashboard URI through the API you need to:
1. Get the SPFeature instance for the TfsDashboardBaseContent Feature that is activated for your SPWeb
2. Get the properties for the SPFeature
3. Add or update the property that stores the default dashboard URL
4. Update the Feature's properties

The code to do this is contained in the ContosoBaseFeatureReceiver.SetDefaultDashboard method defined earlier. "Wesley decides to use this method to set the Executives dashboard as the default dashboard".

Setting the default dashboard through a web call
Setting the default dashboard through a web call involves constructing a query string that contains the GUID for the Dashboards document library and the ID of the item within the library. The URL for the SetDefaultDashboardPage is ~site/_layouts/Microsoft.TeamFoundation/SetDefaultDashboardPage.aspx?List={ListId}&Item={ItemId}, where ~site is the URL for the site, {ListId} is the GUID for the Dashboards document library using the "B" format specifier for the System.Guid structure, and {ItemId} is the integer ID of the dashboard page within the document library (note that this is not the index of the item in the collection of items for the list).

// Requires the System, System.Net, Microsoft.SharePoint and
// Microsoft.SharePoint.Utilities namespaces for the types used below.
void SetDefaultDashboard(SPListItem dashboard)
{
    string path = SPUrlUtility.CombineUrl(dashboard.Web.Url,
        "_layouts/Microsoft.TeamFoundation/SetDefaultDashboardPage.aspx");
    UriBuilder uri = new UriBuilder(path);
    uri.Query = String.Format("List={0:B}&Item={1}", dashboard.ParentList.ID, dashboard.ID);
    WebRequest request = WebRequest.Create(uri.Uri);
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    // expect that response.StatusCode == HttpStatusCode.Redirect
}

Packaging and Deploying the Solution
"Wesley identifies all of his customizations. His process template changes include changes to the Features activated by the process template. He uploads the new process template to his team project collection.
He has created one new Feature that: Provisions three new Excel workbooks Provisions the new Executives dashboard Provisions customized copies of the Burndown, Quality and Test dashboards Calls a Feature event receiver He has created an assembly that implements the Feature's event receiver which on activation: Adds Quick Launch entries for each dashboard Sets the Executives dashboard as the default page for the site He packages the assembly and the Feature files custom Feature." As a process author I can change the visual appearance of the site so I can have my own branding "Wesley is asked to consult for a Contoso customer who is starting to use Team Foundation Server. The customer likes the out-of-the-box Team Foundation Server sites but wants to change the branding to reflect their own colors and logos." The branding for the project portals uses standard customization techniques in the UI for SharePoint Products. The branding consists of: Deployment of image and CSS files through the WSP mechanism Setting the site logo in a Feature event receivers Deployment of a theme for SharePoint Products and applying that to the site In-line CSS styles and HTML mark-up in the pages used to deploy Dashboards Using the solution package deployment mechanism for SharePoint Products, files can be deployed to the file system of all Web front-end servers in a SharePoint farm. In particular files can be deployed into the "SharePoint hive" which has sub-folders for administrative pages ([SPHive]\TEMPLATE\LAYOUTS), Themes ([SPHive]\TEMPLATE\THEMES), images ([SPHive]\IMAGES), etc. Site Logos The site logo for a site is defined by a server-relative URL that points to the image file to use. For example, the default Team Site for Windows SharePoint Services 3.0 has its site logo path set to /_layouts/images/titlegraphic.gif. To change the logo for new sites, first have your logo deployed to a folder in the SharePoint hive. Setting the logo for the site can then be done through the API for SharePoint Products, or through the site definition. To set the site logo through the API set the SPWeb.SiteLogoUrl property in a Feature event receiver that is activated during site setup. For more information about this property, see the following page on the Microsoft Web site: SPWeb.SiteLogoUrl Property (Microsoft.SharePoint). When working with project portals that use the Team Foundation Server branding, you need to use this method to set the SiteLogoUrl after the Team Foundation Server TfsDashboardBaseUI Feature has been activated; otherwise, it will get overwritten. To set the site logo through a site definition file, first identify the ONET.xml file used for your site definition (usually under the SharePoint hive in [SPHive]\TEMPLATE\SiteTemplates\<TEMPLATENAME>\xml\ONET.xml). In this file the <Project> element allows a SiteLogoUrl attribute to be set. Themes for SharePoint Products When deployed on a server that is running Windows SharePoint Services 3.0 or Microsoft Office SharePoint Server 2007, the project portals apply a theme to SharePoint Products. Themes are collections of images and CSS files that can be linked to a site and override the default HTML styling. The project portals use a theme named "TFSDASHBOARDS" which is deployed under the SharePoint hive in [SPHive]\TEMPLATES\THEMES\TFSDASHBOARDS. Guidance on creating themes for SharePoint Products is available online. 
To set the Theme used for a project portal, ensure the Theme files have been deployed and then use a Feature event receiver to call the SPWeb.ApplyTheme(themeName) method. When deployed to SharePoint Server 2010 the TFSDASHBOARDS theme is not used and the project portal uses the site's default styling. In-line Styling for Dashboards The layout and styling of the Team Foundation Server dashboard pages is determined by a dashboardslayout.aspx file that is deployed as part of the Team Foundation Server SharePoint Features. Each Feature that creates a Dashboard page provisions a "ghosted" copy of its dashboardslayout.aspx file, and then adds Web Part content to the page. When provisioning your own Dashboard pages, you can take a copy of the file, incorporate it into your Feature and then deploy one or more Dashboard pages from this file. Because the pages are initially ghosted any changes to the dashboardlayouts.aspx file will be picked up immediately in the web browser. The dashboardlayouts.aspx file defines the structure of the page including the Team Foundation Server dashboard toolbar and the location of the Web Part zones. It also defines a small number of CSS rules that help with spacing and alignment of the Web Parts. Most of the border or color information for the dashboard pages is set by the containing site. Dashboard Toolbar The toolbar that appears on each dashboard page contains controls that provide access to team project data or perform operations on the dashboards and Excel reports. Three of the controls are provisioned as delegated controls: Portal Link New Work Item Link Go To Work Item The property values used for the controls are defined in the Element Manifest of the TfsDashboardBaseContent Feature. To change the text you can provision a new Control that will override the default version by specifying a lower sequence number (the dashboards use 50). For example to change the New Work Item control to use the name "Create Work Item" you would add the following to your Feature's element manifest file: <Control Id="NewWorkItemLinks" Sequence="30" ControlAssembly="Microsoft.TeamFoundation.WebAccess.WebParts, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" ControlClass="Microsoft.TeamFoundation.WebAccess.WebParts.NewWorkItemLinksUserControl"> <Property Name="ID">newWorkItemLinks</Property> <Property Name="TitleText">Create Work Item</Property> <Property Name="DisplayMode">PopupMenu</Property> <Property Name="TfsName">@default</Property> </Control> The text for the Copy Dashboard toolbar button can be changed by adding the Text attribute to the control within the dashboardslayout.aspx file. The text for the New Excel Report toolbar button cannot be overridden. Summary "Wesley recommends that the customer creates a Solution Package (WSP) that contains: the company's logo that will be deployed to [SPHive]\TEMPLATE\LAYOUTS\IMAGES a theme for SharePoint Products that is based on the TFSDASHBOARDS Theme and modified to use the customer's branding colors; an assembly that contains a class that acts as a Feature event receiver which, on activation, sets the site's logo and applies the Theme; a SharePoint Feature that references the class as its Feature event receiver. The Feature will be activated after the out-of-the-box Team Foundation Server SharePoint Features by appending a <feature> element to the WssTasks.xml file. 
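A minimal sketch of the kind of receiver described in this summary is shown below. The class name, logo file name and theme name are illustrative assumptions; SPWeb.SiteLogoUrl and SPWeb.ApplyTheme are the property and method referred to in the preceding sections.

using Microsoft.SharePoint;

namespace Contoso.TfsPortals
{
    // Hypothetical branding receiver. Assumes the logo has been deployed under
    // [SPHive]\TEMPLATE\LAYOUTS\IMAGES and a theme folder named "CONTOSO"
    // (based on TFSDASHBOARDS) has been deployed under [SPHive]\TEMPLATE\THEMES.
    public class ContosoBrandingReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            SPWeb web = (SPWeb)properties.Feature.Parent;

            // Server-relative URL of the deployed logo image
            web.SiteLogoUrl = "/_layouts/images/contoso-logo.gif";
            web.Update();

            // Apply the customer's theme (WSS 3.0 / MOSS 2007 style themes)
            web.ApplyTheme("CONTOSO");
        }

        public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
        public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
        public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
    }
}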
Wesley advises that once the code changes are made, the new WSP is deployed to the server, and the modified process template is uploaded to the project collection, then new projects created with that template will get the updated branding."

Figure 7 shows a sample of the types of customizations that could be made, which include:
- changing the site logo
- applying a different Theme
- reorganizing the layout of the dashboard page
- changing names on the dashboard toolbar

Figure 7

Team Foundation Server integrates with SharePoint Products to encourage collaboration between team members. Project portals provide access to Team Foundation Server data within a SharePoint site, and the project portals can be customized to suit a team or organization. This document introduced important concepts about project portals. It described the components that work together to create the project portals used with the default Team Foundation Server process templates. It has shown a number of ways in which customizations can be developed so that new project portals will include those changes. The types of customizations illustrated include adding new content, removing or modifying standard content, and changes to the user interface for a project portal.

SharePoint Products and Technologies Customization Best Practices
SharePoint Developer Center
http://msdn.microsoft.com/en-us/library/ff678492(d=printer,v=vs.100).aspx
CC-MAIN-2013-20
en
refinedweb
04 May 2009 16:55 [Source: ICIS news] WASHINGTON (ICIS news)--The March pace of non-residential construction spending - including both government and private sector building - also was 1.7% ahead of the same month in 2008, the department said in its monthly report. Non-residential building activity includes construction of hotels, office space, hospitals and clinics, schools, highways and roads, and sewage and water treatment facilities, among others. Some economists believe that a recovery in the non-residential construction market may presage a return of growth for the crucial single-family home building industry. Both residential and non-residential construction are key downstream consuming sectors for a wide variety of chemicals, resins and derivatives. The gain in non-residential construction in March also is noteworthy for being due almost wholly to a 2.9% improvement in private-sector spending on manufacturing facilities compared with February. The $83.9bn spent on new or improved manufacturing plants also marked a 64.4% gain compared with March 2008, when the figure was $51bn. To some, this suggests that manufacturers are gearing up for what they believe will be a general economic recovery in the not too distant future. There were other but more modest gains in private spending on construction for lodging (up 5.3% from February), commercial properties (up 1.5%) and communications facilities (up 1.7%). Government spending on non-residential construction - schools, roads, sewage and water treatment plants, etc. - increased in March by 1.2% to $301bn and was 2.5% ahead of March 2008. Spending by government at all levels - national, state and local - was expected to increase further as the $787bn federal stimulus package, approved in mid-February, begins to filter through the nation's economy. ($1 = €0.75).
http://www.icis.com/Articles/2009/05/04/9213107/us-non-residential-construction-gains-on-manufacturing.html
CC-MAIN-2013-20
en
refinedweb
Feb 23, 2012 07:52 PM

When I work with the WebServicesClientProtocol class, there is no available RequestSoapContext field; only Equals, ReferenceEquals, and GenerateXmlMappings. How can I make the RequestSoapContext field available?

Contributor 7128 Points
Feb 23, 2012 08:02 PM

Did you add the namespace and DLL to your project -- Microsoft.Web.Services2?

Contributor 7128 Points
Feb 23, 2012 08:52 PM

If you don't mind, can you post your code?

Member 42 Points
Feb 27, 2012 07:15 AM

Hi, please see the code below: You should create a new WebServicesClientProtocol object and then call the RequestSoapContext method.

4 replies Last post Feb 27, 2012 07:15 AM by LightSwitch
http://forums.asp.net/t/1773161.aspx/1
CC-MAIN-2013-20
en
refinedweb
Operator Overloading

The operator keyword declares a function specifying what operator-symbol means when applied to instances of a class. This gives the operator more than one meaning, or "overloads" it. The compiler distinguishes between the different meanings of an operator by examining the types of its operands.

You can redefine the function of most built-in operators globally or on a class-by-class basis. Overloaded operators are implemented as functions. The name of an overloaded operator is operator x, where x is the operator as it appears in the following table. For example, to overload the addition operator, you define a function called operator+. Similarly, to overload the addition/assignment operator, +=, define a function called operator+=.

Two versions of the unary increment and decrement operators exist: preincrement and postincrement. See General Rules for Operator Overloading for more information. The constraints on the various categories of overloaded operators are described in the following topics:

The operators shown in the following table cannot be overloaded. The table includes the preprocessor symbols # and ##.

Although overloaded operators are usually called implicitly by the compiler when they are encountered in code, they can be invoked explicitly the same way as any member or nonmember function is called.

The following example overloads the + operator to add two complex numbers and returns the result.

// operator_overloading.cpp
// compile with: /EHsc
#include <iostream>
using namespace std;

struct Complex {
   Complex( double r, double i ) : re(r), im(i) {}
   Complex operator+( Complex &other );
   void Display( ) { cout << re << ", " << im << endl; }
private:
   double re, im;
};

// Operator overloaded using a member function
Complex Complex::operator+( Complex &other ) {
   return Complex( re + other.re, im + other.im );
}

int main() {
   Complex a = Complex( 1.2, 3.4 );
   Complex b = Complex( 5.6, 7.8 );
   Complex c = Complex( 0.0, 0.0 );

   c = a + b;
   c.Display();
}

Reference
C++ Operators
C++ Keywords
http://msdn.microsoft.com/en-us/library/5tk49fh2(v=vs.80).aspx
CC-MAIN-2013-20
en
refinedweb
I mentioned in my first post, I have just finished an extensive tech job search, whichfeatured eight on-sites, along with countless phone screens and informal chats.I wasinterviewing for a combination of data science and software engineering (machinelearning)positions, and I got a pretty good sense of what those interviews are like. In this post, I give anoverview of what you should expect in a data science interview, and some suggestions for how toprepare.An interview is not a pop quiz. You should know what to expect going in, and youcan take thetime to prepare for it. During the interview phase of the process, your recruiter is on your sideand can usually tell you what types of interviews youll have. Even if the recruiter is reluctant toshare that, common practices in the industry are a good guide to what youre likely to see.In this post, Ill go over the types of data science interviews Ive encountered, and offermy advice on how to prepare for them. Data science roles generally fall into twobroad ares offocus: statistics and machine learning. I only applied to the latter category, so thats the type ofposition discussed in this post. My experience is also limited to tech companies, so I cant offerguidance for data science in finance, biotech, etc..Here are the types of interviews (or parts of interviews) Ive come across.Always:Coding (usually whiteboard) Your backgroundOften:Culture fit Dataset analysis Stats You will encounter a similar set of interviews for a machine learning software engineeringposition, though more of the questions will fall in the coding category. If you dont have much software engineering experience, see if you can get afriend to look over your practice code and provide feedback.During the interview:Make sure you understand exactly what problem youre trying to solve. Askthe interviewer questions if anything is unclear or underspecified. Make sure you explain your plan to the interviewer before you start writingany code, so that they can help you avoid spending time going down lessthan-ideal paths. Mention what invalid inputs youd want to check for (e.g. input variable typecheck). Dont bother writing the code to do so unless the interviewer asks. Inall my interviews, nobody has ever asked. Before declaring that your code is finished, think about variable initialization,end conditions, and boundary cases (e.g. empty inputs). If it seems helpful,run through an example. Youll score points by catching your bugs yourself,rather than having the interviewer point them out.Applied machine learning All the applied machine learning interviews Ive had focused on supervised learning. Theinterviewer will present you with a prediction problem, and ask you to explain how you wouldset up an algorithm to make that prediction. The problem selected is often relevant to thecompany youre interviewing at (e.g. figuring out which product to recommend to auser, whichusers are going to stop using the site, which ad to display, etc.), but can alsobe a toy example (e.g. recommending board games to a friend). This type of interview doesnt dependon muchbackground knowledge, other than having a general understanding of machine learningconcepts (see below). However, it definitely helps to prepare by brainstorming the types ofproblems a particular company might ask you to solve. Even if you miss the mark,thebrainstorming session will help with the culture fit interview (also see below).When answering this type of question, Ive found it helpful to start by laying outthe setup of theproblem. 
What are the inputs? What are the labels youre trying to predict? What machinelearning algorithms could you run on the data? Sometimes the setup will be obvious from thequestion, but sometimes youll need to figure out how to define the problem. In the latter case,youll generally have a discussion with the interviewer about some plausible definitions (e.g.,what does it mean for a user to stop using the site?).The main component of your answer will be feature engineering. There is nothingmagical aboutbrainstorming features. Think about what might be predictive of the variable youare trying topredict, and what information you would actually have available. Ive found it helpful to givecontext around what Im trying to capture, and to what extent the features Im proposing reflectthat information.For the sake of concreteness, heres an example. Suppose Amazon is trying to figure out whatbooks to recommend to you. (Note: I did not interview at Amazon, and have no idea what theyactually ask in their interviews.) To predict what books youre likely to buy, Amazon can look forbooks that are similar to your past Amazon purchases. But maybe some purchases were mistakes,and you vowed to never buy a book like that again. Well, Amazon knows how youve interactedwith your Kindle books. If theres a book you started but never finished, it mightbe a positivesignal for general areas youre interested in, but a negative signal for the particular author. Ormaybe some categories of books deserve different treatment. For example, if a year ago you werebuying books targeted at one-year-olds, Amazon could deduce that nowadays youre looking forbooks for two-year-olds. Its easy to see how you can spend a while exploring thespace betweenwhat youd like to know and what you can actually find out.Your backgroundYou should be prepared to give a high-level summary of your career, as well as to do a deep-diveinto a project youve worked on. The project doesnt have to be directly related tothe positionyoure interviewing for (though it cant hurt), but it needs to be the kind of workyou can have anin-depth technical discussion about.To prepare: Practice explaining your project to a friend in order to make sure you aretelling a coherent story. Keep in mind that youll probably be talking tosomeone whos smart but doesnt have expertise in your particular field. Be prepared to answer questions as to why you chose the approach that youdid, and about your individual contribution to the project.Culture fitHere are some culture fit questions your interviewers are likely to be interested in. Thesequestions might come up as part of other interviews, and will likely be asked indirectly. It helpsto keep what the interviewer is looking for in the back of your mind.Are you specifically interested in the product/company/space youdbe working in? It helps to prepare by thinking about the problems thecompany is trying to solve, and how you and the team youd be part of couldmake a difference. Will you work well with other people? I know its a clich, but most workis collaborative, and companies are trying to assess this as best they can.Avoid bad-mouthing former colleagues, and show appreciation for theircontributions to your projects. Are you willing to get your hands dirty? If theres annoying work thatneeds to be done (e.g. cleaning up messy data), will you take care of it? 
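Before the machine learning concepts listed next, here is a minimal sketch that ties the applied machine learning and feature-engineering discussion above to actual code. Everything in it is hypothetical: the book-purchase scenario, the column names, and the randomly generated labels are invented purely for illustration and are not taken from any real interview or data set. It only shows the shape of the setup most of these interviews assume: brainstormed features in a table, a train/test split, and a logistic regression.

# A minimal, self-contained sketch of the applied-ML setup discussed above.
# All column names and the "did the user buy the book?" label are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Features of the kind you might brainstorm for a book-recommendation problem.
X = pd.DataFrame({
    "similarity_to_past_purchases": rng.uniform(0, 1, n),
    "started_but_never_finished": rng.integers(0, 2, n),
    "days_since_last_category_buy": rng.integers(1, 365, n),
})

# Fake label, generated only so the example runs end to end.
y = (X["similarity_to_past_purchases"]
     - 0.3 * X["started_but_never_finished"]
     + rng.normal(0, 0.2, n) > 0.5).astype(int)

# Hold out a test set so the model is evaluated on data it has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression()  # the model most of these interviews assume
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

Holding out a test set and checking the score on it, as the last three lines do, is exactly the habit the concepts below are getting at.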
Why you want to split data into training and test sets The idea that models that arent powerful enough cant capture the rightgeneralizations about the data, and ways to address this (e.g. different modelor projection into a higher-dimensional space) The idea that models that are too powerful suffer from overfitting, and waysto address this (e.g. regularization)You dont need to know a lot of machine learning algorithms, but you definitely need tounderstand logistic regression, which seems to be what most companies are using.I also hadsome in-depth discussions of SVMs, but that may just be because I brought them up.Dataset analysisIn this type of interview, you will be given a data set, and asked to write a script to pull outfeatures for some prediction task. You may be asked to then plug the features into a machinelearning algorithm. This interview essentially adds an implementation componentto the appliedmachine learning interview (see above). Of course, your features may now be inspired by whatyou see in the data. Do the distributions for each feature youre considering differ between thelabels youre trying to predict?I found these interviews hardest to prepare for, because the recruiter often wouldnt tell me whatformat the data would be in, and what exactly Id need to do with it. (For example, do I need toreview Pythons csv import module? Should I look over the syntax for training a model in scikitlearn?) I also had one recruiter tell me Id be analyzing big data, which was a bit intimidating(am I going to be working with distributed databases or something?) until I discovered at theinterview that the big data set had all of 11,000 examples. I encourage you to push for as muchinfo as possible about what youll actually be doing.If you plan to use Python, working through the scikit-learn tutorial is a good way to prepare.StatsI have a decent intuitive understanding of statistics, but very little formal knowledge. Most of thetime, this sufficed, though Im sure knowing more wouldnt have hurt. You should understandhow to set up an A/B test, including random sampling, confounding variables, summary statistics(e.g. mean), and measuring statistical significance.Preparation Checklist & ResourcesHere is a summary list of tips for preparing for data science interviews, alongwith a few helpfulresources.1. Coding (usually whiteboard)o Get comfortable with basic algorithms, data structures and figuring outalgorithm complexity. oPractice writing code away from the computer in your programminglanguage of choice.oResources:Pretty exhaustive list of what you might encounter in aninterview oGet comfortable with a set of technical tools for working with data.oResources:If you plan to use Python, work through the scikit-learntutorial (you could skip section 2.4).7. StatsoGet familiar with how to set up an A/B test.oResources:Quora answer about how to prepare for interview questionsabout A/B testing Sample size calculator, which you can use to get some intuitionabout sample sizes required based on the sensitivity (i.e.minimal detectable effect) and statistical significance yourelooking forThe Interview Process: What a Company WantsI have just finished a more extensive tech job search than anyone should reallydo. 
Itfeatured eight on-sites, along with countless phone screens and informal chats.There were a fewreasons why I ended up doing things this way: (a) I quit my job when my husbandand I movedfrom Boston to San Francisco a few months ago, so I had the time; (b) I wasnt sure what I waslooking for big company vs. small, data scientist vs. software engineer on a machine learningsystem, etc.; (c) I wasnt sure how well it would all go.This way of doing a job search turned out to be an awesome learning experience.In this seriesof posts, Ive tried to jot down some thoughts on what makes for a good interviewprocess, bothfor the company and for the candidate. I was interviewing for a combination of data science andsoftware engineering positions, but many observations should be more broadly applicable.What are we trying to do here, anyway?Before we can talk about what is a good or bad interview process, we need to und erstand thecompanys objectives. Here are some things your company might be trying to do, orperhapsshould be trying to do. Note that Im focusing on the interview stage here; thereare manyseparate questions about finding/filtering candidates. The candidate will be more positive when discussing your company with theirfriends. Its a small world. Even if you dont want to hire the candidate right now, you might want to hirethem in a year. There is intrinsic merit in being nice to people as theyre going through whatis often a stressful experience.Feel good doing it: Make sure the interviewers have a positive interviewexperience.As someone on the other side of the fence, this one is harder for me to reason about. But here aresome thoughts on why this is important:Your employees might be spending a lot of time interviewing (as much as 10hours a week during the fall recruiting season), and you dont want them tobe miserable doing it. If the interviewer is grumpy, the candidate will be less likely to think well ofthe company (see above). One of the companies I interviewed at requiresinterviewers to submit detailed written feedback, which resulted in themdedicating much of their attention to typing up my whiteboard code duringthe interview. More than one interviewer expressed their frustration with theprocess. Even if they were pretty happy with their job most of the time, itcertainly didnt come across that way.In the next post, Ill take a look at some job postings. Do you have thoughts onother goals companies should strive for? Please comment!kGet that job at GoogleI ve been meaning to write up some tips on interviewing at Google for a good longtime gonna get fiiiiiiiiiiredOooh yeah baaaby baaaay-beeeeee....I didn t realize this was such a typical reaction back when I first started writingabout interviewing, way back at other companies. Boy-o-howdy did I find out in ahurry.See, it goes like this:Me: blah blah blah, I like asking question X in interviews, blah blah blah...You: Question X? Oh man, I haven t heard about X since college! I ve never neededit for my job! He asks that in interviews? But that means someone out there thinksit s important to know, and, and... I don t know it! If they detect my ignorance, not those positions.These tips are actually generic; there s nothing specific to Google vs. any othersoftware company. I could have been writing these tips about my first software job20 years ago. That implies that these tips are also timeless, at least for the span ofour careers.These tips obviously won t get you a job on their own. 
My hope is that by followingthem you will perform your very best during the interviews.Oh, and um, why Google?Oho! Why Google, you ask? Well let s just have that dialog right up front, shallwe?You: Should I work at Google? Is it all they say it is, and more? Will I be serenelyhappyat mycurrent company, or at least I have become relatively inured to the discomfort.Iknow people here and nobody at Google! I would have to learn Google s buildsystem and technology and stuff! I have no credibility, no reputation there I wouldhave to start over virtually from scratch! I waited too long, there s no upside!I mafraaaaaaid!Me: DUDE. The answer is Yes already, OK? It s an invariant. Everyone else whocame to Google was in the exact same position as you are, modulo a handful offamous people with beards that put Gandalf s to shame, but they re a very tinyminority. Everyone who applied had the same reasons for not applying as you do.And everyone here says: "GOSH, I SURE AM HAPPY I CAME HERE!" So just applyalready. But prep first.You: But what if I get a mistrial? I might be smart and qualified, but for somerandom reason I may do poorly in the interviews and not get an offer! That wouldbea huge blow to my ego! I would rather pass up the opportunity altogether than havea chance of failure!Me: Yeah, that s at least partly true. Heck, I kinda didn t make it in on my firstattempt, but I begged like a street dog until they gave me a second round of interviews. I caught them in a weak moment. And the second time around, Iprepared, and did much better.The thing is, Google has a well-known false negative rate, which means wesometimes turn away qualified people, because that s considered better thansometimes hiring unqualified people. This is actually an industry-wide thing, but thedial gets turned differently at different companies. At Google the false-negative rateis pretty high. I don t know what it is, but I do know a lot of smart, qualifiedpeoplewho ve not made it through our interviews. It s a bummer.But the really important takeaway is this: if you don t get an offer, you may still bequalified to work here. So it needn t be a blow to your ego at all!As far as anyone I know can tell, false negatives are completely random, and areunrelated to your skills or qualifications. They can happen from a variety of factors,including but not limited to:1. you re having an off day2. one or more of your interviewers is having an off day3. there were communication issues invisible to you and/or one or more of theinterviewers4. you got unlucky and got an Interview Anti-LoopOh no, not the Interview Anti-Loop!Yes, I m afraid you have to worry about this.What is it, you ask? Well, back when I was at Amazon, we did (and they undoubtedlystill do) a LOT of soul-searching about this exact problem. We eventually concludedthat every single employee E at Amazon has at least one "Interview Anti-Loop": aset of other employees S who would not hire E. The root cause is important for youto understand when you re going into interviews, so I ll tell you a little aboutwhatI ve found over the years.First, you can t tell interviewers what s important. Not at any company. Not unlessthey re specifically asking you for advice. You have a very narrow window of perhapsone year after an engineer graduates from college to inculcate them in the art ofinterviewing, after which the window closes and they believe they are a "goodinterviewer"possibly specific questions that he or she feels is an accurate gauge of a candidate sabilities. 
The question sets for any two interviewers can be widely different andeven entirely non-overlapping.A classic example found everywhere is: Interviewer A always asks about C++ trivia,filesystems, network protocols and discrete math. Interviewer B always asks aboutJava trivia, design patterns, unit testing, web frameworks, and software projectmanagement. For any given candidate with both A and B on the interview loop, Aand B are likely to give very different votes. A and B would probably not even hireeach other, given a chance, but they both happened to go through interviewer C,who asked them both about data structures, unix utilities, and processes versusthreads, and A and B both happened to squeak by.That s almost always what happens when you get an offer from a tech company. Youjust happened to squeak by. Because of the inherently flawed nature of theinterviewing process, it s highly likely that someone on the loop will be unimpressedwith you, even if you are Alan Turing. Especially if you re Alan Turing, in fact, since itmeans you obviously don t know C++.The bottom line is, if you go to an interview at any software company, you shouldplan for the contingency that you might get genuinely unlucky, and wind up withone or more people from your Interview Anti-Loop on your interview loop. If thishappens, you will struggle, then be told that you were not a fit at this time, and thenyou will feel bad. Just as long as you don t feel meta-bad, everything is OK. Youshould feel good that you feel bad after this happens, because hey, it means yourehuman.And then you should wait 6-12 months and re-apply. That s pretty much the bestsolution we (or anyone else I know of) could come up with for the false-negativeproblem. We wipe the slate clean and start over again. There are lots of peopleherewho got in on their second or third attempt, and they re kicking butt.You can too.OK, I feel better about potentially not getting hiredGood! So let s get on to those tips, then.If you ve been following along very closely, you ll have realized that I m interviewerD. Meaning that my personal set of pet questions and topics is just my own, andit sno better or worse than anyone else s. So I can t tell you what it is, no matterhow much I d like to, because I ll offend interviewers A through X who have slightlydifferent working sets.Instead, I want to prep you for some general topics that I believe are shared bythemajority of tech interviewers at Google-like companies. Roughly speaking, thismeans the company builds a lot of their own software and does a lot of distributedcomputing. There are other tech-company footprints, the opposite end of thespectrum being companies that outsource everything to consultants and try to useas much third-party software as possible. My tips will be useful only to the extentthat the company resembles Google.So you might as well make it Google, eh?First, let s talk about non-technical prep.The Warm-UpNobody goes into a boxing match cold. Lesson: you should bring your boxing glovesto the interview. No, wait, sorry, I mean: warm up beforehand!How do you warm up? Basically there is short-term and long-term warming up, andyou should do both.Long-term warming up means: study and practice for a week or two before theinterview. You want your mind to be in the general "mode" of problem solving onwhiteboards. If you can do it on a whiteboard, every other medium (laptop, sharednetwork document, whatever) is a cakewalk. 
So plan for the whiteboard.Short-term warming up means: get lots of rest the night before, and then dointense, fast-paced warm-ups the morning of the interview.The two best long-term warm-ups I know of are:1) Study a data-structures and algorithms book. Why? Because it is the mostlikely to help you beef up on problem identification. Many interviewers are happywhen you understand the broad class of question they re asking withoutexplanation. For instance, if they ask you about coloring U.S. states in differentcolors, you get major bonus points if you recognize it as a graph-coloring problem,even if you don t actually remember exactly how graph-coloring works.And if you do remember how it works, then you can probably whip through theanswer pretty quickly. So your best bet, interview-prep wise, is to practice theart ofrecognizing that certain problem classes are best solved with certain algorithmsanddata structures. with you. The best way to appear arrogant is to question the validity of theinterviewer s question it really ticks them off, as I pointed out earlier on.Remember how I said you can t tell an interviewer how to interview? Well, that sespecially true if you re a candidate.So don t ask: "gosh, are algorithms really all that important? do you ever needto dothat kind of thing in real life? I ve never had to do that kind of stuff." You ll just getrejected, so don t say that kind of thing. Treat every question as legitimate, even ifyou are frustrated that you don t know the answer.Feel free to ask for help or hints if you re stuck. Some interviewers take points off forthat, but occasionally it will get you past some hurdle and give you a goodperformance on what would have otherwise been a horrible stony half-hour silence.Don t say "choo choo choo" when you re "thinking".Don t try to change the subject and answer a different question. Don t try to divertthe interviewer from asking you a question by telling war stories. Don t try tobluffyour interviewer. You should focus on each problem they re giving you and makeyour best effort to answer it fully.Some interviewers will not ask you to write code, but they will expect you to startwriting code on the whiteboard at some point during your answer. They will giveyouhints but won t necessarily come right out and say: "I want you to write some codeon the board now." If in doubt, you should ask them if they would like to see code.Interviewers have vastly different expectations about code. I personally don t careabout syntax (unless you write something that could obviously never work in anyprogramming language, at which point I will dive in and verify that you are not,infact, a circus clown and that it was an honest mistake). But some interviewers arereally picky about syntax, and some will even silently mark you down for missingasemicolon or a curly brace, without telling you. I think of these interviewers as well, it s a technical term that rhymes with "bass soles", but they think ofthemselves as brilliant technical evaluators, and there s no way to tell themotherwise.So ask. Ask if they care about syntax, and if they do, try to get it right. Lookoveryour code carefully from different angles and distances. Pretend it s someone else scode and you re tasked with finding bugs in it. 
You d be amazed at what you canmiss when you re standing 2 feet from a whiteboard with an interviewer staring atyour shoulder blades.It s OK (and highly encouraged) to ask a few clarifying questions, and occasionallyverify with the interviewer that you re on the track they want you to be on. Some interviewers will mark you down if you just jump up and start coding, even if youget the code right. They ll say you didn t think carefully first, and you re oneof those"let s not do any design" type cowboys. So even if you think you know the answertothe problem, ask some questions and talk about the approach you ll take a littlebefore diving in.On the flip side, don t take too long before actually solving the problem, or someinterviewers will give you a delay-of-game penalty. Try to move (and write) quickly,since often interviewers want to get through more than one question during theinterview, and if you solve the first one too slowly then they ll be out of time. They llmark you down because they couldn t get a full picture of your skills. The benefit ofthe doubt is rarely given in interviewing.One last non-technical tip: bring your own whiteboard dry-erase markers. They sellpencil-thin ones at office supply stores, whereas most companies (including Google)tend to stock the fat kind. The thin ones turn your whiteboard from a 480i standarddefinition tube into a 58-inch 1080p HD plasma screen. You need all the helpyoucan get, and free whiteboard space is a real blessing.You should also practice whiteboard space-management skills, such as not startingon the right and coding down into the lower-right corner in Teeny Unreadable Font.Your interviewer will not be impressed. Amusingly, although it always irks me whenpeople do this, I did it during my interviews, too. Just be aware of it!Oh, and don t let the marker dry out while you re standing there waving it. I mtellinya: you want minimal distractions during the interview, and that one is surprisinglycommon.OK, that should be good for non-tech tips. On to X, for some value of X! Don t stabme!Tech Prep TipsThe best tip is: go get a computer science degree. The more computer science youhave, the better. You don t have to have a CS degree, but it helps. It doesn t have tobe an advanced degree, but that helps too.However, you re probably thinking of applying to Google a little sooner than 2 to 8years from now, so here are some shorter-term tips for you.Algorithm Complexity: you need to know Big-O. It s a must. If you struggle withbasic big-O complexity analysis, then you are almost guaranteed not to get hired.It s, like, one chapter in the beginning of one theory of computation book, so just goread it. You can do it. Sorting: know how to sort. Don t do bubble-sort. You should know the details ofatleast one n*log(n) sorting algorithm, preferably two (say, quicksort and merge sort).Merge sort can be highly useful in situations where quicksort is impractical, sotakea look at it.For God s sake, don t try sorting a linked list during the interview.Hashtables: hashtables are arguably the single most important data structureknown to mankind. You absolutely have to know how they work. Again, it s like onechapter in one data structures book, so just go read about them. You should be ableto implement one using only arrays in your favorite language, in about the spaceofone interview.Trees: you should know about trees. 
I m tellin ya: this is basic stuff, and itsembarrassing to bring it up, but some of you out there don t know basic treeconstruction, traversal and manipulation algorithms. You should be familiar withbinary trees, n-ary trees, and trie-trees at the very very least. Trees are probably thebest source of practice problems for your long-term warmup exercises.You should be familiar with at least one flavor of balanced binary tree, whetherit s ared/black tree, a splay tree or an AVL tree. You should actually know how it simplemented.You should know about tree traversal algorithms: BFS and DFS, and know thedifference between inorder, postorder and preorder.You might not use trees much day-to-day, but if so, it s because you re avoidingtreeproblems. You won t need to do that anymore once you know how they work. Studyup!GraphsGraphs are, like, really really important. More than you think. Even if you alreadythink they re important, it s probably more than you think.There are three basic ways to represent a graph in memory (objects and pointers,matrix, and adjacency list), and you should familiarize yourself with eachrepresentation and its pros and cons.You should know the basic graph traversal algorithms: breadth-first search anddepth-first search. You should know their computational complexity, their tradeoffs,and how to implement them in real code. You should try to study up on fancier algorithms, such as Dijkstra and A*, if you geta chance. They re really great for just about anything, from game programming todistributed computing to you name it. You should know them.Whenever someone gives you a problem, think graphs. They are the mostfundamental and flexible way of representing any kind of a relationship, so it sabout a 50-50 shot that any interesting design problem has a graph involved in it.Make absolutely sure you can t think of a way to solve it using graphs before movingon to other solution types. This tip is important!Other data structuresYou should study up on as many other data structures and algorithms as you can fitin that big noggin of yours. You should especially know about the most famousclasses of NP-complete problems, such as traveling salesman and the knapsackproblem, and be able to recognize them when an interviewer asks you them indisguise.You should find out what NP-complete means.Basically, hit that data structures book hard, and try to retain as much of it as youcan, and you can t go wrong.MathSome interviewers ask basic discrete math questions. This is more prevalent atGoogle than at other places I ve been, and I consider it a Good Thing, even thoughI m not particularly good at discrete math. We re surrounded by counting problems,probability problems, and other Discrete Math 101 situations, and those innumerateamong us blithely hack around them without knowing what we re doing.Don t get mad if the interviewer asks math questions. Do your best. Your best willbe a heck of a lot better if you spend some time before the interview refreshingyour memory on (or teaching yourself) the essentials of combinatorics andprobability. You should be familiar with n-choose-k problems and their ilk the morethe better.I know, I know, you re short on time. But this tip can really help make the differencebetween a "we re not sure" and a "let s hire her". And it s actually not all that bad discrete math doesn t use much of the high-school math you studied and forgot. 
It starts back with elementary-school math and builds up from there, so you can probably pick up what you need for interviews in a couple of days of intense study. Sadly, I don't have a good recommendation for a Discrete Math book, so if you do,

appealing meaningful plots. The answer to this question varies based on the requirements for plotting data.

5) What is the main difference between a Pandas series and a single-column DataFrame in Python?

6) Write code to sort a DataFrame in Python in descending order.

7) How can you handle duplicate values in a dataset for a variable in Python?

8) Which Random Forest parameters can be tuned to enhance the predictive power of the model?

9) Which method in pandas.tools.plotting is used to create a scatter plot matrix?
scatter_matrix

10) How can you check if a data set or time series is random?
To check whether a dataset is random or not, use the lag plot. If the lag plot for the given dataset does not show any structure, then it is random.

11) Can we create a DataFrame with multiple data types in Python? If yes, how can you do it?

12) Is it possible to plot a histogram in Pandas without calling Matplotlib? If yes, then write the code to plot the histogram.

13) What are the possible ways to load an array from a text data file in Python? How can the efficiency of the code to load the data file be improved?
numpy.loadtxt()

14) Which is the standard missing data marker used in Pandas?
NaN

15) Why should you use NumPy arrays instead of nested Python lists?

16) What is the preferred method to check for an empty array in NumPy?

17) List down some evaluation metrics for regression problems.

18) Which Python library would you prefer to use for data munging?
Pandas

19) Write the code to sort an array in NumPy by the nth column.
This can be achieved using the argsort() function. If there is an array X and you would like to sort its rows by the nth column, the code for this will be X[X[:, n].argsort()].

20) How are NumPy and SciPy related?

21) Which Python library is built on top of Matplotlib and Pandas to ease data plotting?
Seaborn

22) Which plot will you use to assess the uncertainty of a statistic?
Bootstrap

23) What are some features of Pandas that you like or dislike?

24) Which scientific libraries in SciPy have you worked with in your project?

25) What is pylab?
A package that combines NumPy, SciPy and Matplotlib into a single namespace.

26) Which Python library is used for machine learning?
SciKit-Learn

Learn Data Science in Python to become an Enterprise Data Scientist

Basic Python Programming Interview Questions

27) How can you copy objects in Python?
The functions used to copy objects in Python are copy.copy() for a shallow copy and copy.deepcopy() for a deep copy. However, it is not possible to copy all objects in Python using these functions. For instance, dictionaries have a separate copy method, whereas sequences in Python have to be copied by slicing.

28) What is the difference between tuples and lists in Python?
Tuples can be used as keys for dictionaries, i.e. they can be hashed. Lists are mutable whereas tuples are immutable - they cannot be changed. Tuples should be used when the order of elements in a sequence matters.
For example, a set of actions that need to be executed in sequence, geographic locations or a list of points on a specific route.

29) What is PEP8?
PEP8 consists of coding guidelines for the Python language so that programmers can write readable code, making it easy for any other person to use later on.

30) Is all the memory freed when Python exits?
No it is not, because the objects that are referenced from the global namespaces of Python modules are not always de-allocated when Python exits.

31) What does __init__.py do?
__init__.py is an empty .py file used for importing a module in a directory. __init__.py provides an easy way to organize the files. If there is a module maindir/subdir/module.py, __init__.py is placed in all the directories so that the module can be imported using the following command:
import maindir.subdir.module

32) What is the difference between the range() and xrange() functions in Python?
range() returns a list whereas xrange() returns an object that acts like an iterator for generating numbers on demand.

33) How can you randomize the items of a list in place in Python?
random.shuffle(lst) can be used for randomizing the items of a list in Python.

34) What is a pass in Python?
pass in Python signifies a no-operation statement, indicating that nothing is to be done.

35) If you are given the first and last names of employees, which data type in Python will you use to store them?
You can use a list that has the first name and last name included in an element, or use a dictionary.

36) What happens when you execute the statement mango = banana in Python?
A NameError will occur when this statement is executed in Python.

37) Write a sorting algorithm for a numerical dataset in Python.

38) Optimize the below Python code:
word = 'word'
print word.__len__()
Answer: print 'word'.__len__()

39) What is monkey patching in Python?
Monkey patching is a technique that helps the programmer to modify or extend other code at runtime. Monkey patching comes in handy in testing, but it is not a good practice to use it in a production environment as debugging the code could become difficult.

40).

41) How are arguments passed in Python - by reference or by value?
The answer to this question is neither of these, because passing semantics in Python are completely different. In all cases, Python passes arguments by value, where all values are references to objects.

42) You are given a list of N numbers. Create a single list comprehension in Python to create a new list that contains only those values which have even numbers from elements of the list at even indices. For instance, if list[4] has an even value then it has to be included in the new output list because it has an even index, but if list[5] has an even value it should not be included in the list because it is not at an even index.

word = 'aeioubcdfg'
print word[:3] + word[3:]
The output for the above code will be: 'aeioubcdfg'. In string slicing, when the indices of both the slices collide and a + operator is applied on the string, it concatenates them.

48)
list = ['a', 'e', 'i', 'o', 'u']
print list[8:]
The output for the above code will be an empty list []. Most people might confuse the answer with an index error because the code is attempting to access a member in the list whose index exceeds the total number of members in the list. The reason is that the code is trying to access the slice of a list at a starting index which is greater than the number of members in the list.

49) What will be the output of the below code?

50) Can the lambda forms in Python contain statements?
No, as their syntax is restricted to single expressions and they are used for creating function objects which are returned at runtime.

This list of questions for Python interview questions and answers is not an exhaustive one and will continue to be a work in progress. Let us know in the comments below if we missed out on any important question that needs to be up here.

Python Developer interview questions

This Python Developer interview profile brings together a snapshot of what to look for in candidates with a balanced sample of suitable interview questions.

Introduction

Implement the linux whereis command that locates the binary, source, and manual page files for a command.

list = ['a', 'b', 'c', 'd', 'e']
print list[10:]

What will be the output of the following code in each step?
class C:
    dangerous = 2

c1 = C()
c2 = C()
print c1.dangerous

c1.dangerous = 3
print c1.dangerous
print c2.dangerous

del c1.dangerous
print c1.dangerous

C.dangerous = 3
print c2.dangerous

Top Python Interview Questions Most Asked

Here are the top 30 objective-type sample Python interview questions, and their answers are given just below them. These sample questions are framed by experts from Intellipaat who train for Python training to give you an idea of the type of questions which may be asked in an interview. We have taken full care to give correct answers to all the questions. Do comment your thoughts. Happy Job Hunting!

Top Answers to Python Interview Questions

1. What is Python?
Python is an object-oriented and open-source programming language, which supports structured and functional built-in data structures. With a placid and easy-to-understand syntax, Python allows code reuse and modularity of programs. The built-in DS in Python makes it a wonderful option for Rapid Application Development (RAD). The coding language also encourages faster editing, testing and debugging with no compilation steps.

2. What are the standard data types supported by Python?
It supports six data types:
1. Number: object stored as a numeric value
2. String: object stored as a string
3. Tuple: data stored in the form of a sequence of immutable objects
4. Dictionary (dicts): associates one thing to another irrespective of the type of data, most useful container (called hashes in C and Java)

>>> y.split(',')
Result: ['true', 'false', 'none']

What is the use of generators in Python?
Generators are primarily used to return multiple items, but one after the other. They are used for iteration in Python and for calculating large result sets. The generator function halts until the next request is placed. One of the best uses of generators in Python coding is implementing a callback operation with reduced effort and time. They replace callback with iteration. Through the generator approach, programmers are saved from writing a separate callback function and passing it to the work-function, as they can apply a for loop around the generator.

13. How to create a multidimensional list in Python?
As the name suggests, a multidimensional list is the concept of a list holding another list, applying to many such lists. It can be easily done by creating a single-dimensional list and filling each element with a newly created list.

14. What is lambda?
lambda is a powerful concept used in conjunction with other functions like filter(), map(), reduce(). The major use of the lambda construct is to create anonymous functions during runtime, which can be used where they are created. Such functions are actually known as throw-away functions in Python.
The general syntax is lambda argument_list: expression. For instance:
>>> intellipaat = lambda i, n: i + n
>>> intellipaat(2, 2)
4

Using filter():
>>> intellipaat = [1, 6, 11, 21, 29, 18, 24]
>>> print filter(lambda x: x % 3 == 0, intellipaat)
[6, 21, 18, 24]

15. Define pass in Python.
The pass statement in Python is equivalent to a null operation and a placeholder, wherein nothing takes place after its execution. It is mostly used at places where you can let your code go even if it isn't written yet. If you would set out a pass after the code, it won't run. The syntax is pass.

16. How to perform unit testing in Python?
Referred to as PyUnit, the Python unit testing framework unittest supports automated testing, segregating tests into collections, shutdown of testing code and testing independence from the reporting framework. The unittest module makes use of the TestCase class for holding and preparing test routines and clearing them after successful execution.

17. Define Python tools for finding bugs and performing static analysis.
PyChecker is an excellent bug finder tool in Python, which performs static analysis unlike C/C++ and Java. It also notifies the programmers about the complexity and style of the code. In addition, there is another tool, PyLint, for checking the coding standards including the code line length, variable names and whether the interfaces declared are fully executed or not.

18. How to convert a string into a list?
Using the function list(string). For instance:
>>> list('intellipaat')
['i', 'n', 't', 'e', 'l', 'l', 'i', 'p', 'a', 'a', 't']
In Python, strings behave like lists in various ways. For example, you can access individual characters of a string:
>>> y = 'intellipaat'
>>> y[2]
't'

19. What OSes does Python support?
Linux, Windows, Mac OS X, IRIX, Compaq, Solaris

20. Name the Java implementation of Python.
Jython

21. Define docstring in Python.
A string literal occurring as the first statement (like a comment) in any module, class, function or method is referred to as a docstring in Python. This kind of string becomes the __doc__ special attribute of the object and provides an easy way to document a particular code segment. Most modules do contain docstrings and thus, the functions and classes extracted from the module also consist of docstrings.

22. Name the optional clauses used in a try-except statement in Python.
While Python exception handling is a bit different from Java, the former provides an option of using a try-except clause where the programmer receives a detailed error message without terminating the program. Sometimes, along with the problem, this try-except statement offers a solution to deal with the error. The language also provides try-except-finally and try-except-else blocks.

The allocation of Python heap space for Python objects is done by the Python memory manager. The core API gives access to some tools for the programmer to code. Python also has an inbuilt garbage collector, which recycles all the unused memory, frees it and makes it available to the heap space.

6) What are the tools that help to find bugs or perform static analysis?
PyChecker is a static analysis tool that detects the bugs in Python source code and warns about the style and complexity of the code. Pylint is another tool that verifies whether the module meets the coding standard.

9) How are arguments passed by value or by reference?
Everything in Python is an object and all variables hold references to the objects.
The referencesvalues are according to the functions; as a result you cannot change the value of the references.However, you can change the objects if it is mutable. Sets DictionariesImmutable built-in typesStrings Tuples Numbers12) What is namespace in Python?In Python, every name introduced has a place where it lives and can be hooked for. This isknown as namespace. It is like a box where a variable name is mapped to the object placed.Whenever the variable is searched out, this box will be searched, to get corresponding object.13) What is lambda in Python? Python sequences can be index in positive and negative numbers. For positive index, 0 is thefirst index, 1 is the second index and so forth. For negative index, (-1) is thelast index and (-2)is the second last index and so forth.23) How you can convert a number to a string?In order to convert a number into a string, use the inbuilt function str(). If you want a octal orhexadecimal representation, use the inbuilt function oct() or hex().24) What is the difference between Xrange and range?Xrange returns the xrange object while range returns the list, and uses the samememory and nomatter what the range size is.25) What is module and package in Python?In Python, module is the way to structure program. Each Python program file is amodule, whichimports other modules like objects and attributes.The folder of Python program is a package of modules. A package can have modulesorsubfolders.21 Must-Know Data Science InterviewQuestions and AnswersKDnuggets Editors bring you the answers to 20 Questions to Detect Fake Data Scientists,including what is regularization, Data Scientists we admire, model validation, and more.By Gregory Piatetsky, KDnuggets.commentsThe recent post on KDnuggets20 Questions to Detect Fake Data Scientists has been very popular - most viewedinthe month of January.However these questions were lacking answers, so KDnuggets Editors got togetherand wrote the answers to these questions. I also added one more critical question number 21, which was omitted from the 20 questions post. Here are the answers. Because of the length, here are the answers to the first 11questions, and here is part 2.Q1. Explain what regularization is and why itis useful.Answer by Matthew Mayo.Regularization is the process of adding a tuningparameter to a model to induce smoothness inorder to prevent overfitting. (see also KDnuggetsposts on Overfitting)This is most often done by adding a constant multiple to an existing weight vector.This constant is often either the L1 (Lasso) or L2 (ridge), but can in actualitycan beany norm. The model predictions should then minimize the mean of the lossfunction calculated on the regularized training set.Xavier Amatriain presents a good comparison of L1 and L2 regularization here, forthose interested.Fig 1: Lp ball: As the value of p decreases, the size of the corresponding Lp space also decreases.Q2. Which data scientists do you admire most? which startups?Answer by Gregory Piatetsky:This question does not have a correct answer, but here is my personal list of 12 Geoff Hinton, Yann LeCun, and Yoshua Bengio - for persevering with Neural Netswhen and starting the current Deep Learning revolution. Demis Hassabis, for his amazing work on DeepMind, which achieved human orsuperhuman performance on Atari games and recently Go.Jake Porway from DataKind and Rayid Ghani from U. 
Chicago/DSSG, for enablingdata science contributions to social good.DJ Patil, First US Chief Data Scientist, for using Data Science to make USgovernment work better.Kirk D. Borne for his influence and leadership on social media.Claudia Perlich for brilliant work on ad ecosystem and serving as a great KDD-2014chair.Hilary Mason for great work at Bitly and inspiring others as a Big Data Rock Star.Usama Fayyad, for showing leadership and setting high goals for KDD and DataScience, which helped inspire me and many thousands of others to do their best.Hadley Wickham, for his fantastic work on Data Science and Data Visualization inR,including dplyr, ggplot2, and Rstudio.There are too many excellent startups in Data Science area, but I will not listthemhere to avoid a conflict of interest.Here is some of our previous coverage of startups.Q3. How would you validate a model you created to generate a predictivemodel of a quantitative outcome variable using multiple regression.Answer by Matthew Mayo.Proposed methods for model validation:If the values predicted by the model are far outside of the response variablerange, this would immediately indicate poor estimation or model inaccuracy. Use the model for prediction by feeding it new data, and use the coefficient ofdetermination (R squared) as a model validity measure. Ensure that the test data has sufficient variety in order to be symbolic of reallife data (helps avoid overfitting) Ensure that the results are repeatable with near similar results One common way to achieve the above guidelines is through A/B testing, whereboth the versions of algorithm are kept running on similar environment for aconsiderablyroot causes of faults or problems. A factor is considered a root cause if removalthereof from the problem-fault-sequence prevents the final undesirable event fromrecurring; whereas a causal factor is one that affects an event s outcome, but is nota root cause.Root cause analysis was initially developed to analyze industrial accidents, butisnow widely used in other areas, such as healthcare, project management, orsoftware testing.Here is a useful Root Cause Analysis Toolkit from the state of Minnesota.Essentially, you can find the root cause of a problem and show the relationshipofcauses by repeatedly asking the question, "Why?", until you find the root of theproblem. This technique is commonly called "5 Whys", although is can be involvemore or less than 5 questions. Fig. 5 Whys Analysis Example, from The Art of Root Cause Analysis .Q7. Are you familiar with price optimization, price elasticity, inventorymanagement, competitive intelligence? Give examples.Answer by Gregory Piatetsky:Those are economics terms that are not frequently asked of Data Scientists but theyare useful to know.Price optimization is the use of mathematical tools to determine how customers willrespond to different prices for its products and services through different channels.Big Data and data mining enables use of personalization for price optimization.Nowcompanies like Amazon can even take optimization further and show different pricesto different visitors, based on their history, although there is a strong debateaboutwhether this is fair.Price elasticity in common usage typically refers toPrice elasticity of demand, a measure of price sensitivity. 
It is computed as:Price Elasticity of Demand = % Change in Quantity Demanded / % Change inPrice.Similarly, Price elasticity of supply is an economics measure that shows how thequantity supplied of a good or service responds to a change in its price. commentsThe post on KDnuggets 20 Questions to Detect Fake Data Scientists has been verypopular - most viewed post of the month.However these questions were lacking answers, so KDnuggets Editors got togetherand wrote the answers. Here is part 2 of the answers, starting with a "bonus"question.Bonus Question: Explain what is overfitting and how would you control foritThis question was not part of the original 20, but probably is the most important onein distinguishing real data scientists from fake ones.Answer by Gregory Piatetsky.Overfitting is finding spurious results that are due to chance and cannot bereproduced by subsequent studies.We frequently see newspaper reports about studies that overturn the previousfindings, like eggs are no longer bad for your health, or saturated fat is not linked toheart disease. The problem, in our opinion is that many researchers, especiallyinsocial sciences or medicine, too frequently commit the cardinal sin of Data Mining Overfitting the data.The researchers test too many hypotheses without proper statistical control, untilthey happen to find something interesting and report it. Not surprisingly, nexttimethe effect, which was (at leastexaggerated or the findings could not be replicated. In his paper, he presentedstatistical evidence that indeed most claimed research findings are false.Ioannidis noted that in order for a research finding to be reliable, it should have:Large sample size and with large effects Minimal bias due to financial and other factors (including popularity of thatscientific field)Unfortunately, too often these rules were violated, producing irreproducible results.For example, S&P 500 index was found to be strongly related to Production of butterin Bangladesh (from 19891 to 1993) (here is PDF) See more interesting (and totally spurious) findings which you can discover yourselfusing tools such as Google correlate or Spurious correlations by Tyler Vigen.Several methods can be used to avoid "overfitting" the data Randomization Testing (randomize the class variable, try your method on thisdata - if it find the same strong results, something is wrong) Nested cross-validation (do feature selection on one level, then run entiremethod in cross-validation on outer level) Tag: OverfittingQ12. Give an example of how you would use experimental design toanswer a question about user behavior.Answer by Bhavya Geethika. Fig 12: There is a flaw in your experimental design (cartoon from here)Step 4: Determine Experimental Design.We consider experimental complexity i.e vary one factor at a time or multiplefactors at one time in which case we use factorial design (2^k design). A designisalso selected based on the type of objective (Comparative, Screening, Responsesurface) & number of factors. Q13. What is the diference between "long" ("tall") and "wide" formatdata?Answer by Gregory Piatetsky.In most data mining / data science applications there are many more records (rows)than features (columns) - such data is sometimes called "tall" (or "long") data.In some applications like genomics or bioinformatics you may have only a smallnumber of records (patients), eg 100, but perhaps 20,000 observations for eachpatient. 
The standard methods that work for "tall" data will lead to overfittingthedata, so special approaches are needed. Fig 13. Diferent approaches for tall data and wide data, from presentationSparse Screening for Exact Data Reduction, by Jieping Ye.The problem is not just reshaping the data (here there are useful R packages), butavoiding false positives by reducing the number of features to find most relevantones.Approaches for feature reduction like Lasso are well covered in Statistical Learning with Sparsity: The Lasso and Generalizations, by Hastie, Tibshirani, and Wainwright.(you can download free PDF of the book)Second part of the answers to 20 Questions to Detect Fake Data Scientists, including controllingoverfitting, experimental design, tall and wide data, understanding the validityof statistics in themedia, and more.Pages: 1 2 3By Gregory Piatetsky, KDnuggets.Q14. What method do you use to determine whether the statisticspublished in an article (or appeared in a newspaper or other media) areeither wrong or presented to support the author s point of view, ratherthan correct, comprehensive factual information on a specific subject?A simple rule, suggested by Zack Lipton, isif some statistics are published in a newspaper, then they are wrong.Here is a more serious answer by Anmol Rajpurohit.Every media organization has a target audience. This choice impacts a lot ofdecisions such as which article to publish, how to phrase an article, what partof anarticle to highlight, how to tell a given story, etc.In determining the validity of statistics published in any article, one of the first stepswill be to examine the publishing agency and its target audience. Even if it isthesame news story involving statistics, you will notice that it will be publishedverydifferently across Fox News vs. WSJ vs. ACM/IEEE journals. So, data scientists aresmart about where to get the news from (and how much to rely on the stories basedon sources!). Fig 14a: Example of a very misleading bar chart that appeared on FoxNews Fig 14b: how the same data should be presented objectively, from 5 Ways toAvoid Being Fooled By StatisticsOften the authors try to hide the inadequacy of their research through cannystorytelling and omitting important details to jump on to enticingly presented falseinsights. Thus, a thumb s rule to identify articles with misleading statisticalinferences is to examine whether the article includes details on the researchmethodology followed and any perceived limitations of the choices made related toresearch methodology. Look for words such as "sample size", "margin of error", etc.While there are no perfect answers as to what sample size or margin of error isappropriate, these attributes must certainly be kept in mind while reading the endresults. Another common case of erratic reporting are the situations when journalists withpoor data-education pick up an insight from one or two paragraphs of a publishedresearch paper, while ignoring the rest of research paper, just in order to maketheirpoint. So, here is how you can be smart to avoid being fooled by such articles:Firstly, a reliable article must not have any unsubstantiated claims. All theassertions must be backed with reference to past research. Or otherwise, is mustbeclearly differentiated as an "opinion" and not an assertion. Secondly, just becausean article is referring to renowned research papers, does not mean that it is usingthe insight from those research papers appropriately. 
This can be validated byreading those referred research papers "in entirety", and independently judgingtheir relevance to the article at hand. Lastly, though the end-results might naturallyseem like the most interesting part, it is often fatal to skip the details aboutresearch methodology (and spot errors, bias, etc.).Ideally, I wish that all such articles publish their underlying research data aswell asthe approach. That way, the articles can achieve genuine trust as everyone is freeto analyze the data and apply the research approach to see the results forthemselves.Q15. Explain Edward Tufte s concept of "chart junk."Answer by Gregory Piatetsky:Chartjunk refers to all visual elements in charts and graphs that are not necessaryto comprehend the information represented on the graph, or that distract the viewerfrom this information.The term chartjunk was coined by Edward Tufte in his 1983 book The Visual Displayof Quantitative Information. Fig 15. Tufte writes: "an unintentional Necker Illusion, as two back planes opticallyflip to the front. Some pyramids conceal others; and one variable (stacked depthofthe stupid pyramids) has no label or scale." Here is a moremodern example from exceluser where it is very hard to understand the column plotbecause of workers and cranes that obscure them.The problem with such decorations is that they forces readers to work much harderthan necessary to discover the meaning of data.16. How would you screen for outliers and what should you do if you findone?Answer by Bhavya Geethika.Some methods to screen outliers are z-scores, modified z-score, box plots, Grubbstest, Tietjen-Moore test exponential smoothing, Kimber test for exponential distribution and moving window filter algorithm. However two of the robust methodsin detail are:Inter Quartile RangeAn outlier is a point of data that lies over 1.5 IQRs below the first quartile (Q1) orabove third quartile (Q3) in a given data set.High = (Q3) + 1.5 IQR Pandora uses the properties of a song or artist (a subset of the 400 attributesprovided by the Music Genome Project) in order to seed a "station" that playsmusic with similar properties. User feedback is used to refine the station sresults, deemphasizing certain attributes when a user "dislikes" a particularsong and emphasizing other attributes when a user "likes" a song. This is anexample of a content-based approach.Here is a good Introduction to Recommendation Engines by Dataconomy and anoverview of building a Collaborative Filtering Recommendation Engine by Toptal.Forlatest research on recommender systems, check ACM RecSys conference.19. Explain what a false positive and a false negative are. Why is itimportant to diferentiate these from each other?Answer by Gregory Piatetsky:In binary classification (or medical testing), False positive is when an algorithm (ortest) indicates presence of a condition, when in reality it is absent. A false negativeis when an algorithm (or test) indicates absence of a condition, when in realityit ispresent.In statistical hypothesis testing false positive is also called type I error andfalsenegative - type II error. It is obviously very important to distinguish and treat false positives and falsenegatives differently because the costs of such errors can be hugely different.For example, if a test for serious disease is false positive (test says disease,butperson is healthy), then an extra test will be made that will determine the correctdiagnosis. 
However, if a test is false negative (test says healthy, but the person has the disease), then treatment will not be given and the person may die as a result. 20. Which tools do you use for visualization? What do you think of Tableau? R? SAS? (for graphs). How do you efficiently represent 5 dimensions in a chart (or in a video)? Answer by Gregory Piatetsky: There are many good tools for Data Visualization. R, Python, Tableau and Excel are among the most commonly used by Data Scientists. Here are useful KDnuggets resources: Visualization and Data Mining Software. Fig 20a: 5-dimensional scatter plot of Iris data, with size: sepal length; color: sepal width; shape: class; x-column: petal length; y-column: petal width, from here. For more than 5 dimensions, one approach is Parallel Coordinates, pioneered by Alfred Inselberg. See also Quora: What's the best way to visualize high-dimensional data?
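As a concrete illustration of the Fig 20a encoding (my own sketch, not from the article), the five Iris dimensions can be mapped onto the x axis, y axis, point size, color, and marker shape with matplotlib. It assumes matplotlib and scikit-learn are installed; the scaling factor for point size is arbitrary.

# Hedged sketch: 5-dimensional scatter plot of the Iris data set.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target   # columns: sepal length, sepal width, petal length, petal width

markers = ['o', 's', '^']       # marker shape encodes the class
for cls, marker in zip(range(3), markers):
    rows = (y == cls)
    plt.scatter(
        X[rows, 2],             # x-column: petal length
        X[rows, 3],             # y-column: petal width
        s=X[rows, 0] * 20,      # size: sepal length (scaled for visibility)
        c=X[rows, 1],           # color: sepal width
        marker=marker,
        cmap='viridis',
        alpha=0.7,
        label=iris.target_names[cls],
    )

plt.xlabel('petal length (cm)')
plt.ylabel('petal width (cm)')
plt.colorbar(label='sepal width (cm)')
plt.legend(title='class')
plt.show()

For more than five dimensions, pandas.plotting.parallel_coordinates offers a quick way to draw the Parallel Coordinates plot mentioned above.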
https://id.scribd.com/doc/316766810/316551847-Data-Science-Interview-Question
CC-MAIN-2020-10
en
refinedweb
Python Command Overview Contents - 1 Python API Command Overview - 1.1 The Entire Command - 1.2 Breakdown - 1.3 User Facing Strings: Command Help Python API Command Overview This page will attempt to give some "boilerplate" code for creating a Python API command (with arguments) in MODO. There is much more you can do with commands, but this should cover common uses. We'll start with listing the entire code and then breaking it down. The Entire Command #!/usr/bin/env python import lx import lxu.command class MyCommand_Cmd(lxu.command.BasicCommand): def __init__(self): lxu.command.BasicCommand.__init__ (self)) def cmd_Flags(self): return lx.symbol.fCMD_UNDO | lx.symbol.fCMD_MODEL.") lx.bless (MyCommand_Cmd, "my.command") Breakdown The Header #!/usr/bin/env python import lx import lxu.command This piece simply tells MODO this is a Python file, and imports the basic modules we need for creating a command. You may require other modules depending on the functions you intend to carry out in your command. The Command Class & Init class MyCommand_Cmd(lxu.command.BasicCommand): def __init__(self): lxu.command.BasicCommand.__init__ (self) This section defines a new class which inherits from lxu.command.BasicCommand. The __init__ function is called when the command is loaded by MODO at start up. At the very least, this function should contain lxu.command.BasicCommand.__init__ (self) Argument Definition (Optional) In this example, we have also defined arguments for our command. These are not required, but must go inside the __init__ function if you wish your command to have arguments. They are added to the command in the order they are defined here and assigned corresponding indices. NOTE: This order is important for accessing the argument values later on, as we will be accessing them via their index. At their most basic, arguments are defined with a name and a type. The name is the internal name of the argument and should not contain spaces - we will give them user-friendly display names later on. The names can be used by the Command Help for this command to define user-friendly display names from a config file (which allows for localisation), but we won't be covering that here. Argument Flags By default, all arguments are required to be set for the command to run. If, when the command is called and all the required arguments are not set, a dialog will be displayed to the user allowing them to enter the values. However, you'll note that we've specified flags on two of the arguments. The flags are specified by referencing the index of the argument they apply to (see, order is important!) and giving the flag itself. In this case, we've specified the flag of "OPTIONAL", which means that the user does not have to specify a value for the second and third arguments. We will have to make sure that we assign any unspecified arguments default values in the main execution of the command later on. Argument Types As a brief aside... Although MODO appears to have several different argument types, they are all user-friendly wrappers for the storage of 4 core types; integers, floats, strings and objects. You'll have seen these friendly wrappers for things like distance fields, where you can enter a value in metres, millimetres, feet, microns, etc... However, internally, these are read and stored as a simple float value which is the displayed distance as metres. Similarly, angle fields, where you can enter a value in degrees, are stored internally as a float of the displayed angle in radians. 
Boolean values (often shown as checkboxes or toggle buttons) are simply stored as integers which are 0 or 1. NOTE: These internal values are what you'll be dealing with when you write commands. Command Flags def cmd_Flags(self): return lx.symbol.fCMD_UNDO | lx.symbol.fCMD_MODEL This is a very important part of a command if you are using the Python API (or TD SDK) to edit the scene in the command execution. The command flags tell MODO what the command is expected to do when it executes and how to handle it. In our case, we specify the standard flags; MODEL and UNDO. These flags are bit masks and as such are joined together into a single integer return value using the pipe | separator. The MODEL flag tells MODO that we will be editing a part of the scene. The name is slightly misleading as it implies changes to a mesh only, however it means any change to anything in the scene; channels, meshes, selection changes, adding or removing items, etc... The UNDO flag is specified by default as part of the MODEL flag, however it's not harmful to add the flag to be clear. This tells MODO that the command should be undoable and that it should set up an undo state for it. NOTE: It is very important that this flag is set, as changing the scene without this flag set causes instability in MODO and usually leads to a crash (if not immediately then very soon). Generally, these should be your standard flags unless you have specific reason to change them. Main Execution.") This is the meat of the command - the code that's actually run when the command is fired. Here, we're not doing anything other than reading the arguments and writing to the Event Log. You can see here that those core types are how we read the arguments in the command, with dyna_Bool (a friendly wrapper for dyna_Int, checking if it's equal to 1 or 0), dyna_Float and dyna_String. These are accessed via index, the same indices we've used throughout the command. Optionally, a default value is given as the second parameter, which is the value returned if the argument has not been set by the user. We can also make use of the command's dyna_IsSet function, which will return True or False depending on whether the argument with that index was specified by the user (this function is what is used internally for the dyna_Float and related functions, to determine whether to return the default value or not). It is important to note that any arguments which have UI hints (such as minimum or maximum values - including text hints) are for UI purposes only, and that arbitrary values can be entered as arguments. This means that if you have a UI hint that gives it a range of 10-50, the user can still enter 12000 manually from a script or the command entry, or enter an out-of-range integer instead of a text hint string. So be sure to manage such values and take appropriate action if such values would cause problems in your code (e.g. aborting execution or limiting the value supplied by the user to the desired range). Also note that, as this is part of the Python API, we can freely use lx.eval() for calling commands and querying values, just like regular fire-and-forget Python scripts. Making Main Execution Useful This is a simple outline of writing a command with arguments. It doesn't actually do anything. For further reading, you can find other examples of code to go into the Execute function on the wiki (such as Creating a Selection Set). Blessing the Command Here, we call the bless command. 
This promotes the class we created to be a fully fledged command inside of MODO, as opposed to simply a Python script. It takes two arguments. One is the command class we defined, the second is the command that we'll want to assign to this inside MODO. This means that the script is not run via the usual @scriptName.py command that fire-and-forget scripts are. Instead, this command is run be entering my.command as it is a proper command in MODO. lx.bless (MyCommand_Cmd, "my.command") User Facing Strings: Command Help All user-facing strings, such as for the command name, argument names, and so on, should be defined in Command Help config files whenever possible. Config files are considered to be part of any command or kit, and not as optional extras. This is also the only place that the Command List gets user-friendly information that can teach others how to use your commands. This is an example of what the command help for this command would look like. This would be inside a .cfg file that is distributed with the .py file of the actual command. If distributed with this config, you would be able to remove all of the above command's functions which relate to returning user-friendly names for things and MODO would use this file to get the relevant values automatically. It is also worth noting that these can be localized by duplicating the <hash type="Command" key="my.command@en_US"> fragment and it's contents and changing the @en_US' to @[localized language code] then replacing the contents of the atom fragments accordingly. By default, MODO will look for @en_US and this will also be the fallback if the user's specified language isn't found. - Command: this command's internal name and a language code. These are used to find the user strings for the current system locale. - UserName: The name displayed for the command in the Command List and other parts of the UI. - ButtonName: The label of the control when the command is in in a form. - ToolTip: The text displayed in a tooltip when hovering the mouse over the control in a form. - Example: A simple example usage of the command, shown in the Command List. - Desc: Basic documentation for the command, as shown in the command list. - Argument: Given an argument's internal name, this defines its username ad description. The username is shown in command dialogs and the command list. The command list also displays the description, which should be considered documentation for how to use that argument, the default behavior if the argument is optional and not set, and so on. <?xml version="1.0"?> <configuration> <atom type="CommandHelp"> <!-- note the command's name in here; my.command --> <hash type="Command" key="my.command@en_US"> <atom type="UserName">My Command Dialog - also shown in the command list.</atom> <atom type="ButtonName">My Command</atom> <atom type="Tooltip">My command's tooltip</atom> <atom type="Desc">My command's description - shown in the command list.</atom> <atom type="Example">my.command true 1.5 "hello!"</atom><!-- An example of how my command would be called - shown in the command list. 
--> <hash type="Argument" key="myBoolean"> <atom type="UserName">A Boolean</atom> <atom type="Desc">The boolean argument of my command - shown in the command list.</atom> </hash> <hash type="Argument" key="myDistance"> <atom type="UserName">A Distance</atom> <atom type="Desc">The distance argument of my command - shown in the command list.</atom> </hash> <hash type="Argument" key="myString"> <atom type="UserName">A String</atom> <atom type="Desc">The string argument of my command - shown in the command list.</atom> </hash> </hash> </atom> </configuration> User-Facing String Overrides Generally speaking, you should define all user-facing strings in cmdhelp configs. The configs allow the command to be localized into different languages, and are used in the Command List part of the Command History to provide usage information for users. However, it is sometimes useful to the able to override the cmdhelp-based strings with dynamic strings. These should never be used to return static strings -- those should be in cmdhelp configs. Even when returning dynamic strings with these methods, the strings should come from message tables whenever possible to ensure that they can also be localized. Cases where strings are used directly in code are those that provided by the user, like the name of an item, but care should be take to avoid hard-coding any user-facing text as string literals directly into the code itself. The ButtonName() method provides a short name for when the control is displayed in a form. UserName() is usually a somewhat more verbose name that is display in other parts of the interface. Tooltip() defines a string to display in the tooltip window when the user hovers over the control in a form. UIHints() has a variety of methods, but can be used to set the label for a specific command argument when displayed in the UI, such as in a command dialog. Most of these should never be ended, as these values are usually static and should be set in cmdhelp configs. def basic_ButtonName(self): return dynamicButtonNameStringHere def cmd_Tooltip(self): return dynamicTooltipStringHere def cmd_UserName(self): return dynamicUsernameStringHere def arg_UIHints (self, index, hints): if index == 0: hints.Label (dynamicArgumentName0Here) elif index == 1: hints.Label (dynamicArgumentName1Here) elif index == 2: hints.Label (dynamicArgumentName2Here)
https://modosdk.foundry.com/wiki/Python_Command_Overview
CC-MAIN-2020-10
en
refinedweb
pthread_setname_np() Name a thread Synopsis: #include <pthread.h> int pthread_setname_np(pthread_t tid, const char* newname); Since: BlackBerry 10.0.0 Arguments: - tid - The ID of the thread you want to name, or 0 if you want to name the calling thread. - newname - NULL, or a NULL-terminated string that specifies the new name. The maximum length is _NTO_THREAD_NAME_MAX (defined in <sys/neutrino.h>). Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The pthread_setname_np() function sets the name of the specified thread to newname. If newname is NULL, the function deletes any name already assigned to the thread. The "np" in the function's name stands for "non-POSIX." This function is currently implemented as follows: - If a thread is setting its own name, pthread_setname_np() uses ThreadCtl(). - If a thread is setting another thread's name, pthread_setname_np() needs read/write access to the /proc/pid/as entry for the process. Only one program can have write access to a process's entry in the /proc filesystem at a time, so if another program (such as a debugger) already has write access to it, pthread_setname_np() fails with an error of EBUSY. For this reason, it's better to have a thread set its own name than have it set by another thread. Returns: - EOK - Success. - E2BIG - The name is too long. - EBUSY - As described above, you're trying to name a thread other than the calling thread, and another program already has write access to /proc/pid/as. - EPERM - You don't have the appropriate permissions to set the name. Classification: Last modified: 2014-06-24
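A short usage sketch (not part of the original reference page): a worker thread names itself, which avoids the /proc access restriction described above. The thread body and the chosen name are illustrative assumptions; EOK is QNX's success code from <errno.h>.

/* Hedged example: a thread setting its own name on QNX / BlackBerry 10. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
    (void)arg;  /* unused */

    /* tid == 0 means "name the calling thread". */
    int rc = pthread_setname_np(0, "worker-thread");
    if (rc != EOK) {
        fprintf(stderr, "pthread_setname_np failed: %s\n", strerror(rc));
    }
    /* ... do the thread's real work here ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, worker, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}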
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_setname_np.html
CC-MAIN-2020-10
en
refinedweb
Null- and undefined-aware types # TypeScript has two special types, Null and Undefined, that have the values null and undefined respectively. Previously it was not possible to explicitly name these types, but null and undefined may now be used as type names regardless of type checking mode. The type checker previously considered null and undefined assignable to anything. Effectively, null and undefined were valid values of every type and it wasn’t possible to specifically exclude them (and therefore not possible to detect erroneous use of them). --strictNullChecks # --strictNullChecks switches to a new strict null checking mode. In strict null checking mode, the null and undefined values are not in the domain of every type and are only assignable to themselves and any (the one exception being that undefined is also assignable to void). So, whereas T and T | undefined are considered synonymous in regular type checking mode (because undefined is considered a subtype of any T), they are different types in strict type checking mode, and only T | undefined permits undefined values. The same is true for the relationship of T to T | null. Example # // Compiled with --strictNullChecks let x: number; let y: number | undefined; let z: number | null | undefined; x = 1; // Ok y = 1; // Ok z = 1; // Ok x = undefined; // Error y = undefined; // Ok z = undefined; // Ok x = null; // Error y = null; // Error z = null; // Ok x = y; // Error x = z; // Error y = x; // Ok y = z; // Error z = x; // Ok z = y; // Ok Assigned-before-use checking # In strict null checking mode the compiler requires every reference to a local variable of a type that doesn’t include undefined to be preceded by an assignment to that variable in every possible preceding code path. Example # // Compiled with --strictNullChecks let x: number; let y: number | null; let z: number | undefined; x; // Error, reference not preceded by assignment y; // Error, reference not preceded by assignment z; // Ok x = 1; y = null; x; // Ok y; // Ok The compiler checks that variables are definitely assigned by performing control flow based type analysis. See later for further details on this topic. Optional parameters and properties # Optional parameters and properties automatically have undefined added to their types, even when their type annotations don’t specifically include undefined. For example, the following two types are identical: // Compiled with --strictNullChecks type T1 = (x?: number) => string; // x has type number | undefined type T2 = (x?: number | undefined) => string; // x has type number | undefined Non-null and non-undefined type guards # A property access or a function call produces a compile-time error if the object or function is of a type that includes null or undefined. However, type guards are extended to support non-null and non-undefined checks. Example # // Compiled with --strictNullChecks declare function f(x: number): string; let x: number | null | undefined; if (x) { f(x); // Ok, type of x is number here } else { f(x); // Error, type of x is number? here } let a = x != null ? f(x) : ""; // Type of a is string let b = x && f(x); // Type of b is string | 0 | null | undefined Non-null and non-undefined type guards may use the ==, !=, ===, or !== operator to compare to null or undefined, as in x != null or x === undefined. The effects on subject variable types accurately reflect JavaScript semantics (e.g. 
double-equals operators check for both values no matter which one is specified whereas triple-equals only checks for the specified value). Dotted names in type guards # Type guards previously only supported checking local variables and parameters. Type guards now support checking “dotted names” consisting of a variable or parameter name followed one or more property accesses. Example # interface Options { location?: { x?: number; y?: number; }; } function foo(options?: Options) { if (options && options.location && options.location.x) { const x = options.location.x; // Type of x is number } }. For example, a type guard for x.y.z will have no effect following an assignment to x, x.y, or x.y.z. Expression operators # Expression operators permit operand types to include null and/or undefined but always produce values of non-null and non-undefined types. // Compiled with --strictNullChecks function sum(a: number | null, b: number | null) { return a + b; // Produces value of type number } The && operator adds null and/or undefined to the type of the right operand depending on which are present in the type of the left operand, and the || operator removes both null and undefined from the type of the left operand in the resulting union type. // Compiled with --strictNullChecks interface Entity { name: string; } let x: Entity | null; let s = x && x.name; // s is of type string | null let y = x || { name: "test" }; // y is of type Entity Type widening # The null and undefined types are not widened to any in strict null checking mode. let z = null; // Type of z is null In regular type checking mode the inferred type of z is any because of widening, but in strict null checking mode the inferred type of z is null (and therefore, absent a type annotation, null is the only possible value for z). Non-null assertion operator #. Similar to type assertions of the forms <T>x and x as T, the ! non-null assertion operator is simply removed in the emitted JavaScript code. // Compiled with --strictNullChecks function validateEntity(e?: Entity) { // Throw exception if e is null or invalid entity } function processEntity(e?: Entity) { validateEntity(e); let s = e!.name; // Assert that e is non-null and access name } Compatibility # The new features are designed such that they can be used in both strict null checking mode and regular type checking mode. In particular, the null and undefined types are automatically erased from union types in regular type checking mode (because they are subtypes of all other types), and the ! non-null assertion expression operator is permitted but has no effect in regular type checking mode. Thus, declaration files that are updated to use null- and undefined-aware types can still be used in regular type checking mode for backwards compatibility. In practical terms, strict null checking mode requires that all files in a compilation are null- and undefined-aware. Control flow based type analysis # TypeScript 2.0 implements a control flow-based type analysis for local variables and parameters. Previously, the type analysis performed for type guards was limited to if statements and ?: conditional expressions and didn’t include effects of assignments and control flow constructs such as return and break statements. With TypeScript 2.0, the type checker analyses all possible flows of control in statements and expressions to produce the most specific type possible (the narrowed type) at any given location for a local variable or parameter that is declared to have a union type. 
Example # function foo(x: string | number | boolean) { if (typeof x === "string") { x; // type of x is string here x = 1; x; // type of x is number here } x; // type of x is number | boolean here } function bar(x: string | number) { if (typeof x === "number") { return; } x; // type of x is string here } Control flow based type analysis is particularly relevant in --strictNullChecks mode because nullable types are represented using union types: function test(x: string | null) { if (x === null) { return; } x; // type of x is string in remainder of function } Furthermore, in --strictNullChecks mode, control flow based type analysis includes definite assignment analysis for local variables of types that don’t permit the value undefined. function mumble(check: boolean) { let x: number; // Type doesn't permit undefined x; // Error, x is undefined if (check) { x = 1; x; // Ok } x; // Error, x is possibly undefined x = 2; x; // Ok } Tagged union types # TypeScript 2.0 implements support for tagged (or discriminated) union types. Specifically, the TS compiler now support type guards that narrow union types based on tests of a discriminant property and furthermore extend that capability to switch statements. Example # interface Square { kind: "square"; size: number; } interface Rectangle { kind: "rectangle"; width: number; height: number; } interface Circle { kind: "circle"; radius: number; } type Shape = Square | Rectangle | Circle; function area(s: Shape) { // In the following switch statement, the type of s is narrowed in each case clause // according to the value of the discriminant property, thus allowing the other properties // of that variant to be accessed without a type assertion. switch (s.kind) { case "square": return s.size * s.size; case "rectangle": return s.width * s.height; case "circle": return Math.PI * s.radius * s.radius; } } function test1(s: Shape) { if (s.kind === "square") { s; // Square } else { s; // Rectangle | Circle } } function test2(s: Shape) { if (s.kind === "square" || s.kind === "rectangle") { return; } s; // Circle } A discriminant property type guard is an expression of the form x.p == v, x.p === v, x.p != v, or x.p !== v, where p and v are a property and an expression of a string literal type or a union of string literal types. The discriminant property type guard narrows the type of x to those constituent types of x that have a discriminant property p with one of the possible values of v. Note that we currently only support discriminant properties of string literal types. We intend to later add support for boolean and numeric literal types. The never type # TypeScript 2.0 introduces a new primitive type never. The never type represents the type of values that never occur. Specifically, never is the return type for functions that never return and never is the type of variables under type guards that are never true. The never type has the following characteristics:. Because never is a subtype of every type, it is always omitted from union types and it is ignored in function return type inference as long as there are other types being returned.) { } } Some examples of use of functions returning never: // Inferred return type is number function move1(direction: "up" | "down") { switch (direction) { case "up": return 1; case "down": return -1; } return error("Should never get here"); } // Inferred return type is number function move2(direction: "up" | "down") { return direction === "up" ? 1 : direction === "down" ? 
-1 : error("Should never get here"); } // Inferred return type is T function check<T>(x: T | undefined) { return x || error("Undefined value"); } Because never is assignable to every type, a function returning never can be used when a callback returning a more specific type is required: function test(cb: () => string) { let s = cb(); return s; } test(() => "hello"); test(() => fail()); test(() => { throw new Error(); }) Read-only properties and index signatures # A property or index signature can now be declared with the readonly modifier is considered read-only. Read-only properties may have initializers and may be assigned to in constructors within the same class declaration, but otherwise assignments to read-only properties are disallowed. In addition, entities are implicitly read-only in several situations: - A property declared with a getaccessor and no setaccessor is considered read-only. - In the type of an enum object, enum members are considered read-only properties. - In the type of a module object, exported constvariables are considered read-only properties. - An entity declared in an importstatement is considered read-only. - An entity accessed through an ES2015 namespace import is considered read-only (e.g. foo.xis read-only when foois declared as import * as foo from "foo"). Example # interface Point { readonly x: number; readonly y: number; } var p1: Point = { x: 10, y: 20 }; p1.x = 5; // Error, p1.x is read-only var p2 = { x: 1, y: 1 }; var p3: Point = p2; // Ok, read-only alias for p2 p3.x = 5; // Error, p3.x is read-only p2.x = 5; // Ok, but also changes p3.x because of aliasing class Foo { readonly a = 1; readonly b: string; constructor() { this.b = "hello"; // Assignment permitted in constructor } } let a: Array<number> = [0, 1, 2, 3, 4]; let b: ReadonlyArray<number> = a; b[5] = 5; // Error, elements are read-only b.push(5); // Error, no push method (because it mutates array) b.length = 3; // Error, length is read-only a = b; // Error, mutating methods are missing Specifying the type of this for functions # Following up on specifying the type of this in a class or an interface, functions and methods can now declare the type of this they expect. By default the type of this inside a function is any. Starting with TypeScript 2.0, you can provide an explicit this parameter. this parameters are fake parameters that come first in the parameter list of a function: function f(this: void) { // make sure `this` is unusable in this standalone function } this parameters in callbacks # Libraries can also use this parameters to declare how callbacks will be invoked. Example # interface UIElement { addClickListener(onclick: (this: void, e: Event) => void): void; } this: void means that addClickListener expects onclick to be a function that does not require a this type. Now if you annotate calling code with this: class Handler { info: string; onClickBad(this: Handler, e: Event) { // oops, used this here. using this callback would crash at runtime this.info = e.message; }; } let h = new Handler(); uiElement.addClickListener(h.onClickBad); // error! --noImplicitThis # A new flag is also added in TypeScript 2.0 to flag all uses of this in functions without an explicit type annotation. Glob support in tsconfig.json # Glob support is here!! Glob support has been one of the most requested features. Glob-like file patterns are supported two properties "include" and "exclude". 
Example # { "compilerOptions": { "module": "commonjs", "noImplicitAny": true, "removeComments": true, "preserveConstEnums": true, "outFile": "../../built/local/tsc.js", "sourceMap": true }, "include": [ "src/**/*" ], "exclude": [ "node_modules", "**/*.spec.ts" ] }, and jspm_packages directories when not specified. Module resolution enhancements: BaseUrl, Path mapping, rootDirs and tracing # TypeScript 2.0 provides a set of additional module resolution knops to inform the compiler where to find declarations for a given module. See Module Resolution documentation for more details. Base URL # Using a baseUrl is a common practice in applications using AMD module loaders where modules are “deployed” to a single folder at run-time. All module imports with non-relative names are assumed to be relative to the baseUrl. Example # { "compilerOptions": { "baseUrl": "./modules" } } Now imports to "moduleA" would be looked up in ./modules/moduleA import A from "moduleA"; Path mapping # Sometimes modules are not directly located under baseUrl. Loaders use a mapping configuration to map module names to files at run-time, see RequireJs documentation and SystemJS documentation. The TypeScript compiler supports the declaration of such mappings using "paths" property in tsconfig.json files. Example # For instance, an import to a module "jquery" would be translated at runtime to "node_modules/jquery/dist/jquery.slim.min.js". { "compilerOptions": { "baseUrl": "./node_modules", "paths": { "jquery": ["jquery/dist/jquery.slim.min"] } } Using "paths" also allow for more sophisticated mappings including multiple fall back locations. Consider a project configuration where only some modules are available in one location, and the rest are in another. Virtual Directories with rootDirs # Using ‘rootDirs’, you can inform the compiler of the roots making up this “virtual” directory; and thus the compiler can resolve relative modules imports within these “virtual” directories as if were merged together in one directory. Example # Given this project structure: src └── views └── view1.ts (imports './template1') └── view2.ts generated └── templates └── views └── template1.ts (imports './view2') A build step will copy the files in /src/views and /generated/templates/views to the same directory in the output. At run-time, a view can expect its template to exist next to it, and thus should import it using a relative name as "./template". "rootDirs" specify a list of roots whose contents are expected to merge at run-time. So following our example, the tsconfig.json file should look like: { "compilerOptions": { "rootDirs": [ "src/views", "generated/templates/views" ] } } Tracing module resolution # --traceResolution offers a handy way to understand how modules have been resolved by the compiler. tsc --traceResolution Shorthand ambient module declarations # If you don’t want to take the time to write out declarations before using a new module, you can now just use a shorthand declaration to get started quickly. declarations.d.ts # declare module "hot-new-module"; All imports from a shorthand module will have the any type. import x, {y} from "hot-new-module"; x(y); Wildcard character in module names # Importing none-code resources using module loaders extension (e.g. AMD or SystemJS) has not been easy before; previously an ambient module declaration had to be defined for each resource. 
TypeScript 2.0 supports the use of the wildcard character ( *) to declare a “family” of module names; this way, a declaration is only required once for an extension, and not for every resource. Example # declare module "*!text" { const content: string; export default content; } // Some do it the other way around. declare module "json!*" { const value: any; export default value; } Now you can import things that match "*!text" or "json!*". import fileContent from "./xyz.txt!text"; import data from "json!"; console.log(data, fileContent); Wildcard module names can be even more useful when migrating from an un-typed code base. Combined with Shorthand ambient module declarations, a set of modules can be easily declared as any. Example # declare module "myLibrary/*"; All imports to any module under myLibrary would be considered to have the type any by the compiler; thus, shutting down any checking on the shapes or types of these modules. import { readFile } from "myLibrary/fileSystem/readFile`; readFile(); // readFile is 'any' Support for UMD module definitions # Some libraries are designed to be used in many module loaders, or with no module loading (global variables). These are known as UMD or Isomorphic modules. These libraries can be accessed through either an import or a global variable. For example: math-lib.d.ts # export const isPrime(x: number): boolean; export as namespace mathLib; The library can then be used as an import within modules: import { isPrime } from "math-lib"; isPrime(2); mathLib.isPrime(2); // ERROR: can't use the global definition from inside a module It can also be used as a global variable, but only inside of a script. (A script is a file with no imports or exports.) mathLib.isPrime(2); Optional class properties # Optional properties and methods can now be declared in classes, similar to what is already permitted in interfaces. Example # class Bar { a: number; b?: number; f() { return 1; } g?(): number; // Body of optional method can be omitted h?() { return 2; } } When compiled in --strictNullChecks mode, optional properties and methods automatically have undefined included in their type. Thus, the b property above is of type number | undefined and the g method above is of type (() => number) | undefined. Type guards can be used to strip away the undefined part of the type: function test(x: Bar) { x.a; // number x.b; // number | undefined x.f; // () => number x.g; // (() => number) | undefined let f1 = x.f(); // number let g1 = x.g && x.g(); // number | undefined let g2 = x.g ? x.g() : 0; // number } Private and Protected Constructors # A class constructor may be marked private or protected. A class with private constructor cannot be instantiated outside the class body, and cannot be extended. A class with protected constructor cannot be instantiated outside the class body, but can be extended. Example # class Singleton { private static instance: Singleton; private constructor() { } static getInstance() { if (!Singleton.instance) { Singleton.instance = new Singleton(); } return Singleton.instance; } } let e = new Singleton(); // Error: constructor of 'Singleton' is private. let v = Singleton.getInstance(); Abstract properties and accessors # An abstract class can declare abstract properties and/or accessors. Any sub class will need to declare the abstract properties or be marked as abstract. Abstract properties cannot have an initializer. Abstract accessors cannot have bodies. 
Example # abstract class Base { abstract name: string; abstract get value(); abstract set value(v: number); } class Derived extends Base { name = "derived"; value = 1; } Implicit index signatures # An object literal type is now Including built-in type declarations with --lib # Getting to ES6/ES2015 built-in API declarations were only limited to target: ES6. Enter --lib; with --lib you can specify a list of built-in API declaration groups that you can chose to include in your project. For instance, if you expect your runtime to have support for Map, Set and Promise (e.g. most evergreen browsers today), just include --lib es2015.collection,es2015.promise. Similarly you can exclude declarations you do not want to include in your project, e.g. DOM if you are working on a node project using --lib es5,es6. Here is a list of available API groups: - Example # tsc --target es5 --lib es5,es2015.promise "compilerOptions": { "lib": ["es5", "es2015.promise"] } Flag unused declarations with --noUnusedParameters and --noUnusedLocals # TypeScript 2.0 has two new flags to help you maintain a clean code base. --noUnusedParameters flags any unused function or method parameters errors. --noUnusedLocals flags any unused local (un-exported) declaration like variables, functions, classes, imports, etc… Also, unused private members of a class would be flagged as errors under --noUnusedLocals. Example # import B, { readFile } from "./b"; // ^ Error: `B` declared but never used readFile(); export function write(message: string, args: string[]) { // ^^^^ Error: 'arg' declared but never used. console.log(message); } Parameters declaration with names starting with _ are exempt from the unused parameter checking. e.g.: function returnNull(_a) { // OK return null; } Module identifiers allow for .js extension # Before TypeScript 2.0, a module identifier was always assumed to be extension-less; for instance, given an import as import d from "./moduleA.js", the compiler looked up the definition of "moduleA.js" in ./moduleA.js.ts or ./moduleA.js.d.ts. This made it hard to use bundling/loading tools like SystemJS that expect URI’s in their module identifier. With TypeScript 2.0, the compiler will look up definition of "moduleA.js" in ./moduleA.ts or ./moduleA.d.t. Support ‘target : es5’ with ‘module: es6’ # Previously flagged as an invalid flag combination, target: es5 and ‘module: es6’ is now supported. This should facilitate using ES2015-based tree shakers like rollup. Trailing commas in function parameter and argument lists # Trailing comma in function parameter and argument lists are now allowed. This is an implementation for a Stage-3 ECMAScript proposal that emits down to valid ES3/ES5/ES6. Example # function foo( bar: Bar, baz: Baz, // trailing commas are OK in parameter lists ) { // Implementation... } foo( bar, baz, // and in argument lists ); New --skipLibCheck # TypeScript 2.0 adds a new --skipLibCheck compiler option that causes type checking of declaration files (files with extension .d.ts) to be skipped. When a program includes large declaration files, the compiler spends a lot of time type checking declarations that are already the declaration file is checked. However, in practice such situations are rare. Allow duplicate identifiers across declarations # This has been one common source of duplicate definition errors. Multiple declaration files defining the same members on interfaces. TypeScript 2.0 relaxes this constraint and allows duplicate identifiers across blocks, as long as they have identical types. 
Within the same block duplicate definitions are still disallowed. Example # interface Error { stack?: string; } interface Error { code?: string; path?: string; stack?: string; // OK } New --declarationDir # --declarationDir allows for generating declaration files in a different location than JavaScript files.
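For example (an illustrative tsconfig.json fragment, not from the release notes; the folder names are assumptions), --declarationDir is typically combined with "declaration": true so that the .d.ts files land in their own folder while the compiled JavaScript goes to outDir:

{
  "compilerOptions": {
    "declaration": true,
    "declarationDir": "./typings",
    "outDir": "./dist"
  }
}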
https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-0.html
CC-MAIN-2020-10
en
refinedweb
When you are first starting out learning how to program, one of the first things you will want to learn is what an error message means. In Python, error messages are usually called tracebacks. Here are some common traceback errors: - SyntaxError - ImportError or ModuleNotFoundError - AttributeError - NameError When you get an error, it is usually recommended that you trace through it backwards (i.e. traceback). So start at the bottom of the traceback and read it backwards. Let’s take a look at a few simple examples of tracebacks in Python. Syntax Error A very common error (or exception) is the SyntaxError. A syntax error happens when the programmer makes a mistake when writing the code out. They might forget to close an open parentheses, or use a mix of quotes around a string on accident, for instance. Let’s take a look at an example I ran in IDLE: >>> print('This is a test) SyntaxError: EOL while scanning string literal Here we attempt to print out a string and we receive a SyntaxError. It tells us that the error has something to do with it not finding the End of Line (EOL). In this case, we didn’t finish the string by ending the string with a single quote. Let’s look at another example that will raise a SyntaxError: def func return 1 When you run this code from the command line, you will receive the following message: File "syn.py", line 1 def func ^ SyntaxError: invalid syntax Here the SyntaxError says that we used “invalid syntax”. Then Python helpfully uses an arrow (^) to point out exactly where we messed up the syntax. Finally we learn that the line of code we need to look at is on “line 1”. Using all of these facts, we can quickly see that we forgot to add a pair of parentheses followed by a colon to the end of our function definition. Import Errors Another common error that I see even with experienced developers is the ImportError. You will see this error whenever Python cannot find the module that you are trying to import. Here is an example: >>> import some Traceback (most recent call last): File " ", line 1, in ImportError: No module named some Here we learn that Python could not find the “some” module. Note that in Python 3, you might get a ModuleNotFoundError error instead of ImportError. ModuleNotFoundError is just a subclass of ImportError and means virtually the same thing. Regardless which exception you end up seeing, the reason you see this error is because Python couldn’t find the module or package. What this means in practice is that the module is either incorrectly installed or not installed at all. Most of the time, you just need to figure out what package that module is a part of and install it using pip or conda. AttributeError The AttributeError is really easy to accidentally hit, especially if you don’t have code completion in your IDE. You will get this error when you try to call an attribute that does not exist: >>>>> my_string.up() Traceback (most recent call last): File " ", line 1, in my_string.up() AttributeError: 'str' object has no attribute 'up' Here I tried to use a non-existent string method called “up” when I should have called “upper”. Basically the solution to this problem is to read the manual or check the data type and make sure you are calling the correct attributes on the object at hand. NameError The NameError occurs when the local or global name is not found. If you are new to programming that explanation seems vague. What does it mean? 
Well in this case it means that you are trying to interact with a variable or object that hasn’t been defined. Let’s pretend that you open up a Python interpreter and type the following: >>> print(var) Traceback (most recent call last): File " ", line 1, in print(var) NameError: name 'var' is not defined Here you find out that ‘var’ is not defined. This is easy to fix in that all we need to do is set “var” to something. Let’s take a look: >>> var = 'Python' >>> print(var) Python See how easy that was? Wrapping Up There are lots of errors that you will see in Python and knowing how to diagnose the cause of those errors is really useful when it comes to debugging. Soon it will become second nature to you and you will be able to just glance at the traceback and know exactly what happened. There are many other built-in exceptions in Python that are documented on their website and I encourage you to become familiar with them so you know what they mean. Most of the time, it should be really obvious though. Related Reading - Python documentation on Errors and Exceptions - Built-in Exceptions in Python - The traceback module - Handling Exceptions in Python
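As a small follow-up to the related reading (my example, not from the original post), the traceback module lets you print the same traceback text from inside an except block, so a program can log the error and keep running:

import traceback

def divide(a, b):
    return a / b

try:
    divide(1, 0)
except ZeroDivisionError:
    # print_exc() writes the same traceback you would see for an uncaught error,
    # but execution continues afterwards.
    traceback.print_exc()
    print("Recovered from the error and kept going")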
http://www.blog.pythonlibrary.org/2018/07/24/understanding-tracebacks-in-python/
CC-MAIN-2020-10
en
refinedweb
#include <unistd.h>
int setuid(uid_t uid);
https://nxmnpg.lemoda.net/2/setuid
CC-MAIN-2020-10
en
refinedweb
Representational State Transfer (REST): - Representational State Transfer (REST) is an architectural style that specifies constraints, such as the uniform interface, is very lightweight, and relies upon the HTTP standard to do its work. It is great to get a useful web service up and running quickly. If you don't need a strict API definition, this is the way to go. REST essentially requires HTTP, and is format-agnostic (meaning you can use XML, JSON, HTML, whatever). Why SOAP - Basically SOAP web services are very well established for years and they follow a strict specification that describes how to communicate with them based on the SOAP specification. SOAP (using WSDL) is a heavy-weight XML standard that is centered around document passing. The advantage with this is that your requests and responses can be very well structured, and can even use a DTD. However, this is good if two parties need to have a strict contract (say for inter-bank communication). SOAP also lets you layer things like WS-Security on your documents. SOAP is generally transport-agnostic, meaning you don't necessarily need to use HTTP. Why Rest Services -Now REST web services are a bit newer and basically look like simpler because they are not using any communication protocol. Basically what you send and receipt when you use a REST web service is plain XML / JSON through an simple user define Uri (through UriTemplate). Which and When (Rest Or Soap) -Generally if you don't need fancy WS-* features and want to make your service as lightweight, example calling from mobile devices like cell phone, pda etc. REST specifications are generally human-readable only. How to Make Rest Services -In .Net Microsoft provide the rest architecture building api in WCF application that manage request and response over webHttpBinding protocol here is the code how to create simple rest service through WCF (Only GET http method example). We can also make our own rest architecture manually through http handler to creating the rest service that would be more efficient and simple to understand what going on inside the rest, we’ll discuss it further at next post. Just add new WCF application/services or make a website and add a .srv (WCF Service) in it. CS Part: (IRestSer.cs, RestSer.cs) -Below is the rest service contract (interface) part and implementation of contract methods part, It is required to add WebInvoke or WebGet attribute on the top of the function that we need to call through webHttpBinding protocol, Methods include Get, Post, Put, Delete that is http methods. UriTemplate is required to make fake host for rest service, Uri address actually not exist in real time it is created by WebServiceHostFactory at runtime (when application starts). Value inside curly braces ({name},{id}) matches with function parameter and Rest api passes value at runtime to corresponding parameters. 
IRestSer.cs Part (Contract Interface):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;
using System.ServiceModel.Web;

[ServiceContract]
public interface IRestSer
{
    [OperationContract]
    [WebInvoke(Method = "GET", UriTemplate = "/FakeUrl/P1={Name}", ResponseFormat = WebMessageFormat.Json, RequestFormat = WebMessageFormat.Json)]
    string SayHello(string Name);
}

RestSer.cs Part (Implementation of the contract):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

public class RestSer : IRestSer
{
    public string SayHello(string Name)
    {
        return "Hello " + Name;
    }
}

RestSer.svc Part - Below is the REST service .svc part, where you have to mention the Factory attribute and set its value to System.ServiceModel.Activation.WebServiceHostFactory as shown below. This is required for communication with the UriTemplate and makes the fake URL host according to the UriTemplate.

<%@ ServiceHost Language="C#" Debug="true" Service="RestSer" Factory="System.ServiceModel.Activation.WebServiceHostFactory" %>

You also need to change the binding protocol to webHttpBinding, as REST services are called over the webHttpBinding protocol only. Note that the service name and endpoint contract must match the class and interface defined above (RestSer and IRestSer):

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="RestBehavior">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <services>
    <service behaviorConfiguration="RestBehavior" name="RestSer">
      <endpoint address="" binding="webHttpBinding" contract="IRestSer">
        <identity>
          <dns value="localhost" />
        </identity>
      </endpoint>
    </service>
  </services>
</system.serviceModel>

Calling through a direct Uri / Ajax request (make a virtual directory in IIS named RestExample):

<script type="text/javascript">
    function CallgetJSON() {
        $.getJSON("", function(data) {
            alert(data);
        });
    }
</script>

Output: Or just type your fake UriTemplate Url in the browser address bar; below is the result. Or you can use the WebClient, WebRequest and WebResponse classes in ASP.NET C# for calling the REST service from server-side code.
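Since the last sentence mentions WebClient but shows no code, here is a minimal hedged C# sketch of calling the same fake Uri from server-side code. The full URL is an assumption based on the IIS setup described above (a virtual directory named RestExample hosting RestSer.svc); adjust it to your own host and path.

using System;
using System.Net;

class RestClientExample
{
    static void Main()
    {
        // Assumed address: localhost + RestExample virtual directory + .svc file + UriTemplate.
        string url = "http://localhost/RestExample/RestSer.svc/FakeUrl/P1=World";

        using (var client = new WebClient())
        {
            // SayHello returns the JSON-serialized string, e.g. "Hello World".
            string json = client.DownloadString(url);
            Console.WriteLine(json);
        }
    }
}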
https://www.dotnetbull.com/2012/07/rest-services-tutorial-aspnet.html
CC-MAIN-2020-10
en
refinedweb
Google Cloud Dataflow SDK for Java, version 1.9.1
Class PubsubUnboundedSource<T>
- java.lang.Object
- com.google.cloud.dataflow.sdk.transforms.PTransform<PBegin,PCollection<T>>
- com.google.cloud.dataflow.sdk.io.PubsubUnboundedSource<T>
- All Implemented Interfaces: HasDisplayData, Serializable
public class PubsubUnboundedSource<T> extends PTransform<PBegin,PCollection<T>> - A PTransform which streams messages from Pubsub.
- The underlying implementation is an UnboundedSource which receives messages in batches and hands them out one at a time.
- The watermark (either in Pubsub processing time or custom timestamp time) is estimated by keeping track of the minimum of the last minute's worth of messages. This assumes Pubsub delivers the oldest (in Pubsub processing time) available message at least once a minute, and that custom timestamps are 'mostly' monotonic with Pubsub processing time. Unfortunately both of those assumptions are fragile. Thus the estimated watermark may get ahead of the 'true' watermark and cause some messages to be late.
- Checkpoints are used both to ACK received messages back to Pubsub (so that they may be retired on the Pubsub end), and to NACK already consumed messages should a checkpoint need to be restored (so that Pubsub will resend those messages promptly).
- The backlog is determined by each reader using the messages which have been pulled from Pubsub but not yet consumed downstream. The backlog does not take account of any messages queued by Pubsub for the subscription. Unfortunately there is currently no API to determine the size of the Pubsub queue's backlog.
- The subscription must already exist.
- The subscription timeout is read whenever a reader is started. However it is not checked thereafter despite the timeout being user-changeable on-the-fly.
- We log vital stats every 30 seconds.
- Though some background threads may be used by the underlying transport, all Pubsub calls are blocking. We rely on the underlying runner to allow multiple UnboundedSource.UnboundedReader instances to execute concurrently and thus hide latency.
NOTE: This is not the implementation used when running on the Google Cloud Dataflow service.
Inherited methods (from PTransform) include populateDisplayData, toString, and validate.
Constructor Detail
PubsubUnboundedSource: public PubsubUnboundedSource(com.google.cloud.dataflow.sdk.util.PubsubClient.PubsubClientFactory pubsubFactory, @Nullable ValueProvider<com.google.cloud.dataflow.sdk.util.PubsubClient.ProjectPath> project, @Nullable ValueProvider<com.google.cloud.dataflow.sdk.util.PubsubClient.TopicPath> topic, @Nullable ValueProvider<com.google.cloud.dataflow.sdk.util.PubsubClient.SubscriptionPath> subscription, Coder<T> elementCoder, @Nullable String timestampLabel, @Nullable String idLabel) - Construct an unbounded source to consume from the Pubsub subscription.
Method Detail
getProject: @Nullable public com.google.cloud.dataflow.sdk.util.PubsubClient.ProjectPath getProject()
getTopic: @Nullable public com.google.cloud.dataflow.sdk.util.PubsubClient.TopicPath getTopic()
getTopicProvider: @Nullable public ValueProvider<com.google.cloud.dataflow.sdk.util.PubsubClient.TopicPath> getTopicProvider()
getSubscription: @Nullable public com.google.cloud.dataflow.sdk.util.PubsubClient.SubscriptionPath getSubscription()
getSubscriptionProvider: @Nullable public ValueProvider<com.google.cloud.dataflow.sdk.util.PubsubClient.SubscriptionPath> getSubscriptionProvider()
apply: public PCollection<T> apply(PBegin input)
https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/io/PubsubUnboundedSource.html?hl=ca
CC-MAIN-2020-10
en
refinedweb
A generic settings block. More... #include <settings.h> A generic settings block. Definition at line 298 of file settings.h. Settings block. Definition at line 300 of file settings.h. Referenced by autovivified_settings_free(), generic_settings_init(), netdev_settings(), and netdev_settings_init(). List of generic settings. Definition at line 302 of file settings.h. Referenced by find_generic_setting(), generic_settings_clear(), generic_settings_init(), and generic_settings_store().
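Putting the two documented members together, the block boils down to the following sketch. It is reconstructed from the member list above rather than copied from settings.h, so treat it as illustrative.

struct generic_settings {
    /* Settings block (settings.h line 300) */
    struct settings settings;
    /* List of generic settings (settings.h line 302) */
    struct list_head list;
};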
https://dox.ipxe.org/structgeneric__settings.html
CC-MAIN-2020-10
en
refinedweb
Apache Camel Deployment Modes Apache Camel Deployment Modes Join the DZone community and get the full member experience.Join For Free In this part of the article I will present various runtime and deployment modes that Apache Camel provides. There are so many possibilities and I won’t discuss them all but I will try to focus on the most popular and useful ones such as: 1. Maven goal (useful for testing purposes), 2. Main method (useful for JSE applications), 3. Web container (useful for web applications), 4. OSGi (useful for modular development). Before I commence, I need sample applications to deploy them in different ways. Fortunately, Camel is distributed with a few archetypes for maven users. I’ll use three of them: • camel-archetype-spring, • camel-archetype-web, • camel-archetype-blueprint. To generate the project, use the following command: mvn archetype:generate \ -DarchetypeGroupId=org.apache.camel.archetypes \ -DarchetypeArtifactId=*archetype-name* \ -DarchetypeVersion=2.9.0 \ -DarchetypeRepository= where *archetype-name* is the name of one of the above mentioned archetypes. Maven goal This is extremely simple: just create camel-archetype-spring and execute command mvn camel:run. That’s it! We have a fully working Apache Camel application. Take a look at camel-context.xml. You will notice that the file code is exactly the same as in the previous article. Main method If you need to create simple integration platform that will run as a classical JSE Main application, you need to use Main class from org.apache.camel.main package. Below there is a code for a very simple application that will print on your console "Invoked at " + new Date() every 2 seconds (2000 miliseconds). import java.util.Date; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.main.Main; public class MainExample { private Main main; public static void main(String[] args) throws Exception { MainExample example = new MainExample(); example.boot(); } public void boot() throws Exception { main = new Main(); ① main.enableHangupSupport(); ② main.addRouteBuilder(new MyRouteBuilder()); ③ main.run(); } private static class MyRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { from("timer:foo?delay=2000") .process(new Processor() { public void process(Exchange exchange) throws Exception { System.out.println("Invoked at " + new Date()); } }); } } } Line ① enables hangup support which means that you can terminate the JVM using ctrl + C. Line ② adds route builder with simple logic. Line ③ runs application. Web container Firstly create camel-archetype-web and then execute command jetty:run. The message “Hello Web Application, how are you?” will then appear every 10 seconds. Let’s take a closer look at the Camel configuration and web descriptor. > Web descriptor registers Spring context loader listener with specified context configuration location which contains Camel context. Camel context handler then starts all defined routes. OSGi OSGi is the specification that describe module and service based platform for the Java. For more information about OSGi, visit . To begin, you need to generate project from camel-archetype-blueprint. Single module (in our example jar) in OSGi is called bundle. 
Firstly let’s take a look at maven configuration file (pom.xml) at line 167: <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <version>2.3.4</version> <extensions>true</extensions> <configuration> <instructions> <Bundle-SymbolicName>camel-blueprint</Bundle-SymbolicName> <Private-Package>com.blogspot.mw.camel.blueprint.*</Private-Package> <Import-Package>*,org.apache.camel.osgi</Import-Package> </instructions> </configuration> </plugin> The code shows Apache Felix plugin execution configuration that will generate MANIFEST.MF file in META-INF directory. Private-Package indicates which of the available packages to copy into the bundle but not export. Import-Package indicates which of the available external packages to use. In our case we need to import org.apache.camel.osgi. Below you can see generated MANIFEST.MF: Manifest-Version: 1.0 Export-Package: com.blogspot.mw.camel.blueprint Bundle-Version: 1.0.0.SNAPSHOT Build-Jdk: 1.6.0_25 Built-By: Michal Tool: Bnd-1.15.0 Bnd-LastModified: 1338148449092 Bundle-Name: A Camel Blueprint Route Bundle-ManifestVersion: 2 Created-By: Apache Maven Bundle Plugin Import-Package: org.apache.camel.osgi,org.osgi.service.blueprint;versi on="[1.0.0,2.0.0)" Bundle-SymbolicName: camel-blueprint The second file worth discussing is blueprint.xml from src/main/resources/OSGI-INF/blueprint/blueprint.xml. Blueprint specification describe information about dependency injection and inversion of control for OSGi environment. As you can see in the code hereunder this is very similar to Spring configuration (in fact blueprint derives Spring configuration): <blueprint xmlns="" xmlns: <bean id="helloBean" class="com.blogspot.mw.camel.blueprint.HelloBean"> <property name="say" value="Hi from Camel"/> </bean> <camelContext trace="false" id="blueprintContext" xmlns=""> <route id="timerToLog"> <from uri="timer:foo?period=5000"/> <setBody> <method method="hello" ref="helloBean"></method> </setBody> <log message="The message contains ${body}"/> <to uri="mock:result"/> </route> </camelContext> </blueprint> To run this example we have to download some OSGi runtime environment. I’ll use for this purpose Apache ServiceMix (). To run ServiceMix, execute servicemix.bat (or servicemix.sh if your operating system is unix based). Next you have to install bundle into ServixeMix from Maven repository. In order to do this execute command osgi:install -s mvn:project-group-id/project-articafact-id. In my case that is osgi:install –s mvn:com.blogspot.mw/camel-blueprint. Now you can display logs by typing command log:display. In the screen above you can see the logs so the application is up and running. To summerize, in this part of the article we have learned how to deploy Apache Camel applications in four different ways. Examples were very simple so in the next part I will show a more complex integration solution using a few components like file, activemq, email and webservice. }}
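For comparison with the Blueprint XML route above, roughly the same timerToLog route can be expressed with Camel's Java DSL. This is only an illustrative sketch: it assumes the HelloBean class generated by camel-archetype-blueprint is on the classpath, and it does not inject the say property the way the Blueprint bean definition does.

import org.apache.camel.builder.RouteBuilder;

public class TimerToLogRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:foo?period=5000")
            // the return value of HelloBean.hello() becomes the message body
            .bean(new HelloBean(), "hello")
            .log("The message contains ${body}")
            .to("mock:result");
    }
}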
https://dzone.com/articles/apache-camel-deployment-modes
CC-MAIN-2020-10
en
refinedweb
YAMI4 is a messaging solution for distributed systems. ThreadX is a Real-Time Operating System (RTOS) designed specifically for deeply embedded, real-time, and IoT applications. NetX is an advanced IP network stack, designed as part of the ThreadX solution family. FreeRTOS and the LwIP network stack. Article outline: IDE workspace and options Running the subscription example Both ThreadX and NetX are provided with the IAR project files, which makes it easier to integrate them within a single workspace (a workspace is a collection of projects). In order to fit this scheme, similar project definitions exist for the YAMI4 libraries - the core library, which provides basic messaging services, and the general-purpose C++ library, which supports higher-level messaging patterns on top of the core library. Of course, the actual application also needs to be included in the workspace, so the complete workspace setup might look like this: Above, netx, tx, YAMI4-ThreadX-core and YAMI4-ThreadX-cpp are library projects and Test is the actual application program that directly relies on the general-purpose C++ library, but indirectly requires all the others. The NetX library comes with the driver that is dedicated to the target device. This driver, composed of two source files that are added to the application project, is dedicated for a single device or a family of related devices and might not be appropriate for other boards. Both YAMI4 library project files are included in the YAMI4 package, in src/core and src/cpp directories and the workspace setup described here assumes that ThreadX, NetX,): Interestingly, just setting the heap size value is sufficient for the code generated by STM32CubeMX, but the other components also need a hint for the "free space" border in the memory space. This hint can be provided by adding the following line at the end of the generated Test/EWARM/stm32f429xx_flash.icf file: place in RAM_region { last section FREE_MEM }; Once the project skeleton is generated by STM32CubeMX and put together with the ThreadX, NetX C++ language can be used to its reasonable extent. The YAMI4 Core library does not use exceptions or RTTI, but calls back into higher levels that do. Finally, the YAMI4 Core library relies on services provided by ThreadX and NetX and also has its own system-abstraction layer that needs to be referenced. The include directories reflect these dependencies. Note also the YAMI4_WITH_NETX symbol, it needs to be defined to enable code paths specific to the NetX integration. For the general-purpose YAMI4 C++ library: Above, the include paths refer to the other library projects. Apart from the nxd and tx directories higher in the tree, the dependencies point to the content of the YAMI4 distribution package. Note again the YAMI4_WITH_NETX symbol. Finally, for the actual application: The application project was generated by STM32CubeMX and most of the include directories are for the involved CMSIS drivers. Still, it is necessary to provide dependencies to ThreadX, NetX and YAMI4. Note again the YAMI4_WITH_NETX symbol added to the list, it is still needed, even at the application level. Above, the application contains both C and C++ files and all C++ facilities need to be available. ... and of course the application needs to link with all involved libraries (ThreadX, NetX, YAMI4 Core and YAMI4 C++), picked from where they are built. 
Unfortunately, the STM32CubeMX tool and ThreadX/NetX do not know about each other and a bit of manual work needs to be done to properly integrate all involved components. Interrupts Both CubeMX-generated code and ThreadX rely on SysTick, but two handlers cause linker conflicts. A possible way to resolve them is to modify the tx_initialize_low_level.s source file, where the SysTick_Handler function is defined and comment out (or remove) the lines that define that symbol: ; PUBLIC SysTick_Handler PUBLIC __tx_SysTickHandler ;SysTick_Handler: __tx_SysTickHandler: ; VOID SysTick_Handler (VOID) ; { ; PUSH {lr} #ifdef TX_ENABLE_EXECUTION_CHANGE_NOTIFY BL _tx_execution_isr_enter ; Call the ISR enter function #endif BL _tx_timer_interrupt #ifdef TX_ENABLE_EXECUTION_CHANGE_NOTIFY BL _tx_execution_isr_exit ; Call the ISR exit function #endif POP {lr} BX LR ; } Fortunately, the __tx_SysTickHandler is also defined for the same function, so it is still possible to call it from the other handler - the one generated by STM32CubeMX, in the stm32f4xx_it.c file: void __tx_SysTickHandler(); void SysTick_Handler(void) { HAL_IncTick(); __tx_SysTickHandler(); } Above, the __tx_SysTickHandler() function (the one defined in ThreadX) is declared and then called from the single SysTick_Handler handler, in addition to the CubeMX-generated processing. Another linker conflict relates to the PendSV_Handler, which can be commented out entirely in the stm32f4xx_it.c file, because that version is by default empty. Finally, the ETH_IRQHandler should be commented out, too - the one defined by NetX will take over and properly process the interrupts delivered by the ETH peripheral. The main function The main function initializes the hardware components as defined by the STM32CubeMX project configurator and includes an example infinite loop where the user code can be written. In the case of network-oriented application that infinite loop is elsewhere and is more involved, so the only thing that needs to be done in the main.c file is a call to the actual example code in the publisher.cpp file: void run_publisher(); int main(void) { // STM32CubeMX-generated initialization calls: HAL_Init(); SystemClock_Config(); MX_GPIO_Init(); MX_ETH_Init(); MX_USART3_UART_Init(); MX_USB_OTG_FS_PCD_Init(); // transfer control to the actual application: run_publisher(); } Above, the run_publisher function is declared and then called instead of the infinite loop at the end of main., ThreadX and NetX.
http://www.inspirel.com/articles/YAMI4_ThreadX_STM32F429.html
CC-MAIN-2020-10
en
refinedweb
US8996563B2 - High-performance streaming dictionary - Google Patents
High-performance streaming dictionary - Publication number: US8996563B2 - Application: US12/755,391 (US75539110A) - Authority: US (United States) - Prior art keywords: leaf, node, key, comprises
Images Classifications - G06F17/30
The invention relates to the storage of information on computer-readable media such as disk drives, solid-state disk drives, and other data storage systems.
An example of a system storing information comprises a computer attached to a hard disk drive. The computer stores data on the hard disk drive. The data is organized as tables, each table comprising a sequence of records. For example, a payroll system might have a table of employees. Each record corresponds to a single employee and includes, for example, - 1. First name (a character string), - 2. Last name (a character string), - 3. Social Security Number (a nine-digit integer), - 4. A birth date (a date), and - 5. An annual salary, in cents (a number). The system might maintain another table listing all of the payments that have been made to each employee. This table might include, for example, - 1. Social Security Number, - 2. Payroll date (a date), and - 3. Gross pay (a number). The employee table might be maintained in sorted order according to social security number. By keeping the data sorted, the system may be able to find an employee quickly. For example, the data were not sorted then the system might have to search through every record to find an employee. If the data is kept sorted, on the other hand, then the system could find an employee by using a divide-and-conquer approach, in the same way that one can look up a phone number in a hardcopy phone book by dividing the book in two, and determining whether your party is in the first half or the second half, and then repeating this divide-and-conquer approach on the selected half. The problem of efficiently maintaining sorted data can become more difficult when disk drives or other real data storage systems are used. Storage systems often have interesting performance properties. For example, having read a record from disk, it is typically much cheaper to read the next record than it is to read a record at the other end of the table. Many storage systems exhibit “locality” in which accessing a set of data that is stored near each other is cheaper than accessing data that distributed far and wide. This invention can be used to maintain data, including but not limited to these sorted tables, as well as other uses where data needs to be organized in a computer system. This invention can be used to implement dictionaries. Many databases or file systems employ a dictionary mapping keys to values. A dictionary is a collection of keys, and sometimes includes values. In some systems, when data is stored in a disk storage system, the data is stored in a dictionary data structure stored on the disk storage system, and data is fetched from the disk storage system by accessing the a dictionary. In some systems, there is a computer-readable medium having computer-readable code thereon, where the code encodes instructions for storing data in a disk storage system. The computer readable medium includes instructions for defining a dictionary data structure stored on the disk storage system. In some systems, a computerized device is configured to process operations disclosed herein. In such a system the computerized device comprises a processor, a main memory, and a disk. The memory system is encoded with a process that provides a high-performance streaming dictionary that when performed (e.g. when executing) on the processor, operates within the computerized device to perform operations explained herein. Other systems that are disclosed herein include software programs to perform the operations summarized above and disclosed in detail below. More particularly, a computer program product can implement such a system. 
The computer program logic, when executed on at least one processor in a computing system, causes the processor to perform the operations indicated herein. Such arrangements of logic can be provided as software, code and/or other data structures arranged or encoded on a computer readable medium, or combinations thereof, including but not limited to an optical medium (for example, CD-ROM), floppy or hard disk (for example, rotating magnetic media, solid state drive, etc.) or other media including but not limited to firmware or microcode in one or more ROM or RAM or PROM chips or as an Application Specific Integrated Circuit (ASIC), networked memory servers, or as downloadable software images in one or more modules, shared libraries, etc. The software or firmware or other such configurations can be installed onto a computerized device to cause one or more processors in the computerized device to perform the techniques explained herein. Software processes that operate in a collection of computerized devices, including but not limited to in a group of data communications devices or other entities can also provide the system described here. The system can be distributed between many software processes on several data communications devices, or all processes could run on a small set of dedicated computers, or on one computer alone. The system can be implemented as a data storage system, or as a software program, or as circuitry, or a some combination, including but not limited to a data storage device. The system may be employed in data storage devices and/or software systems for such devices. The memory system of a computer typically comprises one or more storage devices. Often the devices are organized into levels in a storage hierarchy. Examples of devices include registers, first-level cache, second-level cache, main memory, a hard drive, the cache inside a hard drive, tape drives, and network-attached storage. As technology develops other devices may be developed. Additional examples of storage devices will be apparent to one of ordinary skill in the art. In this patent, we often describe the system as though it consists of only two levels in a hierarchy, and discuss how to optimize the number of transfers between one level and another. But the same analysis applies whether considering transfers from cache to main memory, or transfers from main memory to disk, or transfers between main memory and a hard drive, or transfers between any two storage devices, even if they are not organized into levels in a hierarchy. And a memory hierarchy can comprise many different levels. For convenience of description we will often refer to one device as RAM, in-RAM, in-memory, internal memory, main memory, or fast memory, whereas we will refer to a second level as disk, out of memory, on disk, or slow memory. It will be apparent to one of ordinary skill in the art that a dictionary can be implemented to use combinations of storage devices, such pairs including cache versus main memory, different parts of cache, main memory versus disk cache, disk cache versus disk, disk versus network attached storage, registers versus cache, etc. Furthermore, a dictionary can be implemented using more than two storage devices, for example using all of the storage devices mentioned above. Instead of analyzing the number transfers between two devices which are adjacent in a storage hierarchy, one could analyze the transfers between non-adjacent levels of memory, or between any two devices of a memory system. 
Furthermore, there could be multiple instances of each level, that is, there might be multiple caches, for example one or more for each processor or there may be multiple disks. may be practiced in a computer system that operates on data stored in a computer. Typically a computer system, as illustrated in A system is said to cache a value if it stores the value in a faster part of the memory hierarchy rather than in a slower part of the memory hierarchy (or rather than precomputing the value). For example, the system may cache blocks in the cache (102) from RAM (103). It may cache values in RAM (103) that might otherwise require accessing disk (104). Or if a value is expensive to compute, it may cache a copy of that value, to avoid precomputing the value in the future. In a typical mode of operation, the system operates as shown in In one mode of operation, the system organizes data in a tree structure. A tree comprises nodes. A node comprises a sequence of children. If a node has no children, then it is called a leaf node. If the node has children, it is called a nonleaf node or internal node. There is one root node that has no parent. All other nodes have exactly one parent. The tree nodes can have different on-disk and in-RAM representations. When a tree node is read from disk and brought into internal RAM, the node is first converted from the on-disk data format to the in-RAM data format, if different. When a tree node is evicted from RAM and written back to disk, it is converted from the in-RAM data format back to the on-disk data format, if different. Each leaf node (306, 307, 308, and 309 respectively) includes three employee records, each with a social security number, a name, and a salary. The leaf nodes collectively contain the employee records (310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, and 321). To find a given employee, such as the one with social security number 333-22-2222, the system examines the root node (301) and determines that the pivot key (305) stored therein is less than the employee's, so the system examines the right child (304), where it discovers that the pivot key stored therein is greater than the employee's, so the system examines the left leaf node (308) of the right child (304) where it can find the complete record of the employee. When the system needs to examine a node, including the root node (301), the system may be storing the node on disk (104) or in memory (103). If the node is not in memory (103), then the system moves a copy of the node from disk (104) to memory (103), changing the node's representation if appropriate. If memory (103) becomes full, the system may write nodes back 225 from memory (103) to disk (104). If the in-memory node has not been changed since it was read, the system may simply delete the node from memory (103) if there is a still a copy on disk (104). There are variations on this memory hierarchy. For example, the data can be moved between any two or more different types of computer-readable media. Another example is that the system may sometimes store data to disk in the in-RAM format, and sometimes store data in the on-disk 230 format. Alternatively, organizing data with different representations can be employed for other structures besides trees. For example, if the dictionary is a cache-oblivious look-ahead array or a cascading array, then different in-RAM and on-disk representations could be employed for different subarrays. In the example of In a tree, the height of a leaf node is 0. 
The height of a nonleaf node is one plus the maximum height of its children nodes. The depth of the root node is 0. The depth of a nonroot node is one plus the depth of its parent. The system employs trees of uniform depth. That is, all of the leaves are the same depth. Alternatively, the tree could be of nonuniform depth, or that system could employ another structure. For example a system could employ a structure in which some nodes have two or more parents, or in which a tree has multiple roots, or a structure which contains cycles. The subtree of a tree rooted at node n comprises n and the subtrees rooted at the children of n. This implies the subtree rooted at a leaf node is the node itself. A tree or subtree can contain only one node, or it can contain many nodes. Whenever a new key-value pair For dictionaries that allow duplicate keys other rules apply. For example, all the different key-value pairs may be kept, even though some have the same key. In such a dictionary, pairs might be stored logically in sorted order. That is, record (k,v) is logically before record (k′, v′) if and only if k<k′ or (k=k′ and v<v′), where the comparisons are made with the appropriate comparison functions. The system supports dictionaries with no duplicate keys (NODUP) as well as dictionaries with duplicate keys (DUP) which break ties by comparing values. in a DUP dictionary, inserting a duplicate key with a duplicate value typically has no effect. A key is represented as a byte string. The user of the dictionary may supply comparison functions for keys. If no comparison function is supplied by the user, then a default comparison 260 function is used, in which keys are ordered lexicographically as byte strings. Similarly values are represented as byte strings, and the user may supply a comparison function for values. A tree-structured data structure is organized as a search tree when nonleaf nodes of the tree comprise pivot keys (which may be keys or key-value pairs or they may be substrings of keys or key-value pairs). If the tree has n children, then for 0≦i<n−1, the subtree rooted at child i contains pairs that are less than or equal to pivot key i, and for 1≦i<n, the subtree rooted at child i contains pairs that are greater than pivot key i−1. We say that a pair p belongs to child i if 1. i=0 and p is less than or equal to pivot key 0, or 2. i=n−1 and p is greater than pivot key n−1, or 3. 0<i<n−1 and p is less than or equal to pivot key i and greater than pivot key i−1. The system includes a front-end module that receives commands from a user and converts them to operations on a dictionary. For example, the front-end a SQL database receives SQL commands which are then executed as a sequence of dictionary operations. Dictionaries The system implements a dictionary in which keys can be compared. That is, given two keys, they are either considered to be the same, or one is considered to be ordered ahead of the other. For example, if a dictionary uses integers as keys, then the number 1 is ordered ahead of the number 2. In some dictionaries the keys are not ordered. Another example is that a character string can be used as a key. A character string is a sequence of characters. For example the string “abc” denotes the string s where the first character ‘a’, the second character is ‘b’, and the third character is ‘c’. We denote the first character as s0 the second character as S1, and so forth. Thus, in this example, indexing of strings starts at 0. Strings can be ordered using a lexicographic ordering. 
Typically, in a lexicographically ordered system, two strings s and r are considered to be the same if they are the same length (that is |s|=|r|) and the ith character is the same for all relevant values of i (that is si=ri for all 0≦i<|s|). If there is some index i such that si≠ri then let j be the minimum such index. String s is considered to be ahead of string r if and only if si is before ri. If there is no such i, then the remaining case is that one of the strings is a prefix of the other, and the shorter string is considered to be ahead of the longer one. Another example is when one has a collection of vectors, all the same length, where corresponding vector elements can be compared. One way to compare vectors is that two vectors are considered the same if their respective components are the same. Otherwise, the first differing component determines which vector is ahead of the other. In some systems, the vectors may be of different lengths, or corresponding elements may not be comparable. Alternatively, there exist many other ways of constructing keys. Examples include comparing the last element of a sequence first, or ordering the keys in the reverse of their natural order (including but not limited to ordering the integers so that the descend rather than ascend). A dictionary can be conceived as containing key-value pairs. A key-value pair comprises a key and a value. One way to use dictionaries is for all the keys to be unique. Another way to use dictionaries allows keys to be duplicated for different entries. For example, some dictionaries might allow duplicate keys in which case ties are broken using some other technique including but not limited to based on which pair was inserted earlier or based on the contents of the values. The same data can be stored in many different dictionaries. For some of these dictionaries, the role of the values comprising the key and value may be changed. For example, a key in one dictionary may be used as the value in another dictionary. Or the key may comprise the key of another dictionary concatenated or otherwise combined with parts of a value. Each dictionary may have an associated total ordering on the keys. Different dictionaries may contain the same key-value pairs, but with a different ordering function. For example, a system might employ two dictionaries, one of which is the reverse of the other. An example would be a dictionary containing names of people as keys. A system might maintain one dictionary in which the names are sorted by last name, and another in which the names are sorted by first name. Given a key, a search operation can determine if a key is stored in a dictionary, and return the key's associated value if there is one. Given a key, finding the corresponding key-value pair if it exists or reporting the nonexistence if it does not exist is called looking up the key. It is also referred to as a search or a get. In some situations, a look up, search, or get may perform different operations (including but not limited to not returning the associated value, or performing additional operations). Given a key k, the system can find the successor of k in the dictionary (if there is one), which is the smallest key greater than k in the dictionary. The system can also find the predecessor (if there is one). 
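The default comparison mentioned earlier - ordering keys lexicographically as byte strings - can be written in a few lines of C. This is an illustrative sketch, not code taken from the patent.

#include <stddef.h>
#include <string.h>

/* Lexicographic comparison of two byte strings, as described above:
 * compare the common prefix byte by byte; if they differ, the first
 * differing byte decides; otherwise the shorter string orders first.
 * Returns <0, 0 or >0, in the style of memcmp. */
static int bytestring_compare(const unsigned char *s, size_t slen,
                              const unsigned char *r, size_t rlen)
{
    size_t minlen = (slen < rlen) ? slen : rlen;
    int c = memcmp(s, r, minlen);
    if (c != 0)
        return c;
    if (slen < rlen) return -1;
    if (slen > rlen) return 1;
    return 0;
}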
Another common dictionary operation is to perform a range scan on the dictionary: Given two keys k, and k′, find all the key-value pairs Typically a system performs a range scan in order to perform an operation on each pair as it is found. An example operation is to sum up all the values when the values are numbers. Other examples are to make a list of all the pairs or keys or values; or to make a list of the first element of every value (for example, if the values are sequences); or to count the number of pairs. Many other operations can be performed on the pairs of a range query. Some range scan operations can be more efficient if the values are produced in a particular order (for example, smallest to largest, or largest to smallest). For example, joining two dictionaries in a relational database can be more efficient if the dictionaries are traversed in a particular order. Other range scan operations may be equally efficient in any order. For example, to count the number of pairs in a range, the values can be found in any order. There are several ways that dictionaries can deal with the possibility of duplicate keys, that is key-value pairs with the same key. For example, some dictionaries forbid duplicate keys. One way to forbid duplicate keys is to ensure that whenever a key-value pair Another way to handle duplicate keys is to extend the comparison keys to allow comparisons on key-value pairs. In this approach, duplicate keys are allowed as long as any two records with the same key have different values, in which case a value comparison function is provided by the system to induce a total order on the values. Key-value pairs are stored in sorted order based on the key comparison function, and for duplicate keys, based on the value comparison function. This kind of duplication can be employed, for example, to build an index in a relational database. Alternatively, there are other ways to accommodate duplicate keys in a dictionary. For example, a system might “break ties” by considering pairs that were inserted earlier to be ordered earlier than pairs that were inserted later. Such a system could even accommodate “duplicate duplicates”, in which both the key and the value are equal. Alternatively, when storing pairs with duplicate keys, the key might be stored only once, which could save space, and for duplicate duplicates that the value could be stored only once which could save space. Alternatively, other space-saving techniques can be employed. For example when keys and values are strings, often two adjacent keys share a common prefix. In this case, the system could save space by storing the common prefix only once. The system employs tree structure to implement dictionaries. As the system traverses the tree from left to right, it encounters key-value pairs in sorted order. Leaf Node in Memory - 1. isdup (403), a Boolean that indicates whether the leaf node is part of a DUP or a NODUP 365 tree; - 2. blocknum (404), a 64-bit number that indicates which block number is used to store the data on disk; - 3. height (405), a 16-bit number that indicates the height of the node (which is 0 for leaf nodes); - 4. randfingerprint (406), a 32-bit number employed for calculating fingerprints; - 5. localfingerprint (407), a 32-bit number that provides a fingerprint of the values stored in the leaf; - 6. fingerprint (408), a 32-bit number that contains the fingerprint of the node; - 7. 
dirty (409), a Boolean that indicates that the in-RAM node represents different data than does the on-disk node (that is, that the node has been modified in RAM); - 8. fullhash (410), a 32-bit number that is a hash value used to find the leaf node in a Buffer Pool (4601); - 9. nodelsn (411), a 64-bit number that equals the log sequence number (LSN) associated with the most recent change to the node; - 10. nbytesinbuffer (412), a 32-bit number that indicates how many bytes of leaf entries are stored in the leaf node including overheads such as length; - 11. seqinsert (413), a 32-bit number that indicates how many insertions have been performed with sequentially increasing keys, - 12. statistics (414), a structure that maintains statistical information; - 13. omt_pointer (415), a pointer to an OMT (1101); and - 14. mem_pointer (416), a pointer to a memory pool (1001). The system calculates the fingerprint (408) of a leaf node by taking the sum, over the leaf entries in the node, of the fingerprints of the leaf entries. The fingerprint of a leaf entry in a node is taken by computing a checksum, for example as shown in The system establishes the fingerprint seed randfingerprint (406) when a node is created by choosing a random number (e.g., with the random( ) C library function, which in turn can be seeded e.g., with the date and time.) The fullhash (410) is a hash of the blocknum (404) and a dictionary identifier. The system 395 employs fullhash (410) to look up blocks in the buffer pool. The system keeps track of how many insertions have been performed on sequentially increasing keys using the seqinsert (413) counter. The system increments the counter whenever a pair is inserted at the rightmost position of a node. Every time a pair is inserted elsewhere, the counter is decremented with a lower limit of zero. When a leaf node splits, if the seqinsert (413) counter 400 is larger than one fourth of the inserted keys, the system splits the node unevenly. Alternatively, other methods for maintaining and using such a counter can be employed. For example, the system could split unevenly if the counter is greater than a constant such as four. For another example, the system could remember the identity of the most recently inserted pair, and increment the counter whenever a new insertion is adjacent to the previous insertion. In that case the system when choosing a point to split a node, if the counter is large the system can split the node at the most recently inserted pair. Alternatively the particular sizes of the numbers chosen can be chosen differently. For example, the nbytesinbuffer (412) field could be made larger so that more than 232 bytes could be stored in a leaf block. Similar size changes could be made throughout the system. In the following description, we use the word “number” to indicate a number with an appropriate number of bits to represent the range of numbers required. The system sets the dirty (409) Boolean to T To insert a key-value pair into a leaf node (401), the system first allocates space in the node's 415 memory pool (1001) (which may invoke the memory pool's mechanism for creating a new internal buffer and copying all the values to that space), and copies the value into the newly allocated space. Then the memory-pool pointer to that value is stored in the OMT (1101). Memory Pool - 1. mpbase (1005), a memory pointer to a memory block (1006); - 2. mpsize (1003), a number which indicates the length of the memory block; - 3. 
freeoffset (1002), a number which indicates the beginning of the unused part of memory block. All bytes in the memory block beyond the freeoffset (1002) are not in use. In FIG. 10, the free space is shown inside the memory block as a crosshatched region marked “never yet unused” (1007); and - 4. fragsize (1004), a number indicating how many allocated bytes of the memory block are no longer in use. In FIG. 10the hatched regions (1009) are no longer in use. The fragsize (1004) is the sum of the sizes of the blocks no longer in use. The blocks that are still in use (1008) are also shown. To allocate n bytes of memory in a memory pool, the system increments freeoffset (1002) by n. If the freeoffset (1002) is not larger than mpsize (1003), then the memory has been allocated. Otherwise a new block of memory is allocated (using for example the system's standard library malloc( ) function) of size 2·(freeoffset−fragsize), and all useful data is copied from the old memory block to the beginning of the new memory block. The useful data can be identified as pointer values stored in the OMT (1101). The mpbase (1005) is set to point at the new memory block, and the old memory block is freed. The mpsize (1003) is set to the new size, the freeoffset (1002) is set to (freeoffset−fragsize), and the fragsize (1004) is set to 0. To free a subblock of size n of memory in a memory pool, the system increments the fragsize (1004) by n. Order Maintenance Tree An order-maintenance tree (OMT) is an in-memory dictionary. An OMT has two representations: a sorted array, and a weight-balanced tree. An OMT can insert and look up a particular key-value pair by using the comparison function on pairs. An OMT can also look up the ith key-value pair, knowing only i (similarly to an array access). For example, an OMT can look up the third value in the sorted sequence of all the values. Also an OMT can insert a pair after the ith pair. - 1. is_array (1102), a Boolean indicating whether the OMT uses the array or tree representation; - 2. omt_cursors (1103), a linked list of OMT cursors; - 3. omt_array (1104), a pointer pointing to the array (in the case that is_array (1102) is T RUE); and - 4. omt_tree (1105), a pointer pointing to the tree (in the case that is_array (1102) is F ALSE). Since omt_array (1104) and omt_tree (1105) are never both valid at the same time, the same memory can be used to hold both points using, for example, a C-language union. In the array representation, an OMT's omt_array (1104) pointer points at a sorted array of key-value pairs. To look up a key, perform a binary search on the array. To look up the ith value, index the array using i. To insert or delete a value, first convert the OMT into the tree representation, and then insert it. - 1. omt_weight (1807), a number which is the size of the subtree rooted at this OMT node; - 2. omt_left (1808), a pointer to the left subtree (the pointer is NULL if the left subtree is empty); - 3. omt_right (1809), a pointer to the right subtree (the pointer is NULL if the right subtree is empty); and - 4. omt_keyvalue (1810), a key-value pair. By convention, if there is no left (or right) child of a node, we say that the left (respectively right) child is NULL. The OMT tree is a search tree, meaning that all the pairs in the left subtree of a node are less than the pair of the node, and the value of the node is less than all the pairs in the right subtree. We define the left-weight of an OMT node to be one plus the number of nodes in its left subtree. 
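The memory-pool allocate and free procedures described above translate almost directly into C. The sketch below keeps the patent's field names, but the struct and function names are illustrative; it covers only the fast path, returning NULL when the caller needs to grow the pool to 2·(freeoffset−fragsize) and relocate the live data reachable from the OMT, as the text explains.

#include <stddef.h>

struct mempool {
    char  *mpbase;     /* memory block (1006)                       */
    size_t mpsize;     /* length of the memory block (1003)         */
    size_t freeoffset; /* start of the never-yet-used region (1002) */
    size_t fragsize;   /* allocated bytes no longer in use (1004)   */
};

/* Allocate n bytes by bumping freeoffset; NULL means the block is full
 * and must be grown and compacted by the caller. */
static void *mempool_malloc(struct mempool *mp, size_t n)
{
    if (mp->freeoffset + n > mp->mpsize)
        return NULL;
    void *p = mp->mpbase + mp->freeoffset;
    mp->freeoffset += n;
    return p;
}

/* Free a previously allocated subblock of size n: only the accounting changes. */
static void mempool_mfree(struct mempool *mp, size_t n)
{
    mp->fragsize += n;
}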
The left-weight of a node N can be calculated by examining the pointer in omt_left (1808). If that is NULL then the left-weight of N is zero. Otherwise the left weight is the value stored in omt_weight (1807) of the OMT node pointed to by omt_left (1808). We define the right-weight of an OMT node to be one plus the number of nodes in its right subtree. The OMT tree is weight balanced, meaning that left-weight is within a factor of two of the right-weight for every node in the OMT tree. For example Node (1802) has left-weight equal to 4 (it has three descendants, plus 1), and right-weight equal to 2 (one descendant plus 1). Since 2 is within a factor of two of 4, Node (1802) is weight balanced. Given a pair p, to find the index of that pair in an OMT rooted at Node N, the system performs a recursive tree search, starting at the root of the OMT, as shown in - 1. if p is less than the omt_keyvalue (1810) of a node, then the index can be found by looking in the left child of the node, omt_left (1808), as shown at Line 2; - 2. if p equals the omt_keyvalue (1810) of the node, then the index equals the left-weight of the node because that is how many values are in the tree that are less than p, as shown at Line 4; and - 3. otherwise the index equals one plus the left-weight of the node plus the index of the node in the right subtree, as shown at Line 5. To look up a value given an index i in an OMT rooted at Node x, the system traverses the tree recursively, as shown in - 1. if i is less than the left-weight of N, then the value can be found by looking in the left child of the node, omt_left (1808), as shown at Line 2; - 2. if i equals the left-weight of N, then the omt_keyvalue (1810) stored at N is returned, as shown at Line 4; and - 3. otherwise, the value can be found by searching in the right subtree, omt_right (1809), with an index equal to i minus the left-weight of N minus 1, as shown at Line 5. To look up a value given a pair p, one can first find the index using O To insert a pair p into an OMT tree, the system first inserts the node in an unbalanced fashion, and then rebalances the tree if needed by rebuilding the largest unbalanced subtree. - 1. if N is NULL then a new OMT node is created, as shown at Line 2; - 2. if i is less than or equal to the left weight, then the value is inserted in the left child of N at index i, and the resulting tree is stored as the left child of N, as shown in Line 5, and N is returned as shown in Line 6; and - 3. otherwise the value is inserted in the right child of N, at the index i minus the left-weight of N minus 1, and the resulting tree is stored as the right child of N, as shown in Line 8, and N is returned as shown in Line 9. After performing the unbalanced insertion, any unbalanced subtree is rebalanced. As the O Alternatively, one can delete a pair from the OMT array by first deleting in an unbalanced 525 fashion and then rebuilding the largest unbalanced subtree. An OMT cursor can be thought of as a pointer at a pair inside an OMT. Given a cursor, the system can fetch the pair, or it can move the cursor forward or backward. The system implements a cursor as an integer, which is the index in the array representation. Since that index can be used both the tree and the array representation, it suffices. However any time the tree changes shape (due 530 to an insertion or deletion), that integer must be updated. 
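The two OMT lookups described above - finding the index of a given pair and fetching the pair at a given index - can be sketched in C as follows. The node fields mirror omt_weight, omt_left, omt_right and omt_keyvalue; the function names and the pair-comparison callback are illustrative, and left-weight is computed by the rule given above (zero for a missing left child, otherwise the child's omt_weight).

#include <stddef.h>
#include <stdint.h>

struct omt_node {
    uint32_t         omt_weight;   /* size of the subtree rooted here */
    struct omt_node *omt_left;
    struct omt_node *omt_right;
    void            *omt_keyvalue; /* the stored pair */
};

static uint32_t left_weight(const struct omt_node *n)
{
    return (n->omt_left == NULL) ? 0 : n->omt_left->omt_weight;
}

/* Index of pair p in the subtree rooted at n (p assumed present). */
static uint32_t omt_index_of(const struct omt_node *n, const void *p,
                             int (*pair_cmp)(const void *, const void *))
{
    int c = pair_cmp(p, n->omt_keyvalue);
    if (c < 0)
        return omt_index_of(n->omt_left, p, pair_cmp);
    if (c == 0)
        return left_weight(n); /* number of pairs smaller than p */
    return 1 + left_weight(n) + omt_index_of(n->omt_right, p, pair_cmp);
}

/* Fetch the pair stored at index i in the subtree rooted at n. */
static void *omt_fetch(const struct omt_node *n, uint32_t i)
{
    uint32_t lw = left_weight(n);
    if (i < lw)
        return omt_fetch(n->omt_left, i);
    if (i == lw)
        return n->omt_keyvalue;
    return omt_fetch(n->omt_right, i - lw - 1);
}

As noted above, any index obtained this way is only valid until the tree changes shape through an insertion or deletion.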
When this occurs, the system invalidates the OMT cursor, and then the user of the OMT cursor reestablishes the cursor by looking up the relevant key-value pair. The OMT cursor provides a callback method to notify its user that the cursor is about to become invalid, so that the user can copy out the key-value pair, which will enable the user to reestablish the cursor. Alternatively the system can update the integer as needed, or otherwise maintain the cursor in a valid state. All of the OMT cursors that refer to a given OMT, are maintained in a linked list stored at omt_cursors (1103). Leaf Entries The objects stored in an OMT can have extra information beyond the key-value pairs themselves. These objects, which comprise the key-value pairs and any additional information, and are called leaf entries, and they are looked up in an OMT using the same key comparison used for key-value pairs. That is, for NODUP dictionaries, they are identified by a key, and for DUP dictionaries they are identified by a key-value pair. In this system, the extra information records whether the transaction that last inserted or deleted the key-value pair has committed or aborted. - 1. A LE_COMMITTED leaf entry then encodes - (a) a key le_key (2202) that is encoded by encoding a numeric length and the key bytes, and - (b) a committed value le_cvalue (2204) that is encoded by encoding a numeric length and the key bytes. 2. A LE_BOTH leaf entry then encodes - (a) a key le_key (2202), - (b) a transaction identifier (XID) le_xid (2203) that encodes a 64-bit number, - (c) a committed value le_cvalue (2204), and - (d) a provisional value le_pvalue 2205 that is encoded by encoding a numeric length and the key bytes. 3. A LE_PROVDEL leaf entry then encodes - (a) a key le_key (2202), - (b) an XID le_xid (2203), and - (c) a committed value le_cvalue (2204). 4. a LE_PROVVAL leaf entry then encodes - (a) a key le_key (2202), - (b) an XID le_xid (2203), and - (c) a provisional value le_pvalue 2205. These four leaf entry types further comprise a checksum le_checksum (2206). Alternatively, other encodings can be used to implement a dictionary. For example, for a dictionary without transactions, it may suffice to employ only one type of leaf entry comprising a key, a value, and a checksum. Alternatively, the checksum can be modified to be more robust or less robust (or even removed). For example, if the reliability demanded by users of the system is much less than the reliability provided by the system, then the checksum might be removed to save cost. Checksums A checksum le_checksum (2206) can be calculated using any convenient checksum, such as a CRC. The system computes a checksum of a block of memory B of length l, calculated as shown in For simplicity of further explanation, we focus the inner loop of the checksum, which after optimizing for operating directly 64-bit values can be expressed in the C99 programming language as shown in where ai is the ith 64-bit number. The function (2401) of To compute the same checksum in parallel the system operates as follows. If a and b are vectors of 64-bit values, and a+b is the concatenation of vectors, and |b| is the length of b then checksum(a+b)=checksum(a)·17|b|+checksum(b) where all calculations are performed in 64-bit unsigned integer arithmetic. The system computes 17x by repeated squaring. 
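A small C99 sketch of this checksum machinery is shown below. The inner loop is written as the recurrence c = c·17 + a_i, which is an assumption (the original C99 figure is not reproduced in the text) but is consistent with the stated identity checksum(ab) = checksum(a)·17^|b| + checksum(b); pow17 computes 17^x modulo 2^64 by repeated squaring, and all arithmetic wraps modulo 2^64 through uint64_t.

#include <stddef.h>
#include <stdint.h>

/* Checksum of n 64-bit words, assumed inner loop: c = c*17 + a[i],
 * with all arithmetic modulo 2^64. */
static uint64_t checksum64(const uint64_t *a, size_t n)
{
    uint64_t c = 0;
    for (size_t i = 0; i < n; i++)
        c = c * 17 + a[i];
    return c;
}

/* 17^x modulo 2^64 by repeated squaring: O(log x) multiplications. */
static uint64_t pow17(uint64_t x)
{
    uint64_t result = 1, base = 17;
    while (x != 0) {
        if (x & 1)
            result *= base;
        base *= base;
        x >>= 1;
    }
    return result;
}

/* Checksum of the concatenation a||b, given the two partial checksums
 * and the length of b in 64-bit words. */
static uint64_t checksum_combine(uint64_t cksum_a, uint64_t cksum_b,
                                 uint64_t b_len)
{
    return cksum_a * pow17(b_len) + cksum_b;
}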
For example, 17^100 = 17^64 · 17^32 · 17^4, so to compute it the system computes x_2 = 17·17; x_4 = x_2·x_2; x_8 = x_4·x_4; x_16 = x_8·x_8; x_32 = x_16·x_16; x_64 = x_32·x_32; and x_100 = x_64·x_32·x_4, where x_i denotes 17^i. Thus the system computes 17^x modulo 2^64 in O(log x) 64-bit operations. Note that the "big-Oh" notation is used to indicate how fast a function grows, ignoring constant factors. Let f(n) and g(n) be non-decreasing functions defined over the positive integers. Then we say that f(n) is O(g(n)) if there exist positive constants c and n_0 such that for all n > n_0, f(n) < c·g(n).

Non-Leaf Nodes

A nonleaf data block (2602) is a structure comprising
- 1. isdup (403), a Boolean;
- 2. blocknum (404), a number;
- 3. height (405), a number;
- 4. randfingerprint (406), a number;
- 5. localfingerprint (407), a number;
- 6. fingerprint (408), a number;
- 7. dirty (409), a Boolean;
- 8. fullhash (410), a number;
- 9. nodelsn (411), a number; and
- 10. statistics (414), a structure.
all of which serve essentially the same role as in the leaf node of FIG. 4. The nonleaf data block (2602) further comprises
- 1. nchildren (2606), a number indicating how many children the node has;
- 2. totalpivotkeylens (2607), a number indicating the sum of the lengths of the pivot keys;
- 3. nbytesinbufs (2608), a number indicating the sum of the number of bytes in the buffers;
- 4. pivotkeys (2610), a pointer to an array of pivot keys; and
- 5. childinfos (2609), a pointer to an array of structures containing information for each child.

The pointer childinfos (2609) refers to a child information array (2603) in RAM. The ith element of the array, a child information structure (2605), is a structure that contains information about the ith subtree of the node, comprising
- 1. subtreefingerprint (2611), a number which equals the fingerprint of the subtree;
- 2. childblocknum (2613), a number which equals the block number where the subtree's root is stored;
- 3. childfullhash (2614), a number which equals a hash of the subtree root, used to quickly find the leaf node in a Buffer Pool (4601);
- 4. bufferptr (2615), a pointer to a buffer structure (2701); and
- 5. nbytesinbuf (2616), a number equal to the sum of the number of bytes in the buffer structure (2701).

If the node has n children then the child information array (2603) contains n structures, each a child information structure (2605). The value of nbytesinbufs (2608) is the sum of the various nbytesinbuf (2616) values in the child information array (2603). In FIG. 26, the child information array (2603) is shown with three elements, labeled 0, 1, and 2. Each element is a child information structure (2605).

The pointer pivotkeys (2610) refers to a pivot keys array (2604) of pivot keys. For a NODUP dictionary a pivot key comprises the key of a key-value pair. For a DUP dictionary a pivot key comprises both the key and the value of a key-value pair. If the node has n children then the pivot keys array (2604) contains n−1 pivot keys.

A buffer structure (2701) comprises
- 1. n_in_fifo (2702), a number indicating how many messages (2708) are in the FIFO;
- 2. fifo_memory (2703), a pointer to a block of memory (2707) holding the messages (2708);
- 3. fifo_mem_size (2704), a number indicating how big the block of memory is;
- 4. fifo_start (2705), a number that indicates the offset in the block of the oldest message; and
- 5. fifo_end (2706), a number that indicates the offset in the block just beyond the newest message (that is, the offset where the next message will be enqueued).
A buffer structure (2701) contains zero or more messages.
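A minimal C sketch of this buffer structure follows; the field names are taken from the description above, but the exact types are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative in-RAM buffer (a FIFO of serialized messages), modeled on
 * the buffer structure (2701) described above. */
struct buffer {
    uint32_t n_in_fifo;     /* number of messages currently in the FIFO             */
    char    *fifo_memory;   /* block of memory holding the serialized messages      */
    size_t   fifo_mem_size; /* size of that block                                   */
    size_t   fifo_start;    /* offset of the oldest message                         */
    size_t   fifo_end;      /* offset just beyond the newest message (next enqueue) */
};
```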
To enqueue a message of size M in a FIFO, the system uses the following procedure:
- 1. Let S be the size of the data in the FIFO (that is, the difference between fifo_end (2706) and fifo_start (2705)).
- 2. Let R be the size of the remaining space after the newest message (that is, the difference between fifo_mem_size (2704) and fifo_end (2706)).
- 3. If M > R (there is not enough space at the end of the memory block), then
  - (a) If either 2(S+M) is greater than fifo_mem_size (2704) or 4(S+M) is less than fifo_mem_size (2704), then allocate a new block of memory of size 2(S+M) and
    - i. copy the block of size S from offset fifo_start (2705) in the old block to the beginning of the new block,
    - ii. copy the message to offset S in the new block,
    - iii. set fifo_memory (2703) to point at the new block,
    - iv. set fifo_mem_size (2704) to 2(S+M),
    - v. set fifo_start (2705) to 0,
    - vi. set fifo_end (2706) to S+M, and
    - vii. free the old block.
  - (b) Otherwise (reuse the same block)
    - i. move the block of size S from offset fifo_start (2705) to the beginning of the block, copying from left to right to avoid overwriting one portion of the block in its old location before copying it to its new location,
    - ii. copy the message to offset S,
    - iii. set fifo_start (2705) to 0, and
    - iv. set fifo_end (2706) to S+M.
- 4. Otherwise there is space, therefore
  - (a) copy the message to offset fifo_end (2706), and
  - (b) increment fifo_end (2706) by M.

The fingerprint (408) of a nonleaf node is calculated by taking the sum, over all the messages in the node, of the fingerprints of the messages, and further summing in the fingerprints of the children of the node. The system maintains in each node a copy of the fingerprint of each child in subtreefingerprint (2611). The fingerprint is calculated incrementally as the tree is updated. Alternatively, the fingerprint of a node can be updated when the node is written to disk (also updating the subtreefingerprint (2611) at that time). The system maintains the fullhash (410) for a node and updates the childfullhash (2614) of the node's parent, so that the recalculation of the fullhash (410) of the child can be avoided when the system is requesting a child block from the buffer pool.

Messages

An insert message (2801) is a structure comprising
- 1. message_type (2809);
- 2. transaction_id (2812), an XID;
- 3. key (2810), a key; and
- 4. value (2811), a value.
A delete_any (2803) message is a structure comprising
- 1. message_type (2809);
- 2. transaction_id (2812), an XID; and
- 3. key (2810), a key.
A delete_both (2804) message is a structure comprising
- 1. message_type (2809);
- 2. transaction_id (2812), an XID;
- 3. key (2810), a key; and
- 4. value (2811), a value.
A commit_any (2805) message is a structure comprising
- 1. message_type (2809);
- 2. transaction_id (2812), an XID; and
- 3. key (2810), a key.
A commit_both (2806) message is a structure comprising
- 1. message_type (2809);
- 2. transaction_id (2812), an XID;
- 3. key (2810), a key; and
- 4. value (2811), a value.
An abort_any (2807) message is a structure comprising
- 1. message_type (2809);
- 2. transaction_id (2812), an XID; and
- 3. key (2810), a key.
An abort_both (2808) message is a structure comprising
- 1. message_type (2809);
- 2. transaction_id (2812), an XID;
- 3. key (2810), a key; and
- 4. value (2811), a value.
An abort_any_any (2802) message is a structure comprising
- 1. message_type (2809); and
- 2. transaction_id (2812), an XID.
Each message is encoded into a block of RAM.
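A minimal C sketch of these message structures follows, using a single tagged struct. The enum values for insert, delete_any, and delete_both follow the numbering given in the next paragraph; the remaining values, and the field types, are assumptions.

```c
#include <stdint.h>

/* Message type tags. 1 = insert, 2 = delete_any, 3 = delete_both, as described
 * in the next paragraph; the later values are assumed to continue in the order
 * the message types are listed. */
enum message_type {
    MSG_INSERT        = 1,
    MSG_DELETE_ANY    = 2,
    MSG_DELETE_BOTH   = 3,
    MSG_COMMIT_ANY    = 4,
    MSG_COMMIT_BOTH   = 5,
    MSG_ABORT_ANY     = 6,
    MSG_ABORT_BOTH    = 7,
    MSG_ABORT_ANY_ANY = 8,
};

/* One tagged structure covers all message kinds: the key is absent for
 * abort_any_any, and the value is present only for insert and the "_both"
 * messages. Lengths and byte pointers are illustrative. */
struct message {
    enum message_type message_type;   /* discriminates the message kinds */
    uint64_t          transaction_id; /* XID of the issuing transaction  */
    uint32_t          keylen;         /* 0 if no key (abort_any_any)     */
    const char       *key;
    uint32_t          vallen;         /* 0 if no value                   */
    const char       *value;
};
```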
The message_type (2809) discriminates between the various types of messages, for example, between a commit_any message and an abort_both message. The message format in RAM is organized so that the message_type (2809) is at the same offset in every message, so that the system can, given a block of memory containing an encoded message, determine which message type the message is, and the system can determine the offset of each of the other fields. The message_type (2809) is 1 for an insert, 2 for a delete_any, 3 for a delete_both, and so forth.

A message is encoded into a block of memory by encoding each of its fields, one after the other. Thus the first byte of memory contains the message_type (2809). The XID, which is a 64-bit number, is stored in the next 8 bytes. The key is then stored using 4 bytes to store the length of the key, followed by the bytes of the key. The value, if present, is then stored using 4 bytes to store the length of the value, followed by the bytes of the value. Integers are stored in network order (most significant byte first). For example, an insert message with XID 1042, key "abc", and value "wxyz" is encoded as the following bytes:
- 01: The message_type (2809).
- 00 00 00 00 00 00 04 12: The transaction_id (2812), which is 1042 encoded in hexadecimal.
- 00 00 00 03: The length of the key, which is 3.
- 61 62 63: The three bytes of the key. Note that the letter 'a' encodes as hexadecimal 61 in ASCII.
- 00 00 00 04: The length of the value, which is 4.
- 77 78 79 7A: The four bytes of the value.

When a data structure, including but not limited to a message, has been converted into an array of bytes, we say the data structure has been serialized. In many other cases throughout this patent, when we describe a data structure as being serialized, we use a technique similar to the one shown here for message serialization.

The system identifies nodes by a block number. The system converts a block number to a file offset (or a disk offset) and length via a block translation table. The file offset and length together are called a segment.

Alternatively, for some message types, the system could combine messages at nonleaf nodes. For example, if two insert messages with the same key, value, and XID are found, then only one needs to be kept. Alternatively, there are other types of operations that can be stored as messages. For example, one could implement a lazy query, in which the query is allowed to be returned with a long delay. Alternatively, one could implement an insertion of a key-value pair (k,v) that is subject to different overwrite rules, i.e., different rules about when to overwrite a key-value pair (k, v′) that was already in the dictionary. One could implement an update operation U(k,v,c), in which c is a call-back function that specifies how the value v is combined with the existing value of key k in the database. For example, this update mechanism can be used to implement a counter increment functionality. There can also be additional types of operations for the case when duplicates are allowed.

Block Translation Table

A block translation table (3001) is a structure comprising
- 1. free_blocks (3002), a block number which implements the head of a free list;
- 2. unused_blocks (3003), a block number which indicates that all block numbers larger than the unused_blocks (3003) value are free;
- 3. xlated_blocknum_limit (3004), a number which indicates the number of block numbers that are translated;
- 4. block_translation (3005), a pointer to a block translation array (3009) stored in RAM;
- 5. xlation_size (3006), a number which indicates the size of the block translation array (3009) as stored on disk;
- 6. xlation_offset (3007), a number which indicates where on disk the block translation array (3009) is stored (thus the xlation_offset (3007) and xlation_size (3006) together identify a segment); and
- 7. block_allocator_pointer (3008), a pointer that points to a segment allocator (3201).

The block translation array (3009) comprises an array of block translation pairs. A block translation pair (3101) comprises
- 1. offset (3102), a number which indicates the offset of the block on disk; and
- 2. size (3103), a number which indicates the size of the block.
A block translation pair (3101) thus contains enough information to identify a segment on disk.

To implement a free list, the free_blocks (3002) in the block translation table (3001) names a free block number. A free block has its size (3103) set to a negative value in its block translation pair (3101), and the identity of the next free block is stored in its offset (3102). The last free block in the chain sets its offset (3102) to a negative value. To allocate a new block number, the system first checks whether free_blocks (3002) identifies a block or has a negative value. If it identifies a block, then the list is popped, setting free_blocks (3002) from the identified block's offset (3102), and using the old value of free_blocks (3002) as the newly allocated block number. If there are no free blocks in the block list, then the block number named by unused_blocks (3003) is used, and unused_blocks (3003) is incremented. If unused_blocks (3003) is larger than xlated_blocknum_limit (3004), then the block translation array (3009) is grown, by allocating a new array that is twice as big as xlated_blocknum_limit (3004), copying the old array into the new array, freeing the old array, and storing the new array into block_translation (3005). To free a block number, the block is pushed onto the block free list by setting the block's offset (3102) to the current value of free_blocks (3002), and setting free_blocks (3002) to the block number being freed.

When the block translation array (3009) is written to disk, a segment is allocated using the segment allocator (3201), and the block is written. The size of the segment is stored in xlation_size (3006), and the offset of the segment is stored in xlation_offset (3007).

Alternatively, other implementations of a set of free blocks can be used. For example, the set of free blocks could be stored in a hash table. Similarly, the translation array could be represented differently, for example in a hash table.

The system implements a segment allocator (3201), which manages the allocation of segments in a file. A segment allocator (3201) is a structure comprising
- 1. ba_align (3202), a number which indicates the alignment of all segments;
- 2. ba_nsegments (3203), a number which indicates how many allocated segments are on disk;
- 3. ba_arraysize (3204), a number which indicates the size of the array pointed to by ba_arrayptr (3205);
- 4. ba_arrayptr (3205), a pointer which points at an array of segment pairs; and
- 5. ba_nextfit (3206), a number which remembers where in the array of segment pairs the system last looked for an allocated segment.

The system ensures that every segment's offset is a multiple of ba_align (3202). Whenever the array of segment pairs is discovered to be too small (based on ba_arraysize (3204)), the system doubles the size of the array by doubling ba_arraysize (3204), allocating a new array, copying the data from the old array to the new array, and freeing the old array.
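As an illustration of the free-list manipulation just described, the following C sketch allocates and frees block numbers; the structure and function names are hypothetical, and growing the translation array is omitted.

```c
#include <stdint.h>

/* Illustrative block translation pair and table, modeled on the description. */
struct blk_xlation_pair { int64_t offset; int64_t size; };  /* size < 0 means "free" */

struct blk_xlation_table {
    int64_t                  free_blocks;    /* head of free list, < 0 if empty     */
    int64_t                  unused_blocks;  /* all block numbers >= this are free  */
    int64_t                  xlated_blocknum_limit;
    struct blk_xlation_pair *block_translation;
};

/* Allocate a block number: pop the free list if possible, else take the next
 * unused block number. (Growing the array when unused_blocks reaches
 * xlated_blocknum_limit is omitted for brevity.) */
int64_t allocate_blocknum(struct blk_xlation_table *t) {
    if (t->free_blocks >= 0) {
        int64_t b = t->free_blocks;
        t->free_blocks = t->block_translation[b].offset;  /* next free block (or < 0) */
        return b;
    }
    return t->unused_blocks++;
}

/* Free a block number by pushing it onto the free list. */
void free_blocknum(struct blk_xlation_table *t, int64_t b) {
    t->block_translation[b].size   = -1;              /* mark as free          */
    t->block_translation[b].offset = t->free_blocks;  /* link to old list head */
    t->free_blocks = b;
}
```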
The array of segment pairs is kept in sorted order, sorted by segment offset. A segment pair comprises an offset and a size. To find a new segment of size S, the system rounds S up to a multiple of ba_align (3202), that is, the system uses ba_align·⌈S/ba_align⌉. The system then looks at the segment pair identified by ba_nextfit (3206). The system determines the size of the unused space between the segment named in that segment pair and the segment named in the next segment pair. If the unused space is of size S or larger, then all the segment pairs from that point are moved up in the array by one element, creating a new segment pair. The new segment is then initialized with size S and with its offset at the end of the segment named by the original segment pair. If the unused space is smaller than S (possibly with no space at all), then ba_nextfit (3206) is incremented, wrapping around at the end, and the system looks again. If the system makes one complete round looking at all the free slots without finding a large enough free segment, then the system allocates space at the end. The system does not allocate a segment that has an offset between 0 and ba_reserve (3207), reserving that region for file header information (including but not limited to information about where the block translation table is stored on disk).

In the segment allocator (3201) described above, the free space is stored implicitly by storing the in-use segments in sorted order. In some situations the system stores the free segments explicitly in an OMT sorted in increasing order by the size of the free segment. In this mode, the system allocates a segment of size S by performing a search to find the smallest free segment of size greater than or equal to S. The found segment is removed from the OMT. If the found segment's size is equal to S, then that segment is used. If the found segment is larger than S, then the system breaks the segment into two parts, one of size S which is used, and the other comprising the remaining unused space. The unused segment is stored in the OMT.

When a node with block number b is written to disk, it is first serialized into a string of bytes of length U, and then it is compressed, producing another string of bytes of length C. Then the 4-byte encodings of C and U are prepended to the compressed string, yielding a string of length C+8. Then a segment of size D=C+8 is allocated and stored in the block translation table, recording the segment for block number b, along with the length of the segment (C+8). Then the string is written to disk at the segment.

To read a block from disk to RAM, the system consults the block translation table to determine the segment on disk holding the compressed data. The length, D, of the compressed block with the prepended lengths is also retrieved from the block translation table. A block of RAM of size D is allocated, and the data is read from the segment on disk into the RAM block. Then the size, U, of the uncompressed block is obtained from bytes 4-7 of the retrieved block. Then a block of size U is allocated in RAM. The D-sized RAM block is decompressed into the U-sized RAM block. Then the D-sized RAM block is freed. The U-sized RAM block is then decoded into an in-RAM data structure. For leaf nodes, which have a memory pool, the U-sized block is used for the memory block (1006) of the memory pool.

For each dictionary, the system maintains a block translation table (BTT). In some modes of operation, the system maintains a checkpointed block translation table (CBTT).
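A sketch of the read path just described, in C; read_segment and decompress are hypothetical placeholders for the system's actual I/O and compression routines, which are not specified here.

```c
#include <stdint.h>
#include <stdlib.h>

/* Assumed helpers, standing in for unspecified I/O and compression routines. */
void read_segment(int64_t offset, void *buf, size_t len);                  /* hypothetical */
int  decompress(const void *src, size_t srclen, void *dst, size_t dstlen); /* hypothetical */

/* Decode a big-endian (network order) 32-bit length. */
static uint32_t get_be32(const unsigned char *p) {
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Read block number b: the segment holds [C][U][C bytes of compressed data],
 * where C and U are 4-byte lengths as described above. Returns the U-sized
 * uncompressed block (caller frees), or NULL on failure. */
unsigned char *read_block(int64_t seg_offset, size_t seg_len /* = D = C+8 */) {
    unsigned char *compressed = malloc(seg_len);
    if (!compressed) return NULL;
    read_segment(seg_offset, compressed, seg_len);

    uint32_t U = get_be32(compressed + 4);      /* uncompressed length from bytes 4-7 */
    unsigned char *node = malloc(U);
    if (node && decompress(compressed + 8, seg_len - 8, node, U) != 0) {
        free(node);
        node = NULL;
    }
    free(compressed);                           /* the D-sized block is no longer needed */
    return node;
}
```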
And in some modes of operation the system maintains a temporary block translation table (TBTT).

Pushing Messages

The system composes messages and then executes them on the root node of a dictionary. Executing a message on a node of the dictionary may result in the message being incorporated into the node, or other actions being taken, as follows. To execute a sequence of messages on a nonleaf node N that is in RAM:
- 1. For each message in the sequence:
  - (a) The system examines the pivotkeys (2610) to determine the child or children to which a copy of the message shall be sent (a C sketch of this routing appears after this procedure).
    - i. Insert messages (2801) are sent to the ith subtree, where i is the smallest number such that the ith pivot key is greater than or equal to the message (note that for DUP trees comparing the pivot key involves comparing both the key and the value of the message). If all the pivot keys are less than the message, then the message is sent to the rightmost subtree.
    - ii. Any of the "_both" messages (delete_both (2804), commit_both (2806), and abort_both (2808)) are sent to the same subtree that an insert message would be sent to.
    - iii. In the case of an "_any" message (delete_any (2803), commit_any (2805), and abort_any (2807)) for NODUP trees, the message is sent to the same subtree that an insert message would be sent to.
    - iv. In the case of an "_any" message (delete_any (2803), commit_any (2805), and abort_any (2807)) for DUP trees, the message may be sent to more than one subtree. The subtrees include any subtree i for which the key part of the ith pivot key is greater than or equal to the key of the message and the key part of the (i−1)st pivot key is less than or equal to the key of the message. The message is sent to the leftmost subtree if the first pivot key is greater than or equal to the message, and is sent to the last subtree if the last pivot key is less than or equal to the message.
    - v. For an abort_any_any (2802), the message is sent to all the subtrees.
  - (b) The system copies the message into the respective message buffers of all the children identified in Step 1a.
- 2. For each child C of node N:
  - (a) If the node of child C is in RAM and is dirty and is not temporarily locked or otherwise inaccessible, then
    - i. Let B be the buffer in N corresponding to the child.
    - ii. While B is not empty and the oldest message in B can fit into the child without exceeding the child's target size (even when the message is replicated many times in Step 1b):
      - A. Dequeue the oldest message from B's FIFO.
      - B. Construct a sequence of length one containing that message.
      - C. Execute the sequence on the child of the node.
- 3. If node N is larger than its target size:
  - (a) Find the child with the largest value of nbytesinbuf (2616) (which corresponds to the buffer with the most bytes in its FIFO). (If all the child FIFOs are empty, then the system is finished with N.)
  - (b) Let B be the buffer of that child.
  - (c) Construct a sequence of messages by dequeueing some (possibly all) of the messages in B's FIFO, the first element of the sequence being the oldest message in the FIFO, and the last element of the sequence being the newest message in the FIFO. (Note: the FIFO is now empty.)
  - (d) Bring the node of the child into RAM if it is not in RAM.
  - (e) Execute the sequence of messages on the child node.

Alternatively, variants of these rules can be employed. For example, in Step 2a the system could ignore whether the child is dirty or temporarily inaccessible.
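The following C sketch illustrates the pivot-key routing of Step 1a(i) for insert messages; compare_to_pivot and the node layout are assumptions, reusing the illustrative nonleaf fields described earlier.

```c
/* Assumed three-way comparison of a message's key (or key-value pair, for
 * DUP trees) against a pivot key. */
int compare_to_pivot(const void *msg_keyval, const void *pivot);

/* Illustrative nonleaf node view: nchildren children and nchildren-1 pivots. */
struct nonleaf {
    int          nchildren;
    const void **pivotkeys;   /* pivotkeys[0 .. nchildren-2] */
};

/* Step 1a(i): an insert message goes to the smallest i whose pivot key is
 * greater than or equal to the message; if every pivot is smaller, it goes
 * to the rightmost subtree (index nchildren-1). */
int route_insert(const struct nonleaf *n, const void *msg_keyval) {
    for (int i = 0; i < n->nchildren - 1; i++)
        if (compare_to_pivot(msg_keyval, n->pivotkeys[i]) <= 0)
            return i;
    return n->nchildren - 1;
}
```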
Another example is that the system could, whenever it finds a nonempty buffer B in a node where the corresponding child is dirty, unlocked, and in memory, remove all the messages from B and execute them on the child. Emptying a buffer in a node by moving messages to the child is called flushing the buffer.

It is possible that a node will be larger than its target size after executing a sequence of messages on the node. For example, an abort_any message may be replicated many times in Step 1b. Then in Step 3, the buffer of only one child is emptied. The node could still be larger than its target size, which is acceptable, because the system can empty additional buffers in future operations.

Alternatively, there are other ways to accomplish the movement of messages to the children of a node. For example, it is not necessary to actually construct the sequence of messages. Instead one could dequeue one message at a time and insert it into the child node.

Alternatively, there are many ways to implement the movement of messages in a data structure in which messages move opportunistically into nodes that are in RAM, but are sometimes delayed if the destination node is not in RAM. For example, the system could use part of main memory to store a balanced search tree or other dictionary. Most of the time, the balanced search tree remains in RAM. At each of the leaves of the dictionary is a reference to another dictionary. When a message is inserted, the balanced search tree sends the message to the appropriate dictionary. That is, when a message is inserted, the balanced search tree in RAM is used to partition the search space. Then, the message is inserted directly into a dictionary. In one mode the system does not use a tree-based structure in the leaves but instead uses a cache-oblivious lookahead array (COLA).

Alternatively, a system could move only some of the messages to the destination. For example, if the destination fills up, the system could delay sending additional messages to the destination until some future time when the destination has forwarded its messages onward.

The process by which messages move directly down the tree without being stored in intermediate buffers is referred to as aggressive promotion. Alternatively, a system can implement aggressive promotion that is adaptive, even when the particular data structure is not tree-based. For example, a COLA can implement aggressive promotion as follows: rather than putting the message directly in the lowest-capacity level of the COLA, put the message (in the appropriate rank location) in the deepest level of the COLA that is still in RAM and where space can be made. Thus, the system could use a packed-memory array to make space in the levels. The system could also use a modified packed-memory array where rebalance intervals are chosen adaptively to avoid additional memory transfers. In this picture, the new message is inserted directly into the fifth array, which has 16 array positions. In order to make room for the message, there is a rebalance, as indicated by a rebalance interval (1308). The rebalance interval is chosen so that it only involves array cells that are paged into memory. If such a rebalance interval had not been found on one level, then the element would be inserted into a higher level.

Alternatively, this structure can be modified to support messages with different lengths. For example, one could use a PMAVSE (which is described below).
The structure can be modified so that the ratio between different levels is different from 2. Moreover, one could use a different structure from a PMA at each of the levels. Alternatively, the paging scheme might depend on how messages move through the data structure. For example, the system may choose to preemptively bring into RAM a part of the data structure that is the destination of messages.

When a key-value pair is inserted into a dictionary, the system constructs an insert message (2801) containing the XID of the transaction inserting the message, and the key and the value. Then a sequence of length one is created containing that message.
- 1. If the root node of the tree is not in RAM, then the system brings it into RAM from disk.
- 2. The sequence is then executed on the node.

Alternatively, one can process the messages differently. For example, for each leaf, the system could maintain a hash table, indexed by XID, of all transactions that are provisional. Then, when an abort_any_any (2802) arrives at a leaf, the system could operate only on those leaf entries that mention the XID. Similarly, the system could maintain, for each nonleaf node, a hash table of all the uncompleted transactions in the subtree, so that an abort_any_any message would only need to be sent to certain subtrees. Alternatively, instead of using a hash table, the system could use another data structure, such as a Bloom filter, which would indicate definitively that a particular subtree does not contain messages or leaf entries for a particular transaction.

Messages on Leaves

To execute an insert message with XID x, key k, and value v on a leaf, the system looks up, in the OMT (1101) of the leaf, the leaf entry whose key equals the key of the message (that is, the message key matches the leaf entry key) for NODUP dictionaries, or whose key and value both match for DUP dictionaries.
- 1. If there is no matching leaf entry, then a LE_PROVVAL leaf entry is inserted into the OMT (1101) with key k, XID x, and provisional value v.
- 2. If there is a LE_COMMITTED leaf entry with key k′ and committed value c, then that leaf entry is replaced by a new LE_BOTH leaf entry with key k, XID x, committed value c, and provisional value v.
- 3. If there is a LE_BOTH leaf entry with key k′, XID x′, committed value c, and provisional value p, then the system does the following:
  - (a) If x=x′ then replace the leaf entry with a new LE_BOTH leaf entry with key k, XID x, committed value c, and provisional value v.
  - (b) Otherwise replace the leaf entry with a new LE_BOTH leaf entry with key k, XID x, committed value p, and provisional value v.
- 4. If there is a LE_PROVDEL leaf entry with key k′, XID x′, and committed value c, then the system does the following:
  - (a) If x=x′ then replace the leaf entry with a new LE_BOTH leaf entry with key k, XID x, committed value c, and provisional value v.
  - (b) Otherwise replace the leaf entry with a new LE_PROVVAL leaf entry with key k, XID x, and provisional value v.
- 5. If there is a LE_PROVVAL leaf entry with key k′, XID x′, and provisional value p, then the system does the following:
  - (a) If x=x′ then replace the leaf entry with a new LE_PROVVAL leaf entry with key k, XID x, and provisional value v.
  - (b) Otherwise replace the leaf entry with a new LE_BOTH leaf entry with key k, XID x, committed value p, and provisional value v.

To execute on an OMT a delete_any (2803) with XID x and key k, for each leaf entry in the OMT that has a key matching k the system does the following:
- 1. If the leaf entry is a LE_COMMITTED leaf entry with key k and committed value c, then replace the leaf entry with a new LE_PROVDEL leaf entry with key k, XID x, and committed value c.
- 2. If the leaf entry is a LE_BOTH leaf entry with key k, XID x′, committed value c, and provisional value p, then the system does the following:
  - (a) If x=x′ then replace the leaf entry with a LE_PROVDEL leaf entry with key k, XID x, and committed value c.
  - (b) Otherwise replace the leaf entry with a LE_PROVDEL leaf entry with key k, XID x, and committed value p.
- 3. If the leaf entry is a LE_PROVDEL leaf entry with key k, XID x′, and committed value c, then the system does the following:
  - (a) If x=x′ then replace the leaf entry with a LE_PROVDEL leaf entry with key k, XID x, and committed value c.
  - (b) Otherwise remove the leaf entry without replacing it.
- 4. If the leaf entry is a LE_PROVVAL leaf entry with key k, XID x′, and provisional value p, then the system does the following:
  - (a) If x=x′ then remove the leaf entry without replacing it.
  - (b) Otherwise replace the leaf entry with a LE_PROVDEL leaf entry with key k, XID x, and committed value p.

To execute on an OMT a delete_both (2804), the system finds all leaf entries that match both the key and the value of the message, and for each such leaf entry performs the steps specified in the previous paragraph, just as if the message were a delete_any (2803).

To execute on an OMT a commit_any (2805), replace the leaf entry with a LE_COMMITTED leaf entry with key k, XID x, and committed value p.
- 3. If the leaf entry is a LE_PROVDEL leaf entry with key k, XID x′, and committed value c, then remove the leaf entry without replacing it.
- 4. If the leaf entry is a LE_PROVVAL leaf entry with key k, XID x′, and provisional value p, then replace the leaf entry with a LE_PROVDEL leaf entry with key k, XID x, and committed value p.

To execute on an OMT a commit_both (2806), the system finds all leaf entries that match both the key and the value of the message, and for each such leaf entry performs the steps specified in the previous paragraph, just as if the message were a commit_any (2805).

To execute on an OMT an abort_any (2807) message
  - (a) if x=x′ then replace the leaf entry with a LE_COMMITTED leaf entry with key k and committed value c.
  - (b) otherwise replace the leaf entry with a LE_COMMITTED leaf entry with key k and committed value p.
- 3. If the leaf entry is a LE_PROVDEL leaf entry with key k, XID x′, and committed value c, then replace the leaf entry with a LE_COMMITTED leaf entry with key k and committed value c.
- 4. If the leaf entry is a LE_PROVVAL leaf entry with key k, XID x′, and provisional value p, then remove the leaf entry without replacing it.

To execute on an OMT an abort_both (2808) message, the system finds all leaf entries that match both the key and the value of the message, and for each such leaf entry performs the steps specified in the previous paragraph, just as if the message were an abort_any (2807).

To execute on an OMT an abort_any_any (2802) message, the system finds all the leaf entries that have provisional states that match the XID of the message, and transforms those as if an abort_any (2807) were executed. For example:
- 1. For LE_COMMITTED leaf entries, no change is made.
- 2. For LE_BOTH leaf entries, if the XID of the leaf entry matches the message, then replace the leaf entry with a LE_COMMITTED leaf entry using the previously committed value. If the XIDs do not match, then no change is made.
- 3. For LE_PROVDEL leaf entries, if the XID of the leaf entry matches, then replace the leaf entry with a LE_COMMITTED leaf entry using the previous committed value from the leaf entry. If the XIDs do not match, then no change is made.
- 4. For LE_PROVVAL leaf entries, if the XID of the leaf entry matches, then delete the leaf entry; otherwise no change is made.

In all the cases above, when a leaf entry is created its checksum is also computed.

In some conditions, when a leaf entry is queried, the system can change its state. For example, the system maintains a list of all pending transactions. If a leaf entry is being queried, then all of the messages destined for that leaf entry have been executed. If the leaf entry reflects a provisional state for a transaction that is no longer pending, then the system can infer that the transaction committed (because otherwise an abort message would have arrived), and so the system can execute an implicit commit message.

The system maintains in each node statistical or summary information for the subtree rooted at the node. The statistics (414) structure comprises the following elements:
- 1. a number ndata (3301) representing an estimate of the number of key-value pairs in the subtree rooted at the node,
- 2. a number ndata_error_bound (3302) bounding the estimate error for ndata (3301),
- 3. a number nkeys (3303) representing an estimate of the number of distinct keys in the subtree,
- 4. a number nkeys_error_bound (3304) representing the estimate error for nkeys (3303),
- 5. a key or key-value pair (depending on whether the tree is a NODUP or DUP tree, respectively), minkey (3305), representing an estimate of the least pair in the subtree,
- 6. a key or key-value pair (depending on whether the tree is a NODUP or DUP tree, respectively), maxkey (3306), representing an estimate of the greatest pair in the subtree, and
- 7. a number dsize (3307) representing an estimate of the sum of the lengths of the leaf entries.
(A C sketch of this structure appears below.)

In a leaf node, the system can maintain a count of the number of leaf entries in the ndata (3301) field. If the system quiesces, and all transactions are committed or aborted, then this count is the number of rows in the node. If the system is not quiescent or some transactions are pending, then the count can be viewed as an estimate of the number of entries in the dictionary. The difference between the estimate and the quiescent value is called the estimate error, and the estimate error cannot be determined until the system quiesces and the relevant transactions are completed. Every time a leaf entry is added, the count is incremented, and every time a leaf entry is removed, the count is decremented.

The system maintains in each leaf node a count ndata_error_bound (3302) bounding the estimate error for ndata (3301):
- 1. Each LE_COMMITTED leaf entry contributes zero to the bound,
- 2. each LE_BOTH leaf entry contributes zero to the bound (because whether the transaction aborts or commits, the count will not change),
- 3. each LE_PROVDEL leaf entry contributes one to the bound (because ndata (3301) is counting the leaf entry, but if the appropriate transaction commits, the leaf entry will be removed), and
- 4. each LE_PROVVAL leaf entry contributes one to the bound (because if the appropriate transaction aborts, the leaf entry will be removed).

For nonleaf nodes, the system maintains the ndata (3301) field as the sum of the ndata (3301) fields of its children.
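A minimal C sketch of the statistics structure described above; the types, and the representation of minkey and maxkey as byte strings, are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative per-node summary statistics, modeled on statistics (414).
 * minkey/maxkey are shown as opaque byte strings; a DUP tree would store
 * a key-value pair there instead of just a key. */
struct subtree_statistics {
    uint64_t ndata;             /* estimated number of key-value pairs       */
    uint64_t ndata_error_bound; /* bound on the error of ndata               */
    uint64_t nkeys;             /* estimated number of distinct keys         */
    uint64_t nkeys_error_bound; /* bound on the error of nkeys               */
    char    *minkey;            /* estimate of the least pair in the subtree */
    size_t   minkey_len;
    char    *maxkey;            /* estimate of the greatest pair             */
    size_t   maxkey_len;
    uint64_t dsize;             /* estimated sum of leaf-entry lengths       */
};
```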
The system maintains the ndata_error_bound (3302) as the sum of the ndata_error_bound (3302) fields of its children, plus the number of messages in the buffers of the node. If any of the entries are delete_any messages, then the ndata_error_bound (3302) is set to the ndata (3301) of the node, since in some cases all the leaf entries could be deleted by those messages. Alternatively, tighter bounds for ndata_error_bound (3302) can be used. For example, a delete_any message can only delete one key, so if there are many unique keys, then the ndata_error_bound (3302) can sometimes be reduced.

Similarly, the system can maintain a count of the number of unique keys nkeys (3303) in a leaf node, along with correct values for minkey (3305) and maxkey (3306). For nonleaf nodes, the system can combine results from subtrees to compute nkeys (3303). Given two adjacent subtrees A and B (A to the left of B), if the maxkey (3306) of A equals the minkey (3305) of B, then the number of distinct keys in A and B together is the number of unique keys in A plus the number of unique keys in B minus one. If the maxkey (3306) of A is not equal to the minkey (3305) of B, then the number of unique keys in A and B together is the sum of the number of unique keys in A and B. Thus, by combining all the results from the children, the proper value of nkeys (3303) can be computed. The nkeys_error_bound (3304) can be computed in a way similar to the ndata_error_bound (3302). For the data size estimate dsize (3307), each leaf can keep track of the sum of the sizes of its leaf entries, and a subtree can simply use the sum of its children.

In many cases an estimate of the number of rows or distinct keys or data size in a tree is useful even if the estimate has an error. For example, a query optimizer may not need precise row counts to choose a good query plan. In such a case, the summary statistics at the root of the tree suffice. In the case where an exact summary statistic is needed, the system can compute the count exactly. To compute exact statistics, or to compute the statistics to within certain error tolerances, as viewed by a particular transaction, the system can perform the following actions:
- 1. Check ndata_error_bound (3302) for that subtree. If the error bound is tight enough (for example, if it is zero), then ndata (3301) is the correct value and can be used. Also, if the ndata_error_bound (3302) is zero, then the dsize (3307) is exactly right. Similarly, the nkeys_error_bound (3304) can determine whether the nkeys (3303) is accurate enough.
- 2. For any value that has too-loose error bounds, in a leaf node, the system iterates through the leaf entries, performing a query on the key in each leaf entry. Assuming that the lock tree permits the queries to run without detecting a conflict, then after all the implicit commits operate, the number of leaf entries remaining in the node is the correct number, and the estimate error will be zero. (If the lock tree does not permit the queries, then there is a conflict, and the transaction must wait, abort, or try again.)
- 3. For nonleaf nodes, the system can iterate through the children, performing this computation recursively. Any subtree with an accurate enough estimate can calculate it quickly, and otherwise the computation descends the tree, summing up the statistics appropriately.
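The rule for combining the distinct-key counts of two adjacent subtrees can be sketched in C as follows; keys_equal is an illustrative helper and the parameters mirror the minkey/maxkey fields of the illustrative statistics structure above.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Compare two keys for equality (illustrative). */
static int keys_equal(const char *a, size_t alen, const char *b, size_t blen) {
    return alen == blen && memcmp(a, b, alen) == 0;
}

/* Combine the distinct-key counts of two adjacent subtrees A and B, with A
 * immediately to the left of B, as described above: if A's maxkey equals
 * B's minkey, the shared key would otherwise be counted twice. */
uint64_t combine_nkeys(uint64_t nkeys_a, const char *maxkey_a, size_t maxkey_a_len,
                       uint64_t nkeys_b, const char *minkey_b, size_t minkey_b_len) {
    uint64_t total = nkeys_a + nkeys_b;
    if (keys_equal(maxkey_a, maxkey_a_len, minkey_b, minkey_b_len))
        total -= 1;   /* the boundary key appears in both subtrees */
    return total;
}
```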
Alternatively, the statistics (414) field is a structure that can be incorporated directly into some other structure. For each child of a nonleaf node, the system stores a copy of the child's statistics in the subtreestatistics (2612) field of the appropriate child information structure (2605). The system can use those cached values to incrementally recompute the statistics of a node when a child's statistics change.

Alternatively, additional statistical summary information could be added to the statistics (414). For example, if a dictionary comprises rows comprising fields, then the statistics could keep a summary value for some or all of the fields. Examples of such summary values are the minimum value of a field, the maximum value of a field, the sum of the field values, the sum of the squares of the field values (which could, for example, be used to compute the variance and standard deviation of the field values), and the logical "and", logical "or", or logical "exclusive or" of fields treated as Booleans or as integers (where the logical operations operate bitwise on the values). The system could also be modified to maintain an estimate of the median value, or percentile values for particular percentile ranks (such as quartiles). A subtree fingerprint calculation can also be viewed as a kind of summary.

Alternatively, the summary information can be maintained incrementally as the tree is updated. For example, each parent's summary can be updated as soon as its child is updated. Alternatively, a parent's summary can be updated in a "lazy" fashion, waiting until the child is written to disk to update the parent. In this alternative case, when performing a query on the statistical summary, the system can walk the in-RAM part of the tree to calculate summary information, optionally updating the summary for the various nodes, and setting a Boolean to remember that the subtree has not been changed since the summary information was calculated.

To implement nested transactions, the system uses a different kind of leaf entry that comprises a stack of XIDs (described in more detail below). In this mode, transactions can be created, committed, and aborted. Given a transaction, operations can be performed within that transaction, including looking up values in the tree, inserting new values into the tree, and creating a child transaction. The child transaction is said to be inside the parent transaction.

The system maintains a set of all the open transactions using an instance of an OMT. The set of open transactions can instead be held in another data structure, including but not limited to a hash table, using the least significant bits of the XID to select the hash bucket. Alternatively, one can implement implicit commits, and maintain counters such as ndata_error_bound (3302) and ndata (3301), in a system with nested transactions. Alternatively, one can reduce the number of accesses into the open-transaction set, for example by employing an optimistic locking scheme. One implementation of such a scheme would be to maintain a global counter that is incremented every time a transaction begins, aborts, or commits. If the counter does not change between two XID lookups, then the result can be assumed to be the same. If the counter does change, then another lookup would be required. Another alternative is to record a pointer directly to the transaction record along with every XID, thus entirely avoiding the lookup.
Yet another alternative is to maintain a per-thread cache of recently tested XIDs that are known to be closed.

Nested Transactions

In a mode that implements nested transactions, the system operates as follows. A leaf entry comprises a key and a stack of transaction records. The bottom of the stack represents the outermost transaction to modify this leaf entry, and the top of the stack represents the innermost transaction. Each transaction record comprises an XID and a value. The value in each transaction record is the value for the key if that transaction successfully commits. Each transaction record also comprises some Boolean flags. When a transaction performs an insert, the new value is stored in the transaction record. When a transaction performs a delete, the value is replaced by a delete flag.

In this scheme each message (including but not limited to insert, delete, and abort) contains the XID of the current transaction and also the XIDs of all enclosing transactions. When a transaction aborts, an abort message is sent to every leaf entry modified by that transaction. When a transaction is committed, no messages are sent. When a message arrives at a leaf entry, the list of transaction IDs in the message is compared with the transaction records in the leaf entry to find the Least Common Ancestor (LCA). Any transactions in the leaf entry newer than the LCA could only be missing from the message if they had committed, so the system can promote the values in those transaction records to a committed state.

Each message contains the XIDs of the current transaction and of all enclosing transactions. For example, transaction X3 did not directly modify the entry at key k, so there is no message addressed to k with X3 as its first XID. But the XID for X3 is included in message (3504) because transaction X4 is enclosed within X3. When these messages arrive at the leaf entry for key k, they are processed as follows.

With the arrival of message (3501), the message contents are inserted into a new stack. The leaf contents (3601) mean that key k is now associated with the value v0 and that if transaction X0 successfully commits, then key k will have value v0. Because there is no entry before the entry for transaction X0, if transaction X0 does not successfully commit then the leaf entry for key k will be destroyed.

After processing message (3502), the leaf entry stack reflects not only the value that key k will have if X1 commits successfully (v1), but also the value it would have if transaction X1 aborts but X0 commits (v0), as well as the value (none) if both X1 and X0 abort.

Upon processing message (3503), the system infers that transaction X1 committed successfully by going up the list of enclosing transactions in message (3503) and comparing it with the list of enclosing transactions in leaf entry (3602). The system calculates that the LCA is transaction X0. In the absence of an abort message, this implies that transaction X1 committed successfully. Since transaction X1 is now complete with value v1, the value that would be saved if X0 were to commit successfully is now v1. So v1 is copied from the stack entry for X1, overwriting the value previously stored in the stack entry for X0. This process of moving a value higher in the stack is called promotion.

Upon processing message (3504), two changes are made to the leaf. A new stack entry is created for transaction X4 with a value of v4, and a new stack entry is created for transaction X3.
Even though X3 did not directly modify the value associated with this key, the transaction X4 enclosed inside X3 did. This is reflected in the stack of leaf entry (3604). In this example, after processing message (3504), the stack of transaction records contains the value v2 twice, once each for X2 and X3. The system employs a memory optimization to replace v2 in the transaction record for X3 with a placeholder flag, indicating that the value for transaction X3 is the same as the value in the transaction record below it, in this case X2.

During a query, or look up of a key, the read lock for the leaf entry is not necessarily taken, since the system tests the lock after the read. If a transaction unrelated to the transaction issuing the query is writing to this leaf entry, then that unrelated transaction is open and the system does not implicitly promote the value. So any implicit promotions done during the query can be based solely on whether the transactions whose XIDs are recorded in the leaf entry are still open. The system operates as follows when performing a look up: for every transaction in the leaf entry (starting with the innermost and going out), if the transaction is no longer open then implicitly promote it.

Each query is accompanied by a list of XIDs of all the enclosing transactions, similar to the stack of transaction IDs that accompanies each insert. The set of transaction IDs is passed on the call stack as an argument to the query function, but it could be passed in other ways, for example as a message. While this list may not be sufficient to determine that a given transaction is definitely closed, it can prove that a transaction is still open. This information can be used as a fast test to determine whether a dispositive test is required. If a transaction is definitely open, then it is not necessary to look up its XID in the global set of open transactions.

The query (3910) inside transaction X3 is accompanied by the sequence of XIDs (X3, X2, X0). When the query is processed, the XIDs in leaf entry (3903) are compared with the set of open transactions. Transaction X2 is the innermost transaction in the leaf entry, so the system compares it with the list of XIDs accompanying the query message, and sees that X2 is still open and no further action needs to be taken. The query (3911) after the close of transaction X3 is accompanied by the sequence of XIDs

The query (3912) is performed after X0 is committed, so when it is processed the set of open transactions is empty. The implicit promotion logic recognizes that transactions X2 and X0 have been committed and modifies the leaf entry to have only one transaction record, marked as the committed value with an XID of zero. An XID of zero is the root transaction, and is shown as a "root" XID in both the query (3912) and the leaf entry (3907).

Deletes are handled in a manner that is similar to inserts. When a delete message arrives at a leaf entry, the same implicit promotion logic is applied as when an insert arrives. But instead of copying a value into the innermost transaction record, the system sets a "delete" flag. Furthermore, if the next outer transaction record in the leaf entry is a delete, then the newly arrived delete is not recorded, because no matter whether the transaction for the new delete commits or aborts there will be no change to the leaf entry.
The leaf entry will still be subject to the delete issued by the enclosing transaction, and any query in this transaction (after the delete and before an insert) discovers no value. Alternatively, other approaches could be taken. For example, the system could store transaction records for nested deletes and then remove those records at a later time to facilitate the destruction of the leaf entry. Also, if after the delete message is applied to the leaf entry the only transaction record is a delete, then the leaf entry is removed from the OMT. Whether the transaction commits or aborts, the leaf entry will not exist, which can be represented by the absence of a leaf entry.

The arrival of message (4111) has no effect. It would be logically correct to add a transaction record

When message (4112) arrives at the leaf entry, the leaf entry is destroyed. The implicit promotion logic causes the leaf entry to temporarily take on the value

The arrival of an abort message at a leaf entry is similar to the arrival of an insert or delete, causing the implicit promotion of values set by transactions that have been committed. After performing that implicit promotion, the system removes the transaction record for the aborted transaction in the leaf entry, and then removes any placeholders that are on the top of the transaction record stack.

Alternatively, other variants of this strategy can be implemented. For example, when a transaction is committed, a message could be sent. As another example, if a message is sent whenever a transaction is committed, then the system can query the data without implicitly promoting leaf entries. As another example, the system could send commit messages when it is otherwise idle, in which case the system, when querying, would perform implicit promotion if needed. Alternatively, the scheme can be adapted to support different isolation levels. For example, to support a read-uncommitted isolation level, during a query the system can return the value at the top of the value stack if the top of the stack identifies a pending transaction.

Balancing

The system employs a parameter called the maximum degree bound, which is set to 32. If the number of children of a node ever exceeds the maximum degree bound, then the system splits the node in two. If two nodes are adjacent siblings (that is, one is child i of a node and the other is child i+1 of the same node) and the total number of children of the two sibling nodes is less than half the maximum degree bound, then the system merges the two nodes. Alternatively, the maximum degree bound could be set to some other number, which could be a constant, a function of some system or problem-specific parameters, a function of the history of operations on the data structure, a function of the sizes of the pivot keys, or be chosen for other reasons. It may also vary within the tree.

When a nonleaf node has c children, numbered from 0 inclusive to c exclusive, the system can split the node in two as follows. The system allocates a new block number using the block translation table (3001). It moves the children numbered from c/2 inclusive to c exclusive to the new node, numbering them from 0 inclusive to c−c/2 exclusive in the new node. When moving a child, the child's buffer is moved too. The pivot keys, which are numbered from 0 inclusive to c−1 exclusive, are also reorganized. The pivot keys from c/2 inclusive to c−1 exclusive are moved, renumbering them from 0.
Pivot key number c/2−1, called the new pivot, is removed from the old node and is inserted into the pivot keys of the node's parent. If the node is child number i of its parent, then the moved pivot key becomes pivot key number i in the parent, and the higher numbered pivot keys are shifted upward by one. If the node has no parent, then a new parent is created with a single pivot key. In the parent, the block number of the new child is inserted so that the new child is child number i+1 in the parent, and any higher numbered children are shifted up by one. The buffer in the parent is also split. That is, if the parent existed, then the messages in buffer number i of the parent are removed from that buffer and are copied into buffers i and i+1 as they would be during message execution in a nonleaf node. That is, each message is examined, and if its key is less than or equal to the new pivot then it is copied into buffer i, and if its key is greater than or equal to the new pivot then it is copied into buffer i+1.

After splitting a node, the node may end up being larger than its target size. In that case, the system flushes some buffers. Alternatively, the system may wait until some future operation to flush some buffers. After splitting a node, the parent node may have more children than the maximum degree bound. In that case, the system splits the parent. Alternatively, the system may wait until some future operation to split the parent.

When a leaf node exceeds its target size, the system splits the leaf node, creating a new node and moving the greater half of the key-value pairs to the new node. An appropriate pivot key is constructed which distinguishes between the lesser half and the greater half of the key values, the pivot key and new node are inserted into the parent, and the corresponding buffer in the parent is split, just as for the case of splitting a nonleaf node. Similarly, if there is no such parent, then a new parent is created just as when splitting a nonleaf node.

To merge two nonleaf nodes that are adjacent siblings is essentially the opposite of splitting a node. If one node has c0 children and is child i of its parent, and the other node has c1 children and is child i+1, then pivot key i in the parent is moved to be pivot key number c0−1 in the first node, the parent's higher numbered pivot keys shift down, and the pivot keys of the second node become the pivot keys numbered from c0 inclusive to c0+c1−1 exclusive. The child pointers and buffers from the second node are moved to the first node. And in the parent, buffer i and buffer i+1 are merged together by dequeueing each item from buffer i+1 and enqueueing it into buffer i. Buffer i+1 is freed, and the remaining buffers and child pointers are shifted downward.

To merge two leaf nodes that are adjacent siblings is similar. The parent node is changed in the same ways (merging buffers and shifting pivot keys, buffers, and child pointers down). The two children are merged by moving all the leaf entries from child i+1 to child i. The now-unused child's block number is returned to the free list in the block translation table (3001).

After merging two nodes, the resulting node may be larger than the target size for that node. In that case, the system flushes buffers. Alternatively, the system may flush the buffers at a future time. Alternatively, there are other ways of splitting and merging nodes. For example, the buffers that are to be split or merged may be flushed before the split or merge actually takes place.
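As an illustration of the child and pivot arithmetic in the nonleaf split described above, the following C sketch moves the upper half of the children and pivots into a new node; the node layout is the illustrative one used earlier, and buffer movement and parent updates are omitted.

```c
#include <string.h>

/* Illustrative nonleaf node view, for the split arithmetic only. */
struct split_node {
    int   nchildren;       /* c children, numbered 0..c-1     */
    void *childinfos[32];  /* child information, one per child */
    void *pivotkeys[31];   /* c-1 pivot keys, numbered 0..c-2  */
};

/* Split 'old' with c children: children c/2..c-1 and pivots c/2..c-2 move to
 * 'new_node', and pivot c/2-1 (the new pivot) is returned so the caller can
 * insert it into the parent, as described above. */
void *split_nonleaf(struct split_node *old, struct split_node *new_node) {
    int c = old->nchildren;
    int keep = c / 2;                      /* children 0..c/2-1 stay */
    int moved = c - keep;                  /* children c/2..c-1 move */

    memcpy(new_node->childinfos, &old->childinfos[keep], moved * sizeof(void *));
    memcpy(new_node->pivotkeys,  &old->pivotkeys[keep],  (moved - 1) * sizeof(void *));
    new_node->nchildren = moved;

    void *new_pivot = old->pivotkeys[keep - 1];  /* pivot c/2-1 goes to the parent */
    old->nchildren = keep;
    return new_pivot;
}
```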
Alternatively, there are other ways of implementing the tree. For example, the fanout and the number of pivot keys in each node can be variable, and could depend on the size of the pivot keys. Some fixed amount of space could be dedicated to the pivot keys. For 1 MB blocks, this space could be between 1 KB and 4 KB, unless the pivot keys are larger than 4 KB, in which case there might be only a single pivot key. Alternatively, it is possible to place a maximum limit on the number of pivot keys, regardless of how small the keys are.

In each node the system keeps a counter of how many successive insertions have inserted at the rightmost edge of the node. When splitting a node, if that counter is more than half the number of leaf entries in the node, then instead of splitting the node in half, the system splits the node unevenly so that few or no leaf entries are moved into the new node. This has the effect of packing the tree more densely when performing sequential insertions.

Alternatively, the system can employ other ways of optimizing sequential insertions or other insertion patterns. For example, another way to detect sequential insertions is for the system to keep track of the last key inserted, and whenever an insertion is to the immediate left of the last insertion and a node splits, the system splits the node just to the right of the last insertion. Furthermore, the system could keep a counter, for each node or for the whole tree, of how many successive insertions inserted just to the left of the previous insertion, and use that information to decide how to split a node. Similarly, the system could detect and optimize for sequential insertions of successively smaller keys.

Alternatively, when merging nodes, the system could consider the node to the left or to the right of a node, and merge more than two nodes. The decision to merge could be based on a different threshold, including but not limited to a threshold that the combined size is less than 10% of a node's target size. Alternatively, the system could adjust the target size of a node based on many factors. For example, the system could keep a time stamp on each node recording the last time that the node was modified. The system could then adjust the target size depending on the time since the node was modified.

Cursors

A cursor identifies a location in a dictionary. One way to identify a location is using the key-value pair stored in that location. A cursor can be set to the position of a given key-value pair, and can be incremented (moved to the next-larger key-value pair) or decremented (moved to the next-smaller key-value pair) in the tree. The system employs cursors to implement other query and update operations in the tree, as well as other functions, such as copying a tree for a backup. The system implements cursors comprising:
- 1. A root-to-leaf path in the tree, where the cursor indicates one of the key-value pairs in that leaf. Multiple cursors are allowed to point to a single key-value pair in a given leaf node.
- 2. Each leaf node stores a set of all cursors pointing to key-value pairs in that leaf.

The root-to-leaf path for a cursor is stored as a list of entries, the last of which is the leaf number. When the tree changes shape (e.g., because of tree-balancing operations), the system updates any affected paths. A cursor points to leaf nodes that are in RAM. That is, a node containing a key-value pair pointed to by a cursor is pinned in RAM and is not ejected until the cursor is deleted or moves to another node.
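A minimal C sketch of such a path-based cursor; the fixed path bound and the field types are assumptions, not values taken from the description.

```c
#include <stdint.h>

/* Illustrative cursor representation: the root-to-leaf path described above,
 * stored as the block numbers of the nodes on the path, ending at the leaf,
 * plus the position of the indicated key-value pair within that leaf. */
#define MAX_TREE_HEIGHT 64   /* assumed bound, not taken from the description */

struct tree_cursor {
    int      path_len;                 /* number of nodes on the path            */
    int64_t  path[MAX_TREE_HEIGHT];    /* block numbers, root first, leaf last   */
    uint32_t index_in_leaf;            /* which key-value pair within the leaf   */
};
```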
The cursor implementation maintains the property that every buffer on the path of a cursor is empty. This means that setting a cursor to point to a given node triggers emptying of the buffers on the cursor's root-to-leaf path. Each buffer maintains a reference count of the number of cursors that pass through that buffer. When the reference count of a buffer is nonzero, a message is sent directly to the child node of the buffer. When the reference count is zero, a message is stored in the buffer or passed down according to the buffer management rules outlined above. Alternatively, there are other ways of implementing cursors. For example, rather than storing root-to-leaf paths, one could store the key-value pairs in an in-RAM dictionary. The cursor root-to-leaf paths are implicit, rather than explicit. This solution enables a node of the streaming dictionary to efficiently query whether any cursors travel through it by performing a query on the in-RAM dictionary. All cursor updates involve predecessor and successor searches in the dictionary. This solution also further decouples the paging from the cursors. The solution can be useful for cursors that operate on non-tree dictionaries. In another mode of operation, a cursor is represented using a pointer to an OMT along with an index. The cursor also includes a pointer that points into the memory pool (1001) of the OMT, pointing at the key-value pair that the cursor is currently referencing. All of the buffers are flushed on the root-to-leaf path from the root of the dictionary to the leaf node containing the OMT. The cursor provides a callback function to disassociate the cursor from the OMT. The callback function copies the key-value pair into a newly allocated region of RAM, and causes the cursor to stop referring to the OMT and the memory pool (1001). When the cursor is disassociated it contains a pointer to an allocated region of RAM containing the key-value pair. If any operation results in a message entering one of the buffers along the path, or if the OMT reorganizes itself in RAM, or if the pointer into the memory pool becomes invalid, or the pointer to the OMT or the index in the OMT become invalid, then the system invokes the disassociation callback function. To advance the cursor, if the cursor is associated with an OMT, then the index is incremented, and the OMT is used to find the next value. If the cursor is disassociated, then the cursor finds the OMT by searching from the root of the dictionary down to a leaf, using the allocated copy of the key-value pair in RAM, and then associates the cursor with the OMT. Whenever the cursor searches down the tree, it flushes the buffers it encounters. To implement a point query, an associated cursor returns a copy of the key-value pair it points to. A disassociated cursor returns a copy of the allocated key-value pair it contains. When a cursor searches, it operates as follows: - 1. Let u denote the root node. First bring u into RAM, if it is not there already. - 2. If u is a leaf node, then look up the value in the OMT of the leaf by finding the first value that is greater than or equal to the key-value pair being searched for. If looking for a matching key in a DUP database, then the system must skip any leaf entries that contain provisionally deleted values, and find the first non-deleted value. If that skipping proceeds off the end of the OMT, then the system sets u to the parent, and tries to examine the next child of the parent. 
If there are no more children, then the system will return to the grandparent, and look at the next child, and so forth, in this way finding the next leaf node in the left-to-right ordering of the tree. - 3. The system identifies the appropriate buffer and child of u where k may reside. To do so, identify the largest pivot key pi in that node less than or equal to key k. - 4. If there is no such key, then proceed with the leftmost buffer and child of u. Otherwise, proceed with the buffer immediately after pivot key pi and the child associated with that buffer. Now search in that buffer for a message M(k,z). - 5. If there are messages in buffer i, then flush those messages to the next level of the tree. Flushing entails bringing the node for child i into RAM, and moving the messages into the appropriate buffers of the child. - 6. The system sets u to child i of u, and then proceeds to step 2. Thus as a successor (or predecessor) query proceeds along a root-to-leaf search path, the system flushes each buffer that the search path travels through. Once the search path reaches the leaf, the key just larger (or just smaller) than k in the leaf is the successor (or predecessor), with appropriate care taken for boundary cases where k is larger or smaller than all other keys in its leaf. In more detail, when searching for k, the system first searches in the root u0. The system compares pivot keys, identifying the appropriate buffer and child node u1 to search next. The system flushes the buffer completely, and then searches in child node u1. At that node, the system identifies the appropriate buffer and child node u2 to search next, and so forth down the tree. When the leaf node ul is reached, the system inserts a cursor at an element in that node and scans forward and/or backward to return the predecessor and/or successor of k, visiting an adjacent leaf as necessary. Alternatively, there are other ways of satisfying predecessor and successor queries. For example, here is a way to do so in which buffers are not flushed. In the nonleaf nodes, the system could maintain a dictionary, including but not limited to a PMA (described below). The dictionary could store keys and messages so that successor/predecessor queries can be answered efficiently at each node. In effect, each logical cursor comprises a set of cursors, one at each node on the root-to-leaf path. A successor/predecessor query on the logical cursor comprises checking for a successor/predecessor at each node cursor and returning the appropriate value (which will be the minimum/maximum of the successors/predecessors so computed). One way to satisfy range queries is by using cursors (a sketch of this approach appears below). To implement a range query between two keys [k1, k2], first set a cursor to the key k1, if it exists, and otherwise to the successor of k1. Then increment the cursor, returning elements, until an element is found whose key is greater than k2. Alternatively, the system can employ any correct implementation of successor/predecessor queries to implement range queries correctly. The system could avoid flushing buffers when performing queries, or the system could always flush buffers when performing queries. Avoiding flushing buffers can be appropriate when a query is read-only and does not change the structure of the tree. Alternatively, the system could preemptively flush all buffers affected by a query before answering the query. Alternatively, range queries could be implemented in other ways. 
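For concreteness, a cursor-based range query along the lines described above could be sketched in C as follows; the cursor functions (cursor_set, cursor_next, cursor_key, cursor_value) and the callback signature are assumptions for illustration, not the system's actual interface.

typedef struct cursor CURSOR;

/* Hypothetical cursor operations assumed to exist elsewhere. */
extern int cursor_set(CURSOR *c, const void *key);  /* key or its successor; nonzero if none */
extern int cursor_next(CURSOR *c);                  /* nonzero when no next element exists   */
extern const void *cursor_key(const CURSOR *c);
extern const void *cursor_value(const CURSOR *c);
extern int key_compare(const void *a, const void *b);

/* Apply fn to every key-value pair whose key lies in the range [k1, k2]. */
static void range_query(CURSOR *c, const void *k1, const void *k2,
                        void (*fn)(const void *key, const void *value))
{
    if (cursor_set(c, k1) != 0)
        return;                                   /* no key >= k1 exists         */
    do {
        if (key_compare(cursor_key(c), k2) > 0)
            break;                                /* passed the end of the range */
        fn(cursor_key(c), cursor_value(c));
    } while (cursor_next(c) == 0);
}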
As one such alternative, the client could provide a function to apply to every key-value pair in a range, and the system could iterate over the tree and the OMT data structures to apply that function. Some such functions admit a parallel implementation. For example, if the function is to add up all the values in the range, then since addition is associative, it can be performed in a tree-structured parallel computation. Packed Memory Array Supporting Variable-Size Elements In some modes of operation, the system can store key-value pairs in another dictionary data structure called a packed-memory array supporting variable-size elements (PMAVSE). The packed memory array (PMA) data structure is an array that stores unit-size elements in sorted order with gaps between the elements. A gap is an empty element in the array. A run of contiguous empty spaces constitutes a number of gaps equal to the length of the run. Let N denote the number of elements stored in a PMA. The value of N may change over time. A PMA maintains the following density invariant: In any region of a PMA of size S, there are Θ(S) elements stored in it, and for S greater than some small value, there is at least 1 element stored in the region. Note: the “big-Omega” notation is used similarly to the big-Oh notation described earlier. We say that f(n) is Ω(g(n)) if g(n) is O(f(n)). The “big-Theta” notation is the intersection of big-Oh and big-Omega. f(n) is Θ(g(n)) exactly when f(n) is O(g(n)) and f(n) is Ω(g(n)). Alternatively, a PMA could use both upper and lower density thresholds. To search for a given record x in a PMA, the system uses binary search. The binary search is slightly modified to deal with gaps. In particular, if a probe of a cell in the array indicates that that array position is a gap, then the system scans left or right to find a nonempty array cell. By the density invariant, only a constant number of cells need to be scanned to find a nonempty cell. Alternatively, there are other ways of searching within the array with gaps. For example, one might use a balanced search tree or any of a variety of search trees optimized for memory performance, including but not limited to a van Emde Boas layout, to index into the array. The leaves of the index could be associated with some cells of the array. The system rearranges elements in a subarray in an activity called a rebalancing. Given a subarray with elements in it, a rebalancing of the subarray distributes the elements in the subarray as evenly as possible. To insert a given record y into a PMA, the system first searches for the largest element x in the PMA that is less than y. If there is a gap in the array directly after x, then put y into this gap. If there is no gap immediately after x, then to make room for y, rebalance the elements in a certain subarray enclosing x. A given record x is deleted from a PMA as follows. First search for x and then remove it from the PMA, creating a new gap. Then scan the immediate neighborhood of x. If there are more than a certain number of gaps near x, then rebalance a certain subarray surrounding x. If the entire PMA contains more than a certain number of elements, then the system allocates a new array of twice the size of the old array, and copies the elements from the old array into the new array, distributing the elements into the array as evenly as possible. The old array is then freed. Alternatively, the new array could be some other size rather than twice the size of the old array. 
For example, the new array could be 3/2 the size of the old array. If the entire PMA contains fewer than a certain number of elements, then the system allocates a new array of half the size of the old array, and copies the elements from the old array into the new array, distributing the elements into the array as evenly as possible. The old array is then freed. The system sometimes rebalances so that there are additional gaps near areas that are predicted to have many insertions or few deletions in the future, and places fewer gaps near areas that are predicted to have fewer insertions or more deletions in the future. The following terminology is used to describe the workings of a PMA in our system. A subarray of a PMA is called a window. If W is a window then the following definitions apply. When the array gets too sparse or too dense, it is either grown or shrunk by a factor of G, where G=2. A smallest subarray that is involved in a rebalance is called a parcel. That is, an insertion that causes a rebalance must affect at least one parcel. The size of a parcel is P. The parameter A denotes the size of the entire array. That is, A=Capacity (entire PMA). The maximum and minimum allowed densities of a PMA are denoted D(A) and d(A), respectively. The maximum and minimum densities allowed in any parcel are denoted D(P) and d(P), respectively. Several relationships between parameters are maintained: - 1. D(A)≧G^2·d(A). This inequality says that if the elements are recopied from one array (at density D(A)) into a larger array, then the new larger array has density a factor of at least G larger than d(A) and a factor of at least G smaller than D(A). The same holds true if the elements are recopied from one array (at density d(A)) into a smaller array. - 2. d(P)<d(A)<D(A)<D(P)≦1. - 3. P=Θ(log A). Alternatively, these parameters can be set to favor certain operations over others. A rebalance window has an upper density threshold and a lower density threshold, which together determine the target density range of a given window. The density thresholds are functions of the window size. As the window size increases, the upper density threshold decreases and the lower density threshold increases. When A is a power of two, the system can calculate density thresholds as follows: G=2, P=2^c, and A=2^(c+h), where c=Θ(log log A) and h=(lg A)−c, where lg A denotes the log base two of A. Thus, for various values of l (the base-two logarithm of the window size), the parameters are set as follows. Consider a PMA having the following basic parameters: G=2, A=512, P=16, D(P)=1.0, D(A)=0.5, d(A)=0.12, d(P)=0.07. The minimum and maximum density thresholds of subarrays are set as follows: D(P)=D(2^3)=1.0, D(2^4)=0.9, D(2^5)=0.8, D(2^6)=0.7, D(2^7)=0.6, D(A)=D(2^8)=0.5, d(A)=d(2^8)=0.12, d(2^7)=0.11, d(2^6)=0.1, d(2^5)=0.09, d(2^4)=0.08, d(P)=d(2^3)=0.07. It can be verified that all above properties hold. For arbitrary values of G>1, the density thresholds of a window of size W are set analogously, with the upper threshold decreasing and the lower threshold increasing as W grows from P to A. A window is said to be within threshold if the density of that window is within the upper and lower density thresholds. Otherwise, it is said to be out of threshold. An insertion of an element y into a PMA proceeds as follows. First, search for the element x that precedes y in the PMA. Then check whether the density of the entire array is above threshold. If so, recopy all the elements into another array that is larger by a factor of G. Otherwise, check whether there is an empty array position directly after element x, and if so, insert y after x. Otherwise, rebalance to make space for y as follows. 
Choose a window size W to rebalance as follows. Choose a parcel that contains x, and consider the parcel to be a candidate rebalance interval. If that candidate is within threshold, then rebalance, putting y after x during the rebalance. If not, then arbitrarily grow the left and right extents of the candidate until the candidate is within threshold. Then rebalance, putting y after x during the rebalance. A deletion of an element x proceeds as follows. First, search for the element x in the PMA and remove it. Then check whether the density of the entire array is below threshold. If so, then recopy all the elements into another array that is smaller by a factor of G. Otherwise choose a parcel that contained x, and call it a candidate rebalance interval. If the candidate is within threshold then the deletion is finished; otherwise grow the left and right extents of the candidate until the candidate is within threshold. Then rebalance the candidate. Alternatively, there are many ways to choose candidate rebalance intervals. For example, the candidates could be drawn from a fixed set (e.g., the entire array, the first and second halves, the four quarters, the eight eighths, and so forth). Another example is to choose the rebalance window so that all the elements move in the same direction (e.g., to the right) during the rebalancing. Alternatively, there are several ways to implement a rebalance in-place. One way is to compress all the elements to one end of the rebalance interval and then put them in their final positions. This procedure moves each element twice. The rebalance can also be implemented so that each element only moves once. The system divides the rebalance window into left-regions and right-regions. In a left region the initial position of the element is to the right of the final position, and the element needs to be moved left. A right region is defined analogously. For each left region, move each element directly to its final position starting from the leftmost element. For each right region, move each element directly to its final position starting from the rightmost element. Now we explain how a PMAVSE operates. A PMAVSE supports elements that can have different sizes and also supports cursor operations and a cursor set. The PMAVSE comprises the following elements: - 1. Cursor set—The set of cursors is stored as an unsorted array. - 2. Record array—The record array is a PMA. Each element in this record array comprises two or more pointers and a small amount (also unit size) of auxiliary information. Each element thus has unit size. Each element in the record array represents a key-value pair stored in the PMAVSE. Specifically, the record array stores the following: - (a) A pointer to the key pi.k and the length of key pi.k. - The pointer points to a particular location in another array, called the key array, in which the actual key is stored. - (b) A pointer to the value pi.v and the length of the value. The pointer points to a particular location in another array, called the value array, in which the actual value is stored. - (c) A flag indicating that the record has been deleted, but that there still exist one or more cursors pointing to the record. The record remains stored in the PMAVSE until all cursors point elsewhere. - 3. Key array—The keys are stored in a PMA-type structure, modified to support different-length keys. The lower-bound density thresholds are set to zero as d(A)=d(P)=0. - 4. 
Value array—The values are stored in a PMA-type structure, modified, as with the key array, to support different-length values. The lower-bound density thresholds are set to zero as d(A)=d(P)=0. To search in the PMAVSE, perform a binary search on the record array (a simplified sketch of this search appears below). This binary search involves probes into the key array. To perform the binary search for a given key-value pair, pj, use the record array to find the middle element. Call the middle element pi. Then, use the key array to compare pj.k with pi.k. To perform an insert of a key pj.k once the predecessor key pi.k has been found, insert the new key into the key array and the new value into the value array. It remains to explain how to perform these new insertions, because the keys and values have variable lengths. Insertions into the key and value arrays use the same computation, except for the minor differences between storing keys and values. The description here is for the key array. For example, in the system all keys can be divided into bytes, which are used as unit-length chunks. The system divides the keys into unit-length chunks. Each unit-length chunk is inserted or deleted independently. This representation, where keys are split into independent unit-length chunks, is called here a smeared representation. A rebalance in the smeared representation is called here a smeared rebalance. The PMA insertion, deletion, and rebalance computations can thus be used. To read keys and to perform functions including but not limited to string comparison on the keys, the system regroups key chunks together, with the gaps removed. The system can also store different-length keys without splitting the keys into chunks. Instead, each key is stored in a single piece. This representation is called here a squished representation. The system rebalances the PMA as follows. Find the appropriate rebalance interval. Proceed as in a PMA using smeared representation—grow a rebalance interval until it is within threshold. Then rebalance the elements in the smeared representation. Then squish the elements, i.e., store the unit-size chunks contiguously, that is, with no gaps in between chunks. This rebalance of the elements in the smeared representation can be performed implicitly or explicitly. Squish the gaps as follows. If the entire element is contained in the rebalance interval, then squish the smeared key evenly from both sides so that half of the gaps go before the squished element and half go after (up to a roundoff error if there is an odd number of chunks). The smeared rebalance can be performed implicitly, rather than explicitly. An element that is only partially located within the rebalance interval does not move at all. To move an element that is entirely contained in a rebalance interval, place the middle unit-size chunk, or the middle two unit-size chunks, where they would be placed in the smeared representation. Next, place the rest of the chunks so that all the gaps are squeezed out. The PMA stores a set of cursors. The system stores the cursors unordered in an array. Whenever an element in the PMAVSE shifts around, all cursors pointing to that element are updated. This update involves a scan through the cursor set every time there is a rebalance. An element is not removed from the PMAVSE while there are one or more cursors pointing to it. Instead, the element remains in the PMA with a flag indicating that it has been deleted. Eventually, when no cursors point to the element, it is actually removed. 
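The record-array search mentioned above can be illustrated with the following simplified C sketch. It is a hedged approximation: gaps are modeled as NULL entries, the RECORD layout and key_compare function are assumptions, and the gap-skipping scan is written for clarity rather than to match the constant-work bound that the density invariant provides.

#include <stddef.h>

typedef struct record {
    const void *key;   size_t key_length;    /* points into the key array   */
    const void *value; size_t value_length;  /* points into the value array */
    int deleted;                             /* set while cursors remain    */
} RECORD;

extern int key_compare(const void *a, size_t alen, const void *b, size_t blen);

/* Find a nonempty cell near mid within [lo, hi]; return -1 if the range is all gaps. */
static long skip_gaps(RECORD **array, long lo, long hi, long mid)
{
    for (long i = mid; i <= hi; i++) if (array[i] != NULL) return i;
    for (long i = mid - 1; i >= lo; i--) if (array[i] != NULL) return i;
    return -1;
}

/* Binary search for key in a record array of length n; returns an index or -1. */
static long pmavse_find(RECORD **array, long n, const void *key, size_t key_length)
{
    long lo = 0, hi = n - 1;
    while (lo <= hi) {
        long mid = skip_gaps(array, lo, hi, lo + (hi - lo) / 2);
        if (mid < 0) return -1;                  /* nothing left in this range */
        int c = key_compare(key, key_length, array[mid]->key, array[mid]->key_length);
        if (c == 0) return mid;
        if (c < 0) hi = mid - 1; else lo = mid + 1;
    }
    return -1;
}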
Alternatively, there are other data structures for storing cursors. For example, the cursors could be stored in an ordered list where the elements have back pointers to the cursor list. Then each element would contain a list of pointers to the cursors at that element. This representation guarantees that one never has to traverse many cursors to find all cursors that have to be updated on a rebalance. Alternatively, the cursors could also be stored in any dictionary structure, including but not limited to a sorted linked list, a balanced search tree, a streaming disk-resident dictionary, or a PMA, ordered by the elements that they point to, with no back pointers. File Header The system stores each dictionary in a file. At the beginning of the file are two headers, each of which comprises a serialization comprising - 1. a literal string “tokudata”, - 2. a version number, - 3. a number indicating the size of the header, stored in a canonical order (most significant byte first), - 4. a checksum, - 5. a number used to determine whether the system is storing data in big-endian or little-endian order, or some other order, - 6. a number indicating how many checkpoints are stored, - 7. an offset in the file at which a block translation table (BTT) is stored, - 8. a number indicating the disk block number of the root of the tree, - 9. a number encoding the LSN of the operation that most recently modified the tree, and - 10. a string that encodes dictionary-specific data (including but not limited to the type of each column in the dictionary). The root block number, along with the BTT, provides information for an entire tree. The root block number can be translated using the BTT to a segment. The segment in turn may contain block numbers of children, which are translated by the BTT. Two completely different trees may be referred to by different headers, since the BTTs may map the same block numbers to different segments, and the two trees may share subtrees (or the entire trees may be the same), since their respective BTTs may map the same block number to the same segment. Alternatively, multiple dictionaries can be stored in one file, or a dictionary can be distributed across multiple files, or several dictionaries can be distributed over a collection of files. For example, for implementations that use multiple files for one or more dictionaries, the block translation table can store a file identifier as well as an offset in each block translation pair of a block translation array (3009). Alternatively, more than two headers can be employed. For example, to take a snapshot of the system, a copy of the BTT and header can be stored somewhere on disk, including but not limited to in a third header location. The system could maintain an array of headers to manage arbitrarily many snapshots. Buffer Pool The system employs a buffer pool which provides a mapping between the in-RAM and on-disk representations of tree nodes. When a node is brought into RAM, it is pinned. When a node is pinned, it is kept in RAM until it is unpinned. Pinning a node is a way of informing the system to keep a node in RAM so that it can be manipulated. A node can have multiple simultaneous pins, since multiple functions or concurrent operations can manipulate a tree node. To pin a node in RAM, the system first checks whether that node is already in the buffer pool and, if not, brings it into RAM. Then the system updates a reference count saying how many times the node has been pinned. 
A node can be removed from RAM when the reference count reaches zero. When a node is transferred from disk into RAM, the size of the in-RAM representation is calculated. Then the system constructs the in-RAM representation of the node. The buffer pool provides a function getandpin which, given a block number, pins the corresponding node in RAM, bringing it into RAM if it is not already there. The buffer pool also provides a function maybegetandpin, which pins the node only if it is already in RAM. The system employs maybegetandpin to decide whether to move data from one node to another depending on whether the second node is in RAM. The system also employs maybegetandpin to control aggressive promotion. In one mode, the system aggressively promotes messages into any in-memory node. In another mode, the system aggressively promotes messages only to dirty in-memory nodes. When the total size of the nodes in RAM becomes larger than the buffer pool's allocated memory, the system may evict some nodes from RAM. The system can evict the least recently used unpinned node from the buffer pool. To evict a node, the node is deleted from RAM, first writing it to disk if the node is dirty. A node, block, or region of RAM is defined to be dirty if it has been modified since being read from disk to RAM. Alternatively, there are other ways to optimize the page-eviction strategy in the buffer pool. The decision of which node to evict can be weighted by one or more factors, for example, the size of the node, the amount of time that the node has been ready to be ejected, the number of times that the node has been pinned, or the frequency of recent use. A Buffer Pool (4601) is a structure comprising - 1. n_in_table (4602), a number indicating how many nodes are stored in the buffer pool; - 2. table (4603), an array of pointers to pairs (each array element is called a bucket, and the table itself acts as a hash table); - 3. table_size (4604), a number indicating how many buckets are in the table; - 4. lurlist (4605), a doubly linked list threaded through the pairs, ordered so that the more recently used pairs are ahead of the less recently used pairs; - 5. cachefile_list (4606), a list of pointers to cachefiles; - 6. size (4607), a number which is the sum of the in-RAM sizes of the nodes in the buffer pool; - 7. size_limit (4608), a number which is the total amount of RAM that the system has allocated for nodes; - 8. mutex (4609), a mutual exclusion lock (mutex); - 9. workqueue (4610), a work queue; and - 10. checkpointing (4611), a Boolean which indicates that a checkpoint is in progress. A buffer pool pair is a structure comprising - 1. a pointer to a cachefile; - 2. a block number; - 3. a pointer to the in-RAM representation of a node; - 4. a number which is the size of the in-RAM representation of the node; - 5. a Boolean, dirty, indicating that the node has been modified since it was read from disk; - 6. a Boolean, checkpoint_pending, indicating that the node is to be saved to disk as part of a checkpoint before being further modified; - 7. a hash of the block number; - 8. a pointer, hash_chain, which threads the pairs from the same bucket into a linked list; - 9. a pair of pointers, next and prev, which are used to form the doubly linked list ordered by how recently used each node is; - 10. a readers-writer lock; and - 11. a work list comprising work items, each item comprising a function and an argument. A cachefile is organized so that it is in one-to-one correspondence with the open dictionaries. 
A cachefile is a structure comprising - 1. ref count, a number, called a reference count, which is incremented every time a dictionary is opened, every time a rollback entry is logged, and any time any other use is made of the cachefile, to prevent the cachefile from being closed until all uses of the cachefile have finished; - 2. fd, a number which is a file descriptor for the file that holds the on-disk data; - 3. filenum, a number which is used to number files in the recovery log; - 4. fname, a string which is the name of the file; - 5. a pointer to the header for the file; - 6. a pointer to the BTT for the dictionary; - 7. a pointer to the CBTT for the dictionary; and - 8. a pointer to the TBTT for the dictionary. A work queue is a structure comprising - 1. a doubly linked list of work items; - 2. a condition variable called wait_read; - 3. a number called want_read; - 4. a condition variable called wait_write; - 5. a number called want_write; - 6. a Boolean indicating that the work queue is being closed; and - 7. a number which counts the number of work items in the list. To enqueue a work item onto a work queue, the system performs the following operations: - 1. Lock the work queue. - 2. Increment the counter. - 3. Put the work item into the doubly linked list. - 4. If want_read>0, then signal the wait_read condition. - 5. Unlock the work queue. To dequeue a work item from a work queue, the system performs the following operations: - 1. Lock the work queue. - 2. While the work queue is empty and the Boolean indicates that the queue is not closed: - (a) Increment want_read. - (b) Wait on the wait_read condition. - (c) Decrement want_read. - 3. Decrement the counter. - 4. Remove a work item from the doubly linked list. - 5. Unlock the work queue. In some cases the locking and unlocking steps can be skipped, for example, if the work queue is being filled before any worker threads are initialized. When a buffer pool is created, a set of worker threads is created. Each thread repeatedly dequeues a work item from the work queue (waiting if there are no such items), and then applies the work item's function to its argument. In some cases, the system decides that there is a large backlog of work items, and prevents additional writes into the buffer pool, using the want_write counter and the wait_write condition variable. In some cases a thread writes a node to disk directly. In other cases, a thread schedules a node to be written to disk. For example, when reading one node, if the buffer pool becomes oversubscribed, the system schedules the least recently used node to be written to disk by enqueuing a work item. That enqueued work item, when run, obtains a writer lock on the pair, and writes the node to disk. When a dictionary is open in the buffer pool, a cachefile is associated with the dictionary. When a dictionary is opened, the system finds the currently associated cachefile, if one exists (in which case the reference count is incremented), or creates a new cachefile. In the case where a new cachefile is created, the system opens a file descriptor, and stores that in the cachefile. The system stores the file name in the cachefile. The system allocates a file number, and logs the association of the file number with the path name. If the file exists, then the header is read in, a header node is created, and the pointer to the header is established. If the file does not previously exist, a new header is created. When a dictionary is closed, the reference count is decremented. When the reference count reaches zero, the system - 1. 
flushes any pairs that belong to that cachefile, writing them into the file; - 2. waits for any pairs in the work queue to complete; - 3. writes the header to the file; - 4. removes the cachefile from the linked list of cachefiles; - 5. closes the file descriptor; - 6. deallocates the RAM associated with the cachefile; and - 7. performs any additional housekeeping that is needed to close the cachefile. To perform a getandpin operation of a node, the system computes a hash on the block number, and looks up the node in the hash table. If the node is being written or read by another thread, the system waits for the other thread to complete. If the node is not in the hash table, the system reads the node from disk, decompresses it, and constructs the in-RAM representation of the node. Once the node is in RAM, the system modifies the least-recently-used list, and acquires a reader lock on the pair. If the checkpoint_pending flag is TRUE, the system - 1. writes the node to disk (updating the BTT), - 2. also updates the temporary BTT used for the node's dictionary (the temporary BTT is created, for example, during a checkpoint), and - 3. sets checkpoint_pending to FALSE before returning from getandpin. If the buffer pool hash table ever has more nodes in the buffer pool than there are buckets in the hash table, the system doubles the size of the hash table, and redistributes the values. Each pair p has a hash value h(p) stored in it. If the length of the table is n, then p is stored in bucket h(p) mod n. When storing a node n from cachefile c that was previously not in the buffer pool, a buffer pool pair is created pointing at c and n. The pair is initialized to hold the block number of the node. The dirty bit is initially set to FALSE. For each nonleaf node in RAM, the system maintains the hashes of each of the node's children (in childfullhash (2614)), which can help to avoid the need to recompute the hash function on the node. Alternatively, the system could use different buffer-pool constructions. For example, the system could build a buffer pool based on memory mapping (e.g., the mmap() library call), or instead of using a hash table, an OMT could be used. In some modes of operation the system maintains the invariant that if a node is pinned then its parent is pinned. The system maintains this invariant by keeping a count of the number of children of a node that are in RAM, and treating any node with a nonzero count as pinned. The children can maintain a pointer to the in-RAM representation of the parent. Whenever the tree's shape changes (for example, when a node is split), the counters and the parent pointers are updated. This invariant can be useful when updating fingerprints and the estimates of the number of data pairs and the number of distinct keys. The estimates are propagated up the tree just before a node is evicted, rather than on every update to the node. In some modes of operation, the system propagates data upward every time any node is updated, and does not need to maintain the invariant at all times, but only needs to maintain the invariant when a child node is actually being updated. Data Descriptors The system employs a byte string called a data descriptor that describes information stored in a dictionary. The descriptor comprises a version number and a byte string. Associated with each dictionary is a descriptor. The system uses descriptors for at least two purposes. - 1. For comparison functions. 
The system uses the same C-language function to implement comparisons in different dictionaries. The C-language function uses the descriptor associated with a dictionary to compare two key-value pairs from that dictionary. The descriptor includes information about each field in a key. For example, the descriptor could contain information that the first field of a key is a string which should be sorted in ascending order, and the second field is an integer which should be sorted in descending order. - 2. For generating derived rows. In one mode, the system maintains at least two dictionaries. One dictionary is a primary dictionary, and a second dictionary is a derived dictionary. For each key-value pair in the primary dictionary, the system automatically generates a key-value pair for the derived dictionary. For example, if a primary dictionary's pairs comprise a first name, a last name, and a social security number, then in a secondary dictionary the pairs might comprise the social security number followed by the last name. Thus a descriptor describes the types and sort order for each field in a key-value pair, and for derived dictionaries, a descriptor further describes which fields from a primary row are used to populate a derived row. The system upgrades descriptors incrementally. The system organizes each dictionary into one or more nodes. Each node contains the version number of the descriptor for rows stored in that node. If the users of the system need to change the descriptor for a dictionary, the old descriptor and the new descriptor are both stored in the header of the dictionary. When a node is read in, if the descriptor version for that node is an old version, then the system calls a user-provided upgrade function to upgrade all the pairs stored in that node. On-Disk Encoding and Serialization To write data to disk, the system first converts a node into a serialized representation (an array of bytes), in much the same way that messages are converted into an array of bytes. Then the data is compressed. Then a node header is prepended to the compressed data, and the node header and compressed data are written to disk as a single block. A node, as written to disk, comprises the following serialized representation: - 1. a literal string “tokuleaf” or “tokunode” depending on whether the node is a leaf node or a nonleaf node; - 2. a number indicating which file version the node is, which can facilitate changing the encoding of a block in future versions and can facilitate the reading of older versions of the block; - 3. the dictionary's descriptor version; - 4. nodelsn (411); - 5. the compressed length of the compressed subblock that follows; - 6. the uncompressed length of the compressed subblock that follows; - 7. a compressed subblock, comprising the following information, which is then compressed as a block: - (a) a target size of the node (which defaults to 4 megabytes), - (b) isdup (403), - (c) height (405), - (d) randfingerprint (406), and - (e) localfingerprint (407). For leaf nodes, the statistics (414) can be represented on disk by recalculating all the values as the leaf node is read into memory. That is, the system can encode a leaf node's statistics using no bits on disk. After the localfingerprint (407), leaf nodes are additionally serialized by encoding - 1. the number of leaf entries in the node, and - 2. for each leaf entry, from least to most in sorted order, the serialized leaf entry. 
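The leaf-entry portion of this serialization could be sketched as follows in C; the write-buffer helpers (wbuf_append_uint32, wbuf_append_bytes) and the LEAFENTRY type are illustrative assumptions, and the sketch covers only the two items listed above, not the full node encoding.

#include <stdint.h>
#include <stddef.h>

typedef struct wbuf { uint8_t *data; size_t length, capacity; } WBUF;
typedef struct leafentry LEAFENTRY;

/* Hypothetical helpers assumed to exist elsewhere. */
extern void wbuf_append_uint32(WBUF *w, uint32_t v);             /* canonical byte order */
extern void wbuf_append_bytes(WBUF *w, const void *p, size_t n);
extern size_t leafentry_serialized_size(const LEAFENTRY *le);

/* Append the number of leaf entries, then each serialized leaf entry in
 * sorted order, to the subblock being built (before compression). */
static void serialize_leaf_entries(WBUF *w, LEAFENTRY **entries, uint32_t n)
{
    wbuf_append_uint32(w, n);
    for (uint32_t i = 0; i < n; i++)
        wbuf_append_bytes(w, entries[i], leafentry_serialized_size(entries[i]));
}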
After the localfingerprint (407), nonleaf nodes are additionally serialized by encoding - 1. statistics (414), which are encoded by - (a) a number ndata (3301), - (b) a number ndata_error_bound (3302), - (c) a number nkeys (3303), - (d) a number nkeys_error_bound (3304), - (e) a number minkey (3305), - (f) a number maxkey (3306), and - (g) a number dsize (3307); - 2. the subtree fingerprint, which is the sum of the fingerprints of the children; and - 3. the number of children. - 4. For each child the nonleaf nodes further encode - (a) the subtreefingerprint (2611) of the child, - (b) the stored statistics for the child, encoded as for the node statistics (414) in Item 1 above, - (c) the block number of the child, - (d) the FIFO buffer of the child, represented by - i. the number of entries in the FIFO buffer, - ii. for each message in the FIFO buffer, from oldest to newest, the serialized representation of the message. - 5. For each pivot key the nonleaf nodes further encode - (a) the key of the pivot key (encoded as a length followed by the bytes of the key), and - (b) for DUP dictionaries the value of the pivot key (encoded as a length followed by the bytes of the value). After the previously encoded information, each node further encodes a checksum for all of the data, including the uncompressed node header and the compressed subblock. This checksum is computed on the subblock before the data is compressed, so that the system can verify the checksum after the data has been decompressed after being read from disk. The checksum is stored at the end of the block. Alternatively, data can be represented on disk in other ways. For example, minkey (3305) can be eliminated from the on-disk representation if the system takes care to make sure that the pivot keys actually represent a value present in the left subtree. In one mode of operation, the system compresses blocks using a parallel compression computation. In this case, instead of storing the compressed and uncompressed lengths of the subblock, the system divides the subblock into N subsubblocks, and stores the value N. Each subsubblock can be compressed or decompressed independently by a parallel thread. The compressed and uncompressed lengths of the subsubblocks are stored. Alternatively, the system can choose how much processing time to devote to compression. For example, if the system load is low, the system can use a compression computation that achieves higher compression. In one mode, the system adaptively increases the target size of nodes depending on the effectiveness of compression. If a block has never been written to disk, the system sets the block target size to 4 megabytes (4 MB). When a block is read in, the system remembers the compressed size. For example, if the block was 3 MB of uncompressed data and required 0.5 MB after compression, then the block was compressed at 6-to-1, and so the system increases the target size from its default (4 MB) by a factor of 6 to 24 MB. When a block is split, both new blocks inherit the compression information from the original block. If later data is inserted that has more entropy, then when the data is written to disk, a new compression factor is computed, and the block will be split at a smaller size in future splits. Alternatively, the system could use other ways to implement compression, depending on the specifics of the node representations. For example, each leaf entry or message could be compressed individually. 
Alternatively, the leaf entries or messages could be compressed in subblocks of the node. If the dictionary is used in a database organized as rows and columns, the keys and values may have finer structure (including but not limited to fields that represent columns). In such a case, the system can separate the fields and store like fields together before compressing them. Alternatively, other representations of tree nodes could be used. For example, the data could be stored in compressed and/or encrypted form on disk. The data can be stored in a different order. The target node size need not be 4 MB or even any particular fixed value. It need not be constant over the entire tree, but could depend on the particular storage device where the node is located, or it could depend on other factors such as the depth of the node within the tree. Alternatively, there are other ways of building in-RAM representations, permitting fast searches and updates of key-value pairs in nodes and nodes' buffers. For example, instead of using a FIFO queue in each buffer, one could use a hash table or OMT in a buffer, merge messages at nonleaf nodes of the tree, and, on lookup, sometimes get values directly out of messages stored at nonleaf nodes. Two or more messages could be merged into one message. A packed-memory array could be used instead of a hash table or OMT. A block translation table is serialized by encoding - 1. a number indicating the size of the block translation table; - 2. for each block translation pair - (a) the disk offset of the block translated by the pair (encoded as −1 for unallocated block numbers), and - (b) the size of the block (encoded as −1 for unallocated block numbers); and - 3. a checksum. That information is enough to determine all the information needed in the block translation table. For example, the set of free segments comprises those segments which are not allocated to a block. For each dictionary, the system serializes the following information at the beginning of the file containing the dictionary: - 1. The literal string “tokudata”. - 2. The layout version, stored in network order. - 3. The size of the header, stored in network order. - 4. A byte-ordering literal, which is a 64-bit hexadecimal number 0x0102030405060708 which the system uses to determine the byte order for the data on disk, including but not limited to big-endian or little-endian. Many integers are stored in the byte order consistent with the byte-ordering literal. - 5. A count of the number of checkpoints in which the dictionary is participating. - 6. The target node size for the dictionary, which defaults to 2^22, which is 4 megabytes. - 7. The location of the BTT on disk. - 8. The size of the BTT. - 9. The block number of the root of the dictionary. - 10. isdup (403). - 11. An “old” layout version, used to maintain the oldest layout version used by any node in the dictionary. - 12. A checksum. File Names and File Operations The system uses a level of indirection for dictionary file names. Associated with each dictionary are two names, a dname and an iname. Dnames are the logical names of the dictionaries. Inames are the file names. The system maintains a dictionary called the dname-iname directory as a NODUP dictionary. The directory maps dname to iname, where dname is the key and iname is the value. A dname and an iname both have the syntax of a pathname. 
An iname is a pathname relative to the root of a file directory hierarchy, which is the structure called an environment, containing all the dictionaries of a particular storage system. The iname is the name of a file in a file system. In most situations where a dictionary is renamed, the system does not rename the underlying file, but instead treats inames as immutable. Every iname is unique over the lifetime of the log. This uniqueness is enforced by embedding the XID of the file creation operation in the iname. In one mode, the iname is a 16-digit hex number with a .tokudb suffix. In another mode the name contains a hint to the original user name, for example tablename.columnname.01234567890ABCDE.tokudb where tablename is the name of the table, columnname is the name of a column being indexed, and 01234567890ABCDE is a hexadecimal representation of the XID. Most file operations occur within a transaction. The close operation is a non-transactional file operation. The iname-dname directory uses string comparison for its comparison function, and has no descriptor. The iname-dname directory is a dictionary. The system applies checkpointing, logging, and recovery to the dictionary. The directory is recovered like any other dictionary. The system logs a fassociate (4703) entry in the recovery log when it opens the directory. When performing file operations, the system typically takes one or more locks on the directory. For example, when renaming a file, an exclusive lock on the old dname and the new dname is acquired. The lock is held until the transaction completes. The recovery log contains dnames for the purposes of debugging and accountability, stored, for example, in comment fields. On system start up, the system receives three pathnames from a configuration file, command line argument, or other mechanism: - 1. envdir, the environment pathname; - 2. datadir, the pathname of the filesystem directory where the dictionaries are stored; and - 3. logdir, the pathname of the filesystem directory which holds the recovery log files. All new data dictionaries are created in datadir. The datadir is relative to the environment envdir, unless it is specified as an absolute pathname. All inames are created as relative to the envdir, inside the datadir. The pathname stored in datadir will be the prefix of the pathname in the iname. The envdir is relative to the current working directory of the process running the system, unless it is specified as an absolute pathname. If the system is shut down and then restarted with a new datadir, then - 1. New dictionaries are created in the new datadir. - 2. Old dictionaries, accessed by iname, are still available in their original directories. - 3. The implicit envdir is prefixed to the iname. That is, the full pathname is envdir/iname. - 4. inames stored in the log are of the form original_data_dir/original_iname. - 5. inames stored in the iname-dname directory are of the same form. When the system performs a file operation, except for close, the system creates a child transaction in which to perform the file operation. If the child transaction fails to commit, then the file operation is undone, making the file operations atomic. Every file operation comprises the following steps: - 1. Begin a child transaction. - 2. Perform the operation. - 3. If the operation failed, abort the child transaction. - 4. If the operation succeeded, commit the child transaction without fsyncing the log. There is no child transaction for file-close. 
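A hedged C sketch of this wrapping of file operations in a child transaction follows; the TXN type and the txn_begin, txn_commit, and txn_abort functions are assumptions standing in for the system's transaction interface.

typedef struct txn TXN;

/* Hypothetical transaction interface assumed to exist elsewhere. */
extern int  txn_begin(TXN *parent, TXN **child);
extern int  txn_commit(TXN *t, int fsync_log);   /* pass 0 to skip the log fsync */
extern void txn_abort(TXN *t);

/* Run one file operation atomically inside a child transaction. */
static int file_operation(TXN *parent, int (*op)(TXN *txn, void *arg), void *arg)
{
    TXN *child;
    int r = txn_begin(parent, &child);
    if (r != 0) return r;
    r = op(child, arg);
    if (r != 0)
        txn_abort(child);                        /* undo the failed file operation  */
    else
        r = txn_commit(child, 0);                /* commit without fsyncing the log */
    return r;
}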
For all the operations described below, the commit actions are performed when the topmost ancestor transaction commits. Create or Open Dictionary Opening a dictionary inserts an fopen (4710) entry in the recovery log. There is no fopen entry in the rolltmp log. Creating a dictionary inserts an fcreate (4709) entry in the recovery log, followed by a fopen (4710) entry if the dictionary is to be opened. When recovery is complete, all dictionaries are closed. After recovery, the iname-dname directory is opened before performing new postrecovery operations. To create or open a file, the system performs the following operations: - 1. Examine the iname-dname directory to see if dname exists. - 2. Take a lock on the dname in the directory. - 3. Take a write lock if the file is being opened in create-exclusive mode. - 4. Take a read lock otherwise. - 5. Terminate with an error if: - (a) dname is found and the operation is to create the file in an exclusive mode, or - (b) dname is not found and the operation is to open an existing file. - 6. If creating a file and dname is not found: - (a) Take a write lock on dname in the iname-dname directory, if the write lock has not been acquired earlier. - (b) Generate an iname using the XID of the child transaction. - (c) Insert a key-value pair in the iname-dname directory, INSERT(dname, iname). - (d) Log the file creation: - i. Generate an LSN. - ii. Log a fcreate entry (with dname and iname). - iii. fsync the log. - (e) Create the file on disk using iname. - (f) Make an fcreate entry in the rolltmp log. - 7. Log the fopen, without fsync. - 8. Open the dictionary. - 9. If the file was just created, take a full-range write lock on the new dictionary. When the system aborts a file-open operation, aborting the transaction implicitly will undo the operations on the directory. To abort fcreate, the system performs the following operations: - 1. Delete the iname file. - 2. The iname-dname directory will be cleaned up by the abort. It is not necessary to explicitly modify the directory. - 3. The dictionary will be closed implicitly by aborting the transaction. During recovery, in backward scan for fcreate, the system performs the following operation: - 1. Close the file if it is open. During recovery, in backward scan for fopen, the system performs the following operations: - 1. Close the file if it is open. During recovery, in forward scan for fcreate, the system performs the following operations: - 1. If the transaction does not exist (because the topmost parent XID is older than the oldest living transaction), do nothing. - 2. Else - (a) Before reaching the begin-checkpoint record for the oldest complete checkpoint: - i. If the file does not exist, then the file has been deleted, so do nothing. - ii. If the file does exist, record the create in the transaction's rollback log and open the file. - (b) After reaching the begin-checkpoint record: (The file creation was after the checkpoint, so the file may not even exist on disk in the event of certain kinds of system failures.) - i. Delete the file if it exists. - ii. Create and open the file, recording the creation in the transaction's rollback log. - (c) The iname-dname directory will be recovered on its own. During recovery, in forward scan for fopen, the system performs the following operations: - 1. Open the dictionary (using the iname for the pathname). If the file is missing, then ignore the fopen and ignore any further references to this file. Close Dictionary To close a dictionary, the system performs the following operations: - 1. Log the close operation. - 2. 
Close the dictionary. Delete Dictionary To delete a dictionary, the system performs the following operations: - 1. Find the relevant entry in the directory and get the iname. This operation takes a write lock on the key/name pair in the directory by passing in a read-modify-write flag called DB_RMW. - 2. If the dictionary is open, return an error. - 3. Delete the entry from the iname-dname directory. - 4. Make an entry in the rolltmp log. - 5. Mark the transaction as having performed a delete. - 6. Log an entry in the recovery log. To commit, the system performs the following operations: - 1. If this transaction deleted a dictionary, write a committxn (4705) entry to the recovery log and fsync the log. - 2. Delete the iname file if it exists. To abort requires no additional work. The directory will be cleaned up by the abort. It is not necessary to explicitly modify the directory. During recovery, in forward scan, the system performs the following operations: - 1. If the transaction does not exist, do nothing. - 2. Else create a rolltmp log entry. (The file will be deleted when the transaction is committed.) Rename Dictionary To rename a dictionary, the system performs the following operations: - 1. Record the rename as a comment in the log. - 2. If the dictionary is open, return an error. - 3. Delete the old entry from the directory. This operation fails if the dname is not in the directory, and otherwise it takes a write lock on the entry using the DB_RMW flag. - 4. Insert the new entry into the iname-dname directory. To abort requires no additional work. The directory will be cleaned up by the abort. It is not necessary to explicitly modify the directory. SQL Database Operations When the system is operating as a SQL database, the database tables are mapped to dnames, which are in turn mapped to inames. In a database, a table comprises one or more dictionaries. One of the dictionaries serves as the primary row store, and the others serve as indexes. The SQL command RENAME TABLE is implemented by the following steps: - 1. Begin a transaction. - 2. Create a list of dictionaries that make up the table. - 3. For each dictionary: - (a) Close the dictionary if open. - (b) Rename the dictionary. - 4. Commit the transaction. - 5. If dictionaries are expected to be open, open them. The SQL command DROP TABLE is implemented by the following steps: - 1. Begin a transaction. - 2. Create a list of dictionaries that make up the table. - 3. For each dictionary: - (a) Close the dictionary if open. - (b) Delete the dictionary. - 4. Commit the transaction. The SQL command CREATE TABLE is implemented by the following steps: - 1. Begin a transaction. - 2. Create a list of dictionaries that make up the table. - 3. For each dictionary: - (a) Create the dictionary. - (b) Close the dictionary. - 4. Commit the transaction. - 5. If dictionaries are expected to be open, open them. The SQL command DROP INDEX is implemented by the following steps: - 1. Begin a transaction. - 2. Delete the dictionary corresponding to the index. - 3. Commit the transaction. The SQL command ADD INDEX is implemented by the following steps: - 1. Begin a transaction. - 2. Create a dictionary for the index. - 3. Populate the dictionary with index key-value pairs. - 4. Close the dictionary. - 5. Commit or abort the transaction. - 6. If successful, open the new dictionary. The SQL command TRUNCATE TABLE, when there is no parent transaction, is implemented by the following steps: - 1. Begin a transaction. - 2. 
Acquire metadata (including dname, settings, descriptor). - 3. For each dictionary in the table: - (a) Close the dictionary if open. - (b) Delete the dictionary. - (c) Create a new dictionary with the same metadata. - (d) Close the dictionary. - 4. If successful, commit the transaction; otherwise abort the transaction. - 5. If dictionaries are expected to be open, open them. Logging and Recovery The log comprises a sequence of log entries stored on disk. The system appends log entries to the log as the system operates. The log is implemented using a collection of log files. The log files each contain up to 100 megabytes of logged data. As the system operates, it appends information into a log file. When the log file becomes 100 megabytes in size, the system creates a new log file, and starts appending information to the new file. After a period of operation, the system may have created many log files. Some of the older log files are deleted, under certain conditions described below. Some of the log files may be stored on different disk drives, and some may be backed up to tape. The system thus divides the log into small log files, naming each small log file in a way that will make it possible to identify the logs during recovery, and manages the log files during normal operation and recovery. The large abstract log can also be implemented by writing directly to the disk drive without using files from a file system. In this description, we often refer to a single log, with the understanding that the log may be distributed across several files or disks. The log data could be stored on the same disk drive or storage device as the other disk-resident data, or on different disks or storage devices. We distinguish the log file from the other disk-resident data by referring to the log separately from the disk. In some cases, log entries are stored in the same files that contain the other data. The log is a sequence of log entries. A log entry is a sequence of fields. The first field is a single byte called the entry type. The remaining fields depend on the entry type. Every log entry begins and ends with the length, a 64-bit integer field which indicates the length, in bytes, of the log entry. The system can traverse the log in the forward or reverse direction by using the length, since the length field at the end makes it easy, given a log entry, to find the beginning of the previous log entry. Every log entry further includes a checksum which the system examines when reading the log entry to verify that the log entry has not been corrupted. The system defines the following log entry types, which are serialized using similar techniques as for encoding messages. Every log entry begins with an LSN (4722), then includes an entrytype (4723). The system implements the following log entries. - 1. The system logs a checkpoint_begin (4701) when a checkpoint begins. It includes a timestamp (4724) field which records the time that the checkpoint began. - 2. The system logs a checkpoint_end (4702) when it completes a checkpoint. This log type comprises lsn_of_begin (4725), which is the LSN of the checkpoint_begin (4701) entry that was recorded when the checkpoint began, and a timestamp (4724), which records the time that the checkpoint ended. We say that the previous checkpoint_begin (4701) entry corresponds to the checkpoint_end (4702) entry. - 3. The system logs a fassociate (4703) when it opens a file. Also, when the system performs a checkpoint, it records a fassociate (4703) for every open file. 
This log entry comprises a file number filenum (4726) and a file name filename (4729). The system uses the filenum (4726) in other log entries that refer to a file. This log entry further comprises an integer flags (4727) to record information about the file, for example whether the dictionary contained in the file allows duplicate keys.
- 4. The system logs a txnisopen (4704) when a checkpoint starts, for each open transaction. This log entry type records the fact that a particular transaction, identified by transaction_id (2812), is open. This log entry comprises transaction_id (2812), which is the same as the LSN (4722) of the begintxn (4707) log entry that was logged when the transaction was opened. This log entry further comprises another XID, parenttxn (4728), which is the XID of the transaction's parent if the transaction has a parent in a nested transaction hierarchy. If the transaction has no parent, then a special NULL XID is logged in the parenttxn (4728) field.
- 5. The system logs a committxn (4705) when it commits a transaction. This log entry comprises transaction_id (2812), which identifies the transaction that is being committed.
- 6. The system logs an aborttxn (4706) when it aborts a transaction. This log entry comprises transaction_id (2812), which identifies the transaction that is being aborted.
- 7. The system logs a begintxn (4707) when it begins a transaction. The transaction can thereafter be identified by the LSN (4722) value that was logged. This log entry comprises the XID parenttxn (4728) of the parent of the transaction, if the transaction is a child in a nested transaction hierarchy. If there is no parent, then a special NULL XID is logged in the parenttxn (4728) field.
- 8. The system logs a fdelete (4708) when it deletes a file. This log entry comprises transaction_id (2812), which indicates which transaction is performing the deletion. This log entry further comprises a file name filename (4729) indicating which file to delete. If the transaction eventually commits, then this deletion takes effect; otherwise it does not.
- 9. The system logs a fcreate (4709) when it creates a file. This log entry comprises a file name filename (4729), which is the name that the file will be known by when it is operated on in the future; an iname, iname (4730), which is the name of the underlying file in the file system; an integer mode mode (4736), which indicates the permissions with which the file is created (including, but not limited to, whether the file's owner can read or write the file and whether other users can read or write the file); an integer flags flags (4727); an integer descriptor_version (4737); and a byte string descriptor (4738).
- 10. The system logs a fopen (4710) when it opens a file. This log entry comprises a file number filenum (4726), which is used when referring to the file in other log entries; an integer flags (4727) to record information about the file, for example whether the dictionary contained in the file allows duplicate keys; and a file name filename (4729), which names the file being opened.
- 11. The system logs a fclose (4711) when it closes a file. This log entry comprises filenum (4726), flags (4727), and filename (4729), similarly to the log entry for fopen (4710). When traversing the log backwards during recovery the system uses the flags (4727) and filename (4729) to open the file.
- 12.
The system logs an emptytablelock (4712) when it locks a table for a transaction, in the case where the table was created by the transaction or the table was empty when the transaction began. This log entry comprises a transaction_id (2812) and a file number filenum (4726).
- 13. The system logs a pushinsert (4713) when it inserts a key-value pair into a dictionary and, if there is a previous matching key-value pair, the new key-value pair is to overwrite the old one. This record comprises a file number filenum (4726) indicating the dictionary into which the pair is being inserted, transaction_id (2812) indicating the transaction that is inserting the pair, and the pair comprising key (2810) and value (2811).
- 14. The system logs a pushinsertnooverwrite (4714) when it inserts a key-value pair into a dictionary when, if there is a previous matching key-value pair, the new pair should not replace the old one. The fields are similar to those of pushinsert (4713).
- 15. The system logs a pushdeleteboth (4715) when it deletes a key-value pair from a dictionary, where the system is deleting any key-value pair that matches both the key and the value. If no pairs match, then the deletion has no effect. This log entry comprises a filenum (4726), transaction_id (2812), key (2810), and value (2811).
- 16. The system logs a pushdeleteany (4716) when it deletes a key-value pair from a dictionary, where the system is deleting any key-value pair that matches the key. For dictionaries with duplicates, this can result in deleting several pairs if several pairs match. If there are no such pairs, then the deletion has no effect. This log entry comprises a filenum (4726), transaction_id (2812), and key (2810).
- 17. The system logs a pushinsertmultiple (4717) when it inserts key-value pairs into one or more dictionaries, where there is a master key-value pair that can be used to compute the key-value pair to be inserted into each corresponding dictionary. For example, if one dictionary is indexed by first-name and then last-name, and another dictionary is indexed by last-name and then first-name, then the master record might contain both names, and the pairs to be inserted into the respective dictionaries can be derived from the master record. The system uses a descriptor, descriptor (4738), to encode how the derived pairs are computed. This log entry comprises a file number, filenum (4726), which identifies a master dictionary, and a sequence of file numbers, filenums (4731), which respectively identify a sequence of derived dictionaries. This log entry further comprises an XID, transaction_id (2812), and the master key-value pair comprising key (2810) and value (2811).
- 18. The system logs a pushdeletemultiple (4718) when it deletes key-value pairs from one or more dictionaries, in a situation similar to that used by pushinsertmultiple (4717). Deletion from several dictionaries can be specified with a single master key-value pair. This log entry comprises the fields filenum (4726), transaction_id (2812), key (2810), value (2811), and filenums (4731).
- 19. The system logs a comment (4719) when the system writes a byte string to the log, for example to note that the system rebooted at a particular time. Typically the byte string has meaning for the humans who maintain the system, but that is not required. The system also records this type of log entry to align the log end (for example, to a 4096-byte boundary), choosing the comment length to force the desired alignment.
This log entry comprises a time stamp timestamp (4724) and a comment comment (4732), which is a byte string.
- 20. The system logs a load (4720) in some situations when the system performs a bulk load from a data file, including but not limited to files in which rows comprise comma-separated values. In these situations, the system constructs a new dictionary file, and then replaces an old dictionary file with the new one. The system starts with the old dictionary file, and it constructs a new dictionary file without modifying the old one. If the transaction enclosing the bulk load commits, the old file is deleted. If the transaction does not commit, then the system deletes the new file. As part of the load, the system inserts a modified record into the iname-dname dictionary, which is committed or aborted similarly to any other dictionary insertion. Thus, when a transaction commits, the iname-dname dictionary refers to the new file and the old file is deleted; when a transaction aborts, the iname-dname dictionary refers to the old file and the new file is deleted.
- The load (4720) log entry comprises a timestamp (4724), which notes the time at which the load was performed; a filenum (4726), which notes which dictionary is being updated; transaction_id (2812); and two file names, oldfname (4733) and newfname (4734), which specify the old and the new file names respectively.

The system also records other log entries at certain times, for example logging dictionary headers or writing an entire dictionary node into the log.

Alternatively, data is compressed when written to the log. The compression is performed on one or more log entries together. The system assembles an in-RAM array of a sequence of log entries, then compresses them into a block. The compressed block is written to disk as:
- 1. the length of the compressed block,
- 2. the length of the uncompressed data in the block,
- 3. the LSN of the first entry in the block,
- 4. the LSN of the last entry in the block,
- 5. a Boolean indicating whether there is a checkpoint_begin (4701) log entry in the block,
- 6. a Boolean indicating whether there is a checkpoint_end (4702) log entry in the block,
- 7. a Boolean indicating that the compression table was reset when compressing the block,
- 8. the compressed bytes, and
- 9. the compressed length again.

The log file itself further comprises a header that indicates that the file is a log file, which may help the system avoid treating a log as though it were a data file (similarly, the data files also have a header which may help prevent such confusion). The compressed length at the end of the block can help the system read log files backward, by starting at the end of a log file, reading the compressed length, and then skipping back to the beginning of the block. The system employs the Booleans that indicate whether there are checkpoint records in the block to find checkpoint records during recovery without examining or uncompressing blocks that have no checkpoint record.

The system uses a compression library that constructs a table as it compresses data. The table starts out empty, and as more data is compressed, the table grows. The system, when compressing several blocks to the log, does not always reset the table between compressing blocks. The table-reset Boolean indicates whether the system started with a new table when compressing a block, or whether it used the previously accumulated table.
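The nine-field framing listed above can be sketched as a small append routine. This is a minimal illustration rather than the system's actual on-disk format: the fixed-width field types, the raw struct write (which in practice would serialize each field explicitly to avoid padding), and the function names are assumptions.

```c
#include <stdint.h>
#include <unistd.h>

typedef struct {
    uint64_t compressed_len;   /* (1) length of the compressed payload      */
    uint64_t uncompressed_len; /* (2) length of the data before compression */
    uint64_t first_lsn;        /* (3) LSN of the first entry in the block   */
    uint64_t last_lsn;         /* (4) LSN of the last entry in the block    */
    uint8_t  has_ckpt_begin;   /* (5) block contains a checkpoint_begin     */
    uint8_t  has_ckpt_end;     /* (6) block contains a checkpoint_end       */
    uint8_t  table_reset;      /* (7) compression table was reset           */
} block_header;

/* Append one framed block: header fields, the compressed bytes, and the
 * compressed length repeated at the end so that the log can be scanned
 * backward from the end of a file. */
static int append_block(int fd, const block_header *h, const void *payload)
{
    if (write(fd, h, sizeof *h) != (ssize_t)sizeof *h)
        return -1;
    if (write(fd, payload, h->compressed_len) != (ssize_t)h->compressed_len)
        return -1;
    if (write(fd, &h->compressed_len, sizeof h->compressed_len)
            != (ssize_t)sizeof h->compressed_len)
        return -1;
    return 0;
}
```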
The first compressed block in a file has the table-reset Boolean set to TRUE.

To decompress a compressed block of log entries, the system starts at the compressed block and checks whether the table-reset Boolean is TRUE.

Certain operations, including but not limited to committing a transaction that has no parent, comprise logging entries into the log and then synchronizing the log to disk using the fsync system call. The system implements such operations by writing the log entries to an in-RAM data structure, possibly appending them to some previous log entries, compressing the block, writing the compressed block to disk, and then calling fsync. In some conditions the system resets the compression table, and in some conditions it does not. For example, if the compressed block ends up at the beginning of a log file, the system resets the table. If more than one million bytes of data have been compressed since the table was reset, the system resets the table. If the in-RAM data structure exceeds a certain size, the system compresses the data and writes it to the log file as a block. Depending on the situation, the system may or may not perform an fsync or a compression table reset. The system maintains a count of how much compressed data has been written to a log file. After a fixed number of compressed bytes have been written, the system resets the compression table the next time a block is compressed.

The system maintains two in-RAM log buffers. At any given time, one of the log buffers is available to write log entries into. The other log buffer can be idle or busy. When a thread creates a log entry, it appends the log entry into the available log buffer. To write or synchronize the log to disk, a thread waits until the other log buffer is idle. At that point, there may be several threads waiting on the newly idle buffer. One of the threads atomically
- 1. sets the available buffer to be busy,
- 2. sets the idle buffer to be available, and
- 3. resets the newly available buffer so that it is empty,
and then proceeds to compress the busy buffer, write it to disk, and call fsync if necessary. The other threads that were waiting for that buffer to become idle wait until the fsync has completed, at which point their log entries have been written to disk and they continue. In some cases the available log buffer becomes so full that the system forces threads to wait before appending their log entries. In some conditions the system commits several transactions with a single call to fsync.

When the system performs a checkpoint, the system, for each dictionary,
- 1. saves all the dirty blocks of the dictionary to disk, not overwriting blocks saved at the last checkpoint,
- 2. records their locations on the disk in a new BTT,
- 3. saves the new BTT on disk, not overwriting the BTT saved at the last checkpoint,
- 4. saves a new header that points to the new BTT, not overwriting the header saved at the last checkpoint, and
- 5. writes other relevant information in the log.

One thread can perform a checkpoint even when other threads are running concurrently by performing the following steps:
- 1. Write a checkpoint_begin (4701) record.
- 2. Obtain a lock on the buffer pool.
- 3. For each pair in the buffer pool, if the pair is dirty, then set its checkpoint_pending Boolean to TRUE and add the pair to a list of pending pairs; otherwise set its checkpoint_pending flag to FALSE.
- 4.
For each open dictionary,
- (a) copy the dictionary's BTT to a temporary BTT (the TBTT),
- (b) copy the dictionary's header to a temporary header, and
- (c) log the association of the file to its file number using a fassociate (4703) log entry.
- 5. For each transaction that is currently open and has no parent, log the fact that the transaction is open using a txnisopen (4704) log entry.
- 6. Release the lock on the buffer pool.
- 7. Establish a work queue.
- 8. For each pair in the list of pending pairs:
- (a) Wait until the work queue is not overfull.
- (b) Obtain the lock on the buffer pool.
- (c) If the pair's checkpoint_pending is TRUE, then schedule the node to be written to disk by putting the node into the work queue. (The checkpoint_pending could be FALSE because, for example, another thread could have performed a getandpin operation, which would have caused the pending pair to be processed at that time.) The system updates the TBTT as well as the BTT when writing a node to disk.
- (d) Release the lock on the buffer pool.
- 9. Wait for all the writes to complete.
- 10. For each open dictionary:
- (a) Allocate a segment for the dictionary's TBTT, and write it to disk.
- (b) Set the temporary header's BTT to point at the newly allocated TBTT, and write the temporary header to disk.
- 11. Synchronize the disk-resident data to disk using the fsync function.
- 12. Write the checkpoint_end (4702) to the log.
- 13. Synchronize the log to disk using the fsync function.
(A code sketch of the pending-pair pass in steps 3 and 8 appears below.)

The system frees segments when they are no longer in use. A segment is given to the dictionary's segment allocator (3201) for deallocation when the segment is not used in the BTT, the CBTT, or a TBTT, and when the segment is not used to hold the on-disk representation in the header, the checkpointed header, or the temporary header. The system can determine that a segment is no longer in use when it writes a block as follows:
- 1. When writing the block for a block number in the BTT, the system allocates a new segment.
- 2. If the old segment is not used for the same block number in either the CBTT or a TBTT, then the segment can be added to a list of segments to deallocate.
If the system is writing a block for a checkpoint, then it updates both the TBTT and the BTT. In this case the old segments identified in both the TBTT and the BTT can each be added to the list of segments to deallocate if each respective old segment is not used in the CBTT.

When a checkpoint completes, the TBTT becomes the CBTT, and the segments in the old CBTT are candidates for deallocation. The system, for each translated block number, examines the old CBTT, the TBTT, and the BTT to see if the corresponding segment is no longer in use. If so, then it adds that segment to the list of segments to deallocate.

Alternatively, the system could write a node to the log when the node is modified for the first time after a checkpoint. If the underlying data files are copied to a backup system, and then the log files are copied to a backup system, the system could use those copied files to restore the dictionaries to a consistent state.

The system maintains two copies of the dictionary header and two copies of the block translation table. The system maintains the two copies in such a way that they are distant from each other on disk or on separate disks. The system maintains the LSN on each header as well as a checksum on each header. In a quiescent state, the system has written both copies of the header with the same LSN, the same data, and correct checksums.
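Here is the promised minimal sketch of the checkpoint's pending-pair pass (steps 3 and 8 above). The pair and buffer-pool types, the single mutex, and the enqueue_write callback are illustrative assumptions rather than the system's actual interfaces; throttling against an overfull work queue is omitted.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    bool dirty;
    bool checkpoint_pending;
} pair;

typedef struct {
    pthread_mutex_t lock;
    pair           *pairs;    /* all pairs resident in the buffer pool */
    size_t          npairs;
} buffer_pool;

/* Step 3: under the buffer-pool lock, mark each dirty pair as pending and
 * record its index; clean pairs are explicitly marked not pending.
 * `pending` must have room for npairs entries. */
static size_t collect_pending(buffer_pool *bp, size_t *pending)
{
    size_t n = 0;
    pthread_mutex_lock(&bp->lock);
    for (size_t i = 0; i < bp->npairs; i++) {
        bp->pairs[i].checkpoint_pending = bp->pairs[i].dirty;
        if (bp->pairs[i].dirty)
            pending[n++] = i;
    }
    pthread_mutex_unlock(&bp->lock);
    return n;
}

/* Step 8: re-check each pending pair under the lock; another thread (for
 * example one performing a getandpin) may already have written it and
 * cleared the flag, in which case it is skipped.  enqueue_write stands in
 * for putting the node on the checkpoint's work queue, whose writer
 * updates both the TBTT and the BTT. */
static void write_pending(buffer_pool *bp, const size_t *pending, size_t n,
                          void (*enqueue_write)(size_t pair_index))
{
    for (size_t i = 0; i < n; i++) {
        pthread_mutex_lock(&bp->lock);
        if (bp->pairs[pending[i]].checkpoint_pending) {
            bp->pairs[pending[i]].checkpoint_pending = false;
            enqueue_write(pending[i]);
        }
        pthread_mutex_unlock(&bp->lock);
    }
}
```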
When updating the header on disk, the system first checks to see if there are two good headers that have the same LSN (that is, whether the system is in a quiescent state). If they both exist, then the system
- 1. overwrites one header,
- 2. synchronizes the disk with the fsync() system call, and then
- 3. overwrites the other header.
If two good headers exist but they have different LSNs, then the system
- 1. overwrites the older header,
- 2. synchronizes the disk, and then
- 3. overwrites the newer header.
If only one header is good, then the system
- 1. overwrites the bad header,
- 2. synchronizes the disk, and then
- 3. overwrites the other header.
This sequence of steps is called a careful header write.

When opening a dictionary for access, the system reads the two headers, selecting the good one if there is only one good header, and selecting the newer one if there are two good headers. If neither header is good, then the system performs disaster recovery, obtaining a previously backed-up copy of the database and reapplying any operations that have been logged in a log file. Thus, the system has the option of selecting a header from the log, or it can retrieve a header from one of the two copies stored on disk.

Alternatively, the details of the disk synchronization and writes can be changed. For example, in some situations it suffices to perform a careful header write and not write a copy of the header to the log. In some situations it suffices to write the header to the log and not maintain two copies of the header on disk. Another alternative is to write segments to the log device instead of to the disk, so that the snapshot is distributed through the log. Another alternative is to take a “fuzzy snapshot” in which the segments are saved to disk at different times, and enough information is stored in the log to bring the segments into a consistent state.

To start the system after a crash, the system reads the log backwards to find the most recent checkpoint_end (4702) log entry. That log entry includes the LSN of the checkpoint_begin (4701) entry that was written at the beginning of the checkpoint. When a header is being read from a dictionary, if there are two good headers, the system chooses the header that has the LSN matching the beginning of the checkpoint.

When recovering from a crash, the system maintains a state variable and performs the following operations:
- 1. Acquire a file lock (for example, using the flock() system call on Linux and FreeBSD, an fcntl in Solaris, and a _sopen with locking arguments in Windows).
- 2. Delete all the rolltmp files.
- 3. Determine whether recovery is needed. If there are no log files, or there is a “clean” checkpoint (one that had no open transactions while running) at the end of the log file, then recovery is not needed.
- 4. Create an environment for recovery (creating a buffer pool, and initializing the default row comparison and row generation functions).
- 5. Write a message to the error log indicating the time that recovery began.
- 6. Find the last log entry in the log. The system skips empty log files during recovery, and if there is a partial log entry at the end of the last log file, the system skips that. There are many reasons why a log file might be empty or a log entry might be incomplete, including but not limited to the disk having been full when the log entry was being written.
- 7.
Scan backward from the last log entry. For each log entry encountered, do the following operation depending on the log entry:
- (a) checkpoint_begin (4701):
- i. If the system is in the BBCBE (5202) state, then if there were no live transactions recorded, go to the FOCB (5204) state and start scanning forward; otherwise go to the BOCB (5203) state. The system prints a message to the error log indicating that recovery is scanning forward.
- ii. Otherwise continue.
- (b) checkpoint_end (4702): If the system is in the BNCE (5201) state, then go to the BBCBE (5202) state and record the XID of the checkpoint (that is, the LSN of the corresponding checkpoint_begin (4701)).
- (c) fassociate (4703): If the system is in the BBCBE (5202) state, then open the file.
- (d) txnisopen (4704): If the system is in the BBCBE (5202) state, then increment the number of live transactions, and if the XID is less than any previously seen one (or if there is no previously seen one), then remember the XID.
- (e) committxn (4705): Continue.
- (f) aborttxn (4706): Continue.
- (g) begintxn (4707): If the system is in the BOCB (5203) state and the XID of this log entry is equal to the oldest transaction mentioned in a txnisopen (4704) log entry seen in the BBCBE (5202) state, then go to the FOCB (5204) state and start scanning forward.
- (h) fdelete (4708): Continue.
- (i) fcreate (4709): Close the file if it is open.
- (j) fopen (4710): Close the file if it is open.
- (k) fclose (4711): Continue.
- (l) emptytablelock (4712): Continue.
- (m) pushinsert (4713): Continue.
- (n) pushinsertnooverwrite (4714): Continue.
- (o) pushdeleteboth (4715): Continue.
- (p) pushdeleteany (4716): Continue.
- (q) pushinsertmultiple (4717): Continue.
- (r) pushdeletemultiple (4718): Continue.
- (s) comment (4719): Continue.
- (t) load (4720): Continue.
- (u) txndict (4721): Merge the log entries from the identified dictionary into the recovery logs, and process them.
- 8. Scan forward from the point identified above. For each log entry encountered, do the following operation depending on the log entry:
- (a) checkpoint_begin (4701): If the system is in the FOCB (5204) state, then go to the FBCBE (5205) state.
- (b) checkpoint_end (4702): If the system is in the FBCBE (5205) state, then go to the FNCE (5206) state.
- (c) fassociate (4703): Continue.
- (d) txnisopen (4704): Continue.
- (e) committxn (4705): If the transaction is open, then execute the commit actions for the transaction, and destroy the transaction.
- (f) aborttxn (4706): If the transaction is open, then execute the abort actions for the transaction, and destroy the transaction.
- (g) begintxn (4707): Create a transaction.
- (h) fdelete (4708): If the file exists and the identified transaction is active, then create a commit action that will delete the file when the transaction commits.
- (i) fcreate (4709): If the system is not in the FOCB (5204) state, then unlink the underlying file from the file system (if the file exists) and create a new one, updating the iname-dname dictionary.
- (j) fopen (4710): Open the file.
- (k) fclose (4711): If the file is open, then close it.
- (l) emptytablelock (4712): If the file is open, then obtain a table lock on the file.
- (m) pushinsert (4713), pushinsertnooverwrite (4714), pushdeleteboth (4715), pushdeleteany (4716): If the transaction exists and the file is open, then perform the identified insertion or deletion as follows. Establish commit and abort actions for the operation.
If the LSN of the dictionary is older than the LSN of this log entry, then push the operation's message into the dictionary.
- (n) pushinsertmultiple (4717), pushdeletemultiple (4718): If the transaction exists, then generate each required row and, for each generated row, perform the actions that would have been done if that row were found in a pushinsert (4713) or pushdeleteany (4716) message.
- (o) comment (4719): Continue.
- (p) load (4720): If the transaction exists, then establish a commit action to delete the old file, and an abort action to delete the new file.
- (q) txndict (4721):
- 9. Clean up the recovery environment by closing the dictionaries.
- 10. Release the file lock.
When the end of the log has been reached, the system performs a checkpoint and has recovered from the crash. Once every 1000 log entries, the system prints a status message to the error log indicating progress scanning backward or forward.

The list of segments to deallocate is maintained until the data file is synchronized to disk with an fsync, after which the system deallocates unneeded segments and the disk space can be used again. A segment is kept if any of the following hold:
- 1. the new segment has not been written to disk,
- 2. the BTT has not been updated on disk, or
- 3. the segment is needed to represent some active version of the dictionary.
There may be other reasons to keep segments. For example, during backup, old segments are kept in an allocated state until the backup completes.

The system trims unneeded log files by deleting the files that are no longer needed. A log file is needed if
- 1. the log file contains the checkpoint_begin (4701) corresponding to the most recently logged checkpoint_end (4702),
- 2. some uncompleted transaction has a log entry in the log file, or
- 3. an older log file is needed.
There may be other reasons that a log file is needed. For example, during backup, all the log entries that existed at the beginning of the backup are kept until the backup completes. After a log file is deleted, the system can reuse the storage space for other purposes, including but not limited to writing more log files or writing dictionary data files.

In one mode of operation, the system, for each dictionary modified by a transaction, allocates a segment in the dictionary. Log entries that mention a file number are logged in the segment of the dictionary corresponding to the file number instead of in the log. An additional txndict (4721) log entry is recorded after the checkpoint_begin (4701) and before the checkpoint_end (4702) to note the existence of this segment. The txndict (4721) entry records the XID of the relevant transaction in transaction_id (2812); the filenum (4726), which denotes which file contains the segment; and the blocknum (404), which denotes which block contains the segment, the block number being translated using the BTT to identify where in the file the segment is stored. In this mode, all information needed for recovery can be found in log entries subsequent to the checkpoint_begin (4701) corresponding to the most recent checkpoint_end (4702).

Lock Tree

The system employs a data structure called a lock tree to provide isolation between different transactions. The lock tree implements row-level locks on single rows and on ranges of rows in each dictionary. A lock is said to cover a row if the lock is a lock on that row or on a range that includes that row. In some situations the system employs exclusive locks, and in some situations the system employs reader-writer locks.
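A minimal sketch of the "covers" relation just described: a lock on a single row or on a closed range of rows covers a row exactly when the row lies inside the locked interval. The point and range representations, the infinite-endpoint flags, and the comparison callback are illustrative assumptions.

```c
#include <stdbool.h>

typedef struct { const void *key; } point;

/* cmp returns <0, 0, or >0, like the dictionary's comparison function. */
typedef int (*cmp_fn)(const point *a, const point *b);

typedef struct {
    point left, right;       /* closed interval [left, right]            */
    bool  left_is_neg_inf;   /* left endpoint is the special point -inf  */
    bool  right_is_pos_inf;  /* right endpoint is the special point +inf */
} row_lock;

/* A single-row lock is represented as a range with left == right. */
static bool lock_covers(const row_lock *l, const point *row, cmp_fn cmp)
{
    bool at_or_after_left   = l->left_is_neg_inf  || cmp(&l->left, row) <= 0;
    bool at_or_before_right = l->right_is_pos_inf || cmp(row, &l->right) <= 0;
    return at_or_after_left && at_or_before_right;
}
```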
In the system, only one transaction can hold a writer lock that covers a particular row, and if there is such a transaction, then no reader locks may be held that cover that row. Multiple reader locks may be held by different transactions on the same row at the same time.

Transactions read and write key-data pairs. For the purpose of locking, we refer here to those key-data pairs as points. For a DUP database, a point can be identified by a key-value pair. For a NODUP database, the key alone is enough to identify a point. In either case, a point corresponds to a single pair in the dictionary. The locking system defines two special points, called ‘∞’ and ‘−∞’. These two special points are values that are not seen by the user of the locking system. Points can be compared by a user-defined comparison function, which is the same function used to compare pairs in the dictionary. A transaction t holds a lock on zero, one, or more points.

For example, when providing serializable isolation semantics, if a transaction performs a query, and the transaction doesn't change any rows, then the transaction can perform the same query again and get the same answer. In one mode of operation, the transaction acquires reader locks on at least all the rows it reads so that another transaction cannot change any of those rows. For example, in some isolation modes, if a transaction performs a query to “retrieve the smallest element of a dictionary” and obtains P, the system acquires a reader lock on the range [−∞,P], even though the query only actually read P. This prevents a separate transaction from inserting a point P2<P before the first transaction finishes, which would violate the isolation property, because if the first transaction were to ask again for the smallest element, it would get P2 instead of P.

As this example indicates, a transaction acquires locks on ranges of points. In this document, when we say “range,” we mean a closed interval. A range of points is a set identified by its endpoints x and y, where x≤y. When x=y, the set has cardinality one. Otherwise, the set may contain a finite or infinite number of values. The system treats both −∞ and ∞ as possible endpoints of ranges.

For each transaction and each database, the lock tree maintains a set of closed ranges that have been read (the read set) and a set of points (which are 1-point ranges) that have been written (the write set). Ranges in the read set represent both points that have been read and points that needed to be locked to ensure proper isolation. In some situations, the system escalates locks, so the write set can sometimes contain ranges that are not single points. If a transaction holds locks on two ranges [a,b] and [c,d], where a≤b≤c≤d, and no other transaction holds conflicting locks in the range [a,d], the system may replace the two ranges with the larger range [a,d]. The system may escalate locks in this way in order to save memory, or for other reasons, including but not limited to speeding up operations on the locks.

The lock tree can determine whether the read set of one transaction intersects the write set of another transaction, and whether the write sets of two transactions intersect. If there are any such intersections, then the lock tree is conflicting. The lock tree operates as follows:
- 1. The system attempts to add a set of points to a read or write set. The added set can be either a single point added to the write set or a closed range added to the read set.
- 2. If the resulting lock tree would be conflicting, the set is not added.
Instead an error is returned. If the resulting lock tree is not in conflict, then the lock tree is updated and the addition is successful. When a transaction completes, it releases all the locks it holds.

A lock tree comprises a set of range trees. There may be zero, one, or more range trees. A range tree maintains a set of ranges and, for each range, an associated data value. Specifically, a range tree S maintains a finite set of distinct pairs (I,T), where I is a range and T is an associated data item.

The system categorizes range trees into four groups: range trees are considered either overlapping or non-overlapping, and, independently, range trees are considered homogeneous or heterogeneous. In a non-overlapping range tree, the ranges do not overlap. Ranges in an overlapping range tree sometimes overlap. Ranges in a homogeneous range tree all have the same associated data item. The system uses homogeneous range trees to store ranges all locked by the same transaction. Ranges in a heterogeneous range tree may store the same or different associated data items for different ranges. The system uses heterogeneous range trees to store ranges that can be locked by multiple transactions.

The system can perform the following operations on range trees:
- 1. FINDALLOVERLAPS(S,I) returns all pairs in S whose ranges overlap a given range I.
- 2. FINDOVERLAPS(S,I,k) returns all K pairs from range tree S whose ranges overlap range I, unless K>k, in which case the function returns only k of these pairs, arbitrarily chosen.
- 3. INSERT(S,I,T) inserts a new pair (I,T) into S.
- 4. DELETE(S,I,T) removes range I with associated data item T from S, if such a pair exists.

Non-overlapping ranges can be ordered, which therefore induces a total order on the pairs in a non-overlapping range tree. The system defines [a,b]<[c,d] if and only if b<c. This ordering function also defines a partial order on arbitrary ranges, even those that overlap. There is also a partial order on points and ranges: the system defines a<[b,c] if and only if a<b, and [b,c]<a if and only if c<a. The system performs the following additional operations on non-overlapping range trees:
- 1. PREDECESSOR(S,P) returns the greatest range in S that is less than P.
- 2. SUCCESSOR(S,P) returns the least range in S that is greater than P.

The non-overlapping range tree can be implemented using a search data structure, including but not limited to an OMT, a red-black tree, an AVL tree, or a PMA. Non-overlapping range trees can also be implemented using other data structures, including but not limited to sorted arrays or non-balanced search trees. In the search tree, the system stores the endpoints of all ranges, and an indication on each endpoint of whether it is a right or a left endpoint. The overlapping range tree can also be implemented using a search tree, where some additional information is stored in the internal nodes of the tree. The system stores the intervals in a binary search tree, ordered by left endpoint, and in every node of the tree the system stores the value of the maximum right endpoint stored in the subtree rooted at that node.

For the purpose of the lock tree, each database is handled independently, so we can describe the representation as though there is only one database. The system employs a collection of zero or more range trees to represent a lock tree. The ranges represent regions of key space or key-value space that are locked by a transaction. The lock tree comprises:
- 1. for each pending transaction t,
- (a) a LOCALREADSET range tree Rt, and
- (b) a LOCALWRITESET range tree Wt;
- 2. a GLOBALREADSET range tree GR; and
- 3. a BORDERWRITE range tree B.
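Before describing the individual trees, here is a minimal sketch of a non-overlapping range tree backed by a sorted array (one of the representations mentioned above) with a FINDOVERLAPS-style query. The struct layout, integer endpoints, and the linear scan are illustrative assumptions; a production version would use one of the balanced structures listed above and a binary search.

```c
#include <stddef.h>

typedef struct {
    long lo, hi;   /* closed interval [lo, hi]           */
    int  txn;      /* associated data item (transaction) */
} range;

typedef struct {
    range  *r;     /* ranges sorted by lo; they do not overlap */
    size_t  n;
} range_tree;

/* FINDOVERLAPS(S, [lo,hi], k): copy up to k overlapping pairs into out
 * and return how many were found (capped at k). */
static size_t find_overlaps(const range_tree *s, long lo, long hi,
                            range *out, size_t k)
{
    size_t found = 0;
    for (size_t i = 0; i < s->n && found < k; i++) {
        if (s->r[i].lo > hi)       /* sorted by lo: nothing later can overlap */
            break;
        if (s->r[i].hi >= lo)      /* [lo,hi] and r[i] intersect */
            out[found++] = s->r[i];
    }
    return found;
}
```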
Each Rt comprises a homogeneous non-overlapping range tree. The system employs Rt to maintain the read set for transaction t; the presence of a pair (I,t) in Rt indicates that transaction t holds read locks on the points in range I.

Each Wt comprises a homogeneous non-overlapping range tree. The system employs Wt to maintain the write set for transaction t; the presence of a pair (I,t) in Wt indicates that transaction t holds write locks on the points in range I.

GR comprises a heterogeneous overlapping range tree that maintains the union of all read sets. The system employs range tree GR to contain information that can, in principle, be calculated from the LOCALREADSET trees.

B comprises a heterogeneous non-overlapping range tree. The system employs B to hold maximal ranges ([x,y],t) with the following properties:
- 1. Transaction t holds locks on points x and y, and all points in the range [x,y] are either locked by transaction t or are unlocked.
- 2. The largest locked point less than x (if one exists) and the smallest locked point greater than y (if one exists) are locked by transactions other than t.
In principle, all the information in the BORDERWRITE tree can be calculated from the LOCALWRITESET trees.

The system performs range consolidation on some insertions, meaning that when a transaction T locks two overlapping ranges X and Y, the system replaces those two ranges with a single combined range X∪Y. If ranges are consolidated, then all distinct ranges stored in a range tree for the same transaction are nonoverlapping. Range consolidation is implemented in a homogeneous range tree by finding, before inserting a range, the existing ranges that overlap it and replacing them with the single combined range. In a heterogeneous range tree, range consolidation is similar, except that the system checks that only ranges corresponding to the same T are consolidated. One way to maintain range consolidation on a heterogeneous range tree is to maintain separate (homogeneous) range trees for each associated T. The system uses GR in this fashion. The system identifies which intervals to consolidate in the heterogeneous range tree GR by first doing range consolidation on the homogeneous range tree RT.

We say that an interval I (or a point P) meets a range tree if one of the intervals stored in the range tree overlaps I (or P). We say that an interval I (or point P) meets a range tree at T if I (or P) overlaps an interval in the range tree associated with T. We say that an interval I (or point P) is dominated by a range tree if the interval I is entirely contained in one of the intervals stored in the range tree.

The system employs the lock tree to answer queries about whether an interval I meets or is dominated by a range tree, and at what transaction. The system implements those queries using the procedure FINDOVERLAPS:
- 1. Does an interval I meet a range tree S? The system uses a FINDOVERLAPS query with k=1.
- 2. Given a point P, a transaction T, and a range tree S, does the point P meet the range tree S at a transaction different from T? The system uses a FINDOVERLAPS query with k=2.
- 3. Given an interval I, a transaction T, and a range tree S, does more than one interval in S overlap I? If so, return “more than one overlap.” Otherwise, if exactly one interval overlaps and its associated transaction is different from T, then return the name T′ of that transaction. Otherwise return “ok”. The system performs this three-way test using a FINDOVERLAPS query with k=2.
- 4. Given an interval I and a range tree S, does S dominate I? The system uses a FINDOVERLAPS query with k=2, taking advantage of range consolidation.
In more detail, the lock tree operates as follows.
- 1.
For transaction T to acquire a read lock on a closed range I:
- (a) If I is dominated by WT, then return success.
- (b) Else if I is dominated by RT, then return success.
- (c) Else if I meets the BORDERWRITE tree B at a transaction T2≠T and I meets the write set WT2, then return failure.
- (d) Else insert the pair (I,T) into RT and into GR (performing range consolidation), and return success.
- 2. For transaction T to acquire a write lock on point P:
- (a) If P is dominated by WT, then return success.
- (b) Else if P meets GR at a transaction T2≠T, then return failure.
- (c) Else if P meets B at T2≠T and P meets WT2, then return failure.
- (d) Else insert P into WT, update the BORDERWRITE tree B to include the point for T as described below, and return success.
- 3. For transaction T to release all of its locks (which happens when the transaction commits or aborts):
- (a) Release the read set.
- i. For each range I∈RT:
- A. Delete the pair (I,T) from the GLOBALREADSET tree GR.
- ii. Delete the entire LOCALREADSET tree RT for transaction T.
- (b) Release the write set.
- i. For each range I∈LOCALWRITESET WT:
- A. If I meets the BORDERWRITE tree B at T, then update the BORDERWRITE tree B to exclude it, as described below.
- ii. Delete the entire LOCALWRITESET tree WT for transaction T.

To update the BORDERWRITE tree B to include a point P locked by transaction t (in what follows, X^L and X^H denote the low and high endpoints of a range X, and I=[P,P]):
- 1. Run a FINDOVERLAPS(B,I,k=1) query to retrieve the set F. Because k=1, either F is empty or F contains a single pair (IF,TF).
- 2. If |F|=1 and TF=t, then return success.
- 3. Else if |F|=1 (and TF=f for some transaction f≠t):
- (a) Remove the overlapping range from B: DELETE(B,IF,TF).
- (b) Split IF into two ranges for transaction f as follows:
- i. Run STRICTSUCCESSOR(WF,P) to retrieve the successor range IS.
- ii. Run STRICTPREDECESSOR(WF,P) to retrieve the predecessor range IP.
- iii. Insert the lower end of the split range into B as INSERT(B,[IF^L,IP^H],f).
- iv. Insert the upper end of the split range into B as INSERT(B,[IS^L,IF^H],f).
- (c) Insert the new range into the BORDERWRITE tree as INSERT(B,I,t).
- (d) Return success.
- 4. Else (|F|=0):
- (a) Extend I if necessary:
- i. Run STRICTSUCCESSOR(B,P) to retrieve the successor range IS and its transaction t2.
- ii. If a successor is found and t=t2, then extend I to include IS in the BORDERWRITE tree:
- A. Remove the successor range from B as DELETE(B,IS,t).
- B. Insert an extended range covering both I and IS as INSERT(B,[P,IS^H],t).
- C. Return success.
- iii. Run STRICTPREDECESSOR(B,P) to retrieve the predecessor range IP and its transaction t3.
- iv. If a predecessor is found and t=t3, then extend I to include IP in the BORDERWRITE tree:
- A. Remove the predecessor range from B as DELETE(B,IP,t).
- B. Insert an extended range covering both I and IP as INSERT(B,[IP^L,P],t).
- C. Return success.
- (b) Insert I into B as INSERT(B,I,t).
- (c) Return success.

To update the BORDERWRITE tree B to exclude a point P released by transaction t:
- 1. Let I=[P,P].
- 2. Run a FINDOVERLAPS(B,I,k=1) query to retrieve the set F, which is either empty or contains a single pair.
- 3. If |F|=0, return success.
- 4. Else if |F|=1 and the overlapping range is associated with a transaction other than t, return success.
- 5. Else (|F|=1 and the overlapping range is associated with t):
- (a) Remove the overlapping range from B as DELETE(B,I,t).
- (b) Run STRICTSUCCESSOR(B,P) to retrieve the successor range IS and its transaction t2.
- (c) Run STRICTPREDECESSOR(B,P) to retrieve the predecessor range IP and its transaction t3.
- (d) If a predecessor is found and a successor is found and t2=t3, then merge IS, IP, and the set of points between them as follows:
- i. Remove the successor range from B as DELETE(B,IS,t2).
- ii. Remove the predecessor range from B as DELETE(B,IP,t2).
- iii. Insert the extended range as INSERT(B,[IP^L,IS^H],t2).
- (e) Return success.

The system escalates locks when running short on memory to hold the lock table. To escalate locks, the system finds one or more adjacent ranges from the same transaction and merges them. If no such ranges can be found, then the system allocates more memory to the lock table, and may remove memory allocated to other data structures, including but not limited to the buffer pool.

To implement serializable transactions:
- 1. When inserting a pair, the system obtains a write lock on the pair.
If the lock is obtained, the pair is inserted.
- 2. When looking up a pair, the system obtains a read lock on the pair. If the lock is obtained, the pair is looked up in the dictionary.
- 3. When querying to find the smallest pair q greater than or equal to a particular pair p, the system performs the following:
- (a) Search to find q.
- (b) If no such value exists, then find the largest value r in the dictionary, and lock the range [r,∞]. If the lock cannot be obtained, then the query fails. Otherwise the query succeeds, returning an indication that there is no such value.
- (c) If p=q, then the system acquires a read lock on [q,q]. If the lock is obtained, the query succeeds, else it fails.
- (d) Else search to find the successor s of q. If such a successor exists, then lock [q,s]; otherwise lock [q,∞]. If the lock cannot be acquired, then the query fails; otherwise the query succeeds.
- 4. When querying to find the successor q of p, where p has already been returned by a previous search (for example in a cursor NEXT operation), the system performs the following:
- (a) Search to find q. If no successor exists, then let q=∞.
- (b) Lock the range [p,q].
The system also performs other queries, including but not limited to finding the greatest pair less than or equal to a given value, and finding the predecessor of a value.

Alternatively, instead of failing when a lock conflict is detected, the system could perform another action. For example, the system could retry several times, or the system could retry immediately, wait some time, retry again, wait a longer time, and retry again, eventually timing out and failing. Or the system could simply wait indefinitely for the conflicting lock to be released, in which case the system may employ a deadlock detection computation to kill one or more of the transactions that are deadlocked.

The system also provides other isolation levels. For example, to implement a read-committed isolation level, the system acquires read locks on selected data but releases them immediately, whereas write locks are released at the end of the transaction. For read uncommitted, read locks are not obtained at all. In another mode, the system implements read-committed isolation by reading the committed transaction record from a leaf entry (described below), and implements read-uncommitted by reading the most deeply nested transaction record from a leaf entry, in both cases without obtaining a read lock. For repeatable read isolation levels, instead of locking ranges, the system can lock only those points that are actually read. For snapshot isolation the system can keep multiple versions of each pair instead of using locks, and return the proper version of the pair in response to a query.

Transaction Commit and Abort

When a transaction commits or aborts, the system performs cleanup operations to finish the transaction. If a transaction commits, the cleanup operations cause the transaction's changes to take permanent effect. If a transaction aborts, the system undoes the operations of the transaction in a process called rollback. The system implements these transaction-finishing operations by maintaining a list of operations performed by the transaction. This list is called the rolltmp log. For example, each time the system pushes an insert message (2801) into the dictionary, it remembers that. If the transaction aborts, then an abort_both (2808) is inserted into the dictionary to clean up. If the transaction commits, then a commit_both (2806) is inserted.
For each operation, the system stores enough information in the rolltmp log so that the proper cleanup operations can be performed on abort or commit. In the case where the system crashes before a transaction commits, during recovery the transactions are created and a rolltmp log is recreated. When recovery completes, if there are any incomplete transactions, then recovery aborts those transactions, executing the proper cleanup actions from the rolltmp log.

Error Messages, Acknowledgments, and Feedback

The system can return acknowledgments and error messages depending on the specific settings in the dictionary. For example, an insertion can return a status Boolean indicating whether a pair with the given key was already present in the dictionary. One way to determine such a status Boolean is to perform an implicit search when performing the insertion. In another mode the system returns these status Booleans by filtering out some of the search operations using a smaller dictionary, or an approximate dictionary, that can fit within RAM, thus avoiding a full Search(k).

The system uses ten different filters that store information about which keys are in the streaming dictionary. Alternatively, the system could use a different number of filters. Each filter is implemented using a hash table H of bits. Denote the hash function as h(x). Suppose that there are N keys. Then the filter stores Θ(N) bits, where the number of bits is always at least 2N, and H[t]=1 if and only if there exists a key k stored in the dictionary such that h(k)=t. This filter exhibits one-sided error. That is, the filter may indicate that a key k is stored in the dictionary when, in fact, it is not. However, if the filter indicates that a key k is not in the dictionary, then the key is absent. Each filter has a constant error probability. Suppose that the error probability is ½. Then the probability that all 10 filters are in error is at most 2^-10. The total space consumption for all filters can be less than 32 bits per element, which will often be more than one or two orders of magnitude smaller than the total size of the dictionary. Observe that this specification uses a variation on the filter that supports deletions. One such variation is called a counting filter.

If, for a given key, all filters say that the key may be in the dictionary, then the system searches for it to determine whether it is. If one or more filters say that it is not in the dictionary, then the system does not search for it: even if a single filter of the ten indicates that a key k is not in the dictionary, it is not necessary to search in the actual dictionary. Thus, the probability of searching in the dictionary when the key is not present is approximately 2^-10. Thus, the cost to insert a new key not currently in the dictionary can be reduced by an arbitrary amount by adding more RAM, to well below one disk seek per insertion. The cost to insert a key already in the dictionary still involves a full search, and thus costs Ω(1) memory transfers. In some situations, the system makes all insertion operations give feedback in o(1) memory transfers by storing cryptographic fingerprints of the keys in a hash table. The data structure uses under 100 bits per key, which is often orders of magnitude smaller than the size of the streaming B-tree.

As an example, the first table (1505), T1, has hash function h1(x), which hashes the four values in the tree as follows: h1(a)=5, h1(baab)=9, h1(bb)=9, h1(bbbba)=1, and hashes the two new values as follows: h1(aa)=5, h1(bba)=9.
The second table (1506), T2, has hash function h2(x), which hashes the four values in the tree as follows: h2(a)=8, h2(baab)=0, h2(bb)=6, h2(bbbba)=3, and hashes the two new values as follows: h2(aa)=7, h2(bba)=8. The last table (1507), T10, has hash function h10(x), which hashes the four values in the tree as follows: h10(a)=0, h10(baab)=9, h10(bb)=7, h10(bbbba)=5, and hashes the two new values as follows: h10(aa)=3, h10(bba)=9. In all tables, hash marks indicate that an element is hashed to that array position (1508).

Upon insertion of a key, the data structure returns whether that key already exists in the tree or not. In this example, the two keys (1504) to be inserted in the tree are aa and bba, and neither one already exists. Inserting aa does not require a search in the tree because T2[h2(aa)]=T2[7]=0, as shown at (1509), meaning that aa cannot already be stored in the dictionary. In contrast, determining whether bba is in the dictionary requires a search, because Ti[hi(bba)]=1 for every hash table i, as shown at (1510).

Alternatively, other feedback messages can be returned to the user. For example, one could give feedback to the user that is approximate or has a probability of error. Alternatively, there are other parameter settings that can be chosen. For example, the sizes and number of approximate dictionaries could vary. Alternatively, other compact dictionaries and approximate dictionaries can be used. For example, one can use other filter and hash-table alternatives. Alternatively, there are other ways to return error messages and acknowledgments to users without an immediate full search in many cases. For example, the feedback can be returned with some delay, for example after inserted messages have reached the leaves. Another example is that after a load has completed, an explicit or implicit flush can be performed (an implicit flush, say, by a range query) to ensure that all messages have reached the leaves, and all acknowledgments or error messages have been returned to the user.

Concurrent Streaming Dictionaries

The system provides support for concurrent operations. The system allows one or more processes and/or processors to access the system's data structures at the same time. Users of the system may configure the system with many disks, processors, memory, processes, and other resources. In some cases the system can add these resources while the system is running.

The system, when a message M(k,z) is added to the data structure, does not necessarily insert it into the root node u. Instead, M(k,z) is inserted into a deeper node v on M(k,z)'s root-to-leaf path, where v is paged into RAM. This “aggressive promotion” can mitigate or avoid a concentrated hot spot at the top of the tree. When a message M(k,z) is inserted into the data structure, there is a choice of many first nodes in which to store M(k,z). Moreover, the system's data structure automatically adapts to the insertion and access patterns as the shape of the part of the tree that is stored in RAM changes.

Several examples help explain this adaptivity. At a particular time, some of the nodes in the tree that are closest to the root are paged into memory. The part of the tree that is paged into memory is indicated by hash marks (1602). In this figure the paged-in part of the tree is nearly balanced. Messages are inserted into the leaves (1603) of the part of the tree that is kept in main memory.
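The aggressive-promotion idea described above can be sketched as follows. This is an illustrative fragment, not the system's code: the node layout, the toy integer keys, and the helper names are assumptions, and appending the message to the chosen node's buffer (under that node's write lock) is omitted.

```c
#include <stdbool.h>
#include <stddef.h>

#define FANOUT 16

typedef struct node {
    bool          resident;            /* paged into RAM?             */
    int           nkeys;               /* number of pivot keys        */
    long          pivots[FANOUT - 1];  /* pivot keys (toy long keys)  */
    struct node  *child[FANOUT];       /* NULL at leaves              */
    /* per-node message buffer omitted */
} node;

/* Route a key to the child it belongs to, using the pivots. */
static node *child_for_key(node *n, long key)
{
    int i = 0;
    while (i < n->nkeys && key > n->pivots[i])
        i++;
    return n->child[i];
}

/* Aggressive promotion: choose the deepest node on the key's
 * root-to-leaf path that is already resident in RAM, rather than always
 * buffering the message in the root.  Spreading insertions across these
 * locally deepest nodes reduces write-lock contention at the root. */
static node *aggressive_target(node *root, long key)
{
    node *n = root;
    for (;;) {
        node *c = child_for_key(n, key);
        if (c == NULL || !c->resident)   /* descending further would fault */
            return n;
        n = c;
    }
}
```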
In another example, the top part of the tree that is paged into memory is skewed towards the beginning of the database. This part of the tree is indicated by hash marks (1703). Thus, this top part of the tree will be deep on leftward branches and shallow on rightward branches, so that, again, the paging system will adaptively diffuse what would otherwise be an insertion hotspot. As before, the vertical lines (1704) emanating from the root represent insert paths in the tree, and the locally deepest nodes paged into memory are represented by rectangles (1705). The messages will be inserted into these locally deepest nodes (1705).

The system obtains a write lock on a node when it inserts data into the node, and so by inserting into different nodes, the system can reduce contention for the lock on a given node.

Alternatively, there are other ways to achieve concurrency through adaptivity. For example, if a tree node is a hot spot, the system could explicitly choose to flush the buffers in the node and bring the children into RAM, if doing so reduces the contention on that node. Also, the system may choose to deal with a given node differently, depending on whether it is clean or dirty. Alternatively, there are other ways of using aggressive promotion to help achieve a highly concurrent dictionary. For example, one could use aggressive promotion for a non-tree-based streaming dictionary, such as a cache-oblivious lookahead array, to avoid insertion bottlenecks. Alternatively, there are other ways of avoiding bottlenecks and achieving high concurrency. For example, one could use a type of data structure with a graph structure having multiple entrances into the graph, e.g., a tree with multiple roots, or roots and some of their descendants, or a modification of a skip graph. For example, one may replace the top Θ(log log N) levels of the tree or other data structure with a skip graph. This would reduce contention without changing the asymptotic behavior of the dictionary. Alternatively, additional concurrency can be achieved by having multiple disks. For example, one could use striping across all disks to make effectively bigger tree blocks. Alternatively, one could divide up the search space according to keys so that different keys are stored on different disks.

DUP and NODUP

The system can handle both NODUP and DUP dictionaries.
- 1. No duplicate keys allowed (NODUP). This means that no two key-value pairs that are stored in the dictionary at the same time can have keys that compare as identical.
- 2. Duplicate keys allowed (DUP). This means that two key-value pairs that are stored in the dictionary at the same time are allowed to have keys that compare as identical, but when the keys compare as identical, the associated values must not compare as identical.
Duplicates are stored logically in sorted order. Specifically, key-value pairs are first sorted by key, and all elements with the same key are sorted in order by value. The following are examples of functions that are supported with duplicate keys.
- 1. INSERT(k,v): Inserts a key-value pair (k,v). If there already exists a key-value pair (k′,v′), where k′=k and v′=v, then there are several choices depending on how flags are set. Either (k′,v′) is overwritten or it is not. In either case, a call-back function may be called. Although v and v′ are compared as equal, their values considered as byte strings may be different.
- 2. DELETE(k): Deletes a key k. In this case, all key-value pairs (k′,v) such that k′=k are deleted.
- 3.
DELETE(k,v): Deletes a key-value pair (k,v). Any key-value pair (k′,v′) in the dictionary with k′=k and v′=v is deleted.
- 4. Cursor delete. The key-value pair that the cursor points to is deleted.
- 5. Cursor replace with v′. If the key-value pair (k,v) that is pointed to by the cursor has v′=v, then it is replaced with (k,v′).
- 6. Search for a particular key k. The first or last key-value pair (k′,v), where k′=k, is returned (if one exists) for one setting of flags. For another setting of flags, the search returns a cursor that points to (k′,v).
- 7. Search for a particular key-value pair (k,v). If a key-value pair (k′,v′) is in the dictionary, such that k′=k and v′=v, then return (k′,v′).
- 8. Find the predecessor or successor of a key k or of a key-value pair (k,v), if it exists. The search could also find a predecessor or successor key-value pair, if it exists.

In one mode the system employs PMAs that operate in a DUP or a NODUP mode. For example, when duplicate nodes are inserted into a PMA, they are put in the appropriate place in the PMA, as defined by the ordering of pairs. In one mode the system employs hash tables that operate in a DUP or a NODUP mode. In a NODUP mode, the hash tables store messages. In a DUP mode, the system employs an extra level of indirection in the hash tables, storing doubly-linked lists of messages. Messages are hashed by key k, and all messages associated with the same key k are stored in the same doubly-linked list. The hash function used maps keys k and k′ to the same bucket if k=k′. In DUP mode the system allocates a hash table with a number of buckets proportional to the number of distinct key equivalence classes. In another mode, the system uses a hash table in DUP mode in which the system hashes both the key and the value.

The system stores key-value pairs in search trees. In a search tree, the system employs pivot keys that comprise keys in a NODUP mode and that comprise key-value pairs in a DUP mode. In DUP mode, the subtrees to the left of a pivot key contain pairs that are less than or equal to the pivot key, and the subtrees to the right of the pivot key contain pairs that are greater than or equal to the pivot key. The nodes of the tree further comprise two additional Booleans, called equality bits. The equality bits indicate whether there exist any equal keys to the left and to the right of the pivot, respectively. To search, the system uses both the pivots and the equality bits to determine which branch to follow to find the minimum or maximum key-value pair for a given key. When a delete message is flushed from one buffer, the message is sent to all children that may have a matching key, and all the duplicates are removed. For a cursor delete, the system deletes the item that is indicated by the cursor. To insert, the system can use both the key and the value to determine the correct place to insert key-value pairs.

In one mode the system handles duplicates with identical values, called DUPDUP pairs. In DUPDUP mode, when a key-value pair is inserted and that key-value pair is a DUPDUP of another key-value pair in the dictionary, there are one or more cases for what can happen, depending on how flags are set. For example:
- 1. Overwrite: one DUPDUP pair overwrites a previous one.
- 2. No overwrite: one DUPDUP pair does not overwrite a previous one; instead the previous one is kept, and the new one is discarded.
- 3. Keep: both pairs are kept.
Alternatively, there are other ways of storing DUP and DUPDUP pairs.
For example, duplicates could be stored in sorted order according to the time that they were inserted or they could be stored in an arbitrary order. For example, if the size of two rows with the same key is different, then a larger or smaller row might be pushed in preference to the other. Alternatively, these other orders can be maintained with minor modifications to the system described here. For example, to store pairs in sorted order based on insertion time, add a time stamp, in addition to the key and the value, and sort first by key, then by time stamp, and then by value, thereby organizing duplicate duplicates for storage. Other types of unique identifiers, time stamps, and very minor modifications to the search function also can be used in other ways of storing duplicates. Multiple Disks The system can use one or many disks to store data. In one mode the system partitions the key space among many disks. Which disk stores a particular key-value pairs depends on which disk (or disks) is responsible for that part of the key space. This scaling is achieved partially through a partition layer in the system. The partition layer determines which key-value pairs get stored on which disks. The partition layer uses compact partitioning, or partitioning for short. In compact partitioning, the key space is divided lexicographically. For example, if there are 26 processor-disk systems and the keys being stored are uniformly distributed starting with letters ‘A’-‘Z’, then the first processor-disk could contain all the keys starting with ‘A’, the second could contain the keys starting with ‘B’, and so forth. In this example, the keys are uniformly distributed. We describe here compact partitioning schemes that are designed to work efficiently even when the keys are not distributed uniformly. In one mode the system employs PMA-based compact partitioning. In this mode the key space is partitioned lexicographically, assigning each partition to one disk cluster. Recall that a PMA is an array of size Θ(N), which dynamically maintains N elements (key-value pairs) in sorted order. The elements are kept approximately evenly spaced with gaps. The system establishes a total order on the disks compatible with the dictionary, meaning that if disk A is before disk B in the total order, then all elements (key-value pairs) stored on disk A are lexicographically before all elements stored on disk B. These disks in order form a virtual array of storage whose length is the capacity of a disk system or subsystem. We treat this virtual array as a PMA storing all elements. When an element moves from part of the array associated with one disk to part of the array associated with another disk, then that element is migrated between disks. The system chooses the rebalance interval so that it only overlaps the boundary between one disk and the next if that disk is nearly full. Alternatively, the rebalance interval can be chosen so that it crosses the boundary between one disk and the next when one disk has a substantially higher density than a neighbor. The system's linear ordering of the disks takes into account the disk-to-disk transfer costs. For example, it is often cheaper to move data from a disk to another disk on the same machine than it is to disks residing elsewhere on a network. Consider a transfer-cost graph G, in which the nodes are disks, and the weight on edge is some measure of the cost of transferring data. 
This weight can take into account the bandwidth between two disks, or the weighted bandwidth that is reduced if many disks need to share the same bus or other interconnect link. Alternatively, the system could also take into account the latency of transfer between disks. For example, the weighting function can decrease with increasing connectivity. Alternatively, one disk could simulate several smaller disks in the PMA of disks. For example, if large disks are partitioned into smaller virtual disks, and then the disks are ordered for the PMA layout, one might choose for different virtual disks from the same disk not to be adjacent in the PMA order. Thus, the PMA could be made to wrap around the disks several times, say, for the purposes of load balancing. Such wrapping could, for example, allow the system to employ some subset of disks to serve as a RAID array, with data striping across the RAID. Alternatively, the system could accommodate disks of different sizes. Alternatively, there are many choices for choosing a linear order on the disks. For example, a traveling salesman problem (TSP) solution for G (or an approximate TSP solution) can be used to minimize the total cost of edges traversed in a linearization. Or a tour on a minimum (or other) spanning tree of G can be used. Or the system could choose an ordering that is approximately optimal, for example an ordering that can be proved to be within a factor of two of optimal. In one mode, the system employs “disk recycling”. In this mode, the system does not keep a total order on disks. Instead, a total order is kept on a subset of disks and other disks are kept in reserve. If a region of key space stored on a disk becomes denser than a particular threshold, a reserved disk is deployed to split the keys with the overloaded disk. If another region of key space stored on a disk becomes sparser than a particular threshold, elements are migrated off the underused disk, and the newly empty disk can be added to the reserve. In one mode the system employs an adaptive PMA (APMA). In an APMA, the system keeps a sketch of recent insertion patterns in order to learn the insertion distribution. The sketch allows the system to leave extra space for further insertions in likely hot spots. In one mode the system replaces the PMA over the entire array with an APMA. In the case of disk recycling, the system uses an APMA over all the disks, rather than the elements, to predict where to deploy spare disks. Since an APMA rebalances intervals unevenly, leaving some interval relatively sparse, the recycled disks can take the role of sparse intervals. After rebalancing (5502), disk D (5510) contains keys j-k instead of g-k. Disk B (5508) contains keys a-i, and Disk A (5507) contains keys n-z. Disk C (5509) is free. Alternatively, the disk-to-disk rebalancing system could move elements in the background, during idle time, during queries, or at other times, for example to improve hot-spot dissipation. Alternatively, the system could group together several smaller disks to simulate a larger disk. For example, these disk groups can divide up their allotted key space by consistent hashing (hashing for short), where keys are hashed to disks at random, or nearly at random, and a streaming dictionary could be maintained on each disk. When keys are hashed this way, hot spots are diffused across all disks participating in the hashing scheme. 
If the system cannot predict where a successor or predecessor lies, then the system can replicate queries across all the disks when performing successor or predecessor queries. In a hybrid scheme, if each group has k disks, the system can employ the bandwidth of all k disks to diffuse a hot spot, and the system can limit the replication of queries to these k disks. When the dynamic partitioning scheme changes a partition boundary, thus causing items to move from one partition to another, the system can delete the items from k disks and insert them onto k other disks. The parameter k is tunable, and the system can increase insertion scaling by increasing k, whereas the system can increase query scaling by decreasing k. Finally, the parameter k need not be fixed for all clusters. An alternative approach is to reserve j disks as a buffer. Keys are first inserted into the buffer disks, and these are organized by hashing. The remaining disks are organized by partitioning. As keys are inserted into the buffer, keys are removed from the buffer and into the partitioned disks. If the system detects a particularly large burst of insertions into a narrow range of keys, it can recycle disks into that part of the key space to improve the performance of the partitioned disks. In this approach, queries can be performed once on the partitioned disks, and replicated j-fold in the hashed buffer disks. Alternatively, compact partitioning can be used for other kinds of dictionaries and data storage systems. Buffer Flushing as Background Process In one mode, the system performs buffer flushing as a background process. That is, during times in which the disks and processors are relatively idle, the system selects buffers and preemptively flushes them. To implement background buffer flushing the system maintains a priority queue, auxiliary dictionary, or other auxiliary structure storing some or all of the buffers in the tree that need to be flushed. When the CPU, memory system, and disk system have spare capacity (e.g., because they are idle), the system consults the auxiliary structure, bringing nodes into RAM, and flushing the relevant buffers. This auxiliary structure is maintained along with the tree, but it is much smaller. When the buffers in the tree are changed, then so does the auxiliary structure. The auxiliary structure could be stored exclusively in RAM, or in some combination of RAM and disk. Alternatively, there are many ways to prioritize the buffers that need to be flushed. Examples include, but are not limited to - 1. giving higher priority to buffers that contain more elements, - 2. giving higher priority to buffers that are fuller, - 3. giving higher priority to nodes that contain less available space, - 4. giving higher priority to buffers that were modified or read recently, - 5. giving higher priority to nearly full buffers that are higher up in the tree, - 6. giving higher priority to nodes whose flushes would not overflow their children, and - 7. combinations of those priorities. Alternatively, there are other ways of keeping track of which nodes need flushing. For example, the system could keep not all nodes from the main tree in the auxiliary structure, but instead, only keep those buffers that are getting full and in need of flushing. Then, when there is idle time, the system could consult this smaller structure. The buffers could be flushed in one of the orders described above or in an arbitrary order. Other strategies could also be used. 
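To make the prioritized background flushing concrete, here is a small illustrative sketch in Python (the patent text itself specifies no code): a heap-based auxiliary structure that remembers dirty buffers and, when the system is idle, flushes the fullest ones first. The Node class, the priority rule, and the flush_fn callback are assumptions made only for this example.

import heapq
import itertools

class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.buffer = []          # pending messages for this node

    def fullness(self):
        return len(self.buffer) / self.capacity

class BackgroundFlusher:
    """Auxiliary structure that remembers which buffers need flushing."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # tie-breaker for equal priorities

    def note_dirty(self, node):
        # Higher fullness => higher priority => smaller heap key.
        priority = -node.fullness()
        heapq.heappush(self._heap, (priority, next(self._counter), node))

    def flush_some(self, budget, flush_fn):
        """Called when CPU and disk are idle; flushes up to `budget` buffers."""
        while self._heap and budget > 0:
            _, _, node = heapq.heappop(self._heap)
            if node.buffer:                  # may already have been flushed
                flush_fn(node)
                budget -= 1

Other priority rules from the list above (element count, available space, depth in the tree) would only change the key pushed onto the heap.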
Alternatively, background buffer flushing can apply to other streaming dictionaries, including but not limited to those that are not tree-based, including but not limited to a COLA, a PMA, or an APMA. For a COLA, the system can preemptively flush regions of levels that are getting dense. A PMA or an APMA might selectively flush a level of the rebalancing hierarchy. Overindexing In one mode the system implements overindexing. Recall that a nonleaf node has a sequence of keys k1, k2, . . . , with an associated child pointer pi for each key ki. In an overindexing mode, a node that is the parent of leaves keeps a larger sequence of monotonically increasing keys ki,j, where ki,1=ki above. Similarly the pointers are augmented to the sequence pi,j, where pi,1=pi above. For every i, pointers pi,1 to pi,b point to different places in the same leaf. If some element (k,v) in child c has the smallest k such that ki,j≦k<ki,j+1, then pi,j points to the location of (k,v) in c. The choices of keys ki,j are made so as to split the elements of each leaf into parts that are sized within a factor four of each other. In a system with overindexing, the system fetches only an approximately 1/b fraction of a leaf that contains the element of interest. Alternatively, the pivot keys might be chosen not to evenly split by the number of elements in a leaf, but to approximately evenly split the sums of their sizes, or the probability of searching between two keys, or the probability of searching between two keys, weighted by the sizes of the elements, where the probability of accessing elements or subsets of elements can be given or measured or some combination thereof. Furthermore, b need not be the same constant for each leaf. Alternatively, nodes higher than leaf-parents can have overindexing, and in this case, the overindexing pointers might point to grandchildren. In this case, the buffers in overindexed nodes might be partitioned according to the overindexing pivot keys. Then, if some such subbuffer grows large enough, the elements in a subbuffer could be flushed to a grandchild, rather than to the child. Loader The system includes a loader that can load a file of data into a collection of dictionaries. The system also sometimes uses the loader for other purposes, including but not limited to creating indexes and rebuilding dictionaries that have been damaged. The loader is a structure that transforms a sequence of rows into a collection of dictionaries. The loader is given a sequence of rows; information that the loader uses to build a set of zero or more secondary indexes; and a sort function for the primary rows and for each secondary index. The loader then generates all of the key-value pairs for the secondary indexes; sorts each index and the primary row; forms the blocks, compressing them; and writes the resulting dictionary or dictionaries to a file. The system uses multithreading in two ways: (1) The system overlaps I/O and computation, and (2) the system uses parallelism in the compute-part of the workload. The parallelizable computation includes, but is not limited to, compressing different blocks, and implementing a parallel sort. The loader can create a table comprising a primary dictionary and zero or more secondary dictionaries. A table row is a row in a SQL table, which is represented by entries in one or more dictionaries. Inserting a table row can require inserting many dictionary rows, including but not limited to the primary dictionary row and, for each index, a secondary dictionary row. 
Thus, for example, in a table with five indexes, a single table insertion might require six dictionary insertions. When inserting data, the system passes the primary row to the loader. The loader constructs the various dictionary rows from the primary row, sorts the dictionary rows, and builds the dictionaries. One way to understand how the loader fits into a SQL database is as a data pipeline, as illustrated in the corresponding figure. Having described the preferred embodiment as well as other embodiments of the invention, it will now become apparent to those of ordinary skill in the art that other embodiments incorporating these concepts may be used. 
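As a rough illustration of the loader pipeline described above, the following Python sketch turns a sequence of table rows into one sorted primary stream plus one sorted stream per secondary index, and hands each sorted run to a builder callback. The row format, the column names, and the build_fn callback are hypothetical; block formation, compression, and multithreading are omitted.

def build_dictionaries(rows, secondary_indexes, build_fn):
    """
    rows:               iterable of dicts, e.g. {"id": 7, "name": "ant", "qty": 3}
    secondary_indexes:  list of column names to index, e.g. ["name"]
    build_fn:           callback that writes one sorted run out as a dictionary
    """
    primary = []
    secondaries = {col: [] for col in secondary_indexes}

    for row in rows:
        pk = row["id"]
        primary.append((pk, row))                    # primary dictionary row
        for col in secondary_indexes:
            # secondary row: (secondary key, primary key) -> empty value
            secondaries[col].append(((row[col], pk), None))

    build_fn("primary", sorted(primary, key=lambda kv: kv[0]))
    for col, pairs in secondaries.items():
        build_fn("idx_" + col, sorted(pairs, key=lambda kv: kv[0]))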
https://patents.google.com/patent/US8996563B2/en
CC-MAIN-2020-10
en
refinedweb
firestore: Is there a way to perform or where query operation on collection? Scenario Given I am a tutor When I type comma separated list of search terms e.g math,accounting Then Expected I should be able to see different documents having map field key term set to true Then Actual I get empty result When I type comma separate terms that relates to the same document I get the actual result Question Is there or Operation for cloud firestore. Kindly find below code snippet and screenshots: Future<void> search(List<String> keywords) async { var docs = coursesRef.where('isDraft', isEqualTo: false); keywords.forEach((word) { String term = word.trim().toLowerCase(); print("$term\n"); docs = docs.where('searchTerms.$term', isEqualTo: true); }); QuerySnapshot query = await docs.getDocuments(); courses = []; query.documents.forEach((doc) { Course course = Course.fromDoc(doc); courses.add(course); }); notifyListeners(); } Below screen shot for sample Map<String, bool> See also questions close to this topic - how to show current play time of video when using video_player plugin in flutter? Currently using the flutter video_playerplugin stream video from the given link. Issue is that I had to hide the normal video interactive interface so that user can't skip the video. Now most of the work is done, just need to know how to display durationand current positionof the video been played. videoController.value.duration.inSecondsgives me the durationpart, and videoController.value.positiongives the position. But how to keep updating the results for theposition` section? void checkTimer(){ if(playerController.value.position == playerController.value.duration){ setState(() { Duration duration = Duration(milliseconds: playerController?.value?.position?.inMilliseconds?.round()); nowTime = [duration.inHours, duration.inMinutes, duration.inSeconds] .map((seg) => seg.remainder(60).toString().padLeft(2, '0')) .join(':'); }); } above code was created to update the time as needed. but now the issue is how to update time. should I use setState()or something else, because the above code is not working for me. Video is not loaded where then screen is loaded. It's loaded when then users click the play button. so till that time, we don't even have a durationvalue as data is still on the wayt. - Flutter In App Purchase, check for refund I have a problem with refunds of my purchases. I query all past purchases when I start the app and check whether a InAppProduct was purchased or not. final QueryPurchaseDetailsResponse purchaseResponse = await _connection.queryPastPurchases(); Now a purchase was refunded, but the purchase is still queried via the queryPastPurchases() method. Also the PurchaseDetails class does not have any information about the purchase being refunded. Any idea how to handle this case ? Information : - The refund was over 3 days ago - I use the latest version of the official InApp package - The purchase was made on an Android phone Regards - Can I send user notification after someone calls the user in Flutter? I have just started to learn notifications in Flutter. What I want to do is when someone calls user, I want to check callers number if it is not saved, send user some notification. I can check if the caller's name is in contacts, but when user opens my app. Is there a way to make flutter app work in background, and whenever call happens my app checks if it was unsaved contact call, and if it was send user a notification. Does someone how to achieve such thing? 
- How to get dynamic collection name from google-cloud-firestore? I have database. which each user create collection and collection name will be UID of users to save their data. I am using redux to manage state. this is my RootReducer ... import {firestoreReducer} from 'redux-firestore' import {firebaseReducer} from 'react-redux-firebase' const RootReducer =combineReducers({ firebase:firebaseReducer, firestore:firestoreReducer, }) export default RootReducer; here i get data from database and store it to "notes" and then passed to react component. notes: state.firestore.ordered."here should be collection name", but i have different collection for each users so the collection name should be dynamic according to the users UID . but i don't know how to do that? is this possible ?? thanks for any help.. const mapStateToProps = (state) =>{ return{ firebases:state.firebase.auth, notes:state.firestore.ordered.aAPocPrkXBgRalhu3zuTxSPxdn12 //this is collection name but i want to be dynamic :D } } const mapDispatchToProps = (dispatch) =>{ return { } } export default compose(connect(mapStateToProps,mapDispatchToProps), firestoreConnect(props=>{ return [{collection:props.firebases.uid,orderBy:['createdAt','desc']}]}))(NotesScreen); - How to get a document from firebase where a field is on an array? So I have an array with several uid's from users I want to get their photoURL. And the users collection with theirs uid and photoURL. The array is built from another collection if it helps. But now I don't know how to get the photoURL by only UIDs I want (that are stored in the array or from collection) without making an infinite loop of subscriptions... This is my array with UIDs: arraysUID=[]; constructor(public db: AngularFirestore, public authService: AuthService, private titleService: Title, private route: ActivatedRoute, private spinner: NgxSpinnerService, private chatService: ChatService) { this.pathId = this.route.snapshot.paramMap.get('groupId'); db.collection('mychatsGroup', ref => ref.where('groupId', '==', this.pathId)).snapshotChanges().subscribe(data => { this.arraysUID = data.map(e => { return { id: e.payload.doc.id, ...e.payload.doc.data(), } as any; }) }); and this is the collection which got the PhotoURL I want to get by uid (which are contained in the array) So now I have the UIDs that I want to get the photoURL but I don't know how to make a query like that (which contains), I only know how to make 1 query of 1 UID - Unable to send data to firestore and user authentiction not working I have a registration form in my app where the user enters firstname, surname, mobile, email, password, shared secret. im Using the firebase authentictation and firestore to store the other user data. However, when clicking the sign up button after entering all information the data isnt stored in firestore nor is a user created in the authentication section on firebase. I have checked all imports for firebase and the json and still no luck. I am only performing condition check for the email and password currently once ive managed to get it storing the data i will add this into the code for the other user entrys. 
Thanks for any help public class SignUp extends AppCompatActivity implements View.OnClickListener { private cairoEditText firstName, surname, birthday, mobileNumber, emailAddress, massterPass, confirmPass, sharedSecret, confirmSharedSecret; private cairoButton signUpButton; private FirebaseAuth firebaseAuth; private FirebaseFirestore fDatasebase; private String userID; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_sign_up); firebaseAuth = FirebaseAuth.getInstance(); fDatasebase = FirebaseFirestore.getInstance(); firstName = (cairoEditText) findViewById(R.id.firstNameEntry); surname = (cairoEditText) findViewById(R.id.surnameEntry); birthday = (cairoEditText) findViewById(R.id.birthdayEntry); mobileNumber = (cairoEditText) findViewById(R.id.numberEntry); emailAddress = (cairoEditText) findViewById(R.id.emailEntry); massterPass = (cairoEditText) findViewById(R.id.passwordInput); confirmPass = (cairoEditText) findViewById(R.id.passwordConfimation); sharedSecret = (cairoEditText) findViewById(R.id.sharedSecret); confirmSharedSecret = (cairoEditText) findViewById(R.id.sharedSecretConfirm); signUpButton = (cairoButton) findViewById(R.id.signUpButton); if (firebaseAuth.getCurrentUser() != null) { startActivity(new Intent(getApplicationContext(), HomeScreen.class)); finish(); } } @Override public void onClick(View v) { { if (v == signUpButton) { signupfunction(); } } } @Override public void onBackPressed() { ///// set animation on back pressed ///// super.onBackPressed(); overridePendingTransition(R.anim.from_left_in, R.anim.from_right_out); } private void signupfunction() { /////* Get Email && Name && Password *///// final String password = massterPass.getText().toString().trim(); final String confirmPassword = confirmPass.getText().toString().trim(); final String firstname = firstName.getText().toString().trim(); final String surnameinput = surname.getText().toString().trim(); final String email = emailAddress.getText().toString().trim(); final String mobile = mobileNumber.getText().toString().trim(); final String birthdayinput = birthday.getText().toString().trim(); final String sharedSecretinput = sharedSecret.getText().toString().trim(); final String confirmSharedSecretinput = confirmSharedSecret.getText().toString().trim(); if (email.isEmpty()) { emailAddress.setError("Email is required"); return; } if (password.isEmpty()) { massterPass.setError("password is required"); return; } if (password.length() < 8 || password.length() > 40) { massterPass.setError("Please enter valid password between 8 and 40 characters"); return; } firebaseAuth.createUserWithEmailAndPassword(email, password) .addOnCompleteListener(new OnCompleteListener<AuthResult>() { @Override public void onComplete(@NonNull Task<AuthResult> task) { if (task.isSuccessful()) { Toast.makeText(SignUp.this, "User Created", Toast.LENGTH_SHORT).show(); userID = firebaseAuth.getCurrentUser().getUid(); DocumentReference documentReference = fDatasebase.collection("Users").document(userID); Map<String, Object> user = new HashMap<>(); user.put("fName", firstname); user.put("sName", surnameinput); user.put("birthday", birthdayinput); user.put("mobile", mobile); user.put("email", email); user.put("confirmPass", confirmPassword); user.put("sharedSecret", sharedSecretinput); user.put("ConfirmsharedSecret", confirmSharedSecretinput); documentReference.set(user).addOnSuccessListener(new OnSuccessListener<Void>() { @Override public void onSuccess(Void aVoid) { 
Log.d("TAG", "onSuccess: user profile is created for " + userID); } }); Intent myintent = new Intent(SignUp.this, HomeScreen.class); startActivity(myintent); } else Toast.makeText(SignUp.this, "error registering user", Toast.LENGTH_SHORT).show(); } }); } }
http://quabr.com/57634126/firestore-is-there-a-way-to-perform-or-where-query-operation-on-collection
CC-MAIN-2020-10
en
refinedweb
iofunc_utime() Update time stamps Synopsis: #include <sys/iofunc.h> int iofunc_utime( resmgr_context_t* ctp, io_utime_t* msg, … ) Members of the io_utime_t message: - type - _IO_UTIME. - combine_len - If the message is a combine message, _IO_COMBINE_FLAG is set in this member. - cur_flag - If set, iofunc_utime() ignores the times member, and sets the appropriate file times to the current time. - times - A utimbuf structure that specifies the time to use when setting the file times. For more information about this structure, see utime(). Returns: - EACCES - The client doesn't have permissions to do the operation. - EFAULT - A fault occurred when the kernel tried to access the info buffer. - EINVAL - The client process is no longer valid. - ENOSYS - NULL was passed in info. - EOK - Successful completion.
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/iofunc_utime.html
CC-MAIN-2020-10
en
refinedweb
I am making a Python web-crawler program to play The Wiki game. If you're unfamiliar with this game: - Start from some article on Wikipedia - Pick a goal article - Try to get to the goal article from the start article just by clicking wiki/ links My process for doing this is: - Take a start article and a goal article as input - Get a list of articles that link to the goal article - Perform a breadth-first search on the links found, starting from the start article and avoiding pages that have already been visited - Check if the goal article is on the current page: If it is, then return the path_crawler_took+goal_article - Check if any of the articles that link to the goal are on the current page. If one of them is, return path_crawler_took+intermediate_article+goal I was having a problem where the program would return a path, but the path wouldn't really link to the goal.

def get_all_links(source):
    source = source[:source.find('Edit section: References')]
    source = source[:source.find('id="See_also"')]
    links = findall('\/wiki\/[^\(?:/|"|\#)]+', source)
    return list(set([''+link for link in links if is_good(link) and link]))

links_to_goal = get_all_links(goal)

I realized that I was getting the links to the goal by scraping all of the links off of the goal page, but wiki/ links are unidirectional: Just because the goal links to a page doesn't mean that page links to the goal. How can I get a list of articles that link to the goal?
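One way to answer the closing question, not taken from the original thread: rather than scraping pages, ask the MediaWiki API for the goal page's backlinks (the "What links here" list). The sketch below assumes Python 3 and uses only the standard library; for very heavily linked pages you would also need to follow the API's continuation parameter to fetch more than one batch of results.

import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def get_backlinks(title, limit=500):
    # Ask the API for pages that link to `title`, restricted to articles.
    params = {
        "action": "query",
        "list": "backlinks",
        "bltitle": title,
        "blnamespace": 0,      # main/article namespace only
        "bllimit": limit,
        "format": "json",
    }
    url = API + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [item["title"] for item in data["query"]["backlinks"]]

# e.g. links_to_goal = get_backlinks("Python (programming language)")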
https://www.howtobuildsoftware.com/index.php/how-do/pTR/python-python-27-web-crawler-get-all-links-from-page-on-wikipedia
CC-MAIN-2020-10
en
refinedweb
The unified data validation libraryThe unified data validation library OverviewOverview The unified validation API aims to provide a comprehensive toolkit to validate data from any format against user defined rules, and transform them to other types. Basically, assuming you have this: import play.api.libs.json._ import jto.validation._ case class Person(name: String, age: Int, lovesChocolate: Boolean) val json = Json.parse("""{ "name": "Julien", "age": 28, "lovesChocolate": true }""") implicit val personRule = { import jto.validation.playjson.Rules._ Rule.gen[JsValue, Person] } It can do this: scala> personRule.validate(json) res0: jto.validation.VA[Person] = Valid(Person(Julien,28,true)) BUT IT'S NOT LIMITED TO JSON It's also a unification of play's Form Validation API, and its Json validation API. Being based on the same concepts as play's Json validation API, it should feel very similar to any developer already working with it. The unified validation API is, rather than a totally new design, a simple generalization of those concepts. DesignDesign The unified validation API is designed around a core defined in package jto.validation, and "extensions". Each extension provides primitives to validate and serialize data from / to a particular format (Json, form encoded request body, etc.). See the extensions documentation for more information. To learn more about data validation, please consult Validation and transformation with Rule, for data serialization read Serialization with Write. If you just want to figure all this out by yourself, please see the Cookbook. Using the validation api in your projectUsing the validation api in your project Add the following dependencies your build.sbt as needed: resolvers += Resolver.sonatypeRepo("releases") val validationVersion = "2.1.0" libraryDependencies ++= Seq( "io.github.jto" %% "validation-core" % validationVersion, "io.github.jto" %% "validation-playjson" % validationVersion, "io.github.jto" %% "validation-jsonast" % validationVersion, "io.github.jto" %% "validation-form" % validationVersion, "io.github.jto" %% "validation-delimited" % validationVersion, "io.github.jto" %% "validation-xml" % validationVersion // "io.github.jto" %%% "validation-jsjson" % validationVersion ) Play dependenciesPlay dependencies DocumentationDocumentation - Validating and transforming data - Combining Rules - Serializing data with Write - Combining Writes - Validation Inception - Play's Form API migration - Play's Json API migration - Extensions: Supporting new types - Exporting Validations to Javascript using Scala.js - Cookbook - Release notes - v2.0 Migration guide ContributorsContributors - Julien Tournay - - Olivier Blanvillain - - Nick - - Ian Hummel - - Arthur Gautier - - Jacques B - - Alexandre Tamborrino -
https://index.scala-lang.org/jto/validation/validation-form/2.0-play2.3?target=_sjs0.6_2.11
CC-MAIN-2020-10
en
refinedweb
pthread_barrier_wait() Synchronize participating threads at the barrier Synopsis: #include <sync.h> int pthread_barrier_wait( pthread_barrier_t * barrier ); Since: BlackBerry 10.0.0 Arguments: - barrier - A pointer to the pthread_barrier_t object that you want to use to synchronize the threads. You must initialize the barrier by calling pthread_barrier_init() before calling pthread_barrier_wait(). Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: Returns: PTHREAD_BARRIER_SERIAL_THREAD to a single (arbitrary) thread synchronized at the barrier, and zero to each of the other threads; otherwise, pthread_barrier_wait() returns an error number: - EINVAL - The barrier argument isn't initialized. Classification: Last modified: 2014-06-24
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_barrier_wait.html
CC-MAIN-2020-10
en
refinedweb
Le mardi 5 mai 2015 01:05:31 sebb a écrit : > > OK, but that's because whoever code the XSLT decided to be defensive to > > such interpretation. > > If you read back in this thread you will see that I did this in order > to support both asfext:PMC and asfext:pmc. > > > But that does not mean is right. > > The code is right in the sense that it works with the input files that > are provided. +1 that's a temporary workaround that we should try to not need any more in the future [...] > >> Also if it is possible to validate that the various RDF files are > >> correct according to the formal definitions. > >> PMCs could then submit their files for checking. > > > > I think we can discuss that infrastructure for the new site. I'm happy to > > help. Python provides the required libraries. I'll open a thread, probably > > tomorrow. > > I think there needs to be a way for PMCs to check their RDF files > against the formal definitions. > For example, a CGI script that accepts the URL of a file. +1 I tried W3C checker, but as it is only a syntax checker, it checked only syntax, not references to the namespace and I couldn't find any other useful tool :( Other tools to make effective use of the DOAP files would be useful too: but I completely agree that the first priority seems to have a more complete checker Regards, Hervé > > > Cheers, > > > > -- > > Sergio Fernández > > Partner Technology Manager > > Redlink GmbH > > m: +43 6602747925 > > e: [email protected] > > w:
http://mail-archives.apache.org/mod_mbox/community-dev/201505.mbox/%3C2372264.ya6xRq0jk6@herve-desktop%3E
CC-MAIN-2017-43
en
refinedweb
wikiHow to Calculate After Tax Bond Yield

Three Parts: Figuring Out How the Bond is Taxed | Calculating After-Tax Bond Yield | Calculating Tax-Equivalent Yield | Community Q&A

Yield is an investment concept that puts the earnings of an investment vehicle into context. It states earnings as a percentage of the initial investment. After-tax bond yield reflects the earnings of a bond investment, adjusted to account for capital gains taxes levied on the earnings from that bond. Although it may seem like an intimidating concept, it's actually a simple calculation, requiring nothing more advanced than basic algebra.

Steps

Part 1: Figuring Out How the Bond is Taxed

- 1. Determine the type of bond. Bond returns are taxed differently based upon the type of bond. There are three major types of bonds: corporate, federal government, and municipal bonds. Corporate bonds are sold by corporations and their returns are taxed like regular income (taxed at all tax levels). Federal government bonds, like treasury bills and bonds, are only taxable at the federal level. Municipal bonds, on the other hand, are tax free if purchased in your own municipality or state.[1]
- 2. Use your income tax bracket for bond returns. The returns taxed on each type of bond are the coupon payments (interest payments) made to the bondholder throughout the life of the bond. The tax rate used on these payments is the same used to tax your income throughout the year. For example, if you are in the 33 percent tax bracket, returns on a corporate bond that you own would be taxed at 33 percent.[2]
- 3. Use capital gains tax for gains on trades. Whenever you sell a bond on the secondary market, you will need to pay capital gains tax on any gains made in the trade. This is true for every type of bond. Capital gains tax is different from income tax, but is still determined by your income. After-tax returns on trades can also be calculated using the after-tax bond yield calculation method described in this article.[4]
- 4. Include any discounts on bonds. If you buy a bond at a discount, as is typically done with zero-coupon bonds (bonds that don't pay interest), you need to report the discount as income. The discount will be spread out evenly over the life of the bond. This discount is then taxed as income at your income tax rate.

Part 2: Calculating After-Tax Bond Yield

- 1. Determine the pre-tax yield. Start by calculating the annual percent return provided by the bond. This is generally spelled out when you purchase the bond. For example, a corporate bond might pay 10 percent annual interest. This can be annual, with one 10 percent payment, semiannual with two 5 percent payments, or any other number of payments throughout the year that total 10 percent of the bond's par value.[6]
- 2. Figure out the tax rate you have paid or will be charged on the bond. Your tax rate on a bond will depend on the bond's type. See the other part of this article, "figuring out how the bond is taxed," for more information. Add up the relevant taxes for the bond to figure out your total tax rate.[7]
- For example, imagine your federal tax bracket is 33 percent. Your state income tax rate is another 7 percent. So, your total tax rate for a corporate bond would be 40 percent (33 percent + 7 percent).
- 3. Input your data into the after-tax yield equation. The after-tax yield equation is simply ATY = r × (1 − t). 
In the equation, ATY means after-tax yield, r is the pre-tax return, and t is your total tax rate for the bond.[8]
- For example, the 10 percent corporate bond would be entered as ATY = 0.10 × (1 − 0.40).
- Note that the percentage return, 10 percent, and the tax rate, 40 percent, were entered in the equation as decimals. This is to simplify calculation. To convert percentages to decimals, simply divide by 100.
- 4. Solve for after-tax bond yield. Solve your equation by first subtracting the figures within the parentheses. In the example, this gives ATY = 0.10 × 0.60. Then, simply multiply the final two numbers to get your answer. This would be 0.06 or 6 percent. So, your after-tax yield for a 10 percent corporate bond at a 40 percent tax rate would be 6 percent.

Part 3: Calculating Tax-Equivalent Yield

- 1. Understand tax-equivalent yield. Tax-equivalent yield is a figure that is used to compare tax-free bond returns to returns from taxable bonds. It converts the untaxed bond's return to an imaginary "pre-tax" return so that it can be easily compared to taxable security returns. This technique is useful for comparing the returns on municipal bonds to federal government and corporate bonds.[9]
- 2. Determine the tax-free bond's yield. In this example, assume a municipal bond that pays 5.5 percent.
- 3. Calculate tax-equivalent yield. Tax-equivalent yield is calculated using the formula TEY = r / (1 − t). In the formula, TEY stands for tax-equivalent yield, r represents the bond's annual return in decimal form, and t is your income tax rate, also in decimal form. For example, assuming the 5.5 percent bond described above and a 40 percent total tax rate, you would complete the equation as follows: TEY = 0.055 / (1 − 0.40).[11]
- Calculate the answer by first subtracting the numbers on the bottom. Then, divide the remaining numbers to get your answer. Here, this would be 0.055/0.6, which works out to 0.0917 if you round to four decimal places.
- Your 5.5 percent municipal bond offers the same return as a taxable bond with a stated 9.17 percent return.
- 4. Compare this yield to a taxed security's yield. This number can now be used to compare the municipal bond's return to taxable bond returns. For example, imagine you were considering a corporate bond that offers 9 percent returns. It may seem like an easy choice to choose the corporate bond over a municipal bond offering 5.5 percent. However, after the calculation, you can see that the municipal bond actually offers a higher return (9.17 percent over 9 percent).

Community Q&A

Tips
- Cashing in a bond often carries a fee for the brokerage or other agent who facilitated the sale. This is not part of taxes, but should be subtracted from the profit like taxes when calculating the real yield of a bond. This is less important when comparing the performance of various bonds handled by the same brokerage, but it can be vital to comparing the performance of bonds sold by different brokers.
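For readers who prefer to check the arithmetic in code, here is a short Python restatement of the two formulas used in this article, applied to its own example numbers; the function names are just illustrative.

def after_tax_yield(pretax_return, tax_rate):
    # ATY = r * (1 - t)
    return pretax_return * (1 - tax_rate)

def tax_equivalent_yield(tax_free_return, tax_rate):
    # TEY = r / (1 - t)
    return tax_free_return / (1 - tax_rate)

print(after_tax_yield(0.10, 0.40))                    # 0.06   -> 6 percent
print(round(tax_equivalent_yield(0.055, 0.40), 4))    # 0.0917 -> 9.17 percent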
https://www.wikihow.com/Calculate-After-Tax-Bond-Yield
CC-MAIN-2017-43
en
refinedweb
import matplotlib.mlab as mlab import matplotlib.pyplot as plt import numpy as np x = np.arange(0.0, 2, 0.01) y1 = np.sin(2*np.pi*x) y2 = 1.2*np.sin(4*np.pi*x) fig, ax = plt.subplots() ax.plot(x, y1, x, y2, color='black') ax.fill_between(x, y1, y2, where=y2>y1, facecolor='green') ax.fill_between(x, y1, y2, where=y2<=y1, facecolor='red') ax.set_title('fill between where') plt.show() Total running time of the script: ( 0 minutes 0.020 seconds) Gallery generated by Sphinx-Gallery
http://matplotlib.org/gallery/pyplots/whats_new_98_4_fill_between.html
CC-MAIN-2017-43
en
refinedweb
by passing AC current through

Quote: "by passing AC current through"

And where does this AC come from?

To quote: "You are not producing AC. AC is alternating current, where the voltage changes from positive to negative 50 or 60 times per second. All you are doing is switching which way the current, at +5V, is flowing."

"an electric current that reverses its direction at regularly recurring intervals" — Pulsating DC and AC are the same, a variable flow of electrons. The primary difference is the "Reference"... If "AC" is "Referenced" from the most negative level then you could say "it's just pulsating DC", whether it is sinusoidal, square, rectangular (duty cycle not 50%) or triangular. Come to think about the definition a little and you will see that the prime requirement for AC... The one difference that sets AC apart from noise is just periodicity. It is the periodicity that sets it apart and allows it to do useful and predictable work. Noise can be called AC... Pink noise (audio), white noise (full spectrum) and random impulse noise are AC, but because of the lack of predictability, little real work can be done with them. If you put in place a device that passes AC only, a capacitor or a transformer will pass your "DC" very well. It will also filter the signal due to its reactance or response to an "AC" signal... again the periodicity. As to the library, it does produce a signal that makes the LED light up Yellow... So the difference is? Just the point of reference... Put it in the right place and your "DC" signal becomes "AC". Place a diode in series and you remove 1/2 of the DC signal... just as a diode would with AC... and you have DC again... pulsating but of one polarity... The signal cannot pull down when the input goes to its lowest point because the diode will not conduct in the reverse direction.

#include "WProgram.h"
#if defined(ARDUINO) && ARDUINO >= 100
#include "Arduino.h"
#else
#include "WProgram.h"
#endif

Version 1.2 released with Arduino 1.0 support. Thanks PaulDriver for the patch!

I also have a library (toneAC) that does "AC" to drive a speaker at almost twice the volume of the standard tone library. This is possible because I alternate the 5 volts between two pins. In my case, it's designed to be extremely fast, so I use the Arduino's PWM pins and timer 1. This also allows for perfect switching between the two pins without any programming slowing things down. As a bonus, my library can also drive a bicolor two-pin LED as yours does (one of my example sketches included with the library controls a bicolor LED with a pot to adjust the cycle speed). You may want to check out my source. As toneAC is designed for ultra speed and accuracy, you must use the timer 1 controlled PWM pins. It also is driven totally by port registers for the fastest and smallest code. Looking at my library may assist you. I also have a NewTone library that's a modified version of toneAC but allows you to specify what pin you want to drive a speaker with. This also may assist you with your library. While writing a library using port registers and timers may be a little more challenging at first, it's really not that hard once you do it a few times. And the benefits are many: very small code size, very fast, color switching and duty cycle can all be done in the background, no reason for delay statements which can kill a project, etc. Best of luck with your project! Tim
http://forum.arduino.cc/index.php?topic=116824.msg879275
CC-MAIN-2017-43
en
refinedweb
Previous Versions of ASP.NET Previous versions of ASP.NET supported two types of routing: convention-based routing and attribute-based routing. When using convention-based routing, you could define your routes in your application RouteConfig.cs file like this: public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } } The code above defines a Default route that maps a request like “/products/details” to the ProductsController.Details() action. ASP.NET MVC 5 introduced support for attribute-based routing. The primary advantage of attribute-based routing is that it allowed you to define your routes in the same file as your controller. For example, here’s an example of creating a ProductsController that includes both a [RoutePrefix] and [Route] attribute used for routing: [RoutePrefix("Products")] public class ProductsController : Controller { [Route("Index")] public ActionResult Index() { return View(); } } Regardless of whether you used convention-based routing or attribute-based routing, all of your routes ended up in a static collection located at RouteTable.Routes. If you used the ASP.NET Web API then routing worked in a similar way. You could define convention-based routes in the WebApiConfig.cs file and add attribute-based routes to your Web API controller classes. However, there were two important differences between routing in MVC and the Web API. First, the Web API supported RESTful routes by default. If you gave a Web API controller action a name that started with Get, Post, Put, Delete then you could invoke the action by performing an HTTP GET, POST, PUT, or DELETE request. For example, you could invoke a Web API action named PostMovie() simply by performing an HTTP POST. The other important difference between MVC and the Web API was subtler. Behind the scenes, MVC and the Web API did not share the same routing framework. Routing in MVC and the Web API followed similar patterns, but routing was implemented with different code (originally written by different Microsoft teams). Everything in ASP.NET 5 and MVC 6 has been rewritten from the ground up. This includes the routing framework used by both MVC and the Web API. There are several important changes to routing that I discuss in this blog post. First, in ASP.NET 5, there is no longer a distinction between MVC and Web API controllers. Both MVC and Web API controllers derive from the very same Controller base class. This means that the same routing framework is used for both MVC and the Web API. With ASP.NET 5, anything that is true of routing for MVC is also true for the Web API. Second, unlike earlier versions of MVC, default routes are created for you automatically. You don’t need to configure anything to route a request to a controller action. A set of default routes is created for you automatically. Third and finally, in MVC 6, you are provided with a rich set of inline constraints and parameter options that you can use with both convention-based and attributed-based routing. For example, you can constrain the data type of your route parameters, mark a parameter as optional, and provide default values for your parameters right within your route templates. Creating Routes Let’s start with the very basics. 
I’m going to show you the minimal amount of code that you need to use routing in an ASP.NET 5 application. I won’t even assume that you are using MVC. Here’s my Startup class: using System; using Microsoft.AspNet.Builder; using Microsoft.AspNet.Http; using Microsoft.AspNet.Routing; using Microsoft.AspNet.Routing.Template; using Microsoft.Framework.DependencyInjection; namespace RoutePlay { public class Startup { public void Configure(IApplicationBuilder app) { RouteCollection routes = new RouteCollection(); routes.Add(new TemplateRoute(new DebuggerRouteHandler("RouteHandlerA"), "test/{a}/{b}", null)); routes.Add(new TemplateRoute(new DebuggerRouteHandler("RouteHandlerB"), "test2", null)); app.UseRouter(routes); } } } In the code above, I’ve created a Configure() method that I use to configure routing. I create a Route collection that contains two routes. Finally, I pass the Route collection to the ApplicationBuilder.UseRouter() method to enable the routes. The routes are created with the TemplateRoute class. The TemplateRoute class is initialized with a Route Handler and a URL template. The first route – named RouteHandlerA — matches URLs with the pattern “test/{a}/{b}” and the second route – named RouteHandlerB – matches URLs with the pattern “test2”. The essence of routing is mapping an incoming browser request to a Route Handler. In the code above, requests are mapped to a custom handler that I created named DebuggerRouteHandler. Here’s the code for the DebuggerRouteHandler: using Microsoft.AspNet.Http; using Microsoft.AspNet.Routing; using System; using System.Threading.Tasks; namespace RoutePlay { public class DebuggerRouteHandler : IRouter { private string _name; public DebuggerRouteHandler(string name) { _name = name; } public string GetVirtualPath(VirtualPathContext context) { throw new NotImplementedException(); } public async Task RouteAsync(RouteContext context) { var routeValues = string.Join("", context.RouteData.Values); var message = String.Format("{0} Values={1} ", _name, routeValues); await context.HttpContext.Response.WriteAsync(message); context.IsHandled = true; } } } The DebuggerRouteHandler simply displays the name of the route and any route values extracted from the URL template. For example, if you enter the URL “test/apple/orange” into the address bar of your browser then you get the following response: Creating Routes with MVC 6 When using MVC 6, you don’t create your Route collection yourself. Instead, you let MVC create the route collection for you. Here’s a Startup class that is configured to use MVC 6: used to register MVC with the Dependency Injection framework built into ASP.NET 5. The Configure() method is used to register MVC with OWIN. Here’s what my MVC 6 ProductsController looks like: using Microsoft.AspNet.Mvc; namespace RoutePlay.Controllers { public class ProductsController : Controller { public IActionResult Index() { return Content("It Works!"); } } } Notice that I have not configured any routes. I have not used either convention-based or attribute-based routing, but I don’t need to do this. If I enter the request “/products/index” into my browser address bar then I get the response “It Works!”: When you call ApplicationBuilder.UseMvc() in the Startup class, the MVC framework adds routes for you automatically. Here. Thank you CreateAttributeMegaRoute() — you are going to save me tons of work! Convention-Based Routing You can use convention-based routing with ASP.NET MVC 5 by defining the routes in your project’s Startup class. 
For example, here is how you would map the requests /Super and /Awesome to the ProductsController.Index() action:" } ); }); } } } If you squint your eyes really hard then the code above in the Configure() method for setting up the routes looks similar to the code in a RouteConfig.cs file. Attribute-Based Routing You also can use attribute-based routing with MVC 6. Here’s how you can modify the ProductsController Index() action so you can invoke it with the “/Things/All” request: using Microsoft.AspNet.Mvc; namespace RoutePlay.Controllers { [Route("Things")] public class ProductsController : Controller { [Route("All")] public IActionResult Index() { return Content("It Works!"); } } } Notice that both the ProductsController class and the Index() action are decorated with a [Route] attribute. The combination of the two attributes enables you to invoke the Index() method by requesting “/Things/All”. Here’s what a Web API controller looks like in MVC 6: using System; using System.Collections.Generic; using System.Linq; using Microsoft.AspNet.Mvc; namespace RoutePlay.Controllers.Api.Controllers { [Route("api/[controller]")] public class Movies) { } } } Notice that a Web API controller, just like an MVC controller, derives from the base Controller class. That’s because there is no difference between a Web API controller and MVC controller in MVC 6 – they are the exact same thing. You also should notice that the Web API controller in the code above is decorated with a [Route] attribute that enables you to invoke the Movies controller with the request “/api/movies”. The special [controller] and [action] tokens are new to MVC 6 and they allow you to easily refer to the controller and action names in your route templates. If you mix convention-based and attribute-based routing then attribute-based routing wins. Furthermore, both convention-based and attribute-based routing win over the default routes. Creating Restful Routes You can use RESTful style routing with both MVC and Web API controllers. Consider the following controller (a mixed MVC and Web API controller): using Microsoft.AspNet.Mvc; using Microsoft.AspNet.WebUtilities.Collections; namespace RoutePlay.Controllers { [Route("[controller]")] public class MyController : Controller { // GET: /my/show [HttpGet("Show")] public IActionResult Show() { return View(); } // GET: /my [HttpGet] public IActionResult Get() { return Content("Get Invoked"); } // POST: /my [HttpPost] public IActionResult Post() { return Content("Post Invoked"); } // POST: /my/stuff [HttpPost("Stuff")] public IActionResult Post([FromBody]string firstName) { return Content("Post Stuff Invoked"); } } } The MyController has the following four actions: - Show() – Invoked with an HTTP GET request for “/my/show” - Get() – Invoked with an HTTP GET request for “/my” - Post() – Invoked with an HTTP POST request for “/my” - Post([fromBody]string firstName) – Invoked with an HTTP POST request for “/my/stuff” Notice that the controller is decorated with a [Route(“[controller]”)] attribute and each action is decorated with a [HttpGet] or [HttpPost] attribute. If you just use the [HttpPost] attribute on an action then you can invoke the action without using the action name (for example, “/my”). If you use an attribute such as [HttpPost(“stuff”)] that includes a template parameter then you must include the action name when invoking the action (for example, “/my/stuff”). 
The Show() action returns the following view: <p> <a href="/my">GET</a> </p> <form method="post" action="/my"> <input type="submit" value="POST" /> </form> <form method="post" action="/my/stuff"> <input name="firstname" /> <input type="submit" value="POST STUFF" /> </form> If you click the GET link then the Get() action is invoked. If you post the first form then the first Post() action is invoked and if you post the second form then the second Post() action is invoked. Creating Route Constraints You can use constraints to constrain the types of values that can be passed to a controller action. For example, if you have a controller action that displays product details for a particular product given a particular product id, then you might want to constrain the product id to be an integer: using Microsoft.AspNet.Mvc; namespace RoutePlay.Controllers { [Route("[controller]")] public class ProductsController : Controller { [HttpGet("details/{id:int}")] public IActionResult Details(int id) { return View(); } } } Notice that the Details() action is decorated with an [HttpGet] attribute that uses the template “details/{0:int}” to constrain the id passed to the action to be an integer. There are plenty of other inline constraints that you use such as alpha, minlength, and regex. ASP.NET 5 introduces support for optional parameters. You can use a question mark ? in a template to mark a parameter as optional like this: [HttpGet("details/{id:int?}")] public IActionResult Details(int id) { return View(); } You can invoke the Details() action in the controller above using a URL like “/products/details/8” or a URL that leaves out the id like “/products/details” (in that case, id has the value 0). Finally, ASP.NET 5 introduces support for default parameter values. If you don’t supply a value for the id parameter then the parameter gets the value 99 automatically: [HttpGet("details/{id:int=99}")] public IActionResult Details(int id) { return View(); } Summary ASP.NET 5 (and ASP.NET MVC 6) includes a new routing framework rewritten from the ground up. In this blog post, I’ve provided a deep dive into how this new framework works. First, I discussed how routing works independently of MVC 6. I explained how you can create a new Route collection in the Startup class. Next, I demonstrated how you can use MVC 6 default routes, convention-based routes, and attribute routes. I also elaborated on how you can create controllers that follow RESTful conventions. Finally, I discussed how you can take advantage of inline constraints, optional parameters, and default parameter values. If you want to dig more deeply into routing then I recommend taking a look at the following two GitHub aspnet repositories: A small thing, but the comments in the second code block in the Attribute Based Routing section are wrong, they should be api/movies not api/values. Hopefully most people would figure it out quickly, but I’m sure you’re going to get a lot of views on this post so always good to get it right 🙂 PS I’m enjoying your coverage of ASP.NET 5 stuff so far, you seem to be one of the few people covering it in any detail… Great set of blogs! Keep them coming. I have been able to get the movie app running but have not found a way to debug the non-optimized code. Is there a way to run the application against the code in the scripts folder not the wwwroot files? This is fairly typical behavior in yeoman, or applications you manually scaffold, that use the grunt/gulp infrastructure. 
For example you can either run the non-optimized or optimized code using grunt-contrib-connect task. You can debug either of them in the browser’s debugger. I would like to use the Visual Studio debugger on optimized and non-optimized code if possible. @Andy – One suggestion would be to create 2 different GruntJS tasks named ReleaseBuild and DebugBuild. In the DebugBuild task, you could tell UglifyJS to combine but not minify the Javascript files. UglifyJS supports a Boolean compress option, see Looks good – MVC and WebAPI being totally different frameworks but being made to look the same can be a bit confusing. However, personally I’d seriously question the use of the term “controller” when building HTTP services, RESTful or otherwise. A “controller” is a something which is responsible for receiving user input, delegating any work to the appropriate component, and then displaying the correct view. The term shouldn’t be used when building a service. ASP.NET isn’t the only framework which makes this mistake, but it is a mistake.
http://stephenwalther.com/archive/2015/02/07/asp-net-5-deep-dive-routing
CC-MAIN-2017-43
en
refinedweb
I need to access the date of creation of a file using Python. As suggested in many posts, I am using os.stat(filename): import os, time f = 'untitled.ipynb' # Created 30 March 2016 at 15:45 fileStats = os.stat(f) time.ctime(fileStats.st_ctime) This returns 'Mon May 2 16:04:27 2016', but the file was created on 30 March 2016 at 15:45. According to the documentation: since you are using OSX, what you need is st_birthtime instead of st_ctime. st_ctime - platform dependent; time of most recent metadata change on Unix, or the time of creation on Windows. On other Unix systems (such as FreeBSD), the following attributes may be available (but may be only filled out if root tries to use them): st_birthtime - time of file creation
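A minimal sketch of the platform-aware lookup described in the answer above (the file name is just an example):

import os
import time

def creation_time(path):
    st = os.stat(path)
    # On macOS (and some BSDs) the real creation time is st_birthtime;
    # st_ctime is only the last metadata change on Unix, although it is
    # the creation time on Windows.
    return getattr(st, "st_birthtime", st.st_ctime)

print(time.ctime(creation_time("untitled.ipynb")))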
https://codedump.io/share/Q0O6aKAQ20zS/1/file-creation-time-do-not-match-when-using-osstat
CC-MAIN-2017-43
en
refinedweb
Introduction In my earlier post I showed you how to create a web service. In this post I'm going to discuss how to create an Android client for that web service. The intention of this post is to create the connection between the server and the Android client, so presenting the returned string in the GUI will not be discussed. What You Need - Eclipse EE - Android SDK - KSOAP library What You Should Know First you have to create an Android project in Eclipse and add the KSOAP library. This can be done easily by right-clicking the project name in Project Explorer -> Build Path -> Configure Build Path, then going to the Libraries tab and adding the KSOAP library through "Add External JARs". Then make sure you can see it in Project Explorer. You should be able to create GUI elements in Android (there are lots of resources on the internet to help you), and be familiar with the terminology used in Android development. Next I'm going to discuss how to create the SOAP client part of the web service. For Android versions before 3.2 Most of the examples for Android SOAP clients belong to this category, so I'm pointing to some resources without discussing them further. Java Dzone: For Android versions after 3.2 If you are going to use the sample code given above on these versions, you will end up with an error. It would surely be a "Network On Main Thread Exception". This is because later versions of Android do not allow network activity on the main thread, in order to keep the main flow safe (you can find more information about this error here). So in the following example I'll show how to create a web service client using an AsyncTask to avoid that error. This client is for the web service I talked about in the previous post. Example Android Client import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.HttpTransportSE; import android.os.AsyncTask; import java.lang.reflect.Type; /** * * @author BUDDHIMA */ public class AndroidClientIF { // To store returned value until return from methods String locationString; String userDataString; // Following will change according to web service final String NAMESPACE = ""; // has to change final String URL = ""; // has to change public String receivedLocation(String input) { new wsTask1().execute(input); // wsTask1 is assigned for the functionality try { Thread.sleep(2000); // small delay until data is received - actually not a good practice, have to find an alternative like the Observer pattern } catch (Exception e) { } return locationString; } private class wsTask1 extends AsyncTask<String, Void, Void> { // For the web service method: public String getLocations(String inputLocation) @Override protected Void doInBackground(String... entry) { String METHOD = "getLocations"; String SOAPACTION = NAMESPACE + METHOD; SoapObject request = new SoapObject(NAMESPACE, METHOD); request.addProperty("inputLocation", entry[0]); try { SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.setOutputSoapObject(request); HttpTransportSE transport = new HttpTransportSE(URL); transport.call(SOAPACTION, envelope); Object response = (Object) envelope.getResponse(); locationString = response.toString(); } catch (Exception e) { // TODO: handle exception System.out.println(e.toString()); } return null; } } } I have to make special mention of the NAMESPACE and URL. They should be found in the WSDL file of the web service. For "localhost:8080" I used "10.0.2.2:8080" because that is the IP address used to reach the local machine from the Android emulator. Otherwise it should be the IP of the machine where your web service is hosted. 
The namespace can also be found as shown above. Other than that, the method name and properties are according to the web service discussed in the previous post. Summary My idea in these two posts is to share the experience I got in my 3rd year project, so I hope these posts will help newcomers cope with errors. This is not the only way of consuming a web service. You could try with JSON too. References 1. Android developer site: 2. KSOAP with AsyncTasks example: 3. AsyncTasks: 4. My previous post on creating Web services: 5. KSOAP Library:
https://buddhimawijeweera.wordpress.com/2012/05/
CC-MAIN-2017-43
en
refinedweb
I am trying to import a class into the global scope, and I am able to do it, but then when I try to extend the class I get an error saying: Type 'any' is not a constructor function type. const MyClass = require('./core/MyClass'); class MyTestClass extends MyClass { } import MyClass from './core/MyClass' export default class MyClass { } In your code you have : require('./core/MyClass'); If you don't have import / export in your file then TypeScript assumes the file is global. However depending upon your usage of the file (e.g in NodeJS or if using a bundler like webpack) the file is still a module and not global. Cool, with that out of the way you can put something on the global like: export default class MyClass { } (global as any).MyClass = MyClass; Be sure to include node.d.ts to get global. And of course I would also like to warn against default as the const / require you wrote is also wrong. You need something like const {default} = require('module/foo');. More:
https://codedump.io/share/OVdeHICAhZLn/1/import-classes-into-the-global-scope
CC-MAIN-2017-43
en
refinedweb
Detached pthreads and memory leak Can somebody please explain to me why this simple code leaks memory? I believe that since the pthreads are created in the detached state, their resources should be released immediately after their termination, but that's not the case. My environment is Qt 5.2. @#include <QCoreApplication> #include <windows.h> void *threadFunc( void *arg ) { printf("#"); pthread_exit(NULL); } int main() { pthread_t thread; pthread_attr_t attr; while(1) { printf("\nStarting threads...\n"); for(int idx=0;idx<100;idx++) { pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED); pthread_create( &thread, &attr, &threadFunc, NULL); pthread_attr_destroy ( &attr ); } printf("\nSleeping 10 seconds...\n"); Sleep(10000); } }@ !(Memory Consumption Graph)! Thanks in advance! - dheerendra I agree with you here. There is no Qt code here. It is all basic operating system pthread calls. Also, where are you releasing the memory allocated for 'thread'? Probably this is causing the memory leak. Hello Dheerendra, thank you for your answer. I believe that each thread's resources are (or should be) released after the pthread_exit() call. How am I supposed to release the memory allocated by "thread"? Dheerendra, I don't know if you were referring to this, but I tried it and the leak persists: @#include <QCoreApplication> #include <windows.h> void *threadFunc( void *arg ) { printf("#"); pthread_exit(NULL); } int main() { while(1) { printf("\nStarting threads...\n"); for(int idx=0;idx<1000;idx++) { //pthread_t thread; pthread_t * thread = (pthread_t *) malloc(sizeof(pthread_t)); pthread_attr_t * attr = (pthread_attr_t *) malloc(sizeof(pthread_attr_t)); pthread_attr_init(attr); pthread_attr_setdetachstate(attr, PTHREAD_CREATE_DETACHED); pthread_create( thread, attr, &threadFunc, NULL); pthread_attr_destroy ( attr ); free(thread); free(attr); } printf("\nSleeping 10 seconds...\n"); Sleep(10000); } }@ I discovered that if I add a slight delay of 5 milliseconds inside the for loop the leak is WAY slower: @ for(int idx=0;idx<100;idx++) { pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED); pthread_create( &thread, &attr, &threadFunc, NULL); pthread_attr_destroy ( &attr ); Sleep(5); /// <--- 5 MILLISECONDS DELAY /// }@ !(Memory leak graph with slight delay)! This is freaking me out, could somebody please tell me what is happening? How can this slight delay produce such a significant change (or alter the behavior in any way)? Any advice would be greatly appreciated. Thanks. It definitely makes sense that the slight delay shows less memory usage, as there are fewer threads running simultaneously. What is also dangerous is your call "free(thread);", as you don't know whether the thread has really finished yet (pthread_join)... A thread itself has some memory usage just because it is alive, even though you don't have any variables in your threadFunc. So there actually is no memory leakage. See ... for the stack size that each thread owns. P.S. Under Linux the default stack size per thread seems to be 2 MBytes... Maybe it's similar under Windows... So yes, I guess there is no memory leakage at all. ;) vidar, thank you for your answer, I agree with you when you say that more active threads means more memory usage. But please note that I'm creating 100 threads that do nothing (they exit immediately) and waiting 10 seconds, which is an eternity for all threads to finish and the memory to be recovered. 
Also, please note that I'm creating detached threads, not joined ones, so I believe I can safely call free after thread creation. But anyway, this leak is happening with dynamic thread and attr variables as well as with static ones; it was only a test after what Dheerendra suggested. About your second comment: again, all the threads should release all their resources when they finish within this 10-second eternity, so it doesn't matter how much memory they take; what matters is that they have to give this memory back to the system as soon as they finish. I feel I'm missing something stupid, but I'm blind right now. Hi Fracu, you are right, I didn't notice that the x axis is really big.
https://forum.qt.io/topic/36722/detached-pthreads-and-memory-leak
CC-MAIN-2017-43
en
refinedweb
In the following program I'm getting the warning -> unused variable ‘fn’. I'm following along with a book, so I don't know why it gave me that portion of code if it's unusable. Also, I don't understand this line at all. -> void(*fn)(int& a, int* b) = add; #include <iostream> void add(int& a, int* b) { std::cout << "Total: " << (a + *b) << std::endl; }; int main() { int num = 100, sum = 500; int& rNum = num; int* ptr = &num; void(*fn)(int& a, int* b) = add; std::cout << "Reference: " << rNum << std::endl; std::cout << "Pointer: " << *ptr << std::endl; ptr = &sum; std::cout << "Pointer now: " << *ptr << std::endl; add(rNum, ptr); return 0; }
https://www.daniweb.com/programming/software-development/threads/481644/pointers-and-refs
CC-MAIN-2017-43
en
refinedweb
23 September 2011 08:00 [Source: ICIS news] SINGAPORE (ICIS)--The company will shut its 750,000 tonne/year PTA plant, which is operated by its subsidiary Indorama Petrochem, in early October for a five-day turnaround, the source said. The plant is located at Banchang in Rayong. The producer will shut its 550,000 tonne/year PTA unit in Map Ta Phut, which belongs to its other subsidiary TPT Petrochemicals, in mid-October for three days, the source said. Indorama Ventures is running the two plants at full capacity, the source added. The shutdowns will not have an impact on the market as the producer’s cargoes are mostly on contract basis, market sources said.
http://www.icis.com/Articles/2011/09/23/9494567/thailands-indorama-to-shut-two-pta-plants-for-maintenance-in-october.html
CC-MAIN-2014-52
en
refinedweb
01 October 2012 07:48 [Source: ICIS news] SINGAPORE--Offers for polystyrene (PS) in the Gulf Cooperation Council (GCC) region rose by $80/tonne (€62/tonne) over September, as Saudi Arabian producers announced steep hikes pushing buyers to the sidelines, a market source said on Monday. For general purpose polystyrene (GPPS), October offers to the GCC region were at $1,830-1,840/tonne DEL (delivered) GCC, against offers for September that were at $1,750/tonne DEL. October offers to the UAE were at $1,830/tonne DEL UAE. For high-impact polystyrene, October offers were at $2,030/tonne DEL UAE and $2,040/tonne DEL Oman, up by $80/tonne from September offers. The offers were considered 'too high' by most buyers in the GCC and East Med markets, sources said. Buyers largely retreated to the sidelines and adopted a wait-and-watch stance amidst these offers, they added. Saudi producers expect demand to strengthen in the domestic markets because the annual Muslim pilgrimage of 'Haj' to Mecca in Saudi Arabia is scheduled in October, a trader based in the East Med region said. Consequently, allotment of product to export markets was likely to be lower this month on account of an increase in domestic demand, which explains the high offer levels, the trader said.
http://www.icis.com/Articles/2012/10/01/9599815/gcc-polystyrene-producers-hike-offers-by-80tonne-for.html
CC-MAIN-2014-52
en
refinedweb
hihi! here's my prompt and i think i did the program right but I can't get it to work how do i fix this program? this is the prompt btw: Create a S tudent class that stores the name, address, GPA, age, and major for a student. Additionally, the S tudent class should implement the C omparable interface. Make the comparison of S tudent objects based on the name, then the address, then the major, and finally the GPA. If the difference between two objects when compared lies in the name, return a 1 or -1 depending on which object is lessor/greater. If the difference in objects is in the address, return +/- 2. If the difference is in the major, return +/- 3. If the difference is in the GPA, return +/- 4. Only return zero if all 4 fields are the same. This behavior deviates slightly from the normal c ompareTo behavior (<0, 0, or >0), but we are ok with that for the purposes of this lab. Write a class called S tudentDriver to test your new method. Hard code several students into the main method of the S tudentDriver class with small differences in different fields. That is, make one student have one name and another a second name. Make a third student have the same name as the first, but a have the third student have a different address, etc. Make sure your c ompareTo method finds each the differences correctly. the first is my comparable and the second is my driver thanks! public class Student implements Comparable { String name; String address; String major; double gpa; public int compareTo(Object st) { Student s = (Student) st; if(this.name.compareTo(s.name) == 0) { if(this.address.compareTo(s.address) == 0) { if(this.major.compareTo(s.major) ==0) { if(this.gpa == s.gpa) { return 0; } else if(this.gpa < s.gpa) { return -4; } else { return 4; }} else { return 3 * this.major.compareTo(s.major); }} else { return 2 * this.address.compareTo(s.address); }} else { return this.name.compareTo(s.name);} }} public class StudentDriver{ public static void main(String[] args) { Student st = new Student(); Student Bob = new Student("Bob", "3 Bella Vista Court", "Phys", .8); Student Bob = new Student("Bob", "4 Bella Vista Court", "Chem", .9); Student Bob = new Student("Janice", "8 Bella Vista Court", "Eng", .7); Student Bob = new Student("Ian", "5 Bella Vista Court", "Math", .2); } }
http://www.javaprogrammingforums.com/whats-wrong-my-code/17138-have-code-need-one-line-fixed-please.html
CC-MAIN-2014-52
en
refinedweb
in reply to Re^24: Why is the execution order of subexpressions undefined? in thread Why is the execution order of subexpressions undefined? So, I ported your Perl 5 program to Haskell, and benchmarked both against the 1millionlines.dat generated with this: for (1..1_000_000) { print int(rand(10)), $/ } [download] Perl5: 4.619u 1.010s 0:05.89 95.4% 10+79736k 0+0io 0pf+0w GHC: 3.007u 0.038s 0:03.10 97.7% 323+359k 0+0io 0pf+0w {-# OPTIONS -O2 #-} import Foreign import Data.Array.IArray import Data.Array.ST import Control.Monad import Control.Monad.ST.Strict import System.IO import System.Random import System.Environment main :: IO () main = do args <- getArgs when (null args) $ error "filename required" fh <- openBinaryFile (head args) ReadMode sz <- return . fromInteger =<< hFileSize fh allocaBytes sz $ \buf -> do hGetBuf fh buf sz ixs <- foldM (build buf) [-1] [0 .. sz-1] gen <- newStdGen let len = length ixs + 1 stu :: ST s (STUArray s Int Int) stu = do arr <- newListArray (1, len) (sz-1:ixs) foldM_ (swap arr) gen [len `div` 2,(len `div` 2)-1..1] return arr display buf . elems $ runSTUArray stu build :: Ptr Word8 -> [Int] -> Int -> IO [Int] build buf ixs ix = do chr <- peek (plusPtr buf ix) :: IO Word8 return $ if chr == 0o12 then (ix:ix:ixs) else ixs swap arr g ix = do let (iy, g') = randomR (1, ix) g x1 <- readArray arr $ ix*2-1; x2 <- readArray arr $ ix*2 y1 <- readArray arr $ iy*2-1; y2 <- readArray arr $ iy*2 writeArray arr (ix*2-1) y1; writeArray arr (ix*2) y2 writeArray arr (iy*2-1) x1; writeArray arr (iy*2) x2 return g' display buf [] = return () display buf (x1:x2:xs) = do hPutBuf stdout (plusPtr buf x2) (x1 - x2) display buf xs [download] Then I shall re-write my example in Inline assembler and call it Perl. Monads are a part of haskell. Without them the language wouldn't be useful at all. They are an integral feature, and not anything like inline assembler. The only difference is that monadic functions and pure ones are separated by the type system, and that previously when haskell was not a very practical language, monads didn't exist at all. Otherwise this is a part of haskell as much as sort is a part of perl. The distinction is that in a pure FP language, Autrijus' emminently clever code would not be possible. The inclusion of the monad into the previously pure FP Haskell, is testiment to the practical restrictions that pure FP implies. There are some tasks that FP algorithms are far and away the best alternative. There are some tasks that imperative algorithms (with their inherent utilisation of side effects) are more practical. Once you remove the "no side-effects" aspect from FP, you end up with the main distinction between FP and imperative code being the syntax used. For some algorithms, a declarative syntax is the most clear and concise. For some algoriths, a recursive definition is the easiest to code, understand and maintain--whether using declarative or imperative syntax. For some algorithms, good ol'imperative loops and blocks and side-effects is just the ticket. My preference--and there is no implication in this that seeks to restrict anyone else to my choices--is for a language that supports all these styles of syntax, without artificial restristions. For me, currently, the best choice available is Perl, which is why I come back to Perl in preference to all the other languages I've tried over the last few years. 
My conclusion from Autrijus' example is that to address the constraints imposed by my posted problem (memory), the requirement is to utilise side-effects. What Autrijus demonstrated was that this can indeed be done within Haskell. So, we reach the point where the given task can be tackled in both languages, so the choice of language comes down to personal preference--mine is Perl.
http://www.perlmonks.org/index.pl?node_id=450426
CC-MAIN-2014-52
en
refinedweb
getlabel(2), fgetlabel(2) - get file sensitivity label cc [flags...] file... -ltsol [library...] #include <tsol/label.h> int getlabel(const char *path, m_label_t *label_p); int fgetlabel(int fd, m_label_t *label_p); The getlabel() function obtains the sensitivity label of the file that is named by path. Discretionary read, write or execute permission to the final component of path is not required, but all directories in the path prefix of path must be searchable. The fgetlabel() function obtains the label of an open file that is referred to by the argument descriptor, such as would be obtained by an open(2) call. The label_p argument is a pointer to an opaque label structure. The caller must allocate space for label_p by using m_label_alloc(3TSOL). EIO: An I/O error occurred while reading from or writing to the file system. Oracle Solaris Trusted Extensions Developer’s Guide The functionality described on this manual page is available only if the system is configured with Trusted Extensions.
http://docs.oracle.com/cd/E23824_01/html/821-1463/fgetlabel-2.html
CC-MAIN-2014-52
en
refinedweb
This section describes the use of the basic socket interfaces. The socket(3SOCKET) call creates a socket in the specified family and of the specified type. s = socket(family, type, protocol); If the protocol is unspecified, the system selects a protocol that supports the requested socket type. The socket handle is returned. The socket handle is a file descriptor. The family is specified by one of the constants that are defined in sys/socket.h. Constants that are named AF_suite specify the address format to use in interpreting names: AF_APPLETALK - Apple Computer Inc. Appletalk network; AF_INET6 - Internet family for IPv6 and IPv4; AF_INET - Internet family for IPv4 only; AF_PUP - Xerox Corporation PUP internet; AF_UNIX - UNIX file system. Socket types are defined in sys/socket.h. These types, SOCK_STREAM, SOCK_DGRAM, or SOCK_RAW, are supported by AF_INET6, AF_INET, and AF_UNIX. The following example creates a stream socket in the Internet family: s = socket(AF_INET6, SOCK_STREAM, 0); This call results in a stream socket. The TCP protocol provides the underlying communication. Set the protocol argument to 0, the default, in most situations. You can specify a protocol other than the default, as described in Advanced Socket Topics. Connection establishment is usually asymmetric, with one process acting as the client and the other as the server. The server first listens on a socket bound to an address; in the Internet family, the backlog argument of listen indicates how many connection requests can be queued. The second step is to accept a connection. struct sockaddr_in6 from; ... listen(s, 5); /* Allow queue of 5 connections */ fromlen = sizeof(from); newsock = accept(s, (struct sockaddr *) &from, &fromlen); The socket handle s is the socket bound to the address to which the connection request is sent. The second parameter of listen(3SOCKET) specifies the maximum number of outstanding connections that might be queued. The flags argument of send(3SOCKET) and recv(3SOCKET), defined in sys/socket.h, can be specified as a nonzero value if one or more of the following is required: MSG_OOB - send and receive out-of-band data; MSG_PEEK - look at data without reading; MSG_DONTROUTE - send data without routing packets. Out-of-band data is specific to stream sockets. When MSG_PEEK is specified with a recv(3SOCKET) call, any data present is returned to the user, but treated as still unread. The next read(2) or recv(3SOCKET) call on the socket returns the same data. The option to send data without routing packets applied to the outgoing packets is currently used only by the routing table management process. Example 7-1 Accepting an Internet Stream Connection (Server) #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netdb.h> #include <stdio.h> #define TRUE 1 /* * This program creates a socket and then begins an infinite loop. * Each time through the loop it accepts a connection and prints * data from it. When the connection breaks, or the client closes * the connection, the program accepts a new connection. */ main() { int sock, length; struct sockaddr_in6 server; int msgsock; char buf[1024]; int rval; /* Create socket. */ sock = socket(AF_INET6, SOCK_STREAM, 0); if (sock == -1) { perror("opening stream socket"); exit(1); } /* Bind socket using wildcards.*/ bzero (&server, sizeof(server)); server.sin6_family = AF_INET6; server.sin6_addr = in6addr_any; server.sin6_port = 0; if (bind(sock, (struct sockaddr *) &server, sizeof server) == -1) { perror("binding stream socket"); exit(1); } /* Find out assigned port number and print it out. */ length = sizeof server; if (getsockname(sock,(struct sockaddr *) &server, &length) == -1) { perror("getting socket name"); exit(1); } printf("Socket port #%d\n", ntohs(server.sin6_port)); /* Start accepting connections. 
*/ listen(sock, 5); do { msgsock = accept(sock,(struct sockaddr *) 0,(int *) 0); if (msgsock == -1) perror("accept"); else do { memset(buf, 0, sizeof buf); if ((rval = read(msgsock,buf, sizeof(buf))) == -1) perror("reading stream message"); if (rval == 0) printf("Ending connection\n"); else /* assumes the data is printable */ printf("-->%s\n", buf); } while (rval > 0); close(msgsock); } while(TRUE); /* * Since this program has an infinite loop, the socket "sock" is * never explicitly closed. However, all sockets are closed * automatically when a process is killed or terminates normally. */ exit(0); } To initiate a connection, use the client program in Example 7-2. Example 7-2 Internet Family Stream Connection (Client) #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netdb.h> #include <stdio.h> #define DATA "Half a league, half a league . . ." /* * This program creates a socket and initiates a connection with * the socket given in the command line. Some data are sent over the * connection and then the socket is closed, ending the connection. * The form of the command line is: streamwrite hostname portnumber * Usage: pgm host port */ main(int argc, char *argv[]) { int sock, errnum, h_addr_index; struct sockaddr_in6 server; struct hostent *hp; char buf[1024]; /* Create socket. */ sock = socket( AF_INET6, SOCK_STREAM, 0); if (sock == -1) { perror("opening stream socket"); exit(1); } /* Connect socket using name specified by command line. */ bzero (&server, sizeof (server)); server.sin6_family = AF_INET6; hp = getipnodebyname(argv[1], AF_INET6, AI_DEFAULT, &errnum); /* * getipnodebyname returns a structure including the network address * of the specified host. */ if (hp == (struct hostent *) 0) { fprintf(stderr, "%s: unknown host\n", argv[1]); exit(2); } h_addr_index = 0; while (hp->h_addr_list[h_addr_index] != NULL) { bcopy(hp->h_addr_list[h_addr_index], &server.sin6_addr, hp->h_length); server.sin6_port = htons(atoi(argv[2])); if (connect(sock, (struct sockaddr *) &server, sizeof (server)) == -1) { if (hp->h_addr_list[++h_addr_index] != NULL) { /* Try next address */ continue; } perror("connecting stream socket"); freehostent(hp); exit(1); } break; } freehostent(hp); if (write( sock, DATA, sizeof DATA) == -1) perror("writing on stream socket"); close(sock); exit(0); } You can add support for one-to-one SCTP connections to stream sockets. The following example code (Example 7-3) adds a -p option to an existing program, enabling the program to specify the protocol to use. The select(3C) routine provides a synchronous multiplexing scheme. It takes pointers to sets of file descriptors as arguments; you can designate any of these pointers as a properly cast null. Each set is a structure that contains an array of long integer bit masks. Set the size of the array with FD_SETSIZE. Example 7-4 Using select(3C) to Check for Pending Connections. The SIGIO and SIGURG signals, which are described in Advanced Socket Topics, provide asynchronous notification of output completion, input availability, and exceptional conditions.
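The C examples above map almost call-for-call onto Python's socket module; the short sketch below (not part of the original manual page) runs the same socket/bind/listen/accept sequence and uses the Python counterpart of select(3C) to check for a pending connection:

import select
import socket

# Create an IPv6 stream (TCP) socket, bind it to the wildcard address with
# an ephemeral port, and allow a queue of 5 pending connections.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.bind(("", 0))
sock.listen(5)
print("Socket port #%d" % sock.getsockname()[1])

# select() reports the listening socket as readable when accept() will not block.
readable, _, _ = select.select([sock], [], [], 10.0)
if sock in readable:
    msgsock, peer = sock.accept()
    data = msgsock.recv(1024)
    if data:
        print("-->%s" % data.decode(errors="replace"))
    msgsock.close()
sock.close()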
http://docs.oracle.com/cd/E26502_01/html/E35299/sockets-18552.html
CC-MAIN-2014-52
en
refinedweb
Created on 2008-10-11.20:59:13 by fwierzbicki, last changed 2008-10-27.18:17:18 by fwierzbicki. Nested generator expressions within a function like: def x(): ( a for b in ( c for d in e )) fail to compile with an "internal compiler error" Note: this bug was originally uncovered in #1114 Sounds like the issue Tobias noticed a few weeks ago in r5351, you might wanna ping him on it if you haven't already Thanks for the heads up Philip, I haven't started working on this (other than turning the NPE into a useful Python exception) -- so I'll keep it that way until I hear from Tobias. #1151 was marked as a dup of this bug -- so we should see if pyke works after this is fixed (see #1151) Tobias (thobe) has fixed this issue.
http://bugs.jython.org/issue1150
CC-MAIN-2014-52
en
refinedweb
Canonical Voices: Quickly: Rebooted2012-11-16T09:00:48ZMichael [email protected]<p><a href=""><img class="alignleft size-full wp-image-1208" title="quickly-logo" src="" alt="" width="192" height="192" /></a>Shortly after the <a href="" target="_blank">Ubuntu App Showdown</a> earlier this year, Didier Roche and Michael Terry kicked off a <a href="" target="_blank">series of discussions</a> about a ground-up re-write of Quickly. Not only would this fix many of the complications app developers experienced during the Showdown competition, but it would also make it easier to <a title="Quickly Gtk update" href="" target="_blank">write tools</a> around Quickly itself.</p> <p>Unfortunately, neither Didier nor Michael were going to have much time this cycle to work on the reboot. We had a <a href="" target="_blank">UDS session</a> to discuss the initiative, but we were going to need community contributions in order to get it done.<span id="more-1363"></span></p> <h2>JFDI</h2> <p>I was very excited about the prospects of a Quickly reboot, but knowing that the current maintainers weren’t going to have time to work on it was a bit of a concern. So much so, that during my 9+ hour flight from Orlando to Copenhagen, I decided to <a href="" target="_blank">have a go at it</a> myself. Between the flight, a layover in Frankfurt without wifi, and a few late nights in the Bella Sky hotel, I had the start of something <a href="" target="_blank">promising enough to present</a> during the UDS session. I was pleased that both Didier and Michael liked my approach, and gave me some very <a href="" target="_blank">good feedback</a> on where to take it next. Add another 9+ hour flight home, and I had a foundation on which a reboot can begin.</p> <h2>Where is stands now</h2> <p>My code branch is now a part of the <a href="" target="_blank">Quickly project</a> on Launchpad, you can grab a copy of it by running <em>bzr branch lp:quickly/reboot</em>. The code currently provides some basic command-line functionality (including shell completion), as well as base classes for Templates, Projects and Commands. I’ve begun porting the <a href="" target="_blank"><em>ubuntu-application</em></a> template, reusing the current project_root files, but built on the new foundation. Currently only the ‘create’ and ‘run’ commands have been converted to the new object-oriented command class.</p> <p>I also have examples showing how this new approach will allow template authors to easily sub-class Templates and Commands, by starting both a port of the <a href="" target="_blank"><em>ubuntu-cli</em></a> template, and also creating an <a href="" target="_blank"><em>ubuntu-git-application</em></a> template that uses git instead of bzr.</p> <h2>What comes next</h2> <p>This is only the very beginning of the reboot process, and there is still a massive amount of work to be done. For starters, the whole thing needs to be converted from Python 2 to Python 3, which should be relatively easy except for one area that does some import trickery (to keep Templates as python modules, without having to install them to PYTHON_PATH). The Command class also needs to gain argument parameters, so they can be easily introspected to see what arguments they can take on the command line. 
And the whole thing needs to gain a structured meta-data output mechanism so that non-Python application can still query it for information about available templates, a project’s commands and their arguments.</p> <h2>Where you come in</h2> <p>As I said at the beginning of the post, this reboot can only succeed if it has community contributions. The groundwork has been laid, but there’s a lot more work to be done than I can do myself. Our 13.04 goal is to have all of the existing functionality and templates (with the exception of the Flash template) ported to the reboot. I can use help with the inner-working of Quickly core, but I absolutely <strong>need</strong> help porting the existing templates.</p> <p>The new Template and Command classes make this much easier (in my opinion, anyway), so it will mostly be a matter of copy/paste/tweak from the old commands to the new ones. In many cases, it will make sense to sub-class and re-use parts of one Template or Command in another, further reducing the amount of work.</p> <h2>Getting started</h2> <p>If you are interested in helping with this effort, or if you simply want to take the current work for a spin, the first thing you should do is grab the code (<em>bzr branch lp:quickly/reboot</em>). You can call the quickly binary by running <em>./bin/quickly</em> from within the project’s root.</p> <p>Some things you can try are:</p> <blockquote><p>./bin/quickly create ubuntu-application /tmp/foo</p></blockquote> <p>This will create a new python-gtk project called ‘foo’ in /tmp/foo. You can then call:</p> <blockquote><p>./bin/quickly -p /tmp/foo run</p></blockquote> <p>This will run the applicaiton. Note that you can use -p /path/to/project to make the command run against a specific project, without having to actually be in that directory. If you are in that directory, you won’t need to use -p (but you will need to give the full path to the new quickly binary).</p> <p>If you are interested in the templates, they are in ./data/templates/, each folder name corresponds to a template name. The code will look for a class called Template in the base python namespace for the template (in ubuntu-application/__init__.py for example), which must be a subclass of the BaseTemplate class. You don’t have to define the class there, but you do need to import it there. Commands are added to the Template class definition, they can take arguments at the time you define them (see code for examples), and their .run() method will be called when invoked from the command line. Unlike Templates, Commands can be defined anywhere, with any name, as long as they subclass BaseCommand and are attached to a template.</p>Michael Hall: App Developer Q&A2012-08-01T17:26:10ZMichael [email protected]<p>You can watch the App Developer Q&A live stream starting at 1700 UTC (or watch the recording of it afterwards):</p> <p></p> <p>Questions should be asked in the <a href="" target="_blank">#ubuntu-on-air</a> IRC channel on freenode.</p> <p></p> <p>You can ask me anything about app development on Ubuntu, getting things into the Software Center, or the recent Ubuntu App Showdown competition.<> (6pm London, 1pm US Eastern, 10am US Pacific). Because it will be an On-Air hangout, I won’t have a link until I start the session, but I will post it here on my blog before it starts. 
For IRC, I plan on using: Quickly Gtk update2012-07-30T16:52:32ZMichael [email protected]<p>As part of the <a href="" target="_blank">Ubuntu App Showdown</a> I started on a small project to provide a nice <a title="My App Developer Showdown Entry" href="" target="_blank">GUI frontend to Quickly</a>..</p> <p></p> <p><span id="more-1230"></span></p> <h2>Project Management</h2> <p <a href="" target="_blank">Observer design pattern</a>.</p> <h2>Zeitgeist event monitoring</h2> <p>The other big development was integrating Quickly-Gtk with <a href="" target="_blank">Zeitgeist</a>..</p> <h2>The future of Quickly-Gtk</h2> <p>While I was able to get a lot done with Quickly-Gtk, the underlying Quickly API and command line really weren’t designed to support this kind of use. However, as a result of what we learned during the App Showdown, <a href="" target="_blank">Didier Roche</a> has begun planning a <a href="" target="_blank">reboot of Quickly</a>, which will improve both it’s command-line functionality, and it’s ability to be used as a callable library for apps like Quickly-Gtk. If you are interested in the direction of Quickly’s development, I urge you to join in those planning meetings.</p> <p> </p> <p>Launchpad Project: <a href=""></a></p> <p> </p>Michael Hall: My App Developer Showdown Entry2012-06-22T21:39:20ZMichael [email protected]<p>As you’ve<a href="" target="_blank"> probably heard</a> already, Ubuntu is running an <a href="" target="_blank">App Developer Showdown</a> competition where contestants have three weeks to build an Ubuntu app from scratch. The rules are simple: It has to be new code, it has to run on Ubuntu, and it has to be submitted to the Software Center. The more you use Ubuntu’s <a href="" target="_blank">tools</a>, the better your chances of winning will be. This week we ran a series of <a href="" target="_blank">workshops</a> introducing these tools and how they can be used. It all seemed like so much fun, that I’ve decided to participate with my own submission!<span id="more-1199"></span></p> <p>Now 2 our of the 6 judges for this competition are my immediate co-workers, so let me just start off by saying that <strong>I will not be eligible</strong> for any of the prizes. But it’s still a fun and interesting challenge, so I’m going to participate anyway. But what is my entry going to be? Well in my typical fashion of building <a title="Charming Django with Naguine" href="" target="_blank">tools</a> for <a title="Simplified Unity Lens Development with Singlet" href="" target="_blank">tools</a>, I’ve decided to write a GUI wrapper on to of <a href="" target="_blank">Quickly</a>, using Quickly.</p> <p><a href=""><img class=" wp-image-1201 alignright" title="mockup_create" src="" alt="" width="396" height="306" /></a>Before I started on any code, I first wanted to brainstorm some ideas about the interface itself. For that I went back to my favorite mockup tool: <a title="Pencil for easy UI mockups" href="" target="_blank">Pencil</a>..</p> <p><a href=""><img class="alignleft size-medium wp-image-1203" title="Screenshot from 2012-06-22 15:09:59" src="" alt="" width="300" height="163" /></a>Now, I’ve never been a fan of GUI builders. Even back when I was writing Java/Swing apps, and GUI builders were all the rage, I never used them. 
I didn’t use one for <a title="Hello Unity" href="" target="_blank">Hello Unity</a>,.</p> <p><a href=""><img class="alignright size-medium wp-image-1206" title="Screenshot from 2012-06-22 16:11:00" src="" alt="" width="300" height="197" /></a.</p> <p.</p> <p><a href=""><img class="alignnone size-large wp-image-1207" title="Screenshot from 2012-06-22 16:11:52" src="" alt="" width="640" height="141" /></a></p> <p>And thanks to the developer tools available in Ubuntu, I was able to accomplish all of this in only a few hours of work.</p> <p <a href="" target="_blank">package in my PPA</a>.</p> <p>Building an app in 4 hours then accidentally building a proper package and uploading it to a PPA, who’d have thought we’d ever make it that easy? I hope you all are having as much fun and success in your showdown applications as I am.<: Goodbye And Thanks For All the Apps: Ubuntu App Developer Week – Day 5 And Wrap-Up2011-09-13T16:45:[email protected]<p><img class="aligncenter size-full wp-image-1308" title="Ubuntu App Developer Week" src="" alt="" /></p> <p>Another edition of the Ubuntu App Developer Week and another amazing knowledge sharing fest around everything related to application development in Ubuntu. Brought to you by a range of the best experts in the field, here’s just a sample of the topics they talked about: <em</em>… and more. Oh my!</p> <p>And a pick of what they had to say:</p> <blockquote><p>We believe that to get Ubuntu from 20 million to 200 million users, we need more and better apps on Ubuntu<br /> <a href="">Jonathan Lange</a> on making Ubuntu a target for app developers</p></blockquote> <blockquote><p>Bazaar is the world’s finest revision control system<br /> <a href="">Jonathan Riddell</a> on Bazaar</p></blockquote> <blockquote><p>So you’ve got your stuff, wherever you are, whichever device you’re on<br /> <a href="">Stuart Langridge</a> on Ubuntu One</p></blockquote> <blockquote><p>Oneiric’s EOG and Evince will be gesture-enabled out of the box<br /> <a href="">Jussi Pakkanen</a> on multitouch in Ubuntu 11.10</p></blockquote> <blockquote><p>I control the upper right corner of your screen <img src="" alt=";-)" class="wp-smiley" /><br /> <a href="">Ted Gould</a> on Indicators</p></blockquote> <p>If you happened to miss any of the sessions, you’ll find the logs for all of them on the <a href="">Ubuntu App Developer Week page</a>, and the summaries for each day on the links below:</p> <ul> <li><a href="">Day 1 Summary</a></li> <li><a href="">Day 2 Summary</a></li> <li><a href="">Day 3 Summary</a></li> <li><a href="">Day 4 Summary</a></li> <li>Day 5 Summary (this post)</li> </ul> <h2>Ubuntu App Developer Week – Day 5 Summary</h2> <p>The last day came with a surprise: an extra session for all of those who wanted to know more about Qt Quick and QML. Here are the summaries:</p> <h3>Getting A Grip on Your Apps: Multitouch on GTK apps using Libgrip</h3> <p><em>By <a title="LaunchpadHome" href="">Jussi Pakkanen</a></em></p> <p><img class="alignleft" title="Jussi Pakkanen" src="" alt="" width="64" height="64" />In his session, Jussi talked about one of the most interesting technologies where Ubuntu is leading the way in the open source world: multitouch. 
Walking the audience through the <a href="">Grip Tutorial</a>,.</p> <p>Check out the <a href="">session log</a>.<em> </em></p> <h3>Creating a Google Docs Lens</h3> <p><em>By <a title="LaunchpadHome" href="">Neil Patel</a></em></p> <p><img class="alignleft" title="Neil Patel" src="" alt="" width="64" height="64" /.</p> <p>Check out the <a href="">session log</a>.<em> </em></p> <h3>Practical Ubuntu One Files Integration</h3> <p><em>By <a title="LaunchpadHome" href="">Michael Terry</a><br /> </em></p> <p><a href=""><img class="alignleft" title="Michael Terry" src="" alt="" width="64" height="64" /></a!</p> <p>Check out the <a href="">session log</a> and Michael’s <a href="">awesome notes</a>.</p> <h3>Publishing Your Apps in the Software Center: The Business Side</h3> <p><em>By <a title="LaunchpadHome" href="">John Pugh</a></em></p> <p><a href=""><img class="alignleft" title="John Pugh" src="" alt="" width="64" height="64" /><.</p> <p>Check out the <a href="">session log</a>.</p> <h3>Writing an App with Go</h3> <p><em>By <a title="LaunchpadHome" href="">Gustavo Niemeyer</a></em></p> <p><a href=""><img class="alignleft" title="Gustavo Niemeyer" src="" alt="" width="64" height="64" /></a>Gustavo’s enthusiasm for <a href="">Go<.</p> <p>Check out the <a href="">session log</a>.</p> <h3>Qt Quick At A Pace</h3> <p><em>By <a title="LaunchpadHome" href="">Donald Carr</a></em></p> <p><a href=""><img class="alignleft" title="Donald Carr" src="" alt="" width="64" height="64" /></a <a href="">qtmediahub</a> and <a href="">Qt tutorial examples</a>, he explored QML’s capabilities and offered good practices for succesfully developing QML-based projects.</p> <p>Check out the <a href="">session log</a>.</p> <h2>Wrapping Up</h2> <p>Finally, if you’ve got any feedback on UADW, on how to make it better, things you enjoyed or things you believe should be improved, your comments will be very appreciated and useful to tailor this event to your needs.</p> <p>Thanks a lot for participating. I hope you enjoyed it as much as I did, and see you again in 6 months time for another week full with app development goodness!<a href=""><br /> <: All Good Things Come To An End: Ubuntu App Developer Week – Day 42011-09-09T19:44:[email protected]<h2>Ubuntu App Developer Week – Day 4 Summary</h2> <p>Last day of UADW! 
While we’re watching the final sessions, here’s what happened yesterday:</p> <h3>Creating an App Developer Website: developer.ubuntu.com</h3> <p><em>By <a title="LaunchpadHome" href="">John Oxton</a> and <a title="LaunchpadHome" href="">David Planella</a></em></p> <p><img class="alignleft" title="John Oxton" src="" alt="" width="64" height="64" /><img class="alignleft" title="David Planella" src="" alt="" width="64" height="64" /.</p> <p>Check out the session log <a href="">here</a>.<em> </em></p> <h3>Rapid App Development with Quickly</h3> <p><em>By <a title="LaunchpadHome" href="">Michael Terry</a></em></p> <p><img class="alignleft" title="Michael Terry" src="" alt="" width="64" height="64" /.</p> <p>Check out the session log <a href="">here</a>.<em> </em></p> <h3>Developing with Freeform Design Surfaces: GooCanvas and PyGame</h3> <p><em>By <a title="LaunchpadHome" href="">Rick Spencer</a><br /> </em></p> <p><a href=""><img class="alignleft" title="Rick Spencer" src="" alt="" width="64" height="64" /></a.</p> <p>Check out the session log <a href="">here</a>.</p> <h3>Making your app appear in the Indicators</h3> <p><em>By <a title="LaunchpadHome" href="">Ted Gould</a></em></p> <p><a href=""><img class="alignleft" title="Ted Gould" src="" alt="" width="64" height="64" /><!</p> <p>Check out the session log <a href="">here</a>.</p> <h3>Will it Blend? Python Libraries for Desktop Integration</h3> <p><em>By <a title="LaunchpadHome" href="">Marcelo Hashimoto</a></em></p> <p><a href=""><img class="alignleft" title="person-logo" src="" alt="" width="64" height="64" /></a>Marcelo shared his experience acquired with <a href="">Polly</a>,.</p> <p>Check out the session log <a href="">here</a>.</p> <h2>The Day Ahead: Upcoming Sessions for Day 4</h2> <p>Check out the first-class lineup for the last day of UADW:</p> <p><a href="">16.00 UTC</a> – <strong>Getting A Grip on Your Apps: Multitouch on GTK apps using Libgrip </strong><strong><em></em></strong></p> <p><img class="alignleft size-full wp-image-1294" title="Jussi Pakkanen" src="" alt="" /> Multitouch is everywhere these days, and now on your desktop as well -brought to you by developers such as <a title="LaunchpadHome" href="">Jussi Pakkanen</a>, who’ll guide through using libgrip to add touch support to your GTK+ apps. Learn how to use this cool new library in your own software!</p> <p><a href="">17:00 UTC</a> – <strong>Creating a Google Docs Lens<em></em></strong></p> <p><img class="alignleft size-full wp-image-1290" title="Neil Patel" src="" alt="" />Lenses are ways of presenting data coming from different sources in Unity. <a title="LaunchpadHome" href="">Neil Patel</a> knows Lenses inside out and will present a practical example of how to create a Google Docs one. Don’t miss this session on how to put two cool technologies together!</p> <p><a href="">18:00 UTC</a><strong> – <em></em>Practical Ubuntu One Files Integration</strong></p> <p><a href=""><img class="alignleft" title="Michael Terry" src="" alt="" width="64" height="64" /></a>Yet again the Deja-dup rockstar and UADW regular <a title="LaunchpadHome" href="">Michael Terry</a> will be sharing his deep knowledge on developing apps. 
This time it’s about adding cloud support to applications: integrating with the Ubuntu One files API.</p> <p><a href="">19:00 UTC</a> – <strong><em></em>Publishing Your Apps in the Software Center: The Business Side</strong></p> <p><a href=""><img class="size-full wp-image-1291 alignleft" title="John Pugh" src="" alt="" /></a>Closing the series of sessions around publishing apps in the Software Centre, we’ll have the luxury of having <a title="LaunchpadHome" href="">John Pugh</a>, from the team that brings you commercial apps into the Software Centre and who’ll be talking about the business side of things.</p> <p><a href="">20:00 UTC</a><strong><em></em> – Writing an App with Go</strong></p> <p><a href=""><img class="size-full wp-image-1292 alignleft" title="Gustavo Niemeyer" src="" alt="" /></a>Go is the coolest kid around in the world of programming languages. <a title="LaunchpadHome" href="">Gustavo Niemeyer</a> is very excited about it and will be showing you how to write an app using this language from Google. Be warned, his enthusiasm is contagious!<a title="LaunchpadHome" href=""><br /> </a></p> <p><a href="">20:00 UTC</a><strong><em></em> – Qt Quick At A Pace</strong></p> <p><a href=""><img class="size-full wp-image-1293 alignleft" title="Donald Carr" src="" alt="" /></a>A last minute and very welcome addition to the schedule. In his session <a title="LaunchpadHome" href="">Donald Carr </a>will introduce you to Qt Quick to create applications with Qt Creator and QML, the new declarative language that brings together designers and developers.<: Cha-ching!2011-08-17T12:36:53ZRick [email protected]<a href=""> <br /></a> <br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5641894786340952802" border="0" /></a>So, today I uploaded a special version of Photobomb to <a href="">my PPA</a>. It's special because I consider this my first *complete* release. There are some things that I would like to be better in it. For example: <br /><ul><li>I wish you could drag into from the desktop or other apps.</li><li>I wish that the app didn't block the UI when you click on the Gwibber tab when Gwibber isn't working.</li><li>I wish the local files view had a watch on the current directory so that it refreshed automatically.</li><li>I wish inking was smoother.</li><li>I wish you could choose the size of the image that you are making.</li><li>I wish that you could multi-select in it.</li></ul>But alas, if I wait until it has everything and no bugs, I'll never release. <br /> <br />So, I am releasing Photobomb in my PPA. It is a free app. You can use it for free, and it's Free software. So, enjoy. <br /> <br /. <br /> <br />The code is GPL v3, so people can enhance it, or generally do whatever they think is useful for them (including giving it a way, or using it to make money). <br /> <br />I found it remarkably easy to submit photobomb to the Software Center. I just used the <a href="">myapps.ubuntu portal</a>, and it all went very smoothly. Really just a matter of filling in some forms. Of course, since I used Quickly to build Photobomb, Quickly took care of the packaging for me, so that simplified it loads. <br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5641895137899035186" border="0" /></a> <br />I'll keep everyone posted on how it goes! 
<br />David Planella: Ubuntu App Developer Week – Day 42011-04-15T19:12:[email protected]<h2>Ubuntu App Developer Week – Day 4 Summary</h2> <p>Ramping:</p> <h3>Qt Quick: Elements/Animations/States</h3> <p><em>By Jürgen Bocklage-Ryannel</em></p> <p.</p> <p><em><em>Check out the session log <a href="">here</a>.</em></em></p> <h3>Qt Quick: Rapid Prototyping</h3> <p>By Jürgen Bocklage-Ryannel</p> <p.<br /> <strong></strong></p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h3>Rapid App Development with Quickly</h3> <p>By <a title="LaunchpadHome" href="">Michael Terry</a></p> <p ‘submitubuntu’ command to help getting applications into the Software Center. All that being set straight, he then showed how to use Quickly and what it can do: from creating the first example application, to modifying the UI with ‘quickly design’ and Glade, into debugging and finally packaging.</p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h3>Getting Your App in the Distro: the Application Review Process</h3> <p>By <a title="LaunchpadHome" href="">Allison Randal</a></p> <p.</p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h3>Adding Indicator Support to your Apps</h3> <p>By <a title="LaunchpadHome" href="">Ted Gould</a></p> .</p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h3>Using Launchpad to get your application translated -</h3> <p>By <a title="LaunchpadHome" href="">Henning Eggers</a></p> <p>As a follow up to the talk on how to <a href="">add native language support to your applications</a>.</p> <p><em>Check out the session log <a href="">here</a>.</em></p> <h2>The Day Ahead: Upcoming Sessions for Day 5</h2> <p>The last day and the quality and variety of the sessions is still going strong. Check out the great content we’ve prepared for you today:</p> <p><a href="">16:00 UTC</a><br /> <strong>Qt Quick: Extend with C++</strong> – Jürgen Bocklage-Ryannel<br /> Sometimes you would like to extend Qt Quick with your own native extension. 
Jürgen will show you some ways how to do it.</p> <p><a href="">17:00 UTC</a><br /> <strong>Phonon: Multimedia in Qt -</strong> <a title="LaunchpadHome" href="">Harald Sitter</a><br />.</p> <p><a href="">18:00 UTC</a><br /> <strong>Integrating music applications with the Sound Menu -</strong> <a title="LaunchpadHome" href="">Conor Curran</a><br />.</p> <p><a href="">19:00 UTC</a><br /> <strong>pkgme: Automating The Packaging Of Your Project -</strong> <a title="LaunchpadHome" href="">James Westby</a><br />.</p> <p><a href="">20:00 UTC</a><br /> <strong>Unity Technical Q&A -</strong> <a title="LaunchpadHome" href="">Jason Smith</a> and <a title="LaunchpadHome" href="">Jorge Castro</a><br />.</p> <p><a href="">21:00 UTC</a><br /> <strong>Lightning Talks -</strong> <a title="LaunchpadHome" href="">Nigel Babu</a><br /> As the final treat to close the week, Nigel has organized a series of lightning talks to showcase a medley of cool applications: <em>CLI Companio</em>n, <em><a href="">Unity Book Lens</a></em>, <em>Bikeshed</em>, <em>circleoffriends</em>, <em><a href="">Algorithm School</a></em>, <em><a href="">Sunflower FM</a></em>, <em><a href="">Tomahawk Player</a></em>, <em>Classbot</em> – your app could be in this list next time, do check them out!<: Quickly Tutorial for Natty: DIY Media Player2011-02-04T14:57:49ZRick [email protected]<a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560348583979266722" border="0" /></a><span></span>I started working on a chapter for the <a href="">Ubuntu Developers' Manual</a>. The chapter will be on how to use media in your apps. That chapter will cover:<br /><ul><li>Playing a system sound</li><li>Showing an picture</li><li>Playing a sound file</li><li>Playing a video</li><li>Playing from a web cam</li><li>Composing media</li></ul>I created an app for demonstrating some of these things in that chapter. After I wrote the app, I realized that it shows a lot of different parts of app writing for Ubuntu:<br /><ul><li>Using Quickly to get it all started</li><li>Using Glade to get the UI laid out</li><li>Using quickly.prompts.choose_directory() to prompt the user</li><li>Using os.walk for iterating through a directory </li><li>Using a dictionary<br /></li><li>Using DictionaryGrid to display a list</li><li>Using MediaPlayerBox to play videos or Sounds</li><li>Using GooCanvas to compose a singe image out of images and text</li><li>Using some PyGtk trickery to push some UI around</li></ul>A pretty decent amount of overlap with the chapter, but not a subset or superset. So I am writing a more full tutorial to post here, and then I can pull out the media specific parts for the chapter later. Certain things will change as we progress with Natty, so I will make edits to this posting as those occur. So without Further Ado ...<br /><br /><span><span>Simple Player Tutorial</span></span><br /><span>Introduction</span><br />In this tutorial you will build a simple media player. It will introduce how to start projects, edit UI, and write the code necessary to play videos and songs in Ubuntu.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332293910206642" border="0" /></a><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332295518561490" border="0" /></a>The app works by letting the user choose a directory. Simple Player then puts all the media into a list. 
The user can choose media to play from that list.<br /><br />This tutorial uses Quickly, which is an easy and fun way to manage application creation, editing, packaging, and distribution using simple commands from the terminal. Don't worry if you are used to using an IDE for writing applications, Quickly is super easy to use.<br /><br /><span>Requirements</span><br /.<br /><br />You also need Quickly. To install Quickly:<br /><br />$sudo apt-get install quickly python-quickly.widgets<br /><br />This tutorial also uses a yet to be merged branch of Quickly Widgets. In a few weeks, you can just install quickly-widgets, but for now, you'll need to get the branch:<br /><br />$bzr branch lp:~rick-rickspencer3/quidgets/natty-trunk<br /><br />Note that these are alpha versions, so there may be bugs.<br /><span><br />Caution About Copy and Pasting Code</span><br /.<br /><br />If you're going to copy and paste, you might want to use the code for the tutorial project in launchpad, from this:<br /><a href="">Link to Code File in the Launchpad Project</a><br /><br />You can also look at the tutorial in text format this:<br /><a href="">Link to this tutorial in text for in Launchpad</a><br /><br /><span><span>Creating the Application</span></span><br />You get started by creating a Quickly project using the ubuntu-application template. Run this command in the terminal:<br />$quickly create ubuntu-application simple-player<br /><br />This will create and run your application for you.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332302342171954" border="0" /></a><br />Notice that the application knows it is called Simple Player, and the menus and everything work.<br /><br />To edit and run the application, you need to use the terminal from within the simple-player directory that was created. So, change into that directory for running commands:<br /><br />$cd simple-player<br /><br /><span><span>Edit the User Interface</span></span><br />We'll start by the User Interface with the Glade UI editor. We'll be adding a lot of things to the UI from code, so we can't build it all in Glade. But we can do some key things. We can:<br /><ul><li>Layout the HPaned that separates the list from the media playing area</li><li>Set up the toolbar</li></ul><span>Get Started</span><br />To run Glade with a Quickly project, you have to use this command from within your project's directory:<br />$quickly design<br /><br />If you just try to run Glade directly, it won't work with your project.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332304247403250" border="0" /></a>Now that Glade is open, we'll start out by deleting some of the stuff that Quickly put in there automatically. Delete items by selecting them and hitting the delete key. So, delete:<br /><ul><li>label1</li><li>image1</li><li>label2</li></ul>This will leave you with a nice blank slate for your app:<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560332310174272674" border="0" /></a>Now, we want to make sure the window doesn't open too small when the app runs. Scroll to the top of the TreeView in the upper right of Glade, and select simple_player_window. Then in the editor below, click the common tab, and set the Width Request and Height Request.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333521250560258" border="0" /></a>There's also a small bug in the quickly-application template, but it's easy to fix. 
Select statusbar1, then on the packing tab, set "Pack type" to "End".<br /><br />Save your changes or they won't show up when you try running the app! Then see how your changes worked by using the command:<br />$quickly run<br /><br />A nice blank window, ready for us to party on!<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333525273762210" border="0" /></a><span>Adding in Your Widgets</span><br /><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333531070608546" border="0" /></a><br />Make sure the HPaned starts out with an appropriate division of space. Do this by going to the General tab, and setting an appropriate number of pixels in the Position property.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333532514187650" border="0" /></a>The user should be able to scroll through the list, so click on ScrolledWindow in the toolbar, and then click in the left hand part of the HPaned to place it in there.<br /><br />Now add a toolbar. Find the toolbar icon in the toolbox, click on it and click in the open space at the top. This will cause that space to collapse, because the toolbar is empty by default.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560333543170120722" border="0" /></a>To add the open button, click the edit button (looks like a pencil) in Glade's toolbar. This will bring up the toolbar editing dialog. Switch to the Hierarchy tab, and click "Add". This will add a default toolbar button.<br /><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334474103485074" border="0" /></a><br /><br />Now if you use $quickly run again, you'll see that your toolbar button is there.<br /><br /><span><span>Coding the Media List<br /><span>Making the Open Button Work</span><br /></span></span>The open button will have an important job. It will respond to a click from the user, offer a directory chooser, and then build a list of media in that directory. So, it's time to write some code.<br /><br />You can use:<br />$quickly edit &<br /><br />This will open your code in Gedit, the default text and code editor for Ubuntu.<br /><br />Switch to the file called "simple-player". This is the file for your main window, and the file that gets run when users run your app from Ubuntu.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334477613723234" border="0" /></a>First let's make sure that the open button is hooked up to the code. Create a function to handle the signal that looks like this (and don't forget about proper space indenting in Python!):<br /><pre><code><br /> def openbutton_clicked_event(self, widget, data=None):<br /> print "OPEN"<br /><br /><br /></code></pre>Put this function under "finish_initializing", but above "on_preferences_changed". Save the code, run the app, and when you click the button, you should see "OPEN" printed out to the terminal.<br /><br />How did this work? Your Quickly project used the auto-signals feature to connect the button to the event. To use auto-signals, simply follow this pattern when you create a signal handler:<br /><pre><code><br />def widgetname_eventname_event(self, widget, data=None):<br /><br /></code></pre>Sometimes a signal handler will require a different signature, but (self, widget, data=None) is the most common.<br /><br /><span><span><span>Getting the Directory from the User</span></span><br /></span>We'll use a convenience function built into Quickly Widgets to get the directory info from the user.
First, go to the import section of the simple-player file, and around line 11 add an import statement:<br /><br /><pre><code><br />from quickly import prompts<br /><br /></code></pre></pre><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334488926372962" border="0" /></a>Now when you run the app you can select a directory, and it will print a full path to each file encountered. Nice start, but what the function needs to do is build a list of files that are media files and display those to the user.<br /><br /><span>Defining Media Files</span><br />This app will use a simple system of looking at file extensions to determine if files are media files. Start by specifying what file types are supporting. Add this in finish_initializing to create 2 lists of supported media:<br /><pre><code><br />self.supported_video_formats = [".ogv",".avi"]<br />self.supported_audio_formats = [".ogg",".mp3"]<br /><br /></code></pre>GStreamer supports a lot of media types so ,of course, you can add more supported types, but this is fine to start with.<br /><br />Now change the openbutton handler to only look for these file types: /> #make a full path to the file<br /> print os.path.join(root,f)<br /><br /></code></pre>This will now only print out files of supported formats.<br /><br /><span>Build a List of Media Files</span><br /></pre><span>Display the List to the User</span><br />A DictionaryGrid is the easiest way to display the files, and to allow the user to click on them. So import DicationaryGrid at line 12, like this:<br /><pre><code><br />from quickly.widgets.dictionary_grid import DictionaryGrid<br /></code></pre:<br /><pre><code><br /> for c in self.ui.scrolledwindow1.get_children():<br /> self.ui.scrolledwindow1.remove(c)<br /></code></pre>Then create a new DictionaryGrid. We only want one column, to the view the files, so we'll set up the grid like this:<pre><code>>So now the whole function /><br /> #remove any children in scrolled window<br /> for c in self.ui.scrolledwindow1.get_children():<br /> self.ui.scrolledwindow1.remove(c)<br />>Now the list is displayed when the user picks the directory.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334490114080370" border="0" /></a><br /><span><span>Playing the Media</span></span><br /><span>Adding the MediaPlayer</span><br /:<br /><pre><code><br />from quickly.widgets.media_player_box import MediaPlayerBox<br /><br /></code></pre>Then, we'll create and show a MediaPlayerBox in the finish_initializing function. By default, a MediaPlayerBox does not show it's own controls, so pass in True to set the "controls_visible" property to True. 
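To pull the pieces of this section together, here is one way the finished openbutton_clicked_event handler might end up looking. This is a sketch rather than the original listing: it assumes that prompts.choose_directory() returns a (response, path) pair and that DictionaryGrid takes a list of dictionaries plus the keys to display, which is how the quidgets API of the time worked as far as I recall. The "uri" and "format" keys are the ones the play_file code below will read.<pre><code>
 # uses the imports introduced above: os, gtk, prompts, DictionaryGrid
 def openbutton_clicked_event(self, widget, data=None):
     # let the user pick a directory of media
     response, path = prompts.choose_directory("Choose a media directory")
     if response != gtk.RESPONSE_OK:
         return

     # walk the directory and keep only supported media files
     media_dicts = []
     for root, dirs, files in os.walk(path):
         for f in files:
             name, extension = os.path.splitext(f)
             if extension in self.supported_video_formats + self.supported_audio_formats:
                 media_dicts.append({"File": name,
                                     "uri": "file://" + os.path.join(root, f),
                                     "format": extension})

     # remove any children in the scrolled window
     for c in self.ui.scrolledwindow1.get_children():
         self.ui.scrolledwindow1.remove(c)

     # a one-column grid that only shows the "File" key
     media_grid = DictionaryGrid(media_dicts, keys=["File"])
     media_grid.show()
     self.ui.scrolledwindow1.add(media_grid)
</code></pre>Back to the MediaPlayerBox and its controls.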
You can also do things like this:<br /><br /><pre><code><br />player.controls_visible = False<br />player.controls_visible = True<br /><br /></code></pre>to control the visibility of the controls.<br /><br /).<br /><pre><code><br />self.player = MediaPlayerBox(True)<br />self.player.show()<br />self.ui.hpaned1.add2(self.player)<br /><br /></code></pre><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560334495717813186" border="0" /></a><span>Connecting to the DictionaryGrid Signals</span><br /:<br /><pre><code><br /> #hook up to the selection_changed event<br /> media_grid.connect("selection_changed", self.play_file)<br /></code></pre>Now create that play_file function, it should look like this:<br /><pre><code><br /> def play_file(self, widget, selected_rows, data=None):<br /> print selected_rows[-1]["uri"]<br /><br /></code></pre.<br /><br /.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335754216317186" border="0" /></a><span>Setting the URI and calling play()</span><br />Now that we have the URI to play, it's a simple matter to play it. We simply set the uri property of our MediaPlayerBox, and then tell it to stop playing any file it may be playing, and then to play the selected file:<br /><pre><code><br />def play_file(self, widget, selected_rows, data=None):<br /> self.player.stop()<br /> self.player.uri = selected_rows[-1]["uri"]<br /> self.player.play()<br /><br /></code></pre.<br /><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335757346830226" border="0" /></a><br /><span>Connecting to the "end-of-file" Signal</span><br />When a media files ends, users will expect the next file played automatically. It's easy to find out when a media file ends using the MediaPlayerBox's "end-of-file" signal. Back in finish_initializing, after creating the MediaPlayerBox, connect to that signal:<br /><pre><code><br />self.player.connect("end-of-file",self.play_next_file)<br /><br /></code></pre><span>Changing the Selection of the DictionaryGrid</span><br />Create the play_next_file function in order to respond when a file is done playing:<br /><pre><code><br /> def play_next_file(self, widget, file_uri):<br /> print file_uri<br /><br /></code></pre:<br /><pre>></pre><span>Making an Audio File Screen</span><br />Notice that when playing a song instead of a video, the media player is blank, or a black box, depending on whether a video has been player before.<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335761983241298" border="0" /></a>It would be nicer to show the user some kind of visualization when a song is playing. The easiest thing to do would be to create a gtk.Image object, and swap it when for the MediaPlayerBox when an audio file is playing. 
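One step back: here is one possible body for play_next_file. The idea, as described, is to move the grid's selection to the next row, which fires "selection_changed" again and therefore calls play_file. This is a sketch, not the original code; it assumes the DictionaryGrid behaves like the gtk.TreeView it is built on and that it emits "selection_changed" for programmatic selection changes as well.<pre><code>
 def play_next_file(self, widget, file_uri):
     # the grid is the only child we put in the scrolled window
     media_grid = self.ui.scrolledwindow1.get_children()[0]
     selection = media_grid.get_selection()
     model, rows = selection.get_selected_rows()
     if not rows:
         return
     next_row = rows[-1][0] + 1
     if next_row < model.iter_n_children(None):
         selection.unselect_all()
         selection.select_path(next_row)
</code></pre>With that in place, back to the question of what to show while a song is playing: the gtk.Image swap mentioned above would do the job.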
However, there are more powerful tools at our disposal that we can use to create a bit richer of a user experience.<br />.<br /><br /><span>Create a Goo Canvas</span><br />Naturally, you need to import the goocanvas module:<br /><pre><code><br />import goocanvas<br /><br /></code></pre>Then, in the finish_initializing function, create and show a goocanvas.Canvas:<br /><pre><code><br />self.goocanvas = goocanvas.Canvas()<br />self.goocanvas.show()<br /><br /></code></pre.<br /><br /><span>Add Pictures to the GooCanvas</span><br /:<br /><pre><code><br />logo_file = helpers.get_media_file("background.png")<br />logo_file = logo_file.replace("","")<br />logo_pb = gtk.gdk.pixbuf_new_from_file(logo_file)<br /><br /><br /></code></pre:<br /><pre><code><br />root_item=self.goocanvas.get_root_item()<br />goocanvas.Image(parent=root_item, pixbuf=logo_pb,x=20,y=20)<br /><br /><br /></code></pre><span>Show the GooCanvas When a Song is Playing</span><br /:<br /><pre><code><br />format = selected_rows[0]["format"]<br /><br /></code></pre>We can also get a reference to the visual that is currently in use:<br /><pre><code><br />current_visual = self.ui.hpaned1.get_child2()<br /><br /></code></pre>Knowing those two things, we can then figure out whether to put in the goocanvas.Canvas or the MediaPlayerBox. So the whole function will look like this:<br /><pre>></pre><br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335768420700978" border="0" /></a><br /><span>Add another Image to Canvas</span><br />We can add the note image to the goocanvas.Canvas in the same way we added the background image. However, this time we'll play with the scale a bit:<br /><br /><pre>></pre>Remember for this to work, you have to put a note.png file in the data/media directory for your project. If your image is a different size, you'll need to tweak the x, y, and scale as well.<br /><br />(BTW, thanks to <a href="">Daniel Fore</a> for making the artwork used here. If you haven't had the pleasure of working Dan, he is a really great guy, as well as a talented artist and designer. He's also the leader of the #elementary project.)<br /><br /!<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560335776680799090" border="0" /></a><span>Add Text to the goocanvas.Canvas</span><br /.<br /><pre><code><br />self.song_text = goocanvas.Text(parent=root_item,<img src="" alt="" id="BLOGGER_PHOTO_ID_5560337116417990178" border="0" /></a>Then, back in finish_initializing, after creating the MediaPlayerBox, remove the controls:<br /><pre><code><br />self.player = MediaPlayerBox(True)<br />self.player.remove(self.player.controls)<br /><br /></code></pre>Then, create a new openbutton:<br /><pre><code><br />open_button = gtk.ToolButton()<br /><br /></code></pre>We still want the open button to be a stock button. 
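For "the whole function" referred to above, here is a sketch of what the swap logic in play_file might look like, built from the pieces that are shown in this section: the format of the selected row and the current_visual check against hpaned1.get_child2(). Treat it as a reconstruction under those assumptions, not the original code.<pre><code>
 def play_file(self, widget, selected_rows, data=None):
     self.player.stop()
     uri = selected_rows[-1]["uri"]
     format = selected_rows[-1]["format"]

     # which widget currently occupies the right side of the HPaned?
     current_visual = self.ui.hpaned1.get_child2()

     if format in self.supported_audio_formats:
         # a song: show the goocanvas.Canvas instead of the video area
         if current_visual is not self.goocanvas:
             self.ui.hpaned1.remove(current_visual)
             self.ui.hpaned1.add2(self.goocanvas)
     else:
         # a video: make sure the MediaPlayerBox is visible
         if current_visual is not self.player:
             self.ui.hpaned1.remove(current_visual)
             self.ui.hpaned1.add2(self.player)

     self.player.uri = uri
     self.player.play()
</code></pre>Now, back to turning the new gtk.ToolButton into a stock Open button.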
For gtk.ToolButtons, use the set_stock_id function to set the right stock item.<br /><pre><code><br />open_button.set_stock_id(gtk.STOCK_OPEN)<br /><br /></code></pre>Then show the button, and connect it to the existing signal handler.<br /><pre><code><br />open_button.show()<br />open_button.connect("clicked",self.openbutton_clicked_event)<br /><br /></code></pre:<br /><pre><code><br />self.player.controls.insert(open_button, 0)<br />self.ui.hbox1.pack_start(self.player.controls, True)<br /><br /></code></pre>Now users can use the controls even when audio is playing!<br /><a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5560337124831239410" border="0" /></a><span><span>Conclusion</span></span><br /><span></span>This tutorial demonstrated how to use Quickly, Quickly Widgets, and PyGtk to build a functional and dynamic media player UI, and how to use a goocanvas.Canvas to add interesting visual effects to your program.<br /><br />The next tutorial will show 2 different ways of implementing play lists, using text files, using pickling, or using desktopcouch for storing files.<br /><br /><span><span>API Reference</span></span><br /><span>PyGtk<br /></span><ul><li><a href="">PyGtk Reference Documentation</a></li><li><a href="">PyGtk FAQ</a></li></ul><span>Quickly Widgets</span><br /><span></span>Reference documentation for Quickly Widgets isn't currently hosted anywhere. However, the code is thoroughly documented, so until the docs are hosted, you can use pydocs to view them locally. To do this, first start pydocs on a local port, such as:<br />$pydocs -p 1234<br /><br />Then you can browse the pydocs by opening your web browser and going to <a href="">http:localhost:1234</a>. Search for quickly, then browse the widgets and prompts libraries.<br /><br />Since MediaPlayerBox is not installed yet, you can look at the doc comments in the code for the modules in natty-branch/quickly/widgets/media_player_box.py.<br /><span>GooCanvas</span><br /><ul><li><a href="">Python GooCanvas Reference</a></li></ul><span>GStreamer</span><br />MediaPlayerBox uses a GStreamer playbin to deliver media playing functionality. GStreamer si super powerful, so if you want to do more with it, you can read the docs.<br /><ul><li><a href="">Playbin Documentation</a> (you can use self.player.playbin to get a reference to the playbin).</li><li><a href="">Python GStreamer Reference</a></li></ul><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Pithos of Rain2011-01-23T11:28:31ZRick [email protected]<a href=""><img src="" alt="" id="BLOGGER_PHOTO_ID_5565431340535216658" border="0" /></a>During my normal Sunday morning chill out with a cup of coffee this morning, I saw a tweet from <a href="">Ken VanDine</a> go by about <a href="">Pithos, a native Pandora client for Ubuntu</a>. I have a Pandora account, and love to use it on my phone, but on Ubuntu I had to go through the Pandora web interface, so I didn't use it as much.<br /><br />I'm using it right now, and I'm chuffed. I'd love to see this app go through the ARB process so maverick users can more easily access it. And <s>I'd love to see it</s> I'm psyched to hear that it is in Universe <s>or</s> and even Debian for Natty.<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: New Quickly App: Daily Journal2010-06-20T12:15:01ZRick [email protected]<br /><br />Quickly has started to unlock productivity for me in unexpected ways. 
I've mentioned about writing my own development tools, like <a href="">bughugger</a>, and <a href="">slipcover</a>. <a href="">PPA</a>.<br /><br />In my next posting, I'll show how I used quickly.widgets.text_editor to create Daily Journal.<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Go Here to Learn to Program from MIT2010-06-14T11:28:28ZRick [email protected]<a href=""><img src="" border="0" alt="" /></a>I run into folks who want to get started programming, but they "don't know a language". If you are in this camp, I highly recommend <a href="">the online course from MIT</a>. It's designed for people with no prior programming experience, and it's Python!<div><br /></div><div>After the first few lessons, you'll know enough Python to start a Quickly app!.</div><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Create Your Own Games with Quickly Pygame2010-06-06T13:18:44ZRick [email protected] couple of months ago I created a Quickly template with the goal of making it easy and fun to get started my games. The template doesn't have any "add" or "design" commands, but it does have all the other commands. The template creates a functioning arcade-style game, and then you provide you own artwork and start hacking the code to make your own gameplay.<br /><div><br /></div><div>$quickly tutorial ubuntu-pygame is the best way to get a detailed introduction into getting started making your own game, but here's some video of hacking the code. I hope it inspires you to try your hand at creating your own games.</div><div><br /></div><div>Part 1: Create the game, copy in your artwork, and make the guy work the way you want</div><div><br /><br /><div>Part 2: Program the enemies</div><br /><br /><div>Part 3: Create a power up sprite, and manage collisions</div><br /><br /></div><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Quickly: 90 Seconds to Your PPA2010-06-06T13:18:33ZRick [email protected]<div><br /></div><br /><div><br />Here's a quick video showing taking a finished Quickly app, setting the desktop and setup.py file, and then uploading to a PPA using $quickly share<br /></div><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: PyTask, written with Quickly and Quickly Widgets2010-06-06T13:18:21ZRick [email protected]<a href=""><img src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5472218008646329810" /></a><br /><div>On Saturday I received an email from a developer name <a href="">Ryan</a>..</div><div><br /></div><div>Looking at the App, it was clear that there were a few more features that DictionaryGrid needs to really rock, though:</div><div><ol><li>It needs a DateColumn to handle the "due" column. Users would want to set this with a gtk.Calendar widget.</li><li.</li><li>Both of these will require new GridFilter functionality. In fact, I have been waiting for a reason to refactor this part of Quickly Widgets, as the GridFilterRows are hard coded to use specific widgets, and this should be flexible.</li></ol></div><div><br /></div><div>Anyway, as it is, I am using PyTask, I hope Ryan get's it into a PPA soon. I like the simplicity. Ryan and I are currently collaborating on creating the new functionality in DictionaryGrid that PyTask needs. 
Open Source FTW!</div><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Quickly and Quickly Widget Intro Videos2010-04-24T20:51:43ZRick [email protected]<div>You probably saw that <a href="">didrocks released another update for Quickly</a>. It's chock full of bug fixes and tweaks based on the feedback from the last release. </div><div><br /><.<div><br /></div><div>Part 1: Create a project and use Glade to edit the UI</div><div>Here you see that you use "quickly create ubuntu-application" instead of "quickly create ubuntu-project" to create an app. You also use "quickly design" instead of "quickly glade" to design the UI.</div><div><br /><br /></div><div>Part 2: Using CouchGrid</div><div>One of the key differences here is that CouchGrid is now in the quickly.widgets module instead of the desktopcouch module. The CouchGrid moved into quickly.widgets because it now extends the DictionaryGrid class. This brings a lot benefits:</div><div><ol><li>Automatic column type inference</li><li>Ability to set column types so you get the correct renderers</li><li>Correct sorting (for instance 11 > 9 in IntegerColumn, but "11" < "9" in string columns</li></ol><div>And of course you get all the goodness of automatic desktopcouch persistence.</div></div><br /><br /><div>Part 3: Using GridFilter</div><div>GridFilter is a new class that provides automatic filtering UI for a DictionaryGrid or descendant such as CouchGrid.</div><br /><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>Rick Spencer: Quickly For Lucid Release, Intro Video2010-04-15T09:58:07ZRick [email protected] released quickly 0.4 yesterday. What a great contribution from didrocks! I suspect he his work will help tons of people have a really fun time writing Ubuntu apps. <a href="">Read about the release in his detailed blog post</a>.<div><br /></div><br /><br />In the meantime, I made a cheesy video last night, showing some of the changes in quickly, and how the new CouchGrid and GridFilter work. <br /><br />[Note that it takes blip.tv a bit of time to render out a high def video like this, so if the video is not yet working, you can check back later.]<br /><br />[D'oh .... stupid blip.tv bailed on encoding my video. I'll try again with smaller files. Stay tuned]<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div>
http://voices.canonical.com/feed/atom/tag/quickly/
CC-MAIN-2014-52
en
refinedweb
Although -- UI toolkits and interfaces are often specific to a platform and the UI-related code typically makes up the bulk of the application. As a result, delivery of cross-platform applications is hard. What is needed is a well designed, platform-independent UI toolkit that has lightweight memory and resource requirements, is highly portable, and open source. The Lightweight User Interface Toolkit (LWUIT), released in mid-2008, has been a boon to mobile developers right from the start. LWUIT is a UI library targeted to a wide range of mobile devices, from mass market to high-end smart phones, and has now also been ported to other embedded platforms. The rich functionality and clean design of LWUIT makes developing and deploying rich and engaging cross-platform applications easier than ever. LWUIT is an open technology with its source and binary freely accessible for individual or commercial use. LWUIT has seen widespread adoption by developers, ISVs, and other third parties. Numerous resources are devoted to LWUIT, including a developer guide, articles, tutorials, code samples, videos, a site with featured LWUIT-based applications, and last but not least, a very active developer community and associated forum. This article provides a brief overview of LWUIT for those who are unfamiliar with the technology, along with a list of resources for further learning. The remainder of the article focuses on the latest features and enhancements of LWUIT 1.3, along with hands-on code samples. As described above, LWUIT is a UI library that is licensed under the GPLv2 open-source license together with the Classpath Exception. This license encourages broad adoption while ensuring transparency and compatibility at the library level. LWUIT offers advanced UI capabilities and a clean API that is inspired by Swing. With LWUIT, Java developers don't need to write device-specific code for different screen sizes, but instead can add UI components as needed to provide a consistent and compelling look and feel to their applications which works across a wide range of devices from different vendors. Let's look at the LWUIT Demo application, which was written to showcase many of the different features of LWUIT such as theming, custom rendering, animations, buttons, transitions, and more. The three screenshots below show the identical application binary file (with no built-in device-specific knowledge) running on three entirely different Java ME platforms: Screen shot 1: The LWUIT demo application running on the Java ME SDK 3.0 Mobile Emulator Screen shot 2: The same application running on a mid-range Sony Ericsson G705 Screen shot 3: The same application running on a HTC Diamond with touch screen Thanks to the LWUIT toolkit the application presents a rich and consistent user interface across devices and automatically adapts to and takes advantage of device-specific properties such as screen size, graphics capabilities, and touch screen support without any extra effort by the developer. LWUIT is supported on MIDP 2.0/CLDC 1.1, has been ported onto CDC platforms, other mobile and embedded devices, and is the basis of the user interface layer for specifications such as Ginga-J for interactive TV (see:). LWUIT is designed with modern UI requirements, programming styles, and best practices in mind. For example, LWUIT provides a clean separation between the UI development, the graphics design, and the business logic and therefore allows domain experts to work independently on their specific area of expertise. 
Also, LWUIT is based on the MVC (model-view-controller) paradigm. For example, the List Component can display an unlimited number of items because it only renders what is visible, while the model has the data responsibility. You can show a very large list without worrying about memory consumption. Rapid Development: One of LWUIT's key benefits is rapid development. Since the API is inspired by Swing, it is easy to learn and adopt. LWUIT itself was built from scratch and does not depend on AWT. Portability: Another benefit is portability, and little, if any, device-specific code. To ensure portability, LWUIT was built using low-level common elements in MIDP 2.0. LWUIT applications look and run consistently across different devices and different Java runtimes. Flexibility: Flexibility is yet another important aspect: Almost everything in LWUIT is customizable and extensible, so if there is a missing feature or component, you can create your own and plug it in your code. Easy Deployment: Not only is LWUIT extremely powerful, well designed, and easy to use, it is also easy to deploy. During development, simply bundle the LWUIT library and resources with the application. The LWUIT components become an integrated part of the application deployment unit and are downloaded and deployed transparently when the user installs the application on their device (for example, via the standard MIDP OTA mechanism). Wide Range of Platforms: LWUIT requires only MIDP 2.0 and CLDC 1.1 (or similar basic graphics capabilities on other platforms) and is being continually tested across a wide range of today's mass market devices -- from low-end phones with limited memory, small screens, and numeric keypads all the way to high-end devices with fast processors, high-resolution touch screens, and built-in keyboards. LWUIT provides a Theme Creator tool for editing and creating themes and resources. This is a standalone application for creating and viewing background painting, objects, and other theme elements. It even features a live preview of the application that changes whenever updates are made to the theme or screen properties: The new Java ME SDK 3.0 is the de facto standard for the creation of Java ME-based applications. It offers a comprehensive development suite with a host of features giving developers a powerful and convenient environment and set of tools to efficiently build and test applications. Of course, the LWUIT toolkit can be used with the traditional Java ME tool chain. But getting started with LWUIT has never been easier now that the Java ME SDK 3.0 offers built-in support for LWUIT: The LWUIT Demo application can be run and explored right from the main screen of the Java ME SDK 3.0. A new project type called LWUIT Application provides the necessary resources and project structure that allows developers to start building LWUIT-based applications in minutes. LWUIT 1.3 was made available in December of 2009 and offers a number of new features and improvements. LWUIT 1.3 Features and Improvements: Beyond the official LWUIT 1.3 release, the LWUIT open-source subversion repository contains ongoing and additional improvements such as a pre-release version of the HTML component. The HTML component allows applications to easily render HTML conforming to XHTML Mobile Profile 1.0. Let's walk through some of these new features in LWUIT 1.3 and the latest repository. 
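Before walking through the 1.3 features, a short illustration of the List/model separation mentioned above may help. The sketch below is not taken from the article; it simply shows a List fed by a model so that the data lives in the model while the List only renders the visible rows. The class names (Display, Form, List, DefaultListModel) are from the com.sun.lwuit packages of the LWUIT 1.x API as best recalled here, so adjust them if your version differs.
import javax.microedition.midlet.MIDlet;
import com.sun.lwuit.Display;
import com.sun.lwuit.Form;
import com.sun.lwuit.List;
import com.sun.lwuit.list.DefaultListModel;

public class ListSketch extends MIDlet {
    public void startApp() {
        Display.init(this);                       // bind LWUIT to this MIDlet
        DefaultListModel model = new DefaultListModel();
        for (int i = 0; i < 10000; i++) {         // the data set lives in the model
            model.addItem("Item " + i);
        }
        Form form = new Form("List Demo");
        form.addComponent(new List(model));       // the List renders only what is visible
        form.show();
    }
    public void pauseApp() { }
    public void destroyApp(boolean unconditional) { }
}
With that background, on to the new features, starting with tables.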
The table component in LWUIT features sophisticated functionality, such as support for a large number of rows and columns, horizontal and vertical scrolling, in-place editing, custom cell renders, and on-the-fly creation of cells that can feature animations, handle events, and more. Despite the multitude of functionality offered, creating tables is very straightforward -- thanks to a default table model. The below sample application creates a complete table consisting of a header row and three columns, where column 1 and 2 are editable. Each editable data cell can be edited in place by clicking on it. The table automatically becomes scrollable horizontally and vertically if it is larger than the available space: public class TableDemo extends MIDlet implements ActionListener { public void startApp() { Form form; Resources res;("Table Demo"); form.addCommand(new Command("Exit")); form.setCommandListener(this); // Create scrollable table with header and columns 1 and 2 editable TableModel model = new DefaultTableModel( new String[] {"Unedit.", "Editable", "Multiline"}, new Object[][] { {"Row 1", "Data 1", "Multi-line\ndata"}, {"Row 2", "Data 2", "More multi-\nline data"}, {"Row 3", "Data 3", "Data\non\nevery\nline"}, {"Row 4", "Data 4", "Data (no span)"}, {"Row 5", "Data 5", "More data"}, {"Row 6", "Data 6", "More data"}, }) { public boolean isCellEditable(int row, int col) { return col != 0; } }; Table table = new Table(model); table.setScrollable(true); table.setIncludeHeader(true); // Add table to form and show form.addComponent(table); form.show(); } public void pauseApp() { } public void destroyApp(boolean unconditional) { } public void actionPerformed(ActionEvent ae) { // only action is from Exit command destroyApp(true); notifyDestroyed(); } } Running this code is easy by following these steps: TableDemoproject from here: and expand the zip file TableDemoproject (the Java ME SDK 3.0 should recognize the TableDemo directory as a project) TableDemoin the Projects pane, choose Properties and then in the Platform->Optional Packages panel select the Mobile Media API 1.1 check-box. Touch-screen devices can take full advantage of the virtual keyboard functionality now available in LWUIT. A Virtual Keyboard that can be bound to a text field will slide up when the user clicks or touches the text field to input characters. The Virtual Keyboard supports a number of different input modes (text, symbols, numbers), different keymaps for customized keyboard layouts, special keys, and other sophisticated features. Graphic: Screen Shot of Virtual Keyboard Demo on Startup and Screen Shot After User Clicks/Touches the Text Field ("Click for keyboard") Using a Virtual Keyboard in an application is straightforward. 
This example application shows a basic LWUIT application with a title, an EXIT menu, a text field for entering text, and a label to display the text: public class VKBDemo extends MIDlet implements ActionListener, FocusListener { Form form; Resources res; TextField textField; public void startApp() { VKBImplementationFactory.init(); // initialize virtual keyboard("Virtual Keyboard Demo"); form.addCommand(new Command("Exit")); form.setCommandListener(this); // Create text field with constraints textField = new TextField("Click for keyboard"); textField.setConstraint(TextField.ANY); textField.setInputModeOrder(new String[]{"Abc"}); textField.setFocusable(false); // only one component: prevent being focused right away textField.addFocusListener(this); // Create virtual keyboard and bind to text field VirtualKeyboard vkb = new VirtualKeyboard(); vkb.setInputModeOrder(new String[] {VirtualKeyboard.QWERTY_MODE} ); VirtualKeyboard.bindVirtualKeyboard(textField, vkb); // Add text field to form and show form.addComponent(textField); form.show(); textField.setFocusable(true); // after initial display, make focusable } public void pauseApp() { } public void destroyApp(boolean unconditional) { } public void actionPerformed(ActionEvent ae) { // only action is from Exit command destroyApp(true); notifyDestroyed(); } public void focusGained(Component cmp) { // If user selects text field, clear it if (cmp == textField) { ((TextField)cmp).clear(); } } public void focusLost(Component cmp) { } } As with the TableDemo, you can download the VKBDemo project from and run it with the Java ME SDK. The HTML Component is not part of the official LWUIT 1.3 release but is now available as a pre-release in the LWUIT open-source repository. Being able to display HTML content within an application (without having to call an external content handler) is useful for a number of reasons -- not so much to implement a full-blown mobile browser but to be able to render rich text locally, to dynamically display content pulled in from the network, or to embed web flows into your application -- basically, to fuse HTML concepts and content with your local Java application. For these reasons, an HTML component has always been high on the LWUIT developer wish list and is now available in an early version. The support for the XHTML Mobile Profile 1.0 is about 90% completed, including text, fonts, lists, tables, forms, images, etc. as well as WCSS. Here is a code snippet using the HTML component that shows how to display an HTML page, such as the mobile Twitter page: // Creating a new instance of a request handler HttpRequestHandler handler=new HttpRequestHandler(); // Creating the HTMLComponent instance and setting its request handler HTMLComponent htmlC = new HTMLComponent(handler); // Creating a form, adding the component to it and showing the form Form form=new Form("HTML Demo"); form.addComponent(htmlC); form.show(); // Setting the component to the required page htmlC.setPage(""); That's it! And since the HTML Component is just like any other LWUIT component it it fully touch-enabled, you can do transitions with it, theme it, etc. Note: HttpRequestHandler class is part of the LWUITBrowser project - you can get the full source code of the LWUITBrowser in the applications directory of the LWUIT subversion repository. For more information on the HTML component see Ofir Leitner's blog (). LWUIT is very powerful - we have barely scratched the surface. Please learn more about LWUIT with the resources listed in the section below. 
LWUIT developers can access a number of resources to get up to speed with LWUIT, to ask questions, and to help solve issues arising during development: General Information on Java Micro Edition platform and tools: LWUIT home page and overview pages: Introductory material on LWUIT: LWUIT blogs, YouTube channel, and application showcase: Deep-dive technical material: LWUIT project community forum: Further reading:
http://www.oracle.com/technetwork/articles/javamobile/default-159892.html
CC-MAIN-2014-52
en
refinedweb
Archives Expanding upon my speculations: Membership, RoleProvider and ProfileAPI implementation So, yesterday I jumped out on a limb and speculated how I thought that application architecture might include a wrapper around Membership and the Profile API in ASP.NET Whidbey. Then the guy who created it chimed in with a comment to let me know that I'm wrong! So either he's wrong or I am. Speculating about Membership, RoleProvider and ProfileAPI implementation There's an important area in ASP.NET Whidbey that I've started looking into that I haven't seen much coverage on, that is, how will the new user/security features be used when building a real application? ProjectDistributor release 1.0.2 now available Code Camp Oz (1) just announced Mitch has just announced the date and location of the first Australian Code Camp: Remove dead projects from the VS home page I just logged a feature request on the Feedback center about wanting to be able to remove unwanted projects from the "Recent Projects" list on the VS .NET home page. WebParts :: CustomVerbs Just released my first prototype showing some WebPart code: About the Whidbey demo's thing... Yesterday I blogged about a new group I've created to post small demo's and prototypes of ASP.NET Whidbey projects: VS 2005 Beta 1 Expiry Date is... July 1, 2005 Current version of Visual Web Developer I posted a question in the ASP.NET forums asking about what is the current build of Visual Web Developer: Custom Build Providers I came across this blog entry from Fritz Onion today about an experience that he had with Custom Build Providers: Valid Rss is probably a good thing Like Duncan, I just checked my Rss feed. Guess what? Do you have any small ASP.NET Whidbey working demo's Lately I've been playing around with the Whidbey a bit more, focussing on such things as UrlMapping, WebParts and Profiles. In that time I've been doing things in a rather ad-hoc manner. A small but noteable design change to the ASP.NET Portal Framework In ASP.NET V2 the new portal framework looks very interesting. It seems to me that this framework will lend itself very nicely for building applications which allow for a plug-in style of architecture and therefore make re-use of componentize UI widgets much more prominent. compressing javascript code in perl While sneaking around the web tonight looking at sites which have interesting dhtml controls or cool stylesheets, I came across this article: Data access strategy in Whidbey Fredrik has blogged some great Whidbey posts - not to mention his 3,000 or so posts on the forums! Recently he wrote a couple of articles about some of the data access techniques which are available in Whidbey. First he looked at the Data Component which allows to quickly and easily create Crud-like data access layers: Next tool - a blogging application Before I start my rant I should say that .Text is pretty decent web app.; it's seems way more complex than it needs to be and it's very difficult to install but there's a lot of great implementation code in there. Last week while unsuccessfully trying to get it installed and create some initial users I decided that, for my next application I'm going to build a blogging app. Much of the API design and feature-set is done and I'm currently working through the technical architecture. Koders - searchable repository of code snippets Found this site via Mitch's blog: Test entry This is a test entry Evilness and me I can't believe that I'm less evil than Mitch. 
Next ProjectDistributor release I'll be uploading the source for the next version (1.0.2) of Project Distributor on the weekend and, be upgrading the live site to run off of that version. Here is the list of new features which appear in this release: Using metadata and reflection to dynamically manage message routing Most routing systems have a transformation phase where, based on its current state, a message is transformed into a document and routed to an endpoint. Systems such as BizTalk provide GUI's and designers to remove the need for cumbersome coding by making the rules and subsequent transformations configurable; here's an example of a switch statement in a listener class where the rules of the routing engine are hard-coded: public static void MessageArrived( Message message ) { switch( message.MessageState ) { case MessageState.Initial: Console.Write( Transformer.CreateRequest( message ) ); break ; case MessageState.Submitted: Console.Write( Transformer.CreateApproval( message ) ); break ; case MessageState.Approved: Console.Write( Transformer.CreateInvoice( message ) ); break ; case MessageState.Saved: Console.Write( Transformer.CreateReport( message ) ); break ; default: Console.Write( Notifier.NotifyError( message ) ); break ; } } If there's extra "noise" in the MessageArrived method, it can become hard to maintain as the length of the switch gets longer. It can also become hard to maintain if there are repetitive code chunks within each case. In the above cases you can - at a moderate performance cost - re-factor the common code away into a generic method. Looking at the above example, one neat way to achieve this is to ascribe metadata to the MessageState enum so that it can be inspected at runtime and the routing lookup driven from that metadata. First, let's create an attribute to contain our lookup data and add it to the MessageState enum: [AttributeUsage(AttributeTargets.Field, Inherited=false, AllowMultiple=true)] public class WorkflowAttribute : Attribute { public WorkflowAttribute(Type type, string methodName) { this.Type = type ; this.MethodName = methodName ; } public Type Type; public string MethodName ; } public enum MessageState : short { [WorkflowAttribute(typeof(Transformer), "CreateRequest")] Initial = 1, [WorkflowAttribute(typeof(Transformer), "CreateApproval")] Submitted = 2, [WorkflowAttribute(typeof(Transformer), "CreateInvoice"), WorkflowAttribute(typeof(Notifier), "NotifySalesGuy")] Approved = 3, [WorkflowAttribute(typeof(Transformer), "CreateReport")] Saved = 4, [WorkflowAttribute(typeof(Notifier), "NotifyError")] Unknown = short.MaxValue } Notice that I applied 2 worklow attributes to the MessageState.Submitted enum value. 
Now I can re-factor the original MessageArrived method into a generic message handler routine: public static void MessageArrived( Message message ) { MessageState state = message.MessageState ; FieldInfo field = state.GetType().GetField(state.ToString()) ; object[] attribs = field.GetCustomAttributes(typeof(WorkflowAttribute), false) ; for( int i=0; i<attribs.Length; i++ ) { WorkflowAttribute att = attribs[i] as WorkflowAttribute ; if( att != null ) { MethodInfo method = null ; if( att.Type.GetMethod(att.MethodName).IsStatic ) { method = att.Type.GetMethod(att.MethodName) ; Console.Write(method.Invoke(null, new object[] {message})); }else{ object instance = Activator.CreateInstance(att.Type) ; method = instance.GetType().GetMethod(att.MethodName); Console.Write(method.Invoke(instance, new object[] {message})); } } } } I've uploaded a working demo of this to ProjectDistributor: SQL Server 2005 Beta 2 Transact-SQL Enhancements Great, in-depth article about some of the new TSQL language features. Definitely worth a read... noFollow in Google... Saved for later reading... ControlState in ASP.NET V2 Fredrick has a great post about the new ControlState feature in ASP.NET V2: 2 interesting Msdn articles An interesting article about writing code to test UI: Also, there's an article by Kent Sharkey about merging Rss feeds: This is a useful article with a great summary of the make-up of the Rss schema and some nice API design tips too! As an added bonus, the reader is invited to watch as Kent has a Dates induced meltdown towards the end of the article. Cool screen capturing tool Named Groups, Unnamed Groups and Captures I see this question come up a bit in regex so, I thought that I'd blog about it. It has to do with 2 things: named groups and captures. First, an example... The question isn't: what did I get; the question is: what did I pay - or so it seems! After several unsuccesful days of trying to implement trackbacks into ProjectDistributor I'm going to put it on the backburner for a while. It's not that I don't want it in there, it's just that time is short and the ability to get useful help about working with .Text seems a little short these days... {sigh} MbUnit... It's all green baby; it's all green! Last night I started moving some of the PD logic out of the web project and into a Framework project so that it is more accessible to other components. This resulted from some refactoring of the app. That I've been doing since implementing UrlRewriting and Trackbacks. I created UrlManager, and UrlFormatter classes to handle some of the url matching logic and thought that I'd start building a library of Unit Tests against this new stuff. What I'm going to show you now is a bit of a code dump but, take a scan through it then we'll discuss what went on... [TestFixture] public class TestUrlManager { public TestUrlManager() {} [RowTest] [Row("", false)] [Row("", false)] [Row("", true)] [Row("", false)] [Row("", false)] [Row("", true)] [Row("", true)] public void TestIsGroupInUrl(string url, bool isGroupUrl) { bool result = UrlManager.IsGroupNameUsed(url) ; Assert.AreEqual(isGroupUrl, result) ; } [RowTest] [Row("", "")] [Row("", "")] [Row("", "Foo")] [Row("", "")] [Row("", "")] [Row("", "Foo")] [Row("", "Foo")] public void TestExtractName(string url, string groupName) { string result = UrlManager.ExtractGroupName( url ) ; Assert.AreEqual(groupName, result) ; } } ...ok, as you can see, I'm testing the IsGroupNameUsed and the ExtractGroupName logic of the UrlManager class. 
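As a stand-in example (not the post's original, which is missing from this archive page), the sketch below shows the two things named above: groups referenced by position versus by name, and the Captures collection that a repeated group accumulates.
using System;
using System.Text.RegularExpressions;

class GroupsAndCaptures
{
    static void Main()
    {
        // one unnamed group and one named group
        Match m = Regex.Match("Order 42 by Darren", @"Order (\d+) by (?<name>\w+)");
        Console.WriteLine(m.Groups[1].Value);       // "42"     - unnamed group, by position
        Console.WriteLine(m.Groups["name"].Value);  // "Darren" - named group, by name

        // a repeated group keeps every capture, not just the last one
        Match words = Regex.Match("one two three", @"(?:(\w+)\s*)+");
        foreach (Capture c in words.Groups[1].Captures)
            Console.WriteLine(c.Value);             // "one", "two", "three"
    }
}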
Notice how I can use attributes to drive test data into the test method by using the RowTestAttribute and the RowAttribute classes and attaching them to the methods that I want to run as unit tests. The ease of TestDriven.NET Because I'm using TestDriven.NET, I can now right click in the class, choose "Run Tests" and voila! MbUnit presents me with a web page representation of the red and green lights that you've seen via other unit testing frameworks such as Nunit. Duplication kills So that's pretty easy right? Well, yes and no. Look at that duplicated data there. And what happens when I have 10 tests? And what happens when I need to change the Row data? That's right& all this duplication will end up working against me at some point. Combinatorial Tests to the rescue I knew that Peli would have the answer - after all, he did write it! - so, after a quick scan of his blog I discovered the combinatorial tests. Combinatorial tests allow me to create a factory which will produce my data and, for each enumerated item returned by the factory, a strongly typed data member can be returned to my unit test method. Let's start& create a Data Class and a Factory For my purposes a simple data class to encapsulate my test objects will suffice and then a simple factory method to create and return an array of that data: // A simple data class to encapsulate the attributes of my driver data public class UrlData { public UrlData( string url, string groupName ) { this.Url = url ; this.GroupName = groupName ; } public string Url; public bool IsGroup{ get{ return this.GroupName.Length > 0 } }; public string GroupName; } [Factory] public UrlData[] GetUrlData() { UrlData[] items = new UrlData[9] ; items[0] = new UrlData("", string.Empty) ; items[1] = new UrlData("", string.Empty) ; items[2] = new UrlData("", "Foo") ; items[3] = new UrlData("", string.Empty) ; items[4] = new UrlData("", string.Empty) ; items[5] = new UrlData("", "Foo") ; items[6] = new UrlData("", "Foo") ; return items ; } Re-Wire and Re-run Now, all that remains is to re-wire the test methods to use the factory and right click to re-run the tests again... [CombinatorialTest] public void TestIsGroupInUrl([UsingFactories("GetUrlData")] UrlData item) { bool result = UrlManager.IsGroupNameUsed( item.Url ) ; Assert.AreEqual(item.IsGroup, result) ; } [CombinatorialTest] public void TestExtractName([UsingFactories("GetUrlData")] UrlData item) { string result = UrlManager.ExtractGroupName( item.Url ) ; Assert.AreEqual(item.GroupName, result) ; } It's all green baby; it's all green! :-) Instant Messenger 7 - good and bad I installed the new beta version of the Instant Messenger client last week. The UI is really nice and they've added some great new features. The 2 most obvious of these are "Nudges" and "Winks". Nudges allow you to "shake" the IM client of the person that you are chatting with... I haven't really worked out when is the optimum time to use these yet. 
AssemblyReflector - event based discovery of assemblies AssemblyReflector (Conchango.Code.Reflection) is an event-based assembly parser - it allows assemblies to be searched for Attributes, Events, Fields, Interfaces, Methods, Nested Types, Properties and Types, by subscribing to the relevent OnDiscover event and then performing a search for the member based on several available search methods; o Contains(string) o EndsWith(string) o Named(string) o OfType(Type) - Attributes Only o StartsWith(string) o WithBindingFlags(BindingFlags) When an event is raised you can access the discovered member via the EventArgs Melbourne Geek Dinner and future Canberra Geek Dinners While I was in Melbourne, I arranged to get a few of the guys together for a geek dinner. This followed hot on that tail of the first one in Canberra a few weeks back. People attending this one were: · Cameron Reilley - (Australian Podcasting mogul) · Matthew Cosier - (Melbourne Microsoft guy and InfoPath extraordinaire) · William Luu - (active Melbourne community guy) Presenting and Facilitating I spent the past couple of days in Melbourne with several other Readify guys doing a course about "presenting and facilitating". There were some great moments and also some real highlights. It was fascinating to learn some of the little tip-n-tricks that you can use when you are speaking/facilitating that can help to get your message across and also to make your talks more participatory. Some of my most important learning's were: - Eye contact. It's important to make eye contact with the people that you are talking to and to maintain eye contact for about the length of a thought. - Structure. On the first day I had a presentation which was a sea of data& by day 2 this was packaged nicely into a structure that made it much easier to present and also, for the audience, much easier to digest. - Interventions. We discussed interventions - which are small break-up activities designed to get people up on their feet. These are used to get things going and to stimulate everyone. - The humble 'B' key. When you are doing PowerPoint presentations, you can press the 'B' key to make the screen go black and again to bring back your presentation to the screen. Use this when you want to talk and not compete with your slide. Overall it was an awesome experience which taught me a lot. It was also great to catch up with the other Readify guys. For those of you who know any of the guys you can imagine how vibrant, collaborative and enthusiastic the sessions were :-) A new PostXING feature coming... Chris hinted at a new PostXING feature: Made some improvements to the TrackBack Prototype I refactored the code a bit and added trackback validation: Mitch's Shrinklet App A cool little tray based app for creating "shrunken" urls... Prototype of a small trackback system Last night I prototyped a small trackback system: Learning about Trackbacks Today I learnt a lot more about Trackbacks - I had as I'm hoping to implement them in ProjectDistributor. Master Pages and building "nice" sites Brian is seeking community feedback around the topic of MasterPages to help ascertain the value of the ASP.NET team including some standard, out-of-the-box templates with the product. Read his blog entry here: Justin's fine; busy, but fine. Finally. 
More ProjectDistributor activity - auto updater and upcoming features Jonathon de Halleux (aka: "Peli") has just published a neat little helper assembly written in Whidbey which uses the ProjectDistributor webservices and can help detect whether there is a new Release available for download for a given project. Today was a day of software installation So, today I installed: SVGViewer so that I could view the Assembly Graphs generated by Reflector.Framework MicrosoftAntiSpyWare - this looks great. NCover so that I can see how much of my total code base is being exercised by my unit tests TestDriven.NET so that I can run unit tests from VS.NET MbUnit so that I can write unit tests (MbUnit actually comes with the TestDriven download). Still trying to track down Justin Quite a few people have left comments and e-mailed me directly regarding the whereabouts of Justin ( ), so I thought that I'd leave this message to let everyone know that I'm still following up on this. Me not nerd Entering the world of TestDriven.NET I've been giving serious consideration to writing a limited set of Unit Tests for the ProjectDistributor codebase - in particular, tests against the web services and several web scenatios. Today I started that journey by downloading TestDriven.NET: Partial Book Review: Extreme Programming Adventures in C# You have to love the new world of collaboration Tonight I jumped on IM and sent a message of congratulations to Roy about his MVP award (apparently he got it 4 months ago... news to me!). ProjectDistributor 1.0.1.0 Source Code now available The latest version of the ProjectDistributor source code is now available - this is the same version that the website is running. You can grab the code here: Has anybody heard from Justin Rogers lately? Justin Rogers is a brilliant guy who helped me a great deal last year with many projects. He is a prolific writer, blogger and developer... or at least he was up until early November. PD website now running on new version I uploaded the new ProjectDistributor version to the server yesterday, so, the site is now running on that. Existing users will notice some changes when they login. Chuck is blogging Char :-) First post from rebuilt laptop. First post from PostXING on my newly built laptop :-) All-in-all it took me about 6 hours to get from woe to woe to woe to go. Along the way it was useful to follow in the footsteps of my sagacious colleague Mitch who built his machine only a day prior to me doing it. New ProjectDistributor release I'm just testing the release for the next version of ProjectDistributor. This release includes bug fixes from the previous release as well as the following new features:
http://weblogs.asp.net/dneimke/archive/2005/01
CC-MAIN-2014-52
en
refinedweb
20 November 2008 09:43 [Source: ICIS news] SINGAPORE (ICIS news)--Asia’s benzene values fell below the $300/tonne mark, levels not seen since February 2002, on the back of lower crude values and weak market fundamentals, traders and producers said on Thursday. A deal was heard concluded at $290/tonne FOB (free on board). Weak demand from the key downstream styrene monomer (SM) and phenol segments in the past months had weighed down on benzene, traders and producers said. Prices have fallen by a hefty $695-705/tonne, or 70-71%, since 3 October 2008, according to global chemical market intelligence service ICIS pricing. “Demand is very bad, and there is no place for benzene to go,” said a Korean trader, referring to the poor derivatives market situation and the closed arbitrage window for exports. The downtrend in crude values, which touched $52/bbl on Thursday, added further pressure on benzene.
http://www.icis.com/Articles/2008/11/20/9173082/asia-benzene-hits-6-yr-low-at-below-300t.html
CC-MAIN-2014-52
en
refinedweb
A first hand look from the .NET engineering teams. This article describes the changes we made in the security namespace for .Net Framework 4 in detail. For most people, this change in policy will be unnoticeable: their program probably was already running in full-trust and the world is the same. For some others, things will be a lot easier: launching their tool from the company's intranet share is now possible without the need to change the .Net Framework policy. For an even smaller set of users, there will be some issues that they will encounter; they were probably expecting assemblies to be loaded as partial trust and now they are full-trust. In this post, we will cover how programs can be migrated to the newer security model and still work in the way they were intended to work. In versions of .Net Framework before v4, we had many ways to restrict the permissions of an assembly or even certain code paths within the assembly: 1. Stack-walk modifiers: Deny, PermitOnly 2. Assembly-level requests: RequestOptional, RequestRefuse, RequestMinimum 3. Policy changes: caspol, and AppDomain.SetPolicyLevel 4. Loading an assembly with a Zone other than MyComputer In the past, these APIs have been a source of confusion for host and application writers. In .Net Framework 4, these methods of restricting permissions are marked obsolete and we hope to remove them at a point in the future. The .Net Framework 4 throws NotSupportedException when encountering calls to functions allowing any of these sandboxing methods. Applications that used these sandboxing APIs will now see an exception similar to this: System.NotSupportedException: The Deny stack modifier has been obsoleted by the .NET Framework. Please see for more information. Here is what you can do, after identifying that your code uses one of the previously described sandboxing methods: 1. Execute the partial trust code inside a partial-trust AppDomain. This approach might appear difficult because it asks you to figure out what trust levels your application needs. Having part of your application running in another AppDomain also requires some consideration about how to do the communication with the objects residing in the new AppDomain. This model might be more complex than just a command that changes the machine policy, but we think it is a better one. This article provides more information about why this sandboxing strategy is better. Here are the steps for creating a new sandboxing AppDomain: 1.1. Remove the Deny, assembly level requests or the caspol command from your application. 1.2. Create a new partial-trust AppDomain with a partial-trust grant set. This is done by calling the overload of AppDomain.CreateDomain that receives a grant-set and a full-trust StrongName list: public static AppDomain CreateDomain(string friendlyName, Evidence securityInfo, AppDomainSetup info, PermissionSet grantSet, params StrongName[] fullTrustAssemblies) The MSDN article talking about this specific overload is located here. This would create a new AppDomain which would give assemblies by default the PermissionSet given as a parameter. Assemblies that have their StrongName present in the fullTrustAssemblies parameter will receive FullTrust as the grant-set. 1.3. Create an instance of one of your classes inside the new AppDomain. Your class would have to inherit from MarshalByRefObject. The API to use is this: public object CreateInstanceAndUnwrap(string assemblyName, string typeName) The MSDN article talking about this specific overload is located here.
This API returns a reference of type object to an instance of a class inside the new AppDomain. You have to convert that instance to your specific type so that you can call the functions defined on your class.

1.4. Call a function on your newly created instance. This function call will be marshaled across the AppDomain boundary and will execute in the new AppDomain, thus in partial trust. Example:

PermissionSet ps = new PermissionSet(PermissionState.None);
ps.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

//Create a new sandboxed domain
AppDomainSetup setup = AppDomain.CurrentDomain.SetupInformation;
AppDomain newDomain = AppDomain.CreateDomain("test domain", null, setup, ps);

//Create an instance in the new domain.
//The class has to derive from MarshalByRefObject. We consider
//PartialTrustTest to be such a class
PartialTrustTest remoteTest = newDomain.CreateInstanceAndUnwrap(
    typeof(PartialTrustTest).Assembly.FullName,
    typeof(PartialTrustTest).FullName) as PartialTrustTest;

remoteTest.EntryPoint();

This gets slightly more complicated if you have two assemblies, where one needs to run in FullTrust and one in PartialTrust. The way to do this is to sign the FullTrust assembly with a key (obtained by calling sn -k) and pass its strong name as the last parameter to AppDomain.CreateDomain.

…
//We consider FullTrustAssm to be the name of the assembly that needs to be
//executed as a full-trust assembly
AppDomain newDomain = AppDomain.CreateDomain("test domain", null, setup, ps,
    Assembly.Load("FullTrustAssm").Evidence.GetHostEvidence<StrongName>());

2. Sandboxing AppDomains might seem complicated and daunting, especially if all you need is to launch an entire executable in PartialTrust and you don't want to know all the details of AppDomain creation and communication. If this is the case, we recommend the PTRunner tool. If you are going to take this route, you still need to remove the Deny, the assembly-level requests, or the caspol command. In order to be able to use this tool, your application has to be launched from the command prompt. The tool is present in the CLR Security codeplex site at project PTRunner.

So let's say your test is located in one assembly called "partialTrustAssembly.exe" that used to have a Deny. You remove the Deny and call PTRunner. Under the covers, the PTRunner tool sets up a sandbox AppDomain for you and launches your application in it. Example:

PTRunner partialTrustAssembly.exe

Now let's say you want one of the assemblies your executable references to be full trust. You would do something like this:

PTRunner -af fullTrustAssembly.dll partialTrustAssembly.exe

You could also do something like this:

PTRunner -af FullTrustAssembly.exe FullTrustAssembly.exe

This is rather fancy, but it runs your assembly as full trust in a partial-trust AppDomain. This allows you to do operations allowed only to full-trust code, like assert, but at the same time run in a partial-trust AppDomain, so demands would still fail.

Or perhaps you don't like the minimum execution permission that your assembly is run under. You could do something like this:

PTRunner -ps Internet partialTrustAssembly.exe

Or perhaps you want your very own permission set that is non-standard. You would write the permission set in XML and call PTRunner like this:

PTRunner -xml Permission.xml partialTrustAssembly.exe

The xml file contains the serialization to XML of a permission set; a short sketch of producing such a file follows below.
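As a rough illustration of that last point, here is a minimal, hypothetical sketch (not from the original post) of how such a Permission.xml could be generated by serializing a PermissionSet with PermissionSet.ToXml(); the chosen permissions, the folder path, and the file name are placeholders used purely for the example.

using System.IO;
using System.Security;
using System.Security.Permissions;

class PermissionXmlWriter
{
    static void Main()
    {
        // Build a small example permission set: execution rights plus read
        // access to a hypothetical data folder.
        PermissionSet ps = new PermissionSet(PermissionState.None);
        ps.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        ps.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, @"C:\MyAppData"));

        // PermissionSet.ToXml() returns a SecurityElement whose text form is
        // the XML serialization of the permission set.
        SecurityElement xml = ps.ToXml();
        File.WriteAllText("Permission.xml", xml.ToString());
    }
}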
3. If you don't have the luxury of launching a runner that, in turn, launches your test, another way to launch code in partial trust is to use the SandboxActivator. This is located in the CLR Security codeplex site in the Security 1.1 library and will help you set up a new AppDomain in an easier way. It is basically a wrapper around the AppDomain.CreateDomain and AppDomain.CreateInstanceAndUnwrap calls from the sandboxed-domain example above, and it returns you just an instance of your type in the new AppDomain. You still need to remove the Deny, the assembly-level requests, or the caspol command from your application and make one of the existing classes derive from MarshalByRefObject. Then you call SandboxActivator.GetPartialTrustInstance, which will return a new instance of your type in a partial-trust AppDomain. A method called on this new instance will execute in a partial-trust sandbox.

//This will create a sandboxed AppDomain with the Execution permission set
MainClass m = SandboxActivator.GetPartialTrustInstance<MainClass>();
m.DoPartialTrustCall();

These are the public APIs available through the SandboxActivator interface:

public static AppDomain CreateSandboxedDomain(PermissionSet ps, params Assembly[] fullTrustList)
public static T GetPartialTrustInstance<T>(PermissionSet ps, params Assembly[] fullTrustList) where T : MarshalByRefObject
public static T GetPartialTrustInstance<T>(params Assembly[] fullTrustList) where T : MarshalByRefObject

4. The last method to make your application work is not typically recommended by us. We have a way to set your process to run in a legacy mode that follows the pre-v4.0 security model. We prefer users of the .NET Framework to fully migrate to the new security model; we introduced this only for exceptional situations. This legacy mode will be available only in v4 and will be removed in future versions. We also believe the new security model does a better job of setting up a sandbox for your application to run in. So there, you have been warned not to use this method. To follow this migration path, you keep the code as it is and add a config file for your executable that looks like this:

<configuration>
  <runtime>
    <NetFx40_LegacySecurityPolicy enabled="true" />
  </runtime>
</configuration>

Cristian Eigel, SDET, CLR

This sounds like the right thing to do. After all, the CAS policy stuff was not only overcomplicated, but also underdocumented and buggy. (What a combination!) I'm really looking forward to the new model, but there's one thing that I'd like you to clarify: with the current model, it is possible to elevate an assembly to full trust, even if it is loaded in, say, IE. So although IE explicitly uses APIs to load its controls in a partial-trust zone, we can give it full trust without IE even noticing. With the other options, I believe that each and every application would have to manage exceptions to their partial-trust policy, right? And if the application doesn't have configuration options to do so, there is no way to work around it, right? While this was a real PITA, at least it was possible after some trial and error. In case my assumptions are correct, do you have any idea how the various app teams at MSFT are dealing with this? Are you working with them? (I'm particularly interested in IE.) Thanks, Stefan

What you call overrides at three places are actually overloads. I didn't bother reading the rest of the article after those mistakes.
Hi Stefan, It is no longer possible for partially trusted code to be elevated to full trust the way it was in the previous model. You’re pretty much right on – each host needs to provide its own interface for trusting assemblies. It is very simple to administer exceptions to the partial trust permission set with the CreateDomain API that Cristian describes. The CreateDomain API creates an AppDomain where there are two trust levels – full trust and partial trust. Everything loaded into the domain is partially trusted unless it’s on the full trust list – the ‘params StrongName[] fullTrustAssemblies’ argument in the overload. This, coupled with the HostSecurityManager features provided ( ), provides the host the ability to define its policy for hosted code. We are indeed working with the various teams that host partially trusted code internally. What particular IE scenario are you interested in? Thanks, Andrew Dai MS Common Language Runtime Security Program Manager Hi Andrew thanks for your response. We have a managed IE control that allows an HTML-based app to interact with the desktop. We use CAS Policy to give this control (or rather, its assembly) full trust. Wouldn't it be nice if the SandboxActivator could automatically read some standard configuration section from the from app.config file? This way, users would not depend on every single app to include some exception mechanism, and app authors wouldn't have to do the work? (Then of course you'd have to make sure that every critical app, like IE, uses SandboxActivator, or provide some easy way to access that config from unmanaged code.) The gist of the new CAS policy model is that everything unhosted runs as fully trusted, and hosts get to decide what the security policy is for their hosted code. Applications will have full trust unless they’re hosted, and there’s no way for the hosted, partial trust apps to elevate out of the host’s sandbox. To allow that would remove from the host’s complete control of its own security policy, which is one of the things we set out to provide for this release. Specifically for IE hosting of managed controls - this is no longer supported in .NET Framework 4, as Silverlight is superior in terms of user and developer experience for browser-based managed code. Note that your control will continue to work as it did as long as it’s not recompiled against .NET Framework 4. Andrew Hi Andrew, thanks for the clarification. I disagree with your conclusions, though. I understand the reasons for the change and I agree with your decision, but I think it's a mistake to remove the option to opt out of the sandboxing model altogether. This leaves us with no option to use managed code for browser controls in situations where we need full control. Sure, the app needs to have control, but your SandboxActivator could provide a nice standard way to let the user configure exceptions. (The app could still decide to disable that.) Silverlight is superior in many situations, granted, but Silverlight code always runs in a sandbox. (This is an interesting topic on its own. Compare this to Moonlight, which can run code inside the sandbox, but also on top of the normal Mono runtime - what would be your Desktop CLR. That allows for code that can run in the browser, but the same code can run with full permissions. A nice option for enterprise apps that can run online or disconnected, with a local DB and everything, or need to interact with desktop apps like Office. 
And the compatibility story is obviously much smoother than Silverlight/WPF. But I digress.) I wonder why the assemblies loaded from byte array by the MBR object in the sandboxed appdomain inherit the permission set from the loader assembly. I expected their permission set to match the appdomain’s. What's the technique for having partially trusted dlls loaded dynamically from byte[]? Dear Andrew, Probably the old security logic with CAS was complex, and it took a while to understand, but once you figured it out, it was flexible, and could be configured in a lot of different ways. You could grant specific permissions to specific assemblies by rolling out an MSI package. Once this was done within our company, it was no problem for us to create a signed ActiveX, which would run in the browser but still access local files. Now with the new logic you have either full trust or the predefined set of permissions as decided by the hosting application. Only two possibilities, instead of millions of different combinations. And as there are no settings for this within the typical host IE - you are stuck with the default ones. For us this means, that in .NET for, we can not implement our ActiveX any longer. Also, IMHO in an ideal world, assemblies should define what permission they would like to get, but the real permissions granted should be decided by system administrator. This is now definitely not possible any longer, as the CASpol tool is obsolete. IMHO, you went in the wrong direction... Oliver If it is complex maybe you can make it simpler or explain better in the documentation. Removing it altogether?!!! makes things a lot more complicated. How is this simpler? Just keep it as an option like turning the setting flag on. Why remove it????
http://blogs.msdn.com/b/dotnet/archive/2009/06/10/new-security-model-moving-to-a-better-sandbox.aspx
12 October 2012 10:27 [Source: ICIS news] SINGAPORE (ICIS)--AA prices in the Chinese market were assessed at yuan (CNY) 13,000-13,100/tonne ($2,070-2,086/tonne). Among the AA facilities that are currently shut are Shanghai Huayi's 60,000 tonne/year unit and Shenyang Lahua's 80,000 tonne/year unit. Shanghai Huayi's plant was taken offline on 8 October, with the shutdown expected to last a week, a company source said. Shenyang Lahua's unit, on the other hand, was shut because of a power failure on 23 September, with no definite restart date, a company source said. The company's 460,000 tonne/year plant in Nippon Shokubai is expected to buy 50,000 tonnes of AA from other countries in Most sellers in ($1 = CNY6.28) Additional reporting by Liu
http://www.icis.com/Articles/2012/10/12/9603324/china-acrylic-acid-may-extend-gains-on-tight-supply.html
C9 Lectures: Stephan T Lavavej - Advanced STL, 1 of n - Posted: Feb 10, 2011 at 12:47 PM - 112,106 Views - 105 Comments

In the first part of this n-part series, Stephan digs deeply into shared_ptr. As you already know (since you will have the prerequisites in place in your mind before watching this; remember, watch the intro series first), shared_ptr is a wrapper of sorts: it wraps a reference-counted smart pointer around a dynamically allocated object. shared_ptr is a template class (almost everything in the STL is a template, thus the name...) that describes an object (int, string, vector, etc.) that uses reference counting to manage resources. A shared_ptr object effectively either holds a pointer to the resource that it owns or holds a null pointer. A resource can be owned by more than one shared_ptr object, and when the last shared_ptr object that owns a particular resource is destroyed, the resource is freed. You will also learn a lot about the beauty and the weirdness inside the STL. You should take Stephan's wisdom to heart and see if you can implement some of the patterns he shares with you in your own code, and you should of course take his advice about what NOT to do in your native compositions. Welcome back, STL!!! Tune in. Enjoy. Learn.

I've been really looking forward to this. Can't wait to see the rest of C++0x (language and library enhancements) make it into Visual Studio .NEXT.

Good videos. Now watching. Thank you S.T.L.

Great video. Looking forward to the rest. You are great! In future I'd like to hear something about sorting, trees (maps) and some tricky algorithms.

Yes! The new series is here! And shared_ptr is a great place to start it. So happy. Welcome back.

I thought that video was great. As for ideas for future videos, one on std::string would be great.

This video was a bit easy for me, but I've dealt with shared_ptr implementations in detail before. Here are some topic ideas: How exceptions work under the hood (the exception object is stored on the stack!) How dynamic_cast works, especially in cross-casting situations (I understand there are some nasty x64 hacks) How you prevent STL code bloat (are the void * + static_cast tricks still necessary)?

Hi STL, excellent video on shared_ptr. I always look forward to your STL videos. I have some STL questions I hope you can answer. My user suggested to me to implement iterators for iterating elements in my open-source xml library. Is it possible to implement my own iterators to work with the STL algorithms in VC8, VC9 and VC10? It will be best if my custom iterators can work with algorithms in all STL implementations out there. I am thinking of rewriting my own next_combination algorithm to take advantage of bidirectional and random access iterators. Right now, it only uses the least-common-denominator iterators (bidirectional iterators), which is slow because the algorithm has to increment the iterator 1 by 1, instead of 'jumping' to the required iterator. Is it possible to write my algorithm to work with all VC(8,9,10) STL iterators or all STL implementations? Are the iterator trait tags the same? Thanks!!

Thanks for watching, everyone!

Marek> I'd like to hear something about sorting, trees (maps) and some tricky algorithms.
Good ideas - I've definitely been planning to explore various algorithms, and our sorts and trees contain some of the most interesting machinery. MikeK> As for ideas for future videos, one on std::string would be great. std::string's Small String Optimization is worth looking at. And going through its support for move semantics would give me a chance to explain in detail the Standardization Committee's bug that we fixed right before VC10 RTM thanks to an observant customer. Also, I recently optimized string::resize() and string::erase(), and it might be useful to show what I did and how I looked at the generated assembly. Mr Crash> Finally some C++ goodness (took you a month and a half) Heh. The studio was still undergoing renovation in January, and also I've been very busy with VC11. Charles said that he wanted to get me in the studio every week, and I was all, "great idea, but I've got this day job that you might have heard about..." :-> Ben Craig> This video was a bit easy for me, but I've dealt with shared_ptr implementations in detail before. I'll do my best to cover mind-bendingly complicated topics in the future. :-> Ben Craig> How exceptions work under the hood (the exception object is stored on the stack!) How dynamic_cast works, especially in cross casting situations (I understand there are some nasty x64 hacks) (Un)fortunately, these things are beneath my level of abstraction, which is to say that they're deeply magical to me, I'm glad they just work, and I'm very glad that somebody else has to worry about their implementations. In particular, almost all of that machinery is in the compiler, not the libraries. It might appear that I know a lot about the compiler, but my knowledge mostly ends where the Standard ends. Ben Craig> How you prevent STL code bloat (are the void * + static_cast tricks still necessary)? We rely on /OPT:REF,ICF magic. That's basically guaranteed to merge stuff like vector<X *> and vector<Y *> (note: STL containers of owning raw pointers are leaktrocity, as I explained in the intro series, but STL containers of non-owning raw pointers are perfectly fine and sometimes useful). In 4 years of maintaining the STL, I haven't seen a single customer reporting code bloat problems. shaovoon> Is it possible to implement my own iterators to work with the STL algorithms in VC8, VC9 and VC10? It will be best if my custom iterators can work with algorithms in all STL implementations out there. Totally possible. That's the best thing about having an International Standard, and a library designed by a genius (Stepanov) with easy and efficient extensibility in mind. shaovoon> Right now, it only uses the least-common denominator iterators (bidirectional iterators) which is slow because the algorithm has to increment íterator 1 by 1, instead of 'jumping' to the required iterator. That's what std::advance(), std::distance(), std::next(), and std::prev() are for. They're O(1) for random-access iterators and O(N) for weaker iterators. See 24.4.4 [iterator.operations]. shaovoon> Are the iterator trait tags the same? Yep, they're Standard. @STL: "I'll do my best to cover mind-bendingly complicated topics in the future. :->" Yes please! I do like to watch the "easier" stuff, but the mind-bending stuff is always fun too. The deeper you dive the better. @Charles: STL's comment brings another idea to mind, but I suspect it would be hard to pull off. I would REALLY like to see a series with the compiler guys, digging into the gory details of the compiler. 
(Maybe that is too proprietary to share. I'm sure the compiler guys are busy too.) Or if that is too specific, maybe a more general series on compilation-related topics, parsing, translation, etc.... whatever happened to Phoenix, C# compiler as a service, etc... Just thinking out loud here. Ah, the download FINALLY completed (C9 is really slow tonight). Off to watch this episode. Stephan, we want you every week, plus cohosting a monthly C++ TV, man C I've been always wondering why the C++ Standards Committee decided against adding intrusive_ptr to SC++L? A shared_ptr not only is sizeof( void* ) greater in size than intrusive_ptr, but it also requires an extra heap allocation for the ref count block (which can be avoided by using make_shared, but make_shared can not be used in scenarios where allocation needs to take place at a different site than pointer definition.) Besides, shared_ptr's approach to thread safety forces a design where reference counting must be done atomically regardless of whether the pointer is accessed by multiple threads or not. intrusive_ptr on the other hand, would allow for a design where such decisions could be made per object type. So my question is why?! Why don't we have a std::intrusive_ptr type like Boost? We could name it Native TV, but I don't want to offend my Native American brothers and sisters (and friends). Nor do I want to cause angst among the native developers out there who don't program in C++... C++ TV sound good, Niners? C @Charles: Keep C++ TV video length at 40+ mins, ok? Because I always couldn't wait till I get home to watch the STL videos. I usually watch it at workplace during lunchtime. 15mins lunch and 45 mins of STL goodness! C++ TV sounds good to me Charles. This is suddenly starting to sound very old-school 9. Paging Robert Hess. Somebody find Erica to read the news... You should bring back the .Net Show as a weekly program (and not just for .net). @ryanb: I like that idea. Dr. Hess? C C++ TV! Awesome++ Always excited see videos for STL by STL!! Interesting lecture, I think it would have been good if you went into the assignment or copy constructor of the shared_ptr so its clear how the control block is passed around to the various copies of the shared/weak ptr's. Other than that it was an insightful view into the topic, also interesting how you used the pre-processor macro's and the inclusion of the file multiple times to generate the different combinations of templates required (rather than using code generation tool or similar). The fact its included 11 times, does that mean that the STL version only supports up to 11 arguments for the constructor? really cool stuff, i snuck a peek even though im only part way through the intro series just on a c++TV note, it would be really really awsome to see more directX stuff on c9, and more stuff on managed c++ maybe and a better-together series with managed and unmanaged @ryanb: That's good stuff, Ryan. As you say, the C++ compiler people are extraordinarily busy - but it's not inconceivable that we go and meet them, dig into how the front end and back end compilers parse, analyze, optimize, etc... There is a huge amount of stuff we need to do further up the stack at the language level as well. It's also clear that we should consider exploring the jewel in the haystack: the machine. 
After all, the notion of "native code" we interchangeably use when refering to C++ really means the high level syntax we humans compose in such a way that efficiently abstracts what the machine will eventually do with the processing instructions (machine code) created by the compiler (in C++, the back end compiler...). The argument on reddit about my stating that C/C++ is native code (in the description of the Mohsen and Craig interview) is entertaining, by the way. Love the passion out there C PS: Let's end this tangential conversation. We can move the C++ TV ideas to the Coffeehouse and leave the comments on this thread for the topic at hand ->The STL's shared_ptr implementation. Stephan doesn't have much free time - so let's make it easy for him to parse this thread for related questions/comments. Also, if you feel compelled to debate the meaning of "native code" then Coffeehouse is the place. Thanks for your understanding Here is something I used to wonder for a long time. It is impossible to create an array of a user-define type that has no default constructor (unless you explicitly initialize all elements via the array-initalizer, of course). How does std::vector pull it off even though it uses an array internally? So if it's not too trivial, you could talk about placement new and explicit destructor calls. (This would be a perfect place to discuss various memory management and object lifetime details/issues.) Also, I would love to see a guide through the implementation of unordered_set, provided there is any interesting "magic" going on. I guess <initializer_list> is not practical to talk about yet due to lack of support in VC, right? good presentation. Fix the blurry text problem with code in VS. Part of the time, I could not read the code. One thing: why are shared pointers considered "advanced STL"? I know they're not as well-known as some of the other STL things, but they really shouldn't be considered advanced material. "How does std::vector pull it off even though it uses an array internally?" That's pretty easy. std::vector allocates bytes, not arrays. It doesn't internally do a "new ClassType[size]"; it just calls the allocator and asks for a block of memory with a size of "sizeof(ClassType) * size". When it adds an entry, it first constructs that piece of memory by calling a placement new, then issues the copy/move constructor. Ashkan> I've been always wondering why the C++ Standards Committee decided against adding intrusive_ptr to SC++L? I've been on the Committee's mailing lists for the last 4 years, but I haven't attended any of their meetings, so I can't answer this with perfect precision. (Also, while I worked on getting Dinkumware's implementation of TR1 into VC9 SP1, the design of TR1 happened before my time.) My understanding is that intrusive_ptr wasn't proposed for inclusion in TR1/C++0x, rather than being proposed and rejected. You might be able to get a more detailed answer by asking on the Boost mailing list. fileoffset> I think it would have been good if you went into the assignment or copy constructor of the shared_ptr so its clear how the control block is passed around to the various copies of the shared/weak ptr's. Agreed - especially for the converting copy constructor from shared_ptr<Derived> to shared_ptr<Base>. However, I have to cram everything into 40-45 minutes, and there just wasn't time. I also had to spend some time explaining the overall series. 
> The fact its included 11 times, does that mean that the STL version only supports up to 11 arguments for the constructor? 0 to 10. 10 is infinity. This Standard Library doesn't go to 11. :-> NotFredSafe> So if it's not too trivial, you could talk about placement new and explicit destructor calls. I may be able to work that into a future part. My concern wouldn't be that it's trivial - it's actually rather complicated - but that it's not widely useful enough. Using Part 1 as an example, type erasure is an enormously powerful trick that can be used in lots of situations, and knowing about make_shared<T>()'s optimizations can help you to use the STL more effectively. Placement new seems very low-level, given the number of times I've had to explain it to people (not many). Still, when I start poking around the guts of containers I may have to mention it whether I like it or not. > Also, I would love to see a guide through the implementation of unordered_set, provided there is any interesting "magic" going on. Some magic - actually, we've been squashing debug perf bugs there in VC11. (Debug perf isn't terribly important, except when it's so slow as to be un-debuggable!) I'm not too familiar with unordered_foo's machinery, but I could probably figure it out pretty quickly. > I guess <initializer_list> is not practical to talk about yet due to lack of support in VC, right? Correct, VC10 RTM doesn't support initializer lists. It contains a nonfunctional <initializer_list> header because I simply forgot to remove it. (I carefully scrubbed out Dinkumware's library support for initializer lists, but forgot about a whole header. Go figure.) JamesG> Fix the blurry text problem with code in VS. Part of the time, I could not read the code. What blurry text? I downloaded the High Quality WMV, viewed it at 100%, and forwarded to 30:28 - meow.cpp's Consolas font is ginormous (as intended), and I can even clearly read Intellisense's tooltip, which I feared would be invisible. Nicol Bolas> One thing: why are shared pointers considered "advanced STL"? Their *implementation* is advanced. Their interface is simple - I explained how to use them in Intro Part 3. @JamesG: Were you watching the streaming version? Sounds like a smooth streaming issue. There certainly wasn't anything blurry in the High Quality WMV. Yes, I know. That's why I said "used to wonder" and later mentioned placement new. You are an excellent presenter Stephan. This series is great to watch and follow even for more experienced developers. Altohugh I have been using the STL for years its internals have always been intimidating. Vut your clear presenting and expert knowledge give us mortals a much better understanding of the magic. Keep up the good work. Based on the previous STL videos and my interest in using templates to unroll loops, I have the following questions: 1) What is the equivalent VS2010 C++ flag that corresponds to g++'s -ftemplate-depth=n ( )? Can this flag be set in the Visual Studio's Project/Properties/Configuration Properties dialog box (or some other dialog box) or just at the command line? 2) Since C++0x's maximum template instantiation depth seems to be implementation dependent ( - Section 14.7.1 - Point 14 - Page 372), what is the maximum template instantiation depth of VS2010's templates? Or is it template parameter dependent (i.e. compile-time stack dependent)? 
3) At compile-time, is there any way (even some weird compiler-dependent macro) to determine how deep into template instantiations we are without keeping track of that depth ourselves?? 5) What improvements can be made to my loop unrolling code at the end of this post? Background:This code can be compiled in g++ 4.5.2 using the following command line (assuming the code is in a file named main.cpp): or compiled in VS2010 using Warning Level 4. For VS2010, it takes about two minutes to compile in Release mode if you are also producing the Assembly with Source Code ( /FAs ... or Properties / Configuration Properties / C/C++ / Output Files / Assembler Output ) as well. This code produces the following output: Previously, I asked if we could cover loop unrolling (esp. for assignment statements). Normally, if I needed to repeatedly perform a few hundred thousand assignment statements (i.e. copying the buffer of images or video frames for processing) and I wanted to minimize the impact of the "i < size" and "++i" in that for-loop, then I would just write out a for-loop with a bunch of assignment statements in the for-loop body using a script and then copy & paste that code into the relevent cpp file by hand. Of course, this manual for-loop unrolling assumed that the size of the loops (i.e. the size of the images or video frames) weren't going to change from one compilation to another. A few weeks back, I had to come up with something a little easier to work with since I was going to be dealing with a number of different buffer sizes (all still known at compile time ... no run-time querying). With the help of pages 314-318 of C++ Templates: The Complete Guide, I ended up writing something like the following: Thanks In Advance, Joshua Burkholder Matt: Thanks! That's exactly what I'm trying to do here. Burkholder> 1) What is the equivalent VS2010 C++ flag that corresponds to g++'s -ftemplate-depth=n According to my knowledge, VC doesn't have a compiler option to control the maximum template instantiation depth. Burkholder> 2) Since C++0x's maximum template instantiation depth seems to be implementation dependent C++03 said "17 or more". C++0x says "1024 or more". Burkholder> what is the maximum template instantiation depth of VS2010's templates? VC10 RTM appears to believe that 500 is infinity: Burkholder> Or is it template parameter dependent (i.e. compile-time stack dependent)? Increased template complexity may reduce this limit. Burkholder> 3) At compile-time, is there any way (even some weird compiler-dependent macro) to determine how deep into template instantiations we are without keeping track of that depth ourselves? No. I've never heard of any compiler having such an ability, and it would be extremely problematic. Burkholder>? This is up to the optimizer and your optimization settings. Crazy magic happens here. Burkholder> 5) What improvements can be made to my loop unrolling code at the end of this post? Consider using SSE, etc. Video processing is a perfect scenario for vectorization. (Of course, for simple copying, just use memcpy()/memmove(). In fact, our implementation of std::copy() calls memmove() when it can get away with it - something I'm very likely to cover in future parts.) Great lecture as ever from STL of the STL. The pace was spot on - if I needed something clarifying I could use the seek bar. The book 'C++ in Action' recommends using a leading underscore to name private data members ( (scroll to bottom)) which I started doing but couldn't stand its ugliness after a while. 
Now I know there's an even stronger reason not to use this convention. I would like to see how the STL can be used to implement machine learning, search algorithm optimisation (e.g. iterative deepening, incurred cost estimation, etc) and other aspects in the AI field. I'll be happy with whatever direction you go in though...good work!! Would be nice if you got the watch window font to a size that can be seen. Be nice if you found out why the watch window was wrong, too, but I know it's too late to fix for 10. Maybe 11, then. Philhippus> using a leading underscore to name private data members To clarify, only _Leading_underscore_capital and double__underscoreAnywhere names are reserved everywhere. _leading_underscore names are reserved in the global namespace, but users can use them in classes. (See N3225 17.6.3.3.2 [global.names].) In my opinion, _member and member_ are terribly ugly. I use m_foo for members, because the lifetime of a data member exceeds that of any individual member function, and it's important to be constantly reminded of that fact. (This isn't Hungarian notation, which is evil - that attempts to encode types into names.) Philhippus> I would like to see how the STL can be used to implement machine learning, search algorithm optimisation (e.g. iterative deepening, incurred cost estimation, etc) and other aspects in the AI field. I'm not familiar with those domains, sorry. As I explained back in Intro Part 1, the STL is a library for pure computation, so you get to figure out how to apply it to your field. :-> (My Nurikabe solver was an attempt to demonstrate how the STL could be applied in a nontrivial program - but while I thought it was fascinating, I'm not sure how successful it was. It also took me weeks to write, something I can't easily do again. Even repurposing my code at home for data compression or font rendering would take a while.) WATCHER> Would be nice if you got the watch window font to a size that can be seen. I couldn't find an option for it. If somebody could find one, I'd be very grateful. WATCHER> Be nice if you found out why the watch window was wrong, too I think I'll reformat my laptop before filming Advanced Part 2. I may have messed with the visualizers in the past, but I thought I put everything back to its original state. @ryanb: yes, I was watching the streaming version. I'll make it a point to look for a download next time. @JamesG: Smooth streaming will streaming quality ranges from low to high depending on your network conditions. We are aware that this isn't a great experience when there's code on screen and your network isn't capable of a large data stream. The dowloadable files are located under the Download section next to the inline player. C Is "type erasure" another name for the GOF strategy pattern or is there a subletly that I am missing here? Ugh, design patterns. As far as I can tell (I'm looking at the book right now), "Strategy" means "customize behavior". The STL does this in lots of places: functors given to algorithms, comparators given to maps, allocators given to containers. Their bullet point "Strategies as template parameters." covers this. Now, compare how vector and shared_ptr handle custom allocators - they affect vector<T, MyAlloc<T>>'s type, but don't affect shared_ptr<T>'s type. That's type erasure. @Charles: maybe you should use a 2-pane layout for lectures and show us the slides and code in another view, just like the pdc player or msr lectures player. 
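For readers who want to see that idea in code, here is a minimal, hypothetical sketch of the type-erasure trick (this is not the actual VC implementation): the deleter type D is captured in a derived control-block class, so it never appears in the owner's own type, just as a custom deleter never appears in shared_ptr<T>'s type.

#include <iostream>

// Abstract "control block" interface - knows how to destroy, but not with what.
struct erased_base {
    virtual void destroy(void* p) = 0;
    virtual ~erased_base() {}
};

// Concrete control block - this is where the deleter's type D lives.
template <typename T, typename D>
struct erased_impl : erased_base {
    D deleter;
    explicit erased_impl(D d) : deleter(d) {}
    virtual void destroy(void* p) { deleter(static_cast<T*>(p)); }
};

// The owner's type mentions only T, never D - that's the type erasure.
template <typename T>
class holder {
public:
    template <typename D>
    holder(T* p, D d) : ptr(p), ctrl(new erased_impl<T, D>(d)) {}
    ~holder() { ctrl->destroy(ptr); delete ctrl; }
private:
    T* ptr;
    erased_base* ctrl;
    holder(const holder&);            // noncopyable in this sketch
    holder& operator=(const holder&);
};

struct my_deleter {
    void operator()(int* p) const { std::cout << "deleting\n"; delete p; }
};

int main() {
    holder<int> h(new int(1729), my_deleter()); // D is erased from holder<int>'s type
}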
and there should be a formal section for download links of sildes or other downloadable materials. First: Thanks for a fantastic video, Stephan. Type erasure is used in many places now (std::function, boost::any...) and it's a fantastic idiom that helps decoupling implementation details from interfaces. One question though: If the make_shared allocation allocates the object and the reference counting block together, do they also have to be freed together? If I have a very big object and no more "uses", but still weak links, will the memory not be freed? @felix9: In terms of the split-screen, possibly... Stephan, can you post your slides? I understand what you mean by formal, felix. Slides, code would live under Downloads. That's a good idea. C Hi STL, love your videos, keep em coming :) I have a question that i think you can answer or at least clear up a bit. For some time now i have been confused about which of these to use for dynamic buffers that can be as small as 1 byte to over 500 megabytes and beyond, and modifiable. (But normally around 2 - 100 megabytes) vector buf; or unique_ptr buf(new BYTE[...]); Which one do i use and why. Performance is a priority. Marius> If the make_shared allocation allocates the object and the reference counting block together, do they also have to be freed together? Yep. When all of the shared_ptrs have been destroyed/reset/assigned/etc. the object will be destroyed, but the refcount control block containing space for the object will persist until all of the weak_ptrs have been destroyed/reset/assigned/etc. Marius> If I have a very big object and no more "uses", but still weak links, will the memory not be freed? Yep. This is the one scenario (big object, weak_ptrs) where traditional shared_ptr construction is better than make_shared<T>(). Of course, only sizeof(T) matters, not the size of any dynamically allocated memory it might contain - for example, vector<T> is small according to this metric (only 16 bytes in VC10 and 12 bytes in VC11). Charles> Stephan, can you post your slides? They're here: Marty> Which one do i use and why. Use vector<unsigned char> instead of unique_ptr<unsigned char[]> unless your scenario would strongly benefit from unique_ptr and you know exactly what you're doing. vector is much more powerful (and still insanely efficient), beginning with the fact that it knows its own length. vector stores 3 raw pointers compared to unique_ptr's 1, but that matters only when you've got a zillion of the things - and if your buffers are megabyte-size I can guarantee that you don't have a zillion of the things. Way back in Intro Part 1, I stressed that vector should be your container of first resort. That's still true. STL: "Way back in Intro Part 1, I stressed that vector should be your container of first resort. That's still true." Yes i did see it. There are so many ways of doing the same thing with the stl, that it is sometimes hard / confusing to know what to use in which situation, etc. Your videos are helping in that area though. :) Is the above still true when you use that vector buffer and cast it to a structure ex LPBITMAP and then change values in it and then saving the buffer to file again and / or the other way new up a new vector to use as a temporary buffer, inserting structure info/ file headers, etc and then saving it to a file ? "unique_ptr unless your scenario would strongly benefit from unique_ptr and you know exactly what you're doing" "...strongly benefit..." Can you elaborate on that ? 
I like to know both sides of the story before i choose a side. side1:pros/cons vs side2:pros/cons I like to be thorough, sorry if i'm a bother. Marty: Yep, still true. Also, vector works just fine with C-style APIs, where you can pass v.data() and v.size(). Marty> "...strongly benefit..." Can you elaborate on that ? Basically, if you have a zillion - like ten million - tiny buffers (so fixed overheads per buffer really matter), AND their sizes aren't known at compiletime (so you can't just use std::array), AND they're not growing/shrinking during execution (so you don't have to worry about performing reallocation yourself), AND you can determine their runtime size from other information you're already storing (otherwise, you'll need to store unique_ptr<T[]> and size_t, which is 8 bytes versus vector's soon-to-be 12). In that contrived case, unique_ptr might be better than vector. In all other cases, use vector. It's really that simple. Can not watch this voide. @哥哥: Where are you located? Do the Download links not work? C Regarding your question about the pace of these shows, I listen to the audio while driving so the pace is fine for me. Perhaps if I were to watch the video it would be too slow but as it is I can usually follow everything that's going on without having too many fatal accidents. I do have a question regarding this show, when using make_shared the T must sometimes be destroyed before the reference counting block (if weak pointers exist). How is this managed? Do you use a char buffer with placement new and then call the destructor explicitly? If so how do you guarantee that the memory block to T is correctly aligned (is it as simple as it's the first part of the structure returned from new and therefore aligned to all types). Normal 0 false false false EN-US X-NONE HE guarantee Motti: See _Ref_count_obj in <memory>. Yes, we use placement new and explicit destructor calls. We also use a fancy bit of machinery called std::aligned_storage to guarantee alignment - as its name indicates, it's Standard machinery available for public consumption (by experts who know what they're doing). Stephan: Excellent show! You have mentioned Standard machinery available to deal with object's alignment. It'd be great if you could cover the STL allocators and the new C++0x alignment specifiers with STL. Implementing an SSE-friendly-container guaranteed-alignment allocator, along the lines of estl:allocator discussed in N2271 ( ) would be a fantastic example! @Matt_PD: on the comments of before video (the one about type_traits) I make a comment about std::aligned_storage, I almost sure I'm exploiting it wrongly but it worked fine and really grant the alignment of subsequent data. For work with most multimedia instructions (SSE, AVX, etc) the initial address must be aligned too, with the info of this series I'm now testing shared_ and unique_ptr with the compiler aligned_alloc and aligned_free. It makes easy use the std::vector to store, lets say, 4 or more small aligned buffers for audio/video processing, but it is compiler dependent.I believe using placement new and delete would helps decouple/shields the code. Anyway I'm loving the series and the slides from STL is helping to demystify the source code we see when playing with debug. Lets hope next version of VS makes easy add the code for formating debug. I had posted this over in the C++ Blog. @Stephan To change the size of the text in the watch window do the following. Menu -> Tools -> Options.. 
Options Dialog -> Environment -> Fonts and Colors Select the drop combo for Show settings for: Select Watch Windows Change the font size to what you want. This will make the lines of the watch window bigger. Keep them coming. The whole series is amazing. When you'll finish the STL you could put some BOOST lectures(i'm pretty sure you know that too). Good luck. For SEH in Windows/Win32, specfically, this is the seminal paper on the topic of exceptions. Well worth a read if you haven't already: C @STL: Thanks for the AWESOME suggestions!!! SSE, memcpy(), and memmove() are amazing! I'm a complete newbie to SSE ... but WOW ... it seems like using one instruction to load multiple floating point values into a 128-bit register and then using another instruction to store those values is a quicker way to go. I have a few questions on SSE: 1) Since I'm a newbie to SSE, I used the following type of code to copy memory from one place to another:In this case, is this the correct way to use SSE? Or is there a better way? 2) The following "normal" type of code (i.e. no _mm_xxxx_pd() stuff) also compiled and ran:Will this type of code (i.e. using __m128d * the same way I would double * or any other pointer) be valid in the future? Or is this something that works in VS2010, but might not work in future versions? ... If this will work in future versions, then what's up with all that _mm_xxxx_pd() stuff? 3) I have no idea if I'm writing good or bad SSE code. What are the suggested tutorials? Are there any good books? Lastly, memcpy() and memmove() were faster at copying a single image than anything that I could write ... even with SSE and loop unrolling. I could only beat memcpy() and memmove() when I took into account my specific situation ... copying a single image buffer into two images ... where I could use a single for-loop for both images (as above), vice a separate loop for each image. So my question is: 4) What is the "secret sauce" in memcpy() and memmove() that makes them so much faster? Is the implementation of VS2010's memcpy() or memmove() available? If so, where can I find that code? Definitely cover memmove() when you cover std::copy(). Thanks In Advance, Joshua Burkholder If memory serves me well, since VS2008 memcpy and memmove already make use of SSE instructions (including cache bypass) when you compile in release mode and with SSE flags up (/arch:SSE2 or /arch:AVX etc), i ready about it on some specialized sites (new memory not nerve me :doh:) even string search functions can take advantages on SS4.2 (with specific str intrinsics). You don't need manually make move bytes in your code, unless u doing something specific like, move and expand YCbCr to RGB in the same pass. And if you like it, AVX (new instructions from 2nd generation of Core iX) are 256bit wide. Sorry the typing mistakes above. [old]"i ready about it on some specialized sites (new memory not nerve me :doh:) even string search functions can take advantages on SS4.2".[revised]"I read about it on some specialized sites (now memory not serve ...) ... SSE4.2). Another thing i want to add, u using #include , you only need include , this header include all other headers and include check macros against architectures (some intrinsics are 64bit or itanium specific and the macro nulls then) you are using emmintrin.h, you need only use intrin.h (better I make an account to edit my posts :P). Sorry for the triple post. 
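As a concrete (and purely illustrative) companion to the SSE discussion above, here is a minimal sketch of an SSE2 copy loop using the unaligned load/store intrinsics from <emmintrin.h>. As noted just above, memcpy()/memmove() already use SSE internally in release builds, so real code should simply call them; this sketch only shows what the intrinsics look like.

#include <emmintrin.h> // SSE2 intrinsics
#include <cstring>
#include <cstddef>

// Copy n bytes, 16 at a time, with unaligned SSE2 loads/stores; the tail is
// handled by plain memcpy. The aligned variants (_mm_load_si128/_mm_store_si128)
// would require 16-byte-aligned pointers.
void sse2_copy(unsigned char* dst, const unsigned char* src, std::size_t n) {
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i chunk = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src + i));
        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst + i), chunk);
    }
    std::memcpy(dst + i, src + i, n - i); // remaining 0-15 bytes
}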
@new @Burkholder: (now I make an account ), unfortunate I don't know any book about SSE/AVX, at least none for programming (when can find any they are a thousand page manual). Most I learned on intrinsic was reading Gamasutra articles and Intel (few times IBM) whitepapers. The Intel ones are complex but portable, as VS uses same headers (GCC prefer use vectorization classes). Is safe nowdays use SSE2 code, and near safe use SSE3. Some notable citizens: AMD CPU OpenCL and WARP (Windows Advanced Rasterization Platform). A foot note, intrinsics are mandatory when compiler is targeting 64bit. I need search if there is any form of direct mail or send a message to a niner, then I can stop poluting the comments with off-topic ) Unfortunately, intrin.h does not seem to exist for GCC's g++ and MinGW's port of g++; however, emmintrin.h exists for both GCC's and MinGW's g++ ... and for Visual C++ 2010. Here's the simple test that I ran:... and here's the command line: Joshua Burkholder So when is the next video coming, i'm in withdrawal here. Well done STL@MSoft. These videos are fantastic - as you said there are no books! I have a topic suggestion: Do you think it would be possible to cover allocators at some point? SL99 @Burkholder: Good they added the XXXintrin.h, last time I used gcc (mingw) was really long time ago The intrin.h is VS specific, only a huge all-in-one header that make those architecture check for you On Intel have a guy talk about how his program (video processing, under en-us/blogs/2010/12/20/visual-studio-2010-built-in-cpu-acceleration/) get faster by simple using SSE2 arch option on VS2010. And under (en-us/avx/) is the Intel source for articles, the page always point to newest technology, but serve as a hub for the 'old' ones. All x86 instructions can b found here. FYI: I already worked on similar case, activating 720(4:3) composition on H323plus project. @STL: I wasn't sure if you kept an eye on the c9 forum so here you go. I've created a thread for my problem: What am i doing wrong ? Is it me or is it vs2010 ? Is it both ? Thanks for your time. @Burkholder: Regarding SSE intrinsics learning materials -- see Agner's Optimization Manuals: @new2stl: I've tried looking for your comments mentioning that on the "Standard Template Library (STL), 10 of 10" video (that's the one discussing type_traits) but I couldn't see them -- was I looking in the wrong place? @Matt_PD:Thank you so much ... the optimizing_cpp.pdf is OUTSTANDING! Matt_PD> It'd be great if you could cover the STL allocators and the new C++0x alignment specifiers with STL. I might cover our allocator machinery. However, I can't cover C++0x features that aren't implemented in VC10! ilcredo> When you'll finish the STL you could put some BOOST lectures(i'm pretty sure you know that too). I'm familiar with some Boost libraries, and I might cover them in the future. Charles> For SEH in Windows/Win32, specfically, this is the seminal paper on the topic of exceptions. Disclaimer: Structured Exception Handling is totally different from C++ exception handling. (Implementation detail: VC implements C++ exceptions with SEH.) SEH is extremely low level, and in general programs shouldn't mess with it. Burkholder> I have a few questions on SSE: I know very little about SSE, other than the fact that it exists and what it's generally useful for. Burkholder> What is the "secret sauce" in memcpy() and memmove() that makes them so much faster? 
They have dedicated assembly implementations, and I believe they're constantly maintained by some combination of our compiler back-end devs and our Intel/AMD contacts. These assembly implementations know the fastest way to copy bytes from one location to another, which can be very processor-specific. Burkholder> Is the implementation of VS2010's memcpy() or memmove() available? My vague understanding is that there are actually 3 implementations of memcpy/memmove: assembly, "compiler intrinsic", and plain old C. I don't know how the compiler selects between the 3. See "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\crt\src\intel\memcpy.asm" and "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\crt\src\memcpy.c" for the first and third. The compiler intrinsic implementation, which I've been told exists, is a sequence of instructions that the compiler knows how to generate on demand. Gordon> So when is the next video coming, i'm in withdrawal here. Filming tomorrow. I finished setting my laptop up today. theDUFF> Do you think it would be possible to cover allocators at some point? Yes, although I'll have to think of something useful to say about them. Mr Crash> I wasn't sure if you kept an eye on the c9 forum so here you go. I don't monitor the Channel 9 forums, but I occasionally scan the MSDN Visual C++ forums. Mr Crash> What am i doing wrong ? It looks like you're trying to write a scope guard. I've written an implementation powered by std::function: Notes: * I haven't extensively tested this, nor have I used it in production code. * Destructors must not emit exceptions, so if invoking m_f() in ~HyperScopeGuard() throws, we immediately terminate(). * In HyperScopeGuard's constructor, storing f (via perfect forwarding) in m_f might throw, e.g. if the std::function tries to allocate memory and that throws bad_alloc. In that event, we invoke f() before rethrowing, so that we don't leak whatever we're trying to guard. There are a couple of subtleties here. First, f() itself might throw. That's bad (just like in HyperScopeGuard's destructor), so immediate termination is the answer. Second, there are subtleties involving moved-from functors, but I think I'm worrying about nothing there. @Matt_PD: My previous comments are a bit hard to track, only now I'm using an account to post then. Plus in old posts I'm not writing code in comments cause they are limited to 'wall-of-text'. On the video about type_traits you can use: With std::aligned_storage I can make the default allocators grant the alignment of subsequent data, but the start address still need to be aligned too for SSE and SSE2. Inspecting the source code, allocators look for the info in the std::aligned_storage when possible. Now with a more indepth of std::XXX_ptr, I feel more confortable to append a call to _aligned_free for the destructor. If @STL provide some more info about allocator I believe the cycle will be complete making a a wrapper arround _aligned_malloc PS.: all this is only necessary for MMX/SSE/SSE2 or for play with cache lines, SSE3 and beyond include unaligned versions of the LD and MOV. @new2STL: Regarding "SSE3 and beyond include unaligned versions of the LD and MOV" -- would you happen to know what the costs are of using unaligned load/move in SSE3+? According to Intel, for SSE2, the costs are at least 40% slowdown (going up to possible 500%): !" Why does the decltype of a function change from "void()" to "void(*)()" in the following code? 
Code:The output of this code on both VC++2010 and g++ 4.5.2 is the following: Thanks In Advance For Any Clarification, Joshua Burkholder @Matt_PD: SSE3 (and SSSE3) works better with Core Architecture (Core, Core2, Core i#). The Core architecture are different from Prescott (P4). The Core have a different way to handle cache and SIMD. Diary of X264 have some talks about cache and SIMD in Nehalem (Core i#). Here a excerpt of Intel about SSE3 LDDQU: "... is a special 128-bit unaligned load designed to avoid cache-line splits. If the address of the load is aligned on a 16-byte boundary, LDQQU loads the 16 bytes requested. If the address of the load is not aligned on a 16-byte boundary, LDDQU loads a 32-byte block starting at the 16-byte aligned address immediately below the load request. It then extracts the requested 16 bytes. The instruction provides significant performance improvement on 128-bit unaligned memory accesses at the cost of some usage-model restrictions." If I find some numbers i edit this post @Burkholder: I can be wrong but, reading on en.wikipedia.org/wiki/Decltype, semantic rule 3, the function void f() as it is passed to decltype are an lvalue then it is returning the reference to the function type (the "(*)" denote an unnamed function type). You can see a lot of this declaration on OpenGL headers. void f() type is void (*) () I filmed Part 2 today! Burkholder> Why does the decltype of a function change from "void()" to "void(*)()" in the following code? N3225 14.8.2.1 [temp.deduct.call]/2 describes how template argument deduction works by comparing a function parameter type P to a function argument type." The second bullet point applies here. When you pass an array or a function (like f) to a function template (like check_similarity_[12]) with a value parameter (like F f_), an array will decay to a pointer and a function will decay to a function pointer. These template argument deduction rules mirror how C and C++ work - any attempt to write a function that takes an array parameter or a function parameter is immediately and forcibly rewritten to take a pointer parameter or a function pointer parameter. (This rewriting is different from decay.) In C++ this is N3225 8.3.5 [dcl.fct]/5 "After determining the type of each parameter, any parameter of type “array of T” or “function returning T” is adjusted to be “pointer to T” or “pointer to function returning T,” respectively." and C has the same rule (C99 6.7.5.3/7). This is why array parameters are widely regarded to be a bad idea (anyone writing array parameters probably doesn't know what they're doing - fortunately almost nobody tries to write function parameters, which is why that rule is so obscure). The third bullet point, const-dropping, may seem weird but it makes sense. Given template <typename T> void foobar(T t), and const int c = 1729; calling foobar(c) deduces T to be int, not const int. That's because when passed by value, the constness of the source is unrelated to the constness of the destination. foobar()'s author may want to modify its copy of t. (Otherwise, they could write template <typename T> void foobar(const T t).) This stuff is somewhat subtle but fundamentally important, which is why I've explained at length - hopefully I haven't made things more confusing. @new2STL:Since void f() isn't always void(*)(), let me clarify. Why is the first std::is_same<...> in main() returing 1?Similarly, why is the second std::is_same<...> in main() returning 0? It seems like this should be 0 then 1 ... 
like the check_similarity_x() functions, vice 1 then 0. In order to get a 0 then 1 in main(), I have to do the following: Which produces the following output: Obviously, this is some rule that I didn't pay enough attention to when I was learning C++ ... or just didn't learn the right way Joshua Burkholder new2STL: Your explanation of what's happening with decltype is not correct. double (int): This is a function type, "function taking int and returning double". double (*)(int): This is a pointer to function type, "pointer to function taking int and returning double". double (&)(int): This is an lvalue reference to function type, "lvalue reference to function taking int and returning double". The (*) means "pointer". Here's a definition of a function pointer: double (*fp)(int) = &func; N3225 7.1.6.2 [dcl.type.simple]/4 specifies how decltype works: "The type denoted by decltype(e) is defined as follows: - if e is an unparenthesized id-expression or a class member access (5.2.5), decltype(e) is the type of the entity named by e. If there is no such entity, or if e names a set of overloaded functions, the program is ill-formed; - otherwise, if e is a function call (5.2.2). The operand of the decltype specifier is an unevaluated operand (Clause 5)." Both decltype(f) and decltype(f_) activate bullet point #1, "unparenthesized id-expression", and return the type of f/f_ without modification. Bullet point #3 applies to things like decltype(ptr[index]). In this case, it turns out that adding an lvalue reference is desirable. @STL:Looks like we were both typing at the same time. Thank you very much for the explanation! I'm getting closer to understanding. Is there anyway to go from a "void(*)()" back to "void()"? Or avoid this rewriting? Lastly, why is the decltype( f ) in my first couple of std::is_same<...> lines in main() __not__ being rewritten to a pointer to function type once it is inside std::is_same<...>? In other words, why is the following output being produced from main()? Thanks In Advance For Clarification ... And Your Patience With My Dense-ness, Joshua Burkholder @STL:Looks like we were typing at the same time again! I'm still not sure why the rewriting in the std::is_same<>'s in main isn't taking the function type down to a pointer to function type, but I figured out how to go from "void(*)()" back to "void()" ... which was so simple I'm a little embarrased I asked the question ... just a simple typename std::remove_pointer<...>::type. Here's the code:Here's the output: Thanks For Any Clarification ... And Your Continued Patience , Joshua Burkholder @STL:I am a bonehead! ... I finally understand your post ... with a little help from page 168 of "C++ Templates: The Complete Guide". Note To Self: Implicitly deduced template arguments can decay. Explicit template arguments cannot. In my search for clarity, I may have found a **** potential bug **** in VC++ 2010. Here's the code that helped me out ... I'll explain the potential bug at the end:In VC++ 2010, this produces the following output: In g++ 4.5.2, this code produces the following output: Everything agrees between the output from VC++ 2010 and g++ 4.5.2, except the "check_similarity_1< decltype( f ) >( F f_ )" section ... and there is the potential bug. **** potential bug ****: In VC++ 2010 when I make the call to check_similarity_1< decltype( f ) >( f ) inside of main(), is_same< decltype( f_ ), void (*) () >::value is 1 within that check_similarity_1 function ... 
even though I explicitly declared the template parameter F to be "void ()" (i.e. decltype( f ) ). Shouldn't is_same< decltype( f_ ), void () >::value be 1 instead (esp. since is_same< F, void () >::value is 1 within that same function)? In g++ 4.5.2, is_same< decltype( f_ ), void () >::value is 1. It seems like VC++ 2010 is assuming that a function argument will always become a pointer ( i.e. "void ()" f will always go to "void (*) ()" f_ ) whether or not it agrees with its actual type ( F: void () ). Hope This Helps, Joshua Burkholder To make this potential VC++ 2010 bug (explicit template argument type disagreement) a little more obvious, here is some much shorter code that pertains just to the bug:On VC++ 2010, this produces the following output: While on g++ 4.5.2, this produces the following output: Since we have F ff as the parameter of the is_F_same_as_decltype_ff() function, it seems like F should __always__ agree with decltype( ff ). In other words, the bug is that there is type disagreement when explicit template arguments are used (esp. when there is no type disagreement when implicit template arguments are deduced). Hope This Clarifies, Joshua Burkholder Interesting code though a bit heavy since it use exception handling. Please check the thread again, i'd like to hear your opinion on the matter. Great videos, thanks! a(new int);std::unique_ptr b;std::unique_ptr c;b=a; // does not compile c=a; // should not compile too; does not linkI checked the stl source code that the operator = is private member of unique_ptr but the "c=a" still compiles in VC2010. Am I missing something? Just today I saw a bit weird behavior. Consider this code: struct deleter { void operator() (int* p) { delete p; }}; std::unique_ptr Sorry for the poorly formatted previous post. Let me try it one more time. Consider this code: I checked the stl source code that the operator = is private member but the "c=a" still compiles in VC2010. Am I missing something? Thanks for any insight. Petr @Burkholder:I filed this as a bug on Microsoft's Connect website. The title of this bug report is "VC++ 2010: Explicit template arguments cause type disagreement for types that decay to pointers" (bug id: 647035) and is under the "Visual Studio and .NET Framework" section. Note: This bug affects anything that decays ... so both function types and array types. If you start out with say "int arr[3];" and pass arr to those type of functions using implicit and explicit template arguments, then you get the same type of results ... type disagreement ( int[3] versus int * ) when explicit template arguments are used in VC++ 2010, but __no__ type disagreement in g++ 4.5.2. Hope This Helps, Joshua Burkholder Mr Crash> Interesting code though a bit heavy since it use exception handling. Exception handling is part of the language, and is used by the STL. PetrM> I checked the stl source code that the operator = is private member but the "c=a" still compiles in VC2010. We've already changed unique_ptr such that VC11 emits "error C2679: binary '=' : no operator found which takes a right-hand operand of type 'std::unique_ptr<_Ty>' (or there is no acceptable conversion)". However, it appears that you've found a compiler bug. I've filed DevDiv#150368 "Access control mysteriously not applied for VC10 RTM's unique_ptr" with a minimal repro: Burkholder> I believe that VC's behavior is CORRECT. 
Here is a modified repro: N3225 7.1.6.2 [dcl.type.simple]/4 says: "The type denoted by decltype(e) is defined as follows: — if e is an unparenthesized id-expression or a class member access (5.2.5), decltype(e) is the type of the entity named by e." 8.3.5 [dcl.fct]/5 says: "After determining the type of each parameter, any parameter of type “array of T” or “function returning T” is adjusted to be “pointer to T” or “pointer to function returning T,” respectively." This "adjustment" happens before sizeof (which can easily be verified with arrays adjusted to pointers), so it should happen before decltype too. In fact, in the absence of templates, GCC believes that the adjustment happens before decltype: @STL: If decltype(ff) has to be adjusted to a pointer type, then why doesn't F also have to be adjusted to a pointer type (regardless of the explicit template arguments) ... like it's adjusted in the implicit template arguments case? If (as you are suggesting) this type-disagreement behavior is intended by the standard, then why? What does this behavior enable? Or prevent? I ask because conflicting types ( i.e. F not being the same type as decltype( ff ) even though we declared F ff ) doesn't make much sense to me (esp. if ... say ... F is char[8] and decltype( ff ) adjusts down to char * where sizeof( F ) would be 8 and sizeof( decltype( ff ) ) would be 4 ). Joshua Burkholder There are several things going on here, so it's helpful to go step by step. First, template argument deduction. This happens when you call a function template without providing explicit template arguments (or providing some-but-not-all). Being called in this manner is how most function templates are intended to be used, so template argument deduction is a very important process. (Indeed, providing explicit template arguments when you shouldn't is a subtle way to misuse C++.) I quoted the relevant rules for this above, N3225 14.8.2.1 [temp.deduct.call]/2. (There are other rules not relevant here.) This says that given "template <typename T> whatever foobar(T t)" and "double func(int)" and "foobar(func)", T is deduced to be double (*)(int), i.e. a function pointer type. That's just how C++ works. Now, there's a reason for these rules - you can't pass around functions, but you can pass around function pointers, so when func is passed around by value, it makes sense for T to be a function pointer. (Hypothetically, the language could be restrictive and simply ban foobar(func), requiring you to say foobar(&func) in which there should be absolutely no mystery whatsoever - but recall that C++ is extremely permissive and allows programmers to say lots of things, then tries to figure out what they meant.) After template argument deduction runs, what happens is the same as if explicit template arguments were used. So "foobar(func)" is exactly equivalent to "foobar<double (*)(int)>(func)". The second thing is function parameter adjustment. This rule is ANCIENT, literally, because it comes from C. This says that functions declared as taking arrays or functions are immediately rewritten, or adjusted, to take pointers or function pointers instead. That's just how C works (and how C++ works). Now, there's a reason for THESE rules - in C you can't pass around arrays or functions, but you can pass around pointers or function pointers. So when the language sees a function declared as taking something "impossible", it just says, "okay, you can think it works like that, but I need to compile this into something possible". 
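Not code from the thread, but a compilable sketch of the two rules just described, reusing the names foobar, meow and func from the surrounding examples (C++11 is assumed for static_assert):

#include <type_traits>

double func(int x) { return x * 2.0; }

template <typename T>
void foobar(T t) {
    // foobar(func) deduces T as a pointer to function, exactly as if
    // foobar<double (*)(int)>(func) had been written.
    static_assert(std::is_same<T, double (*)(int)>::value, "function decays to pointer");
}

template <typename T>
void meow(T t) {
    // meow(arr) deduces T as int *, not int[3].
    static_assert(std::is_same<T, int *>::value, "array decays to pointer");
}

int main() {
    foobar(func);
    int arr[3] = {};
    meow(arr);
}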
Personally, I take an extremely harsh view of this syntax, as I mentioned earlier - but the rules are what they are. The rules for template argument deduction and function parameter adjustment are totally different - they occur in different clauses of the Standard - and yet similar, because they are both dealing with the same thing - you can't pass arrays and functions by value. Now, you've taken it the next level by mixing templates and function parameters of function types. > why doesn't F also have to be adjusted to a pointer type (regardless of the explicit template arguments) Explicit template arguments don't get messed with. (I believe I'm glossing over a couple of subtleties here, mostly with template non-type arguments - please don't ask about those - but for the most part this is true.) If you tell the compiler that F needs to be a function type instead of a function pointer type, then a function type it shall be. That still doesn't stop the compiler from performing function parameter adjustment, though - that process is unstoppable. As you can see, this is subtle enough that two compilers written by experts disagree, but I believe that the Standard speaks with a clear voice here. (Sometimes it's worse and the Standard itself is ambiguous, requiring a Core Language Issue to be filed. Even Standards have bugs.) I believe that function parameter adjustment should happen before decltype inspects the type. I could be wrong - I have been known to be wrong about the Core Language in the past. > I ask because conflicting types ( i.e. F not being the same type as decltype( ff ) even though we declared F ff ) Take a look at my "absence of templates" example above where VC and GCC are in agreement. There, ff is declared to have function type, but it actually has function pointer type. Perhaps another example will illustrate why I believe that GCC is incorrect and inconsistent. Given meow<int[3]>(arr), GCC believes that t is 4 bytes (incontrovertibly correct, it is a pointer), but that t's declared type is 12 bytes. Yet given purr(arr), where x is declared in the source code to be int[3], GCC believes that both x and x's declared type are 4 bytes. I cannot imagine any possible interpretation of the Standard that permits GCC's behavior here. > VC++ seems to be just as inconsistent as g++ ... only in a slightly different way. VC's working correctly there. In the case you're looking at (where VC prints 20 for T and 4 for t and decltype(t)), you've explicitly specified T to be char[20]. Function parameter adjustment makes t a char *, which is why it's 4 bytes. Function parameter adjustment does not affect template parameters. In particular, this behavior (sizeof(T) is 20, sizeof(t) is 4) is shared by VC and GCC, and mandated by C++98/03. It's not new. > Using implicit or explicit template arguments should produce exactly the same results It mostly does, when you're careful to specify exactly the same template argument that automatic deduction would have chosen for you - but then, why bother? And if you specify something different, now you're forcing the template into an unusual mode of operation. (I'm aware of one case where you can specify explicit template arguments identical to what template argument deduction would have chosen, and yet the compilation explodes. This happens when people use explicit template arguments with swap(), which is WRONG and BAD and WRONG. 
The problem is that there are many swap() overloads, and while the provided explicit template arguments will work for the overload desired by the programmer, the compiler has to look at the *other* overloads too, and plugging those explicit template arguments in can cause a hard error. In contrast, when you rely on template argument deduction like you're supposed to, the undesired overloads fail out of deduction and are silently removed from the overload set. Again, this is subtle - if you don't understand it, simply remember that you shouldn't use explicit template arguments unless the function is documented as being called like that, as with make_shared<T>() for the first template argument.) Joshua Burkholder > Where in the proposed C++0x standard does it state that function parameter adjustment does not affect explicit template parameters? Function parameters and template parameters are totally different, so function parameter adjustment can't affect template parameters. The Standard/Working Paper doesn't need to explicitly say this, even in a footnote. > My interpretation of 14.8.2.3 (PDF page 383) Here's a tip to avoid confusion: when citing the Standard/Working Paper, always mention what you're citing (e.g. C++03 or N3225) and both the numeric and alphabetic section IDs (e.g. 14.8.2.3 [temp.deduct.conv]). Knowing what document is being cited avoids the pitfall of looking at different Working Papers and being confused by wording changes between them. As for the section IDs, numeric IDs are easy to find through the bookmark tree, but are occasionally renumbered as sections are added, removed, or moved. The alphabetic IDs are provided because they're more stable (although very rarely they are modified, as happened to the Standard Library after C++03). Most importantly, don't mix section numbers and paragraph numbers! You appear to be referring to N3225 14.8.2 [temp.deduct]/3 (that is, section 14.8.2, paragraph 3), not 14.8.2.3 [temp.deduct.conv]. > and the explicit template arguments in it's example seems to suggest that it does Those examples are depicting what happens to the function parameter types (which affect the overall function type). Perhaps there's terminology confusion here. In "template <class T> void f(T * p);" the "T" is a template parameter. It'll be given a template argument, either explicitly or through template argument deduction. The "p" is a function parameter, and its type is "T *". The same applies to "f(T t)". The template type parameter (on the left) and the function parameter type (on the right) are still distinct things, although both appear as "T" in the source code. Function parameter adjustment affects the latter, but not the former. > (esp. #2, where f<const int> implies that T and decltype( t ) are both const int, but the signature of the explicit f<const int> is adjusted to void(*)(int) ). That one's special - I've tried to avoid mentioning every possible scenario in the interests of reducing complexity. The thing about const value parameters is that they don't affect the callers of a function, but they do affect the function itself (where the const value parameter cannot be modified). Therefore, const value parameters are stripped out of function types, but still affect function definitions. Note the differences between this and what happens to array/function parameters. For THOSE, they get adjusted to pointers/function pointers in function types, AND this affects function definitions. 
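Both behaviors can be checked directly with a couple of static_asserts. This is standard C++11 behavior, not code from the thread:

#include <type_traits>

// Top-level const on a value parameter is stripped from the function type,
// so these two declarations declare the same function:
void g(const int x);
void g(int x);
static_assert(std::is_same<void (int), void (const int)>::value,
              "const value parameters do not affect the function type");

// Array (and function) parameters are adjusted in the function type itself:
void h(char s[20]); // really void h(char * s)
static_assert(std::is_same<decltype(h), void (char *)>::value,
              "array parameter adjusted to pointer");

int main() {}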
> In other words, T agrees with decltype( t ) in spite of adjustment In that case, yes, because the adjustment has deliberately not been performed on the function definition (where const value parameters still matter). > I don't have a copy of the C++98/03 standard handy. Is this still in the proposed C++0x standard? Yes, same behavior. There are breaking changes between C++03 and C++0x, but not many (especially in the Core Language). > If so, where can I look this up? N3225 5.3.3 [expr.sizeof]/1: "The sizeof operator yields the number of bytes in the object representation of its operand. The operand is either an expression, which is an unevaluated operand (Clause 5), or a parenthesized type-id." When 8.3.5 [dcl.fct]/5 says "After determining the type of each parameter, any parameter of type “array of T” or “function returning T” is adjusted to be “pointer to T” or “pointer to function returning T,” respectively." it's talking about function parameters. Because the function parameter has been adjusted to be a pointer, sizeof(t) is 4. t behaves as a pointer in every other respect (e.g. when being passed to other templates). The template parameter is unaffected - it is still an array type. sizeof(T) therefore returns how many bytes would be in such an array, which is 20. > If you have a templated function, then we have to explicitly instaniate the function template in order to pass it around. Ah, but there's a better way to do that (one that avoids the pitfall I mentioned earlier, where explicit template arguments make the compilation explode). Instead of "f<decltype(s)>" you can pass "static_cast<void (*)(decltype(s))>(f)". (Yes, it's more typing, but it doesn't explode. I'll construct an example if you really want one.) Thanks to N3225 13.4 [over.over], when faced with overloaded and/or templated functions, you can use static_cast to disambiguate exactly which one you want. (This is one of the few good uses of casts). Note that this will change the output, because as soon as you say static_cast<void (*)(decltype(s))> which is static_cast<void (*)(char[])>, the compiler adjusts that function pointer type to static_cast<void (*)(char *)>, so T is deduced to be char *. _8<<. About all this interesting debate about template and functions I see The Visual C++ Weekly Vol. 1 Issue 9 (Feb 26, 2011) come with an interesting link titled Expressive C++: Fun With Function Composition (cpp-next.com/archive/2010/11/expressive-c-fun-with-function-composition/), they talk about function composition in C++ like the compositor operation . ("dot") in Haskell. The examples ilustrate the use of template metaprogramaing and recursion, result_of protocol, boost equivalent (for pre C++0x compilers) and touch the type decay aborded by @Burkholder and @STL. Fun reading 2 things: Quick question What is the function type (i think it's called that) for this functor For some reason, vs2010 doesn't think it is "void (*)(int)" so if it's not that then what is it ? Burkholder> Well, I have to say that I'm really learning a lot about C++0x through this discussion. Cool! Burkholder> Hopefully, you're getting something out of this as well ... so that I'm not just irritating you. Very few things irritate me - chief among them is when people are wasting my time. But when I'm explaining something and people are listening, I'm never wasting my time. Burkholder> I should have written "explicit template __arguments__", vice "parameters". Precise terminology is indeed important. 
It appears that this doesn't affect my response, though. Basically, you've got a function template, like "template <typename T> void f(T& r, T v)", with template parameters (like "T") and function parameters (like "T& r" and "T v"). First, this needs to be fed template arguments. It can get them implicitly (through template argument deduction) or explicitly (through explicit template arguments). Template argument deduction follows certain rules, while explicit template arguments are used as-is. After template arguments have been determined, they're plugged ("substituted") into the function template, in order to instantiate a real function. Suppose that f<int[3]>(blah, blah) has been called. In this case, T is int[3], end of line. When substituted into the signature, we get (int (&r)[3], int v[3]). The former is cool, but the latter is not, so it gets adjusted, and we end up with (int (&r)[3], int * v). Those are the function parameters that the function will use. This is what sizeof(r) and sizeof(v) see, and I claim (and VC agrees) that the same should be true for decltype. Burkholder> I cannot get VC++ 2010 to compile the same code That's clearly a bug. I've filed this as DevDiv#151929 "decltype(f<int>) emits bogus error C3556: 'f': incorrect argument to 'decltype'". Burkholder> Is decltype fully baked in VC++ 2010? Or are there known limitations? There are bugs, but there are always bugs. So far they seem to be relatively rare and relatively minor (for example, decltype(expr1, expr2, etc, exprN) wasn't working properly, and Dinkumware wanted that badly, so we got the compiler fixed). Burkholder> I would love more info and examples of that static_cast< function pointer >( function template name ) thing that you were writing about. This is an exercise left to the reader: Deraynger> Where's the next video? In the encoding pipeline. Renumbered> What is the function type (i think it's called that) for this functor I don't know what "guard" is - I'm not supposed to use my psychic powers in public. If it's like std::function, you want "void (int)". That's a function type taking int and returning void. @STL:sorry, i thought my telepathic powers worked at that distance Here's guard: It would be appreciated if you could convert the string pool allocator implemented here: to C++/STL code (guiding us through the process in a video). It would be interesting and useful if you could develop a generic PoolAllocator<T>, generalizing the above string pool allocator. Thanks much. It seems STL on Windows CE is lagging behind the desktop version. Any idea why? Part 2: C Renumbered: You want guard<S_FUNC>. Coconut: That's maintained by another team, you'll have to ask them. Dinkumware and I maintain the One True STL in Visual C++, which other Microsoft toolsets (e.g. the Xbox Development Kit) are derived from. Please note that I'm now monitoring Advanced Part 2 for comments. 哈哈,I am the first chinese to catch the sofa!!! beg for your direction!! Just finished watching this video on shared_ptr. instances. The most basic base class has an empty, but virtual destructor to ensure that the right destructor is always used when instances are destroyed. In a complete, but simple, model codebase, there could well be hundreds of derived classes (which will inevitably grow as more life forms get modelled), and a model that is running would have thousands of instances of these. When a new object is created, the value returned by operator new is cast to the base class. 
I rely on this, and the fact the destructor is virtual, to ensure that each is cleaned up properly. (I am not eccentric enough to even consider using malloc/free in my C++ code, so your example left me wanting more). As an experienced educator myself, I'd say well done. As an old programmer, though, I'd have wanted something a bit more challenging. ;-) Anyway, I wonder if you would comment, in a bit more detail, about how this 'type forgetting' works, particularly with regard to inheritance trees. I have often had to deal with complex inheritance trees for modelling ecosystems (and often would have base classes representing families or genera of related organisms and derived classes representing species: so a genus class might represent canids, and from it would be derived classes representing wolves, dogs, foxes, coyotes, &c.). All of these classes would be ultimately derived from an abstract base class (with only a small number of data members and perhaps have a dozen pure vitual functions). The simulation engine would have an std::vector containing boost::shared_ptr If you ever saw my production code, you'd find that all pointers are immediately handed over to the most appropriate smart pointer the instant operator new returns it. Except way back when I first started using C++, you would never find a naked pointer in my code. This is the context, and why I try to encourage my junior colleagues to develop a habit of making a virtual destructor whenever they find themselves adding a virtual member function to whatever class they've been assigned to write. My first question is how would your method of 'type forgetting' fit into the context I often face. And my second is like the first: Why? Or what does this type forgetting provide that I don't already have with virtual destructors and a vector of boost::shared_ptr My last question relates to habits to be encouraged among junior programmers being mentored by old fossils like me. As I said, I encourgage kids to add public virtual destructors whever they add virtual member functions to a class. But I have been told, recently, by equally old programmers that it is better to encourage a habit of making destructors protected and non-virtual. What I have not been able to get from these guys is an explanation of what significant downside there may be from the habit I encourage or what the upside is for the practice they recommend. I am not omniscient, so I will acknowledge there's plenty I don't know, but at the same time, I don't do things just because I can, but rather because there is a demonstrable benefit for doing it. My objective is always stable, fast and correct production code. Can you contribute to my education on this matter? Thank Ted @Ted: Hey Ted, you should either comment in Part 2 (or soon Part 3) or e-mail him. I am not sure, but he mentioned his e-mail address in a previous post (on another video). I'll try to answer the question about the protected/public virtual dtors. Am quite novice, so please correct me (anyone) if I say something wrong. If you have a (obviously public) virtual dtor you can destroy the derived class thru the base class pointer: Calls ctor/dtors like this: CBase() CDerived() ~CBase() ~CDerived() With a protected dtor the delete part won't be possible thru the base class pointer. Assignment will work: I don't know if the reason that someone suggested this was, because you're maybe managing the deletion from elsewhere. 
Another reason (not sure about this, though) could be that the vtable will be bigger if you have a virtual dtor? Creating and destroying the derived class will work in both cases (public virtual dtor or protected dtor) as usual: Calls ctor/dtors like this: CBase() CDerived() ~CDerived() ~CBase() Hope that was of some help, and hope even more that it's all correct as I explained it. Thanks Deraynger, > instance that holds instances of boost::shared_ptr , so all objects held by the vector are properly deleted. is a different type from boost::shared_ptr , in turn in an std::vector, and still have things properly deleted. But what I don't see is what benefit this extra magic provides. You have it right, as far as you go. It is true that having a virtual destructor increases the memory consumed by the vtable by the size of one pointer, but that is hardly significant on a machine with 8 GB RAM, and pointers to objects that can consume several kilobytes. The problem with the recommendation of making destructors protected is that it becomes impossible to make a std::vector of boost::shared_ptr If you have only two derived types, this isn't much of a problem as you can have two vectors, one for each derived type. But if you have thousands of derived types, it becomes a nightmare. Having a public virtual destructor guarantees that one vector containing instances of all derived types through pointers to the base class is sufficient. But in the context of an event-driven application where the user can set up a simulation by adding, modifying or removing instances of the derived classes, there are numerous places where these instances can be either created or deleted. THAT is a second reason why it is so useful to have all instances of the derived classes managed in instances of shared_ptr containing pointers to the base class. If I understood Stephan correctly, the latest incarnation(s) of shared_ptr provides a way to make the base class destructor protected and still store pointers to the derived class in shared_ptr. Thanks Ted @Ted: Ok, I get it now, and great that I got my part right (though you seem to know it all already, and also better than me). Regarding the shared_ptr of a base class, with protected dtors, having instances of derived classes, I have no idea, you'll have to ask STL (e-mail or comment in the newest video thread). All I can think of is maybe it doesn't use a vtable (not sure of the implementation of shared_ptr), and maybe the only benefit is that it won't leave a memory leak, as opposed to not being able to delete the derived class. Ray
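None of the following appears in the thread; it is a small C++11 sketch of the mechanism Ted attributes to Stephan: shared_ptr captures a deleter for the actual derived type at construction, so cleanup through a shared_ptr<Base> is correct even when ~Base is protected and non-virtual (std::shared_ptr shown; boost::shared_ptr behaves the same way):

#include <cstdio>
#include <memory>

class Base {
protected:
    ~Base() { std::puts("~Base"); } // protected, non-virtual
};

class Derived : public Base {
public:
    ~Derived() { std::puts("~Derived"); }
};

int main() {
    // The constructor sees a Derived *, so the stored deleter deletes a
    // Derived *; no virtual destructor is needed for correct cleanup.
    std::shared_ptr<Base> p(new Derived);
    p.reset(); // prints ~Derived, then ~Base
    // delete static_cast<Base *>(new Derived); // would not compile: ~Base is protected
}

The trade-off the commenters are circling is exactly this: a public virtual destructor makes a raw "delete basePtr" safe, while a protected non-virtual destructor plus shared_ptr forbids raw deletes entirely and still cleans up correctly.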
http://channel9.msdn.com/Series/C9-Lectures-Stephan-T-Lavavej-Advanced-STL/C9-Lectures-Stephan-T-Lavavej-Advanced-STL-1-of-n?format=html5
CC-MAIN-2014-52
en
refinedweb
17 October 2012 17:54 [Source: ICIS news] LONDON (ICIS)--BASF plans to cut 90 jobs at its France-based beauty care solutions business, French media reported on Wednesday. The reports, which cited a BASF spokesperson, said the job cuts at Media officials at BASF's headquarters in According to information on its website, BASF's French beauty care solutions business had a staff of 242 at the end of 2011. The business's main markets
http://www.icis.com/Articles/2012/10/17/9604874/basf-to-cut-90-jobs-at-french-beauty-care-business-reports.html
CC-MAIN-2014-52
en
refinedweb
jGuru Forums Posted By: Neha_Behl Posted On: Thursday, April 10, 2008 06:13 PM
Hi, I have an assignment and have been trying hard to solve it... maybe easy for some of you. I have a Person.java class and have to create equivalent classes for the inputs of the methods, and also write a test class named JUnit4PersonTester. Please help; here is the code:

package set5;

/**
 * A class for storing a person's last name
 * and age. Meant for storing information
 * on active workers.
 *
 * @author Raine Kauppinen
 */
public class Person {

    // attributes
    private int age;
    private String lastName;

    // set and get methods for the attributes

    // method setAge is specified as follows:
    // - an age must be between 16 and 68
    // - if the age is ok, the method sets the age and returns true
    // - if the age is not ok, the method does not set the age and returns false
    public boolean setAge(int age) {
        boolean ageOk = false;
        if (age >= 16 || age <= 68) {
            this.age = age;
            ageOk = true;
        }
        return ageOk;
    }

    // method setLastName is specified as follows:
    // - a last name must have at least one character and all the characters
    //   in the name have to be letters (for example, there may not be any
    //   numbers in the last name)
    // - if the last name is ok, the method sets the last name and returns true
    // - if the last name is not ok, the method does not set the last name
    //   and returns false
    public boolean setLastName(String lastName) {
        boolean lastNameOk = true;
        if (lastName.length() < 1) {
            lastNameOk = false;
        } else {
            for (int i = 0; i < lastName.length(); i++) {
                if (!(Character.isLetter(lastName.charAt(i)) == true)) {
                    lastNameOk = false;
                }
            }
            if (lastNameOk == true) {
                this.lastName = lastName;
            }
        }
        return lastNameOk;
    }

    public int getAge() {
        return age;
    }

    public String getLastName() {
        return lastName;
    }
}
http://www.jguru.com/forums/view.jsp?EID=1360202
CC-MAIN-2014-52
en
refinedweb
; } class Vec2 { constructor(readonly x: number = 0, readonly y: number = 0) { } plus(other: Vec2): Vec2 { return new Vec2(this.x + other.x, this.y + other.y); } times(scalar: number): Vec2 { return new Vec2(this.x * scalar, this.y * scalar); } minus(other: Vec2): Vec2 { return new Vec2(this.x - other.x, this.y - other.y); } length2(): number { return (this.x * this.x + this.y * this.y); } length(): number { return Math.sqrt(this.length2()) } } interface Numeric<T> { plus(other: T): T times(scalar: number): T } function runSimulation<T extends Numeric<T>>( y0: T, f: (t: number, y: T) => T, render: (y: T) => void ) { const h = (24 * 60 * 60) * 1 / 60.0 function simulationStep(yi: T, ti: number) {) ) } } const canvas = document.createElement("canvas") canvas.width = 400; canvas.height = 400; const ctx = canvas.getContext("2d")!; document.body.appendChild(canvas); ctx.fillStyle = "rgba(0, 0, 0, 0, 1)" ctx.fillRect(0, 0, 400, 400); function render(y: TwoParticles) { const { x1, x2 } = y; ctx.fillStyle = "rgba(0, 0, 0, 0.05)"; ctx.fillRect(0, 0, 400, 400); const rEarth = 6.371e6/1e9; const rMoon = 1.73e6/1e9; ctx.fillStyle = "rgba(45, 66, 143, 1)"; ctx.beginPath(); ctx.ellipse((x1.x/1e9)*400 + 200, (x1.y/1e9)*400 + 200, rEarth*400*10, rEarth*400*10, 0, 0, 2 * Math.PI); ctx.fill(); ctx.fillStyle = "rgba(189, 189, 189, 1)"; ctx.beginPath(); ctx.ellipse((x2.x/1e9)*400 + 200, (x2.y/1e9)*400 + 200, rMoon*400*10, rMoon*400*10, 0, 0, 2 * Math.PI); ctx.fill(); } const G = 6.67e-11; const m1 = 5.972e24; const m2 = 7.34e22;)) ) } const y0 = new TwoParticles( /* x1 */ new Vec2(0, 0), /* v1 */ new Vec2(0, -13.22), /* x2 */ new Vec2(3.6e8, 0), /* v2 */ new Vec2(0, 1.076e3) ) runSimulation(y0, f, render) Also see: Tab Triggers
https://codepen.io/jlfwong/pen/mmmXVK
CC-MAIN-2021-21
en
refinedweb
An E-Stream implementation in Python Project description An E-Stream implementation in Python E-Stream is an evolution-based technique for stream clustering which supports five behaviors: - Appearance - Disappearance - Self-evolution - Merge - Split These behaviors are achieved by representing each cluster as a Fading Cluster Structure with Histogram (FCH), utilizing a histogram for each feature of the data. The details for the underlying concepts can be found here: Udommanetanakit, K, Rakthanmanon, T, Waiyamai, K, E-Stream: Evolution-Based Technique for Stream Clustering, Advanced Data Mining and Applications: Third International Conference, 2007 How to use E-Stream The estream package aims to be substibutable with sklearn classes so it can be used interchangably with other transformers with similar API. from estream import EStream from sklearn.datasets.samples_generator import make_blobs estream = EStream() data, _ = make_blobs() estream.fit(data) E-Stream contains a number of parameters that can be set; the major ones are as follows: - max_clusters: This limits the number of clusters the clustering can have before the existing clusters have to be merged. The default is set to 10. - stream_speed/decay_rate: These determine the fading factor of the clusters. In this implementation, the fading function is constant derived from the default values of 10 and 0.1, respectively. - remove_threshold: This sets the lower bound for each cluster’s weight before they are considered to be removed. The default is set to 0.1. - merge_threshold: This determines whether two close clusters can be merged togther. The default is set to 1.25. - radius_threshold: This determines the minimum range from an existing cluster that a new data must be in order to be merged into one. The default is set to 3.0. - active_threshold: This sets the minimum weight of each cluster before they are considered active. The default is set to 5.0. An example of setting these parameters: from estream import EStream from sklearn.datasets.samples_generator import make_blobs estream = EStream(max_clusters=5, merge_threshold=0.5, radius_threshold=1.5, active_threshold=3.0) data, _ = make_blobs() estream.fit(data) Installation Currently, the package is only available through either PyPI: pip install estream or a manual install: wget unzip master.zip rm master.zip cd estream-master python setup.py install Help & Support Currently, there is no dedicated documentation available, so any questions or issues can be asked via my email. Citation If you make use of this software for your work, please cite the paper from the Advanced Data Mining and Applications: Third International Conference: @inproceedings{inproceedings, author = {Udommanetanakit, Komkrit, and Rakthanmanon, Thanawin and Waiyamai, Kitsana}, year = {2007}, month = {08}, pages = {605-615}, title = {E-Stream: Evolution-Based Technique for Stream Clustering}, volume = {4632}, doi = {10.1007/978-3-540-73871} } Moreover, this implementation is based on a MOA implementaion of E-Stream (and other related algorithms) by David Ratier. The original source code can be found in this repository. License The estream package is under the GNU General Public License. Contributing Contributions are always welcome! Everything ranging from code to notebooks and examples/documentation will be very valuable to the growth of this project. To contribute, please fork this project , make your changes and submit a pull request. I will do my best to fix any issues and merge your code into the main branch. 
https://pypi.org/project/estream/
CC-MAIN-2021-21
en
refinedweb
implementation 'org.junit.jupiter:junit-jupiter-api' implementation 'org.junit.jupiter:junit-jupiter-engine' implementation 'org.junit.platform:junit-platform-launcher' Junit 5 Platform Launcher API Carvia Tech | September 08, 2019 | 2 min read | 516 views Running Junit5 based tests programmatically from java/kotlin code based on selected package/class and tag filters using Junit Platform Launcher API. Setting Gradle dependencies We need to add the below dependencies to gradle based Spring Boot project for enabling Junit 5 support. Also, we need to exclude the junit 4 dependencies which are shipped by default for Spring Boot 2.1 based projects. implementation("org.springframework.boot:spring-boot-starter-test") { exclude module: "junit" } Executing tests programmatically We will use LauncherDiscoveryRequest to build a request that will be submitted to Junit TestEngine for test case discovery. public class Junit5Runner { SummaryGeneratingListener listener = new SummaryGeneratingListener(); public void runOne() { LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request() .selectors(selectClass(SampleTest.class)) (1) .selectors(selectPackage("com.shunya")) (2) .filters(TagFilter.includeTags("security")) (3) .build(); Launcher launcher = LauncherFactory.create(); TestPlan testPlan = launcher.discover(request); launcher.registerTestExecutionListeners(listener); launcher.execute(request); } public static void main(String[] args) { final Junit5Runner runner = new Junit5Runner(); runner.runOne(); TestExecutionSummary summary = runner.listener.getSummary(); summary.printTo(new PrintWriter(System.out)); } } Output When we run the above program, following output will be printed to the console. This output is produced by SummaryGeneratingListener Test run finished after 2410 ms [ 2 containers found ] [ 0 containers skipped ] [ 2 containers started ] [ 0 containers aborted ] [ 2 containers successful ] [ 0 containers failed ] [ 6 tests found ] [ 0 tests skipped ] [ 6 tests started ] [ 0 tests aborted ] [ 0 tests successful ] [ 6 tests failed ] Important points Junit jupyter TestEngine will create a new instance of test class before invoking each test method. This helps making tests independent, as you can’t rely on execution order. Kotlin might be a better choice for writing testcases as tests written in Kotlin are more concise, readable and uses modern language features. So you might think of migrating to Kotlin. Reference Top articles in this category: - SDET: JUnit interview questions for automation engineer - Migrating Spring Boot tests from Junit 4 to Junit 5 - JUnit 5 Parameterized Tests - Writing a simple Junit 5 test - Rest Assured API Testing Interview Questions - 50 SDET Java Interview Questions & Answers - h2load for REST API benchmarking
https://www.javacodemonk.com/junit-5-platform-launcher-api-7dddb7ab
CC-MAIN-2021-21
en
refinedweb
One of the fundamental strength of Java is its serialization mechanism. This is basically serialization of Java objects, where the object is persisted as a sequence of bytes. The persistent storage can be file system, database or streams. And, deserialization is just the reverse process, where the sequences of bytes are again converted back into objects. The important point to note is that the object is stored in its current state and reversed back to that state only. In this article, we will try to explore the core concepts of Java object serialization and also work on some coding examples. Why we need serialization? Everything in Java is represented as objects. So, in a Java application, be it stand-alone, enterprise or in some other form, you need to deal with objects. These objects are having their own states (states are nothing but the value or data it contains at any point of time) and it varies from time to time. In an application, if we need to store data, we can store it in a database or file system (in the form of files). And, then retrieve it whenever required. But, this is typically handling and storing the raw data. Now, if we need to store an object (with its current state and value) we cannot use database or file system directly. Because they do not understand object, so, we need to store it in the form of bytes. This mechanism is also applicable when we need to transfer an object over network. But, the question is – ‘How do we perform this task’? Serialization is the solution to this problem. It can also be defined as a protocol, which can be used by any party to serialize or de-serialize an object. Following are the two most important purpose for which serialization is widely used. - Persists objects in storage (Database, file system, stream) - Transfer Objects over network Some related concepts Before moving into the next sections on serialization mechanisms and code samples, we must understand some basic technical concepts used in the serialization process. serialVersionUID: This is basically the identification of a serialized object. It is used to ensure that the serialized and de-serialized objects are same. Sometime this UID is also used for refactoring purpose. More details can be found here. Marker Interface: To implement serialization in Java or making an object serializable, you need to implement Serializable interface. Serializable is a marker interface, which means it is an interface without any fields and methods, for implementing some special behaviour. There are also other marker interfaces available in Java. Transient Keyword: This is a very important keyword in Java. There may be a need to store a part of an object and avoid some fields which may contain sensitive information like credit card number, password etc. Here, we just need to define those fields as ‘transient’, and it will not allow those fields to be saved during the serialization process. Object Stream classes: Two object stream classes are very important for serialization and de-serialization process. Those are ObjectOutputStream and ObjectInputStream. We will check the implementation in the following code sample section. How serialization works – Some code Examples In this coding example we will have three Java classes as mentioned below. - java class representing the object to be serialized - java class for serializing Student object - java class to extract the values from the saved Student object Following is the Student class with some relevant fields. 
Please note that the ‘pwd’ field is marked as ‘transient’ to avoid saving it as a part of the object. The other fields will be saved as part of the Student object. Listing1: Student class sample code public class Student implements java.io.Serializable { public String name; public String address; public String userId; public transient String pwd; public void objectCheck() { System.out.println("Student details " + name + " " + address +" "+ userId); } } Now, the 2nd class is designed to serialize Student object as shown below. It creates a Student object and save it in a file named ‘student.ser’ in the local files system. Listing2: Serializing Student class object import java.io.*; public class SerializeExample { public static void main(String [] args) { Student st = new Student(); st.name = "Allen"; st.address = "TX, USA"; st.userId = "Aln"; st.pwd = "Aln123$"; try { //Create file output stream FileOutputStream fileOutStr = new FileOutputStream("student.ser"); //Create object output stream and write object ObjectOutputStream objOutStr = new ObjectOutputStream(fileOutStr); objOutStr.writeObject(st); //Close all streams objOutStr.close(); fileOutStr.close(); System.out.printf("Serialized data is saved in a file - student.ser"); }catch(IOException exp) { exp.printStackTrace(); } } } Output from this class is shown below. Image1: Showing serialization output The 3rd class is designed to de-serialize the saved Student object and extract the values from it. The extracted values will be shown on the Java console. Listing3: De-serializing Student object import java.io.*; public class DeserializeExample { public static void main(String [] args) { //Create student object Student st = null; try { FileInputStream fileInStr = new FileInputStream("student.ser"); ObjectInputStream objInStr = new ObjectInputStream(fileInStr); st = (Student) objInStr.readObject(); objInStr.close(); fileInStr.close(); }catch(IOException exp) { exp.printStackTrace(); return; }catch(ClassNotFoundException cexp) { System.out.println("Student class not found"); cexp.printStackTrace(); return; } System.out.println("Deserialized Student..."); System.out.println("Name: " + st.name); System.out.println("Address: " + st.address); System.out.println("User Id: " + st.userId); System.out.println("Password: " + st.pwd); } } Output from this class is shown below. Please note that the output does not print the value of the password, as it was declared as transient. Image2: Showing de-serialization output Some real life implementations In this section, let us have a look at some of the real life implementations of serialization. It will help you understand the importance and the usage of object persistence. - Think of a game application where the state is very important. Now, when a user left the game at any point of time, the state is serialized and stored in some type of storage. While the user wants to re-start the game again, same state of the object is recreated by the process of de-serialization. So, nothing is lost in the whole process. - The other important example is ATM application. When a user request some withdrawal from an ATM machine (which is the client), the request is sent to the server as a serialized object. On the server end, the reverse process (de-serialization) is executed and the action is performed. This is an example of how serialization works over network communication. - Stock market update is another example where the update is stored as a serialized object and served to the client whenever required. 
- In any web application, the user session information is very important to maintain. If at any point the application fails or the internet connection drops, the user is disconnected from the application in the middle of some activity. This half-done activity is stored as a serialized object and restored when the connection is established again. As a result, the user can continue from the same point where he left off. Conclusion: Java serialization is a very important feature to learn. In this article, we have discussed serialization in detail along with its relevant concepts. We have also walked through a coding example to show how serialization works. The example can be enhanced or modified to perform additional tasks. Overall, serialization is very flexible in nature, but developers need to know the tricks and tips to implement it properly. Hope this article provides you guidance to move forward. If we serialize any object, it can later be read back and deserialized using the object's type and other information, so we can retrieve the original object. Many thanks for sharing this article.
https://blog.eduonix.com/java-programming-2/learn-serialize-deserialize-objects-java/
CC-MAIN-2021-21
en
refinedweb
Java Persistence/OneToOne

OneToOne
is used.

Example of a OneToOne relationship database
EMPLOYEE (table)
ADDRESS (table)

Example of a OneToOne relationship annotations
@Entity
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long id;
    ...
    @OneToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="ADDRESS_ID")
    private Address address;
}

Example of a OneToOne relationship XML
<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id">
            <column name="EMP_ID"/>
        </id>
        <one-to-one
            <join-column
        </one-to-one>
    </attributes>
</entity>

Inverse Relationships, Target Foreign Keys and Mapped By
@Entity
public class Address {
    @Id
    @Column(name = "ADDRESS_ID")
    private long id;
    ...
    @OneToOne(fetch=FetchType.LAZY, mappedBy="address")
    private Employee owner;
    ...
}

Example of an inverse OneToOne relationship XML
<entity name="Address" class="org.acme.Address" access="FIELD">
    <attributes>
        <id name="id"/>
        <one-to-one
    </attributes>
</entity>

See Also
- Relationships
- ManyToOne

Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys
COMPANY (table)
DEPARTMENT (table)
EMPLOYEE (table)
ADDRESS (table)

Example of cascaded primary keys and mixed OneToOne and ManyToOne mapping annotations
EMPLOYEE (table)
EMP_ADD (table)
ADDRESS (table)

Example of a OneToOne using a JoinTable
...
https://en.m.wikibooks.org/wiki/Java_Persistence/OneToOne
CC-MAIN-2021-21
en
refinedweb
, issues. - We received a $1k/month donation from Facebook Open Source! - This the highest monthly donation we have gotten since the start (next highest is $100/month). - In the meantime, we will use our funds to meet in person and to send people to TC39 meetings. These meetings are every 2 months around the world. - If a company wants to specifically sponsor something, we can create separate issues to track. This was)#1 Help Maintain the Project (developer time at work) Engineer: There's a thing in SQL Server Enterprise blocking us— Shiya (@ShiyaLuo) November 16, 2017 Company: Let's set up a call next week with them an ongoing discussion to resolve it next quarter Engineer: There's a thing we need in babel, can I spent 2 days with a PR for it Company: lol no it's their job The best thing Fund development Company: "We'd like to use SQL Server Enterprise"— Adam Rackis (@AdamRackis) November 16, 2017 MS: "That'll be a quarter million dollars + $20K/month" Company: "Ok!" ... Company: "We'd like to use Babel" Babel: "Ok! npm i babel --save" Company: "Cool" Babel: "Would you like to help contribute financially?" Company: "lol no" We are definitely a upgrade tool that will help rewrite your package.json/.babelrc files and more. Ideally this means it would modify any necessary version number changes, package renames, and config changes. Please reach out to help and to post issues when trying to update! This is a great opportunity to get involved and to help the ecosystem update! previous postSummary of the - Dropped Node 0.10/0.12/5 support. - Updated TC39 proposals - Numeric Separator: 1_000 - Optional Chaining Operator: a?.b import.meta(parsable) - Optional Catch Binding: try { a } catch {} - BigInt (parsable): 2n - Split export extensions into export-default-fromand export-ns-form .babelrc.jssupport (a config using JavaScript instead of JSON) - Added a new Typescript Preset +)Deprecated Yearly Presets (e.g. babel-preset-es20xx) TL;DR: use babel-preset-env. What's better than you having to decide which Babel preset to use? Having it done for you, automatically! Even though the amount of work that goes into maintaining the lists of data is humongous — again, why we need help — it solves multiple issues. It makes sure users are up to date with the spec. It means less configuration/package confusion. It means an easier upgrade path. And it means ability to compile according to target environments you specify: whether that is for development mode, like the latest version of a browser, or for multiple builds, like a version for IE, and then another version for evergreen browsers. Not removing the Stage presets (babel-preset-stage-x) EDIT: We removed them, explained here Frustration level if we remove the Stage presets in Babel? (in favor explicitly requiring proposal plugins since they aren't JavaScript yet)— Henry Zhu (@left_pad) June 9, 2017 We can always keep it up to date, and maybe we just need to determine a better system than what these presets are currently. Right now, stage presets are literally just a list of plugins that we manually update after TC39 meetings. To make this manageable, we should use scoped packages, and, if anything, we should have done it much earlier 🙂! Examples of the scoped name change: babel-cli-> @babel/cli babel-core-> @babel/core babel-preset-env-> @babel/preset-env Renames: -proposal- Any proposals will be named with -proposal- now to signify that they aren't officially in JavaScript yet. 
So @babel/plugin-transform-class-properties becomes @babel/plugin-proposal-class-properties, and we would name it back once it gets into Stage 4. Renames: Drop the year from the plugin name + Integrations We are introducing a peer dependencies on @babel/core for all the plugins ( @babel/plugin-class-properties), presets ( @babel/preset-env), and top level packages ( @babel/cli, babel-loader). peerDependencies are dependencies expected to be used by your code, as opposed to dependencies only used as an implementation detail. — Stijn de Witt via StackOverflow. babel-loader already had a peerDependency on babel-core, so this just changes it to @babel/core. This is just so that people weren't trying to install these packages on the wrong version of Babel. For tools that already have a peerDependency on babel-core and aren't ready for a major bump (since changing the peer dependency is a breaking change), we have published a new version of babel-core to bridge the changes over with the new version TC39 proposal plugins to use the -proposal- name. If the spec changes, we will do a major version bump to the plugin and the preset it belongs to (as opposed to just making a patch version which may break for people). Then, we will need to deprecate the old versions and setup an infrastructure only add polyfills that the targets don't support. So with this option, something like: import "babel-polyfill"; Can turn into import "core-js/modules/es7.string.pad-start"; import "core-js/modules/es7.string.pad-end"; // ... if the target environment happens to support polyfills other than padStart or padEnd. an import at the top of each file but only adds the import if it finds it used in the code. This approach means that we can import the minimum amount of polyfills necessary for the app (and only if the target environment doesn't support it). So if you use Promise in your code, it will import it at the top of the file (if your target doesn't support it). Bundlers will dedupe the file if it's the same so this approach won't cause multiple polyfills to be imported. import "core-js/modules/es6.promise"; var a = new Promise(); import "core-js/modules/es7.array.includes"; [].includes a.includes With type inference we can know if an instance method like .includes is for an array or not, and having a false positive is ok until that logic is better since it's still better than importing the whole polyfill like before. Misc UpdatesMisc Updates babel-templateis faster/easier to use - regenerator was released under the MIT License - it's the dependency used to compile generators/async - "lazy" option to the modules-commonjsplugin via #6952 - You can now use envName: "something"in .babelrc or pass via cli babel --envName=somethinginstead of having to use process.env.BABEL_ENVor process.env.NODE_ENV ["transform-for-of", { "assumeArray": true }]to make all for-of loops output as regular arrays - Exclude transform-typeof-symbolin loose mode for preset-env #6831 - Landed PR for better error messages with syntax errors Todos Before ReleaseTodos Before Release - Handle .babelrclookup (want this done before first RC release) - "overrides" support: different config based on glob pattern - Caching and invalidation logic in babel-core. - maintenance Jessica for stepping up in the last year to help out. I'm really looking forward to a release (I'm tired too; it's almost been a year 😝), but also don't want to rush anything just because! 
There have been a lot of ups and downs throughout this release, but I've certainly learned a lot and I'm sure the rest of the team has as well. And if I've learned anything at all this year, I should really heed this advice rather than just write about it: "Before you go maintaining anything else, maintain your own body first" - Mom 😸 — Henry Zhu (@left_pad) December 22, 2017 Also, thanks to Mariko for the light push to actually finish this post (two months in the making).
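To make the renames and the preset-env targeting discussed above concrete, here is a minimal sketch of what a .babelrc could look like under the new scoped names; the browser list and the useBuiltIns mode are illustrative assumptions, not values taken from the post:

{
  "presets": [
    ["@babel/preset-env", {
      "targets": { "browsers": ["last 2 versions", "ie 11"] },
      "useBuiltIns": "usage"
    }]
  ],
  "plugins": ["@babel/plugin-proposal-class-properties"]
}

With useBuiltIns set this way, the core-js imports shown earlier (for example es7.array.includes) are only added to files that actually use the corresponding feature, and only for targets that lack it.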
https://www.babeljs.cn/blog/2017/12/27/nearing-the-7.0-release
CC-MAIN-2021-21
en
refinedweb
openshift-start-master-controllers - Man Page

Launch master controllers

Synopsis
openshift start master controllers [Options]

Description
Start the master controllers. This command starts the controllers for the master. Running openshift start master controllers will start the controllers that manage the master state, including the scheduler. The controllers will run in the foreground until you terminate the process.

Options
- --config="" Location of the master configuration file to run from. Required
- --listen="" The address to listen for connections on (scheme://host:port).
- --lock-service-name="" Name of a service in the kube-system namespace to use as a lock, overrides config.

Options Inherited from Parent Commands
- --azure-container-registry-config="" Path to the file containing Azure container registry configuration information.
- --log-flush-frequency=0 Maximum number of seconds between log flushes
- --version=false Print version information and quit

See Also
openshift-start-master(1)

History
June 2016, Ported from the Kubernetes man-doc generator

Referenced By
openshift-start-master(1).
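As a quick illustration of the synopsis above, a typical invocation might look like the following; the configuration file path is a placeholder, not a value from this man page:

# start the master controllers from an existing master configuration
openshift start master controllers --config=/etc/origin/master/master-config.yaml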
https://www.mankier.com/1/openshift-start-master-controllers
CC-MAIN-2021-21
en
refinedweb
SbDPViewVolume.3coin3 - Man Page The SbDPViewVolume class is a double precision viewing volume in 3D space. Synopsis #include <Inventor/SbLinear.h> Public Types enum ProjectionType { ORTHOGRAPHIC = 0, PERSPECTIVE = 1 } Public Member Functions SbDPViewVolume (void) ~SbDPViewVolume (void) void getMatrices (SbDPMatrix &affine, SbDPMatrix &proj) const SbDPMatrix getMatrix (void) const SbDPMatrix getCameraSpaceMatrix (void) const void projectPointToLine (const SbVec2d &pt, SbDPLine &line) const void projectPointToLine (const SbVec2d &pt, SbVec3d &line0, SbVec3d &line1) const void projectToScreen (const SbVec3d &src, SbVec3d &dst) const SbPlane getPlane (const double distFromEye) const SbVec3d getSightPoint (const double distFromEye) const SbVec3d getPlanePoint (const double distFromEye, const SbVec2d &normPoint) const SbDPRotation getAlignRotation (SbBool rightAngleOnly=FALSE) const double getWorldToScreenScale (const SbVec3d &worldCenter, double normRadius) const SbVec2d projectBox (const SbBox3f &box) const SbDPViewVolume narrow (double left, double bottom, double right, double top) const SbDPViewVolume narrow (const SbBox3f &box) const void ortho (double left, double right, double bottom, double top, double nearval, double farval) void perspective (double fovy, double aspect, double nearval, double farval) void frustum (double left, double right, double bottom, double top, double nearval, double farval) void rotateCamera (const SbDPRotation &q) void translateCamera (const SbVec3d &v) SbVec3d zVector (void) const SbDPViewVolume zNarrow (double nearval, double farval) const void scale (double factor) void scaleWidth (double ratio) void scaleHeight (double ratio) ProjectionType getProjectionType (void) const const SbVec3d & getProjectionPoint (void) const const SbVec3d & getProjectionDirection (void) const double getNearDist (void) const double getWidth (void) const double getHeight (void) const double getDepth (void) const void print (FILE *fp) const void getViewVolumePlanes (SbPlane planes[6]) const void transform (const SbDPMatrix &matrix) SbVec3d getViewUp (void) const void copyValues (SbViewVolume &vv) Detailed Description The SbDPViewVolume class is a double precision viewing volume in 3D space. This class contains the necessary information for storing a view volume. It has methods for projection of primitives from or into the 3D volume, doing camera transforms, view volume transforms etc. Be aware that this class is an extension for Coin, and it is not available in the original SGI Open Inventor v2.1 API. - See also SbViewportRegion - Since Coin 2.0 Member Enumeration Documentation enum SbDPViewVolume::ProjectionType An SbDPViewVolume instance can represent either an orthogonal projection volume or a perspective projection volume. - See also ortho(), perspective(), getProjectionType(). Enumerator - ORTHOGRAPHIC Orthographic projection. - PERSPECTIVE Perspective projection. Constructor & Destructor Documentation SbDPViewVolume::SbDPViewVolume (void) Constructor. Note that the SbDPViewVolume instance will be uninitialized until you explicitly call ortho() or perspective(). - See also ortho(), perspective(). SbDPViewVolume::~SbDPViewVolume (void) Destructor. Member Function Documentation void SbDPViewVolume::getMatrices (SbDPMatrix & affine, SbDPMatrix & proj) const Returns the view volume's affine matrix and projection matrix. - See also getMatrix(), getCameraSpaceMatrix() SbDPMatrix SbDPViewVolume::getMatrix (void) const Returns the combined affine and projection matrix. 
- See also getMatrices(), getCameraSpaceMatrix() SbDPMatrix SbDPViewVolume::getCameraSpaceMatrix (void) const Returns a matrix which will translate the view volume camera back to origo, and rotate the camera so it'll point along the negative z axis. Note that the matrix will not include the rotation necessary to make the camera up vector point along the positive y axis (i.e. camera roll is not accounted for). - See also getMatrices(), getMatrix() void SbDPViewVolume::projectPointToLine (const SbVec2d & pt, SbDPLine & line) const Project the given 2D point from the projection plane into a 3D line. pt coordinates should be normalized to be within [0, 1]. void SbDPViewVolume::projectPointToLine (const SbVec2d & pt, SbVec3d & line0, SbVec3d & line1) const Project the given 2D point from the projection plane into two points defining a 3D line. The first point, line0, will be the corresponding point for the projection on the near plane, while line1 will be the line endpoint, lying in the far plane. void SbDPViewVolume::projectToScreen (const SbVec3d & src, SbVec3d & dst) const Project the src point to a normalized set of screen coordinates in the projection plane and place the result in dst. It is safe to let src and \dst be the same SbVec3d instance. The z-coordinate of dst is monotonically increasing for points closer to the far plane. Note however that this is not a linear relationship, the dst z-coordinate is calculated as follows: Orthogonal view: DSTz = (-2 * SRCz - far - near) / (far - near), Perspective view: DSTz = (-SRCz * (far - near) - 2*far*near) / (far - near) The returned coordinates (dst) are normalized to be in range [0, 1]. SbPlane SbDPViewVolume::getPlane (const double distFromEye) const Returns an SbPlane instance which has a normal vector in the opposite direction of which the camera is pointing. This means the plane will be parallel to the near and far clipping planes. - See also getSightPoint() SbVec3d SbDPViewVolume::getSightPoint (const double distFromEye) const Returns the point on the center line-of-sight from the camera position with the given distance. - See also getPlane() SbVec3d SbDPViewVolume::getPlanePoint (const double distFromEye, const SbVec2d & normPoint) const Return the 3D point which projects to normPoint and lies on the plane perpendicular to the camera direction and distFromEye distance away from the camera position. normPoint should be given in normalized coordinates, where the visible render canvas is covered by the range [0.0, 1.0]. SbDPRotation SbDPViewVolume::getAlignRotation (SbBool rightangleonly = FALSE) const Returns a rotation that aligns an object so that its positive x-axis is to the right and its positive y-axis is up in the view volume. If rightangleonly is TRUE, it will create a rotation that aligns the x and y-axis with the closest orthogonal axes to right and up. double SbDPViewVolume::getWorldToScreenScale (const SbVec3d & worldCenter, double normRadius) const Given a sphere with center in worldCenter and an initial radius of 1.0, return the scale factor needed to make this sphere have a normRadius radius when projected onto the near clipping plane. SbVec2d SbDPViewVolume::projectBox (const SbBox3f & box) const Projects the given box onto the projection plane and returns the normalized screen space it occupies. SbDPViewVolume SbDPViewVolume::narrow (double left, double bottom, double right, double top) const Returns a narrowed version of the view volume which is within the given [0, 1] normalized coordinates. 
The coordinates are taken to be corner points of a normalized 'view window' on the near clipping plane. I.e.: SbDPViewVolume view; view.ortho(0, 100, 0, 100, 0.1, 1000); view = view.narrow(0.25, 0.5, 0.75, 1.0); - See also scale(), scaleWidth(), scaleHeight() SbDPViewVolume SbDPViewVolume::narrow (const SbBox3f & box) const Returns a narrowed version of the view volume which is within the given [0, 1] normalized coordinates. The box x and y coordinates are taken to be corner points of a normalized 'view window' on the near clipping plane. The box z coordinates are used to adjust the near and far clipping planes, and should be relative to the current clipping planes. A value of 1.0 is at the current near plane. A value of 0.0 is at the current far plane. void SbDPViewVolume::ortho (double left, double right, double bottom, double top, double nearval, double farval) Set up the view volume as a rectangular box for orthographic parallel projections. The line of sight will be along the negative z axis, through the center of the plane defined by the point <(right+left)/2, (top+bottom)/2, 0>. - See also perspective(). void SbDPViewVolume::perspective (double fovy, double aspect, double nearval, double farval) Set up the view volume for perspective projections. The line of sight will be through origo along the negative z axis. - See also ortho(). void SbDPViewVolume::frustum (double left, double right, double bottom, double top, double nearval, double farval) Set up the frustum for perspective projection. This is an alternative to perspective() that lets you specify any kind of view volumes (e.g. off center volumes). It has the same arguments and functionality as the corresponding OpenGL glFrustum() function. - See also perspective() void SbDPViewVolume::rotateCamera (const SbDPRotation & q) Rotate the direction which the camera is pointing in. - See also translateCamera(). void SbDPViewVolume::translateCamera (const SbVec3d & v) Translate the camera position of the view volume. - See also rotateCamera(). SbVec3d SbDPViewVolume::zVector (void) const Return the vector pointing from the center of the view volume towards the camera. This is just the vector pointing in the opposite direction of getProjectionDirection(). - See also getProjectionDirection(). SbDPViewVolume SbDPViewVolume::zNarrow (double nearval, double farval) const Return a copy SbDPViewVolume with narrowed depth by supplying parameters for new near and far clipping planes. nearval and \farval should be relative to the current clipping planes. A value of 1.0 is at the current near plane. A value of 0.0 is at the current far plane. - See also zVector(). void SbDPViewVolume::scale (double factor) Scale width and height of viewing frustum by the given ratio around the projection plane center axis. - See also scaleWidth(), scaleHeight(). void SbDPViewVolume::scaleWidth (double ratio) Scale width of viewing frustum by the given ratio around the vertical center axis in the projection plane. - See also scale(), scaleHeight(). void SbDPViewVolume::scaleHeight (double ratio) Scale height of viewing frustum by the given ratio around the horizontal center axis in the projection plane. - See also scale(), scaleWidth(). SbDPViewVolume::ProjectionType SbDPViewVolume::getProjectionType (void) const Return current view volume projection type, which can be either ORTHOGRAPHIC or PERSPECTIVE. 
- See also SbDPViewVolume::ProjectionType const SbVec3d & SbDPViewVolume::getProjectionPoint (void) const Returns coordinates of center point in the projection plane. const SbVec3d & SbDPViewVolume::getProjectionDirection (void) const Returns the direction of projection, i.e. the direction the camera is pointing. - See also getNearDist(). double SbDPViewVolume::getNearDist (void) const Returns distance from projection plane to near clipping plane. - See also getProjectionDirection(). double SbDPViewVolume::getWidth (void) const Returns width of viewing frustum in the projection plane. - See also getHeight(), getDepth(). double SbDPViewVolume::getHeight (void) const Returns height of viewing frustum in the projection plane. - See also getWidth(), getDepth(). double SbDPViewVolume::getDepth (void) const Returns depth of viewing frustum, i.e. the distance from the near clipping plane to the far clipping plane. - See also getWidth(), getHeight(). void SbDPViewVolume::print (FILE * fp) const Dump the state of this object to the file stream. Only works in debug version of library, method does nothing in an optimized compile. void SbDPViewVolume::getViewVolumePlanes (SbPlane planes[6]) const Returns the six planes defining the view volume in the following order: left, bottom, right, top, near, far. Plane normals are directed into the view volume. This method is an extension for Coin, and is not available in the original Open Inventor. void SbDPViewVolume::transform (const SbDPMatrix & matrix) Transform the viewing volume by matrix. SbVec3d SbDPViewVolume::getViewUp (void) const Returns the view up vector for this view volume. It's a vector which is perpendicular to the projection direction, and parallel and oriented in the same direction as the vector from the lower left corner to the upper left corner of the near plane. Author Generated automatically by Doxygen for Coin from the source code.
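As a rough usage sketch of the API documented above (projection setup plus point projection), the following is illustrative only; it assumes the field-of-view angle is given in radians, and Coin compile/link flags are omitted:

#include <Inventor/SbLinear.h>
#include <cstdio>

int main(void)
{
  // Perspective volume: ~45 degree vertical FOV, 4:3 aspect,
  // near plane at 1.0, far plane at 100.0.
  SbDPViewVolume vv;
  vv.perspective(0.7853981634, 4.0 / 3.0, 1.0, 100.0);

  // Project a world-space point to normalized screen coordinates.
  SbVec3d world(0.0, 0.0, -10.0);
  SbVec3d screen;
  vv.projectToScreen(world, screen);

  printf("screen: %f %f (depth %f)\n", screen[0], screen[1], screen[2]);
  return 0;
}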
https://www.mankier.com/3/SbDPViewVolume.3coin3
CC-MAIN-2021-21
en
refinedweb
When I finished playing around with building a Design System(or part of it) for a project, before start coding one important question popped up: Which library should I use to style my components? Lately, I've been working with styled-components but I wanted to try the trending ones right now: Tailwind CSS or Chakra-Ui. After watching some videos and seeing how both looked like in code, I decided to go with Chakra-Ui. So, in this article I'm going to share what my experience have been so far with Chakra-Ui after working with it during these last 2 days. Hopefully it can help people having their first steps with the library. 1. Creating a custom theme is a breeze By default, Chakra-Ui already comes with a theme but we can customize it to best fit our design. And that was where I started to play with Chakra since I had created a design system. The theme object is where we define the application's color pallete, type scale, font stacks, border radius values and etc. All Chakra components inherit from this default theme. From the default theme, we can extend and overide tokens and also add new values to the theme. Customizing the it is as easy as: 1) Extending it with extendTheme: import { extendTheme } from '@chakra-ui/react' const customTheme = extendTheme({ colors: { lightGray: { default: '#C4C4C4', hover: '#EEEEEE', disabled: '#9E9E9E' } }, // I'm just adding one more fontSize than the default ones fontSizes: { xxs: '0.625rem' }, // I'm creating a new space tokens since the default is represented with numbers space: { xs: '0.25rem', sm: '0.5rem', md: '1rem', lg: '1.5rem', xl: '2rem', xxl: '3rem', } }) export default customTheme 2) Passing to the ChakraProvider: import customTheme from './theme' <ChakraProvider theme={customTheme}> <App /> </ChakraProvider> 3) Using it: import customTheme from './theme' const BoxWithText= ({ text }) => ( <Box padding='xs' borderRadius='lg'> <Text>{text}</Text> </Box> ) 2. Creating variants of components makes it easier to implement a design system Besides customizing theme tokens we can also customize component styles. Chakra component styles have a specific API that a component style consists of: baseStyle, the default style of a component sizes, represents styles for different sizes of a component variants, represents styles for different visual variants defaultProps, optional, to define the default sizeor variant. From the docs, what the component style looks like: const ComponentStyle = { // style object for base or default style baseStyle: {}, // styles for different sizes ("sm", "md", "lg") sizes: {}, // styles for different visual variants ("outline", "solid") variants: {}, // default values for `size` and `variant` defaultProps: { size: "", variant: "", }, } With the possibility of customizing each component we can create variants for them to match pre-defined styles of a component. For example, in a design system you may have different variations of the typography to show different font sizes, font weights, etc. The same goes with components such as buttons, inputs, etc. 
With variants we can create pre-defined styles for those components: import { extendTheme } from '@chakra-ui/react' const customTheme = extendTheme({ components: { Heading: { variants: { h1: { fontSize: '4xl', fontWeight: 'bold' }, h2: { fontSize: '3xl', fontWeight: 'bold' } } }, Text: { variants: { subtitle: { fontSize: 'xl', fontWeight: 'medium' }, body: { fontSize: 'md', fontWeight: 'medium' } } } } }) export default customTheme And use it in our code: const Badge = ({ text }) => ( <Box padding='xs' borderRadius='lg' w='max-content'> <Text variant='bodyExtraSmall'>{text}</Text> </Box> ) 3. Integrating with Storybook is not so smooth currently One pain point I had with this begining of my journey with Chakra-Ui was trying to use Storybook to show my created components. For my workflow, I always create the components and their corresponding stories to be easier to see the different styles and create a component library. However, when I created the stories with my Chakra components and checked the Storybook it did not load any styling I made with Chakra. I was frustrated at first but thanks to an issue raised I could get it working. To fix that you can: 1) Modify the main.js file inside the .storybook folder to match the webpackFinal config that Chakra uses: const path = require("path"); const toPath = (_path) => path.join(process.cwd(), _path); module.exports = { stories: ["../src/**/*.stories.mdx", "../src/**/*.stories.@(js|jsx|ts|tsx)"], addons: [ "@storybook/addon-links", "@storybook/addon-essentials", "@storybook/preset-create-react-app", ], webpackFinal: async (config) => { return { ...config, resolve: { ...config.resolve, alias: { ...config.resolve.alias, "@emotion/core": toPath("node_modules/@emotion/react"), "emotion-theming": toPath("node_modules/@emotion/react"), }, }, }; }, }; 2) Wrap the story decorator with the ChakraProvider in the preview.js file: import React from "react" import { ChakraProvider } from '@chakra-ui/react' import theme from '../src/theme' export const parameters = { actions: { argTypesRegex: "^on[A-Z].*" }, } const withChakra = (StoryFn) => { return ( <ChakraProvider theme={theme}> <StoryFn /> </ChakraProvider> ) } export const decorators = [withChakra] This is a temporary workaround that I believe can be resolved any time soon since they are already working on it :) 4. We can't create variants for Box but we can use Layer Styles or Text Styles Another "problem" I had was when I tried to create variants for the Box component. I wanted to create different types of Badges that I could simply choose the variant when inserting them on my components. My Badge consisted of a Box with a Text inside, the code I've shown in the previous snippets. However, after finding this issue I understood that by design Box cannot receive variants in theme since it is a generic component, it represents a div. To work around that, you could use the useStyleConfig or use the textStyle or layerStyle props, docs here. Both props are used to avoid repeating specific text and layer properties and help us keep our components organized and consistent. They allow us to save styling attributes to re-use in other components, passing the corresponding prop to the component. Since I only needed to change the background color and the border depending on the type of Badge I wanted, I used the layer style. 
To solve this:
1) Extend the theme with new layerStyles:

const customTheme = extendTheme({
  layerStyles: {
    defaultBadge: { bg: 'lightGray.default' },
    outlinedBadge: { bg: 'transparent', border: '1px solid #000000' },
    whiteBadge: { bg: '#FFFFFF' }
  }
})

2) Consume it in the component:

const Badge = ({ text }) => (
  <Box layerStyle='outlinedBadge'>
    <Text variant='bodyExtraSmall'>{text}</Text>
  </Box>
)

Conclusion
That's it for now. I hope you can pick up some tips when starting your own journey with Chakra-Ui. If I find more interesting points and learnings to share, I may create another article as well :) If it was somehow useful, leave it a ❤️, or drop a comment if you have more to add. Also, I'd love it if we connect on Twitter as well :)

Discussion (10)
nice post, thanks for writing this together Carlos! 🙏
Thanks! Great that you liked it, Dominik :)
Love this post, I have a lot of experience in this area, thank you!
Thanks for the feedback, Harry! Glad that you loved it :)
Hi Carlos, thanks a lot for the post :) Carlos, I was wondering if you had to create multi-part components. I'm trying to get it working but I'm not sure of the approach.
Great that you liked it, Hélio! 🙂 Unfortunately I haven't used it yet. I tried to look in the docs and tried to find some other resources, but I couldn't quite understand its use, so I can't help you right now :( If I end up using it and getting the hang of it, I will share it with you :)
Yeah, I think the docs are missing quite a bit of information. I've tried to get help from the Discord channel but without luck either. I'll keep trying to find info and maybe I'll write a post just like yours, which is really good :) Thanks, cheers
Great, Helio! It would be a massive help if you share it after you understand how it works. There are certainly other devs with the same struggle. I'm looking forward to reading your post :)
Thanks a lot :D
Thanks for the post. I'm bookmarking it for the time I'll try Chakra-UI.
Thanks, Magda! Hopefully it'll be helpful to you :)
https://dev.to/carlosrafael22/what-i-ve-learned-with-chakra-ui-so-far-4f5e
CC-MAIN-2021-21
en
refinedweb
Dynamixel and 3mxl driver threemxl provides a library to communicate with Dynamixel servos and 3mxl motor control boards. It also allows multiple ROS nodes to communicate with a single Dynamixel chain using a shared_serial node. As threemxl depends on some external packages, make sure to run rosdep install threemxl at least once! To quickly test the board, you can use the console node. Just type rosrun threemxl console, provided you have a roscore running somewhere. Note that the console defaults to /dev/ttyUSB0. You can specify a different port or namespace as argument. The main classes are CDynamixel and C3mxl. These communicate with the hardware directly through a serial port (either LxSerial or LxFTDI). The ROS versions, CDynamixelROS and C3mxlROS, just subclass CDynamixel and C3mxl and set the package handler to CDxlROSPacketHandler such that the communication works over a shared_serial node.
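For example, following the notes above, testing a board on a non-default serial port could look like this (the port name is just an example):

rosdep install threemxl            # pull in the external dependencies
roscore &                          # a roscore must be running somewhere
rosrun threemxl console /dev/ttyUSB1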
https://docs.ros.org/en/indigo/api/threemxl/html/
CC-MAIN-2021-21
en
refinedweb
git --git-dir=a.git --work-tree=b -C c status git --git-dir=c/a.git --work-tree=c/b status git - the stupid content tracker git [--version] [--help] [-C <path>] [-c <name>=<value>] [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path] [-p|--paginate|-P|--no-pager] [--no-replace-objects] [--bare] [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>] [--super-prefix=<path>] <command> [<args>] Git. Pass a configuration parameter to the command. The value given will override values from configuration files. The <name> is expected in the same format as listed by git config (subkeys separated by dots). (" . Currently for internal use only. Set a prefix which gives a path from above a repository down to its root. One use is to give submodules context about the superproject that invoked it.. Do not perform optional operations that require locks. This is equivalent to setting the GIT_OPTIONAL_LOCKS to 0. List commands by group. This is an internal/experimental option and may change or be removed in the future. Supported groups are: builtins, parseopt (builtin commands that use parse-options), main (all commands in libexec directory), others (all other commands in $PATH that have git- prefix), list-<category> (see categories in command-list.txt), nohelpers (exclude helper commands), alias and config (retrieve command list from config variable completion.commands) Give an object a human readable name based on an available ref The Git repository browser Show commit logs Run tasks to optimize Git repository data Join two or more development histories together Move or rename a file, a directory, or a symlink Add or inspect object notes Fetch from and integrate with another repository or a local branch Update remote refs along with associated objects Compare two commit ranges (e.g. 
two versions of a branch) Reapply commits on top of another base tip Reset current HEAD to the specified state Restore working tree files Revert some existing commits Remove files from the working tree and from the index Summarize git log output Show various types of objects Initialize and modify the sparse-checkout Stash the changes in a dirty working directory away Show the working tree status Initialize, update or inspect submodules Switch branches Create, list, delete or verify a tag object signed with GPG Manage multiple working trees Manipulators: Get and set repository or global options Git data exporter Backend for fast Git data importers Rewrite branches Run merge conflict resolution tools to resolve merge conflicts Pack heads and tags for efficient repository access Prune all unreachable objects from the object database Manage reflog information Manage set of tracked repositories Pack unpacked objects in a repository Create, list, delete refs to replace objects Interrogators: Annotate file lines with commit information Show what revision and author last modified each line of a file Collect information for user to file a bug report Count unpacked number of objects and their disk consumption Show changes using common diff tools Verifies the connectivity and validity of the objects in the database Display help information about Git Instantly browse your working repository in gitweb Show three-way merge without touching index Reuse recorded resolution of conflicted merges Show branches and their commits Check the GPG signature of commits Check the GPG signature of tags Git web interface (web frontend to Git repositories) Show logs with difference each commit introduces These commands are to interact with foreign SCM and with other people via patch over e-mail. Import a GNU Arch repository into Git Export a single commit to a CVS checkout Salvage your data out of another SCM people love to hate A CVS server emulator for Git can also be used to restore the index, overlapping with git restore. Write and verify Git commit-graph files Create a new commit object Compute object ID and optionally creates a blob from a file Build pack index file for an existing packed archive Run a three-way file merge Run a merge for files needing merging Write and verify multi-pack-indexes Creates a tag object Find commits yet to be applied to upstream Compares files in the working tree and the index Compare a tree to the working tree or index Compares the content and mode of blobs found via two tree objects Output information on each ref Extract commit ID from an archive created using git-archive Pick out and massage parameters Show packed archive index List references in a local repository Debug gitignore / exclude files Show canonical names and email addresses of contacts Ensures that a reference name is well formed Display data in columns Retrieve and store user credentials Helper to temporarily store passwords in memory Helper to store credentials on disk Produce a merge commit message Add or parse structured information in The following documentation pages are guides about Git concepts. 
Defining attributes per path Git command-line interface and conventions A Git core tutorial for developers Providing usernames and passwords to Git Git for CVS users Tweaking diff output A useful minimum set of commands for Everyday Git Frequently asked questions about using Git A Git Glossary Hooks used by Git Specifies intentionally untracked files to ignore Defining submodule properties Git namespaces Helper programs to interact with remote repositories Git Repository Layout Specifying revisions and ranges for Git Mounting one repository inside another A tutorial introduction to Git: part two A tutorial introduction to Git An overview of recommended workflows with Git Git = "[email protected]" Various commands read from the configuration file and adjust their operation accordingly. See git-config[1] for a list and more details about the configuration mechanism. a foreign front-end. GIT_INDEX_FILE This environment allows the specification of an alternate index file. If not specified, the default of $GIT_DIR/index is used. GIT_INDEX_VERSION This environment variable allows the specification of an index version for new repositories. It won’t affect existing index files. By default index file version 2 or 3 is used. See git-update-index[1] for more information._COMMON_DIR If this variable is set to a path, non-worktree files that are normally in $GIT_DIR will be taken from this path instead. Worktree-specific files such as HEAD or index are taken from $GIT_DIR. See gitrepository-layout[5] and git-init[1].. to generate diffs, and Git does not use its builtin diff machinery.-1 A 1-based counter incremented by one for every path. GIT_DIFF_PATH_TOTAL The total number of paths._PROGRESS_DELAY A number controlling how many seconds to delay before showing optional progress indicators. Defaults to 2. GIT_EDITOR This environment variable overrides $EDITOR and $VISUAL. It is used by several Git commands when, on interactive mode, an editor is to be launched. See also git-var[1] and the core.editor option in git-config[1]. GIT_SEQUENCE_EDITOR This environment variable overrides the configured Git editor when editing the todo list of an interactive rebase. See also linkit::git-rebase[1] and the sequence.editor option in link serves the same purpose._TERMINAL_PROMPT If this environment variable is set to 0, git will not prompt on the terminal (e.g., when asking for HTTP authentication). and git check-ignore Enables general trace messages, e.g. alias expansion, built-in command execution and external command execution. Enables trace messages for the filesystem monitor extension. See GIT_TRACE for available trace output options. GIT_TRACE_PACK_ACCESS Enables trace messages for all accesses to any packs. For each access, the pack file name and an offset in the pack is recorded. This may be helpful for troubleshooting some pack-related performance problems. See GIT_TRACE for available trace output options. GIT_TRACE_PACKET. GIT_TRACE_PACKFILE Enables tracing of packfiles sent or received by a given program. Unlike other trace output, this trace is verbatim: no headers, and no quoting of binary data. You almost certainly want to direct into a file (e.g., GIT_TRACE_PACKFILE=/tmp/my.pack) rather than displaying it on the terminal or mixing it with other trace output. Note that this is currently only implemented for the client side of clones and fetches. GIT_TRACE_PERFORMANCE Enables performance related trace messages, e.g. total execution time of each Git command. 
See GIT_TRACE for available trace output options. GIT_TRACE_REFS Enables trace messages for operations on the ref database. See GIT_TRACE for available trace output options. GIT_TRACE_SETUP Enables trace messages printing the .git, working tree and current working directory after Git has completed its setup phase. See GIT_TRACE for available trace output options. GIT_TRACE_SHALLOW Enables trace messages that can help debugging fetching / cloning of shallow repositories. See GIT_TRACE for available trace output options. GIT_TRACE_CURL Enables a curl full trace dump of all incoming and outgoing data, including descriptive information, of the git transport protocol. This is similar to doing curl --trace-ascii on the command line.. for full details. GIT_TRACE2_EVENT This setting writes a JSON-based format that is suited for machine interpretation. See GIT_TRACE2 for available trace output options and Trace2 documentation for full details. GIT_TRACE2_PERF In addition to the text-based messages available in GIT_TRACE2, this setting writes a column-based format for understanding nesting regions. See GIT_TRACE2 for available trace output options and Trace2 documentation for full details. GIT_TRACE_REDACT By default, when tracing is activated, Git redacts the values of cookies, the "Authorization:" header, and the "Proxy-Authorization:" header. Set this variable to 0 to prevent this redaction.). cause Git to treat all pathspecs as case-insensitive. GIT_REFLOG_ACTION When a ref is updated, reflog entries are created to keep track of the reason why the ref was updated (which is typically the name of the high-level command that updated the ref), in addition to the old and new values of the ref. A scripted Porcelain command can use set_reflog_action helper function in git-sh-setup to set its name to this variable when it is invoked as the top level command by the end user, to be recorded in the body of the reflog. GIT_REF_PARANOIA If set to 1, include broken or badly named refs when iterating over lists of refs. In a normal, non-corrupted repository, this does nothing. However, enabling it may help git to detect and abort some operations in the presence of broken refs. Git sets this variable automatically when performing destructive operations like git-prune[1]. You should not need to set it yourself unless you want to be paranoid about making sure an operation has touched every ref (e.g., because you are cloning a repository to make a backup). GIT_ALLOW_PROTOCOL If set to a colon-separated list of protocols, behave as if protocol.allow from references in the "description" section. See also the howto documents for some useful examples. The internals are documented in the Git API documentation. Users migrating from CVS may also want to read gitcvs-migration[7]. Git was started by Linus Torvalds, and is currently maintained by Junio C Hamano. Numerous contributions have come from the Git mailing list <[email protected]>. gives you a more complete list of contributors. If you have a clone of git.git itself, the output of git-shortlog[1] and git-blame[1] can show you the authors for specific parts of the project. Report bugs to the Git mailing list <[email protected]> <[email protected]>. gittutorial[7], gittutorial-2[7], giteveryday[7], gitcvs-migration[7], gitglossary[7], gitcore-tutorial[7], gitcli[7], The Git User’s Manual, gitworkflows[7] © 2012–2018 Scott Chacon and others Licensed under the MIT License.
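A few concrete invocations of the options and trace variables described above (paths are placeholders):

# show alias expansion and command execution while running a command
GIT_TRACE=1 git status

# write packet-level protocol traces to a file instead of the terminal
GIT_TRACE_PACKET=/tmp/packet.log git fetch origin

# override a configuration value for a single invocation
git -c core.editor=vim commit --amend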
https://docs.w3cub.com/git/git
CC-MAIN-2021-21
en
refinedweb
I am new to Python programming. I tried to install pygame and wrote some code that uses it:

import sys
import settings
import pygame

def run_game():
    pygame.init()
    all_settings = settings()
    screen = pygame.display.set_mode(
        (all_settings.screen_width, all_settings.screen_length))
    pygame.display.set_caption("Alien Invaders")
    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                sys.exit()
        screen.fill(all_settings.bg_color)
        pygame.display.flip()

run_game()

But it shows me a huge error message like the one below:

ERROR: Command errored out with exit status 1:
command: /home/runner/.local/share/virtualenvs/python3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-gbk6qx6o/pygame/setup.py'"'"'; __file__='"'"'/tmp/pip-install-gbk6qx6o/pygame/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info
cwd: /tmp/pip-install-gbk6qx6o/pygame/
Complete output (12 lines):
WARNING, No "Setup" File Exists, Running "buildconfig/config.py"
Using UNIX configuration...
/bin/sh: 1: sdl-config: not found
/bin/sh: 1: sdl-config: not found
/bin/sh: 1: sdl-config: not found
Hunting dependencies...
WARNING: "sdl-config" failed!
Unable to run "sdl-config". Please make sure a development version of SDL is installed.
ImportError: No module named 'pygame'

Can anybody help?

1. Open the folder where your Python is installed
2. Open the Scripts folder
3. Type cmd in the address bar. It opens a command prompt window in that location
4. Type pip install pygame and press enter (it should download and install the pygame module)
Now run your code and it should work. Happy coding.

Three ways to install it:
1. Using apt-cache to search and apt-get to install:
apt-cache search pygame
python-pygame - SDL bindings for games development in Python
Then install:
sudo apt-get install python-pygame
2. Using pip:
pip install Pygame
3. Manual installation: download the source from PyPI, extract the .tar.gz file, and install with:
python setup.py install
Note: check which Python version you need to install for; if you use Python 3, use:
pip3 install Pygame

Go to the python/scripts folder, open a command window at this path, and type the following command line:
C:\python34\scripts> python -m pip install pygame --user
To test it, open the Python IDE and type:
import pygame
print(pygame.ver)

I was trying to figure this out for at least an hour, and you're correct: the problem is that the installation files are all for 32-bit. Just pick the version that matches your Python version and it should work like magic. The installer will take you to a bright-blue screen during the installation (at this point you know the installer is the right one for you). Then go into the Python IDLE, type "import pygame", and you should not get any more errors. Here are instructions for users with the newer Python 3.5 (Google brought me here, and I suspect other 3.5 users might end up here as well): I only got Pygame 1.9.2a0-cp35 installed successfully on Windows, and it runs with Python 3.5.1.
1. Install Python, and keep in mind the install location
2. Go here and download pygame-1.9.2a0-cp35-none-win32.whl
3. Move the downloaded .whl file to your python35/Scripts directory
4. Open a command prompt in the Scripts directory (Shift + right-click in the directory > Open a command window here)
5. Enter the command:
pip3 install pygame-1.9.2a0-cp35-none-win32.whl
If you get an error in the last step, try:
python -m pip install pygame-1.9.2a0-cp35-none-win32.whl
And that should do it. Tested as working on Windows 10 64-bit.
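Separate from the installation problem, the snippet in the question also calls the settings module as if it were a class, which will raise a TypeError even once pygame is installed. Assuming settings.py defines a class (called Settings here purely as an assumption; the attribute names are kept exactly as in the question), a corrected sketch would be:

import sys
import pygame
from settings import Settings   # assumes settings.py defines a Settings class

def run_game():
    pygame.init()
    all_settings = Settings()
    screen = pygame.display.set_mode(
        (all_settings.screen_width, all_settings.screen_length))
    pygame.display.set_caption("Alien Invaders")

    while True:
        # process window events so the game can be closed cleanly
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                sys.exit()
        screen.fill(all_settings.bg_color)
        pygame.display.flip()

run_game()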
https://kodlogs.com/37994/importerror-no-module-named-pygame
CC-MAIN-2021-21
en
refinedweb
Kubernetes reserves all labels and annotations in the kubernetes.io namespace. This document serves both as a reference to the values, and as a coordination point for assigning values. Example: beta.kubernetes.io/arch=amd64 Used on: Node Kubelet populates this with runtime.GOARCH as defined by Go. This can be handy if you are mixing arm and x86 nodes, for example. Example: beta.kubernetes.io/os=linux Used on: Node Kubelet populates this with runtime.GOOS as defined by Go. This can be handy if you are mixing operating systems in your cluster (although currently Linux is the only OS supported by Kubernetes). Example: kubernetes.io/hostname=ip-172-20-114-199.ec2.internal Used on: Node Kubelet populates this with the hostname. Note that the hostname can be changed from the “actual” hostname by passing the --hostname-override flag to kubelet. Example: beta.kubernetes.io/instance-type=m3.medium Used on: Node Kubelet populates this with the instance type as defined by the cloudprovider. It will not be set if not using a cloudprovider. This can be handy if you want to target certain workloads to certain instance types, but typically you want to rely on the Kubernetes scheduler to perform resource-based scheduling, and you should aim to schedule based on properties rather than on instance types (e.g. require a GPU, instead of requiring a g2.2xlarge) See failure-domain.beta.kubernetes.io/zone. Example: failure-domain.beta.kubernetes.io/region=us-east-1 failure-domain.beta.kubernetes.io/zone=us-east-1c Used on: Node, PersistentVolume On the Node: Kubelet populates this with the zone information as defined by the cloudprovider. It will not be set if not using a cloudprovider, but you should consider setting it on the nodes if it makes sense in your topology. On the PersistentVolume: The PersistentVolumeLabel admission controller will automatically add zone labels to PersistentVolumes, on GCE and AWS. Kubernetes will automatically spread the pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via SelectorSpreadPriority. This is a best-effort placement, and so if the zones in your cluster are heterogeneous (e.g. different numbers of nodes, different types of nodes, or different pod resource requirements), this might prevent equal spreading of your pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading. The scheduler (via the VolumeZonePredicate predicate) will also ensure that pods that claim a given volume are only placed into the same zone as that volume, as volumes cannot be attached across zones. The actual values of zone and region don’t matter, and nor is the meaning of the hierarchy rigidly defined. The expectation is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example, zones should typically avoid sharing a single network switch. The exact mapping depends on your particular infrastructure - a three-rack installation will choose a very different setup to a multi-datacenter configuration. If PersistentVolumeLabel does not support automatic labeling of your PersistentVolumes, you should consider adding the labels manually (or adding support to PersistentVolumeLabel), if you want the scheduler to prevent pods from mounting volumes in a different zone. 
If your infrastructure doesn’t have this constraint, you don’t need to add the zone labels to the volumes at all.
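As an example of consuming these labels, a pod can be pinned to a zone with a nodeSelector; the pod name, image, and zone value below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod
spec:
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: us-east-1c
  containers:
  - name: app
    image: nginx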
https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/
CC-MAIN-2018-34
en
refinedweb
public class DropTargetEffect extends DropTargetAdapter The drop target effect has the same API as the DropTargetAdapter so that it can provide custom visual feedback when a DropTargetEvent occurs. Classes that wish to provide their own drag under effect can extend the DropTargetEffect and override any applicable methods in DropTargetAdapter to display their own drag under effect. The feedback value is either one of the FEEDBACK constants defined in class DND which is applicable to instances of this class, or it must be built by bitwise OR'ing together (that is, using the int "|" operator) two or more of those DND effect constants. DropTargetAdapter, DropTargetEvent, Sample code and further information dragEnter, dragLeave, dragOperationChanged, dragOver, drop, dropAccept clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public DropTargetEffect(Control control) DropTargetEffectto handle the drag under effect on the specified Control. control- the Controlover which the user positions the cursor to drop the data IllegalArgumentException- public Control getControl() public Widget getItem(int x, int y) x- the x coordinate used to locate the item y- the y coordinate used to locate the item Copyright (c) 2000, 2017 Eclipse Contributors and others. All rights reserved.Guidelines for using Eclipse APIs.
http://help.eclipse.org/oxygen/nftopic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/swt/dnd/DropTargetEffect.html
CC-MAIN-2018-34
en
refinedweb
arch(3) BSD Library Functions Manual arch(3) NAME NXGetAllArchInfos, NXGetLocalArchInfo, NXGetArchInfoFromName, NXGetArchInfoFromCpuType, NXFindBestFatArch, NXCombineCpuSubtypes -- get architecture information SYNOPSIS #include <mach-o/arch.h> extern const NXArchInfo * NXGetAllArchInfos(void); extern const NXArchInfo * NXGetLocalArchInfo(void); extern const NXArchInfo * NXGetArchInfoFromName(const char *name); extern const NXArchInfo * NXGetArchInfoFromCpuType(cpu_type_t cputype, cpu_subtype_t cpusubtype); extern struct fat_arch * NXFindBestFatArch(cpu_type_t cputype, cpu_subtype_t cpusubtype, struct fat_arch *fat_archs, unsigned long nfat_archs); extern cpu_subtype_t NXCombineCpuSubtypes(cpu_type_t cputype, cpu_subtype_t cpusubtype1, cpu_subtype_t cpusubtype2); DESCRIPTION These functions are intended for use in programs that have to deal with universal files or programs that can target multiple architectures. Typ- ically, a program will use a command-line argument that starts with ``-arch name'', where this specifies an architecture. These functions and data structures provide some help for processing architecture flags and then processing the contents of a universal file. The structure NXArchInfo is defined in <mach-o/arch.h>: typedef struct { const char *name; cpu_type_t cputype; cpu_subtype_t cpusubtype; enum NXByteOrder byteorder; const char *description; } NXArchInfo; It is used to hold the name of the architecture and the corresponding CPU type and CPU subtype, together with the architecture's byte order and a brief description string. The currently known architectures are: Name CPU Type CPU Subtype Description ppc CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_ALL PowerPC ppc64 CPU_TYPE_POWERPC64 CPU_SUBTYPE_POWERPC64_ALL PowerPC 64-bit i386 CPU_TYPE_I386 CPU_SUBTYPE_I386_ALL Intel 80x86 x86_64 CPU_TYPE_X86_64 CPU_SUBTYPE_X86_64_ALL Intel x86-64 m68k CPU_TYPE_MC680x0 CPU_SUBTYPE_MC680x0_ALL Motorola 68K hppa CPU_TYPE_HPPA CPU_SUBTYPE_HPPA_ALL HP-PA i860 CPU_TYPE_I860 CPU_SUBTYPE_I860_ALL Intel 860 m88k CPU_TYPE_MC88000 CPU_SUBTYPE_MC88000_ALL Motorola 88K sparc CPU_TYPE_SPARC CPU_SUBTYPE_SPARC_ALL SPARC ppc601 CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_601 PowerPC 601 ppc603 CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_603 PowerPC 603 ppc604 CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_604 PowerPC 604 ppc604e CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_604e PowerPC 604e ppc750 CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_750 PowerPC 750 ppc7400 CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_7400 PowerPC 7400 ppc7450 CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_7450 PowerPC 7450 ppc970 CPU_TYPE_POWERPC CPU_SUBTYPE_POWERPC_970 PowerPC 970 i486 CPU_TYPE_I386 CPU_SUBTYPE_486 Intel 486 i486SX CPU_TYPE_I386 CPU_SUBTYPE_486SX Intel 486SX pentium CPU_TYPE_I386 CPU_SUBTYPE_PENT Intel Pentium i586 CPU_TYPE_I386 CPU_SUBTYPE_586 Intel 586 pentpro CPU_TYPE_I386 CPU_SUBTYPE_PENTPRO Intel Pentium Pro i686 CPU_TYPE_I386 CPU_SUBTYPE_PENTPRO Intel Pentium Pro pentIIm3 CPU_TYPE_I386 CPU_SUBTYPE_PENTII_M3 Intel Pentium II Model 3 pentIIm5 CPU_TYPE_I386 CPU_SUBTYPE_PENTII_M5 Intel Pentium II Model 5 pentium4 CPU_TYPE_I386 CPU_SUBTYPE_PENTIUM_4 Intel Pentium 4 m68030 CPU_TYPE_MC680x0 CPU_SUBTYPE_MC68030_ONLY Motorola 68030 m68040 CPU_TYPE_MC680x0 CPU_SUBTYPE_MC68040 Motorola 68040 hppa7100LC CPU_TYPE_HPPA CPU_SUBTYPE_HPPA_7100LC HP-PA 7100LC The first set of entries are used for the architecture family. The sec- ond set of entries are used for a specific architecture, when more than one specific architecture is supported in a family of architectures. 
NXGetAllArchInfos() returns a pointer to an array of all known NXArchInfo structures. The last NXArchInfo is marked by a NULL name. NXGetLocalArchInfo() returns the NXArchInfo for the local host, or NULL if none is known. NXGetArchInfoFromName() and NXGetArchInfoFromCpuType() return the NXArchInfo from the architecture's name or CPU type/CPU subtype combination. A CPU subtype of CPU_SUBTYPE_MULTIPLE can be used to request the most general NXArchInfo known for the given CPU type. NULL is returned if no matching NXArchInfo can be found. NXCombineCpuSubtypes() returns the resulting CPU subtype when combining two different CPU subtypes for the specified CPU type. If the two CPU subtypes can't be combined, it is because the specific subtypes are mutually exclusive.
SEE ALSO arch(1)
July 28, 2005 Mac OS X 10.7 - Generated Wed Nov 16 06:42:09 CST 2011
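A small, self-contained example of the lookup functions described above; it only uses the function names and struct fields shown in the synopsis:

#include <stdio.h>
#include <mach-o/arch.h>

int main(void)
{
    /* Describe the architecture of the machine we are running on. */
    const NXArchInfo *local = NXGetLocalArchInfo();
    if (local != NULL)
        printf("local: %s (%s)\n", local->name, local->description);

    /* Look up an architecture by the name used with -arch flags. */
    const NXArchInfo *info = NXGetArchInfoFromName("x86_64");
    if (info != NULL)
        printf("x86_64: cputype %d, cpusubtype %d\n",
               (int)info->cputype, (int)info->cpusubtype);
    return 0;
}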
http://manpagez.com/man/3/arch/
CC-MAIN-2018-34
en
refinedweb
… in a museum. You walk by a painting and suddenly your phone becomes the voice of the artist and begins to speak to you about the piece…. Bringing art to the guest.

In 2008 I developed an application that used RFID to trigger events on a mobile device (PDA). The main purpose was to be an Electronic Docent, a museum guide: exhibit information delivered directly to the guest. Unfortunately RFID never became a consumer-friendly technology. Fast forward to 2016: smartphones are prevalent and low-power Bluetooth (BLE) devices are becoming ever more popular. In January, myself and two others began development on a new version of the application. The PDA has been replaced by smartphones and tablets. Both iOS and Android hold major positions in this area, and both have support for standard Bluetooth as well as BLE.

How it works
The application running on the device is designed to look for BLE tags. When one is located, a request is made to a server to search the database for the tag id. If the id is located, information about the media is returned and the user can select whether they want to view the media or not. The tags and media have to be associated. This is done by personnel managing the location; they understand both the content and how they would like it displayed to the visitor.

One of the biggest decisions was how to develop the mobile portion of the application.
- Native: iOS and Android
- Cross platform framework: Xamarin, Qt
- Javascript/HTML5 framework: Apache Cordova (formerly PhoneGap), Ionic

Native
Until a few years ago mobile applications were required to be developed in Java or Objective-C. Apple refused to accept applications cross compiled or interpreted into Objective-C. The drawback is that an application had to be developed twice. Maintenance was much harder since it required twice the effort in coding and QA. On the other side, native applications had the ability to interact with the device's hardware: sound, touch, GPS, and accelerometer.

Cross platform framework
Frameworks such as Xamarin and Qt give the developer the ability to write one application and deploy it to multiple mobile platforms.
Xamarin: Based around C# and created by the team that created Mono. Xamarin takes C# code and creates native code for iOS or Android. Microsoft now owns Xamarin and has integrated it into its Visual Studio IDE.
Qt: This has long been a popular framework for developing applications for Windows, OSX and Linux. When mobile support was added there were license issues. Also, Qt has less of a native look and feel.
Javascript/HTML5 framework: Tools such as Ionic use the Angular.js framework and Cordova libraries to create cross platform applications. The key to their success has been the Cordova (PhoneGap) libraries. These provide access to the device hardware, which lets the application behave more like native code.

We chose Ionic. There were too many issues with either Xamarin or Qt, and developing two separate native applications was out of the question.

Serving up data
Once the mobile application finds a BLE tag, it needs to get the information associated with the tag. This means an application server. This was a simple choice: Java, Hibernate, MySQL and Tomcat. This combination is proven, solid and will work well on something like AWS. One advantage to MySQL is that AWS's Aurora database is MySQL compatible and an easy replacement if very high performance is required.

Server side
Using Java and Hibernate makes the server work pretty straightforward. The code is built on layers: Entities, DAO, Service, Controller.
Entity Each entity represents a table in the database. - ExhibitTag - This represents a single ble tag - ExhibitTagMedia - This represents the media associated with tag. A tag could have more than one media component. - Location - This represents the location of the tag - Organization - This represents the site or organization managing the tags. package com.tundra.entity; import java.io.Serializable; import java.util.Date; import java.util.Set; import javax.persistence.Basic; import javax.persistence.CascadeType; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.FetchType; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.ManyToOne; import javax.persistence.OneToMany; import javax.persistence.Table; import javax.persistence.Temporal; import javax.persistence.TemporalType; import com.fasterxml.jackson.annotation.JsonIgnore; @Entity @Table(name = "exibittag") public class ExhibitTag implements Serializable { private static final long serialVersionUID = 1L; @Id @Basic(optional = false) @Column(name = "Id") private Integer id; @Basic(optional = false) @Column(name = "Name") private String name; @Basic(optional = false) @Column(name = "Tag") private String tag; @Basic(optional = false) @Column(name = "Description") private String description; @Basic(optional = false) @Column(name = "Created") @Temporal(TemporalType.TIMESTAMP) private Date created; @Basic(optional = false) @Column(name = "Updated") @Temporal(TemporalType.TIMESTAMP) private Date updated; @JsonIgnore @JoinColumn(name = "Location_Id", referencedColumnName = "Id") @ManyToOne(optional = false, fetch = FetchType.EAGER) private Location location; @OneToMany(cascade = CascadeType.ALL, mappedBy = "exhibitTag", fetch = FetchType.EAGER) private Set<ExhibitTagMedia> exhibitTagMediaSet; // // setters and getters removed @Override public int hashCode() { int hash = 0; hash += (id != null ? id.hashCode() : 0); return hash; } @Override public boolean equals(Object object) { if (!(object instanceof ExhibitTag)) { return false; } ExhibitTag other = (ExhibitTag) object; if (this.id == null && other.id == null) { return super.equals(other); } if ((this.id == null && other.id != null) || (this.id != null && !this.id.equals(other.id))) { return false; } return true; } @Override public String toString() { return "Exibittag[ id=" + id + " ]"; } } DAO Spring will create the query for FindByTag() automatically package com.tundra.dao; import java.util.List; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.transaction.annotation.Transactional; import com.tundra.entity.ExhibitTag; @Transactional("tundraTransactionManager") public interface ExhibitTagDAO extends JpaRepository<ExhibitTag, Integer> { List<ExhibitTag> findByTag(String tag); } Service The service layer is how the controller will interface with the server. 
package com.tundra.service; import java.io.File; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.util.List; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import com.tundra.dao.ExhibitTagDAO; import com.tundra.dao.ExhibitTagMediaDAO; import com.tundra.dao.OrganizationDAO; import com.tundra.entity.ExhibitTag; import com.tundra.entity.ExhibitTagMedia; import com.tundra.entity.Organization; import com.tundra.response.ExhibitTagSummaryResponse; @Service public class TundraServiceImpl implements TundraService { @Autowired ExhibitTagDAO exhibitTagDAO; @Autowired private OrganizationDAO organizationDAO; @Autowired ExhibitTagMediaDAO exhibitTagMediaDAO; /* (non-Javadoc) * @see com.tundra.service.TundraService#findAllOrganizations() */ @Override public List<Organization> findAllOrganizations() { return organizationDAO.findAll(); } /* (non-Javadoc) * @see com.tundra.service.TundraService#findOrganization(int) */ @Override public Organization findOrganization(int id) { return organizationDAO.findOne(id); } @Override public List<Organization> findByName(String name) { return organizationDAO.findByName(name); } @Override public List<Organization> findByNameAndCity(String name, String city) { return organizationDAO.findByNameAndCity(name, city); } @Override public ExhibitTag findByTag(String tag) { ExhibitTag et = null; List<ExhibitTag> list = exhibitTagDAO.findByTag(tag); if( list != null && list.size() ==1){ et = list.get(0); } return et; } @Override public List<ExhibitTag> findAllTags() { return exhibitTagDAO.findAll(); } @Override public ExhibitTagMedia findMediaByTag(String tag) { ExhibitTagMedia media = null; List<ExhibitTagMedia> list = exhibitTagMediaDAO.findByExhibitTag(tag); if( list != null && list.size() ==1){ media = list.get(0); } return media; } @Override public ExhibitTagSummaryResponse findSummaryByExhibitTag(String tag) { ExhibitTagSummaryResponse summary = null; List<ExhibitTagSummaryResponse> list = exhibitTagMediaDAO.findSummaryByExhibitTag(tag); if( list != null && list.size() ==1){ summary = list.get(0); } return summary; } } Controller The controller layer represents the REST layer. The mobile app will interface with the server via the controller. 
package com.tundra.controller;

import java.io.Serializable;
import java.util.List;

import javax.servlet.http.HttpServletResponse;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import com.tundra.entity.ExhibitTag;
import com.tundra.entity.ExhibitTagMedia;
import com.tundra.response.ExhibitTagSummaryResponse;
import com.tundra.service.TundraService;

@Controller
@RequestMapping("/tag")
public class ExhibitController implements Serializable {

    private static final String ERROR_PREFIX = "Whoops : ";
    private static final long serialVersionUID = 1L;

    @Autowired
    private TundraService tundraService;

    @RequestMapping(value = "/{tag}", method = RequestMethod.GET)
    public @ResponseBody ResponseEntity<?> getExhibitTagByTagId(HttpServletResponse httpResponse,
            @PathVariable(value = "tag") String tag) {
        try {
            return new ResponseEntity<ExhibitTagSummaryResponse>(tundraService.findSummaryByExhibitTag(tag), HttpStatus.OK);
        } catch (Throwable t) {
            return new ResponseEntity<String>(ERROR_PREFIX + t.getMessage(), HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }

    @RequestMapping(value = "/media/{tag}", method = RequestMethod.GET)
    public @ResponseBody ResponseEntity<?> getExhibitMediaByTagId(HttpServletResponse httpResponse,
            @PathVariable(value = "tag") String tag) {
        try {
            return new ResponseEntity<ExhibitTagMedia>(tundraService.findMediaByTag(tag), HttpStatus.OK);
        } catch (Throwable t) {
            return new ResponseEntity<String>(ERROR_PREFIX + t.getMessage(), HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }

    @RequestMapping(value = "/list", method = RequestMethod.GET)
    public @ResponseBody ResponseEntity<?> getExhibits(HttpServletResponse httpResponse) {
        try {
            return new ResponseEntity<List<ExhibitTag>>(tundraService.findAllTags(), HttpStatus.OK);
        } catch (Throwable t) {
            return new ResponseEntity<String>(ERROR_PREFIX + t.getMessage(), HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }
}

With the server code in place, it's time to look at the mobile app. As stated earlier, we are using the Ionic framework, which is based on JavaScript/Angular. Within the standard Ionic project structure, the areas we change are app.js, controller.js and service.js. Index.html is modified only slightly to include our files:

<!-- cordova script (this will be a 404 during development) -->
<!-- your app's js -->

The templates folder holds the HTML files for the various screens. Since we started with a tabbed Ionic project, we have two core HTML templates, tab.html and tab-dash.html. The tab format allows tabbed pages as a navigation. We are not using this format, and these files will be renamed later on.

tab.html

<ion-tab title="My Docent" icon-off="ion-ios-pulse" icon-on="ion-ios-pulse-strong" href="#/tab/dash">
  <ion-nav-view name="tab-dash"></ion-nav-view>
</ion-tab>

The main screen is in tab-dash.html:

<ion-content>
  <ion-header-bar>
    <h1 class="title">Available Exhibits</h1>
  </ion-header-bar>
</ion-content>

The screen is very basic. The other screens represent the media types: text, video, and audio HTML views.

The app.js file is loaded first and sets up the basic structure. The application uses the Ionic Bluetooth Low Energy (BLE) Central plugin for Apache Cordova.
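Before wiring the plugin into app.js, it helps to see the shape of the two plugin calls this post relies on, ble.isEnabled and ble.scan. The sketch below is only an illustration; the callback names are placeholders and not part of the actual project.

// Hypothetical sketch of the BLE Central plugin calls used in this post.
// 'ble' is the global object the Cordova plugin provides on a real device.
function checkAndScan() {
  ble.isEnabled(
    function () {
      // Bluetooth is on: scan all services ([]) for 30 seconds.
      ble.scan([], 30, onDeviceFound, onScanError);
    },
    function () {
      console.log("Bluetooth is not enabled on this device");
    }
  );
}

function onDeviceFound(device) {
  // Each discovered device reports at least an id and a signal strength (rssi).
  console.log("Found device " + device.id + " rssi " + device.rssi);
}

function onScanError(error) {
  console.log("BLE scan failed: " + error);
}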
If the app is running on a real mobile device (not in a browser on a PC) the object 'ble' will be defined. On a PC this will not be the case. The app.js run function checks for this:

if (typeof(ble) != "undefined") {
  ble.isEnabled(
    function () {
      document.getElementById("bleStatus").style = "color:green;";
    },
    function () {
      document.getElementById("bleStatus").style = "color:red;";
    }
  );
}

The controller layer manages the controls from the HTML (UI) code. For example, in the main HTML file there is a button to start scanning:

<button ng-click="startScanning()" class="button">Search</button>

In the controller there is the startScanning function. The BLEService is located in the service layer.

$scope.startScanning = function () {
  BLEService.connect(function(exibitTags) {
    $scope.exibitTags = exibitTags;
    console.log(exibitTags);
  });
  $scope.myText = "startScanning";
  console.log($scope.myText);
  isScanning = true;
};

In the service layer:

.service("BLEService", function($http){
  if (typeof(ble) != "undefined") {
    ble.scan([], 30, onConnect, onError);

The onConnect function returns the list of Bluetooth tags found. Once the list of devices is returned, the REST service is called to check the tags against the database. The server returns:

- Organization Name
- Location Name
- Exhibit TagName
- Exhibit TagId
- Exhibit Tag
- Exhibit Tag MimeType

The user then selects which exhibit they want to view.

Testing the app locally

Ionic can run the app locally by using the command 'ionic serve' from the project folder.

C:\Users\rickerg0\workspace\Tundra>ionic serve
******************************************************
The port 35729 was taken on the host localhost - using port 35730 instead
Running live reload server: Watching: www/**/*, !www/lib/**/*, !www/**/*.map

The basic screen as viewed in Firefox.

Deploy the app to an Android device from Windows

Make sure the device is connected via the USB port. Also enable the developer options on the device; if you skip this last step the device will not allow Ionic to connect. From the terminal issue the command 'ionic run android'. This will build the APK file and install it on the device.
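To recap the client-server flow end to end, here is a rough sketch of how a scanned tag value could be looked up against the REST endpoint exposed by ExhibitController above. The service name and base URL are placeholders, and the actual wiring in the project may differ.

// Hypothetical AngularJS sketch: look a scanned tag up on the server.
// The /tag/{tag} route matches the ExhibitController shown earlier.
.service("ExhibitLookupService", function($http) {
  var baseUrl = "http://my-server:8080"; // placeholder

  this.findSummaryByTag = function(tag) {
    return $http.get(baseUrl + "/tag/" + tag)
      .then(function(response) {
        // Organization name, location name, tag name/id and mime type
        return response.data;
      });
  };
})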
https://rickerg.com/2016/12/19/multimedia-mobile-application-using-low-power-bluetoothble/
Now that we have pushed the data management related concerns into the right places, we can focus on implementing the remaining portions - NoteStore and NoteActions. These will encapsulate the application data and logic.

No matter what state management solution you end up using, there is usually something equivalent around. In Redux you would use actions that trigger a state change through a reducer. In MobX you could model an action API within an ES6 class; you manipulate the data within the class and that causes MobX to refresh your components as needed.

The idea is similar here. We will set up actions that trigger our store methods, which modify the state. As the state changes, our views update.

To get started, we can implement a NoteStore and then define logic to manipulate it. Once we have done that, we have completed porting our application to the Flux architecture.

NoteStore

Currently we maintain the application state at App. The first step towards pushing it to Alt is to define a store and then consume the data from there. This will break the logic of our application temporarily, as that needs to be pushed to Alt as well. Setting up an initial store is a good step towards this overall goal, though.

To set up a store we need to perform three steps: set it up, connect it with Alt at Provider, and finally connect it with App.

In Alt we model stores using ES6 classes. Here's a minimal implementation modeled after our current state:

app/stores/NoteStore.js

import uuid from 'uuid';

export default class NoteStore {
  constructor() {
    this.notes = [
      {
        id: uuid.v4(),
        task: 'Learn React'
      },
      {
        id: uuid.v4(),
        task: 'Do laundry'
      }
    ];
  }
}

The next step is connecting the store with Provider. This is where that setup module comes in handy. The previously empty export default alt => {} becomes:

app/components/Provider/setup.js

import NoteStore from '../../stores/NoteStore';

export default alt => {
  alt.addStore('NoteStore', NoteStore);
}

To prove that our setup works, we can adjust App to consume its data from the store. This will break the logic since we don't have any way to adjust the store data yet, but that's something we'll fix in the next section. Tweak App as follows to make notes available there: the constructor holding the initial state goes away, render reads notes from this.props instead of this.state, the {this.props.test} placeholder is dropped, and the connect call maps the store's notes into props instead of the earlier test value.

app/components/App.jsx

...

class App extends React.Component {
  render() {
    const {notes} = this.props;

    return (
      <div>
        <button className="add-note" onClick={this.addNote}>+</button>
        <Notes
          notes={notes}
          onNoteClick={this.activateNoteEdit}
          onEdit={this.editNote}
          onDelete={this.deleteNote} />
      </div>
    );
  }
  ...
}

export default connect(({notes}) => ({
  notes
}))(App)

If you refresh the application now, you should see exactly the same data as before. This time, however, we are consuming the data from our store. As a result our logic is broken. That's something we'll need to fix next as we define NoteActions and push our state manipulation to the NoteStore.

Given App doesn't depend on state anymore, it would be possible to port it to a function based component. Often most of your components will be based on functions for just this reason. If you aren't using state or refs, it's safe to default to functions.

Actions are one of the core concepts of the Flux architecture. To be exact, it is a good idea to separate actions from action creators.
Often the terms are used interchangeably, but there's a considerable difference. Action creators are literally functions that dispatch actions. The payload of the action is then delivered to the interested stores. It can be useful to think of them as messages wrapped into an envelope and then delivered.

This split is useful when you have to perform asynchronous actions. You might, for example, want to fetch the initial data of your Kanban board. The operation might then either succeed or fail. This gives you three separate actions to dispatch: one when starting the query, one when it succeeds, and one when it fails.

All of this data is valuable as it allows you to control the user interface. You could display a progress widget while a query is being performed and then update the application state once the data has been fetched from the server. If the query fails, you can let the user know about that.

You can see this theme across different state management solutions. Often you model an action as a function that returns a function (a thunk) that then dispatches individual actions as the asynchronous query progresses. In the naïve synchronous case it's enough to return the action payload directly. The official documentation of Alt covers asynchronous actions in greater detail.

NoteActions

Alt provides a little helper method known as alt.generateActions that can generate simple action creators for us. They will simply dispatch the data passed to them. We'll then connect these actions at the relevant stores. In this case that will be the NoteStore we defined earlier.

When it comes to the application, it is enough if we model basic CRUD (Create, Read, Update, Delete) operations. Given Read is implicit, we can skip that, but having the rest available as actions is useful.

Set up NoteActions using the alt.generateActions shorthand like this:

app/actions/NoteActions.js

import alt from '../libs/alt';

export default alt.generateActions('create', 'update', 'delete');

This doesn't do much by itself. Given we need to connect the actions with App to actually trigger them, this would be a good place to do that. We can start worrying about individual actions after that as we expand our store.

To connect the actions, tweak App like this. The NoteActions import is new, and the connect call now receives NoteActions as its second argument:

app/components/App.jsx

import React from 'react';
import uuid from 'uuid';
import Notes from './Notes';
import connect from '../libs/connect';
import NoteActions from '../actions/NoteActions';

class App extends React.Component {
  ...
}

export default connect(({notes}) => ({
  notes
}), {
  NoteActions
})(App)

This gives us a this.props.NoteActions.create kind of API for triggering the various actions. That's good for expanding the implementation further.

Connecting NoteActions with NoteStore

Alt provides a couple of convenient ways to connect actions to a store:

- this.bindAction(NoteActions.CREATE, this.create) - Bind a specific action to a specific method.
- this.bindActions(NoteActions) - Bind all actions to methods by convention. I.e., a create action would map to a method named create.
- reduce(state, { action, data }) - It is possible to implement a custom method known as reduce. This mimics the way Redux reducers work. The idea is that you'll return a new state based on the given state and payload.

We'll use this.bindActions in this case as it's enough to rely on convention.
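For reference, if we did not want to rely on the naming convention, the explicit bindAction variant from the list above would look roughly like the sketch below. This is illustrative only; the uppercase constants follow the NoteActions.CREATE form shown above, and we stick with bindActions in the actual store.

// Hypothetical alternative: bind each action explicitly inside the store constructor.
constructor() {
  this.bindAction(NoteActions.CREATE, this.create);
  this.bindAction(NoteActions.UPDATE, this.update);
  this.bindAction(NoteActions.DELETE, this.delete);

  this.notes = [];
}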
Tweak the store as follows to connect the actions and to add initial stubs for the logic:

app/stores/NoteStore.js

import uuid from 'uuid';
import NoteActions from '../actions/NoteActions';

export default class NoteStore {
  constructor() {
    this.bindActions(NoteActions);

    this.notes = [
      {
        id: uuid.v4(),
        task: 'Learn React'
      },
      {
        id: uuid.v4(),
        task: 'Do laundry'
      }
    ];
  }
  create(note) {
    console.log('create note', note);
  }
  update(updatedNote) {
    console.log('update note', updatedNote);
  }
  delete(id) {
    console.log('delete note', id);
  }
}

To actually see it working, we'll need to start connecting our actions at App and then port the logic over.

App.addNote to Flux

App.addNote is a good starting point. The first step is to trigger the associated action (NoteActions.create) from the method and see if we get something at the browser console. If we do, then we can manipulate the state. Trigger the action like this, replacing the old setState call:

app/components/App.jsx

...

class App extends React.Component {
  render() {
    ...
  }
  addNote = () => {
    this.props.NoteActions.create({
      id: uuid.v4(),
      task: 'New task'
    });
  }
  ...
}

...

If you refresh and click the "add note" button now, you should see messages like this at the browser console:

create note Object {id: "62098959-6289-4894-9bf1-82e983356375", task: "New task"}

This means we have the data we need at the NoteStore create method. We still need to manipulate the data. After that we have completed the loop and we should see new notes through the user interface. Alt follows a similar API as React here. Consider the implementation below:

app/stores/NoteStore.js

import uuid from 'uuid';
import NoteActions from '../actions/NoteActions';

export default class NoteStore {
  constructor() {
    ...
  }
  create(note) {
    this.setState({
      notes: this.notes.concat(note)
    });
  }
  ...
}

If you try adding a note now, the update should go through. Alt maintains the state now and the addition goes through thanks to the architecture we set up. We still have to repeat the process for the remaining methods to complete the work.

App.deleteNote to Flux

The process is exactly the same for App.deleteNote. We'll need to connect it with our action and then port the logic over. Here's the App portion, with the old setState call replaced by the action:

app/components/App.jsx

...

class App extends React.Component {
  ...
  deleteNote = (id, e) => {
    // Avoid bubbling to edit
    e.stopPropagation();

    this.props.NoteActions.delete(id);
  }
  ...
}

...

If you refresh and try to delete a note now, you should see a message like this at the browser console:

delete note 501c13e0-40cb-47a3-b69a-b1f2f69c4c55

To finalize the porting, we'll need to move the setState logic to the delete method. Remember to drop this.state.notes and replace that with just this.notes:

app/stores/NoteStore.js

import uuid from 'uuid';
import NoteActions from '../actions/NoteActions';

export default class NoteStore {
  ...
  delete(id) {
    this.setState({
      notes: this.notes.filter(note => note.id !== id)
    });
  }
}

After this change you should be able to delete notes just like before.
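Alt plumbing aside, note that each store method is a plain data transformation, which is part of what makes this architecture straightforward to test later. The delete logic, for instance, boils down to a one-line filter. As a standalone illustration (not project code):

// Standalone illustration of the delete transformation.
const removeNote = (notes, id) => notes.filter(note => note.id !== id);

removeNote([{id: 1}, {id: 2}], 1); // -> [{id: 2}]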
There are still a couple of methods to port.

App.activateNoteEdit to Flux

App.activateNoteEdit is essentially an update operation: we need to set the editing flag of the given note to true, which initiates the editing process. As usual, we can port App to the scheme first by replacing the setState call with the action:

app/components/App.jsx

...

class App extends React.Component {
  ...
  activateNoteEdit = (id) => {
    this.props.NoteActions.update({id, editing: true});
  }
  ...
}

...

If you refresh and try to edit now, you should see messages like this at the browser console:

update note Object {id: "2c91ba0f-12f5-4203-8d60-ea673ee00e03", editing: true}

We still need to commit the change to make this work. The logic is the same as in App before, except we have generalized it further using Object.assign:

app/stores/NoteStore.js

import uuid from 'uuid';
import NoteActions from '../actions/NoteActions';

export default class NoteStore {
  ...
  update(updatedNote) {
    this.setState({
      notes: this.notes.map(note => {
        if (note.id === updatedNote.id) {
          return Object.assign({}, note, updatedNote);
        }

        return note;
      })
    });
  }
  ...
}

It should be possible to start editing a note now. If you try to finish editing, you should get an error like Uncaught TypeError: Cannot read property 'notes' of null. This is because we are missing one final portion of the porting effort, App.editNote.

App.editNote to Flux

This final part is easy. We already have the logic we need; it's just a matter of calling our update action from App.editNote correctly:

app/components/App.jsx

...

class App extends React.Component {
  ...
  editNote = (id, task) => {
    this.props.NoteActions.update({id, task, editing: false});
  }
}

...

After refreshing you should be able to modify tasks again, and the application should work just like before. As we alter NoteStore through actions, this leads to a cascade that causes our App state to update through setState. This in turn causes the component to render. That's Flux's unidirectional flow in practice.

We actually have more code now than before, but that's okay. App is a little neater and it's going to be easier to develop further, as we'll soon see. Most importantly, we have managed to implement the Flux architecture for our application.

The current implementation is naïve in that it doesn't validate parameters in any way. It would be a very good idea to validate the object shape to avoid incidents during development. Flow based gradual typing provides one way to do this. In addition, you could write tests to support the system.

Even though integrating a state management system took a lot of effort, it was not all in vain. Consider the following questions:

- Suppose we wanted to persist the notes within localStorage. Where would you implement that? One approach would be to handle that at the Provider setup.
- What if we wanted to show the notes somewhere else in the application? Now that the data lives in a store, we could simply connect to it and display it however we want.

Adopting a state management system can be useful as the scale of your React application grows. The abstraction comes with some cost as you end up with more code, but if you do it right, you'll end up with something that's easy to reason about and develop further. Especially the unidirectional flow embraced by these systems helps when it comes to debugging and testing.

In this chapter, you saw how to port our simple application to the Flux architecture. In the process we learned more about Flux actions and stores. Now we are ready to start adding more functionality to our application. We'll add localStorage based persistence to the application next and perform a little cleanup while we are at it.
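As a teaser of the localStorage question raised above, one possible sketch hooks persistence into the Provider setup using Alt's snapshot facilities. This is only an illustration of the idea, not the implementation the book ends up with; it assumes Alt's takeSnapshot and bootstrap helpers and uses a hypothetical app_state key.

// app/components/Provider/setup.js - hypothetical persistence sketch
import NoteStore from '../../stores/NoteStore';

export default alt => {
  alt.addStore('NoteStore', NoteStore);

  // Restore a previously saved snapshot, if one exists.
  const saved = localStorage.getItem('app_state');
  if (saved) {
    alt.bootstrap(saved);
  }

  // Save a snapshot of every store when the page unloads.
  window.addEventListener('beforeunload', () => {
    localStorage.setItem('app_state', alt.takeSnapshot());
  });
};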
This book is available through Leanpub. By purchasing the book you support the development of further content.
https://survivejs.com/react/implementing-kanban/implementing-store-and-actions/index.html