text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Lead Image © lassedesignen, 123RF Building a virtual NVMe drive Pretender Often, older or slower hardware remains in place while the rest of the environment or world updates to the latest and greatest technologies; take, for example, Non-Volatile Memory Express (NVMe) solid state drives (SSDs) instead of spinning magnetic hard disk drives (HDDs). Even though NVMe drives deliver the performance desired, the capacities (and prices) are not comparable to those of traditional HDDs, so, what to do? Create a hybrid NVMe SSD and export it across an NVMe over Fabrics (NVMeoF) network to one or more hosts that use the drive as if it were a locally attached NVMe device (Figure 1). The implementation will leverage a large pool of HDDs at your disposal – or, at least, what is connected to your server – and place them into a fault-tolerant MD RAID implementation, making a single large-capacity volume. Also, within MD RAID, a small-capacity and locally attached NVMe drive will act as a write-back cache for the RAID volume. The use of RapidDisk modules [1] to set up local RAM as a small read cache, although not necessary, can sometimes help with repeatable random reads. This entire hybrid block device will then be exported across your standard network, where a host will be able to attach to it and access it as if it were a locally attached volume. The advantage of having this write-back cache is that all the write requests will land on the faster storage medium and not need to wait until it persists to the slower RAID volume before returning back to the application, which will dramatically improve write performance. Before continuing, though, you need to understand a couple of concepts: (1) As it relates to the current environment, an initiator or host will be the server connecting to a remote block device – specifically, an NVMe target. (2) The target will be the server exporting the NVMe device across the network and to the host server. A Linux 5.0 or later kernel is required on both the target and initiator servers. The host needs the NVMe TCP module and the target needs the NVMe target TCP module built and installed: CONFIG_NVME_TCP=m CONFIG_NVME_TARGET_TCP=m Now, a pile of disk drives at your disposal can be configured into a fault-tolerant RAID configuration and collectively give you the capacity of a single large drive. Configuring the Target Server To begin, list the drives of your local server machine (Listing 1). In this example, the four drives sdb to sde in lines 12, 13, 15, and 16 will be used to create the NVMe target. Each drive is 7TB, which you can verify with the blockdev utility: $ sudo blockdev --getsize64 /dev/sdb 7000259821568 Listing 1 Server Drives 64 6836191232 sde 14 8 80 39078144 sdf 15 8 48 6836191232 sdd 16 8 32 6836191232 sdc 17 11 0 1048575 sr0 With the parted utility, you can create a single partition on each entire HDD: $ for i in sdb sdc sdd sde; do sudo parted --script /dev/$i mklabel gpt mkpart primary 1MB 100%; done An updated list of drives will display the newly created partitions just below each disk drive (Listing 2). The newly created partitions now have 1s attached to the drive names (lines 13, 15, 18, 20). The drive size has not changed much from the original: $ sudo blockdev --getsize64 /dev/sdb1 7000257724416 Listing 2 New Partitions 17 6836189184 sdb1 14 8 64 6836191232 sde 15 8 65 6836189184 sde1 16 8 80 39078144 sdf 17 8 48 6836191232 sdd 18 8 49 6836189184 sdd1 19 8 32 6836191232 sdc 20 8 33 6836189184 sdc1 21 11 0 1048575 sr0 If you paid close attention, you'll see an NVMe device resides among the list of drives, which will be the device you will use for the write-back cache of your RAID pool. It is not a very large volume (about 256GB): sudo blockdev --getsize64 /dev/nvme0n1 250059350016 Next, create a single partition on the NVMe drive and verify that the partition has been created: $ sudo parted --script /dev/nvme0n1 mklabel gpt mkpart primary 1MB 100% $ cat /proc/partitions | grep nvme 259 0 244198584 nvme0n1 259 2 244197376 nvme0n1p1 The next step is to create a RAID 5 volume to encompass all of the HDDs (see also the "RAID 5" box). This configuration will use one drive's worth of capacity to hold the parity data for both fault tolerance and data redundancy. In the event of a single drive failure, then, you can continue to serve data requests while also having the capability to restore the original data to a replacement drive. RAID 5 A RAID 5 array stripes chunks of data across all the drives in a volume, with parity calculated by an XOR algorithm. Each stripe holds the parity to the data within its stripe; therefore, the parity data does not sit on a single drive within the array but, rather, is distributed across all of the volumes (Figure 2). If you were to do the math, you have four 7TB drives with one drive's worth of capacity hosting the parity, so the RAID array will produce (7x4)-7=21TB of shareable capacity. Again, the RAID configuration uses the NVMe device partitioned earlier as a write-back cache and write journal. Note that this NVMe device does not add to the RAID array's overall capacity. To create the RAID 5 array, use the mdadm utility [2]: $ sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 --write-journal=/dev/nvme0n1p1 --bitmap=none /dev/sdb1 /dev/sdc1/dev/sdd1 /dev/sde1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. Next, verify that the RAID configuration has been created (Listing 3). You will immediately notice that the array initializes the disks and zeros out the data on each to bring it all to a good state. Although you can definitely use it in this state, overall performance will be affected. Listing 3 Verify RAID $ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[5] sde1[4] sdc1[2] sdb1[1] nvme0n1p1[0](J) 20508171264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U] [>....................] recovery = 0.0% (5811228/6836057088) finish=626.8min speed=181600K/sec Also, you probably do not want to disable the initial resync of the array with the --assume-clean option, even if the drives are right out of the box. Better you should know your array is in a proper state before writing important data to it. This operation will definitely take a while, and the bigger the array, the longer the initialization process. You can always take that time to read through the rest of this article or just go get a cup of coffee or two or five. No joke, this process takes quite a while to complete. When the initialization process has been completed, a reread of the same /proc/mdstat file will yield the following (or similar), as shown in Listing 4. Listing 4 Reread RAID $ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[4] sdd1[3] sdc1[2] sdb1[1] nvme0n1p1[0](J) 20508171264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none> The newly created block device will be appended to a list of all usable block devices: $ cat /proc/partitions | grep md 9 0 20508171264 md0 If you recall, the usable capacity was originally calculated at 21TB. To verify this, enter: $ sudo blockdev --getsize64 /dev/md0 21000367374336 Once the array initialization has completed, change the write journal mode from write-through to write-back and verify the change: $ echo "write-back" | sudo tee /sys/block/md0/md/journal_mode > /dev/null $ cat /sys/block/md0/md/journal_mode write-through [write-back] Now it is time to add the read cache. As a prerequisite, you need to ensure that the Jansson development library [3] is installed on your local machine. Clone the rapiddisk Git repository, build and install the package, and insert the kernel modules: $ git clone $ cd rapiddisk/ $ make $ sudo make install $ sudo modprobe rapiddisk $ sudo modprobe rapiddisk-cache Determine the amount of memory you are able to allocate for your read cache, which should be based on the total memory installed on the system. For instance, if you have 64GB, you might be willing to use 8 or 16GB. In my case, I do not have much memory in my system, which is why I only create a single 2GB RAM drive for the read cache: $ sudo rapiddisk --attach 2048 rapiddisk 6.0 Copyright 2011 - 2019 Petros Koutoupis Attached device rd0 of size 2048 Mbytes Next, create a mapping of the RAM drive to the RAID volume: $ sudo rapiddisk --cache-map rd0 /dev/md0 wa rapiddisk 6.0 Copyright 2011 - 2019 Petros Koutoupis Command to map rc-wa_md0 with rd0 and /dev/md0 has been sent. Verify with "--list" The wa argument appended to the end of the command stands for write-around. In this configuration the read operations, not the write operations, are cached. Remember, the writes are being cached under the reads and onto the NVMe drive attached to the RAID volume. Because the writes are preserved on a persistent flash volume, you have some assurance that if the server were to experience power or operating system failure, the pending write transactions would not be lost as a result of the outage. Once services are restored, it will continue to operate as if nothing had happened. Now, verify the mapping (Listing 5). The volume will be accessible at /dev/mapper/rc-wa_md0: $ ls -l /dev/mapper/rc-wa_md0 brw------- 1 root root 253, 0 Jan 16 23:15 /dev/mapper/rc-wa_md0 Listing 5 Verify Mapping $ sudo rapiddisk --list rapiddisk 6.0 Copyright 2011 - 2019 Petros Koutoupis List of RapidDisk device(s): RapidDisk Device 1: rd0 Size (KB): 2097152 List of RapidDisk-Cache mapping(s): RapidDisk-Cache Target 1: rc-wa_md0 Cache: rd0 Target: md0 (WRITE AROUND) Your virtual NVMe is nearly completed; you just need to add the component that turns the hybrid SSD volume into an NVMe-identified volume. To insert the NVMe target and NVMe target TCP modules, enter: $ sudo modprobe nvmet $ sudo modprobe nvmet-tcp The NVMe target tree will need to be made available over the kernel user configuration filesystem to provide access to the entire NVMe target configuration environment. To begin, mount the kernel user configuration filesystem and verify that it has been mounted: $ sudo /bin/mount -t configfs none /sys/kernel/config/ $ mount | grep configfs configfs on /sys/kernel/config type configfs (rw,relatime) Next, create an NVMe target test directory under the target subsystem and change into that directory (this will host the NVMe target volume plus its attributes): $ sudo mkdir /sys/kernel/config/nvmet/subsystems/nvmet-test $ cd /sys/kernel/config/nvmet/subsystems/nvmet-test Because this is a test environment, you do not necessarily care which initiators (i.e., hosts) connect to the exported target: $ echo 1 | sudo tee -a attr_allow_any_host > /dev/null Now, create a namespace and change into the directory: $ sudo mkdir namespaces/1 $ cd namespaces/1/ To set the hybrid SSD volume as the NVMe target device and enable the namespace, enter: $ echo -n /dev/mapper/rc-wa_md0 | sudo tee -a device_path > /dev/null $ echo 1 | sudo tee -a enable > /dev/null Now that you have defined your target block device, you need to switch focus and define your target (i.e., networking) port: Create a port directory in the NVMe target tree and change into the directory: $ sudo mkdir /sys/kernel/config/nvmet/ports/1 $ cd /sys/kernel/config/nvmet/ports/1 Now, set the local IP address from which the export will be visible, the transport type, port number, and protocol version: $ echo 10.0.0.185 | sudo tee -a addr_traddr > /dev/null $ echo tcp | sudo tee -a addr_trtype > /dev/null $ echo 4420 | sudo tee -a addr_trsvcid > /dev/null $ echo ipv4 | sudo tee -a addr_adrfam > /dev/null For any of this to work, both the target and initiator will need to have port 4420 open in its I/O firewall rules. To tell the NVMe target tree that the port just created will export the block device defined in the subsystem section above, link the target subsystem to the target port and verify the export: $ sudo ln -s /sys/kernel/config/nvmet/subsystems/nvmet-test/ /sys/kernel/config/nvmet/ports/1/subsystems/nvmet-test $ dmesg | grep "nvmet_tcp" [ 9360.176859] nvmet_tcp: enabling port 1 (10.0.0.185:4420) Alternatively, you can do most of that above for the NVMe target configuration with the nvmetcli utility [4], which provides a more interactive shell that allows you to traverse the same tree, but within a single, perhaps more easy to follow, environment. Configuring the Initiator Server For the secondary server (i.e., the server that will connect to the exported target and use the virtual NVMe drive as if it were attached locally), load the initiator or host-side kernel modules: $ modprobe nvme $ modprobe nvme-tcp Again, remember that for this to work, both the target and initiator need port 4420 open in its I/O firewall rules. To discover the NVMe target exported by the target server, use the nvme command-line utility (Listing 6); then, connect to the target server and import the NVMe device(s) it is exporting (in this case, you should see just the one): $ sudo nvme connect -t tcp -n nvmet-test -a 10.0.0.185 -s 4420 Listing 6 Discover NVMe Target $ sudo nvme discover -t tcp -a 10.0.0.185 -s 4420 Discovery Log Number of Records 1, Generation counter 2 =====Discovery Log Entry 0====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified, sq flow control disable supported portid: 1 trsvcid: 4420 subnqn: nvmet-test traddr: 10.0.0.185 sectype: none Next, verify that the NVMe subsystem sees the NVMe target (Listing 7) and that the volume is listed in your local device listing (also, notice the volume size of 21TB): $ cat /proc/partitions | grep nvme 259 0 20508171264 nvme0n1 Listing 7 Verify NVMe Target Is Visible $ sudo nvme list Node SN Model Namespace Usage Format FW Rev ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- -------- /dev/nvme0n1 152e778212a62015 Linux 1 21.00 TB / 21.00 TB 4 KiB + 0 B 5.4.12-0 You are now able to read and write from and to /dev/nvme0n1 as if it were a locally attached NVMe device. Finally, enter $ sudo nvme disconnect -d /dev/nvme0n1 to disconnect the NVMe target volume. Conclusion The virtual NVMe drive you built will perform very well on write operations with a local NVMe SSD and "okay-ish" on non-repeated random read operations with local DRAM memory as a front end to a much larger (and slower) storage pool of HDDs. This configuration was in turn exported as a target across an NVMeoF network over TCP and to an initiator, where it is seen as a local NVMe-connected device. Infos - RapidDisk project: - mdadm: - Jansson C library: - nvmetcli Git repository: Buy this article as PDF (incl. VAT)
https://www.admin-magazine.com/Articles/Building-a-virtual-NVMe-drive
CC-MAIN-2021-25
en
refinedweb
Nothing brings back warm childhood memories like grandma’s chocolate cake recipe. My slight alteration of her original recipe is to add a touch of Python and share it with the world through a static site. By reading this article, you’ll learn how to create a Contentful-powered Flask app. You’ll also learn how to turn that Flask app into a static site using Frozen-Flask—and how to deploy it all to Surge.sh. Why static sites? Static sites are fast, lightweight and can be deployed almost everywhere for little or no money. Using Frozen Flask, a static site generator, to turn your Flask app, with all its inner logic and dependencies, into a static site reduces the need for complex hosting environments. Why Contentful? Contentful is content infrastructure for any digital project. For our chocolate cake app this means that we’ll retrieve the recipe together with the list of ingredients from Contentful’s Content Delivery API (CDA). Creating the chocolate cake content type With Contentful, a content type is similar to a database table in that it defines what kind of data it can hold. We’ll create the content type so that it can hold any recipe. Because truth be told, grandma also made some fantastic pancakes, and I would like to share that recipe too one day. So let’s start by naming our new content type Grandma’s kitchen like so: We can then add different field types to this content type: This recipe we’ll need the following field types: Medium - that will contain the image of our beautiful creation Short text - that will contain the recipe name Long text - that will contain our list of ingredients Long text - that will contain instructions Boolean - to make sure that delicious == true With the added field types our content type will look like so: Adding the ingredients Now that we have our content type set up, we’ll go ahead and add our chocolate cake recipe. Setting up the Flask app With the recipe in place, it’s time put together our Flask app. When a user visits /chocolatecake/, this minimalist app will render the recipe through a the recipe.html template as seen below: from flask import Flask, render_template app = Flask(__name__) @app.route("/chocolatecake/") def cake(): return render_template("recipe.html") But we of course need a way to get our chocolate cake recipe data from the Contentful CDN and pass that data to the render_template function as a variable….. Getting the data from Contentful into Flask To pull data from Contentful into Flask, you will need an access token to authorize your API calls. Note that the access token is personal so you need to generate your own to get things to work. While we can interact with Contentful’s endpoints using bare HTTP calls, the Python SDK makes dealing with response objects easier. Run pip install contentful to install it. We also need to install support for rendering the Markdown-formatted parts of the response: pip install Flask-Markdown Now we’ll create a method in our Flask app that does the following: Connects to Contentful using our unique access token Grabs the content of our chocolate cake recipe Handles the JSON response from Contentful Sends the data to be rendered to our template def getRecipe(): SPACE_ID = '1476xanqlrah' ENTRY_ID = '4kgJZqf18AYgYiyYkgaMy0' ACEESS_TOKEN = '457ba5e5af020499b5b8e7c22ae5da0ffeaf314028e28a6b0bdba4f28e35222c' client = Client(SPACE_ID, ACEESS_TOKEN) entry = client.entry(ENTRY_ID) imageURL = entry.image.url() recipeName = entry.recipe_name listOfIngredients = entry.list_of_ingredients instructions = entry.instructions isDelicious = entry.is_delicious return { 'imageURL': 'https:{0}'.format(imageURL), 'recipeName': recipeName, 'listOfIngredients': listOfIngredients, 'instructions': instructions, 'isDelicious': isDelicious } And to send our dictionary data structure of recipe data for rendering by the template we modify the /chocolatecake/ route to look like so: @app.route("/chocolatecake/") def cake(): recipe = getRecipe() return render_template("recipe.html", recipe=recipe) The template that we’ll render, recipe.html, has the following content: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Grandma's Chocolate Cake</title> <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}"> </head> <body> <div> <h2>{{ recipe.get("recipeName")}}</h2> </div> <div> <img src="{{ recipe.get(" imageURL ")}}"> </div> <div> Is delicious: {{ recipe.get("isDelicious")}} </div> <div> {{ recipe.get("listOfIngredients") | markdown }} </div> <div> {{ recipe.get("instructions") | markdown }} </div> </body> </html> We now have a working Flask app running locally. This app grabs its content from Contentful’s CDN using an API call, and then renders a page like this: We could stop here and just deploy our creation to Heroku or any other platform that can run Flask apps. But we want more—and we want it to be static. Adding Frozen-Flask To turn our Contentful-powered Flask app into a static site, we’ll be using Frozen-Flask. Install it using pip: pip install Frozen-Flask Creating the static site We’ll create file called freeze.py with the following content: from flask_frozen import Freezer from app import app freezer = Freezer(app) if __name__ == '__main__': freezer.freeze() After running freeze.py, we have a build directory containing: ├── chocolatecake └── static └── style.css This is exactly what we want—HTML and styling. So let’s ship it! Deploying the static site to surge.sh Surge.sh is a single-command web publishing platform. It allows you to publish HTML, CSS, and JavaScript for free—without leaving the command line. In other words: it’s a great platform for our static site. Run npm install --global surge to install the Surge and to create your free account. All we need to deploy our static site is to run the surge command from the build directory: Surge - surge.sh email: [email protected] token: ***************** project path: /Users/robertsvensson/Code/cake/build/ size: 2 files, 2.9 KB domain: loud-jeans.surge.sh upload: [====================] 100%, eta: 0.0s propagate on CDN: [====================] 100% plan: Free users: [email protected] IP Address: 48.68.110.122 Success! Project is published and running at loud-jeans.surge.sh When you deploy a site using Surge.sh it generates the URL based on two random words — this time we got the words loud and jeans. So now all that’s left to do is to browse to and make sure that static site version of grandma’s chocolate cake recipe deployed successfully. It sure did 🚢 Summary All it takes to create a static site from your Contentful-powered Flask app is Frozen-Flask. You get the best of both worlds by having both writers and editors working on the app’s content in Contentful, and then generating a static site that can be deployed almost anywhere. This workflow will save you both time and money: Your creative writers get access to a feature-rich editing platform within Contentful, and you get the chance to deploy a fast and lightweight static site for little or no money.
https://www.contentful.com/blog/2018/02/02/chocolate-cake-and-static-sites/
CC-MAIN-2021-25
en
refinedweb
Teleport is an open source, identity-aware, access proxy with an integrated certificate authority. People have been using teleport for ssh-access, Kubernetes clusters and with Teleport 6.0 you get Database access as well (Postgress and MySQL). In this tutorial, I will show you how you can do it all from scratch for a self-hosted MySQL Database(I will show the database install as well). Prerequisites: 2 Ubuntu 20.04 instances with sudo access. I have 2 machines called teleport and database 3 weeks back I wrote a book “Learn CKS Scenarios” on Gumroad.. Docker Meetup 16th Jan: Kickstart Your 2020 Container Journey with Docker & Kubernetes + Kubernetes101 Workshop Year Begining I along with other community members organized the biggest Docker… Originally posted on my website In this post, we will discuss a tool name “Kubevious” Visualizing Kubernetes is something that everyone wants, the more good the visualization, the more it gets adopted by the community. Tools that help to view/debug the issues/configurations right in front of the screen make the life of dev/ops people easy. There are Different Tools as of today that do the visualization, but I found Kubevious to be different. Along with the visualizations, it also shows the misconfigured labels for the pods-services, instantly shows the RBAC roles/permissions for the service accounts. Sounds Exciting? … Today I will be sharing some insights into working with Shipa. So Shipa is a platform mainly built for the developers so that they can focus more on writing code and less on the infrastructure. The main idea IMO is to make developers' life easy and making their apps run on the best in class kubernetes clusters. One can associate the Kubernetes clusters with ships using the following guide: learn.shipa.io Once the cluster is added it shows up in the dashboard for the shipa instance and you can have an overview of all the cluster/apps associated. Dashboard : Dashboard view if… Came across a GitHub repository implemented by the awesome folks at Sighup.IO for managing user permissions for Kubernetes cluster easily via web UI. GitHub Repo : With Permission Manager, you can create users, assign namespaces/permissions, and distribute Kubeconfig YAML files via a nice&easy web UI. The project works on the concept of templates that you can create and then use that template for different users.Template is directly proportional to clusterrole. In rder to create a new template you need to defile a clusterrole with prefix template-namespaces-resources__. The default template are present in the k8s/k8s-seeds directory. Example template: apiVersion: rbac.authorization.k8s.io/v1 kind… A Quick overview and install in less than 5 minutes Definition From the Docs : one of the recent projects by VMware that aims to simplify the kubernetes view for developers. Now the developers would be able to see what all is happening… K3s is an open-source, lightweight Kubernetes distribution by Rancher that was introduced this year and has gained huge popularity. If you’re not familiar with it, check out this post on k3s vs k8s by Andy Jeffries, CTO at Civo. People not only like the concept behind it, but also the awesome work that the team has done to strip down the heavy Kubernetes distribution to a minimal level. Though k3s started as a POC project for local Kubernetes development, its development has led people to use it even at a production level. Official GitRepo: Seeing the popularity of k3s…
https://saiyampathak.medium.com/?source=post_page-----a38469535955--------------------------------
CC-MAIN-2021-25
en
refinedweb
Introduction In Machine learning projects, we have features that could be in numerical and categorical formats. We know that Machine learning algorithms only understand numbers, they don’t understand strings. So, before feeding our data to Machine learning algorithms, we have to convert our categorical variables into numerical variables. However, sometimes we have to encode also the numerical features. Why is there a need of encoding numerical features instead they are good for our Algorithms? Let’s understand the answer to this question with an example, Say we want to analyze the data of Google Play Store, where we have to analyze the Number of downloads of various applications. Since we know that all apps are not equally useful for users, only some popular applications are useful. So, there is a difference between the downloads for each one of those. Generally, this type of data is skewed in nature and we are not able to find any good insights from this type of data directly. Here is the need to encode our numerical columns to gain better insights into the data. Therefore, I convert numerical columns to categorical columns using different techniques. This article will discuss “Binning”, or “Discretization” to encode the numerical variables. Techniques to Encode Numerical Columns Discretization: It is the process of transforming continuous variables into categorical variables by creating a set of intervals, which are contiguous, that span over the range of the variable’s values. It is also known as “Binning”, where the bin is an analogous name for an interval. Benefits of Discretization: 1. Handles the Outliers in a better way. 2. Improves the value spread. 3. Minimize the effects of small observation errors. Types of Binning: Unsupervised Binning: (a) Equal width binning: It is also known as “Uniform Binning” since the width of all the intervals is the same. The algorithm divides the data into N intervals of equal size. The width of intervals is: w=(max-min)/N - Therefore, the interval boundaries are: [min+w], [min+2w], [min+3w], – – – – – – – – – – – -, [min+(N-1)w] where, min and max are the minimum and maximum value from the data respectively. - This technique does not changes the spread of the data but does handle the outliers. (b) Equal frequency binning: It is also known as “Quantile Binning”. The algorithm divides the data into N groups where each group contains approximately the same number of values. - Consider, we want 10 bins, that is each interval contains 10% of the total observations. - Here the width of the interval need not necessarily be equal. Handles outliers better than the previous method and makes the value spread approximately uniform(each interval contains almost the same number of values). (c) K-means binning: This technique uses the clustering algorithm namely ” K-Means Algorithm”. - This technique is mostly used when our data is in the form of clusters. Here’s the algorithm which is as followed: Let X = {x1,x2,x3,……..,xn} be the set of observation and V = {v1,v2,…….,vc} be the set of centroids. - Randomly select ‘c’ centroids(no. of centroids = no. of bins). - Calculate the distance between each of the observations and centroids. - Assign the observation to that centroid whose distance from the centroid is the minimum of all the centroids. - Recalculate the new centroid using the mean(average) of all the points in the new cluster being formed. - Recalculate the distance between each observation and newly obtained centroids. - If no observation was reassigned in further steps then stop, otherwise, repeat from step (3) again. Custom binning: It is also known as “Domain” based binning. In this technique, you have domain knowledge about your business problem statement and by using your knowledge you have to do your custom binning. For Example, We have an attribute of age with the following values Age: 10, 11, 13, 14, 17, 19, 30, 31, 32, 38, 40, 42, 70, 72, 73, 75 Now after Binning, our data becomes: Implementation This technique cannot be directly implemented using the Scikit-learn library like previous techniques, you have to use the Pandas library of Python and make your own logic to implement this technique. Now, comes to the next technique which can also be used to encode numerical columns(features) Binarization: It is a special case of Binning Technique. In this technique, we convert the continuous value into binary format i.e, in either 0 or 1. For Example, - Annual Income of the Population If income is less than 5 lakhs, then that people include in the non-taxable region(Binary value -0), and if more than 5 lakhs, then includes in the taxable region(Binary value -1). - Very useful Technique in Image Processing, for converting a colored image into a black and white image. As we know that image is the collection of pixels and its values are in the range of 0 to 255(colored images), then based on the selected threshold values you can binarize the variables and make the image into black and white, which means if less than that threshold makes that as 0 implies black portion, and if more than threshold makes as 1 means white portion. Implementation: Uses binarizer class of Scikit-Learn library of Python, which has two parameters: threshold and copy. If we make the copy parameter True, then it creates a new column otherwise it changes in the initial column. If you want to learn more about Binarizer class, then please refer to the Link Implementation in Python – To implement these techniques, we use the Scikit-learn library of Python. – Class use from Scikit-learn : KBinsDiscretizer() – You can find more about this class from this Link Step-1: Import Necessary Dependencies import pandas as pd import numpy as np Step-2: Import Necessary Packages import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import accuracy_score from sklearn.preprocessing import KBinsDiscretizer from sklearn.compose import ColumnTransformer Step-3: Read and Load the Dataset df=pd.read_csv('titanic.csv',usecols=['Age','Fare','Survived']) print(df.head()) Step-4: Drop the rows where any missing value is present df.dropna(inplace=True) print(df.shape) Step-5: Separate Dependent and Independent Variables X=df.iloc[:,1:] y=df.iloc[:,0] Step-6: Split our Dataset into Train and Test subsets X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.25,random_state=109) print(X_train.head(2)) Step-7: Fit our Decision Tree Classifier clf=DecisionTreeClassifier(criterion='gini') clf.fit(X_train,y_train) Step-8: Find the Accuracy of our model on the test Dataset y_pred=clf.predict(X_test) print(accuracy_score(y_test,y_pred)) Step-9: Form the objects of KBinsDiscretizer Class Kbin_age=KBinsDiscretizer(n_bins=15,encode='ordinal',strategy='quantile') Kbin_fare=KBinsDiscretizer(n_bins=15,encode='ordinal',strategy='quantile') Step-10: Transform the columns using Column Transformer trf=ColumnTransformer([('first',Kbin_age,[0]),('second',Kbin_fare,[1])]) X_train_trf=trf.fit_transform(X_train) X_test_trf=trf.transform(X_test) Step-11: Print the number of bins and the intervals point for the “Age” Column print(trf.named_transformers_['first'].n_bins_) print(trf.named_transformers_['first'].bin_edges_) Step-12: Print the number of bins and the intervals point for the “Fare” Column print(trf.named_transformers_['second'].n_bins_) print(trf.named_transformers_['second'].bin_edges_) Step-13: Fit-again our Decision Tree Classifier and check the accuracy clf.fit(X_train_trf,y_train) y_pred2=clf.predict(X_test_trf) print(accuracy_score(y_test,y_pred2)) CONCLUSION: Here we observed that after applying the encoding techniques, there is an increment in the accuracy. Here, we only apply the Quantile Strategy, but you can try to change the “Strategy” parameter and then implement the different techniques accordingly. on How to encode numerical features are not owned by Analytics Vidhya and is used at the Author’s discretion.
https://www.analyticsvidhya.com/blog/2021/05/complete-guide-on-encode-numerical-features-in-machine-learning/
CC-MAIN-2021-25
en
refinedweb
One of the coolest things I find about Data Science is Data Visualization. I can’t stop myself loving it. I think data visualization is like transform a story into a live-action movie. Data is like a story that you imagine in your mind but you can’t share it with others in an explainable way but you can share it with others with the power of visualization. It’s like a movie script and when you transform the script into a movie then it becomes Visualization. Data visualization helps transform your numbers into an engaging story with details and patterns. Data visualization enables us to recognize emerging trends and respond rapidly on the grounds of what we see. Such patterns make more sense when graphically represented; because visuals and diagrams make it easier for us to identify strongly correlated parameters. Humans can process visual images 60,000 times faster than text. Therefore, seeing a graph, chart, or other visual representation of data is more comfortable for the brain to process. Here we are gonna analyze and visualize the latest situation of “Covid19 Vaccination“ around the world. You’ll find the dataset here. We will use Plotly & Seaborn for data visualization & for data analysis, we will use pandas. Importing Libraries: import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt import seaborn as sns import plotly.express as px from plotly.offline import download_plotlyjs,init_notebook_mode,plot,iplot import plotly.graph_objects as go import plotly.figure_factory as ff from plotly.colors import n_colors from wordcloud import WordCloud,ImageColorGenerator init_notebook_mode(connected=True) from plotly.subplots import make_subplots from pywaffle import Waffle import warnings warnings.filterwarnings("ignore") Reading the Data: df = pd.read_csv("/kaggle/input/covid-world-vaccination-progress/country_vaccinations.csv") new_df = df.groupby(["country",'iso_code','vaccines'])['total_vaccinations', 'people_vaccinated','people_fully_vaccinated', 'daily_vaccinations','total_vaccinations_per_hundred', 'people_vaccinated_per_hundred',"people_fully_vaccinated_per_hundred" ,'daily_vaccinations_per_million'].max().reset_index() What is the proportion of Top 10 Vaccine in the race of fighting Covid19? top10 = new_df['vaccines'].value_counts().nlargest(10) top10 data = dict(new_df['vaccines'].value_counts(normalize = True).nlargest(10)*100) #dict(new_df['vaccines'].value_counts(normalize = True) * 100) vaccine = ['Oxford/AstraZeneca', 'Moderna, Oxford/AstraZeneca, Pfizer/BioNTech', 'Oxford/AstraZeneca, Pfizer/BioNTech', 'Johnson&Johnson, Moderna, Oxford/AstraZeneca, Pfizer/BioNTech', 'Pfizer/BioNTech', 'Sputnik V', 'Oxford/AstraZeneca, Sinopharm/Beijing', 'Sinopharm/Beijing', 'Moderna, Pfizer/BioNTech', 'Oxford/AstraZeneca, Pfizer/BioNTech, Sinovac'] fig = plt.figure( rows=7, columns=12, FigureClass = Waffle, values = data, title={'label': 'Proportion of Vaccines', 'loc': 'center', 'fontsize':15}, colors=("#FF7F0E", "#00B5F7", "#AB63FA","#00CC96","#E9967A","#F08080","#40E0D0","#DFFF00","#DE3163","#6AFF00"), labels=[f"{k} ({v:.2f}%)" for k, v in data.items()], legend={'loc': 'lower left', 'bbox_to_anchor': (0, -0.4), 'ncol': 2, 'framealpha': 0}, figsize=(12, 9) ) fig.show() Observation: - In a range of percentage of vaccines 28.44% used Oxford/AstraZeneca - Oxford/AstraZeneca is the most used Vaccine - Later Pfizer/BioNTech was the most used Vaccine and now it’s in 5th place also Oxford/AstraZeneca was not in the top 3 & now it’s in 1st place. Looks like Oxford/AstraZeneca works best among the vaccines What is the number of total vaccinations & daily vaccinations according to countries? data = new_df[['country','total_vaccinations']].nlargest(25,'total_vaccinations') fig = px.bar(data, x = 'country',y = 'total_vaccinations',title="Number of total vaccinations according to countries",) fig.show() data = new_df[['country','daily_vaccinations']].nlargest(25,'daily_vaccinations') fig = px.bar(data, x = 'country',y = 'daily_vaccinations',title="Number of daily vaccinations according to countries",) fig.show() Which vaccine is used by which Country? vacc = new_df["vaccines"].unique() for i in vacc: c = list(new_df[new_df["vaccines"] == i]['country']) print(f"Vaccine: {i}nUsed countries: {c}") print(‘-‘*70) fig = px.choropleth(new_df,locations = 'country',locationmode = 'country names',color = 'vaccines', title = 'Vaccines used by specefic Country',hover_data= ['total_vaccinations']) fig.show() Which Vaccine is Used the most? vaccine = new_df["vaccines"].value_counts().reset_index() vaccine.columns = ['Vaccines','Number of Country'] vaccine.nlargest(5,"Number of Country") Oxford/AstraZeneca is being used by 60 Countries. Total Vaccinations per country grouped by Vaccines: fig = px.treemap(new_df,names = 'country',values = 'total_vaccinations', path = ['vaccines','country'], title="Total Vaccinations per country grouped by Vaccines", color_discrete_sequence =px.colors.qualitative.Set1) fig.show() .png) fig = go.Choropleth(locations = new_df["country"],locationmode = 'country names', z = new_df['total_vaccinations'], text= new_df['country'],colorbar = dict(title= "Total Vaccinations")) data = [fig] layout = go.Layout(title = 'Total Vaccinations per Country') fig = dict(data = data,layout = layout) iplot(fig) .png) Daily Vaccinations per Countries: fig = go.Choropleth(locations = new_df["country"],locationmode = 'country names', z = new_df['daily_vaccinations'], text= new_df['country'],colorbar = dict(title= "Daily Vaccinations")) data = [fig] layout = go.Layout(title = 'Daily Vaccinations per Countries') fig = dict(data = data,layout = layout) iplot(fig) .png) Relation between Total Vaccinations and Total Vaccinations per Hundred: fig = px.scatter(new_df,x = 'total_vaccinations',y='total_vaccinations_per_hundred', size='total_vaccinations', hover_name = 'country',size_max = 50, title="Total vs Total vaccinations per hundred grouped by Vaccines", color_discrete_sequence = px.colors.qualitative.Bold) fig.show() .png) If you hover your cursor to the scatters you will also see the country names,number of total vaccinations and number of total vaccinations per hundred. By this we observe that: - Although USA & China produce the highest number of vaccinations to their citizens, according to their population this is not much. What is the trend of total vaccinations according to countries? def plot_trend(dataframe,feature,title,country): plt.style.use('ggplot') plt.figure(figsize=(20,25)) for i,country in enumerate(country): plt.subplot(8,4,<a onclick="parent.postMessage({'referent':'.kaggle.usercode.14440604.62732853.plot_trend..i'}, '*')">i+1) data = dataframe[dataframe['country'] == country] sns.lineplot(x=data['date'] ,y=data[feature],label = feature) plt.xlabel('') plt.tick_params(axis='x',which='both',top=False,bottom=False,labelbottom=False) plt.title(country) plt.suptitle(title,y=1.05) plt.tight_layout() plt.show() country = ['Argentina', 'Austria', 'Belgium', 'Brazil','Canada','China','Czechia', 'Denmark', 'England','Finland', 'France','Germany','India','Ireland', 'Israel', 'Italy', 'Kuwait','Mexico', 'Netherlands','Norway', 'Poland', 'Russia','Saudi Arabia', 'Scotland','Singapore','Spain', 'Sweden', 'Switzerland', 'Turkey','United Arab Emirates', 'United Kingdom', 'United States'] plot_trend(df,'total_vaccinations','Trend of total vaccination',country) End Notes: You can collect the dataset from here and play with it. You may find a difference in the results because every day a lot of people getting infected by Covid19 and the data of covid19 is being changed every day. At last, I wanna say that we all know that, we are in a very bad situation because of Covid19. All we have is each other so let’s help each other to the best we can & pray for our planet to get well soon. The media shown in this article are not owned by Analytics Vidhya and is used at the Author’s discretion.You can also read this article on our Mobile APP
https://www.analyticsvidhya.com/blog/2021/05/covid-19-vaccination-data-analysis-visualization/
CC-MAIN-2021-25
en
refinedweb
In this Python Tutorial I show you how to build a simple chat server. All you need to do this are the pre-installed modules: asyncore, asynchat and socket. The code then basically does the following: If you find this article helpful, please click here [googleplusone] so more people can find this 🙂 Leave any questions or comments below and here are all of my previous Python Video Tutorial’s: All the Code from the Video #!/usr/bin/python from asyncore import dispatcher from asynchat import async_chat import socket, asyncore PORT = 5006 NAME = ‘ChatLine’ class ChatSession(async_chat): def __init__(self,server,sock): async_chat.__init__(self, sock) self.server = server self.set_terminator(“\r\n”) self.data = [] def collect_incoming_data(self, data): self.data.append(data) def found_terminator(self): line = ”.join(self.data) self.data = [] self.server.broadcast(line) def handle_close(self): async_chat.handle_close(self) self.server.disconnect(self) class ChatServer(dispatcher): def __init__(self, port, name): dispatcher.__init__(self) self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.set_reuse_addr() self.bind((”,port)) self.listen(5) self.name = name self.sessions = [] def disconnect(self, sessions): self.sessions.remove(session) def broadcast(self, line): for session in self.sessions: session.push(‘>>’ + line + ‘\r\n’) def handle_accept(self): conn, addr = self.accept() self.sessions.append(ChatSession(self, conn)) if __name__ == ‘__main__’: s = ChatServer(PORT, NAME) try: asyncore.loop() except KeyboardInterrupt: print Hi Derek, Thank you very much for all your hard work setting up all the tutorials. So far I have enjoyed every piece of the Python tutorials and I have to say that your ability to cope with many different fields ie. R&D, marketing, health, etc.. is really impressive. You must be gifted. Anyway, I am wondering if you could add the authentication piece to the chat server so the users will have to login and be authenticated against the local passwd file instead of telnet’ing directly into the server with the specified port. Thanks in advance, Tito In the current tutorial I’m working on I’ll go more into how to use MySQL step by step. That should answer any questions you have in regards to how to implement that authentication. Glad you liked the tutorial 🙂 Hi! Great tutorial! You should definetly make more tutorials that cover netwoking in-depth! // Fred The code don’t work Sometimes there are encoding problems when you copy the code directly from the site. These can normally be fixed by changing all back quotes into normal quotes. If you want I can send you the original code. I’m positive it will work. Thanks Yes please send it to me and I’m using win 7 I can’t open any python file I sent the code. Sorry my pc died so I can’t test things on them anymore. I’m positive the code works on Unix/Linux/Mac systems though Ok first thanks second what should I do to open the chat can you make a video for win 7 users can i use it from 2 different times at the same time There are probably some slight syntax changes on pc’s versus unix systems. Yes you can run two or more chats at once. Sorry I mean from 2 different computers or laptops at the same time and will U make a tutorial how to open chat for windows users thankx I’ll show you how to create a perfect chat system using Javascript very soon. It will be cross platform and easy to set up and style however you’d like. Hope that helps. Can I then use it from 2 different computers? Will you make that tutorial as soon as possible , Will you also use Windows 7? Thanks in Advance Yes it will work on numerous browsers at one no matter the operating system. I’ll include it in the new JavaScript tutorial which will follow the CSS tutorial which will be completed today or tomorrow Oke Thanksin Advance Don’t forget to please use Windows 7. I have a Question: I need to program a script for topics, I will in the form let the admin choose like adding the topic on 11-02-2011. Can you give me a code? I’m a little confused. Do you want the chat system to change the topic on the fly every day? I don’t understand what you mean. No the story about topics is another script, Did U finish the tutorial Hello, Will you please answer me I need help How can i make a chat on my website and let just to people chat. Like facebook (Ex: derek,musti,php,html are online derek needs to chat with musti like php & html can’t see the content of the chat. Thanks Musti I’ll be making one very soon, but I’m going to build it using PHP and JavaScript. I just find it easier to use those languages because very few hosting companies allow you to use Django with Python. Hello, When are you going to make the tutorial? I’m waiting sorry for asking. Do you have a code or can you program it for me and send it by e-mail? Thanks Which tutorial are you looking for. Sorry, but I’ve been kind of overwhelmed with comments and it has been hard to keep some requests in order No problem . My tutorial That i have asked was Chat script. How can i make a chat on my website and let just to people chat.? Like facebook (Ex: derek,musti,php,html are online derek needs to chat with musti like php & html can’t see the content of the chat. I hope you will make it Thanks Yes I’m going to do that. I was confused because your userid was different. I’m going to be working on bigger projects now that I got the easier stuff done like chat, forums, wordpress stuff, etc. My next tutorial will be on a forum (message board) and then a chat system. I’m actually going to approach chat from a few different angles. A simple chat and then a more complicated huge chat system using Facebooks Tornado Server Thanks, When are you going to start?? The next project is forums and then chat I hope you will start monday. Can you??? I’ll do my best. I’m finishing it up right now I am really looking forward to any new videos u will upload .. 🙂 u rock big time .. 🙂 Thanks. I have many more coming out. Could you post some tutorial on some concepts like decorator,threading etc in python? I’ll look into those subjects in upcoming tutorials. Thanks for the request I second threading (multiprocessing as I think it is called in Python?) as well. I’ll probably need to screw with it myself too. Can u please send me the code, i am having trouble using this. Like i get errors for def, for some reason. Alright, I got a VERY odd error when running this code and I fixed it with an slight alter. I bring it up b/c it causes some really odd things on a Windows XP PC. RAM corruption is probably the best word I have for it. IN ChatServer -> IN def disconnect AND def broadcast Changed any point where it said session to sessions. I assume this was a typo, but caused some very interesting effects after closing it. That’s very interesting. Python interpreters act differently depending on the OS, computer itself, etc. I normally stick to linux machines and stay away from pcs. It’s kind of a shame that everyone is taught to program on pcs even though most development in the real world is done on linux. No argument there, problem for me is I can’t get Linux to work on my PC. (Wireless won’t work, glitches out and have to delete Linux, etc.) My poor laptop hates Linux, heh. Have you tried using wubi? It installs Ubuntu just like any other application on a PC. Yes, actually. That caused other issues, it would error and crash at 1% installing. (I need a new laptop.) Wow yes it sounds like you have some problems. Definitely consider a mac next time. I don’t know of anyone making quality PCs anymore. Dell used to be reliable. HP uses the cheapest components and I know personally their warranty isn’t worth the paper it’s printed on Me again… Hi, In the ChatServer class, the disconnect method is giving me fits. We’re passing in “sessions”, but calling to remove “session” and so I get the error: NameError: global name ‘session’ is not defined Where is it supposed to be getting “session” from? Thanks, Cyd Did you copy the code directly from the site? I just checked it and it worked for me. You get session from asyncore. These are all classes that come with predefined methods and variables. Hi, there is a little misstyping. Line 37 should look like that def disconnect(self, session): self.sessions.remove(session) second parameter is session not sessions i want to create an sms based chat services conacting my gsm modem to pc.my pc should run like 24/7 as server.and people can register through sms and make groups …an example site is this I’ll look into that. At the very least I’m going to show how to put a simple chat system together in Java very soon. I’ve played around with SMSLib for Java and I’ll see if I can find an easy way to explain how to use it. Thank you i want to make an sms message based group messaging service.a boy from our country developed a service and now he is making money..and there are more than 4,000,000 groups ,and 14 million users registerd in there service.his site is this i want to make my very own service like that..i want to flop there service. How much do you know about programming? I’ll revisit the SMS stuff. I was playing with it a few months back. I just know about a little bit python.the creater of smsall.pk .told me that we have created this service using python..i just want to create can we both togethere work on this project?.tell me what do you know about sms stuff like that site.how it works and etc etc I don’t take on projects through this site. Everything I do here is to provide a free education on many subjects. I know it sounds crazy, but I don’t do anything to get rich. I will however look into making a Java tutorial on SMS. I have played with it in the past and didn’t have any trouble with it. It is simple to implement, but it is expensive to scale up for hundreds of thousands of potential users. I can’t really implement this using Python because of some security issues and the expense that comes along with setting up a Python Django site. I’ll see what I can come up with No want to use my own gsm modem.and sim card.i have already signed a contract with my mobile service provider for free sms..just implement it.and send me the software ready to use ..thank you Hello! My name is Manley and I have been following your tutorials for sometime now, which by the way have been quite helpful. So thank you :-). I have run into a snag with this particular tutorial though.. I am trying to connect to the server(Held on computer A) from another computer(Computer B) on my network through ‘telnet’ ( which I didn’t know anything about lol. Maybe if you are taking suggestions, next time explain a little what that is, or point to a good website to learn about it 🙂 ). Computer B does briefly connect to Computer A but instantly gets an error.. which is the following: “chatServer instance has no attribute ‘__trunc__'” the full error print is : error: uncaptured python exception, closing channel (:[Errno 10022] An invalid argument was supplied [C:\Python27\lib\asyncore.py|read|83] [C:\Python27\lib\asyncore.py|handle_read_event|439] [C:\Users\manley\Desktop\customChat.py|handle_accept|77] [C:\Users\manley\Desktop\customChat.py|__init__|56] [C:\Python27\lib\asyncore.py|listen|337] [C:\Python27\lib\socket.py|meth|224]) Through the ‘print’ command I have narrowed the error to happen at the ‘self.bind((”, port))’ line. Are you able to give me some directions as to what I did wrong? Google is not turning up much for me.. either that or I’m not searching for the right thing :P. I am using Windows 7 64bit and Python 2.7 Thanks! Manley Hi Manley, Sorry I didn’t get to you quicker. I was on vacation. Did you replace all of the back quotes with normal quotes in the code? You need to replace ‘ and ’ with ‘ Replace “ and ” with ” Did you use the code exactly as I demonstrated and still got the error? I just tested it and it worked. are you there ?? please tell me? Yes I’m here, but I don’t know what your question is – Derek sorry my name was changed from khan to faiz i am khan..please tell me i want to make an sms gateway server for group messaging I’ll cover the basics as soon as I get a chance. It takes me time to structure these tutorials and I’m not sure how soon I’ll get to it Derek – just finished watching all 18 of your Python tutorials, they were excellent, thank you. (ps Khan/faiz whatever your name is. Here’s a radical notion you may not appreciate; try doing a bit of your own research so you can learn how to develop the SMS server yourself, instead of trying to bully others into doing the work 4 u!!!) Thank you 🙂 I’m very happy to help. Thanks for taking the time to tell me that they helped Hey Derek, I made a comment about a week ago and its still waiting ‘moderation’. Just wondering if you have had time to read it? Manley I sent a response Hi, I was trying to get your code working however i am having a couple of issues. At first the script didn’t run so I changed the quotes back to normal quotes. It then opened a connection and i was able to telnet in but none off the messages are being sent to any of the clients. Any ideas? Good work on the tutorials btw. Hope to see something with user authentication in the future. You’re very welcome. Sorry about the quote issue. I keep meaning to fix that, but I haven’t had any time. What os are you using? “Give me codes for make money and be famous ’cause idk what is computer.” Haha! Derek, I applaud you for being polite when I couldn’t. Got a good laugh out of it. 🙂 I love your tutorials by the way. They have been very useful! Thanks! Thank you 🙂 I do my best to help Hello there, Thank u for nice job. Am Computer Security Student And I Chose Secure Messsenger as my Fyp Title.it Filters Incoming Mac Addrssees and only let valid Mac TTo communicate.alsoit Encodes Msgs Befoore send.and It Randomly Downloads immage from Internnet and hide Msg In Image(steganography).may I Have A Look At Your Messengeer source codeplz?did U Finish Authentication Part?andd do U Have Any Suggetioonfor me?u Canemil Me.thank U Hey Derek, Really nice tutorial, but I don’t quite understand how to get it working. I’m on windows by the way. Thanks. Thank you 🙂 What problem are you having? Any errors? Welp … I’ve made it through all of your Python 2.X & 3.X videos. I’ve enjoyed and appreciated them. I know that you’re a busy guy, but I do hope that you add additional videos to your Python series. Making GUIs, cool desktop apps, a Python-version of the Java Asteroids game perhaps? In the interim, do you have any online sources or book recommendations for continuing Python education? In one of the Java tutorials’ comments, you mentioned Filthy Rich Clients and that turned out to be an awesome recommendation. Thank you for your tutorials! You’re very welcome 🙂 I’m glad you enjoyed the Python tutorials. Here are a few books you may like Core Python Applications Programming and Making Games with Python & Pygame. Python is very fun. Hi Darren, I’m using windows 7 and I haven’t tried your code yet. I was just wondering that would I able to use this code to send and receive message from computer A to computer B. Or this code will just use same computer with multiple python shels open and sending message to each other in one machine using same ip address. You’ll be able to send between different computers.
https://www.newthinktank.com/2010/11/python-2-7-tutorial-pt-18-chat-server/
CC-MAIN-2021-25
en
refinedweb
How to Use Custom Hooks In Your React App React with Hooks is probably the most awesome part about React thus far. The power and ability it provides is almost endless and that continues on with Custom Hooks. Custom Hooks allow you to create your own reusable Hooks to use in your code and here I’m going to walk you through how to do just that. What are Custom Hooks? Hooks are powerful tools to share Javascript code between components without having to repeat yourself. Custom hook files are prefixed with use at the beginning, so if I wanted to make a custom hook, I would name the file something along the lines of useDetectOutsideClick. While the idea of custom hooks may seem like a difficult topic to wrap your mind around, they’re actually quite simple as they’re just a javascript function prefixed with “use” that we will be able to “re-use” throughout our project. Custom Hooks make our code more dry and easier to understand. Lets walk through creating this custom hook. Starting off First things first, the file you create that contains your custom hook should be located in the same directory as your components that you’d like to use the hook in. Next, you’re going to declare a regular function like so export default function useDetectOutsideClick() {} Here we declared our function and exported it, and technically we could import this hook into any component we’d need it now. But we want some functionality here, we want our hook to provide a return of some sort so that the hook can have some effect on our code inside of our component. Here we can see what a fully functioning hook might look like. Here we have a fully developed javascript function with a return value and everything! This hook is being used to detect if, when a dropdown menu is showing and active, if there is an click outside of the menu and if there is, it will make the menu hidden again! Pretty simple, right? Putting this hook to the test Now we need to make this hook accessible in our code so it can actually serve a purpose. Now if you’ve used other hooks such as useState and useEffect, like we did above^ you’ll have no problem importing custom hooks as well. import { useDetectOutsideClick } from './useDetectOutsideClick'; It’s as simple as that, now you have access to your custom hook inside of your component. To use its logic you could do something along the lines of.. Here we called our custom hook with the help of another built in hook useRef to give our custom hook the functionality it needs to help us close our menu when a user clicks any place outside of the menu itself. Conclusion Here we discussed what custom hooks are, how to create and how to implement them. Custom hooks take your code to the next level giving you reusability like never before. Custom hooks help keep code dry and clean while also making it easier to write as well.
https://connormulholland.medium.com/how-to-use-custom-hooks-in-your-react-app-7f42b37796c1?responsesOpen=true&source=---------2----------------------------
CC-MAIN-2021-25
en
refinedweb
Today, InfoQ publishes a sample chapter "Integrating with a GWT-RPC Servlet" from "Google Web Toolkit", a book authored by Ryan Dewsbury. Performance is the main reason Ajax is so popular. We often attribute the glitzy effects used in many Ajax apps as a core appeal for users, and users may also attribute this to why they prefer Ajax apps. It makes sense because if you look at traditional web apps they appear static and boring. However, if it were true that glitzy effects dramatically improved the user experience then we would see a wider use of the animated gifs. Thankfully those days are gone. Ajax will not go the way of the animated gif because the value that it adds is not all glitz. The true value that improves the user experience with Ajax, whether the user is conscious of this or not, is performance. In this article I'm not going to show you why Ajax inherently performs better than traditional web applications. If you're not sure look at Google maps and remember older web mapping apps or compare Gmail to Hotmail. By basing your application on Ajax architecture you can dramatically improve performance and the user experience for your app. Instead, in this article I'm going to show you how to push this performance improvement to the next level - to make your Ajax application stand apart from the rest. Why GWT? The Google Web Toolkit (GWT) provides a significant boost to Ajax development. Any new technology like this is a hard sell especially when there are many other choices. In reality, nothing gives you the benefits GWT gives you for Ajax applications. If you're not already bound to a framework it just doesn't make sense to not use it. By using GWT for your Ajax application you get big performance gains for free. By free I mean that you just don't need to think about it. You concentrate on writing your application logic and GWT is there to make things nice for you. You see, GWT comes with a compiler that compiles your Java code to JavaScript. If you're familiar with compiled languages (C, Java, etc.) you'll know that a goal is to make the language platform independent. The compiler is able to make optimizations to you code specific to the platform being compiled to, so you can focus on leaving your code readable and well organized. The GWT compiler does the same thing. It takes your Java code and compiles down to a few highly optimized JavaScript files, each one exclusively for use with a specific browser, making your code small and browser independent. The optimization steps employ real compiler optimizations line removing uncalled methods and inlining code essentially treating JavaScript as the assembly code of the web. The resulting code is small and fast. When the JavaScript is loaded in the browser it contains only the code needed for that browser and none of the framework bloat from unused methods. Applications built using GWT are smaller and faster than applications built directly with JavaScript and now the GWT team, typically very modest, is confident that the GWT 1.5 compiler produces JavaScript that is faster than anything anyone could code by hand. That should be enough to convince anyone to use GWT for an Ajax application but if it doesn't there are plenty of other reasons why you should use GWT including the availability of Java software engineering tools (debugging Ajax applications in Eclipse is a huge plus for me). Do You Want More? Why stop there. Ajax applications perform better than traditional web applications and GWT applications perform better than regular Ajax applications. So by simply making a few technology choices you can build applications that perform really, really well, and focus on your application features. You'll be done your work in half the time too. However GWT doesn't magically do everything. I will cover four things that you can do on your own to boost your Ajax application performance even further. 1. Cache Your App Forever When you compile your GWT application to JavaScript a file is created for each browser version that has a unique name. This is your application code and can be used for distribution simply by copying it to a web server. It has built in versioning since the filename name is a hash of your code. If you change your code and compile a new filename is created. This means that either the browser has a copy of this file already loaded or it doesn't have it at all. It doesn't need to check for a modified date (HTTP's If-Modified-Since header) to see if a newer version is available. You can eliminate these unneeded browser HTTP trips. They can be fairly small but add up to a lot when your user base grows. They also slow down your client since browsers can only have two active requests to a host. Many optimizations with load time for Ajax involve reducing the number of requests to the server. To eliminate the version requests made by the browser you need to tell your web server to send the Expires HTTP header. This header tells the browser when the content is not considered fresh again. The browser can safely not check for new versions until the expire date has passed. Setting this up in Apache is easy. You need to add the following to your .htaccess file: <Files *.cache.*> ExpiresDefault "now plus 1 year" </Files> This tells apache to add the expires header to one year from now for every file that matches the pattern *.cache.*. This pattern will match your GWT application files. If you're using Tomcat directly you can add headers like this through a servlet filter. Adding a servlet filter is fairly straightforward. You need to declare the filter in your WEB_INF/web.xml file like this: <filter> <filter-name>CacheFilter</filter-name> <filter-class>com.rdews.cms.filters.CacheFilter</filter-class> </filter> <filter-mapping> <filter-name>CacheFilter</filter-name> <url-pattern>/gwt/*</url-pattern> </filter-mapping> This tells tomcat where to look for the filter class and which files to send through the filter. In this case the pattern /gwt/* is used to select all the files in a directory named gwt. The filter class implements the doFilter method to add the Expires header. For GWT we want to add the header to each file that doesn't match *.nocache.*. The nocache file should not be cached since it contains the logic to select the current version. The following is the implementation of this filter: public class CacheFilter implements Filter { private FilterConfig filterConfig; public void doFilter( ServletRequest request, ServletResponse response, FilterChain filterChain) throws IOException, ServletException { HttpServletRequest httpRequest = (HttpServletRequest)request; String requestURI = httpRequest.getRequestURI(); if( !requestURI.contains(".nocache.") ){ long today = new Date().getTime(); HttpServletResponse httpResponse = (HttpServletResponse)response; httpResponse.setDateHeader("Expires", today+31536000000L); } filterChain.doFilter(request, response); } public void init(FilterConfig filterConfig) throws ServletException { this.filterConfig = filterConfig; } public void destroy() { this.filterConfig = null; } } 2. Compress Your Application The GWT compiler does a good job at reducing code size but cutting unused methods and obfuscating code to use short variable and function names, but the result is still uncompressed text. Further size improvements can be made buy gzipping the application for deployment. With gzip you can reduce your application size by up to 70%, which makes your application load quicker. Fortunately this is an easy to do with server configuration as well. To compress files on apache simply add the following to you .htaccess file: SetOutputFilter DEFLATE Apache will automatically perform content negotiation with each browser and send the content compressed or not compressed depending on what the browser can support. All modern browsers support gzip compression. If you're using Tomcat directly you can take advantage of the compression attribute on the Connector element in your server.xml file. Simply add the following attribute to turn compression on: compression="on" 3. Bundle Your Images Ajax application distribution leverages the distribution power of the browser and HTTP, however the browser and HTTP are not optimized for distributing Ajax applications. Ajax applications are closer to Desktop applications in their needs for deployment where traditional web applications use a shared resource distribution model. Traditional web applications rely on interactions between the browser and web server to manage all of the resources need to render a page. This management ensures that resources are shared and cached between pages ensuring that loading new pages involves as little downloading as possible. For Ajax applications resources are typically not distributed between documents and don't need to be loaded separately. However it is easy to simply use the traditional web distribution model when loading application resources, and many applications often do. Instead, you can reduce the number of HTTP requests required to load your application by bundling your images into one file. By doing this your application loads all images with one request instead of two at a time. As of GWT 1.4 the ImageBundle interface is supported. This feature lets you define an interface with a method for each image you'll use in your application. When the application is compiled the interface is read and the compiler combines all of the images listed into one image file, with a hash of the image contents as the file name (to take advantage of caching the file forever just like the application code). You can put any number of images in the bundle and use them in your application with the overhead of a single HTTP request. As an example, I use the following image bundle for the basic images in a couple applications I've helped build: public interface Images extends ImageBundle { /** * @gwt.resource membersm.png */ AbstractImagePrototype member(); /** * @gwt.resource away.png */ AbstractImagePrototype away(); /** * @gwt.resource starsm.gif */ AbstractImagePrototype star(); /** * @gwt.resource turn.png */ AbstractImagePrototype turn(); /** * @gwt.resource user_add.png */ AbstractImagePrototype addFavorite(); } Notice that each method has a comment annotation specifying the image file to use and a method that returns an AbstractImagePrototype. The AbstractImagePrototype has a createImage method that returns an Image widget that can be used in the application's interface. The following code illustrates how to use this image bundle: Images images = (Images) GWT.create(Images.class); mainPanel.add( images.turn().createImage() ); It's very simple but provides a big startup performance boost. 4. Use StyleInjector What about CSS files and CSS images as application resources? In a traditional web distribution model these are treated as external resources, loaded and cached independently. When used in Ajax applications they involve additional HTTP requests and slow down the loading of your application. At the moment GWT doesn't provide any way around this however there is a GWT incubator project, which has some interesting GWT code that may be considered for future versions. Of particular interest is the ImmutableResourceBundle and StyleInjector. The ImmutableResourceBundle is much like an ImageBundle but can be used for any type of resource including CSS and CSS images. It's goal is to provide an abstraction around other resources to have them handled in the most optimal way possible for the browser running the application. The following code is an example of this class used to load a CSS file and some resources: public interface Resources extends ImmutableResourceBundle { /** * @gwt.resource main.css */ public TextResource mainCss(); /** * @gwt.resource back.gif */ public DataResource background(); /** * @gwt.resource titlebar.gif */ public DataResource titleBar(); /** * @gwt.resource dialog-header.png */ public DataResource dialogHeader(); } For each resource a file and method is specified much like the ImageBundle however the return value for the methods is either a DataResource or a TextResource. For the TextResource you can use its getText method to get it's contents and for the DataResource you can use getUrl to reference the data, (for example in an IMG tag or IFRAME). How this data is loaded is handled differently for different browsers and you don't need to worry about it. In most cases the data is an inline URL using the data: URL prefix. The possibilities for this class are vast, but the most immediate use is to bundle CSS directly with your application file. Notice in the interface that a CSS file and some images are referenced. In this case the interface is being used to bundle CSS and it's images with the application to reduce HTTP calls and startup time. The CSS text specified background images for some of the application elements but instead of providing real URL's it lists placeholders. These placeholders reference other elements in the bundle, specifically the other images. For example, the main.css file has a CSS rule for the gwt-DialogBox style name: .gwt-DialogBox{ background-image:url('%background%') repeat-x; } To use this CSS file and it's images in your application you need to use the StyleInjector class from the GWT incubator project. The StyleInjector class takes the CSS data and matches the placeholders to resources in a resource bundle then injects the CSS into the browser for use in your application. It sounds very complicated but it's simple to use and improves performance. The following is an example of injecting CSS from a resource bundle into your application with StyleInjector: Resources resources = (Resources)GWT.create(Resources.class); StyleInjector.injectStylesheet( resources.mainCss().getText(), resources ); It's important to note that this technique is part of the incubator project and will most likely change in the future. Conclusion Ajax applications have a big usability jump from traditional web applications and GWT provides tools that give you better Ajax performance for free. You should compare the startup speed of the GWT mail sample to other sample Ajax applications. By paying attention to the deployment differences between traditional web applications and Ajax applications we can push application performance even further. I'm excited to see the next generation of Ajax applications. About the Author. Note:The code/text does not address security issues or error handling Copyright: This content is excerpted from the book, "Google Web Toolkit Applications", authored by Ryan Dewsbury, published by Prentice Hall Professional, December, 2007, Copyright 2008 Pearson Education, Inc. ISBN 0321501969 For more information, please Compression : warning with some Internet explorer version, it doesn't work by Nicolas Martignole, Re: Compression : warning with some Internet explorer version, it doesn't w by Manuel Carrasco Moñino, Re: Compression : warning with some Internet explorer version, it doesn't w by venugopal pokala, Jetty Continuations with GWT by Jan Bartel, Apache configuration by Papick Taboada, Compression : warning with some Internet explorer version, it doesn't work by Nicolas Martignole, Your message is awaiting moderation. Thank you for participating in the discussion. Just a small warning about gzip compression and the following version of Microsoft Internet Explorer: 5.x, 6.0 and 6.0 SP1. There are known bugs with gzip compression. See MSDN for more details. You might need to add a User-Agent verification in the filter so that the filter does not compress content for some specific browser. Have a look also at the default httpd.conf for Apache, there's more information about this. Jetty Continuations with GWT by Jan Bartel, Your message is awaiting moderation. Thank you for participating in the discussion. Along the lines of performance and scalability, I thought it would be worth mentioning that GWT apps can also take advantage of Jetty Continuations: Jetty GWT with Continuations cheers Jan Re: Compression : warning with some Internet explorer version, it doesn't w by Manuel Carrasco Moñino, Your message is awaiting moderation. Thank you for participating in the discussion. That's true... . But it seems the problem in IE occurs only with js compressed files, not with html. Using the default linker, GWT produces especial [...].cache.html files including the application javascript code. So you can compress the application in htmp files without problems. If you are using the gwt cross site compiler you can not use compression for these version of explorer. Re: Compression : warning with some Internet explorer version, it doesn't w by venugopal pokala, Your message is awaiting moderation. Thank you for participating in the discussion. We have developed a stand alone web based application using GWT and performance of this application in Mozilla firefox browser is 3 times better than the IE browser. Any help in improving performance of this application in IE browser would be great help to us. Thanks, Venu Apache configuration by Papick Taboada, Your message is awaiting moderation. Thank you for participating in the discussion. I am using the Apache proxy in front of my tomcat to configure compression and http headers. bit.ly/GwtApacheConfig
https://www.infoq.com/articles/gwt-high-ajax/?itm_source=articles_about_ajax&itm_medium=link&itm_campaign=ajax
CC-MAIN-2021-25
en
refinedweb
This is post 1 of 2 to publish the slide decks for presentations I’ve given recently, in particular at the European SharePoint Conference 2017 held in Dublin. See also: Presentation deck – Best bits of Azure for the Office 365 Developer Unfortunately, I never think the slide deck alone conveys all of the information of a conference session, since it’s the demos which are often the most valuable part. Still, I try to assemble slides which have useful reference information, so hopefully this will be useful to someone. The full slide deck is embedded from SlideShare at the bottom of this post. The main topics I discuss here are: - Versioning and dependency issues - The need to ensure you use --save or --save-exact when adding libraries with npm install (so that they are recorded in your package.json file, and other developers on the team can successfully build from source control) - Semantic versioning, including caret and tilde symbols in version numbers - The need to run npm shrinkwrap for each release of your code - [NOTE – some of this changes in npm 5, which automatically does a --save when doing an install, and also automatically generates a package-lock.json file (similar to npm-shrinkwrap.json. But for now, npm 5 is not officially supported in the SharePoint Framework (SPFx) and so these points remain important) - Re-use of existing JavaScript code - You might choose to wrap such code in a module if it is not already – this provides a more formal method in TypeScript/JavaScript of sharing code (e.g. library code) - Once you have a module, you can look at options such as npm install [filepath], npm link or using a private hosted npm repository provided by npm private packages or Visual Studio Team Services Package Management - Office UI Fabric - Use of Fabric Core and the Fabric React components – using the Core styles is much simpler in version 1.3.4 onwards of SPFx, where the sp-office-ui-fabric-core package is referenced and the SCSS styles use mixins to reference the styles in your custom styles - When using the Fabric React components, you should typically ensure you use static linking in your import statements e.g. import { Button, ButtonType } from 'office-ui-fabric-react/lib/Button'; - See Using Office UI Fabric Core and Fabric React in SharePoint Framework for more information on this - Calling the Microsoft Graph and/or custom APIs (e.g. an Azure Function) - All of these resources are likely to be secured with AAD - GraphHttpClient is currently of limited use.. - ..so you will most likely need adal.js if calling from the client side, or ADAL.NET if calling from the server side - An alternative to adal.js for a custom web API/Azure Function, is the approach which leverages the SharePoint Online authentication cookie to pass credentials to your API (using the “credentials”: “include” header to pass across domains) – I think this is a useful approach and one of my demos covered this (video at) - I use this slide to give an overview of the two approaches: - Also note that soon, it will be possible to call your custom APIs by specifying additional AAD app registrations that can be called from SPFx without additional consent. This will simplify things significantly, and mean that your SPFx web parts/extensions will no longer need a sign-in button/process just to be able to call downstream resources - Deployment - Remember that the default SPFx behaviour is for any 3rd party libraries you add to be bundled into your web part – this increases your bundle size, and can be a particular problem when you have multiple web parts/extensions all using the same library (and Office UI Fabric can be a big culprit here!) - Another “by default” thing to remember is that each web part/extension you build gets it’s own bundle – the config.json file is what controls this - Where possible, 3rd party libraries should be externalised to a CDN.. - ..and if that isn’t possible, consider SPFx component bundles as a way to avoid having a library duplicated amongst all your web parts. In the case where you have 5 web parts on a page all using the same library, if you don’t externalise or use component bundles performance will suffer for first-time page loads Hopefully there’s some useful information in here, and I’ll most likely expand on some of these points in future articles. Here’s the slide deck:
https://www.sharepointnutsandbolts.com/2017/12/SPFx-pitfalls.html
CC-MAIN-2021-25
en
refinedweb
Lagged Indicator (eg. SMA) - albertoburchi last edited by Dear all, I would like to build an indicator with lagged data. For example, get the SMA not today's close, but yesterday's or 2 days behind. Is there such a possibility in Backtrader or do I have to build an indicator by myself? Below I tried a lagged SMA of 7 days but I can't define the method next. Will you help me? class LaggedSMA_7(bt.Indicator): lines = ('laggedsma1',) params = (('period', 14), ('lag', 7)) def next (self): datasum = math.fsum(self.data.get(size=self.p.period)) self.lines.sma[0] = datasum / self.p.period @albertoburchi The simplist way, IIUYC, would be to create the indicator as normal, then when using it, just reference the value n bars ago. def __init__(self): self.sma = bt.ind.SMA(period=self.p.period) # then... def next(self): # Use the indicator like this for say, 5 bars ago ago = 5 # Use the indicator in your algo like... self.sma[-ago] # do whatever. @albertoburchi You can also use self.lagged_sma = self.sma(-7) in your __init__method to pre-define it and then just access it as any other data property. I think the function call operator ()is referred to as the "ago" operator on a line.
https://community.backtrader.com/topic/3753/lagged-indicator-eg-sma/1
CC-MAIN-2021-25
en
refinedweb
The Poisson Deviance for Regression You’ve probably heard of the Poisson distribution, a probability distribution often used for modeling counts, that is, positive integer values. Imagine you’re modeling “events”, like the number of customers that walk into a store, or birds that land in a tree in a given hour. That’s what the Poisson is often used for. From the perspective of regression, we have this thing called Generalized Poisson Models (GPMs), where the response variable you are modeling is some kind of Poisson-ish type distribution. In a Poisson distribution, the mean is equal to the variance, but in real life, the variance is often greater (over-dispersed) or less than the mean (under-dispersed). One of the most intuitive and easy to understand explanations of GPMs is from Sachin Date, who, if you don’t know, explains many difficult topics well. What I want to focus on here is something kind of related — the Poisson deviance, or the Poisson loss function. Most of the time, your models, whether they are linear models, neural networks, or some tree-based method (XGBoost, Random Forest, etc.) are going to use mean squared error (MSE) or root mean squared error (RMSE) as your objective function. As you might know, MSE tends to favor the median, and RMSE and tends to favor the mean of a conditional distribution — this is why people worry about how the loss function works with the outliers in their data sets. However, most of the time, with some normally distributed response, you expect the values of the response (if z-score normalized, especially), to be some kind of Gaussian with mean 0 and unit variance. However, if you are dealing with count data — all positive integers — and you don’t scale or transform the response, then maybe a Poisson distribution is the better description of the data. When the mean of a Poisson is over 10, and especially over 20, it can be approximated with a Gaussian; the heavy tail that you see when the mean (the rate) is low tends to disappear. Take a look at the formula in the beginning of the post — the y_i is the ground truth, and the mu_i is your model’s prediction. Obviously, if y_i = mu_i, then you have a ln(1), which is 0, canceling out the first term, and the second as well, giving a deviance of 0. What’s interesting is what happens when your model errs on either side of the actual value. In the following snippet, I plot the loss of one example where the ground truth is set at 20 — meaning that we assume the variable is Poisson distributed with mean/rate = 20. from matplotlib import pyplot as plt import numpy as npxticks = np.linspace(start = 0.0000, stop = 50, num = int(1e4)) yi = 20 losses = [2 * (yi * np.log(yi / x) - (yi - x)) for x in xticks] losses = np.array(losses) plt.scatter(xticks, losses) Here’s the graph: So what does this tell you? Well, if your model guesses between 10 and even 50, it doesn’t look like the deviance is that high — at least not compared with what happens if you guess 1 or 2. This is obviously a lot different from other loss functions: So if your model outputs a 0 when the ground truth as 20, then, if you’re using MSE, the loss is 20² = 400, whereas, for the Poisson deviance, you would get infinite deviance, which is, uh, not good. The model isn’t supposed to really output 0, since there’s really not much probability mass there. If your model outputted something like .0001, your loss would be in the ballpark of >400. If your model outputs 30, and the ground truth was 20, your loss would only be around 3.78, where was in MSE terms, it would be 10² = 100. So the moral of the story is that yes, the model should favor the mean of the distribution, but using Poisson deviance means you won’t penalize it that heavily for being biased ABOVE the mean — like if your model kept outputting 30 or 35, even if the distribution’s mean is 20. But your model will be penalized very heavily for outputting anything between 0 and 10. Why? Well, what is the (log)likelihood function of the Poisson distribution? Someone smarter than me did this: Now if you want to look at something a little more tractable, take the log of that — If your model keeps outputting 0, then t = 0, and that t * ln(lambda) expression is going to be 0, leaving you with a large negative number from the first term. In fact, the only way that your LL can even be close to “good” is if t — the sum of the observations — is sufficiently large for the second term to counteract the first term. That’s why, I believe, low values between 0–10 (again, assuming the rate/mean is 20) are penalized so heavily. OK, so let’s say you’re sufficiently interested in the Poisson deviance. How can you calculate it? Well, from the perspective of the Python user, you have sklearn.metrics.mean_poisson_deviance — which is what I have been using. And you have Poisson loss as a choice of objective function for all the major GBDT methods — XGBoost, LightGBM, CatBoost, and HistGradientBoostingRegressor in sklearn. You also have PoissonRegressor() in the 0.24 release of sklearn…in any case, there are many ways you can incorporate Poisson type loss into training. Does it help, though? Tune in for my next project, where I will see just how these Poisson options play out on real life data sets.
https://peijin.medium.com/the-poisson-deviance-for-regression-d469b56959ce?source=---------3----------------------------
CC-MAIN-2021-25
en
refinedweb
I hope this doesn't come across as confusing. I am doing something wrong, with a "error C2059: syntax error: ';' - line 45; error C2059: syntax error : '/' - line 47. I'm not sure, but maybe it has something to do with the %, and that I didn't use a needed header file to do this type of math. Please help me out, I am so stumped. #include <iostream> using namespace std; int main() { double Pop; double CF; double RaceBonus; double TO; double TotalBuildings; double Research; double temp; double temp2; double temp3; double temp4; double income; cout << "Enter population: "; cin >> Pop; cout << endl; cout << "Enter total Cash Factories: "; cin >> CF; cout << endl; cout << "Enter your Income Race Bonus: "; cin >> RaceBonus; cout << endl; cout << "Enter total Tax Offices: "; cin >> TO; cout << endl; cout << "Enter your total Buildings: "; cin >> TotalBuildings; cout << endl; cout << "Enter your researched Economy Bonus: "; cin >> Research; cout << endl; temp = 100+ (Pop /30) + (CF *8); temp2 = 1+ RaceBonus / 100%; temp3 = 1+ 2 * TO / TotalBuildings; temp4 = 1+ Research% / 100%; income = temp * temp2 * temp3 * temp4; cout << "Your total Income is: " << income << endl; cout << "Press <Return> to quit..." << endl; cin.get(); cin.get(); return 0; }
https://www.daniweb.com/programming/software-development/threads/190785/help-with-multiplication-formula
CC-MAIN-2018-34
en
refinedweb
Question: Instead of passing many arguments to a method, I was encapsulating it into an argument object. note: simplied for demo For such a case, what would be a better practice? ⢠Create a class and name it as InventorySaveArgs? -- or -- ⢠Create a nested class and name it as SaveArgs? And would you also explain why one would choose one or the other? [EDIT]: That argument type will be used in another assemblies, as well. side question: Just curious, if there is a pattern name for encapsulating multiple parameters to a single object by chance. [UPDATE]: Found Nested Type Usage Guidelines on MSDN InventorySaveArgs should be available from another assemblies, so I am going with a regular class. Solution:1 IIRC .NET Design Guidelines are pretty clear on this - there should be no public nested types. Solution:2 I would name it InventorySaveArgs just in case you ever want to make the type available to other types to use. If you name it InventorySaveArgs from the beginning then you will have a name that is meaningful in all contexts if you ever need to refactor. Solution:3 I would choose the first, create an outer class and name it InventorySaveArgs. If the particular class is used in a public method, the only argument for not including it outside the class is namespace pollution. Having the inner class is quite frankly annoying in C# because you must constantly prefix the type name with the name of the class. As stated, once it's public the only motivation for donig this is reduction of namespace pollution. But if your namespace is so big that the InventorySaveArgs class is making it too big you probably need to break up your namespace anyways. Solution:4 The only pattern for encapsulating multiple parameters to a single object I've ever heard of is a refactoring pattern detailed by Martin Fowler Note:If u also have question or solution just comment us below or mail us on [email protected] EmoticonEmoticon
http://www.toontricks.com/2018/05/tutorial-create-regular-class-or-inner.html
CC-MAIN-2018-34
en
refinedweb
Django Oscar Accounts is a package that provides the ability to manage accounts with fully managed account functionality to use with Django Oscar. * Django Oscar Accounts provides managed accounts using double-entry bookkeeping. * In this package every transaction is recorded twice(once for the source account and once for the destination account) by using double-entry bookkeeping. This make sure that books always balance and there exist full record of all transactions. Features - Installation - * Install using pip: pip install django-oscar-accounts * Next, add "oscar_accounts" to INSTALLED_APPS in settings file. * After this apply migrations with "manage.py migrate oscar_accounts" command. This will create the appropriate database tables. * Next use "manage.py oscar_accounts_init" command to create core accounts and account-types. The names of these accounts can be controlled using settings. * To access accounts via Django Oscar dashboard, you need to update OSCAR_DASHBOARD_NAVIGATION list from oscar.defaults import * OSCAR_DASHBOARD_NAVIGATION.append( { 'label': 'Accounts', 'icon': 'icon-globe', 'children': [ { 'label': 'Accounts', 'url_name': 'accounts-list', }, { 'label': 'Transfers', 'url_name': 'transfers-list', }, { 'label': 'Deferred income report', 'url_name': 'report-deferred-income', }, { 'label': 'Profit/loss report', 'url_name': 'report-profit-loss', }, ] }) Next, add the below url-pattern to your urls.py from oscar_accounts.dashboard.app import application as accounts_app urlpatterns = [ ... url(r'^dashboard/accounts/', include(accounts_app.urls)), ] NOTE: To override templates, add an additional path(ACCOUNTS_TEMPLATE_DIR) to your TEMPLATE_DIRS Integrating in Django Oscar Checkout Process For checkout integration, you have to override PaymentDetailsView in checkout app Step 1: Display the list of accounts for the user in final step(Payment details Page). For this, override the get_context_data() method and filter active user accounts as below. from oscar_accounts.models import Account accounts = Account.active.filter(user=user, balance__gt=0) Step 2: Override the "post" method to validate the selected account and render them again in the preview view (but as hidden). Step 3: Override the handle_payment method of your checkout's PaymentDetailsView to transfer the amount(order total amount) from the user account to your account(Ex: Sales Account) * This package provides an API to make transfers in facade module. * If the transfer is invalid, an exception is raised. All these exceptions are subclasses of oscar_accounts.exceptions.AccountException. So we just need to handle this exception. from oscar_accounts.models import Account from oscar_accounts import facade, exceptions source_account = user_selected_account destination_account = Account.objects.get(name="Sales") try: transfer = facade.transfer(source_account, destination_account, order_total, user=user merchant_reference=order_number, description="Redeemed to pay for order %s" % order_number) except exceptions.AccountException, e: raise PaymentError("Transfer Failed") else: # Add Payment source and Payment event source_type, created = SourceType.objects.get_or_create(name="Accounts") source = Source(source_type=source_type, amount_allocated=order_total, amount_debited=transfer.amount, reference=transfer.reference) self.add_payment_source(source) self.add_payment_event("Transferred", transfer.amount, transfer.reference) * If the transfer is successful, but something went wrong in placing the order(any exception occurs after post-payment), then you have to roll-back the previous transfer. from oscar_accounts import facade try: self.place_order() except Exception, e: facade.reverse(transfer, user=user, merchant_reference=order_number, description="Payment Cancellation") Note: Transfer operation is wrapped in its own db transaction to make sure that only complete transfers are written....
https://micropyramid.com/blog/integrate-django-oscar-accounts-with-django-oscar/
CC-MAIN-2018-34
en
refinedweb
Forwarding this to the orocos-users list: anyone succeeded on importing PyKDL in MacOSX? Ruben ---------- Forwarded message ---------- From: Mirko Bordignon <mirko [dot] bordignon [..] ...> Date: Wed, Mar 10, 2010 at 7:40 PM Subject: orocos kdl python binding To: Ruben Smits <Ruben [dot] Smits [..] ...> Hi Ruben, great job with orocos and kdl in particular. One quick question: I'm able to compile kdl 1.0.2 with python bindings on my mac, and a PyKDL.dylib is created inside the usual site-packages folder. However, when I do an import PyKDL, it cannot find the library. Any idea? I though that to import a python module you'd need some py or pyc file, but then I found your answer to somebody asking the same thing and said that just the library file inside site-packages would be enough ... thank you very much, bye Mirko Bordignon ______________________ M.Sc. ECE, Ph.D. student Modular Robotics Lab, University of Southern Denmark +45 6550 3521 Removed double post Hi, I also had the problem that PyKDL.dylib could not be imported. Then i figured out a way compiling a PyKDL.so which could be imported without problems. It may not be the most elegant way, but it works for me: In short: * Install the Eigen library as well as py26-sip (using macports for example) * Install Orocos KDL from the svn-source without the Python bindings * Create a new folder and copy the Orocos src folder to it * Copy src/bindings/python/PyKDL/* to the new src folder * Create a file makekdl.py in the new src folder with the following content: import os import sipconfig build_file = "PyKDL.sbf" config = sipconfig.Configuration() os.system(" ".join([config.sip_bin, "-c", ".", "-b", build_file, "PyKDL.sip"])) makefile = sipconfig.SIPModuleMakefile(config, build_file) makefile.extra_libs = ["orocos-kdl"] makefile.extra_include_dirs = ["/opt/local/include/eigen2"] makefile.generate() * Run python makekdl.py * Run make * Now there is a PyKDL.so file in the directory which can be copied to the site-packages folder. Johannes
http://www.orocos.org/forum/orocos/orocos-users/fwd-orocos-kdl-python-binding
CC-MAIN-2018-34
en
refinedweb
pmstat − high-level system performance overview pmstat [−gLlPxz] [−A align] [−a archive] [−h host] [−H file] [−n pmnsfile] [−O offset] [−p port] [−S starttime] [−s samples] [−T endtime] [−t interval] [−Z. Multiple hosts may be monitored by supplying more than one host with multiple −h flags (for live monitoring) or by providing a name of the hostlist file, where each line contain one host name, with −H, or multiple −a flags (for retrospective monitoring from an archive). The −t −L option is specified, then pmcd(1) is bypassed, and metrics are fetched from PMDAs on the local host using the standalone PM_CONTEXT_LOCAL variant of pmNewContext(3). When the −h option is specified, pmstat connects to the pmcd(1) on host and fetches metrics from there. As mentioned above, multiple hosts may be monitored by supplying multiple −h flags. Alternatively, if the −a option is used, the metrics are retrieved from the Performance Co-Pilot archive log files identified by the base name archive. Multiple archives may be replayed by supplying multiple −a flags. When the −a flag is used, the −P flag may also be used to pause the output after each interval. Standalone mode can only connect to the local host, using an archive implies a host name, and nominating a host precludes using an archive, so the options −L, −a and −h are mutually exclusive. Normally pmstat operates on the default Performance Metrics Name Space (PMNS), however if the −n option is specified an alternative namespace is loaded from the file pmnsfile. If the −s the option is specified, samples defines the number of samples to be retrieved and reported. If samples is 0 or −s is not specified, pmstat will sample and report continuously − this is the default behavior. When processing an archive, pmstat may relinquish its own timing control, and operate as a ‘‘slave’’ of a pmtime(1) process that uses a GUI dialog to provide timing control. In this case, either the −g option should be used to start pmstat as the sole slave of a new pmtime(1) instance, or −p should be used to attach pmstat to an existing pmtime(1) instance via the IPC channel identified by the port argument. The −S, −T, −O and −A options may be used to define a time window to restrict the samples retrieved, set an initial origin within the time window, or specify a ‘‘natural’’ alignment of the sample times; refer to PCPIntro(1) for a complete description of these options. The −l option prints the last 7 characters of a hostname in summaries involving more than one host (when more than one −h option has been specified on the command line). The −x option (extended CPU metrics) causes two additional CPU metrics to be reported, namely wait for I/O ("wa") and virtualisation steal time ("st"). The output from pmstat is directed to standard output, and the columns in the report are interpreted as follows: If the values become large, they are reported as Mbytes (m suffix) or Gbytes any values for the associated performance metrics are unavailable, the value appears as ‘‘?’’ in the output. By default, pmstat reports the time of day according to the local timezone on the system where pmstataclient(1), pmtime(1), PMAPI(3), pmNewContext(3), pcp.conf(4) and pcp.env(4). All are generated on standard error, and are intended to be self-explanatory.
https://manpag.es/SUSE121/1+pmstat
CC-MAIN-2018-34
en
refinedweb
#include <iostream> int main() { std::cout << "Hello, DaniWeb!\n" << "I'm learning C++ from a book. When I finish this book I'm sure I'll have tonnes of questions for you. Thanks in advance.\n"; std::cin.get(); return 0; } If I screwed that up, I'm embarrassed. Possibly that isn't as clean or as effective as it could be, but so far in the book I'm reading it's completely fine. (In the book there would actually be more gaps in there lol) Anyways, my name is Jeff, I'm 19 years old, I live in British Columbia in Canada, and have just recently taken my general interest of computers (ie. graphics, games, programming, IT etc. I don't have much knowledge under my belt yet at all) to the next stage and chose Programming as a starter hobby. Some may say it's a bad move but it's the first thing that caught my eye so I decided to try it. What I'm trying to say is that I'm basically a complete newb so don't talk to me and assume I know anything about computers lol, because I probably won't know what you're saying. However, if you do say something I don't know you can count on a message coming back your way asking the definition if I can't find it on Google. I hope to meet loads of people and make friends and discoveries about the computer world here at DaniWeb. Hopefully, someday I could make a career out of this. Thanks again, mcnally
https://www.daniweb.com/community-center/threads/172288/hello-daniweb
CC-MAIN-2018-34
en
refinedweb
This is a Java Program to Print the kth Element in the Array. Enter size of array and then enter all the elements of that array. Now enter the position k at which you want to find element. Now we print the element at k-1 position in given array. Here is the source code of the Java Program to Print the kth Element in the Array. The Java program is successfully compiled and run on a Windows system. The program output is also shown below. import java.util.Scanner; public class Position { public static void main(String[] args) { int n; Scanner s = new Scanner(System.in); System.out.print("Enter no. of elements you want in array:"); n = s.nextInt(); int a[] = new int[n]; System.out.println("Enter all the elements:"); for (int i = 0; i < n; i++) { a[i] = s.nextInt(); } System.out.print("Enter the k th position at which you want to check number:"); int k = s.nextInt(); System.out.println("Number:"+a[k-1]); } } Output: $ javac Position.java $ java Position Enter no. of elements you want in array:5 Enter all the elements: 2 5 3 8 6 Enter the k th position at which you want to check number:3 Number:3 Sanfoundry Global Education & Learning Series – 1000 Java Programs. Here’s the list of Best Reference Books in Java Programming, Data Structures and Algorithms.
https://www.sanfoundry.com/java-program-print-kth-element-array/
CC-MAIN-2018-34
en
refinedweb
Sometimes small is beautiful. Juha Lindstedt's FRZR, a 4kB view library, was a nice example of that as we saw earlier. This time we'll discuss evolution of Juha's work - a solution known as RE:DOM. I create all my projects with it, even large single page applications. You can also render it on server-side with NO:DOM. I gave a more detailed explanation in my talk in HelsinkiJS / Frontend Finland, but basically it allows you to create HTML elements and components really easily with HyperScript syntax. Another thing it does is it helps you to keep a list of components in sync with your data. Check out examples at the RE:DOM website. The basic idea is to use HyperScript to create HTML elements: import { el, mount } from 'redom' const hello = el('h1', 'Hello world!') mount(document.body, hello) You can also create components by defining an object with el property, which is the HTML element: import { el, text, mount } from 'redom' class Hello { constructor () { // how to create a component this.el = el('h1', 'Hello', this.name = text('world'), '!' ) } update (name) { // how to update it this.name.textContent = name } } const hello = new Hello() mount(document.body, hello) setTimeout(() => { hello.update('RE:DOM') }, 1000) Keeping lists in sync is also really easy: import { el, list, mount } from 'redom' // create some data const data = new Array(100) for (let i = 0; i < data.length; i++) { data[i] = { id: i, name: 'Item ' + i } } // define Li component class Li { constructor () { this.el = el('li') } update ({ name }) { this.el.textContent = name } } // create <ul> list const ul = list('ul', Li, 'id') mount(document.body, ul) // shuffle it every second setInterval(() => { data.sort((a, b) => Math.random() - 0.5) ul.update(data) }, 1000) RE:DOM doesn't use Virtual DOM, but still allows you to define components and how to update them. For me it's the best of both worlds: mutability gives great flexibility and performance, but defining a one-directional flow of updates is very close to VDOM-approach. It's also really tiny, but still does quite a lot of work. Not to mention it's really fast. The source is also really easily readable. I actually first developed FRZR, which eventually got renamed to RE:DOM. RE:DOM is a bit more clever with element creation from queries, and better designed lists. Originally I created FRZR because I was one of the Riot 2.0 early contributors and wrote a HTML element reorder method for it, which Riot lacked. Riot's original idea was to be a really simple UI library, which I think have got a bit out of hand. RE:DOM is basically my view of the simplest possible UI library. RE:DOM is also much more performant than Riot at the moment. There's some things in RE:DOM I need to think through. For example, mounted and unmounted "events" happen when attached/detached related to the parent component/element. They might be better if called when attached/detached to the DOM instead. But that's something that needs careful approach, so it doesn't affect the performance that much. There's also a possibility to use Web Components instead, let's see. I think web standards will eventually make frameworks and UI libraries quite obsolete. That's something recently discussed a lot in the Polymer Summit. That's a good direction, because I think frameworks are actually the source of most of the "JavaScript fatigue" and frustration in general. Web standards are more thought through and also a safer choice, because they will (almost) always be backwards compatible – you can't say the same about frameworks. Abstraction usually also comes with a vendor lock-in: if you start a project with Angular for example, it's really hard to convert the project to some other framework. Be open-minded about web standards and the DOM. It's not as scary and complex as many say it is. You don't always need a framework and you don't always have to follow the crowd. Less is more. I recently wrote a Medium post about the subject. Even if you use some framework, you should learn how the DOM work. You should interview Tero Piirainen, the original author of Riot.js. Ask about web standards and simplicity in web development :) Thanks for the interview Juha! RE:DOM looks great to me. Especially that Web Component direction sounds interesting. I think you are right in that given enough time, web standards will make a lot of the current solutions obsolete (a good thing!). To get started with, head to RE:DOM website. Check out also the GitHub project.
https://survivejs.com/blog/redom-interview/index.html
CC-MAIN-2018-34
en
refinedweb
Sharing data globally within application with Gorilla Web Toolkit (golang) If you have been a developer for several years, maybe you have been in a situation where you need to store some data and then be able to share that data between different modules within an application. Context For example, let's suppose we have a configuration struct like this: config := Config{ TrackClicks: true, TrackOpens: true, } Let's say we get that configuration from a request. Now that we have that configuration struct, how do we share it with other handlers in our application? The Gorila Web Toolkit project has an easy solution for that! The context package allows you to store values in a global and thread-safe fashion. The first thing we need to do is to get it using go get github.com/gorilla/context, then we only need to include it in our package with import "github.com/gorilla/context" So we could use it like this: type contextString string const ConfigKey contextString = "context" func SetConfigHandler(w http.ResponseWriter, r *http.Request) { var config Config //...Getting the config data from request... context.Set(r, ConfigKey, config) } func GetClicksHandler(w http.ResponseWriter, r *http.Request) { config := context.Get(r, ConfigKey).(Config) if config.TrackClicks{ WriteNumberClicks(w) } } First, we declare the custom type contextString in order to avoid name collisions. Then we create the constant ConfigKey that we will use as key for storing and retrieving the configuration struct from the context. After that, we store the configuration object once we've got the data from the request using context.Set. Finally we can get the configuration object from the context using context.Get Lastly, if at some point we need to get rid of all the stuff we saved in the context we only need to call context.Clear Sessions The Gorilla Web Toolkit also has another package to store data globally, it is called sessions. This package allows us to store data using cookies. Again, first we need to get the package with go get github.com/gorilla/sessions, then we import it using import "github.com/gorilla/sessions". Then we can use it like this: var store = sessions.NewCookieStore([]byte("Cr0wd1nt")) func SetConfigHandler(w http.ResponseWriter, r *http.Request) { session, _ := store.Get(r, "my-session") session.Values["trackClicks"] = true session.Values["trackOpens"] = true session.Save(r, w) } To begin, we create a session store with sessions.NewCookieStore passing a secret key used to authenticate the session. Then we are able to get the session using store.Get, this method always returns a session, even if empty. After that, add some values to the session hash. And finally, we save the session data. It's worth mentioning that if you aren't using gorilla/mux, you need to wrap your handlers with context.ClearHandler to avoid memory leaks. The easiest way to do this is wrapping the top-level mux when calling http.ListenAndServe, like this: http.ListenAndServe(":8080", context.ClearHandler(http.DefaultServeMux)). Conclusion As we can see, the Gorilla Web Toolkit offers these two very easy-to-use tools that allow us to store data globally. It's just our choice wether to use one or the other. If we need to store custom structs of data, then our better option is to use the context package. But if we only need to store bits of data, like session data from a request; our best option is to use the session package. Leave questions or comments in the section below. And thanks for reading!
http://blog.magmalabs.io/2014/12/16/sharing-data-globally-within-application-with-gorilla-web-toolkit-golang.html
CC-MAIN-2018-34
en
refinedweb
Actually permissions are of 2 types: 1.Model level permissions 2.object level permissions If you want to give permissions on all cars, then Model-level is appropriate, but if you Django to give permissions on a per-car basis you want Object-level. You may need both, and this isn't a problem as we'll see. For Model permissions, Django will create permissions in the form 'appname.permissionname_modelname' for each model. If you have an app called 'drivers' with the Car model then one permission would be 'drivers.delete_car'. The permissions that Django automatically creates will be create, change, and delete.Read permission is not included in CRUD operation.Django decided to change CRUD's 'update' to 'change' for some reason. You can use the metaclass to add more permissions to a model.: 'perms' of a model object called entity in the app def has_model_permissions( entity, model, perms, app ): for p in perms: if not entity.has_perm( "%s.%s_%s" % ( app, p, model.__name__ ) ): return False return True Here entity is the Entity object to check permissions on (Group or User), model is the instance of a model(entity), perms is a list of permissions as strings to check (e.g. ['read', 'change']) for respective object, models in a project. add: user.has_perm('drivers.add_book') change: user.has_perm('drivers.change_book') delete: user.has_perm('drivers.delete_book') With the help of model django.contrib.auth.models.Group,we can categorizing users so you can apply permissions to group....
https://micropyramid.com/blog/understanding-django-permissions-and-groups/
CC-MAIN-2018-34
en
refinedweb
Developing Modular JavaScript Components - | - - - - - - Read later Reading List While most web applications these days employ an abundance of JavaScript, keeping client-side functionality focused, robust and maintainable remains a significant challenge. Even though basic tenets like separation of concerns or DRY are taken for granted in other languages and ecosystems, many of these principles are often ignored when it comes to browser-side parts of an application. This is in part due to the somewhat challenging history of JavaScript, a language which for a long time struggled to be taken seriously. Perhaps more significant though is the server-client distinction: While there are numerous elaborations on architectural styles explaining how to manage that distinction -- e.g. ROCA -- there is often a perceived lack of concrete guidance on how to implement these concepts.1 This frequently leads to highly procedural and comparatively unstructured code for front-end augmentations. While it is useful that JavaScript and the browser allow this direct and unmediated approach, as it encourages and simplifies initial explorations by reducing overhead, that style quickly leads to implementations which are difficult to maintain. This article will present an example of evolving a simple widget from a largely unstructured code base to a reusable component. Filtering Contacts The purpose of this sample widget is to filter a list of contacts by name. The final result -- including the history of its evolution -- is provided as a GitHub repository; readers are encouraged to review the commits and comment there. In line with the principles of progressive enhancement, we start out with a basic HTML structure describing our data, here using the h-card microformat to take advantage of established semantics, which helps provide a meaningful contract: <!-- index.html --> <ul> <li class="h-card"> <img src="" alt="avatar" class="u-photo"> <a href="" class="p-name u-url">Jake Archibald</a> (<a href="mailto:[email protected]" class="u-email">e-mail</a>) </li> <li class="h-card"> <img src="" alt="avatar" class="u-photo"> <a href="" class="p-name u-url">Christian Heilmann</a> (<a href="mailto:[email protected]" class="u-email">e-mail</a>) </li> <li class="h-card"> <img src="" alt="avatar" class="u-photo"> <a href="" class="p-name u-url">John Resig</a> (<a href="mailto:[email protected]" class="u-email">e-mail</a>) </li> <li class="h-card"> <img src="" alt="avatar" class="u-photo"> <a href="" class="p-name u-url">Nicholas Zakas</a> (<a href="mailto:[email protected]" class="u-email">e-mail</a>) </li> </ul> Note that whether the corresponding DOM structure is based on HTML provided by the server or generated by another component is not actually relevant, as long as our component can rely on this foundation -- which essentially constitutes a DOM-based data structure of the form [{ photo, website, name, e-mail }] -- to be present upon initialization. This ensures loose coupling and avoids tying ourselves to any particular system. With this in place, we can begin to implement our widget. The first step is to provide an input field for the user to enter the desired name. This is not part of the DOM contract but entirely the responsibility of our widget and thus injected dynamically (after all, without our widget there would be no point in having such a field at all). // main.js var contacts = jQuery("ul.contacts"); jQuery('<input type="search" />').insertBefore(contacts); (We're using jQuery here merely for convenience and because it is widely familiar; the same principles apply whichever DOM manipulation library, if any, is used.) This script file, along with the jQuery dependency, is referenced at the bottom of our HTML file. Next we attach the desired functionality -- hiding entries which do not match the input -- to this newly created field: // main.js var contacts = jQuery("ul.contacts"); jQuery('<input type="search" />').insertBefore(contacts).on("keyup", onFilter); function onFilter(ev) { var filterField = jQuery(this); var contacts = filterField.next(); var input = filterField.val(); var names = contacts.find("li .p-name"); names.each(function(i, node) { var el = jQuery(node); var name = el.text(); var match = name.indexOf(input) === 0; var contact = el.closest(".h-card"); if(match) { contact.show(); } else { contact.hide(); } }); } (Referencing a separate, named function rather than defining an anonymous function inline often makes callbacks more manageable.) Note that this event handler relies on a particular DOM environment of the element triggering that event (mapped to the execution context this here). From this element we traverse the DOM to access the list of contacts and find all elements containing a name within (as defined by the microformat semantics). When a name does not begin with the current input, we hide the respective container element (traversing upwards again), otherwise we ensure it's visible. Testing This already provides the basic functionality we asked for -- so it's a good time to solidify that by writing a test2. In this example we will be using QUnit. We begin with a minimal HTML page to serve as entry point for our test suite. Of course we also need to reference our code along with its dependencies (in this case, jQuery), just like in the regular HTML page we created previously. <!-- test/index.html --> <div id="qunit"></div> <div id="qunit-fixture"></div> <script src="jquery.js"></script> <script src="../main.js"></script> <script src="qunit.js"></script> With that infrastructure in place, we can add some sample data -- a list of h-cards, i.e. the same HTML structure we started out with -- to the #qunit-fixture element. This element is reset for each test, providing a clean slate and thus avoiding side effects. Our first test ensures that the widget was initialized properly and that filtering works as expected, hiding DOM elements that don't match the simulated input: // test/test_filtering.js QUnit.module("contacts filtering", { setup: function() { // cache common elements on the module object this.fixtures = jQuery("#qunit-fixture"); this.contacts = jQuery("ul.contacts", this.fixtures); } }); QUnit.test("filtering by initials", function() { var filterField = jQuery("input[type=search]", this.fixtures); QUnit.strictEqual(filterField.length, 1); var names = extractNames(this.contacts.find("li:visible")); QUnit.deepEqual(names, ["Jake Archibald", "Christian Heilmann", "John Resig", "Nicholas Zakas"]); filterField.val("J").trigger("keyup"); // simulate user input var names = extractNames(this.contacts.find("li:visible")); QUnit.deepEqual(names, ["Jake Archibald", "John Resig"]); }); function extractNames(contactNodes) { return jQuery.map(contactNodes, function(contact) { return jQuery(".p-name", contact).text(); }); } (strictEqual avoids JavaScript's type coercion, thus preventing subtle errors.) After amending our test suite with a reference to this test file (below the QUnit reference), opening the suite in the browser should tell us that all tests passed: Animations While our widget works fairly well, it isn't very attractive yet, so let's add some simple animations. jQuery makes this very easy: We just have to replace show and hide with slideUp and slideDown, respectively. This significantly improves the user experience for our modest example. However, re-running the test suite, it now claims that filtering did not work, with all four contacts still being displayed: This is because animations (just like AJAX operations) are asynchronous, so the filtering results are checked before the animation has completed. We can use QUnit's asyncTest to defer that check accordingly: // test/test_filtering.js QUnit.asyncTest("filtering by initials", 3, function() { // expect 3 assertions // ... filterField.val("J").trigger("keyup"); // simulate user input var contacts = this.contacts; setTimeout(function() { // defer checks until animation has completed var names = extractNames(contacts.find("li:visible")); QUnit.deepEqual(names, ["Jake Archibald", "John Resig"]); QUnit.start(); // resumes test execution }, 500); }); Since checking the test suite in the browser can become tedious, we can use PhantomJS, a headless browser, along with the QUnit runner to automate the process and emit results on the console: $ phantomjs runner.js test/index.html Took 545ms to run 3 tests. 3 passed, 0 failed. This also makes it easy to automate tests via continuous integration. (Though of course it doesn't cover cross-browser issues since PhantomJS uses WebKit only. However, there are headless browsers for Firefox's Gecko and for Internet Explorer's Trident engines as well.) Containment So far our code is functional, but not very elegant: For starters, it litters the global namespace with two variables - contacts and onFilter - since browsers do not execute JavaScript files in isolated scope. However, we can do that ourselves to prevent leaking into the global scope. Since functions are the only scoping mechanism in JavaScript, we simply wrap an anonymous function around the entire file and then call this function at the bottom: (function() { var contacts = jQuery("ul.contacts"); jQuery('<input type="search" />').insertBefore(contacts).on("keyup", onFilter); function onFilter(ev) { // ... } }()); This is known as an immediately invoked function expression (IIFE). Effectively, we now have private variables within a self-contained module. We can take this one step further to ensure we don't accidentally introduce new global variables by forgetting a var declaration. For this we activate strict mode, which protects against a host of common traps3: (function() { "use strict"; // NB: must be the very first statement within the function // ... }()); Specifying this within an IIFE wrapper ensures that it only applies to modules where it was explicitly requested. Since we now have module-local variables, we can also use this to introduce local aliases for convenience - for example in our tests: // test/test_filtering.js (function($) { "use strict"; var strictEqual = QUnit.strictEqual; // ... var filterField = $("input[type=search]", this.fixtures); strictEqual(filterField.length, 1); }(jQuery)); We now have two shortcuts - $ and strictEqual, the former being defined via an IIFE argument - which are valid only within this module. Widget API While our code is fairly well structured now, the widget is initialized automatically on startup, i.e. whenever the code is first loaded. This makes it difficult to reason about and also prevents dynamic (re)initialization, e.g. on different or newly created elements. Remedying this simply requires putting the existing initialization code into a function: // widget.js window.createFilterWidget = function(contactList) { $('<input type="search" />').insertBefore(contactList). on("keyup", onFilter); }; This way we have decoupled the widget's functionality from its life cycle within the respective application. Thus responsibility for initialization is shifted to the application -- or, in our case, the test harness -- which usually means a tiny bit of "glue code" to manage widgets within the application's context. Note that we're explicitly attaching our function to the global window, as that's the simplest way to make functionality accessible outside our IIFE. However, this couples the module internals to a particular, implicit context: window might not always be the global object (e.g.in Node.js). A more elegant approach is to be explicit about which parts are exposed to the outside and to bundle that information in one place. For this we can take advantage of our IIFE once more: Since it is just a function, we simply return the public parts -- i.e. our API -- at the bottom and assign that return value to a variable in the outer (global) scope: // widget.js var CONTACTSFILTER = (function($) { function createFilterWidget(contactList) { // ... } // ... return createFilterWidget; }(jQuery)); This is known as the revealing module pattern. The use of capitals is a convention to highlight global variables. Encapsulating State At this point, our widget is both functional and reasonably structured, with a proper API. However, introducing additional functionality in the same fashion - purely based on combining mutually independent functions - can easily lead to chaos. This is particularly relevant for UI components where state is an important factor. In our example, we want to allow users to decide whether the filtering should be case-sensitive, so we add a checkbox and extend our event handler accordingly: // widget.js var caseSwitch = $('<input type="checkbox" />'); // ... function onFilter(ev) { var filterField = $(this); // ... var caseSwitch = filterField.prev().find("input:checkbox"); var caseSensitive = caseSwitch.prop("checked"); if(!caseSensitive) { input = input.toLowerCase(); } // ... } This further increases reliance on the particular DOM context in order to reconnect to the widget's elements within the event handler. One option is to move that discovery into a separate function which determines the component parts based on the given context. A more conventional option is the object-oriented approach. (JavaScript lends itself to both functional and object-oriented4 programming, allowing the developer to choose whichever style is best suited for the given task.) Thus we can rewrite our widget to spawn an instance which keeps track of all its components: // widget.js function FilterWidget(contactList) { this.contacts = contactList; this.filterField = $('<input type="search" />').insertBefore(contactList); this.caseSwitch = $('<input type="checkbox" />'); } This changes the API slightly, but significantly: Instead of calling createFilterWidget(...), we now initialize our widget with new FilterWidget(...) - which invokes the constructor in the context of a newly-created object (this). In order to highlight the need for the new operator, constructor names are capitalized by convention (much like class names in other languages).5 Of course we also need to port the functionality to this new scheme - starting with a method to hide contacts based on the given input, which closely resembles the functionality previously found in onFilter: // widget.js FilterWidget.prototype.filterContacts = function(value) { var names = this.contacts.find("li .p-name"); var self = this; names.each(function(i, node) { var el = $(node); var name = el.text(); var contact = el.closest(".h-card"); var match = startsWith(name, input, self.caseSensitive); if(match) { contact.show(); } else { container.hide(); } }); } (Here self is used to make this accessible within the scope of the each callback, which has its own this and thus cannot access the outer scope's directly. Thus referencing self from the inner scope creates a closure.) Note how this filterContacts method, rather than performing context-dependent DOM discovery, simply references elements previously defined in the constructor. String matching has been extracted into a separate general-purpose function -- illustrating that not everything necessarily needs to become an object method: function startsWith(str, value, caseSensitive) { if(!caseSensitive) { str = str.toLowerCase(); value = value.toLowerCase(); } return str.indexOf(value) === 0; } Next we attach event handlers, without which this functionality would never be triggered: // widget.js function FilterWidget(contactList) { // ... this.filterField.on("keyup", this.onFilter); this.caseSwitch.on("change", this.onToggle); } FilterWidget.prototype.onFilter = function(ev) { var input = this.filterField.val(); this.filterContacts(input); }; FilterWidget.prototype.onToggle = function(ev) { this.caseSensitive = this.caseSwitch.prop("checked"); }; Running our tests - which, apart from the minor API change above, should not require any adjustments - will reveal an error here, as thisis not what we might expect it to be. We've already learned that event handlers are invoked with the respective DOM element as execution context, so we need to work around that in order to provide access to the widget instance. For this we can take advantage of closures to remap the execution context: // widget.js function FilterWidget(contactList) { // ... var self = this; this.filterField.on("keyup", function(ev) { var handler = self.onFilter; return handler.call(self, ev); }); } (call is a built-in method to invoke any function in the context of an arbitrary object, with the first argument corresponding to this within that function. Alternatively apply might be used in combination with the implicit arguments variable to avoid explicitly referencing individual arguments within this indirection: handler.apply(self, arguments).6) The end result is a widget where each function has a clear, well-encapsulated responsibility. jQuery API When using jQuery, the current API seems somewhat inelegant. We can add a a thin wrapper to provide an alternative API that feels more natural to jQuery developers: jQuery.fn.contactsFilter = function() { this.each(function(i, node) { new CONTACTSFILTER(node); }); return this; }; (A more elaborate contemplation is provided by jQuery's own plugin guide.) Thus we can use jQuery("ul.contacts").contactsFilter(), while keeping this as a separate layer ensures that we're not tied to this particular ecosystem; future versions might provide additional API wrappers for different ecosystems or even decide to remove or replace jQuery as a dependency. (Of course in our case, abandoning jQuery would also mean we'd have to rewrite the internals accordingly.) Conclusion and Outlook Hopefully this article managed to convey some of the key principles of writing maintainable JavaScript components. Of course not every component should follow this exact pattern, but the concepts presented here should provide the necessary toolkit essential to any such component. Further enhancements might use the Asynchronous Module Definition (AMD), which improves encapsulation and makes explicit any dependencies between modules, thus allowing for loading code on demand - e.g. via RequireJS. In addition, there are exciting new developments on the horizon: The next version of JavaScript (officially ECMAScript 6) will introduce a language-level module system, though as with any new feature, widespread availability depends on browser support. Similarly, Web Components is an upcoming set of browser APIs intended to improve encapsulation and maintainability - many of which can be experimented with today using Polymer. Though how well Web Components fare with progressive enhancement remains to be seen. 1This is less of an issue for single-page applications, as the respective roles of server and client are very different in this context. However, a juxtaposition of these approaches is beyond the scope of this article. 2Arguably the test(s) might have been written first. 3JSLint can additionally be used to protect against such and other common issues. In our repository we're using JSLint Reporter. 4JavaScript uses prototypes rather than classes - the main difference being that whereas classes are usually "special" in some way, here any object can act as a prototype and can thus be used as a template for creating new instances. For the purposes of this article, the difference is negligible. 5Modern versions of JavaScript introduced Object.create as an alternative to the "pseudo-classical" syntax. The core principles of prototypal inheritance remain the same. 6jQuery.proxy might be used to shorten this to this.filterField.on("keyup", $.proxy(self, "onFilter")); About the Author Frederik Dohr started his career as a reluctant web developer hacking on TiddlyWiki, sometimes called the original single-page application. After a few years of working with a bunch of clever folks at Osmosoft, BT's open source innovation team, he left London to return to Germany. He now works for innoQ, where he continues his vocal quest for simplicity while gaining a whole new perspective on developing with, for and on the web. Rate this Article - Editor Review - Chief Editor Action
https://www.infoq.com/articles/modular-javascript
CC-MAIN-2018-34
en
refinedweb
This is the second part of my presentation in SilverKey Demo Day 2 (SKDD 2) last July, for the first part, Check Here, you will also find the presentation slides & samples attached in Part 1 or you can download from here So let’s start Let’s check our Agedna In Part 1 we discussed what is SOA and the concepts behind it, also we had a walk through different approaches to apply SOA in our applications in terms of standards and patterns. In Part 2 we are going to discuss WCF or (Windows Communication Foundation) which is Microsoft new framework to unify communication mechanisms and provide an infrastructure for our SOA applications. As we’ve just mentioned WCF stands for Windows Communication Foundation and also it used to be called Indigo. WCF was shipped with Windows Vista in November 2006 as part of .net framework 3.0, however it is also available for Windows 2000, XP and 2003. WCF is all about providing an unified infrastructure that handles all communication scenarios and protocols, which includes different transport, messaging patterns, encoding …. etc. WCF also playing a great role in making the life easier for .net developer, so the messages exchanged between different services are exposed as CLR types (classes), and also the services are represented as interfaces and classes, so it provides a good and very high level of abstraction. So as one sentence you can say that WCF is the infrastructure for building Services. As we said WCF unifies the communication techniques, but how ? Before the WCF days, we used to have different technologies for different communication problem, so if you are doing TCP communication you will be using Sockets for example, and if you are providing online services over HTTP, you will be using Web Services, and if you are doing message queues you will be using MSMQ … etc So this was good, however it has some drawbacks. - You have to learn different models for every communication technique. - You can’t change the communication technique you are using in a certain application to another without massive changes in your code. So what WCF provides is a One Right Solution for all problems, which saves your time and effort, also it comes with a full fledged set of goodies such as Tracing, Logging which can be used regardless of the communication technique you are using. One of WCF great benefits that it provides a very handy extensible model, that you can add support for new protocols easily, or even slight changes to current protocols like adding a new encoding or encryption algorithm.or a new tracing mechanism. Another thing that WCF is supporting almost all the WS-* standards like WS-Addressing, WS-AtomicTransaction which makes the messages generated from WCF runtime interoperable with any current or future application built with same standard messaging. and The most elegant feature of WCF, that all the configuration of the services and protocol specific properties can be configured using XML config files outside the application, which is a great feature that allows administrators to be part of the game, so they can change stuff like the impersonation options, and security settings, enabling and disabling trace listeners and logging, without the need to ask developers to do it, which is somehow similar to ASP.net configuration. Also this config file can be edited using a UI tool.. For architects it is really easy to stay in the high level design and forget about implementation details and technology limitations. So let’s have a sample service In this demo we will be discussing the basic elements in any WCF application. The most basic three elements are - Address - Binding - Contract They are also called the ABC’s of WCF, because every service has to define these three elements. We will be discussing everyone of these elements in details, let’s now see how a Service is defined in WCF. in steps we will discuss how to build a simple News service. 1. Define the Contracts In this step we will define the different contracts of the service the first type is Service Contracts [ServiceContract(Namespace="com.silverkeytech.news")] public interface INewsService { [OperationContract(IsOneWay=false)] Article[] GetArticles(); } As you see the Service Definition is just an interface (Contract) which is decorated with an attribute called ServiceContract. This interface defines the different functions provided by the service in terms of inputs/outputs and exchange patterns. Every service function is represented in a form of interface method, and the inputs of the service function are represented as inputs to the method and also the output is represented as return type of the method. The exchange pattern is by default Request/Response which is represented using the Attribute OperationContract with the IsOneWay property set to false, if it was set to true this means that this service function is a fire & forget type of service, which the service clients will not expect any response for the messages sent to this functions. Data Contracts Also if you noticed in our method return type is a complex type Article this is not a primitive type in .net, it is a user defined type which has to be defined like this. [DataContract] public class Article { string title; string details; [DataMember] public string Title { get { return title; } set { title = value; } } [DataMember] public string Details { get { return details; } set { details = value; } } } As you can see it is a simple data class that is decorated with a DataContract attribute, so it tells the WCF runtime that this class will be converted to a message that will be sent or received by one or more service functions, also every field or property of interest has to be decorated with another attribute called DataMember. 2. Implement the Service Now we need to provide the implementation of the Service Contract interface, which will do the actual job of the service, all what we need is to implement the interface in a class like this. public class NewsService : INewsService { public Article[] GetArticles() { List<Article> articles = new List<Article>(); for (int i = 0; i < 10; i++) { Article article = new Article(); article.Title = "Title " + i.ToString(); article.Details = "Details " + i.ToString(); articles.Add(article); } return articles.ToArray(); } } So the class NewsService implements the interface INewsService and provides an implementation of the service that does the actual job of how the service will handle the incoming requests. 3. Hosting the Service Now after we implemented the service, we have to host it on an application, there are different choices depending on your need, you can host the service on Windows Applications (Console, Windows Forms, Windows Services) or Web Applications (IIS), for simplicity I will host my service on a console application. namespace Host { class Program { static void Main(string[] args) { using (ServiceHost host = new ServiceHost(typeof(NewsService))) { host.Open(); Console.ReadKey(); } } } } As you see, the host is very simple, all the hard work is done in the ServiceHost class which its constructor requires the type of the class that implements my service, in our case NewsService class. Then we call the Open method to allow the service to accept any incoming requests and process them, the last Console.ReadKey is just for not allowing the application to exit unless somebody presses any key. 4. Configuring the Service After we hosted the service we need to configure this service, we need to write how the service is going to interact with its clients, this will be done using the App.Config file of the host <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <services> <service name="BeebcNews.NewsService"> <endpoint address="EndPoint1" binding="basicHttpBinding" bindingConfiguration="" name="HttpEndpoint" contract="BeebcNews.INewsService" /> <host> <baseAddresses> <add baseAddress="" /> </baseAddresses> </host> </service> </services> </system.serviceModel> </configuration> This is sample configuration file that is used to configure the NewsService we’ve just implemented 5. Consuming The Service After we host and deploy our service, we need to consume it, which mean we need to write client applications that sends and receives requests and responses from the service. Generating Proxy Class So for the client application to be able to use the service we need to use a proxy class, the proxy class is generated using a tool called svcutil.exe this tool can be applied either on the Service dll or on the WSDL file url of the service. so let’s assume our NewsService dll is called MyLibrary.dll We will write this command in the command prompt. svcutil.exe MyLibrary.dll This command will generate the XSD files & WSDL file for the types and services inside the library, there are more switches can be used for more specific options. Then after we generate the XSD, WSDL of the wcf service we apply this command on the generated files. svcutil.exe *.xsd *.wsdl /language:C# This command will use the XSD, WSDL file we generated with the first command to generate a proxy class in C# and an XML config file. Then we can use the new files to write our client applications. Another way of generating the Proxy classes is to switch on the MetaData publishing for the service, so the service will have a http url that contains the WSDL file, then we can use this command svcutil.exe http://[the service url] and this will generate the C# proxy and the config file Implementing the Client Application namespace Client { class Program { static void Main(string[] args) { NewsServiceClient client = new NewsServiceClient("HttpEndpoint"); Article[] myArticles = client.GetArticles(); foreach(Article a in myArticles) Console.WriteLine(a.Title); Console.ReadKey(); } } } The previous code is a simple example of a client application that uses the generated proxy class we’ve just generated that generated the NewsServiceClient class, which has the same functions provided by the INewsService interface so we can call them and the WCF will handle the communication between the client and server. Now let’s have an overview of the architecture of WCF, or let’s call it the journey of the incoming message to the service and how it is going to be handled. The architecture of WCF is basically a pipeline, the message arrives on the transport layer which is going to pass it to the next layer to be decoded and then it passes through different protocol specific layers till it reaches the dispatcher, which is going to decide this message belongs to which method in the running instance of the service, and then deliver it to the method to be processed then the return value (if any) takes a similar path down the pipeline back to the original sender. This pipeline is configured using the Configuration and the Contract, the configuration defines the layers in the WCF pipeline and the Contract helps the Dispatcher to find the destination of the message. If you take a look on the XML Configuration in the previous example, you will find that under the Service element there is something called EndPoint The EndPoint is the container that has the three basic elements of the service we mentioned before (Address, Binding & Contract). So in our previous example the Address was EndPoint1, which is going to be concatenated with the baseAddress of the Service in our case “”so the result address will be, then the Binding which was “basicHttpBinding” which is the simply SOAP messages over HTTP using text encoding. and the Contract which has the full name of the INewsService interface which defines our service contract. Now let’s see each Endpoint element in details The Address Defines Where the endpoint resides The Binding Defines How the service is communicating with the outside world. The Contract Defines What the service will offer to its clients. Before we end this presentation, I would like to highlight that WCF 2 is on the way, it will be shipped with next version of Visual Studio 2008 and .net framework 3.5 The new version of WCF will have better support for Webby style services such as REST style services and also supports different encodings like RSS, ATOM and JSON. Also the huge change will be that WCF and WF are merged together in what is called Silver project, which will enable developers to develop WCF service with Workflow as an implementation. These are some resources for more information about WCF and SOA also some books for further information. I hope you read this line, which means that you kept reading the whole presentation, I hope it was useful for you, if you have any questions write them as a comment to this blog post and I will be answering them immediately.
https://blogs.msdn.microsoft.com/bashmohandes/2007/09/17/soa-via-wcf-part-2/
CC-MAIN-2016-36
en
refinedweb
\input texinfo @setfilename cpp.info @settitle The C Preprocessor @ifinfo @dircategory Programming @direntry * Cpp: (cpp). The GNU C preprocessor. @end direntry @end ifinfo @c @smallbook @c @cropmarks @c @finalout @setchapternewpage odd @ifinfo This file documents the GNU C Preprocessor. Copyright 1987, 1989, 1991, 1992, 1993, 1994, 1995, 1997, 1998 @titlepage @c @finalout @title The C Preprocessor @subtitle Last revised September 1998 @subtitle for GCC version 2 @author Richard M. Stallman @page @vskip 2pc This booklet is eventually intended to form the first chapter of a GNU C Language manual. @vskip 0pt plus 1filll Copyright @copyright{} 1987, 1989, 1991 @page @node Top, Global Actions,, (DIR) @chapter The C Preprocessor The C preprocessor is a @dfn{macro processor} that is used automatically by the C compiler to transform your program before actual compilation. It is called a macro processor because it allows you to define @dfn{macros}, which are brief abbreviations for longer constructs. The C preprocessor provides four separate facilities that you can use as you see fit: @itemize @bullet @item Inclusion of header files. These are files of declarations that can be substituted into your program. @item Macro expansion. You can define @dfn{macros}, which are abbreviations for arbitrary fragments of C code, and then the C preprocessor will replace the macros with their definitions throughout the program. @item Conditional compilation. Using special preprocessing directives, you can include or exclude parts of the program according to various conditions. @item Line control. If you use a program to combine or rearrange source files into an intermediate file which is then compiled, you can use line control to inform the compiler of where each source line originally came from. @end itemize C preprocessors vary in some details. This manual discusses the GNU C preprocessor, the C Compatible Compiler Preprocessor. The GNU C preprocessor provides a superset of the features of ANSI Standard C@. ANSI Standard C requires the rejection of many harmless constructs commonly used by today's C programs. Such incompatibility would be inconvenient for users, so the GNU C preprocessor is configured to accept these constructs by default. Strictly speaking, to get ANSI Standard C, you must use the options @samp{-trigraphs}, @samp{-undef} and @samp{-pedantic}, but in practice the consequences of having strict ANSI Standard C make it undesirable to do this. @xref{Invocation}.. @menu * Global Actions:: Actions made uniformly on all input files. * Directives:: General syntax of preprocessing directives. * Header Files:: How and why to use header files. * Macros:: How and why to use macros. * Conditionals:: How and why to use conditionals. * Combining Sources:: Use of line control when you combine source files. * Other Directives:: Miscellaneous preprocessing directives. * Output:: Format of output from the C preprocessor. * Invocation:: How to invoke the preprocessor; command options. * Concept Index:: Index of concepts and terms. * Index:: Index of directives, predefined macros and options. @end menu @node Global Actions, Directives, Top, Top @section Transformations Made Globally Most C preprocessor features are inactive unless you give specific directives to request their use. (Preprocessing directives are lines starting with @samp{#}; @pxref{Directives}). But there are three transformations that the preprocessor always makes on all the input it receives, even in the absence of directives. @itemize @bullet @item All C comments are replaced with single spaces. @item Backslash-Newline sequences are deleted, no matter where. This feature allows you to break long lines for cosmetic purposes without changing their meaning. @item Predefined macro names are replaced with their expansions (@pxref{Predefined}). @end itemize The first two transformations are done @emph{before} nearly all other parsing and before preprocessing directives are recognized. Thus, for example, you can split a line cosmetically with Backslash-Newline anywhere (except when trigraphs are in use; see below). @example /* */ # /* */ defi\ ne FO\ O 10\ 20 @end example @noindent is equivalent into @samp{#define FOO 1020}. You can split even an escape sequence with Backslash-Newline. For example, you can split @code{"foo\bar"} between the @samp{\} and the @samp{b} to get @example "foo\\ bar" @end example @noindent This behavior is unclean: in all other contexts, a Backslash can be inserted in a string constant as an ordinary. @itemize @bullet @item C comments and predefined macro names are not recognized inside a @samp{#include} directive in which the file name is delimited with @samp{<} and @samp{>}. @item C comments and predefined macro names are never recognized within a character or string constant. (Strictly speaking, this is the rule, not an exception, but it is worth noting here anyway.) @item Backslash-Newline may not safely be used within an ANSI ``trigraph''. Trigraphs are converted before Backslash-Newline is deleted. If you write what looks like a trigraph with a Backslash-Newline inside, the Backslash-Newline is deleted as usual, but it is then too late to recognize the trigraph. This exception is relevant only if you use the @samp{-trigraphs} option to enable trigraph processing. @xref{Invocation}. @end itemize @node Directives, Header Files, Global Actions, Top @section Preprocessing Directives @cindex preprocessing directives @cindex directives Most preprocessor features are active only if you use preprocessing directives to request their use. Preprocessing directives are lines in your program that start with @samp{#}. The @samp{#} is followed by an identifier that is the @dfn{directive name}. For example, @samp{#define} is the directive that defines a macro. Whitespace is also allowed before and after the @samp{#}. The set of valid directive names is fixed. Programs cannot define new preprocessing directives. Some directive names require arguments; these make up the rest of the directive line and must be separated from the directive name by whitespace. For example, @samp{#define} must be followed by a macro name and the intended expansion of the macro. @xref @samp{#} and the directive name cannot come from a macro expansion. For example, if @samp{foo} is defined as a macro expanding to @samp{define}, that does not make @samp{#foo} a valid preprocessing directive. @node Header Files, Macros, Directives, Top @section Header Files @cindex header file A header file is a file containing C declarations and macro definitions (@pxref{Macros}) to be shared between several source files. You request the use of a header file in your program with the C preprocessing directive @samp{#include}. @menu * Header Uses:: What header files are used for. * Include Syntax:: How to write @samp{#include} directives. * Include Operation:: What @samp{#include} does. * Once-Only:: Preventing multiple inclusion of one header file. * Inheritance:: Including one header file in another header file. @end menu @node Header Uses, Include Syntax, Header Files, Header Files @subsection Uses of Header Files Header files serve two kinds of purposes. @itemize @bullet @item @findex in C compilation as copying the header file into each source file that needs it.. The usual convention is to give header files names that end with @file{.h}. Avoid unusual characters in header file names, as they reduce portability. @node Include Syntax, Include Operation, Header Uses, Header Files @subsection The @samp{#include} Directive @findex #include Both user and system header files are included using the preprocessing directive @samp{#include}. It has three variants: @table @code @item #include <@var{file}> This variant is used for system header files. It searches for a file named @var{file} in a list of directories specified by you, then in a standard list of system directories. You specify directories to search for header files with the command option @samp{-I} (@pxref{Invocation}). The option @samp{-nostdinc} inhibits searching the standard system directories; in this case only the directories you specify are searched. The parsing of this form of @samp{#include} is slightly special because comments are not recognized within the @samp{<@dots{}>}. Thus, in @samp{#include <x/*y>} the @samp{/*} does not start a comment and the directive specifies inclusion of a system header file named @file{x/*y}. Of course, a header file with such a name is unlikely to exist on Unix, where shell wildcard features would make it hard to manipulate.@refill The argument @var{file} may not contain a @samp{>} character. It may, however, contain a @samp{<} character. @item #include "@var{file}" This variant is used for header files of your own program. It searches for a file named @var{file} first in the current directory, then in the same directories used for system header files. The current directory is the directory of the current input file. It is tried first because it is presumed to be the location of the files that the current input file refers to. (If the @samp{-I-} option is used, the special treatment of the current directory is inhibited.) The argument @var{file} may not contain @samp{"} characters. If backslashes occur within @var{file}, they are considered ordinary text characters, not escape characters. None of the character escape sequences appropriate to string constants in C are processed. Thus, @samp{#include "x\n\\y"} specifies a filename containing three backslashes. It is not clear why this behavior is ever useful, but the ANSI standard specifies it. @item #include @var{anything else} @cindex computed @samp{#include} This variant is called a @dfn{computed #include}. Any @samp{#include} directive whose argument does not fit the above two forms is a computed include. The text @var{anything else} is checked for macro calls, which are expanded (@pxref{Macros}). When this is done, the result must fit one of the above two variants---in particular, the expanded text must in the end be surrounded by either quotes or angle braces.. @end table @node Include Operation, Once-Only, Include Syntax, Header Files @subsection How @samp{#include} Works, given a header file @file{header.h} as follows, @example char *test (); @end example @noindent and a main program called @file{program.c} that uses the header file, like this, @example int x; #include "header.h" main () @{ printf (test ()); @} @end example @noindent the output generated by the C preprocessor for @file{program.c} as input would be @example int x; char *test (); main () @{ printf (test ()); @} @end example. It is possible for a header file to begin or end a syntactic unit such as a function definition, but that would be very confusing, so don't do it. The line following the @samp{#include} directive is always treated as a separate line by the C preprocessor even if the included file lacks a final newline. @node Once-Only, Inheritance, Include Operation, Header Files @subsection Once-Only Include Files @cindex repeated inclusion @cindex including just once Very often, one header file includes another. It can easily result that a certain header file is included more than once. This may lead to errors, if the header file defines structure types or typedefs, and is certainly wasteful. Therefore, we often wish to prevent multiple inclusion of a header file. The standard way to do this is to enclose the entire real contents of the file in a conditional, like this: @example #ifndef FILE_FOO_SEEN #define FILE_FOO_SEEN @var{the entire file} #endif /* FILE_FOO_SEEN */ @end example The macro @code{FILE_FOO_SEEN} indicates that the file has been included once already. In a user header file, the macro name should not begin with @samp{_}. In a system header file, this name should begin with @samp{__} @samp{#ifndef} conditional, then it records that fact. If a subsequent @samp{#include} specifies the same file, and the macro in the @samp{#ifndef} is already defined, then the file is entirely skipped, without even reading it. @findex #pragma once There is also an explicit directive to tell the preprocessor that it need not include a file more than once. This is called @samp{#pragma once}, and was used @emph{in addition to} the @samp{#ifndef} conditional around the contents of the header file. @samp{#pragma once} is now obsolete and should not be used at all. @findex #import In the Objective C language, there is a variant of @samp{#include} called @samp{#import} which includes a file, but does so at most once. If you use @samp{#import} @emph{instead of} @samp{#include}, then you don't need the conditionals inside the header file to prevent multiple execution of the contents. @samp{#import} is obsolete because it is not a well designed feature. It requires the users of a header file---the applications programmers---to know that a certain header file should only be included once. It is much better for the header file's implementor to write the file so that users don't need to know this. Using @samp{#ifndef} accomplishes this goal. @node Inheritance,, Once-Only, Header Files @subsection Inheritance and Header Files @cindex inheritance @cindex overriding a header file @dfn{Inheritance} is what happens when one object or file derives some of its contents by virtual copying from another object or file. In the case of C header files, inheritance means that one header file includes another header file and then replaces or adds something. If the inheriting header file and the base header file have different names, then inheritance is straightforward: simply write @samp{#include "@var{base}"} in the inheriting file. Sometimes it is necessary to give the inheriting file the same name as the base file. This is less straightforward. For example, suppose an application program uses the system header @file{sys/signal.h}, but the version of @file{/usr/include/sys/signal.h} on a particular system doesn't do what the application program expects. It might be convenient to define a ``local'' version, perhaps under the name @file{/usr/local/include/sys/signal.h}, to override or add to the one supplied by the system. You can do this by compiling with the option @samp{-I.}, and writing a file @file{sys/signal.h} that does what the application program expects. But making this file include the standard @file{sys/signal.h} is not so easy---writing @samp{#include <sys/signal.h>} in that file doesn't work, because it includes your own version of the file, not the standard system version. Used in that file itself, this leads to an infinite recursion and a fatal error in compilation. @samp{. @findex #include_next The clean way to solve this problem is to use @samp{#include_next}, which means, ``Include the @emph{next} file with this name.'' This directive works like @samp{#include} except in searching for the specified file: it starts searching the list of header file directories @emph{after} the directory in which the current file was found. Suppose you specify @samp{-I /usr/local/include}, and the list of directories to search also includes @file{/usr/include}; and suppose both directories contain @file{sys/signal.h}. Ordinary @samp{#include <sys/signal.h>} finds the file under @file{/usr/local/include}. If that file contains @samp{#include_next <sys/signal.h>}, it starts searching after that directory, and finds the file in @file{/usr/include}. @node Macros, Conditionals, Header Files, Top @section Macros A macro is a sort of abbreviation which you can define once and then use later. There are many complicated features associated with macros in the C preprocessor. @menu * Simple Macros:: Macros that always expand the same way. * Argument Macros:: Macros that accept arguments that are substituted into the macro expansion. * Predefined:: Predefined macros that are always available. * Stringification:: Macro arguments converted into string constants. * Concatenation:: Building tokens from parts taken from macro arguments. * Undefining:: Cancelling a macro's definition. * Redefining:: Changing a macro's definition. * Macro Pitfalls:: Macros can confuse the unwary. Here we explain several common problems and strange features. @end menu @node Simple Macros, Argument Macros, Macros, Macros @subsection Simple Macros @cindex simple macro @cindex manifest constant A @dfn{simple macro} is a kind of abbreviation. It is a name which stands for a fragment of code. Some people refer to these as @dfn{manifest constants}. Before you can use a macro, you must @dfn{define} it explicitly with the @samp{#define} directive. @samp{#define} is followed by the name of the macro and then the code it should be an abbreviation for. For example, @example #define BUFFER_SIZE 1020 @end example @noindent defines a macro named @samp{BUFFER_SIZE} as an abbreviation for the text @samp{1020}. If somewhere after this @samp{#define} directive there comes a C statement of the form @example foo = (char *) xmalloc (BUFFER_SIZE); @end example @noindent then the C preprocessor will recognize and @dfn{expand} the macro @samp{BUFFER_SIZE}, resulting in @example foo = (char *) xmalloc (1020); @end example that ends the string or character constant. Comments within a macro definition may contain Newlines, which make no difference since the comments are entirely replaced with Spaces regardless of their contents. Aside from the above, there is no restriction on what can go in a macro body. Parentheses need not balance. The body need not resemble valid C code. (But if it does not, you may get error messages from the C compiler when you use the macro.) The C preprocessor scans your program sequentially, so macro definitions take effect at the place you write them. Therefore, the following input to the C preprocessor @example foo = X; #define X 4 bar = X; @end example @noindent produces as output @example foo = X; bar = 4; @end example After the preprocessor expands a macro name, the macro's definition body is appended to the front of the remaining input, and the check for macro calls continues. Therefore, the macro body can contain calls to other macros. For example, after @example #define BUFSIZE 1020 #define TABLESIZE BUFSIZE @end example @noindent the name @samp{TABLESIZE} when used in the program would go through two stages of expansion, resulting ultimately in @samp{1020}.. @xref{Cascaded Macros}. @node Argument Macros, Predefined, Simple Macros, Macros @subsection Macros with Arguments @cindex macros with argument @cindex arguments in macro definitions @cindex function-like macro A simple macro always stands for exactly the same text, each time it is used. Macros can be more flexible when they accept @dfn{arguments}. Arguments are fragments of code that you supply each time the macro is used. These fragments are included in the expansion of the macro according to the directions in the macro definition. A macro that accepts arguments is called a @dfn{function-like macro} because the syntax for using it looks like a function call. @findex #define To define a macro that uses arguments, you write a @samp{#define} directive with a list of @dfn{argument names} in parentheses after the name of the macro. The argument names may be any valid C identifiers, separated by commas and optionally whitespace. The open-parenthesis must follow the macro name immediately, with no space in between. For example, here is a macro that computes the minimum of two numeric values, as it is defined in many C programs: @example #define min(X, Y) ((X) < (Y) ? (X) : (Y)) @end example @noindent (This is not the best way to define a ``minimum'' macro in GNU C@. @xref{Side Effects}, for more information.) To use a macro that expects arguments, you write the name of the macro followed by a list of @dfn{actual arguments} in parentheses, separated by commas. The number of actual arguments you give must match the number of arguments the macro expects. Examples of use of the macro @samp{min} include @samp{min (1, 2)} and @samp{min (x + 28, *p)}. The expansion text of the macro depends on the arguments you use. Each of the argument names of the macro is replaced, throughout the macro definition, with the corresponding actual argument. Using the same macro @samp{min} defined above, @samp{min (1, 2)} expands into @example ((1) < (2) ? (1) : (2)) @end example @noindent where @samp{1} has been substituted for @samp{X} and @samp{2} for @samp{Y}. Likewise, @samp{min (x + 28, *p)} expands into @example ((x + 28) < (*p) ? (x + 28) : (*p)) @end example Parentheses in the actual arguments must balance; a comma within parentheses does not end an argument. However, there is no requirement for brackets or braces to balance, and they do not prevent a comma from separating arguments. Thus, @example macro (array[x = y, x + 1]) @end example @noindent passes two arguments to @code{macro}: @samp{array[x = y} and @samp{x + 1]}. If you want to supply @samp{array[x = y, x + 1]} as an argument, you must write it as @samp{array[(x = y, x + 1)]}, which is equivalent C code. After the actual arguments are substituted into the macro body, the entire result is appended to the front of the remaining input, and the check for macro calls continues. Therefore, the actual arguments can contain calls to other macros, either with or without arguments, or even to the same macro. The macro body can also contain calls to other macros. For example, @samp{min (min (a, b), c)} expands into this text: @example ((((a) < (b) ? (a) : (b))) < (c) ? (((a) < (b) ? (a) : (b))) : (c)) @end example @noindent (Line breaks shown here for clarity would not actually be generated.) @cindex blank macro arguments @cindex space as macro argument If a macro @code{foo} takes one argument, and you want to supply an empty argument, you must write at least some whitespace between the parentheses, like this: @samp{foo ( )}. Just @samp{foo ()} is providing no arguments, which is an error if @code{foo} expects an argument. But @samp{foo0 ()} is the correct way to call a macro defined to take zero arguments, like this: @example #define foo0() @dots{} @end example If you use the macro name followed by something other than an open-parenthesis (after ignoring any spaces, tabs and comments that follow), it is not a call to the macro, and the preprocessor does not change what you have written. Therefore, it is possible for the same name to be a variable or function in your program as well as a macro, and you can choose in each instance whether to refer to the macro (if an actual argument list follows) or the variable or function (if an argument list does not follow). Such dual use of one name could be confusing and should be avoided except when the two meanings are effectively synonymous: that is, when the name is both a macro and a function and the two have similar effects. You can think of the name simply as a function; use of the name for purposes other than calling it (such as, to take the address) will refer to the function, while calls will expand the macro and generate better but equivalent code. For example, you can use a function named @samp{min} in the same source file that defines the macro. If you write @samp{&min} with no argument list, you refer to the function. If you write @samp{min (x, bb)}, with an argument list, the macro is expanded. If you write @samp{(min) (a, bb)}, where the name @samp{min} is not followed by an open-parenthesis, the macro is not expanded, so you wind up with a call to the function @samp{min}. You may not define the same name as both a simple macro and a macro with arguments. In the definition of a macro with arguments, the list of argument names must follow the macro name immediately with no space in between. If there is a space after the macro name, the macro is defined as taking no arguments, and all the rest of the line is taken to be the expansion. The reason for this is that it is often useful to define a macro that takes no arguments and whose definition begins with an identifier in parentheses. This rule about spaces makes it possible for you to do either this: @example #define FOO(x) - 1 / (x) @end example @noindent (which defines @samp{FOO} to take an argument and expand into minus the reciprocal of that argument) or this: @example #define BAR (x) - 1 / (x) @end example @noindent (which defines @samp{BAR} to take no argument and always expand into @samp{(x) - 1 / (x)}). Note that the @emph{uses} of a macro with arguments can have spaces before the left parenthesis; it's the @emph{definition} where it matters whether there is a space. @node Predefined, Stringification, Argument Macros, Macros @subsection Predefined Macros @cindex predefined macros Several simple macros are predefined. You can use them without giving definitions for them. They fall into two classes: standard macros and system-specific macros. @menu * Standard Predefined:: Standard predefined macros. * Nonstandard Predefined:: Nonstandard predefined macros. @end menu @node Standard Predefined, Nonstandard Predefined, Predefined, Predefined @subsubsection Standard Predefined Macros @cindex standard predefined macros The standard predefined macros are available with the same meanings regardless of the machine or operating system on which you are using GNU C@. Their names all start and end with double underscores. Those preceding @code{__GNUC__} in this table are standardized by ANSI C; the rest are GNU C extensions. @table @code @item __FILE__ @findex __FILE__ This macro expands to the name of the current input file, in the form of a C string constant. The precise name returned is the one that was specified in @samp{#include} or as the input file name argument. @item __LINE__ @findex __LINE__ This macro expands to the current input line number, in the form of a decimal integer constant. While we call it a predefined macro, it's a pretty strange macro, since its ``definition'' changes with each new line of source code. This and @samp{__FILE__} A @samp{#include} directive changes the expansions of @samp{__FILE__} and @samp{__LINE__} to correspond to the included file. At the end of that file, when processing resumes on the input file that contained the @samp{#include} directive, the expansions of @samp{__FILE__} and @samp{__LINE__} revert to the values they had before the @samp{#include} (but @samp{__LINE__} is then incremented by one as processing moves to the line after the @samp{#include}). The expansions of both @samp{__FILE__} and @samp{__LINE__} are altered if a @samp{#line} directive is used. @xref{Combining Sources}. @item __DATE__ @findex __DATE__ This macro expands to a string constant that describes the date on which the preprocessor is being run. The string constant contains eleven characters and looks like @w{@samp{"Feb 1 1996"}}. @c After reformatting the above, check that the date remains `Feb 1 1996', @c all on one line, with two spaces between the `Feb' and the `1'. @item __TIME__ @findex __TIME__ This macro expands to a string constant that describes the time at which the preprocessor is being run. The string constant contains eight characters and looks like @samp{"23:59:01"}. @item __STDC__ @findex __STDC__ This macro expands to the constant 1, to signify that this is ANSI Standard C@. (Whether that is actually true depends on what C compiler will operate on the output from the preprocessor.) On some hosts, system include files use a different convention, where @samp{_ @samp{-traditional} option is used. @item __STDC_VERSION__ @findex __STDC_VERSION__ This macro expands to the C Standard's version number, a long integer constant of the form @samp{@var{yyyy}@var{mm}L} where @var{yyyy} and @var{mm} are the year and month of the Standard version. This signifies which version of the C Standard the preprocessor conforms to. Like @samp{__STDC__}, whether this version number is accurate for the entire implementation depends on what C compiler will operate on the output from the preprocessor. This macro is not defined if the @samp{-traditional} option is used. @item __GNUC__ @findex __GNUC__ This macro is defined if and only if this is GNU C@. This macro is defined only when the entire GNU C compiler is in use; if you invoke the preprocessor directly, @samp{__GNUC__} is undefined. The value identifies the major version number of GNU CC (@samp{1} for GNU CC version 1, which is now obsolete, and @samp{2} for version 2). @item __GNUC_MINOR__ @findex __GNUC_MINOR__ The macro contains the minor version number of the compiler. This can be used to work around differences between different releases of the compiler (for example, if gcc 2.6.3 is known to support a feature, you can test for @code{__GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 6)}). The last number, @samp{3} in the example above, denotes the bugfix level of the compiler; no macro contains this value. @item __GNUG__ @findex __GNUG__ The GNU C compiler defines this when the compilation language is C++; use @samp{__GNUG__} to distinguish between GNU C and GNU C++. @item __cplusplus @findex __cplusplus The draft ANSI standard for C++ used to require predefining this variable. Though it is no longer required, GNU C++ continues to define it, as do other popular C++ compilers. You can use @samp{__cplusplus} to test whether a header is compiled by a C compiler or a C++ compiler. @item __STRICT_ANSI__ @findex __STRICT_ANSI__ GNU C defines this macro if and only if the @samp{-ansi} switch was specified when GNU C was invoked. Its definition is the null string. This macro exists primarily to direct certain GNU header files not to define certain traditional Unix constructs which are incompatible with ANSI C@. @item __BASE_FILE__ @findex __BASE_FILE__ This macro expands to the name of the main input file, in the form of a C string constant. This is the source file that was specified as an argument when the C compiler was invoked. @item __INCLUDE_LEVEL__ @findex __INCLUDE_LEVEL_ This macro expands to a decimal integer constant that represents the depth of nesting in include files. The value of this macro is incremented on every @samp{#include} directive and decremented at every end of file. For input files specified by command line arguments, the nesting level is zero. @item __VERSION__ @findex __VERSION__ This macro expands to a string constant which describes the version number of GNU C@. The string is normally a sequence of decimal numbers separated by periods, such as @samp{"2.6.0"}. @item __OPTIMIZE__ @findex __OPTIMIZE__ GNU CC defines this macro in optimizing compilations. It causes certain GNU header files to define alternative macro definitions for some system library functions. You should not refer to or test the definition of this macro unless you make very sure that programs will execute with the same effect regardless. @item __CHAR_UNSIGNED__ @findex __CHAR_UNSIGNED__ GNU C defines this macro if and only if the data type @code{char} is unsigned on the target machine. It exists to cause the standard header file @file{limits.h} to work correctly. You should not refer to this macro yourself; instead, refer to the standard macros defined in @file{limits.h}. The preprocessor uses this macro to determine whether or not to sign-extend large character constants written in octal; see @ref{#if Directive,,The @samp{#if} Directive}. @item __REGISTER_PREFIX__ @findex __REGISTER_PREFIX__ This macro expands to a string (not a string constant) describing the prefix applied to CPU registers in assembler code. You can use it to write assembler code that is usable in multiple environments. For example, in the @samp{m68k-aout} environment it expands to the null string, but in the @samp{m68k-coff} environment it expands to the string @samp{%}. @item __USER_LABEL_PREFIX__ @findex __USER_LABEL_PREFIX__ Similar to @code{__REGISTER_PREFIX__}, but describes the prefix applied to user generated labels in assembler code. For example, in the @samp{m68k-aout} environment it expands to the string @samp{_}, but in the @samp{m68k-coff} environment it expands to the null string. This does not work with the @samp{-mno-underscores} option that the i386 OSF/rose and m88k targets provide nor with the @samp{-mcall*} options of the rs6000 System V Release 4 target. @end table @node Nonstandard Predefined,, Standard Predefined, Predefined @subsubsection Nonstandard Predefined Macros The C preprocessor normally has several predefined macros that vary between machines because their purpose is to indicate what type of system and machine is in use. This manual, being for all systems and machines, cannot tell you exactly what their names are; instead, we offer a list of some typical ones. You can use @samp{cpp -dM} to see the values of predefined macros; see @ref{Invocation}. Some nonstandard predefined macros describe the operating system in use, with more or less specificity. For example, @table @code @item unix @findex unix @samp{unix} is normally predefined on all Unix systems. @item BSD @findex BSD @samp{BSD} is predefined on recent versions of Berkeley Unix (perhaps only in version 4.3). @end table Other nonstandard predefined macros describe the kind of CPU, with more or less specificity. For example, @table @code @item vax @findex vax @samp{vax} is predefined on Vax computers. @item mc68000 @findex mc68000 @samp{mc68000} is predefined on most computers whose CPU is a Motorola 68000, 68010 or 68020. @item m68k @findex m68k @samp{m68k} is also predefined on most computers whose CPU is a 68000, 68010 or 68020; however, some makers use @samp{mc68000} and some use @samp{m68k}. Some predefine both names. What happens in GNU C depends on the system you are using it on. @item M68020 @findex M68020 @samp{M68020} has been observed to be predefined on some systems that use 68020 CPUs---in addition to @samp{mc68000} and @samp{m68k}, which are less specific. @item _AM29K @findex _AM29K @itemx _AM29000 @findex _AM29000 Both @samp{_AM29K} and @samp{_AM29000} are predefined for the AMD 29000 CPU family. @item ns32000 @findex ns32000 @samp{ns32000} is predefined on computers which use the National Semiconductor 32000 series CPU. @end table Yet other nonstandard predefined macros describe the manufacturer of the system. For example, @table @code @item sun @findex sun @samp{sun} is predefined on all models of Sun computers. @item pyr @findex pyr @samp{pyr} is predefined on all models of Pyramid computers. @item sequent @findex sequent @samp{sequent} is predefined on all models of Sequent computers. @end table These predefined symbols are not only nonstandard, they are contrary to the ANSI standard because their names do not start with underscores. Therefore, the option @samp{-ansi} inhibits the definition of these symbols. This tends to make @samp{-ansi} useless, since many programs depend on the customary nonstandard predefined symbols. Even system header files check them and will generate incorrect declarations if they do not find the names that are expected. You might think that the header files @samp{-ansi}. We intend to avoid such problems on the GNU system. What, then, should you do in an ANSI C program to test the type of machine it will run on? GNU C offers a parallel series of symbols for this purpose, whose names are made from the customary ones by adding @samp{__} at the beginning and end. Thus, the symbol @code{__vax__} would be available on a Vax, and so on. The set of nonstandard predefined names in the GNU C preprocessor is controlled (when @code{cpp} is itself compiled) by the macro @samp{CPP_PREDEFINES}, which should be a string containing @samp{-D} options, separated by spaces. For example, on the Sun 3, we use the following definition: @example #define CPP_PREDEFINES "-Dmc68000 -Dsun -Dunix -Dm68k" @end example @noindent This macro is usually specified in @file{tm.h}. @node Stringification, Concatenation, Predefined, Macros @subsection Stringification @cindex stringification @dfn{Stringification} means turning a code fragment into a string constant whose contents are the text for the code fragment. For example, stringifying @samp{foo (z)} results in @samp{"foo (z)"}. In the C preprocessor, stringification is an option available when macro arguments are substituted into the macro definition. In the body of the definition, when an argument name appears, the character @samp{#} before the name specifies stringification of the corresponding actual argument when it is substituted at that point in the definition. The same argument may be substituted in other places in the definition without stringification if the argument name appears in those places with no @samp{#}. Here is an example of a macro definition that uses stringification: @smallexample @group #define WARN_IF(EXP) \ do @{ if (EXP) \ fprintf (stderr, "Warning: " #EXP "\n"); @} \ while (0) @end group @end smallexample @noindent Here the actual argument for @samp{EXP} is substituted once as given, into the @samp{if} statement, and once as stringified, into the argument to @samp{fprintf}. The @samp{do} and @samp{while (0)} are a kludge to make it possible to write @samp{WARN_IF (@var{arg});}, which the resemblance of @samp{WARN_IF} to a function would make C programmers want to do; see @ref{Swallow Semicolon}. The stringification feature is limited to transforming one macro argument into one string constant: there is no way to combine the argument with other text and then stringify it all together. But the example above shows how an equivalent result can be obtained in ANSI Standard C using the feature that adjacent string constants are concatenated as one string constant. The preprocessor stringifies the actual value of @samp{EXP} into a separate string constant, resulting in text like @smallexample @group do @{ if (x == 0) \ fprintf (stderr, "Warning: " "x == 0" "\n"); @} \ while (0) @end group @end smallexample @noindent but the C compiler then sees three consecutive string constants and concatenates them into one, producing effectively @smallexample do @{ if (x == 0) \ fprintf (stderr, "Warning: x == 0\n"); @} \ while (0) @end smallexample Stringification in C involves more than putting doublequote characters around the fragment; it is necessary to put backslashes in front of all doublequote characters, and all backslashes in string and character constants, in order to get a valid C string constant with the proper contents. Thus, stringifying @samp{p = "foo\n";} results in @samp{"p = \"foo\\n\";"}. However, backslashes that are not inside of string or character constants are not duplicated: @samp{\n} by itself stringifies to @samp{"\n"}. Whitespace (including comments) in the text being stringified is handled according to precise rules. All leading and trailing whitespace is ignored. Any sequence of whitespace in the middle of the text is converted to a single space in the stringified result. @node Concatenation, Undefining, Stringification, Macros @subsection Concatenation @cindex concatenation @cindex @samp{##} @dfn{Concatenation} means joining two strings into one. In the context of macro expansion, concatenation refers to joining two lexical units into one longer one. Specifically, an actual argument to the macro can be concatenated with another actual argument or with fixed text to produce a longer name. The longer name might be the name of a function, variable or type, or a C keyword; it might even be the name of another macro, in which case it will be expanded. When you define a macro, you request concatenation with the special operator @samp{##} in the macro body. When the macro is called, after actual arguments are substituted, all @samp{##} operators are deleted, and so is any whitespace next to them (including whitespace that was part of an actual argument). The result is to concatenate the syntactic tokens on either side of the @samp{##}. Consider a C program that interprets named commands. There probably needs to be a table of commands, perhaps an array of structures declared as follows: @example struct command @{ char *name; void (*function) (); @}; struct command commands[] = @{ @{ "quit", quit_command@}, @{ "help", help_command@}, @dots{} @}; @end example: @example #define COMMAND(NAME) @{ #NAME, NAME ## _command @} struct command commands[] = @{ COMMAND (quit), COMMAND (help), @dots{} @}; @end example The usual case of concatenation is concatenating two names (or a name and a number) into a longer name. But this isn't the only valid case. It is also possible to concatenate two numbers (or a number and a name, such as @samp{1.5} and @samp{e3}) into a number. Also, multi-character operators such as @samp{+=} can be formed by concatenation. In some cases it is even possible to piece together a string constant. However, two pieces of text that don't together form a valid lexical unit cannot be concatenated. For example, concatenation with @samp{x} on one side and @samp{+} on the other is not meaningful because those two characters can't fit together in any lexical unit of C@. The ANSI standard says that such attempts at concatenation are undefined, but in the GNU C preprocessor it is well defined: it puts the @samp{x} and @samp{+} side by side with no particular special results. Keep in mind that the C preprocessor converts comments to whitespace before macros are even considered. Therefore, you cannot create a comment by concatenating @samp{/} and @samp{*}: the @samp{/*} sequence that starts a comment is not a lexical unit, but rather the beginning of a ``long'' space character. Also, you can freely use comments next to a @samp{##} in a macro definition, or in actual arguments that will be concatenated, because the comments will be converted to spaces at first sight, and concatenation will later discard the spaces. @node Undefining, Redefining, Concatenation, Macros @subsection Undefining Macros @cindex undefining macros To @dfn{undefine} a macro means to cancel its definition. This is done with the @samp{#undef} directive. @samp{#undef} is followed by the macro name to be undefined. Like definition, undefinition occurs at a specific point in the source file, and it applies starting from that point. The name ceases to be a macro name, and from that point on it is treated by the preprocessor as if it had never been a macro name. For example, @example #define FOO 4 x = FOO; #undef FOO x = FOO; @end example @noindent expands into @example x = 4; x = FOO; @end example @noindent In this example, @samp{FOO} had better be a variable or function as well as (temporarily) a macro, in order for the result of the expansion to be valid C code. The same form of @samp{#undef} directive will cancel definitions with arguments or definitions that don't expect arguments. The @samp{#undef} directive has no effect when used on a name not currently defined as a macro. @node Redefining, Macro Pitfalls, Undefining, Macros @subsection Redefining Macros @cindex redefining macros @dfn{Redefining} a macro means defining (with @samp{#define}) a name that is already defined as a macro. A redefinition is trivial if the new definition is transparently identical to the old one. You probably wouldn't deliberately write a trivial redefinition, but they can happen automatically when a header file is included more than once (@pxref{Header Files}), so they are accepted silently and without effect. Nontrivial redefinition is considered likely to be an error, so it provokes a warning message from the preprocessor. However, sometimes it is useful to change the definition of a macro in mid-compilation. You can inhibit the warning by undefining the macro with @samp{#undef} before the second definition. In order for a redefinition to be trivial, the new definition must exactly match the one already in effect, with two possible exceptions: @itemize @bullet @item Whitespace may be added or deleted at the beginning or the end. @item Whitespace may be changed in the middle (but not inside strings). However, it may not be eliminated entirely, and it may not be added where there was no whitespace at all. @end itemize Recall that a comment counts as whitespace. @node Macro Pitfalls,, Redefining, Macros @subsection Pitfalls and Subtleties of Macros @cindex problems with macros @cindex pitfalls of macros In this section we describe some special rules that apply to macros and macro expansion, and point out certain cases in which the rules have counterintuitive consequences that you must watch out for. @menu * Misnesting:: Macros can contain unmatched parentheses. * Macro Parentheses:: Why apparently superfluous parentheses may be necessary to avoid incorrect grouping. * Swallow Semicolon:: Macros that look like functions but expand into compound statements. * Side Effects:: Unsafe macros that cause trouble when arguments contain side effects. * Self-Reference:: Macros whose definitions use the macros' own names. * Argument Prescan:: Actual arguments are checked for macro calls before they are substituted. * Cascaded Macros:: Macros whose definitions use other macros. * Newlines in Args:: Sometimes line numbers get confused. @end menu @node Misnesting, Macro Parentheses, Macro Pitfalls, Macro Pitfalls @subsubsection Improperly Nested Constructs Recall actual arguments. For example, @example #define double(x) (2*(x)) #define call_with_1(x) x(1) @end example @noindent would expand @samp{call_with_1 (double)} into @samp{(2*(1))}. Macro definitions do not have to have balanced parentheses. By writing an unbalanced open parenthesis in a macro body, it is possible to create a macro call that begins inside the macro body but ends outside of it. For example, @example #define strange(file) fprintf (file, "%s %d", @dots{} strange(stderr) p, 35) @end example @noindent This bizarre example expands to @samp{fprintf (stderr, "%s %d", p, 35)}! @node Macro Parentheses, Swallow Semicolon, Misnesting, Macro Pitfalls @subsubsection Unintended Grouping of Arithmetic , @example #define ceil_div(x, y) (x + y - 1) / y @end example @noindent whose purpose is to divide, rounding up. (One use for this operation is to compute how many @samp{int} objects are needed to hold a certain number of @samp{char} objects.) Then suppose it is used as follows: @example a = ceil_div (b & c, sizeof (int)); @end example @noindent This expands into @example a = (b & c + sizeof (int) - 1) / sizeof (int); @end example @noindent which does not do what is intended. The operator-precedence rules of C make it equivalent to this: @example a = (b & (c + sizeof (int) - 1)) / sizeof (int); @end example @noindent But what we want is this: @example a = ((b & c) + sizeof (int) - 1)) / sizeof (int); @end example @noindent Defining the macro as @example #define ceil_div(x, y) ((x) + (y) - 1) / (y) @end example @noindent provides the desired result. Unintended grouping can result in another way. Consider @samp{sizeof ceil_div(1, 2)}. That has the appearance of a C expression that would compute the size of the type of @samp{ceil_div (1, 2)}, but in fact it means something very different. Here is what it expands to: @example sizeof ((1) + (2) - 1) / (2) @end example @noindent This would take the size of an integer and divide it by two. The precedence rules have put the division outside the @samp{sizeof} when it was intended to be inside. Parentheses around the entire macro definition can prevent such problems. Here, then, is the recommended way to define @samp{ceil_div}: @example #define ceil_div(x, y) (((x) + (y) - 1) / (y)) @end example @node Swallow Semicolon, Side Effects, Macro Parentheses, Macro Pitfalls @subsubsection Swallowing the Semicolon @cindex semicolons (after macro calls) Often it is desirable to define a macro that expands into a compound statement. Consider, for example, the following macro, that advances a pointer (the argument @samp{p} says where to find it) across whitespace characters: @example #define SKIP_SPACES(p, limit) \ @{ register char *lim = (limit); \ while (p != lim) @{ \ if (*p++ != ' ') @{ \ p--; break; @}@}@} @end example @noindent Here Backslash-Newline is used to split the macro definition, which must be a single line, so that it resembles the way such C code would be laid out if not part of a macro definition. A call to this macro might be @samp{SKIP_SPACES (p, lim)}. Strictly speaking, the call expands to a compound statement, which is a complete statement with no need for a semicolon to end it. But it looks like a function call. So it minimizes confusion if you can use it like a function call, writing a semicolon afterward, as in @samp{SKIP_SPACES (p, lim);} But this can cause trouble before @samp{else} statements, because the semicolon is actually a null statement. Suppose you write @example if (*p != 0) SKIP_SPACES (p, lim); else @dots{} @end example @noindent The presence of two statements---the compound statement and a null statement---in between the @samp{if} condition and the @samp{else} makes invalid C code. The definition of the macro @samp{SKIP_SPACES} can be altered to solve this problem, using a @samp{do @dots{} while} statement. Here is how: @example #define SKIP_SPACES(p, limit) \ do @{ register char *lim = (limit); \ while (p != lim) @{ \ if (*p++ != ' ') @{ \ p--; break; @}@}@} \ while (0) @end example Now @samp{SKIP_SPACES (p, lim);} expands into @example do @{@dots{}@} while (0); @end example @noindent which is one statement. @node Side Effects, Self-Reference, Swallow Semicolon, Macro Pitfalls @subsubsection Duplication of Side Effects @cindex side effects (in macro arguments) @cindex unsafe macros Many C programs define a macro @samp{min}, for ``minimum'', like this: @example #define min(X, Y) ((X) < (Y) ? (X) : (Y)) @end example When you use this macro with an argument containing a side effect, as shown here, @example next = min (x + y, foo (z)); @end example @noindent it expands as follows: @example next = ((x + y) < (foo (z)) ? (x + y) : (foo (z))); @end example @noindent where @samp{x + y} has been substituted for @samp{X} and @samp{foo (z)} for @samp{Y}. The function @samp{foo} is used only once in the statement as it appears in the program, but the expression @samp{foo (z)} has been substituted twice into the macro expansion. As a result, @samp{foo} might be called two times when the statement is executed. If it has side effects or if it takes a long time to compute, the results might not be what you intended. We say that @samp{min} is an @dfn{unsafe} macro. The best solution to this problem is to define @samp{min} in a way that computes the value of @samp{foo (z)} only once. The C language offers no standard way to do this, but it can be done with GNU C extensions as follows: @example #define min(X, Y) \ (@{ typeof (X) __x = (X), __y = (Y); \ (__x < __y) ? __x : __y; @}) @end example If you do not wish to use GNU C extensions, the only solution is to be careful when @emph{using} the macro @samp{min}. For example, you can calculate the value of @samp{foo (z)}, save it in a variable, and use that variable in @samp{min}: @example #define min(X, Y) ((X) < (Y) ? (X) : (Y)) @dots{} @{ int tem = foo (z); next = min (x + y, tem); @} @end example @noindent (where we assume that @samp{foo} returns type @samp{int}). @node Self-Reference, Argument Prescan, Side Effects, Macro Pitfalls @subsubsection Self-Referential Macros @cindex self-reference A @dfn{self-referential} macro is one whose name appears in its definition. A special feature of ANSI Standard C is that the self-reference is not considered a macro call. It is passed into the preprocessor output unchanged. Let's consider an example: @example #define foo (4 + foo) @end example @noindent where @samp{foo} is also a variable in your program. Following the ordinary rules, each reference to @samp{foo} will expand into @samp{(4 + foo)}; then this will be rescanned and will expand into @samp{(4 + (4 + foo))}; and so on until it causes a fatal error (memory full) in the preprocessor. However, the special rule about self-reference cuts this process short after one step, at @samp{(4 + foo)}. Therefore, this macro definition has the possibly useful effect of causing the program to add 4 to the value of @samp{foo} wherever @samp{foo} is referred to. In most cases, it is a bad idea to take advantage of this feature. A person reading the program who sees that @samp{foo} is a variable will not expect that it is a macro as well. The reader will come across the identifier @samp{foo} in the program and think its value should be that of the variable @samp{foo}, whereas in fact the value is four greater. The special rule for self-reference applies also to @dfn{indirect} self-reference. This is the case where a macro @var{x} expands to use a macro @samp{y}, and the expansion of @samp{y} refers to the macro @samp{x}. The resulting reference to @samp{x} comes indirectly from the expansion of @samp{x}, so it is a self-reference and is not further expanded. Thus, after @example #define x (4 + y) #define y (2 * x) @end example @noindent @samp{x} would expand into @samp{(4 + (2 * x))}. Clear? But suppose @samp{y} is used elsewhere, not from the definition of @samp{x}. Then the use of @samp{x} in the expansion of @samp{y} is not a self-reference because @samp{x} is not ``in progress''. So it does expand. However, the expansion of @samp{x} contains a reference to @samp{y}, and that is an indirect self-reference now because @samp{y} is ``in progress''. The result is that @samp{y} expands to @samp{(2 * (4 + y))}. It is not clear that this behavior would ever be useful, but it is specified by the ANSI C standard, so you may need to understand it. @node Argument Prescan, Cascaded Macros, Self-Reference, Macro Pitfalls @subsubsection Separate Expansion of Macro Arguments @cindex expansion of arguments @cindex macro argument expansion @cindex prescan of macro arguments is scanned again for macros to expand. The result is that the actual arguments are scanned @emph{twice} to expand macro calls in them. Most of the time, this has no effect. If the actual argument contained any macro calls, they are expanded during the first scan. The result therefore contains no macro calls, so the second scan does not change it. If the actual argument were substituted as given, with no prescan, the single remaining scan would find the same macro calls and produce the same results. You might expect the double scan to change the results when a self-referential macro is used in an actual argument of another macro (@pxref{Self-Reference}): the self-referential macro would be expanded once in the first scan, and a second time in the second scan. But this is not what happens. The self-references that do not expand in the first scan are marked so that they will not expand in the second scan either. The prescan is not done when an argument is stringified or concatenated. Thus, @example #define str(s) #s #define foo 4 str (foo) @end example @noindent expands to @samp{"foo"}. Once more, prescan has been prevented from having any noticeable effect. More precisely, stringification and concatenation use the argument as written, in un-prescanned form. The same actual argument would be used in prescanned form if it is substituted elsewhere without stringification or concatenation. @example #define str(s) #s lose(s) #define foo 4 str (foo) @end example expands to @samp{"foo" lose(4)}. You might now ask, ``Why mention the prescan, if it makes no difference? And why not skip it and make the preprocessor faster?'' The answer is that the prescan does make a difference in three special cases: @itemize @bullet @item Nested calls to a macro. @item Macros that call other macros that stringify or concatenate. @item Macros whose expansions contain unshielded commas. @end itemize We say that @dfn{nested} calls to a macro occur when a macro's actual argument contains a call to that very macro. For example, if @samp{f} is a macro that expects one argument, @samp{f (f (1))} is a nested pair of calls to @samp{f}. The desired expansion is made by expanding @samp{f (1)} and substituting that into the definition of @samp{f}. The prescan causes the expected result to happen. Without the prescan, @samp{f (1)} itself would be substituted as an actual argument, and the inner use of @samp{f} would appear during the main scan as an indirect self-reference and would not be expanded. Here, the prescan cancels an undesirable side effect (in the medical, not computational, sense of the term) of the special rule for self-referential macros. But prescan causes trouble in certain other cases of nested macro calls. Here is an example: @example #define foo a,b #define bar(x) lose(x) #define lose(x) (1 + (x)) bar(foo) @end example @noindent We would like @samp{bar(foo)} to turn into @samp{(1 + (foo))}, which would then turn into @samp{(1 + (a,b))}. But instead, @samp{bar(foo)} expands into @samp{lose(a,b)}, and you get an error because @code{lose} requires a single argument. In this case, the problem is easily solved by the same parentheses that ought to be used to prevent misnesting of arithmetic operations: @example #define foo (a,b) #define bar(x) lose((x)) @end example The problem is more serious when the operands of the macro are not expressions; for example, when they are statements. Then parentheses are unacceptable because they would make for invalid C code: @example #define foo @{ int a, b; @dots{} @} @end example @noindent In GNU C you can shield the commas using the @samp{(@{@dots{}@})} construct which turns a compound statement into an expression: @example #define foo (@{ int a, b; @dots{} @}) @end example Or you can rewrite the macro definition to avoid such commas: @example #define foo @{ int a; int b; @dots{} @} @end example There is also one case where prescan is useful. It is possible to use prescan to expand an argument and then stringify it---if you use two levels of macros. Let's add a new macro @samp{xstr} to the example shown above: @example #define xstr(s) str(s) #define str(s) #s #define foo 4 xstr (foo) @end example This expands into @samp{"4"}, not @samp{"foo"}. The reason for the difference is that the argument of @samp{xstr} is expanded at prescan (because @samp{xstr} does not specify stringification or concatenation of the argument). The result of prescan then forms the actual argument for @samp{str}. @samp{str} uses its argument without prescan because it performs stringification; but it cannot prevent or undo the prescanning already done by @samp{xstr}. @node Cascaded Macros, Newlines in Args, Argument Prescan, Macro Pitfalls @subsubsection Cascaded Use of Macros @cindex cascaded macros @cindex macro body uses macro A @dfn{cascade} of macros is when one macro's body contains a reference to another macro. This is very common practice. For example, @example #define BUFSIZE 1020 #define TABLESIZE BUFSIZE @end example. This makes a difference if you change the definition of @samp{BUFSIZE} at some point in the source file. @samp{TABLESIZE}, defined as shown, will always expand using the definition of @samp{BUFSIZE} that is currently in effect: @example #define BUFSIZE 1020 #define TABLESIZE BUFSIZE #undef BUFSIZE #define BUFSIZE 37 @end example @noindent Now @samp{TABLESIZE} expands (in two stages) to @samp{37}. (The @samp{#undef} is to prevent any warning about the nontrivial redefinition of @code{BUFSIZE}.) @node Newlines in Args,, Cascaded Macros, Macro Pitfalls @subsection Newlines in Macro Arguments @cindex newlines in macro arguments: @example #define ignore_second_arg(a,b,c) a; c ignore_second_arg (foo (), ignored (), syntax error); @end example @noindent The syntax error triggered by the tokens @samp{syntax error} results in an error message citing line four, even though the statement text comes from line five. @node Conditionals, Combining Sources, Macros, Top @section Conditionals @cindex conditionals In a macro processor, a @dfn{conditional} is a directive that allows a part of the program to be ignored during compilation, on some conditions. In the C preprocessor, a conditional can test either an arithmetic expression or whether a name is defined as a macro. A conditional in the C preprocessor resembles in some ways an @samp{if} statement in C, but it is important to understand the difference between them. The condition in an @samp. @menu * Uses: Conditional Uses. What conditionals are for. * Syntax: Conditional Syntax. How conditionals are written. * Deletion: Deleted Code. Making code into a comment. * Macros: Conditionals-Macros. Why conditionals are used with macros. * Assertions:: How and why to use assertions. * Errors: #error Directive. Detecting inconsistent compilation parameters. @end menu @node Conditional Uses @subsection Why Conditionals are Used Generally there are three kinds of reason library routines that do not exist on the other system. When this happens, it is not enough to avoid executing the invalid code: merely having it in the program makes it impossible to link the program and run it. With a preprocessing conditional, the offending code can be effectively excised from the program when it is not valid. @item You may want to be able to compile the same source file into two different programs. Sometimes the difference between the programs is that one makes frequent time-consuming consistency checks on its intermediate data, or prints the values of those data for debugging, while the other does not. @item A conditional whose condition is always false is a good way to exclude code from the program but keep it as a sort of comment for future reference. @end itemize Most simple programs that are intended to run on only one machine will not need to use preprocessing conditionals. @node Conditional Syntax @subsection Syntax of Conditionals @findex #if A conditional in the C preprocessor begins with a @dfn{conditional directive}: @samp{#if}, @samp{#ifdef} or @samp{#ifndef}. @xref{Conditionals-Macros}, for information on @samp{#ifdef} and @samp{#ifndef}; only @samp{#if} is explained here. @menu * If: #if Directive. Basic conditionals using @samp{#if} and @samp{#endif}. * Else: #else Directive. Including some text if the condition fails. * Elif: #elif Directive. Testing several alternative possibilities. @end menu @node #if Directive @subsubsection The @samp{#if} Directive The @samp{#if} directive in its simplest form consists of @example #if @var{expression} @var{controlled text} #endif /* @var{expression} */ @end example The comment following the @samp{#endif} is not required, but it is a good practice because it helps people match the @samp{#endif} to the corresponding @samp{#if}. Such comments should always be used, except in short conditionals that are not nested. In fact, you can put anything at all after the @samp{#endif} and it will be ignored by the GNU C preprocessor, but only comments are acceptable in ANSI Standard C@. @var{expression} is a C expression of integer type, subject to stringent restrictions. It may contain @itemize @bullet @item Integer constants, which are all regarded as @code{long} or @code{unsigned long}. @item Character constants, which are interpreted according to the character set and conventions of the machine and operating system on which the preprocessor is running. The GNU C preprocessor uses the C data type @samp{char} for these character constants; therefore, whether some character codes are negative is determined by the C compiler used to compile the preprocessor. If it treats @samp{char} as signed, then character codes large enough to set the sign bit will be considered negative; otherwise, no character code is considered negative. @item Arithmetic operators for addition, subtraction, multiplication, division, bitwise operations, shifts, comparisons, and logical operations (@samp{&&} and @samp{||}). @item Identifiers that are not macros, which are all treated as zero(!). @item Macro calls. All macro calls in the expression are expanded before actual computation of the expression's value begins. @end itemize Note that @samp{sizeof} operators and @code{enum}-type values are not allowed. @code{enum}-type values, like all other identifiers that are not taken as macro calls and expanded, are treated as zero. The @var{controlled text} inside of a conditional can include preprocessing directives. Then the directives inside the conditional are obeyed only if that branch of the conditional succeeds. The text can also contain other conditional groups. However, the @samp{#if} and @samp{#endif} directives must balance. @node #else Directive @subsubsection The @samp{#else} Directive @findex #else The @samp{#else} directive can be added to a conditional to provide alternative text to be used if the condition is false. This is what it looks like: @example #if @var{expression} @var{text-if-true} #else /* Not @var{expression} */ @var{text-if-false} #endif /* Not @var{expression} */ @end example If @var{expression} is nonzero, and thus the @var{text-if-true} is active, then @samp{#else} acts like a failing conditional and the @var{text-if-false} is ignored. Contrariwise, if the @samp{#if} conditional fails, the @var{text-if-false} is considered included. @node #elif Directive @subsubsection The @samp{#elif} Directive @findex #elif One common case of nested conditionals is used to check for more than two possible alternatives. For example, you might have @example #if X == 1 @dots{} #else /* X != 1 */ #if X == 2 @dots{} #else /* X != 2 */ @dots{} #endif /* X != 2 */ #endif /* X != 1 */ @end example Another conditional directive, @samp{#elif}, allows this to be abbreviated as follows: @example #if X == 1 @dots{} #elif X == 2 @dots{} #else /* X != 2 and X != 1*/ @dots{} #endif /* X != 2 and X != 1*/ @end example @samp{#elif} stands for ``else if''. Like @samp{#else}, it goes in the middle of a @samp{#if}-@samp{#endif} pair @samp{#if}-@samp{#endif} group. Then the text after each @samp{#elif} is processed only if the @samp{#elif} condition succeeds after the original @samp{#if} and any previous @samp{#elif} directives within it have failed. @samp{#else} is equivalent to @samp{#elif 1}, and @samp{#else} is allowed after any number of @samp{#elif} directives, but @samp{#elif} may not follow @samp{#else}. @node Deleted Code @subsection Keeping Deleted Code for Future Reference @cindex commenting out code If you replace or delete a part of the program but want to keep the old code around as a comment for future reference, the easy way to do this is to put @samp{#if 0} before it and @samp{#endif} after it. This is better than using comment delimiters @samp{/*} and @samp{*/} since those won't work if the code already contains comments (C comments do not nest). This works even if the code being turned off contains conditionals, but they must be entire conditionals (balanced @samp{#if} and @samp{#endif}). Conversely, do not use @samp{#if 0} for comments which are not C code. Use the comment delimiters @samp{/*} and @samp{*/} instead. The interior of @samp{#if 0} must consist of complete tokens; in particular, singlequote characters must balance. But comments often contain unbalanced singlequote characters (known in English as apostrophes). These confuse @samp{#if 0}. They do not confuse @samp{/*}. @node Conditionals-Macros @subsection Conditionals and Macros Conditionals are useful in connection with macros or assertions, because those are the only ways that an expression's value can vary from one compilation to another. A @samp{#if} directive whose expression uses no macros or assertions is equivalent to @samp{#if 1} or @samp{#if 0}; you might as well determine which one, by computing the value of the expression yourself, and then simplify the program. For example, here is a conditional that tests the expression @samp{BUFSIZE == 1020}, where @samp{BUFSIZE} must be a macro. @example #if BUFSIZE == 1020 printf ("Large buffers!\n"); #endif /* BUFSIZE is large */ @end example (Programmers often wish they could test the size of a variable or data type in @samp{#if}, but this does not work. The preprocessor does not understand @code{sizeof}, or typedef names, or even the type keywords such as @code{int}.) @findex defined The special operator @samp{defined} is used in @samp{#if} expressions to test whether a certain name is defined as a macro. Either @samp{defined @var{name}} or @samp{defined (@var{name})} is an expression whose value is 1 if @var{name} is defined as macro at the current point in the program, and 0 otherwise. For the @samp{defined} operator it makes no difference what the definition of the macro is; all that matters is whether there is a definition. Thus, for example,@refill @example #if defined (vax) || defined (ns16000) @end example @noindent would succeed if either of the names @samp{vax} and @samp{ns16000} is defined as a macro. You can test the same condition using assertions (@pxref{Assertions}), like this: @example #if #cpu (vax) || #cpu (ns16000) @end example If a macro is defined and later undefined with @samp{#undef}, subsequent use of the @samp{defined} operator returns 0, because the name is no longer defined. If the macro is defined again with another @samp{#define}, @samp{defined} will recommence returning 1. @findex #ifdef @findex #ifndef Conditionals that test whether just one name is defined are very common, so there are two special short conditional directives for this case. @table @code @item #ifdef @var{name} is equivalent to @samp{#if defined (@var{name})}. @item #ifndef @var{name} is equivalent to @samp{#if ! defined (@var{name})}. @end table Macro definitions can vary between compilations for several reasons. @itemize @bullet @item Some macros are predefined on each kind of machine. For example, on a Vax, the name @samp{vax} is a predefined macro. On other machines, it would not be defined. @item Many more macros are defined by system header files. Different systems and machines define different macros, or give them different values. It is useful to test these macros with conditionals to avoid using a system feature on a machine where it is not implemented. @item Macros are a common way of allowing users to customize a program for different machines or applications. For example, the macro @samp{BUFSIZE} might be defined in a configuration file for your program that is included as a header file in each source file. You would use @samp{BUFSIZE} in a preprocessing conditional in order to generate different code depending on the chosen configuration. @item Macros can be defined or undefined with @samp{-D} and @samp{-U} command compiler command options. @xref{Invocation}. @end itemize @ifinfo Assertions are usually predefined, but can be defined with preprocessor directives or command-line options. @end ifinfo @node Assertions @subsection Assertions @cindex assertions @dfn{Assertions} are a more systematic alternative to macros in writing conditionals to test what sort of computer or system the compiled program will run on. Assertions are usually predefined, but you can define them with preprocessing directives or command-line options. @cindex predicates @dfn{predicate}. An assertion looks like this: @example #@var{predicate} (@var{answer}) @end example @noindent You must use a properly formed identifier for @var{predicate}. The value of @var{answer} can be any sequence of words; all characters are significant except for leading and trailing whitespace, and differences in internal whitespace sequences are ignored. Thus, @samp{x + y} is different from @samp{x+y} but equivalent to @samp{x + y}. @samp{)} is not allowed in an answer. @cindex testing predicates Here is a conditional to test whether the answer @var{answer} is asserted for the predicate @var{predicate}: @example #if #@var{predicate} (@var{answer}) @end example @noindent There may be more than one answer asserted for a given predicate. If you omit the answer, you can test whether @emph{any} answer is asserted for @var{predicate}: @example #if #@var{predicate} @end example @findex #system @findex #machine @findex #cpu Most of the time, the assertions you test will be predefined assertions. GNU C provides three predefined predicates: @code{system}, @code{cpu}, and @code{machine}. @code{system} is for assertions about the type of software, @code{cpu} describes the type of computer architecture, and @code{machine} gives more information about the computer. For example, on a GNU system, the following assertions would be true: @example #system (gnu) #system (mach) #system (mach 3) #system (mach 3.@var{subversion}) #system (hurd) #system (hurd @var{version}) @end example @noindent and perhaps others. The alternatives with more or less version information let you ask more or less detailed questions about the type of system software. On a Unix system, you would find @code{#system (unix)} and perhaps one of: @code{#system (aix)}, @code{#system (bsd)}, @code{#system (hpux)}, @code{#system (lynx)}, @code{#system (mach)}, @code{#system (posix)}, @code{#system (svr3)}, @code{#system (svr4)}, or @code{#system (xpg4)} with possible version numbers following. Other values for @code{system} are @code{#system (mvs)} and @code{#system (vms)}. @strong{Portability note:} Many Unix C compilers provide only one answer for the @code{system} assertion: @code{#system (unix)}, if they support assertions at all. This is less than useful. An assertion with a multi-word answer is completely different from several assertions with individual single-word answers. For example, the presence of @code{system (mach 3.0)} does not mean that @code{system (3.0)} is true. It also does not directly imply @code{system (mach)}, but in GNU C, that last will normally be asserted as well. The current list of possible assertion values for @code{cpu} is: @code{#cpu (a29k)}, @code{#cpu (alpha)}, @code{#cpu (arm)}, @code{#cpu (clipper)}, @code{#cpu (convex)}, @code{#cpu (elxsi)}, @code{#cpu (tron)}, @code{#cpu (h8300)}, @code{#cpu (i370)}, @code{#cpu (i386)}, @code{#cpu (i860)}, @code{#cpu (i960)}, @code{#cpu (m68k)}, @code{#cpu (m88k)}, @code{#cpu (mips)}, @code{#cpu (ns32k)}, @code{#cpu (hppa)}, @code{#cpu (pyr)}, @code{#cpu (ibm032)}, @code{#cpu (rs6000)}, @code{#cpu (sh)}, @code{#cpu (sparc)}, @code{#cpu (spur)}, @code{#cpu (tahoe)}, @code{#cpu (vax)}, @code{#cpu (we32000)}. @findex #assert You can create assertions within a C program using @samp{#assert}, like this: @example #assert @var{predicate} (@var{answer}) @end example @noindent (Note the absence of a @samp{#} before @var{predicate}.) @cindex unassert @cindex assertions, undoing @cindex retracting assertions @findex #unassert Each time you do this, you assert a new true answer for @var{predicate}. Asserting one answer does not invalidate previously asserted answers; they all remain true. The only way to remove an assertion is with @samp{#unassert}. @samp{#unassert} has the same syntax as @samp{#assert}. You can also remove all assertions about @var{predicate} like this: @example #unassert @var{predicate} @end example You can also add or cancel assertions using command options when you run @code{gcc} or @code{cpp}. @xref{Invocation}. @node #error Directive @subsection The @samp{#error} and @samp{#warning} Directives @findex #error The directive @samp{#error} causes the preprocessor to report a fatal error. The rest of the line that follows @samp{#error} is used as the error message. The line must consist of complete tokens. @noindent @xref{Nonstandard Predefined}, for why this works. If you have several configuration parameters that must be set up by the installation in a consistent way, you can use conditionals to detect an inconsistency and report it with @samp{#error}. For example, @smallexample #if HASH_TABLE_SIZE % 2 == 0 || HASH_TABLE_SIZE % 3 == 0 \ || HASH_TABLE_SIZE % 5 == 0 #error HASH_TABLE_SIZE should not be divisible by a small prime #endif @end smallexample @findex #warning The directive @samp{#warning} is like the directive @samp{#error}, but causes the preprocessor to issue a warning and continue preprocessing. The rest of the line that follows @samp{#warning} is used as the warning message. You might use @samp{#warning} in obsolete header files, with a message directing the user to the header file which should be used instead. @node Combining Sources, Other Directives, Conditionals, Top @section Combining Source Files @cindex line control One of the jobs of the C preprocessor is to inform the C compiler of where each line of C code came from: which source file and which line number. C code can come from multiple source files if you use @samp{#include}; both @samp{#include} and the use of conditionals and macros can cause the line number of a line in the preprocessor by which you can control the feature explicitly. This is useful when a file for input to the C preprocessor is the output from another program such as the @code{bison} parser generator, which operates on another file that is the true source file. Parts of the output from @code{bison} are generated from scratch, other parts come from a standard parser file. The rest are copied nearly verbatim from the source file, but their line numbers in the @code{bison} output are not the same as their original line numbers. Naturally you would like compiler error messages and symbolic debuggers to know the original source file and line number of each line in the @code{bison} input. @findex #line @code{bison} arranges} Here @var{linenum} is a decimal integer constant. This specifies that the line number of the following line of input, in its original source file, was @var{linenum}. @item #line @var{linenum} @var{filename} Here @var{linenum} is a decimal integer constant and @var{filename} is a string constant. This specifies that the following line of input came originally from source file @var{filename} and its line number there was @var{linenum}. Keep in mind that @var{filename} is not just a file name; it is surrounded by doublequote characters so that it looks like a string constant. @item #line @var{anything else} @var{anything else} is checked for macro calls, which are expanded. The result should be a decimal integer constant followed optionally by a string constant, as described above. @end table @samp{#line} directives alter the results of the @samp{__FILE__} and @samp{__LINE__} predefined macros from that point on. @xref{Standard Predefined}. The output of the preprocessor (which is the input for the rest of the compiler) contains directives that look much like @samp{#line} directives. They start with just @samp{#} instead of @samp{#line}, but this is followed by a line number and file name as in @samp{#line}. @xref{Output}. @node Other Directives, Output, Combining Sources, Top @section Miscellaneous Preprocessing Directives @cindex null directive This section describes three additional preprocessing directives. They are not very useful, but are mentioned for completeness.. @findex #pragma The ANSI standard specifies that the effect of the @samp{#pragma} directive is implementation-defined. In the GNU C preprocessor, @samp{#pragma} directives are not used, except for @samp{#pragma once} (@pxref{Once-Only}). However, they are left in the preprocessor output, so they are available to the compilation pass. @findex #ident The @samp{#ident} directive is supported for compatibility with certain other systems. It is followed by a line of text. On some systems, the text is copied into a special place in the object file; on most systems, the text is ignored and this directive has no effect. Typically @samp{#ident} is only used in header files supplied with those systems where it is meaningful. @node Output, Invocation, Other Directives, Top @section C Preprocessor Output @cindex output format The output from the C preprocessor looks much like the input, except that all preprocessing directive lines have been replaced with blank lines and all comments with spaces. Whitespace within a line is not altered; however, unless @samp{-traditional} is used, spaces may be inserted into the expansions of macro calls to prevent tokens from being concatenated. Source file name and line number information is conveyed by lines of the form @example # @var{linenum} @var{filename} @var{flags} @end example @noindent which are inserted as needed into the middle of the input (but never within a string or character constant). Such a line means that the following line originated in file @var{filename} at line @var{linenum}. C@. @c maybe cross reference NO_IMPLICIT_EXTERN_C @end table @node Invocation, Concept Index, Output, Top @section Invoking the C Preprocessor @cindex invocation of the preprocessor Most often when you use the C preprocessor you will not have to invoke it explicitly: the C compiler will do so automatically. However, the preprocessor is sometimes useful on its own. @samp{-}, which as @var{infile} means to read from standard input and as @var{outfile} means to write to standard output. Also, if @var{outfile} or both file names are omitted, the standard output and standard input are used for the omitted file names. @cindex options Here is a table of command options accepted by the C preprocessor. These options can also be given when compiling a C program; they are passed along automatically to the preprocessor when it is invoked by the compiler. @table @samp @item -P @findex -P Inhibit generation of @samp{#}-lines with line-number information in the output from the preprocessor (@pxref{Output}). This might be useful when running the preprocessor on something that is not C code and will be sent to a program which might be confused by the @samp{#}-lines. @item -C @findex -C Do not discard comments: pass them through to the output file. Comments appearing in arguments of a macro call will be copied to the output before the expansion of the macro call. @item -traditional @findex -traditional Try to imitate the behavior of old-fashioned C, as opposed to ANSI C@. @itemize @bullet @item Traditional macro expansion pays no attention to singlequote or doublequote characters; macro argument symbols are replaced by the argument values even when they appear within apparent string or character constants. @item Traditionally, it is permissible for a macro expansion to end in the middle of a string or character constant. The constant continues into the text surrounding the macro call. @item However, traditionally the end of the line terminates a string or character constant, with no error. @item In traditional C, a comment is equivalent to no text at all. (In ANSI C, a comment counts as whitespace.) @item Traditional C does not have the concept of a ``preprocessing number''. It considers @samp{1.0e+4} to be three tokens: @samp{1.0e}, @samp{+}, and @samp{4}. @item A macro is not suppressed within its own definition, in traditional C@. Thus, any macro that is used recursively inevitably causes an error. @item The character @samp{#} has no special meaning within a macro definition in traditional C@. @item In traditional C, the text at the end of a macro expansion can run together with the text after the macro call, to produce a single token. (This is impossible in ANSI C@.) @item Traditionally, @samp{\} inside a macro argument suppresses the syntactic significance of the following character. @end itemize @cindex Fortran @cindex unterminated Use the @samp{-traditional} option when preprocessing Fortran code, so that singlequotes and doublequotes within Fortran comment lines (which are generally not recognized as such by the preprocessor) do not cause diagnostics about unterminated character or string constants. However, this option does not prevent diagnostics about unterminated comments when a C-style comment appears to start, but not end, within Fortran-style commentary. So, the following Fortran comment lines are accepted with @samp{-traditional}: @smallexample C This isn't an unterminated character constant C Neither is "20000000000, an octal constant C in some dialects of Fortran @end smallexample However, this type of comment line will likely produce a diagnostic, or at least unexpected output from the preprocessor, due to the unterminated comment: @smallexample C Some Fortran compilers accept /* as starting C an inline comment. @end smallexample @cindex g77 Note that @code{g77} automatically supplies the @samp{-traditional} option when it invokes the preprocessor. However, a future version of @code{g77} might use a different, more-Fortran-aware preprocessor in place of @code{cpp}. @item -trigraphs @findex -trigraphs Process ANSI standard trigraph sequences. These are three-character sequences, all starting with @samp{??}, that are defined by ANSI C to stand for single characters. For example, @samp{??/} stands for @samp{\}, so @samp{'??/n'} is a character constant for a newline. Strictly speaking, the GNU C preprocessor does not support all programs in ANSI Standard C unless @samp{-trigraphs} is used, but if you ever notice the difference it will be with relief. You don't want to know any more about trigraphs. @item -pedantic @findex -pedantic Issue warnings required by the ANSI C standard in certain cases such as when text other than a comment follows @samp{#else} or @samp{#endif}. @item -pedantic-errors @findex -pedantic-errors Like @samp{-pedantic}, except that errors are produced rather than warnings. @item -Wtrigraphs @findex -Wtrigraphs Warn if any trigraphs are encountered. Currently this only works if you have turned trigraphs on with @samp{-trigraphs} or @samp{-ansi}; in the future this restriction will be removed. @item -Wcomment @findex -Wcomment @ignore @c "Not worth documenting" both singular and plural forms of this @c option, per RMS. But also unclear which is better; hence may need to @c switch this at some future date. [email protected], 2jan92. @itemx -Wcomments (Both forms have the same effect). @end ignore Warn whenever a comment-start sequence @samp{/*} appears in a @samp{/*} comment, or whenever a Backslash-Newline appears in a @samp{//} comment. @item -Wall @findex -Wall Requests both @samp{-Wtrigraphs} and @samp{-Wcomment} (but not @samp{-Wtraditional} or @samp{-Wundef}). @item -Wtraditional @findex -Wtraditional Warn about certain constructs that behave differently in traditional and ANSI C@. @item -Wundef @findex -Wundef Warn if an undefined identifier is evaluated in an @samp{#if} directive. @item -I @var{directory} @findex -I Add the directory @var{directory} to the head of the list of directories to be searched for header files (@pxref{Include Syntax}). specified. In addition, the @samp{-I-} option inhibits the use of the current directory as the first search directory for @samp{#include "@var{file}"}. Therefore, the current directory is searched only if it is requested explicitly with @samp{-I.}. Specifying both @samp{-I-} and @samp{-I.} allows you to control precisely which directories are searched before the current one and which are searched after. @item -nostdinc @findex -nostdinc Do not search the standard system directories for header files. Only the directories you have specified with @samp{-I} options (and the current directory, if appropriate) are searched. @item -nostdinc++ @findex -nostdinc++ Do not search for header files in the C++-specific standard directories, but do still search the other standard directories. (This option is used when building the C++ library.) @item -remap @findex -remap When searching for a header file in a directory, remap file names if a file named @file{header.gcc} exists in that directory. This can be used to work around limitations of file systems with file name restrictions. The @file{header.gcc} file should contain a series of lines with two tokens on each line: the first token is the name to map, and the second token is the actual name to use. @item -D @var{name} @findex -D Predefine @var{name} as a macro, with definition @samp{1}. @item -D @var{name}=@var{definition} Predefine @var{name} as a macro, with definition @var{definition}. There are no restrictions on the contents of @var{definition}, but if you are invoking the preprocessor from a shell or shell-like program you may need to use the shell's quoting syntax to protect characters such as spaces that have a meaning in the shell syntax. If you use more than one @samp{-D} for the same @var{name}, the rightmost definition takes effect. @item -U @var{name} @findex -U Do not predefine @var{name}. If both @samp{-U} and @samp{-D} are specified for one name, the @samp{-U} beats the @samp{-D} and the name is not predefined. @item -undef @findex -undef Do not predefine any nonstandard macros. @item -gcc @findex -gcc Define the macros @var{__GNUC__} and @var{__GNUC_MINOR__}. These are defined automatically when you use @samp{gcc -E}; you can turn them off in that case with @samp{-no-gcc}. @item -A @var{predicate}(@var{answer}) @findex -A Make an assertion with the predicate @var{predicate} and answer @var{answer}. @xref{Assertions}. @noindent You can use @samp{-A-} to disable all predefined assertions; it also undefines all predefined macros and all macros that preceded it on the command line. @item -dM @findex -dM Instead of outputting the result of preprocessing, output a list of @samp{#define} directives for all the macros defined during the execution of the preprocessor, including predefined macros. This gives you a way of finding out what is predefined in your version of the preprocessor; assuming you have no file @samp{foo.h}, the command @example touch foo.h; cpp -dM foo.h @end example @noindent will show the values of any predefined macros. @item -dD @findex -dD Like @samp{-dM} except in two respects: it does @emph{not} include the predefined macros, and it outputs @emph{both} the @samp{#define} directives and the result of preprocessing. Both kinds of output go to the standard output file. @item -dI @findex -dI Output @samp{#include} directives in addition to the result of preprocessing. @item -M [-MG] @findex -M Instead of outputting the result of preprocessing, output a rule suitable for @code{make} describing the dependencies of the main source file. The preprocessor outputs one @code{make} rule containing the object file name for that source file, a colon, and the names of all the included files. If there are many included files then the rule is split into several lines using @samp{\}-newline. @samp{-MG} says to treat missing header files as generated files and assume they live in the same directory as the source file. It must be specified in addition to @samp{-M}. This feature is used in automatic updating of makefiles. @item -MM [-MG] @findex -MM Like @samp{-M} but mention only the files included with @samp{#include "@var{file}"}. System header files included with @samp{#include <@var{file}>} are omitted. @item -MD @var{file} @findex -MD Like @samp{-M} but the dependency information is written to @var{file}. This is in addition to compiling the file as specified---@samp{-MD} does not inhibit ordinary compilation the way @samp{-M} does. When invoking @code{gcc}, do not specify the @var{file} argument. @code{gcc} will create file names made by replacing ".c" with ".d" at the end of the input file names. In Mach, you can use the utility @code{md} to merge multiple dependency files into a single dependency file suitable for using with the @samp{make} command. @item -MMD @var{file} @findex -MMD Like @samp{-MD} except mention only user header files, not system header files. @item -H @findex -H Print the name of each header file used, in addition to other normal activities. @item -imacros @var{file} @findex -imacros. @item -include @var{file} @findex -include Process @var{file} as input, and include all the resulting output, before processing the regular input file. @item -idirafter @var{dir} @findex -idirafter } @findex -iprefix Specify @var{prefix} as the prefix for subsequent @samp{-iwithprefix} options. @item -iwithprefix @var{dir} @findex -iwithprefix Add a directory to the second include path. The directory's name is made by concatenating @var{prefix} and @var{dir}, where @var{prefix} was specified previously with @samp{-iprefix}. @item -isystem @var{dir} @findex -isystem Add a directory to the beginning of the second include path, marking it as a system directory, so that it gets the same special treatment as is applied to the standard system directories. @item -x c @itemx -x c++ @itemx -x objective-c @itemx -x assembler-with-cpp @findex -x c @findex -x objective-c @findex : @samp{.c}, @samp{.cc}, @samp{.m}, or @samp{.S}. Some other common extensions for C++ and assembly are also recognized. If cpp does not recognize the extension, it will treat the file as C; this is the most generic mode. @strong{Note:} Previous versions of cpp accepted a @samp{-lang} option which selected both the language and the standards conformance level. This option has been removed, because it conflicts with the @samp{-l} option. @item -std=@var{standard} @itemx -ansi @findex -std @findex -ansi Specify the standard to which the code should conform. Currently cpp only knows about the standards for C; other language standards will be added in the future. @var{standard} may be one of: @table @code @item iso9899:1990 The ISO C standard from 1990. @item iso9899:199409 @itemx c89 The 1990 C standard, as amended in 1994. @samp{c89} is the customary shorthand for this version of the standard. The @samp{-ansi} option is equivalent to @samp{-std=c89}. @item iso9899:199x @itemx c9x The revised ISO C standard, which is expected to be promulgated some time in 1999. It has not been approved yet, hence the @samp{x}. @item gnu89 The 1990 C standard plus GNU extensions. This is the default. @item gnu9x The 199x C standard plus GNU extensions. @end table @item -Wp,-lint @findex -lint Look for commands to the program checker @code{lint} embedded in comments, and emit them preceded by @samp{#pragma lint}. For example, the comment @samp{/* NOTREACHED */} becomes @samp{#pragma lint NOTREACHED}. Because of the clash with @samp{-l}, you must use the awkward syntax above. In a future release, this option will be replaced by @samp{-flint} or @samp{-Wlint}; we are not sure which yet. @item -$ @findex -$ Forbid the use of @samp{$} in identifiers. The C standard does not permit this, but it is a common extension. @end table @node Concept Index, Index, Invocation, Top @unnumbered Concept Index @printindex cp @node Index,, Concept Index, Top @unnumbered Index of Directives, Macros and Options @printindex fn @contents @bye
http://opensource.apple.com//source/gcc/gcc-937.2/gcc/cpp.texi
CC-MAIN-2016-36
en
refinedweb
iParticleSystemFactory Struct Reference [Mesh plugins] Properties for particle system factory. More... #include <imesh/particles.h> Inheritance diagram for iParticleSystemFactory: Detailed Description Properties.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4.1/structiParticleSystemFactory.html
CC-MAIN-2016-36
en
refinedweb
Understanding Windows Communication Foundation This article provides an introduction to Windows Communication Foundation (WCF) followed by a list of what's new for WCF in .NET Framework 3.5. In the companion topic, Developing Connected Systems, the concepts discussed here are illustrated with code from the DinnerNow.net sample application. The Windows Vista Developer Story article Windows Communication Foundation contains additional background information. Windows Communication Foundation, a core component of .NET Framework 3.0, provides a service-oriented programming model, run-time engine, and tools for building connected applications. WCF unifies and extends the functionality of existing Microsoft connecting technologies by providing a single programming model independent of underlying communications protocols. WCF applications use open standards and protocols to interoperate with existing Microsoft and non-Microsoft technologies. WCF models network communication that consists of message exchanges between a client and a service. When a client sends a message, it travels through multiple layers of software, each performing some operation on the message, before being transported across the network to the server. At the server, the message again travels through multiple layers of software before being delivered to the service. These multiple layers of software are known as the communication stack. Each layer is governed by one or more standards or protocols, which specify the end result of the operation on the message at that layer. In WCF, the communication stack is represented as a binding. A binding consists of a set of binding elements, each of which represents a layer in the stack. The implementation of a binding element is known as a channel. At run time, WCF assembles the stack of channels specified by the binding. When data is sent, WCF translates the data into a message and passes the message through the stack of channels, so that the message is sent in accordance with the protocols identified by the binding. This process is reversed on the receiving end. The core concept behind WCF is the service, which is a .NET type that implements one or more service contracts with each contract exposed through an endpoint. An endpoint expresses all the information needed by a client to communicate with a service. An endpoint consists of an address, a binding, a contract, and optional behaviors. The address specifies where the service is located for a specific service contract. The binding declares the set of protocols that are used to send and receive messages. The contract defines the kind of messages that can be sent and received. Behaviors control optional service and client functionality. Endpoints can be defined programmatically or in an XML configuration file. WCF provides the Service Configuration Editor (SvcConfigEditor.exe) tool to aid in the creation and modification of endpoint XML definitions as well as other settings. The following portion of an XML configuration file shows an example of two endpoint definitions, which are discussed in subsequent sections. <service name="ProcessOrder"> <host> <baseAddresses> <add baseAddress="" /> </baseAddresses> </host> <endpoint address="orderProcess" binding="wsHttpContextBinding" contract="IProcessOrder" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> For more information, see Windows Communication Foundation Endpoints. Address The address of an endpoint is represented by the EndpointAddress class, which contains a Uri property used to locate the endpoint, an Identity property used for authentication of the endpoint, and an optional Headers collection property used for customization. In the XML representation, the Uri property is specified by the address attribute of the endpoint element and the Identity and Headers properties are specified as child elements. An address can be specified as absolute or as relative to a base address for the service. A base address simplifies the specification of multiple endpoints for a service and also allows for metadata discovery of the endpoint information. The scheme of the address ('http' in the preceding example) must match the transport of the corresponding binding. In the preceding XML example, is the address of the endpoint corresponding to the IProcessOrder contract implemented by the ProcessOrder service. When a WCF service is hosted in Internet Information Services (IIS) or Windows Process Activation Service (WAS), the address of the service is determined by the location of the corresponding service (.svc) file in the virtual Web directory; the address in the configure file is left empty. For more information, see Specifying an Endpoint Address. Binding A binding consists of an ordered stack of binding elements, or channels, each of which specifies part of the communication stack for the endpoint. WCF includes a set of system-provided bindings, such as the BasicHttpBinding and NetTcpBinding bindings, which are designed to cover most application requirements. For custom bindings, the two lowest layers in the stack must be specified: the transport binding element at the base of the stack and, just above it, the element that specifies the message encoding. Binding elements that specify the remainder of the stack are optional. For more information, see Windows Communication Foundation Bindings. Contract A service contract specifies the set of operations (methods) that can be called on an endpoint. A service contract is defined by applying the ServiceContractAttribute attribute to an interface or class declaration, and applying the OperationContractAttribute attribute to each method that is included as an operation of the service. Methods not marked with OperationContractAttribute are not exposed for use by clients of the service. The IsOneWay property of the RestaurantOrderComplete operation indicates that the operation does not return a reply message. If the IsOneWay property is false or missing, the operation sends a reply message even for a method with a void return value. The data type of each parameter or return value that participates in the contract must have the DataContractAttribute attribute applied. This attribute allows the type to be exchanged (serialized and deserialized) with the service. All .NET Framework primitive types, such as integers and strings, as well as certain types treated as primitives, such as DateTime and XmlElement, are serializable and considered to have an implicit data contract. Many .NET Framework collection and enumeration types also have implicit data contracts. For a list of these types, see Types Supported by the Data Contract Serializer. Custom types used in the data exchange that contain any member types that don't have an implicit data contract, must apply the DataMemberAttribute attribute to these members to allow them to participate in the exchange. For more information, see Designing Service Contracts, Designing and Implementing Services, and Using Data Contracts. The following code shows an example of a service contract and a data contract. [ServiceContract] interface IProcessOrder { [OperationContract(IsInitiating = true)] void Processorder(DinnerNow.Business.Data.Order newOrder); [OperationContract(IsOneWay = true)] void RestaurantOrderComplete(); } namespace DinnerNow.Business.Data { [DataContract] [Serializable] public class Order { [DataMember] public OrderItem[] OrderItems { get; set; } } } Behaviors Behaviors are types that modify or extend service or client functionality. There are four kinds of behaviors in WCF: Service behaviors (IServiceBehavior types) enable the customization of the service runtime including ServiceHostBase. For example, the ServiceMetadataBehavior class (<serviceMetadata>) controls the publication of service metadata and associated information. Endpoint behaviors (IEndpointBehavior types) enable the customization of service endpoints. For example, the WebHttpBehavior class (<webHttp>), when used in conjunction with the WebHttpBinding binding, enables WCF to expose and consume Web-style services. Contract behaviors (IContractBehavior types) enable the customization of the ClientRuntime and DispatchRuntime classes on the client and service, respectively. Operation behaviors (IOperationBehavior types) enable the customization of the ClientOperation and DispatchOperation classes on the client and service, respectively. For more information, see Specifying Service Run-Time Behavior. Metadata allows a client, knowing only a service's base address, to retrieve the service's endpoint information, which is everything that the client needs for further communication with the service. Metadata describes the addresses, bindings, and contracts, but not behaviors, of the service and, by default, is in the form of the Web Services Description Language (WSDL) and WS-MetadataExchange (WS-MEX) standards. Clients query for metadata using a GET request. Metadata is not published by default; a service must enable the ServiceMetadataBehavior behavior and provide a metadata endpoint. For an example of a metadata endpoint, see the second endpoint (with the "mex" address) in the example under the Endpoints heading. The following portion of an XML configuration file shows the enabling of the ServiceMetadataBehavior behavior. For more information, see Publishing Metadata Endpoints. A service must be hosted within a run-time environment that creates the service and controls its context and lifetime. WCF services are designed to run in any Windows process that supports .NET Framework 3.0. Hosting options range from a simple console or Windows form application to a Windows service, IIS, and Windows Process Activation Service (WAS). For more information on hosting options, see Hosting Services. Windows Application WCF services can be hosted in any managed application. This option requires the least infrastructure to deploy; an instance of ServiceHost is simply created and opened to make the service available. This option is convenient for testing the service during development. The following example shows a service hosted by a console application. Windows Service WCF services can be hosted in a Windows service where their lifetimes are controlled by the Windows Service Control Manager (SCM). The hosting code is similar to that above for the console application except the ServiceHost.Open and Close methods are called from overridden ServiceBase.OnStart and OnClose methods, respectively. Additionally, installation components must be created that install and register the Windows service. IIS WCF services that communicate only over the HTTP protocol can be hosted in IIS, where they can take advantage of IIS features such as process recycling, idle shutdown, process health monitoring, rapid fail protection, and the common configuration system. IIS must be properly configured, but the writing of hosting code as part of the application is not required. WAS WCF services that communicate over certain non-HTTP protocols, such as TCP, MSMQ, and Named Pipes, can be hosted in the new Windows Process Activation Service. WAS uses the IIS 6.0 process model and hosting features, but removes the dependency on HTTP. IIS 7.0 uses WAS for message-based activation over HTTP. Message-based activation over other WCF protocols is enabled by WCF components that plug into WAS. Applications that use these communication protocols can now take advantage of the IIS features that were previously only available to HTTP-based applications. To host a service, a new application is created in WAS (using the IIS Manager tool or Visual Studio®). The virtual directory must contain the following: The service implementation (either compiled in the /bin folder or as source code in the /App_Code folder). A service file (.svc). The service file provides the connection between a URI (the address of the service) and the service implementation. The @ServiceHost directive specifies the service and the service host factory used to instantiate the service. If the Factory attribute is unspecified, the ServiceHostFactory class is used, which returns an instance of the ServiceHost class. The following is an example of a service file that specifies a WorkflowServiceHost. A Web.config file, which defines the service settings in the <system.serviceModel> section. For more information, see Extend Your WCF Services Beyond HTTP with WAS. Some of the tools provided by WCF include: Service Configuration Editor (SvcConfigEditor.exe) Uses a graphical user interface (GUI) to create and modify settings located in the <system.serviceModel> section of a WCF configuration file. This section contains client, services, bindings, behaviors, and diagnostics settings. A wizard is provided that steps through the configuration of a WCF service or client Service Trace Viewer Tool (SvcTraceViewer.exe) Allows trace messages to be viewed, grouped, and filtered. ServiceModel Metadata Utility (Svcutil.exe) Generates code for service contracts, clients, and data types from metadata; creates metadata from compiled assemblies or running services. ServiceModel Registration Tool (ServiceModelReg.exe) Manages the registration of the ServiceModel assembly. Workflow Service Registration Tool (WFServicesReg.exe) Manages the registration of WF services. WCF Service Host (WcfSvcHost.exe) Uses the Visual Studio debugger to host a service. The service can then be tested using the WCF Test Client tool or some other client. WCF Test Client (WcfTestClient.exe) Submits input parameters to a service and shows the response that the service sends back. This GUI tool provides a seamless service testing experience when combined with the WCF Service Host tool. WCF Visual Studio Templates For more information, see Windows Communication Foundation Tools and Using the WCF Development Tools. WCF is enhanced with the following new and changed features in .NET Framework 3.5. For more information, see the WCF section in What's New in the .NET Framework Version 3.5. Workflow services Integrates Windows Workflow Foundation (WF) and WCF, allowing WCF services to be implemented and authored using WF workflows and workflows to be exposed as services. Durable services WCF services that use the WF persistence model to persist state information. WCF Web programming model Enables Web-style services with WCF. For more information, see Web Programming Model. WCF and ASP.NET AJAX integration WCF Web services that are accessible using Asynchronous JavaScript and XML (AJAX). For more information, see AJAX Integration and JSON Support. WCF syndication WCF partial trust changes
https://msdn.microsoft.com/de-de/library/cc179585.aspx
CC-MAIN-2016-36
en
refinedweb
This is your resource to discuss support topics with your peers, and learn from each other. 06-05-2009 11:22 AM Hi, I have created a BitmapField and added a Bitmap to it. The bitmapField is then placed inside a HorizontalFieldManager. What I would like to do is draw circles of a specific diameter at certain locations on top of the Bitmap. What would be the right approach to this?? I was thinking Graphics.fillArc(x, y, radius, radius, 0, 360); would do the trick.. but how can I specify which pixels of my BitmapField should be the center of the circle? Code:_pMap = Bitmap.getBitmapResource("pMap.png"); centerIMG = new BitmapField(); centerIMG.setBitmap(_pMap); _fmOnlineMiddle.add(centerIMG); This is where I would like to draw a circle at pixel x = 24, y = 24 of the Bitmap. (radius 6pixels) Any help would be greatly appreciated. Thanks, Dave Solved! Go to Solution. 06-05-2009 11:37 AM 06-05-2009 11:49 AM Thanks for your reply Adrian, Would you have some sort of code example for what you are suggesting? I dont really understand how to implement your suggestion. Thanks 06-05-2009 11:58 AM Try this: _pMap = Bitmap.getBitmapResource("pMap.png"); Graphics bitmapGraphicsContext = new Graphics(_pMap); bitmapGraphicsContext.drawArc(....params...) ; centerIMG = new BitmapField(); centerIMG.setBitmap(_pMap); _fmOnlineMiddle.add(centerIMG); 06-05-2009 01:25 PM Thanks for the example. When I try to implement it I get a "The Constructor Graphics(Bitmap) is undefined" error. for Graphics bitmapGraphicsContext = new Graphics(_pMap); I have imported import javax.microedition.lcdui.Graphics; and if I import import net.rim.device.api.ui.Graphics; it says that they collide. If i remove the lcdui import statement and replace it with the ui.Graphics my new screen doesnt come up. 06-05-2009 02:04 PM I meant net.rim.device.api.ui.Graphics class. In case you are using javax.lcdui classes to make user interface you have to stick with javax.lcdui package. Are you using javax.lcdui classes ? 06-05-2009 02:36 PM centerIMG = new BitmapField() { protected void paint(Graphics graphics) { super.paint(graphics); // graphics.drawArc(arg0, arg1, arg2, arg3, arg4, arg5) // put the right params for your arc here } }; 06-05-2009 04:21 PM Thanks for your suggestions. Something weird is going on with my app. I am pushing a new screen which displays some labels and bitmaps inside some vertical and horizontal managers. UiApplication.getUiApplication().pushScreen(new Online(_info)); I am not using the lcdui, but when I import the ui.graphics so that i can use Graphics to draw my circles the screen does not display at all. I am declaring my managers as follows: HorizontalFieldManager _fmOnlineMiddle= new HorizontalFieldManager(); this is how i loaded the Bitmap: _pMap = Bitmap.getBitmapResource("pMap.png"); Graphics bitmapGraphicsContext = new Graphics(_pMap); bitmapGraphicsContext.fillArc(24, 24, 12, 12, 0, 360); bitmapGraphicsContext.setColor(0x00053609); centerIMG = new BitmapField(); centerIMG.setBitmap(_pMap); _fmOnlineMiddle.add(centerIMG); Any thoughts? 06-05-2009 04:37 PM - edited 06-05-2009 04:38 PM Try to set color before you are drawing the circle. And I recommend to use Color class constants rather than "magic" numbers. For example. Color.RED 06-05-2009 05:10 PM Thanks for your help!! I got it to work.... I think it didnt like my "magic" numbers
https://supportforums.blackberry.com/t5/Java-Development/Draw-Circles-On-Top-Of-Bitmap/m-p/250533
CC-MAIN-2016-36
en
refinedweb
Langton Ants in PyGame Many moons ago I implemented Langton Ants on a ZX Spectrum 48K, and I have been fascinated by it ever since. I thought it would be fun to implement it in PyGame. Langton ants are simple creatures. They live in a grid of squares that can be one of two colors, and follow these two simple rules. - Am I on color 1? Flip the square, turn 90 degrees right, move forward 1 square. - Am I on color 2? Flip the square, turn 90 degrees left, move forward 1 square. That is all they do. They don't ponder the meaning of life or current affairs, they just check the color of the square they are on and then turn and move. You would think that something this simple would quickly get in to a cyclical pattern, but it turns out that Langton ants like to make crazy complex patterns and don't repeat themselves. Humor me, while I write a Python script to test it. First thing we need to do is define a few constants for use in the script, as follows. GRID_SIZE = (160, 120) GRID_SQUARE_SIZE = (4, 4) ITERATIONS = 1 ant_image_filename = "ant.png" The grid will be 160x120 squares, with squares that are 4x4 pixels (so that it fits inside a 640x480 pixel screen). The value of ITERATIONS is the number of moves an ant does each frame, increase it if you want the ants to move faster. Finally we have the filename of an image to represent the ant. I will be using this image: . Next we import PyGame in the usual manner. import pygame from pygame.locals import * We will represent the grid with a class that contains a two dimensional list (actually a list of lists) of bools, one for each square, where False is color 1, and True is color 2. The AntGrid class is also responsible for clearing the grid, getting the square color at a coordinate, flipping a square color and drawing itself to a PyGame surface. I chose white for color 1 and dark green for color 2, but feel free to change it if you want something different. class AntGrid(object): def __init__(self, width, height): self.width = width self.height = height self.clear() def clear(self): self.rows = [] for col_no in xrange(self.height): new_row = [] self.rows.append(new_row) for row_no in xrange(self.width): new_row.append(False) def swap(self, x, y): self.rows[y][x] = not self.rows[y][x] def get(self, x, y): return self.rows[y][x] def render(self, surface, colors, square_size): w, h = square_size surface.fill(colors[0]) for y, row in enumerate(self.rows): rect_y = y * h for x, state in enumerate(row): if state: surface.fill(colors[1], (x * w, rect_y, w, h)) Now that we have a grid, we can create a class for the ants. The move function in the Ant class implements the two rules, and the render function draws a sprite at the ants location so we can see where it is. class Ant(object): directions = ( (0,-1), (+1,0), (0,+1), (-1,0) ) def __init__(self, grid, x, y, image, direction=1): self.grid = grid self.x = x self.y = y self.image = image self.direction = direction def move(self): self.grid.swap(self.x, self.y) self.x = ( self.x + Ant.directions[self.direction][0] ) % self.grid.width self.y = ( self.y + Ant.directions[self.direction][1] ) % self.grid.height if self.grid.get(self.x, self.y): self.direction = (self.direction-1) % 4 else: self.direction = (self.direction+1) % 4 def render(self, surface, grid_size): grid_w, grid_h = grid_size ant_w, ant_h = self.image.get_size() render_x = self.x * grid_w - ant_w / 2 render_y = self.y * grid_h - ant_h / 2 surface.blit(self.image, (render_x, render_y)) Finally in the script, the run function handles the guts of the simulation. It sets up the screen, creates the the grid object then enters the main loop. Inside the main loop there are event handlers so that you can drop ants with the left mouse button, clear the grid with the C key and start the simulation with the SPACE key. The remaining portion of the run function moves all the ants and renders the screen. def run(): pygame.init() w = GRID_SIZE[0] * GRID_SQUARE_SIZE[0] h = GRID_SIZE[1] * GRID_SQUARE_SIZE[1] screen = pygame.display.set_mode((w, h), 0, 32) ant_image = pygame.image.load(ant_image_filename).convert_alpha() default_font = pygame.font.get_default_font() font = pygame.font.SysFont(default_font, 22) ants = [] grid = AntGrid(*GRID_SIZE) running = False total_iterations = 0 while True: for event in pygame.event.get(): if event.type == QUIT: return if event.type == MOUSEBUTTONDOWN: x, y = event.pos x /= GRID_SQUARE_SIZE[0] y /= GRID_SQUARE_SIZE[1] ant = Ant(grid, int(x), int(y), ant_image) ants.append(ant) if event.type == KEYDOWN: if event.key == K_SPACE: running = not running if event.key == K_c: grid.clear() total_iterations = 0 del ants[:] grid.render(screen, ((255, 255, 255), (0, 128, 0)), GRID_SQUARE_SIZE) if running: for iteration_no in xrange(ITERATIONS): for ant in ants: ant.move() total_iterations += ITERATIONS txt = "%i iterations"%total_iterations txt_surface = font.render(txt, True, (0, 0, 0)) screen.blit(txt_surface, (0, 0)) for ant in ants: ant.render(screen, GRID_SQUARE_SIZE) pygame.display.update() if __name__ == "__main__": run() And thats our finished Langton Ant simulation. Have a play with it, then come back and explain to me how two simple rules can create such a complex pattern.Download langtonants.zip Update: Here's a screenshot! > ...explain to me how two simple rules can create such a complex pattern. because the rules don't exist in a vacuum. They exist in an enviromnent where the previous results of the rules continue to matter. These two simple rules could only produce a simple result if the whole playing field was wiped clean on every iteration. Or was that question rhetorical? :) I modified the code to put 1000 ants on the screen just to see what happened. Interestingly only some of the ants made real patterns. Most just "wiggled" a bit, but mostly stayed on the spot? for i in range(1,10000,10): ant = Ant(grid, int(x) + (i % 80), int(y) + (i % 56), ant_image) ants.append(ant) I think that would put ants directly on top of each other, i.e on the same cell, which causes a much simpler pattern. If you place two ants on the same cell with the original code, you'll see the same thing happens. Its only when there are an odd number of ants that you get them making patterns. What's the meaning of 90% left/right? From code it seems like 100% left/right. D'oh! That should be 90 degrees left/right. Thanks for pointing it out. Well this seems to be a simplification of Conwa'ys "Game Of Life"'s_Game_of_Life as James Paige said it's patterns are like that because of the chain of events. Anyway a nice example on pygames. PS: a question, which syntax highligting are you using? currently all the ones I have found for wordpress suck. I'm using 'iG:Syntax Hiliter', which does suck because it messes things up when you switch between the wysiwyg and the code editor. :-( Hey, cool script. I was researching this phenomenon on Wikipedia, which linked here for a python script. Thanks, it worked wonderfully! After messing with it, I decided to have some fun with it. Also, would you recommend using pygame for projects like this? How did this go?
https://www.willmcgugan.com/blog/tech/post/langton-ants-in-pygame/
CC-MAIN-2016-36
en
refinedweb
section of the BUI controls overall settings for the share that are independent of any particular protocol and are not related to access control or snapshots. While the CLI groups all properties in a single list, this section describes the behavior of the properties in both contexts. For information on how these properties map to the CLI, see the Shares CLI section. Space within a storage pool is shared between all shares. Filesystems can grow or shrink dynamically as needed, though it is also possible to enforce space restrictions on a per-share basis. Quotas and reservations can be enforced on a per-filesystem basis. Quotas can also be enforced per-user and per-group. For more information on managing space usage for filesystems, including quotas and reservations, see the Space Management section. The logical size of the LUN as exported over iSCSI. This property is only valid for LUNs. This property controls the size of the LUN. By default, LUNs reserve enough space to completely fill the volume. See the Thin provisioned property for more information. Changing the size of a LUN while actively exported to clients may yield undefined results. It may require clients to reconnect and/or cause data corruption on the filesystem on top of the LUN. Check best practices for your particular iSCSI client before attempting this operation. Controls whether space is reserved for the volume. This property is only valid for LUNs. By default, a LUN reserves exactly enough space to completely fill filesystem. For more information, see the Reservation property. These are standard properties that can either be inherited from the project or explicitly set on the share. The BUI only allows the properties to be inherited all at once, while the CLI allows for individual properties to be inherited. The location where the filesystem is mounted. This property is only valid for filesystems. The following restrictions apply to the mountpoint property: Must be under /export. Cannot conflict with another share. Cannot conflict with another share on cluster peer to allow for proper failover. When inheriting the mountpoint property, the current dataset name is appended to the project's mountpoint setting, joined with a slash ('/'). For example, if the "home" project has the mountpoint setting /export/home, then "home/bob" would inherit the mountpoint /export/home/bob. SMB shares are exported via their resource name, and the mountpoint is not visible over the protocol. However, even SMB-only shares must have a valid unique mountpoint on the appliance. Mountpoints can be nested underneath other shares, though this has some limitations. For more information, see the filesystem namespace section. Controls whether the filesystem contents are read only. This property is only valid for filesystems. The contents of a read only filesystem cannot be modified, regardless of any protocol settings. This setting does not affect the ability to rename, destroy, or change properties of the filesystem. In addition, when a filesystem is read only, Access control properties cannot be altered, because they require modifying the attributes of the root directory of the filesystem. Controls whether the access time for files is updated on read. This property is only valid for filesystems. POSIX standards require that the access time for a file properly reflect the last time it was read. This requires issuing writes to the underlying filesystem even for a mostly read only workload. For working sets consisting primarily of reads over a large number of files, turning off this property may yield performance improvements at the expense of standards conformance. These updates happen asynchronously and are grouped together, so its effect should not be visible except under heavy load. Controls whether SMB locking semantics are enforced over POSIX semantics. This property is only valid for filesystems. By default, filesystems implement file behavior according to POSIX standards. These standards are fundamentally incompatible with the behavior required by the SMB protocol. For shares where the primary protocol is SMB, this option should always be enabled. Changing this property requires all clients to be disconnected and reconnect. Controls whether duplicate copies of data are eliminated. Deduplication is synchronous, pool-wide, block-based, and can be enabled on a per project or share basis. Enable it by selecting the Data Deduplication checkbox on the general properties screen for projects or shares. The deduplication ratio will appear in the usage area of the Status Dashboard. Data written with deduplication enabled is entered into the deduplication table indexed by the data checksum. Deduplication forces the use of the cryptographically strong SHA-256 checksum. Subsequent writes will identify duplicate data and retain only the existing copy on disk. Deduplication can only happen between blocks of the same size, data written with the same record size. As always, for best results set the record size to that of the application using the data; for streaming workloads use a large record size. If your data doesn't contain any duplicates, enabling Data Deduplication will add overhead (a more CPU-intensive checksum and on-disk deduplication table entries) without providing any benefit. If your data does contain duplicates, enabling Data Deduplication will both save space by storing only one copy of a given block regardless of how many times it occurs. Deduplication necessarily will impact performance in that the checksum is more expensive to compute and the metadata of the deduplication table must be accessed and maintained. Note that deduplication has no effect on the calculated size of a share, but does affect the amount of space used for the pool. For example, if two shares contain the same 1GB file, each will appear to be 1GB in size, but the total for the pool will be just 1GB and the deduplication ratio will be reported as 2x. Performance Warning: by its nature, deduplication requires modifying the deduplication table when a block is written to or freed. If the deduplication table cannot fit in DRAM, writes and frees may induce significant random read activity where there was previously none. As a result, the performance impact of enabling deduplication can be severe. Moreover, for some cases -- in particular, share or snapshot deletion -- the performance degradation from enabling deduplication may be felt pool-wide. In general, it is not advised to enable deduplication unless it is known that a share has a very high rate of duplicated data, and that that duplicated data plus the table to reference it can comfortably reside in DRAM. To determine if performance has been adversely affected by deduplication, enable advanced analytics and then use analytics to measure "ZFS DMU operations broken down by DMU object type" and check for a higher rate of sustained DDT operations (Data Duplication Table operations) as compared to ZFS operations. If this is happening, more I/O is for serving the deduplication table rather than file I/O. Controls whether data is compressed before being written to disk. Shares can optionally compress data before writing to the storage pool. This allows for much greater storage utilization at the expense of increased CPU utilization. By default, no compression is done. If the compression does not yield a minimum space savings, it is not committed to disk to avoid unnecessary decompression when reading back the data. Before choosing a compression algorithm, it is recommended that you perform any necessary performance tests and measure the achieved compression ratio. Controls the checksum used for data blocks. On the appliance, all data is checksummed on disk, and in such a way to avoid traditional pitfalls (phantom reads and write in particular). This allows the system to detect invalid data returned from the devices. The default checksum (fletcher4) is sufficient for normal operation, but paranoid users can increase the checksum strength at the expense of additional CPU load. Metadata is always checksummed using the same algorithm, so this only affects user data (files or LUN blocks). Controls whether cache devices are used for the share. By default, all datasets make use of any cache devices on the system. Cache devices are configured as part of the storage pool and provide an extra layer of caching for faster tiered access. For more information on cache devices, see the storage configuration section. This property is independent of whether there are any cache devices currently configured in the storage pool. For example, it is possible to have this property set to "all" even if there are no cache devices present. If any such devices are added in the future, the share will automatically take advantage of the additional performance. This property does not affect use of the primary (DRAM) cache. This setting controls the behavior when servicing synchronous writes. By default, the system optimizes synchronous writes for latency, which leverages the log devices to provide fast response times. In a system with multiple disjoint filesystems, this can cause contention on the log devices that can increase latency across all consumers. Even with multiple filesystems requesting synchronous semantics, it may be the case that some filesystems are more latency-sensitive than others. A common case is a database that has a separate log. The log is extremely latency sensitive, and while the database itself also requires synchronous semantics, it is heavier bandwidth and not latency sensitive. In this environment, setting this property to 'throughput' on the main database while leaving the log filesystem as 'latency' can result in significant performance improvements. Note that this setting will change behavior even when no log devices are present, though the effects may be less dramatic. Controls the block size used by the filesystem. This property is only valid for filesystems. By default, filesystems will use a block size just large enough to hold the file, or 128K for large files. This means that any file over 128K in size will be using 128K blocks. If an application then writes to the file in small chunks, it will necessitate reading and writing out an entire 128K block, even if the amount of data being written is comparatively small. Shares that host small random access workloads (i.e. databases) should tune this property to be approximately equal to the record size used by the database. In the absence of a precise number, 8K is typically a good choice for most database workloads. The property can be set to any power of 2 from 512 to 128K. Controls number of copies stored of each block, above and beyond any redundancy of the storage pool. Metadata is always stored with multiple copies, but this property allows the same behavior to be applied to data blocks. The storage pool attempts to store these extra blocks on different devices, but it is not guaranteed. In addition, a storage pool cannot be imported if a complete logical device (RAID stripe, mirrored pair, etc) is lost. This property is not a replacement for proper replication in the storage pool, but can be reassuring for paranoid administrators. Controls whether this filesystem is scanned for viruses. This property is only valid for filesystems. This property setting is independent of the state of the virus scan service. Even if the Virus Scan service is enabled, filesystem scanning must be explicitly enabled using this property. Similarly, virus scanning can be enabled for a particular share even if the service itself is off. For more information about configuration virus scanning, see the Virus Scan section. When set, the share or project cannot be destroyed. This includes destroying a share through dependent clones, destroying a share within a project, or destroying a replication package. However, it does not affect shares destroyed through replication updates. If a share is destroyed on an appliance that is the source for replication, the corresponding share on the target will be destroyed, even if this property is set. To destroy the share, the property must first be explicitly turned off as a separate step. This property is off by default. By default, ownership of files cannot be changed except by a root user (on a suitable client with a root-enabled export). This property can be turned off on a per-filesystem or per-project basis by turning off this property. When off, file ownership can be changed by the owner of the file or directory, effectively allowing users to "give away" their own files. When ownership is changed, any setuid or setgid bits are stripped, preventing users from escalating privileges through this operation. Custom properties can be added as needed to attach user-defined tags to projects and shares. For more information, see the schema section.
http://docs.oracle.com/cd/E26765_01/html/E26397/shares__shares__general.html
CC-MAIN-2016-36
en
refinedweb
Expand | Embed | Plain Text - #include <iostream> #include <cstring> // for the strlen() function int main() { - using namespace std; const int Size = 15; char name1[Size]; - // empty array - char name2[Size] = �C++owboy�; // initialized array // NOTE: some implementations may require the static keyword // to initialize the array name2 - cout << �Howdy! I�m � << name2; cout << �! What�s your name?\n�; cin >> name1; - cout << �Well, � << name1 << �, your name has �; cout << strlen(name1) << � letters and is stored\n�; cout << �in an array of � << sizeof(name1) << � bytes.\n�; cout << �Your initial is � << name1[0] << �.\n�; name2[3] = �\0�; - // null character - cout << �Here are the first 3 characters of my name: �; cout << name2 << endl; return 0; - } Report this snippet Tweet
http://snipplr.com/view/49335/
CC-MAIN-2016-36
en
refinedweb
IRC log of xproc on 2010-04-15 Timestamps are in UTC. 15:00:26 [RRSAgent] RRSAgent has joined #xproc 15:00:26 [RRSAgent] logging to 15:00:32 [Zakim] Zakim has joined #xproc 15:00:54 [Norm] zakim, this will be xproc 15:00:59 [PGrosso] PGrosso has joined #xproc 15:01:16 [Zakim] ok, Norm; I see XML_PMWG()11:00AM scheduled to start now 15:01:41 [ht] ht has joined #xproc 15:01:46 [Zakim] XML_PMWG()11:00AM has now started 15:01:48 [Zakim] +??P27 15:01:50 [Zakim] +[ArborText] 15:01:57 [ht] zakim, please call ht-781 15:02:02 [Zakim] ok, ht; the call is being made 15:02:04 [Zakim] +Ht 15:02:38 [Zakim] +Norm 15:02:51 [Norm] Meeting: XML Processing Model WG 15:02:51 [Norm] Date: 15 Apr 2010 15:02:51 [Norm] Agenda: 15:02:51 [Norm] Meeting: 171 15:02:51 [Norm] Chair: Norm 15:02:52 [Norm] Scribe: Norm 15:02:54 [Norm] ScribeNick: Norm 15:04:00 [Vojtech] Vojtech has joined #xproc 15:04:24 [Zakim] +Murray_Maloney 15:04:51 [Norm] zakim, who's on the phone? 15:05:16 [Zakim] +Vojtech 15:05:18 [Zakim] On the phone I see PGrosso, alexmilowski, Ht, Norm, Murray_Maloney, Vojtech 15:06:04 [Norm] Present: Paul, Alex, Henry, Norm, Murray, Vojtech 15:06:08 [Norm] Regrets: Mohamed 15:06:16 [Norm] Topic: Accept this agenda? 15:06:16 [Norm] -> 15:06:20 [Norm] Accepted. 15:06:26 [Norm] Topic: Accept minutes from the previous meeting? 15:06:26 [Norm] -> 15:06:29 [Norm] Accepted. 15:06:35 [Norm] Topic: Next meeting: telcon, 22 Apr 2010? 15:06:44 [Norm] No regrets heard. 15:06:52 [Norm] Topic: Status update on PR request 15:07:48 [Norm] Norm: Voting closes today. We've got 12 votes in favor, 1 with a change (the bug we want to fix) and 2 explicit abstentions. 15:08:39 [Norm] Henry: I hope I did what was needed. 15:08:46 [Norm] Norm: Yes. Looks fine to me, thanks Henry 15:08:58 [Norm] -> 15:09:11 [Norm] Topic: The default XML processing model 15:10:49 [Norm] -> 15:11:35 [Norm] Henry: I basically did what we said. We agreed to two changes. 15:12:04 [Norm] ...Make a new title, and this really is processor profiles, so I chose "XML processor profiles". The XML spec calls what we're talking about "an XML processor". 15:12:10 [Norm] ...I'm not wedded to the name. 15:12:31 [Norm] ...The other thing I did was add another profile. 15:12:58 [Norm] ...I tried to add another profile, to handle xml-stylesheet, but discovered that it was quite difficult. 15:13:37 [Norm] ...What the stylesheet PI does is lay off responsibility to other specs. 15:14:19 [Norm] Henry: I've reduced my expectations to just trying to get the correct infoset (or data model of choice). Once youv'e applied a stylesheet, or a GRDDL, it's not really "this" document anymore. 15:14:51 [Norm] ...My realizaation is that what I wanted to do with this spec was focus on getting the correct infoset. The fact that I couldn't do the stylesheet story in this spec didn't bother me as much as I thought it would. 15:15:38 [Norm] ...I also had the minor insight that if I was writing the media type registration for, say text/css, I might say something about the processing model profile but that would be in my spec, not in this spec. 15:16:23 [Norm] Murray: Are there two or three profiles? 15:16:38 [Norm] Henry: Two, and a discussion of what might be in some other profile. 15:17:08 [Norm] Murray: I'm sort of sympathetic to the ideas that Henry expressed. I wonder if Paul agrees? 15:17:50 [Norm] Paul: We can write a pipeline that tells you what to do with an XML document and a stylesheet PI, right? 15:18:05 [Norm] Norm: Well, for some PIs. For an XSL stylesheet, yes, but for CSS, it's less clear. 15:18:52 [Norm] Murray: You can load the pipeline into XSL or set a flag to indicate that it was amenable for XSL processing. 15:19:03 [Norm] Norm: Yes, you could set a variable or option. 15:19:29 [Norm] Murray: I thought one of the things you could do with the processing model is determine what kind of processing it's eligable for. 15:19:45 [Norm] ...So it might say that XSL was possible, or GRDDL, or other things. 15:20:06 [ht] Murray is enumerating some things of the sort which I called in a TAG musing "elaboration signals" -- things which signal the possibility of further processing 15:20:35 [ht] ... the use of certain elements from the XML Encryption namespace is another one 15:20:38 [Norm] Murray: Would it be useful to write that pipeline? 15:21:12 [Norm] Henry: Two years ago, when I was trying to get my head around this with my TAG hat on, I produced the elaborated infoset document. 15:21:28 [Norm] ...There's a notion in that which I think I called "elaboration signals". Murray's just reconstructed that idea. 15:21:45 [Norm] ...You've started to list the things that might be in the document that are signals for future processing. For example, encryption. 15:22:14 [Norm] Henry: Yes, I think that's a useful idea. I've never been able to get anywhere beyond the observation that there are these things. 15:22:45 [Norm] ...It's always seemed to be the case that it's human beings that make the decision about what to do. 15:23:11 [Norm] Murray: From a QA perspective: the delta between what could be done and what was actually done could be interesting and useful. 15:24:07 [Norm] ...What Henry said earlier about the fact that what XSLT creates for styling is another document, with GRDDL, I guess the same thing is true. 15:24:23 [Norm] ...But in the GRDDL case, it's asserted to be a faithful rendition of the information in this document. 15:25:13 [Norm] ...Another thing about the infoset with respect to GRDDL is that GRDDL decided that you might not have expanded entities, or exposed fixed attributes, etc. 15:25:50 [Norm] Henry: My inclination is not to bless that. Just because they did it doesn't mean we should make it easy. 15:26:09 [Norm] ...They're going below what we (I) think is the minimum. 15:26:27 [Norm] Murray: We could give it a name and then explain why you shouldn't use it. 15:29:05 [Norm] Norm: My concern is that you can't process documents that contain unexpanded entity references. Or documents that aren't namespace well-formed for that matter. 15:30:35 [Norm] Some further discussion about what the minimum profile means: it expands all entities, fills in attribute default values, etc. 15:30:50 [Norm] Henry: On a completely different topic, what should our short name be? 15:31:10 [Norm] Henry: I'm tempted by xprof, but I think the linguistic similarity to 'xproc' is too confusing. 15:31:31 [Norm] Paul: I suggested 'xml-proc-prof'. An abbreviation of processing profile. 15:31:50 [Norm] Norm: How about 'xmlprofiles' 15:31:56 [alexmilowski] xmlpp 15:32:03 [Norm] Murray: It's not an XML profile, it's an XML processing profile. 15:32:18 [Norm] ...And why profile not model? 15:32:53 [Norm] Henry: My reasoning was that when a spec gives you a set of choices, which is what the XML spec does, then a particular set of values for those choices is what I undersatnd is meant by the word "profile" 15:33:16 [Norm] ...Model is just one of those generic words that's lost all meaning. What would it mean not to be a model? It's just a noun to put after processor. 15:34:35 [Norm] Norm: Assuming we clean up the editorial issues, would anyone object to publishing this as the first public working draft? 15:35:12 [Norm] Alex: I really wonder about the xml stylesheet PI issue. I would really like to say something about what browsers do, but maybe that's more than we can achieve. 15:35:19 [Norm] Murray: Browsers don't do any of this, do they? 15:35:30 [Norm] Alex: Web browsers do more-or-less apply the XML stylesheet PI. 15:38:27 [Norm] Some wandering discussion of user agents, media types, stylesheets, validation, etc. 15:39:35 [Norm] Alex: If we had a processor profile for "apply style" then what the user agent does could be described as "select a stylesheet, through some implementation defined means" then do the "apply stylesheet" profile. 15:40:15 [Norm] Henry: What I'd like to do is take this document and see if we can get other specs to reference it: HTML5, xxx+XML media types, etc. 15:40:25 [Norm] Alex: I don't disagree, I just don't know if section 4 needs some tweaking. 15:42:49 [Norm] Norm: I'd like it out sooner and smaller so we can see what way the community goes with it. 15:43:02 [Norm] ... The community might love it or hate it and I can't predict which. 15:44:31 [Norm] Murray: I'd like to publish this soon. I'd like to see more detail in it about what we do with the infoset at each step in the process. 15:45:02 [Norm] ...Maybe with a catalog of infoset changes. And I wonder if as part of this process we wouldn't discover new info items to add to the infoset. 15:45:17 [Norm] ...Perhaps we discover that we set particular flags for every pipeline, shouldn't they just be in the infoset. 15:46:08 [Norm] Norm: Does anyone object to making more-or-less this document our FPWD? 15:46:30 [Norm] Murray: How about adding a paragraph or two about XML functions and how this document doesn't do that. 15:47:01 [Norm] No objections heard. 15:47:17 [Norm] Norm: Now we need a short name. 15:48:01 [ht] ht has joined #xproc 15:48:03 [Norm] Some proposals: xml-proc-prof, xppf, 15:48:36 [Norm] xml-processor-profiles 15:48:43 [Norm] xmlprocessorpofiles 15:48:52 [Norm] xml-processing-best-practices 15:49:01 [alexmilowski] xml-proc-profiles 15:49:27 [Norm] xpm 15:50:36 [Norm] Murray/Henry wrangle a little bit over the title again "profile" vs "model" 15:51:49 [Norm] Henry: My focus here is what are the invariants that you can count on in the information you get, not how you get it. 15:51:56 [Norm] ...I don't see this as a collection of pipelines 15:54:25 [alexmilowski] "Pipelines for XML Processors" :) 15:54:30 [Norm] xproc-profiles 15:54:38 [Norm] profiles-of-xml 15:54:52 [Vojtech] xmlp? 15:56:13 [PGrosso] I'm liking xml-proc-profiles 15:56:25 [ht] xml-proc-profiles 15:56:45 [Norm] Proposal: We use the short name xml-proc-profiles 15:56:46 [Vojtech] the short name most likely will contain 'xml' and 'processing', the question is about 'model' and 'profile' - so I wouldn't include it 15:58:00 [Norm] Accepted. 15:59:00 [Norm] Henry: I say we get this out by Monday and if no one objects before Wednesday then we go forward. 15:59:12 [Norm] Norm: Anyone object to that? 15:59:14 [Norm] None heard. 15:59:46 [Norm] Topic: Any other business? 16:00:18 [alexmilowski] Gotta run. Bye. 16:00:20 [Norm] None heard. 16:00:22 [Zakim] -Murray_Maloney 16:00:23 [Zakim] -alexmilowski 16:00:23 [Zakim] -PGrosso 16:00:23 [Norm] Adjourned 16:00:24 [Zakim] -Vojtech 16:00:24 [Zakim] -Ht 16:00:27 [Norm] rrsagent, set logs world-visible 16:00:31 [Norm] rrsagent, draft minutes 16:00:31 [RRSAgent] I have made the request to generate Norm 16:00:35 [Zakim] -Norm 16:00:36 [Zakim] XML_PMWG()11:00AM has ended 16:00:37 [Zakim] Attendees were PGrosso, Ht, alexmilowski, Norm, Murray_Maloney, Vojtech 16:01:26 [PGrosso] PGrosso has left #xproc 17:21:53 [Zakim] Zakim has left #xproc 17:35:50 [Norm] rrsagent, bye 17:35:50 [RRSAgent] I see no action items
http://www.w3.org/2010/04/15-xproc-irc
CC-MAIN-2016-36
en
refinedweb
The Addressing Model of the Open Packaging Conventions David Meltzer and Andrey Shur Microsoft Corporation June 2006 Applies to: Open Packaging Conventions Microsoft Office 2007 Microsoft Windows Vista Microsoft .NET Framework Summary: This article provides an overview of the addressing model used in the Open Packaging Conventions, including how packages and their parts are addressed, how relative references in package parts are resolved, and how applications can make use of the package addressing model with the help of the .NET Framework and classes. (16 printed pages) Contents Preface The Addressing Model Programming Support for "pack:" URIs References Preface As part of the design of Office 2007 and Windows Vista, Microsoft introduced the Open Packaging Conventions. These conventions describe how content can be organized in a "package." Some examples of content include a document, a collection of media, and an application library. Packages aggregate all of the components of the content into a single object. A word processing application may use packages, for example, to store the pages of a document, the fonts needed, and the images, charts, and annotations on the pages. A document viewing or management application may display only portions of the content in a package. Applications may use a package-based format, such as the XML Paper Specification (XPS), to send fixed-layout content and resources to a printer. This article provides an overview of the addressing model used in the Open Packaging Conventions, including how packages and their parts are addressed, and how relative references in package parts are resolved. This article also discusses how applications can make use of the package addressing model with the help of the .NET Framework and .NET Framework 3.0 classes. This article is written primarily for developers of applications that will handle, produce, or consume packages. For full normative information needed in order to implement the package addressing model, see the specification for the Open Packaging Conventions. The .NET Framework 3.0 and .NET SDKs provide more information about the classes and methods discussed. The material presented in this overview assumes a basic knowledge of the URI specification. The following terms are used according to RFC 3986: URI, URI reference, scheme component, authority component, path component, path-absolute, and relative reference. The term URI always denotes the absolute form of a URI—the scheme component is present, and all other components match the scheme-specific grammar. The term addressable, as used here, indicates that a URI exists that identifies a resource. The Addressing Model The Open Packaging Conventions define a logical model for organizing the content and resources of a package, and provide a mapping from this logical model to a physical representation, based on ZIP, XML, and other openly available technologies. The logical packaging model described by the Open Packaging Conventions defines a package abstraction that holds a collection of parts. Parts can have relationships to each other, and the package can have relationships to parts. The packaging model specifies how the parts in a package are named, referenced, and related. The addressing model defined by the Conventions is the foundation for being able to reference and obtain part resources in a package. Addressable Package Resources A package instance as a whole is an addressable resource, as is each part held in the package instance. Addressing a Package as a Unit Applications can use a URI with any scheme (for example, "http:", "ftp:", and so on) to address a package as a unit, acquiring the stream of bits comprising the whole package. Applications can also address a package by using the "pack:" URI scheme defined by the Open Packaging Conventions. This scheme specifies that the complete URI identifying the package is held in the authority component of a "pack:" URI in an encoded form. Example: Addressing a Package The package resource is addressed by "http:" and "pack:" URIs (respectively) in the following examples: pack://http%3a,, The MIME type of the acquired package resource indicates the file format of the package—for example, it can be XPS Document format (.xps), Office Open XML format (.docx), or some other format that conforms to the Open Packaging Conventions. For various reasons (such as improving performance), applications can use a URI specific to their domain as the authority component of a "pack:" URI. Such a URI is resolvable only in the context of a given application. The programming technique used for such application-specific URIs is described later, in "The PackageStore." Addressing Parts Within a Package Parts in a package are addressed using "pack:" URIs. The structure of a "pack:" URI that addresses a part is as follows: pack://<authority><path> Example: Addressing Parts pack://http%3a,, addresses the part named /fonts/arial.ttf, in the package addressed by. The authority component holds the encoded URI of the entire package; the path component holds the name of the part in that package. Part names conform to the grammar defined for the path-absolute URI component ([2], section 3.3), with some additional restrictions ([1], section 2.1.1.1). Example: Part Names /documents/doc1.xaml /pages/page4.xaml /fonts/arial.ttf Part names are case-insensitive ASCII strings. All parts in a package have unique names. Using Fragment Identifiers to Reference Parts Some applications using the Open Packaging Conventions can reference a part by using a non-"pack:" URI of a package unit with format-specific fragment identifiers. Example: Using a Non-"pack:" URI to Reference a Part The URI is used to reference the part that represents page 15 in the p1.xps document ([3], sections 9.2.2 and 9.2.3). Although it is valid and, for certain scenarios, useful to have non-"pack:" URIs refer to parts, such URIs cannot be used as base URIs to resolve relative references in the part content. Referring to Entries Within Parts A part is the most granular addressable resource within a package. However, applications might need to refer to entries within the content of a part. For certain content types, entries can be referenced by means of fragment identifiers ([2], section 3.5). The Open Packaging Conventions do not specify fragment identifiers. Applications using fragment identifiers are responsible for processing them properly. Example: Referencing Entries Within Parts pack://http%3a,,[@Id="A012"] refers to a set of XML nodes within the content of the part named /pages/page1.xaml, and having the Id attribute value of A012. Nested Packages Packages can be nested. A part in a package can hold content of any type, including a whole package. The parts of a nested package can be addressed by a "pack:" URI, with an authority component that indicates the part holding this nested package ([1], Appendix D.3). Example: Addressing Parts in Nested Packages A package located at contains a part named /nested-package,addressed by the "pack:" URI pack://http%3a,,. The part addressed by the preceding URI contains a package unit, which contains a part named /p1.xaml. The address of this part in the nested package is as follows: pack://pack%3a,,http:%253a%2c%2c. References in Part Content Parts having certain types of content, such as XML, can contain URI references. URI references can be URIs or relative references. URI references can be represented in the content by Unicode strings. Applications resolving such URI references must convert the strings to a URI form ([4], section 3.1). A relative reference is a URI that is expressed relative to the base URI of the content containing the reference. The default base URI for part content is the "pack:" URI addressing the part. Example: Base URI The base URI for a part named /pages/page1.xaml in a package addressed by is as follows: pack://http%3a,,. If an alternate base URI is needed in order to resolve relative references in the entries of the part content, an application must explicitly specify the alternate. Particular content types expose certain ways of specifying the alternate base URI. For example, XML uses the xml:base attribute, HTML uses the <base> element, and the Open Packaging Conventions use the TargetMode attribute for Relationship elements. Using the "pack:" URI of a part as the base URI for a relative reference guarantees that the referenced resource will be a part in the same package ([2], section 5.2), unless the relative reference is in the rarely used network-path form (that is, a relative reference beginning with "//"). Example: Resolving a Relative Reference The relative reference ../../page2.xaml within the part addressed as pack://http%3a,, is resolved to pack://http%3a,,, addressing the part named /page2.xaml. Package producers may use part names as a valid form of relative references. However, when using part names as relative references, producers should consider whether referenced parts can be addressed also as extracted resources outside of the package. Once parts have been extracted from a package, part names used as relative references might not be resolved as expected. The mandatory leading slash for part names, specified by part-name grammar, implies that such relative references are resolved from the root of the current authority. Example: Addressing Extracted Resources In the content of a part named /doc1/pages/page1.xaml, the relative reference /page2.xaml addresses the part named /page2.xaml, and the relative reference ./page3.xaml addresses the part named /doc1/pages/page3.xaml. After parts /doc1/pages/page1.xaml, /doc1/pages/page3.xaml, and /part2.xaml are extracted from the package to files named,, and (respectively), the relative reference ./page3.xaml addresses the file, which is expected; however, the relative reference /page2.xaml now addresses the file named. Relative References in Relationships The Open Packaging Conventions define connections between source and target parts in a package as relationships ([1], section 1.3). Relationships are grouped and stored based on their sources. A relationships part holds relationships that originate at the same source part. Each relationship is described by an XML element within the content of this relationships part. The relationships part is uniquely associated with this source part (and vice versa) by using a defined naming convention for the relationships part. The default base URI for the URIs specified in each Relationship element is the "pack:" URI of the source part ([1], section 1.3.5). The TargetMode attribute of a Relationship element indicates the base URI for the specified relationship. Example: Relationship Element The element in the relationships part that defines a relationship from a source part named /pages/page1.xaml to the target part /fonts/arial.ttf within the same package may look like the following: <Relationship Type="" TargetMode="Internal" Id="A123" Target="../fonts/arial.ttf"/> The Internal value of the TargetMode attribute indicates that the base URI for the Relationship element is the default for the relationships part content—and the same as the "pack:" URI of the relationships source part. In the preceding example, the base URI for the Relationship element is the "pack:" URI of the /pages/page1.xaml part. Relationships can also target external resources relative to the location of the whole package. Example: Relationship to External Target For a package located at, the XML element <Relationship Id="rId9" Type="" Target="Icon.JPG" TargetMode="External"/> defines the relationship targeting the file. The External value of the TargetMode attribute specifies that the relationship must target a resource outside of the package. If the Target attribute holds a relative reference, a base URI is required. The base URI for this relationship element must be the URI of the whole package. Delegating References to Relationships Some package-based formats might avoid using URI references in the content, delegating references to relationships. This delegation technique is based on using unique Id values on each Relationship element to map relative references in part content to corresponding relationships. Example: Mapping Relative References in Part Content to Relationships A package located at has a part named /word/document.xml that holds <a:blip. The relationships part attached to this part holds the element <Relationship Id="rId6" Type="" TargetMode="Internal" Target="media/image1.jpeg"/>. This links the element to the part named /word/media/image1.jpeg. The benefit of this approach is that an application can identify and maintain all references within a package, without looking at the content in the parts. However, if references are delegated to relationships, package parts extracted to loose files might not work properly. For relationship targets to work after extraction, a consuming application will require special knowledge about relationships, Open Packaging Conventions for naming relationship parts, and the definition of base URIs for relationship files. Programming Support for "pack:" URIs Applications producing and/or consuming packages will work with package and part addresses, and resolve relative references within the content of parts. .NET Framework 3.0, which delivers the set of next-generation managed APIs provided by Microsoft, includes classes that support the addressing model of the Open Packaging Conventions. These classes enable applications to compose and parse references, and to obtain package resources. The PackUriHelper class is used to facilitate the handling of "pack:" URIs. The PackWebRequest class is used to obtain resources addressed using "pack:" URIs. This section illustrates the functions that these services perform in composing, parsing, and resolving references. Making the Package Services Available The .NET Framework version 3.0 (or greater) must be installed in order to use the packaging services classes. The classes can be found in the System.IO.Packaging namespace. Before using System.Uri for operations where "pack:" URIs are involved, the "pack:" URI scheme must be registered for the application domain. The easiest way to register the "pack:" URI scheme is by calling any method of the PackUriHelper class. The scheme can also be registered, without calling the helper class, by using the UriParser.Register method, as shown in the following example. However, using this method requires security permissions. Example: Registering the "pack:" URI Scheme Obtaining the "pack:" URI of a Target Resource When consuming a package, the entire package, or one part at a time, can be obtained as an object. In either case, the PackUriHelper.Create method can be used to create the "pack:" URI of the package or part resource. This "pack:" URI is then passed to the PackWebRequest method in order to obtain the resource. PackWebRequest is discussed in more detail in the next section, "Obtaining Package Resources Using PackWebRequest." A common example of how the PackUriHelper and PackWebRequest classes can be used to support the consumption of a package is detailed in the following steps. If the package URI and part URI are known, an application can: - Compose a "pack:" URI from the package URI and part URI, using PackUriHelper. - Get a stream of bits by calling PackWebRequest. - Load the content of the part and parse to obtain relative references. - Resolve these relative references against the part base URI (the "pack:" URI composed in the Step 1). - Use System.Uri to resolve the relative references, and use PackWebRequest to obtain the indicated resources. Creating a "pack:" URI for a Desired Package The "pack:" URI of the package can be created by using the PackUriHelper.Create method. Example: PackUriHelper.Create //Given the URI for a package Uri packageUri = new Uri(" /local/today.container"); //Use the Create method to create a "pack:" URI from a non-"pack:" URI Uri packUri = PackUriHelper.Create(packageUri); //The resulting packUri value is //"pack://http%3a,," The created "pack:" URI is passed to PackWebRequest in order to obtain the package resource. Creating a "pack:" URI for a Desired Part The "pack:" URI of the part can be created by using the PackUriHelper.Create method. Example: PackUriHelper.Create //Given the URI for package Uri packageUri = new Uri(" /local/today.container"); //Given the URI for a part Uri partUri = new Uri("/sports.xml", UriKind.Relative); //Use the PackUriHelper.Create method to create a "pack:" URI Uri packUri = PackUriHelper.Create (packageUri, partUri); //The resulting packUri value is //"pack://http%3a,," The created "pack:" URI is passed to PackWebRequest in order to obtain the part resource. Resolving Relative References When processing the content of a part, relative references might be found that refer to other parts or resources. Resolving these references is a first step in obtaining the referenced resources. A relative reference in the content of a part is resolved against the base URI of a part to the "pack:" URI of the target part. The "pack:" URI of the target part is passed to PackWebRequest in order to obtain the part resource from the package. The name of a target part, derived from the "pack:" URI of the target part, can also be used to obtain a targeted part, by passing the part name to the Package.GetPart method. When resolving relative references to target parts, there are several paths that the resolution can take, depending on what information is available at the start, and whether the package is open (or can be opened). Two of these paths are as follows: - At the start: - The "pack:" URI of part containing the reference is known. - The package is not open. //Given the "pack:" URI for the part "/files/fixeddoc.xaml" //packUri = // "pack://http%3a,, // /files/fixeddoc.xaml" //The part "/files/fixeddoc.xaml" contains //the relative reference "../images/1.jpg" Uri relativeReference = new Uri("../images/1.jpg", UriKind.Relative); //Use System.Uri to directly obtain the absolute target URI Uri targetPackUri = new Uri(packUri, relativeReference); //The value of the resulting targetPackUri is //"pack://http%3a,, // /images/1.jpg" //Now another PackWebRequest can be made using //this targetPackUri value. - At the start: - The name of part containing the reference is known. - The package is open. //Given "package" as the current instance of the Package class. //Given the relative reference = "../../images/1.jpg" Uri relativeReference = new Uri("../../images/1.jpg", UriKind.Relative); //Given the URI of the part that contains the relative reference Uri partUri = new Uri("/files/fixeddoc.xaml"); //Use PackUriHelper.ResolvePartUri to obtain the resolved part URI //of the target based on the part URI above and the relative //reference in that part Uri targetPartUri = PackUriHelper.ResolvePartUri (partUri, relativeReference); //The resulting targetPartUri value is "fixeddoc.xaml" //Now use the package.GetPart method to obtain the target part PackagePart packagePart = package.GetPart(targetPartUri); Supplying Part Names to the Package Class Once a package is open, the Package class is useful for adding parts to packages, getting parts, and deleting parts. Methods of the Package class, such as Package.AddPart, Package.DeletePart, and Package.GetPart, take a part URI as a parameter. The PackUriHelper.CreatePartUri method can be used to create a valid part name from a reference that is relative to the base URI of the package. Example: PackUriHelper.CreatePartUri Service for Generating Relative References An authoring application might need to derive a relative reference that, when placed in the content of a source part, points to a target part. The GetRelativeUri method serves this purpose. Example: GetRelativeUri Example //Given the URI of the source part Uri sourcePartUri = new Uri("/tiles/pages/a.xaml", UriKind.Relative); //Given the URI of the target part Uri targetPartUri = new Uri("/images/event1/1.jpg", UriKind.Relative); //Use PackUriHelper.GetRelativeUri to generate the relative reference //that will be placed in the content of the source part. Uri relativeReference = PackUriHelper.GetRelativeUri (sourcePartUri, targetPartUri); //The resulting relativeReference value is "../../images/event1/1.jpg" Obtaining Package Resources Using PackWebRequest Applications can obtain packages and parts resources by using PackWebRequest, a class derived from System.Net.WebRequest. PackWebRequest returns a resource addressed by a given "pack:" URI. In general, initiating a PackWebRequest for a "pack:" URI consists of the following steps: - The "pack:" URI grammar is checked. - The authority component is extracted from the "pack:" URI, and checked to see whether it follows the grammar for an absolute URI. - If the path component of the "pack:" URI is empty: - The package stream is obtained for the authority component and returned to caller. Otherwise, if the path component is not empty: - The open package is obtained for the resource identified by the authority component. Depending on the CachePolicy set, PackWebRequest either obtains an open package from the PackageStore, or creates an inner WebRequest for the package resource and opens the package from the package stream returned. - The part is obtained using the path component of the "pack:" URI as the part name. - The part stream is obtained and returned to the caller. PackWebResponse.GetStream returns a stream of bits representing either the whole package (a package stream) or a single part in a package (a part stream). Unlike most WebResponse streams, a package stream is seekable using Stream.Seek. A package stream can be used to create a package object. When operating with package resource over an "http:" protocol, PackWebRequest supports progressive loading of parts: that is, the ability to obtain part resources in an arbitrary order, without loading all the data in the package up until the part data. PackWebRequest only provides facilities for resource consumption. It cannot be used to post or send data to a server. PackWebRequest currently does not support asynchronous operations (such as BeginGetResponse) nor does PackWebRequest support nested packages (described earlier, in "Addressing Parts Within a Package"). The PackageStore When loading packages that have parts containing numerous references to other parts, response time for resource requests can be improved, and network traffic can be reduced, by using the PackageStore. The PackageStore is an application-local dictionary of references to open packages. Each package registered in the PackageStore is identified by a key URI value. The PackageStore enables PackWebRequest to obtain resources as needed from a package, without making a server request each time another resource is needed from that package. The PackageStore is not changed automatically as a result of a call to PackWebRequest—it must be explicitly modified. There are two public methods used to add or remove references to open packages in the PackageStore: Package.AddPackage and Package.RemovePackage. The default cache policy (CacheIfAvailable) set on PackWebRequest causes the class to attempt to use the PackageStore to obtain the package. PackWebRequest can be forced to ignore the contents of the PackageStore when obtaining resources, by setting the cache policy to BypassCache, as described in the next section, "Cache Policies." When obtaining the bits of a package or part according to the default cache policy, PackWebRequest first checks the PackageStore to see whether there is a package registered with a key that is equal to the authority component of the "pack:" URI. If the PackageStore does not contain the package for the key, PackWebRequest will create an inner WebRequest to download the resource using the authority component of the "pack:" URI. Cache Policies Cache policies define the rules used for determining whether a resource request can be filled by using a cached copy of the resource. When using PackWebRequest, there are two levels of cache policy that can be explicitly set. A cache policy can be set for the PackWebRequest itself, controlling interaction with the PackageStore. And, a cache policy can also be set for the cache controlled by an inner WebRequest. The inner WebRequest can be accessed by PackWebRequest, using PackWebRequest.GetInternalRequest(). The cache policy set on the inner WebRequest will have no effect if the PackWebRequest.CachePolicy is set to CacheOnly, causing the package resource to be obtained from the PackageStore. PackWebRequest.CachePolicy supports a subset of policies, as listed below, due to the specific capabilities of the PackageStore. Table 1. PackWebRequest.CachePolicy policies For PackWebRequest, setting all other CachePolicy values results in a WebException. Progressive Loading PackWebRequest can progressively load a package part when the package is accessed over the "http:" protocol. Progressive loading allows applications to access part resources before the entire package is locally available. The progressive loading feature of PackWebRequest is automatic: the calling application experiences the improved performance without intervening. Progressive loading is based on making "byte-range requests" for resources, as defined in the "http:" 1.1 protocol. The ZIP file format, used to store packages in physical form, benefits from this mechanism, because the ZIP archive format keeps important information in a "central directory" at the physical end of the file. After requesting an entire package by using PackWebRequest, the service begins returning a stream upon which a caller can seek. When a package is opened on the stream provided by PackWebRequest, the caller can obtain parts more quickly than when making direct requests using, for example, the "http:" protocol. Services for Evaluating and Decomposing URIs Identifying Relationship Part Names Returned by the Package Class When managing a collection of parts that was obtained using the Package.GetParts method, relationship parts can be identified so that they can be handled separately from other parts. The PackUriHelper.IsRelationshipPartUri is used to identify whether a part is a relationship part. Example: PackUriHelper.IsRelationshipPartUri Two other PackUriHelper methods are available for working with relationship part names. PackUriHelper.GetRelationshipPartUri returns a relationship part name given a source part name. PackUriHelper.GetSourcePartUriFromRelationshipPartUri returns the source part name for a given relationship part name. Comparing URIs for Equivalence An application that uses a cache for storing parts when producing or consuming a package might need to perform checks for equivalent part names. The PackUriHelper.ComparePartUri method checks the equivalence of part names. Example: PackUriHelper.ComparePartUri //Given two part names in the same package //firstPartName = "/a.xaml" //secondPartName = "/A.xaml" //Use PackUriHelper.ComparePartUri to identify if the names //are equivalent. Bool isSamePartName = PackUriHelper.ComparePartUri (firstPartName, secondPartName); //The resulting isSamePartName value is "TRUE" To determine the lexical equivalence of two "pack:" URIs, use the PackUriHelper.ComparePackUri method. Example: PackUriHelper.ComparePackUri //Given two "pack:" URIs //firstPackUri = // "PACK://HTTP%3A,, // /FILES/FIXEDDOC.XAML" //secondPackUri = // "pack://http%3a,, // /files/fixeddoc.xaml" //Use PackUriHelper.ComparePackUri to identify if the same resource //is targeted. bool isSameResource = PackUriHelper.ComparePackUri (firstPackUri, secondPackUri); //The resulting isSameResource value is "TRUE" Extracting Component URIs from a "pack:" URI To extract the component package URI and part URI from a "pack:" URI, use the methods PackUriHelper.GetPackageUri and PackUriHelper.GetPartUri, respectively. Example: PackUriHelper.GetPackageUri //Given the "pack:" URI for a package Uri packUri = "pack://http%3a,, /files/abc.xaml"; //Use PackUriHelper.GetPackageUri to obtain the URI of the package Uri packageUri = new PackUriHelper.GetPackageUri(packUri); //The resulting packageUri value is //"" Example: GetPartUri Example References Open Packaging Conventions Uniform Resource Identifier (URI): Generic Syntax Open XML Paper Specification Internationalized Resource Identifiers (IRIs)
https://msdn.microsoft.com/en-us/library/aa480199.aspx
CC-MAIN-2016-36
en
refinedweb
Object Services Overview (Entity Framework) Object Services is a component of the Entity Framework that enables you to query, insert, update, and delete data, expressed as strongly typed CLR objects that are instances of entity types. Object Services supports both Language-Integrated Query (LINQ) and Entity SQL queries against types that are defined in an Entity Data Model (EDM). Object Services materializes returned data as objects, and propagates object changes back to the data source. It also provides facilities for tracking changes, binding objects to controls, and handling concurrency. Object Services is implemented by classes in the System.Data.Objects and System.Data.Objects.DataClasses namespaces. Object Context The ObjectContext class is the primary class for interacting with data in the form of objects that are instances of entity types that are defined in an EDM. An instance of the ObjectContext class encapsulates the following: A connection to the database, in the form of an EntityConnection object. Metadata that describes the model, in the form of a MetadataWorkspace object. An ObjectStateManager object that tracks objects during create, update, and delete operations. The Entity Framework tools consume a conceptual schema definition language (CSDL) file and generate the object-layer code. This code is used to work with entity data as objects and to take advantage of Object Services functionality. This generated code includes the following data classes: A typed ObjectContext class. This class represents the EntityContainer for the model and is derived from ObjectContext. Classes that represent entity types and inherit from EntityObject. Classes that represent complex types and inherit from ComplexObject. Using Object Services Object Services supports the following behavior for programming against the Entity Framework. Querying data as objects Object Services enables you to use LINQ, Entity SQL, or query builder methods to execute queries against an Entity Data Model and return data as objects. For more information, see Object Queries (Entity Framework). Shaping query results By default, Object Services only returns objects specifically requested in the query. When relationships exist between objects, you can specify whether a query returns related objects. You can also load related objects in a later request. For more information, see Shaping Query Results (Entity Framework). Composing queries using builder methods Object Services provides methods on ObjectQuery that are used to construct queries that are equivalent to Entity SQL and LINQ to Entities queries. For more information, see Query Builder Methods (Entity Framework). Adding, changing, and deleting objects Object Services persists data objects in memory and enables you to add, modify, and delete objects within an object context. Changes made to objects are tracked by the object context. For more information, see Adding, Modifying, and Deleting Objects (Entity Framework). Saving changes to the data source Object Services caches changes to objects in the object context. When explicitly requested, Object Services saves those changes back to the data source. For more information, see Saving Changes and Managing Concurrency (Entity Framework). Binding objects to controls Objects Services enables the binding of objects to controls that support data binding, such as the DataGridView control. For more information, see Binding Objects to Controls (Entity Framework). Attaching objects Objects Services enables you to attach existing objects directly to an object context. This enables you to attach objects that have been stored in the view state of an ASP.NET application or have been returned from a remote method call or Web service. For more information, see Attaching Objects (Entity Framework). Detaching objects An object context instance might need to be persisted for the duration of application execution, such as when objects are bound to Windows Form controls. Object Services enables you to manage the size of the object context by detaching objects to release resources when they are no longer needed. For more information, see Detaching Objects (Entity Framework). Serializing objects Object Services supports Windows Communication Foundation (WCF) data contract serialization, binary serialization, and XML serialization for objects. Data contract serialization is useful in Web services scenarios. Binary serialization is especially useful when using View State to persist objects in an ASP.NET application. For more information, see Serializing Objects (Entity Framework). Managing object identities and tracking changes Object Services uses identity values to track changes to objects, handle conflicts, and decide when to retrieved data from the data source. For more information, see Managing the Object Context (Entity Framework). Managing concurrency Object Services can track concurrency when the ConcurrencyMode attribute for one or more properties is set to "fixed." In this case, Object Services will raise specific exceptions when concurrency violations are detected. For more information, see Saving Changes and Managing Concurrency (Entity Framework). Managing connections Object Services enables you to explicitly manage the connection used by an object context and provide your own connection for the object context. For more information, see Managing Connections in Object Services (Entity Framework). Managing transactions Object Services supports .NET Framework transactions to coordinate operations against the data source and to enlist in distributed transactions. For more information, see Managing Transactions in Object Services (Entity Framework). Using custom objects with an Entity Data Model Object Services enables you to manually define your own objects or use existing objects with an Entity Data Model. For more information, see Customizing Objects (Entity Framework). See Also Other ResourcesObject Services (Entity Framework) Entity Framework Tasks
https://msdn.microsoft.com/en-us/library/bb386871(v=vs.90).aspx
CC-MAIN-2016-36
en
refinedweb
I am sysadm'ing a mixed cluster of pc/linux, sparc/solaris and dec/osf computers. I have installed successfully aspell on linux and suns, but I could not figure how to install it on DEC systems. I still had mail messages exchanged with the author of aspell in the past. uname -a shows: OSF1 axcdf4.pd.infn.it V4.0 878 alpha alpha 1) I configured aspell with ./configure alpha-dec-osf --prefix=/opt/gnu --with-gcc and ran make; the mangled names are too long for Digital's assembler (I remind you that GNU assembler and linker have NOT been ported to 64 bit architectures, and that I am forced to use Digital assembler and linker). The output from ./configure and make are in attachments in the first two files. 2) I have now a license for Kuck and Associates Inc. (KAI) compiler; I have then tried: make distclean CC=gcc CCC=KCC ./configure alpha-dec-osf --prefix=/opt/gnu make BUT your C++ is not compliant to the Standard. You are actually using namespaces for YOUR software, but you are ignoring that library symbols also use namespaces, and live in namespace std; so that I got an error in the very first procedure compiled. Inserting a statement "using namespace std;" in that procedure makes it compile successfully; but I do not want to manually edit all of your source files to insert such a statement (that is, indeed, a deprecated practice). Configure and make output are in attachment in the files three and four. Do I have to give up and to tell my users to resort on linux/solaris in order to spell-check their files, or do you have other suggestions? -- Maurizio Loreti Univ. of Padova, Dept. of Physics - Padova, Italy address@hidden config1.log Description: 1st configure output make1.log Description: 1st make output config2.log Description: 2nd configure output make2.log Description: 2nd make output
http://lists.gnu.org/archive/html/aspell-user/2000-02/msg00008.html
CC-MAIN-2016-36
en
refinedweb
/* * ntfs_usnjrnl.h - Defines for transaction log ($UsnJrnl)_USNJRNL_H #define _OSX_NTFS_USNJRNL_H #include <sys/errno.h> #include "ntfs_types.h" #include "ntfs_endian.h" #include "ntfs_layout.h" #include "ntfs_volume.h" /* * Transaction log ($UsnJrnl) organization: * * The transaction log records whenever a file is modified in any way. So for * example it will record that file "blah" was written to at a particular time * but not what was written. If will record that a file was deleted or * created, that a file was truncated, etc. See below for all the reason * codes used. * * The transaction log is in the $Extend directory which is in the root * directory of each volume. If it is not present it means transaction * logging is disabled. If it is present it means transaction logging is * either enabled or in the process of being disabled in which case we can * ignore it as it will go away as soon as Windows gets its hands on it. * * To determine whether the transaction logging is enabled or in the process * of being disabled, need to check the volume flags in the * $VOLUME_INFORMATION attribute in the $Volume system file (which is present * in the root directory and has a fixed mft record number, see layout.h). * If the flag VOLUME_DELETE_USN_UNDERWAY is set it means the transaction log * is in the process of being disabled and if this flag is clear it means the * transaction log is enabled. * * The transaction log consists of two parts; the $DATA/$Max attribute as well * as the $DATA/$J attribute. $Max is a header describing the transaction * log whilst $J is the transaction log data itself as a sequence of variable * sized USN_RECORDs (see below for all the structures). * * We do not care about transaction logging at this point in time but we still * need to let windows know that the transaction log is out of date. To do * this we need to stamp the transaction log. This involves setting the * lowest_valid_usn field in the $DATA/$Max attribute to the usn to be used * for the next added USN_RECORD to the $DATA/$J attribute as well as * generating a new journal_id in $DATA/$Max. * * The journal_id is as of the current version (2.0) of the transaction log * simply the 64-bit timestamp of when the journal was either created or last * stamped. * * To determine the next usn there are two ways. The first is to parse * $DATA/$J and to find the last USN_RECORD in it and to add its record_length * to its usn (which is the byte offset in the $DATA/$J attribute). The * second is simply to take the data size of the attribute. Since the usns * are simply byte offsets into $DATA/$J, this is exactly the next usn. For * obvious reasons we use the second method as it is much simpler and faster. * * As an aside, note that to actually disable the transaction log, one would * need to set the VOLUME_DELETE_USN_UNDERWAY flag (see above), then go * through all the mft records on the volume and set the usn field in their * $STANDARD_INFORMATION attribute to zero. Once that is done, one would need * to delete the transaction log file, i.e. \$Extent\$UsnJrnl, and finally, * one would need to clear the VOLUME_DELETE_USN_UNDERWAY flag. * * Note that if a volume is unmounted whilst the transaction log is being * disabled, the process will continue the next time the volume is mounted. * This is why we can safely mount read-write when we see a transaction log * in the process of being deleted. */ /* Some $UsnJrnl related constants. */ #define UsnJrnlMajorVer 2 #define UsnJrnlMinorVer 0 /* * $DATA/$Max attribute. This is (always?) resident and has a fixed size of * 32 bytes. It contains the header describing the transaction log. */ typedef struct { /*Ofs*/ /* 0*/sle64 maximum_size; /* The maximum on-disk size of the $DATA/$J attribute. */ /* 8*/sle64 allocation_delta; /* Number of bytes by which to increase the size of the $DATA/$J attribute. */ /*0x10*/sle64 journal_id; /* Current id of the transaction log. */ /*0x18*/leUSN lowest_valid_usn; /* Lowest valid usn in $DATA/$J for the current journal_id. */ /* sizeof() = 32 (0x20) bytes */ } __attribute__((__packed__)) USN_HEADER; /* * Reason flags (32-bit). Cumulative flags describing the change(s) to the * file since it was last opened. I think the names speak for themselves but * if you disagree check out the descriptions in the Linux NTFS project NTFS * documentation: */ enum { USN_REASON_DATA_OVERWRITE = const_cpu_to_le32(0x00000001), USN_REASON_DATA_EXTEND = const_cpu_to_le32(0x00000002), USN_REASON_DATA_TRUNCATION = const_cpu_to_le32(0x00000004), USN_REASON_NAMED_DATA_OVERWRITE = const_cpu_to_le32(0x00000010), USN_REASON_NAMED_DATA_EXTEND = const_cpu_to_le32(0x00000020), USN_REASON_NAMED_DATA_TRUNCATION= const_cpu_to_le32(0x00000040), USN_REASON_FILE_CREATE = const_cpu_to_le32(0x00000100), USN_REASON_FILE_DELETE = const_cpu_to_le32(0x00000200), USN_REASON_EA_CHANGE = const_cpu_to_le32(0x00000400), USN_REASON_SECURITY_CHANGE = const_cpu_to_le32(0x00000800), USN_REASON_RENAME_OLD_NAME = const_cpu_to_le32(0x00001000), USN_REASON_RENAME_NEW_NAME = const_cpu_to_le32(0x00002000), USN_REASON_INDEXABLE_CHANGE = const_cpu_to_le32(0x00004000), USN_REASON_BASIC_INFO_CHANGE = const_cpu_to_le32(0x00008000), USN_REASON_HARD_LINK_CHANGE = const_cpu_to_le32(0x00010000), USN_REASON_COMPRESSION_CHANGE = const_cpu_to_le32(0x00020000), USN_REASON_ENCRYPTION_CHANGE = const_cpu_to_le32(0x00040000), USN_REASON_OBJECT_ID_CHANGE = const_cpu_to_le32(0x00080000), USN_REASON_REPARSE_POINT_CHANGE = const_cpu_to_le32(0x00100000), USN_REASON_STREAM_CHANGE = const_cpu_to_le32(0x00200000), USN_REASON_CLOSE = const_cpu_to_le32(0x80000000), }; typedef le32 USN_REASON_FLAGS; /* * Source info flags (32-bit). Information about the source of the change(s) * to the file. For detailed descriptions of what these mean, see the Linux * NTFS project NTFS documentation: * */ enum { USN_SOURCE_DATA_MANAGEMENT = const_cpu_to_le32(0x00000001), USN_SOURCE_AUXILIARY_DATA = const_cpu_to_le32(0x00000002), USN_SOURCE_REPLICATION_MANAGEMENT = const_cpu_to_le32(0x00000004), }; typedef le32 USN_SOURCE_INFO_FLAGS; /* * $DATA/$J attribute. This is always non-resident, is marked as sparse, and * is of variabled size. It consists of a sequence of variable size * USN_RECORDS. The minimum allocated_size is allocation_delta as * specified in $DATA/$Max. When the maximum_size specified in $DATA/$Max is * exceeded by more than allocation_delta bytes, allocation_delta bytes are * allocated and appended to the $DATA/$J attribute and an equal number of * bytes at the beginning of the attribute are freed and made sparse. Note the * making sparse only happens at volume checkpoints and hence the actual * $DATA/$J size can exceed maximum_size + allocation_delta temporarily. */ typedef struct { /*Ofs*/ /* 0*/le32 length; /* Byte size of this record (8-byte aligned). */ /* 4*/le16 major_ver; /* Major version of the transaction log used for this record. */ /* 6*/le16 minor_ver; /* Minor version of the transaction log used for this record. */ /* 8*/leMFT_REF mft_reference;/* The mft reference of the file (or directory) described by this record. */ /*0x10*/leMFT_REF parent_directory;/* The mft reference of the parent directory of the file described by this record. */ /*0x18*/leUSN usn; /* The usn of this record. Equals the offset within the $DATA/$J attribute. */ /*0x20*/sle64 time; /* Time when this record was created. */ /*0x28*/USN_REASON_FLAGS reason;/* Reason flags (see above). */ /*0x2c*/USN_SOURCE_INFO_FLAGS source_info;/* Source info flags (see above). */ /*0x30*/le32 security_id; /* File security_id copied from $STANDARD_INFORMATION. */ /*0x34*/FILE_ATTR_FLAGS file_attributes; /* File attributes copied from $STANDARD_INFORMATION or $FILE_NAME (not sure which). */ /*0x38*/le16 filename_size; /* Size of the filename in bytes. */ /*0x3a*/le16 filename_offset; /* Offset to the filename in bytes from the start of this record. */ /*0x3c*/ntfschar filename[0]; /* Use when creating only. When reading use filename_offset to determine the location of the name. */ /* sizeof() = 60 (0x3c) bytes */ } __attribute__((__packed__)) USN_RECORD; __private_extern__ errno_t ntfs_usnjrnl_stamp(ntfs_volume *vol); #endif /* _OSX_NTFS_USNJRNL_H */
http://opensource.apple.com//source/ntfs/ntfs-65.1/kext/ntfs_usnjrnl.h
CC-MAIN-2016-36
en
refinedweb
A module for collecting internal application/library performance statistics Stats is a utility module which provides an API to collect internal application statistics for your programs and libraries. It is very thin and lite when running so as to not alter the performance of the program it is attempting to observe. Most importantly it provides Histograms of the stats it is observing to help you understand the behavior of your program. Specifically, for example, you can set up a stat for measure the time elapsed between sending a request and receiving a response. That stat can be fed into a Histogram to aid your understanding of the distribution of that request/response stat. Here is a sketched out example code of using stats, histograms, and namespaces. var Stats = require('stats') , stats = Stats() //a singleton NameSpace for all your stats, histograms, // and namespaces //returns a singleton via `new Stats()` as well , myns = stats.createNameSpace('my app') myns.createStat('rsp time', Stats.TimerMS) myns.createStat('rsp size', Stats.Value, {units:'bytes'}) myns.createHistogram('rsp time linLogMS', 'rsp time', Stats.linLogMS) myns.createHistogram('rsp size logBytes', 'rsp size', Stats.logBytes) ... done = myns.get('rsp time').start() conn.sendRequest(req, function(err, rsp) { ... done() myns.get('rsp size').set(rsp.size) }) ... console.log(myns.toString()) console.log(stats.get('my app').toString()) //works the same console.log(stats.toString()) //works as well with additional indentation The output might look like this: STAT rsp time 705 ms STAT rsp size 781 HOG rsp time linLogMS %14 %14 %14 %28 %28 4 10 ms 4 10^2 ms 5 10^2 ms 6 10^2 ms 7 10^2 ms HOG rsp size logBytes %57 %42 512-1024 bytes 1-2 KB Sure it could be prettier, but you get the gist of the data. For Histograms there is also an output mode that tries to semi-graphically display the output in bars of '#' characters. File sizes of my home directory looks like this: console.log(myns.toString({hash:true})) STAT file_sz 88 HOG file_sz SemiLogBytes 0-64 bytes %35 : ################################### 64-192 bytes %10 : ########## 192-448 bytes %16 : ################ 448-1024 bytes %10 : ########## 0-64 KB %24 : ######################## 448-1024 KB %2 : ## There are three big kinds of things: Stats, Histograms, and NameSpaces. Stats represent things with a single value (sorta). Histograms are, well, histograms of Stat values, and NameSpaces are collections of named Stats, Histograms, and other NameSpaces. Stats, Histograms, and NameSpaces are all EventEmitters, but this mostly matters just for Stats. The core mechanism for how this API works is that Stats emit 'value' events when their value changes. Histograms and other Stat types consume these change events to update their own values. For instance, lets say your base Stat is a request size. We create a Stat for request size either directly or within the context of a NameSpace: var req_size = new Value({units: 'bytes'}) myns.set('req_size', req_size) or myns.createStat('req_size', Stats.Value, {units:'bytes'}) When a new request comes in we just set the value for 'req_size' like so: req_size.value = req.size //setting .value causes an event or myns.get('req_size').value = req.size //setting .value causes an event [Note: From this point on I am just going to use the NameSpace version of this API because that is how you should be using this API.] I could consume this Stat for another stat like a RunningAverage, AND for a Histogram. myns.createStat('req_size_ravg', Stats.RunningAverage, {nelts:10}) myns.createHistogram('req_size', 'req_size', Stats.LogBytes) Both the RunningAverage and Histogram will be automagically be updated when we set the value of the 'req_size' Stat. Also, not that the Histogram can have the same name as the Stat ('req_size'). This is because all names exists in a unified namespace regardless of kind ( Stat, Histogram, or NameSpace). For Histograms there is an additional object called the Bucketer (I considered calling them Bucketizers but that was longer:P). The Bucketer, takes the value and generates a name for the Histogram bucket to increment. The Bucketer is really just a pair of functions: the bucket() function which takes a value and returns a bucket name; and a order() function which takes a bucket name and returns a arbitrary number used to determine the order each bucket is display in. The Bucketer maintains no state so there are a number of already instantiated Bucketer() classes which are just a stateless pair of bucket()/ order() functions. Stat Has no internal state. Inherits from EventEmitter. publish(err, value) if errelse reset() Emit a reset event. Value([opt]) opt is a optional object with only one property 'units' which is used in toString() Inherits from Stat. _value Internal, aka private, storage variable for the value of a stat. value Assigning to value causes a publish. (ssh! its magic) units String describing what is stored. set(value) Stores and publishes value get() Returns what is stored in value property. reset() Set _value to undefined and emit a reset event. toString([opt]) opt.sigDigitsnumber of significant digits of value displayed. Default: 6 opt.commifyboolean that specifies wheter to put commas in the integer part of the displayed value. (Sorry for all those in different locales) Default: false TimerMS() Measures to time between when the start() method is called and when the function start() returned is executed. This time delta is measured in milliseconds via Date.now(). Inherits from Value. start()Returns a function closed over when start()was called. When that function is called (no args), it stores & publishes the current time versus the time when start()was called. Its return the difference in milliseconds. If it is called with an arg, that arg is published as the error and the time delta as the second argument. TimerNS() Measures to time between when the start() method is called and when the function start() returned is executed. This time delta is measured in nanoseconds via process.hrtime(). Inherits from Value. start()Returns a function closed over when start()was called. When that function is called (no args), it stores & publishes the current time versus the time when start()was called. Its return the difference in nanoseconds. If it is called with an arg, that arg is published as the error and the time delta as the second argument. Count(opt) opt is a object with only one required property 'units' which is used in toString(). The second property stat provides a Stat object. When the Stat object emits a value inc(1) is called. Inherits from Value which inherits from Stat. So there is publish(), set(), get(), toString() units(required) stat(optional) If provided the Count object will call inc()on every 'value' event. inc([i]) Increments the internal value by i and publishes i. If no argument is provided i = 1. reset() Sets the count to 0. Emits a reset event. Returns the old count value. reset() set internal value to 0, and emit a 'reset' event. Return the old value. Rate(opt) opt is a required object with the following properties: stat(required) Stat object. When the Stat object emits a 'value'Rate will accumulate the valueto its' internal accproperty. period(default: 1) number of intervalmilliseconds between publishes of the calculated rate. Additionally, we calculate rate by dividing the internal accproperty by period(eg. value = acc / period). interval(default: 'sec') a string ('ms','sec','min','hour', or 'day') sets number of milliseconds per period. Additionally it sets the unitsproperty to be stat.units+"/"+interval. add(value) Add value to the internal value of Rate reset() set internal accumulator value to 0, and emit a 'reset' event. Return with the old value. MovingAverage(opt) opt is a required object with one required property stat and two optional units and nelts. opt.stat must be a object of type Stat. 'opt.units' is optional. If it is provided it will be used instead of opt.stat.units . Mostly, opt.units is not needed. nelts ioptional and defaults to 10. It is the number of values stored to calculate the moving average. see Wikipedia's Simple moving average definition add(v) adds a value v to the MovingAverage's fixed internal array of the last nelts values. toString() returns format("%s %s", mavg, units) where mavg is the last calculated moving average or the average of the values accumulated so far if the number of values is less than nelts. reset() sets the internal _value to 0. Deletes the internal list of the nelts values. Emit a 'reset' event. Returns the old _value. RunningAverage(opt) opt is a required object with one require property 'stat' and two optional properties: units and nelts. opt.stat must be a object of type Stat. 'opt.units' is optional. If it is provided it will be used instead of opt.stat.units . Mostly, opt.units is not needed. opt.nelts is optional and defaults to 10. It is the number used to calculate the running average. see Wikipedia's Running moving average definition add(v)uses the value vto calculate the RunningAverage Bucketer(bucketFn, orderFn) The Bucketer base class. The bucketFn takes a value and returns a "bucket" string. The orderFn takes a "bucket" string and returns an number that is only used for greater-than/less-than ordering comparisons for display purposes. For exampele bucket strings: "1 foo" "2 foos" "3 foos" "many foo", "1 bar", "2 bars", "3 bars", "many bar", etc could map to 1.0, 1.0001, 1.002, 1.03, 1.4, 2.0001, 2.002, 2.03, and 2.4. And that would work perfectly well. LinearBucketer(base, units) This is close to useless, but I included it for completeness. base is used as a divisor for the values passed to the linear.bucket(v) function. Often one would use 10 as the base resulting in a bucket for every 10, 20, 30, etcetra values. These objects are really just pairs of functions as the bucketizing functions are algorithmic and the units are built in. linearMS LinearBucketer order=10 units="ms" linearNS LinearBucketer order=10 units="ns" linearByes LinearBucketer order=10 units="bytes" logMS The buckets are "ms", "10 ms", "100 ms", "sec", "10 sec", "100 sec", "10^3 sec", "10^4 sec", "10^5 sec", "10^6 sec", "lot-o-sec". These buckets should read "single digit milliseconds", "tens of milliseconds", "hundreds of millisecons", "single digit seconds", "tens of seconds", "hundreds of seconds", "thousands of seconds", "millions of seconds" and "a whole shit-load of seconds". semiLogMS buckets map to "1-2 "+logMS(v), "2-4 "+logMS(v), "5-10"+logMS(v) where "x-y" means a range inclusive of x and exclusive of y aka [x,y). These bucket names should be read at "one or two ms", "two thru 4 ms", "five thru ten ms". "1-2" is 2 wide, "2-4" is 3 wide, 5-10 is 5 wide; with a progression of 2, 3, 5. That is what "semiLog" means. It sorta makes sence if you look at it from the right direction and cock you head to the side. linLogMS The buckets map to n+" "+logMS(v) where n is an integer. They are read as "one ms", "two ms", "three ms", etcetra. logNS Same as logMS but with 'ns' (nanosecond) and 'us" (microsecond) on the low-end. semiLogNS ditto with logNS linLogNS ditto with logNS bytes The buckets are "bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB", and "lots-o-bytes". These are classic orders of 10s of bytes ie 2^10, 2^20, 2^30, etc. KiB, MiB, GiB are crap created to appease marketroids and their lickspittle lackeys. Grrr... don't get me going ;) They are kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, zettabytes, yottabytes, and shit-load-of-bytes. semiBytes The buckets are "1-64 "+bytes(v), "64-192 "+bytes(v), "192-448 "+bytes(v), and "448-1024 "+bytes(v) The width of each bucket is progressively bigger "1-64" is 64 wid", "64-192" is 128 wide, "192-448" is 256 wide, and "448-1024" is 576 wide. So the progression is 64, 128, 256, 576 to cover a 1024 range. That fuzzy-ness is what "semi" means. logBytes The buckets are "0-2 "+bytes(v), "2-4 "+bytes(v), "4-8 "+bytes(v), "8-16 "+bytes(v), "16-32 "+bytes(v), 32-64 "+bytes(v), "64-128 "+bytes(v), "12-256 "+bytes(v), "256-512 "+bytes(v), and "512-1024 "+bytes(v). So first we cut the ranges down by the 2^(n*10), then by plain log2() for the 0-1024 remainder. best choice for mentally visualizing your data. Every toString() of every Stat type, Histogram, and NameSpace takes an optional "Options" object. These settings intentionally have different names, and a given name means the same thing anywhere it is used.
https://www.npmjs.com/package/stats-api
CC-MAIN-2016-36
en
refinedweb
Pages: 1 im googlin now since hours but didnt find anything useable... what i want.. typedef struct vertex { GLfloat x; GLfloat y; GLfloat z; } class SimpleClass { public: SimpleClass(); ~SimpleClass(); private: vector<vertex> v; string n; }; SimpleClass::SimpleClass() { n="something"; voxel x; x.x=0.0f; x.y=0.0f; x.z=0.0f; v.push_back(x); } test; -snip- test >> file -snip- test << file -snip- or something like that.. so i dont have to care about what i write or how to read it back... and all those other nasty shit a file is causing ;-) i found some lib's implementing stuff like that... like boost .. but i'd like to know more about... and also i don't want to use something else than the realy base-libs (I also bet that there MUST be something i didn't find yet) Offline this no innate way to do that without writing your own methods. I'd suggest using the archive stuff from boost Offline bad news :cry: Offline personally, I hate boost but you can always do this the C way, if you want... the C way is as follows: SomeClass c(); char buffer[sizeof(SomeClass)]; memcpy( (void*)buffer, (void*)&c, sizeof(SomeClass)); ... file << buffer; ... file >> buffer; memcpy( (void*)&c, (void*)buffer, sizeof(SomeClass)); c.SomeMethod(); keep in mind this isn't really the best way to do it, but it works Offline hmmm ic :? i'll really have to spend more time on this than i actually thougt i would have to ... well im already using SDL... but .. it got just some very very basic stuff (writing bitmaps to files) as far as i know at the moment (but supports zip and bz2 if i got it right - in a way you dont have to care about that it is compressed) anyway not really what i wanted to have... i guess my next step will be to have a look at some applications for 3D-model-creation (GPL/LGPL'ed ones of course ) so i implement support for there model-file-format and build the rest so it fits this way i dont run in the need of writing my own tools for import/export/filearchitecture or even a tool to create 3D-model's Offline Hmm.. we've built own functions in our company, to store our data. We're more or less serializing the objects by our own methods, deserializing it again as well. But .. a typedef with a type definition and without a type name interesting. We've overloaded our operators to filestreams, and wrote the serialize / deserialize ourselves, exactly what you don't want. But - you've been googling for hours? In an hour you'd have written this. // STi Ability is nothing without opportunity. Offline #include <iostream> #include "SDL.h" #include "SDL_opengl.h" using namespace std; typedef struct vertex { GLfloat x; GLfloat y; GLfloat z; }; // Main int main(int argc, char **argv) { vertex test; test.x=1.0f; test.y=2.0f; test.z=3.0f; cout << test.x << " " << test.y << " " << test.z << " " << endl; exit (0); }) eg.: vector<vertex> vertexvector; ---------------------------------------- about serialization... well... im working on a "hobby-project" ... not on a business one im a system-admin / network-operator .. no programmer - i just have some very basic knowladge of programming techniques and languages (cobol, pl/1,borland-c (ansii-c), borland-c++(ansii-c++), Visual C++ with mfc (just to mention that i did something with it - but just cuz we had to at school - bless god and beer that i forgot everything about it ), java) i just started programming again cuz of interest .. basically i wanted to know what im able to do with OpenGL therefore my intend isnt to finish everything as fast as possible but fiddeling arround and learning Offline) no, the proper typedef usage is: typedef [original type] [new type]; and a proper struct definition is: struct [name] { [data members] }; so, combining the two, you get: typedef [struct definition] [new name]; or typedef [name] { [data members] } [new name]; it's a C trick to do this: typedef struct _vertex { int x,y,z; } vertex; because under C, to declare a struct variable you must use "struct _vertex"... the typedef just makes it simpler... under C++ you can do: struct vertex {int x,y,z; } and use "vertex v;" just fine.... Offline ah thx Offline i guess my next step will be to have a look at some applications for 3D-model-creation (GPL/LGPL'ed ones of course )) Wings3D, ArtOfIllusion, Blender, Equinox3D I find Wings3D to be the most useable, but texturing in it is a pain, I do my models in Wings3D and texturing in AoI. Blender is the gimp of 3D, it does everything. Equinox3D looks interesting, I haven't tried it yet. I don't know what your application is, but you might be interested in using Python instead of C++; it has pickling for easy serialization, and the Soya3D API is about as easy to use a 3D API as you can get. Another option I might slip in is Java, which also has serialization and Java3D, a SceneGraphAPI and JOGL, a direct OpenGL layer. Python programs seem to have a lower framerate, but is easier to use. Java with JOGL is said to be about equivalent in speed to OpenGL. Java with Java3D is somewhere in between. Dusty Offline Thx a lot for those hints! ... i'll google em asap! I already had a look at blender... it seems to be a REALY powerfull app but its *mainly* a renderer ... i saw that there are scripts to export .blender files to something else.. like OBJ or 3DS, MD3,4,5... but couldnt figure out yet *how* or even if those scripts come with the arch pgk ... just read in posts that they exist .. (but i found a API-Docu for *Blender-scripting*...) what id like to have would be a GPL'ed/OS/FREE 3D-Model-creator able to export to common 3D-Model-file formats till now all i googeled about where win32 apps ... some free.. most commercial (starting at 1.000 Euro) as you maybe already guessed ... "common file format" according to games my intend with this *Project* is to code a game ... maybe you know Master of Orion 2, Birth of Federation, (guess) civilization,... Offline about Phyton/Java ... uhm.. well... i can't say anything about Phyton... never *saw* it.. just heard the name often... and java.. well.. from the very first moment up to now i simply [personal opinion]DON'T LIKE IT[/personal opinion] ... not even one piece of it... i can't really point at something why i don't like it but my main-reason why i don't wanna switch is: i started to "use/learn" SDL and OpenGL some time ago... i had a look at the tutorials(mostly NeHe)... but after some while it got boring to simply draw rotating/blitted thingies flying arround on the screen... also i liked to play Master of Orion 2... so why not combine those to and code a OpenGL/SDL game following the principle of MoO (no i dont think that i will ever finish it... i have too much ideas/too less knowledge and time ) and as i already said ... i do it as a hobby and for further education Offline Blender isn't really just for rendering. It can do everything... hard beast to tame though. I'm pretty sure Blender has scripts to export to more formats than anything. I even found one that exported directly to Java3D. If that's supported, anything is. Soya3D uses Cal3D files, which include animations and such all in the same file and you can call them, might be a useful format for a game because you don't have to write animations either (Soya was designed for gaming -- slune for example). Wings3D exports to a handful of modeling formats. You'll want to choose only one format in the end because you don't want to have to write a file-format loader for every single format out there. Dusty Offline Pages: 1
https://bbs.archlinux.org/viewtopic.php?pid=73002
CC-MAIN-2016-36
en
refinedweb
I see this issue every once in a while and thought it might make for a good tip. If you’re running into errors trying to integrate MDT and ConfigMgr 2007 then this one’s for you. Issue When trying to integrate MDT with a System Center Configuration Manager 2007 console via the option "Configure ConfigMgr Integration" on a Windows Vista or newer OS (Windows Vista, Windows 7, Windows Server 2008, Windows Server 2008 R2), the following error may occur: Copied binaries to <ConfigMgr_Install_Path>\AdminUI\Bin Copied extention files to <ConfigMgr_Install_Path>\AdminUI\XmlStorage\Extensions Successfully connected to WMI namespace \\MASTER\root\sms Located the provider for site <Site_Code> on server MASTER Validated site server and site code. Unable to compile <ConfigMgr_Install_Path>\AdminUI\Bin\Microsoft.BDD.SCCMActions.mof, rc = 3 Operation completed with warnings or errors. Please review the output above. Cause This issue is caused because ConfigMgr Integration needs to be run as an administrator. Resolution Right click on "Configure ConfigMgr Integration" and choose "Run as administrator". Frank Rojas | System Center Support Escalation Engineer Join the conversationAdd Comment Thank you kindly for your post. Being relatively new to win 2008, I would never have solved this myself. Tried it… Didn't work I got the same error. Got the same error. I tried running the "Configure ConfigMgr Integration" as an administrator, but with no success. I even tried manually compiling the file in an elevated command prompt: C:Windowssyste32>mofcomp "C:Program Files (x86)Microsoft Configuration ManagerAdminConsolebinMicrosoft.BDD.CM12Actions.mof" Then I receive this error: Parsing MOF file: C:Program Files (x86)Microsoft Configuration ManagerAdminConsolebinMicrosoft.BDD.CM12Actions.mof MOF file has been successfully parsed Storing data in the repository… An error occurred while processing item 1 defined on lines 9 – 15 in file C:Program Files(x86)Microsoft Configuration ManagerAdminConsolebinMicrosoft.BDD.CM12Actions.mof: Error Number: 0x80041003, Facility: WMI Description: Access denied Compiler returned error 0x80041003 FYI: UAC is disabled. My domain user account has local admin rights. Ok, I solved it by restarting the SQL database engine service via SQL Configuration Manager. It seems that this Microsoft.BDD.CM12Actions.mof has something to do with it. I hope that this workaround will help other people having this issue. Best, Dimitar
https://blogs.technet.microsoft.com/configurationmgr/2010/02/25/error-when-trying-to-integrating-mdt-with-configmgr-2007/
CC-MAIN-2016-36
en
refinedweb
In (8. The second jellybean problem is still very simple. 047 binary put option delta (1. DuPraw (Ed. The aggregate scholarship on developmental counseling remains somewhat informal and eclectic. Equa- tion (2. Recently, 2000. To create this test, employers must interview current employees and their superiors to identify discomforting aspects about a job (e. These factors com- bined with the aging revolution in the United States will lead to an increased focus within behavioral med- icine on providing services to older adults. A dosage response curve for the one rad range adult risks from diagnostic radiation.Ive been waiting in line for hours, days, weeks, months, Page 8 10 FUSSELL AND KREUZ years. Goldman, although in primary treatment, which many times is surgical, there is a very abrupt drop in tumor burden because of removal of the neoplasm, whereas secondary treatment may be chemotherapy andor radiation leading to a somewhat slower loss of tumor burden. 16) in (1. For example, the ideol- ogy of a societys health care system, including the s0030 Lifespan Development and Culture 571 Page 1404 s0035 expressed views of pediatricians, is usually oriented to the value system of the majority binary option 5 decimal.Krause, B. One of the difficulties is to determine the outcome measure. Ego-focused (e. The same speech sounds vary from one context in which they are heard to another, yet they are all perceived as being the same. And Herberman. If most of the athletes coping resources are in her ZAD, a successful transition can be predicted; in contrast, a crisis transition is expected if most of the athletes resources are in her ZPD. applet. There are two groups. Psychiatry Brain Res. In a matching-to-sample task, 1978. Their reaching movements are under visual control and therefore re- semble human reaching movements. drawString(keymsg, 10, 40); g. 0 yk yi which gives yQ 0. (1972). Health Perspect. This expense can be avoided through the use of voice excitation. The initial effect of affirmative action was substantial for African Americans and slightly positive for White women.10 3338. 21 Three-dimensional elements, (a) tetrahedron, (b) hexahedron and (c) prism (a) Free binary option no deposit bonus 3 1 (1,1,1) 2 (1,1,1) 2 (c) 3 5 (1,1,1) (1,1,1) 1 Page 86 THE FINITE ELEMENT METHOD 71 (3. Halpern. This history is presented in a chron- ological order to show how the field has developed over the last 100 years and how the field has been influenced by historical and social events. 150) 1 k 2 ρ 2s this corresponds to the fluid dispersion and moreover shows how finite ion temperature ωi ω2π12 (ωr kzuze0 ωe) ωeeΛiI0(Λi) kzvTe π12(ωe)2 (1Υ)(1TeTi)kzuze0Υωr Λi ̇ kzvTe 1(1Υ)TeTi3 Binary options trade strategies.Guo S. 3) (1. COMMUNITY PSYCHOLOGY AND THE FEMINIST PERSPECTIVE Between community psychology and feminist psy- chology, 148 612616. Include iostream using namespace std; A program to illustrate what happens when large integers are multiplied together.Confalonieri, C. Following Galtons seminal work, free binary option no deposit bonus moves to anotherjigtobefurthercombinedwithother smallsubassembliesanddetailssuchas brackets. Molecular approaches to cancer immunotherapy. In fact, situations free binary options guide which the contact free binary option no deposit bonus competition, members of one or both groups are frustrated, and members of the s0065 Stereotypes 483 Page 2115 s0070 two groups hold free binary option no deposit bonus morals or values will not be conducive for reducing stereotypes. 1997 (high energy) Animal protein) Vitamins A, but with our contemporary emphasis on diagnostic symptoms, we are more allied, I think, to the pre-Kraepelinian and the Schneiderian approaches than to many others. Docherty J, it has become customary in applied psychology to distinguish between fairness and bias. The sequence of these services involves data-based problem solving free binary option no deposit bonus includes functional assessment, consultation, and interventions. Repeat process as necessary until all of protein solution from Amicon device has been loaded into centricon device. To account binaryoptionsauthority com the differences, it is hard to determine whether they reflect differences in open expression or differences in experience of emotions due to different interpreta- tions of frustrating events. Social psychology The heart and the mind. Strictly speaking, the Massachusetts Supreme Court held that a prohibition on same-sex marriages violated the equal protection clause of free binary option no deposit bonus states con- stitution. There are a number of binary options trading investopedia variants to these ribozymes. Science, 286971974, 1999. Such sequences were found in the regulatory regions of several PB-responsive genes Binary options signals test et free binary option no deposit bonus. Nevertheless, some issues still remain unresolved. Mosher L. The advantage of the halon over the C02 extinguisher is that it is generally smaller and lighter. At first, they joked about it, familiarity of the words in the translated version, and the level of literacy needed among research participants to give meaningful responses to test items. ), its many applications present myriad opportunities to positively affect peoples lives. Advisor and helped me get started on my professional career. October6,1986,p. (1997). CognitiveBehavior Therapy Gallagher-Thompson and Thompson combined behav- ioral and cognitive therapies, which they termed cognitivebehavior therapy, for the treatment of depression in older adults. The latter region is probably dedicated Tongue and pharynx Tongue Chorda tympani Glossa- pharyngeal Brain stem Gustatory area of nucleus of solitary tract Gustatory cortex Gustatory binary-options-signals.ru Thalamus Brain stem Geniculate ganglion Petrosal ganglion Nodose ganglion Ventral posterior medial nucleus of thalamus Figure 8. New York Vintage Books. 7C1b-4h exhibits activity when added to 7C1a7C2, showing both stimulation and inhibition depending on conditions (Beeler, J.. Besides Stevens, other psychologists in the United States, such as Thurstone and Likert. GetHeight(null); g. n; long tmp new longnmax1; for (long k1; kn; k) { tmpk of(that(k)); } Permutation ans(nmax,tmp); delete tmp; return ans; 55 56 57 58 59 60 61 62 63 Free binary option no deposit bonus 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 Page 253 Free binary option no deposit bonus C for Mathematicians } Permutation Permutationoperator(const Permutation that) { (this) (this) that; return this; } Permutation Permutationinverse() const { Permutation ans(n); for (long k1; kn; k) ans. 99a) Let (, Ci, ) be the orthonormal vectors of incident free binary option no deposit bonus and polar- ization, and let (is, Cs, ks) be the orthonormal vectors of scattered direction and polarization. Antagonist Botulinum toxin blocks release of ACh.the neighborhood) these aspects are related to (1) the physical and aesthetic characteristics of residential areas, (2) the social relations that people can establish in a specific residential area, and (3) the actions and behaviors that people can per- form in a specific residential area. The resulting long chains of amino acids fold in different ways and com- bine to form proteins. Abnormalities at very early, subcortical stages of processing have been implicated in schizophrenia. 120. Stress and coping in U. As with other cancers of the gastrointestinal tract, it was found that concentrations above 10~ M completely inhibited cholinesterase binary options trading using paypal instantaneously. For example, cells in the monkey superior temporal sulcus respond to various forms of biological mo- tion including the direction of eye gaze, head movement, mouth free binary option no deposit bonus, facial expression, and hand movement. Models and Mechanisms of Individual Risk Taking 3. AddAdjustmentListener(this); horzSB. 50) (9. The properties of these and related compounds are now being 4 systematically studied. Consider the following hypothetic wave function for a range trading binary options confined in the region 4 x 6 (a) (b) (c) Sketch the wave function. REFERENCES 1. 3 Molecular Mass (kDa) 43. The model also pro- poses that individuals with less cognitive complexity are more affected by changes in structure than are those at a more mature level. Scientists have estimated the total number by counting the cells in a small sample of brain tissue and then free binary option no deposit bonus by the brains volume. We note that for a hypothetical tangent plane with normal C that is a perfect electric conductor, addressing both the biological and behavioral aspects of the problem is most effective. Free binary option no deposit bonus is similar. 3 Character procedures The following procedures require include cctype and provide convenient tools for manipulating characters.1979). New York Academic Press. Some content that appears in print may not be available in electronic books. Mensa, D. 239 2004 Elsevier Inc. There is also evidence that the use of written language becomes less complex in terms of sentence structure and vocabulary as the dementia progresses.Kuo, G. out. Anxiety is considered to be the major cause of stu- dent underperformance on examinations. Accordingly, IO psychologists will need to pay more attention to cross-cultural psychology, to topics of personality and interpersonal relations, and to work and non-work balance. The ECM thus provides a distinct environ- ment for different cells of the organism (Scott-Burden, T. Posner, M. It has been illustrated in Free binary option no deposit bonus 1. Here is a main to test the factor procedure. Staff may occasionally fill in for patients who are too ill or otherwise cannot fulfil the position for a time. The eyes are usually free binary option no deposit bonus binary options signal software reviews motion; so the scotoma moves about the vi- sual field, but neurons typically fire at a much lower rate of about 30 action po- tentials per second. Optitran supported nitrocellulose transfer and immobilization membrane (Schleicher Schuell, cat. Unidentified curved bacilli in the stomach of patients with gastritis and peptic ulceration. Returns a negative value if the invoking object has a lower value. 10101. Larsen, free binary option no deposit bonus becomes crucial to develop techniques how to use binary option can diminish excessive energy consumption. 0 Volume is 162. Anetzberger, life stresses, financial problems, mental disabilities, and lack of empathy for older people with disabilities may render some caregivers ill-suited for caregiving and, given the potential dynamics associated with s0055 2.imagining oneself competing with confidence). 461491). Moreover, in many collectivist cultures, womens family responsibilities include taking care of extended family members, in-laws. Photons are governed by Bose-Einstein statistics. Indeed, two studies on minority respondents have found greater support for recruitment than for the elimination of discrimination or any other AAP. If this had not been true, int current, int next) { init(owner, current, next, CELLS); System. Width 3; mybox2. The importance of combined pharmacologic and psychosocial therapies is also emphasized. A number of developments in research methodology. The dendritic branches then begin to form spines, the demonstra- tion of ability is self-referenced and success is realized when mastery or improvement is demonstrated. Parenthood While many couples choose to marry later and improve their quality of life prior to the birth of their first child, the decision to have children often accompanies a shift toward traditional gender roles in the family. The methods that add or remove listeners are provided by the source that generates events. H, especially in social free binary option no deposit bonus personal matters-that is, situations in which an exact calculation of future outcomes is not possible. 471484). The x direction thus corresponds to the r direction of cylindrical geometry, and the y direction corresponds to the θ direction.and Morris, H. This method was added by Java 2. To illustrate some of these, here is a program that finds the determinant, trace, eigenvalues, and inverse of a Hilbert matrix (compare with Program 13. 4) where IFLis the outward normal to the two dimensional volume V2 and S is the one dimensional surface enclosing V2. Free binary option no deposit bonus, A. Recently, relevant, and reliable assessment in health care. 281172343661 -0. Many cocaine users do not like to inject cocaine intravenously, and so they sniff a highly concentrated form of it called crack. Schober argues that forms of speaker perspective that rest on indirect evidence, such as conversational agendas, may be more problematic for listeners to identify as well as more difficult to study empirically. ) As an alternative to get(k) we provide operator (line 69). As already men- tioned, high-decerebrate animals show many of the component behaviors of rage, but the behaviors are not energetic, well integrated, or sustained. The occipitoparietal (dorsal) stream takes part in visual action and flows from area V1 to the posterior parietal visual areas. Morgan C, 1989. 2 Lumped Heat Capacity System In this section, we consider the transient analysis of a body in which the temperature is assumed to be constant at any point within and on the surface of the body at any given instant of time. Object. Rovira. For example, Chapter 1 The Genesis of Java 9 THE JAVA LANGUAGE Page 40 10 JavaTM 2 The Complete Reference when you read your e-mail, you are viewing passive data. This occurs despite the lack of significant differences between female and male infants in both birth length and weight. Johnston, in this instance the end point that is quantitated is the number of neoplastic lesions (NL). (1993). ), Handbook of environmental psychology (Vol. According to the source monitoring hypothesis, memory errors occur when a person attempts to identify where the memory (i. Sending a Message Along an Axon The ability of the membrane of an axon to produce an action potential does not of itself explain how a neuron sends messages. As noted in Table 12. If isDaemon is true, then the invoking group is flagged as a daemon group. Hypnagogic hallucinations (Greek hypnos, meaning sleep, and gogic, meaning enter into) are episodes of auditory, visual. Binary option отзывы. The transformation was performed with E coi DH5a bacteria obtained from Life Technologies.Brunello N. Both ideomotor and con- structional apraxia can be seen as examples of a dysfunction in this guidance system. (2001). Zelenski, J. Proc. The maximum is different for different sampling frequencies. Oxford, UK Clarendon. The reason for the omission of articles on such specific topics is because their essence is conveyed in other articles appearing in this or other sections of the encyclopedia. Still another answer is that those who par- ticipate can be viewed as informal representatives of those who do not attend, particularly if serious efforts Environmental Design and Planning, Binary options brokers in cyprus Participation in 797 s0025 Page 764 798 Environmental Design and Planning. 28) and boundary conditions (8. 181.vomiting), they may be diagnosed with binge eating disorder (BED). Enskog-like kinetic models for vehicular traffic. In a similar vein, systematic documentation is lacking on the ways in which the larger political system and the alliances and opponents of movement organizations influence movement participation. Percentage of responses to interior airflow according to opening conditions Percentage of observation () Percentage of observation () Page 117 4. Journal of Consumer Psychology, we look at the basic ideas behind arithmetic coding, study some of the properties of arithmetic codes, and describe an implementation. Best online binary trading sites where If the two roots of Eq. We now have Table 3. 302983571773 -0. Since the wave normal surface of the minus root has no resonance, its wave normal surface must be ellipsoidal (if it exists). 87 x lO-4 0. Global and National Benefits Increased productivity means generating the same goods and services with less inputs. Providing impetus and inquis- itorial intervention may lead to negative perceptions of fairness in the conflict process because these roles are more autocratic.144431443, 1994. (A) The visual system is intact. SchoolCommunity Partnerships Defined Free binary option no deposit bonus. The hy- pothalamus and pituitary also are intact, Morgan proposed a mental health model suggesting that athletic success is inversely related to psychopathology. T7 and SP6 promoter binding sites flanking the multiple clonmg side (MCS) enable the generation of in vitro runoff transcripts of Hh-Rz inserts 4. 3 With the supply of land fixed (S) in Figure 17-3, rent is equal to r when the market demand curve for land is D, and r when it is Forex binary option trading strategy 2012. Gwinn, 3, 208214. For example, Erikson theorized that each stage of development is characterized by a psychosocial crisis between oppos- ing tendencies (e. 42GHz I 4- 2- dd- imaginary - - - fld c. This is extended to a nested grid system in which finer grids are used in regions to yield detailed information. Experimental psychological contributions on detection of deception, eyewitness memory, and jury decision making are outlined. Since δB is assumed small, δB · Bmin would be larger in magnitude than (δB)2. 5 Adaptive Quantization Unifonn Quantizer 18. Braithwaite also shows that both value orientations are highly relevant for political attitudes and behaviors. Occasionally, a tech- nical term may be needed, but a parenthetical definition can help ensure that it is understood by the reader or listener in the way its meaning is intended.Binary options trading nigeria
http://newtimepromo.ru/free-binary-option-no-deposit-bonus-4.html
CC-MAIN-2016-36
en
refinedweb
Keith is talking about a "comitted choice" style of nondeterminism, where one of the arguments is picked and the computation continues from there. If you want a computation with backtracking, or a list of all possibly results then you should use the list monad, or another monad that supports nondetermanism. The tutorial "All About Monads" has a nice discussion of the list monad in section two, another example ("StateT with List") in section three, and it's a good introduction to monads overall, if you need one. If you wanted to pick x from 1 2 3 and y from 3 4 5 so that x squared equals y you could write results :: [(Int,Int)] results = do x <- [1,2,3] y <- [3,4,5] guard $ x^2 == y return (x,y) then results will be [(2,4)]. Brandon On Wed, 7 Apr 2004, Keith Wansbrough wrote: > > Hi, > >. > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > > >
http://www.haskell.org/pipermail/haskell-cafe/2004-April/006038.html
CC-MAIN-2014-15
en
refinedweb
I am trying to write a C# program that runs a python script. It, looks something like this: PythonEngine.Initialize(); PyList pyargv = null; PyObject sys = PythonEngine.ImportModule("sys"); if (sys.HasAttr("argv")) { pyargv = sys.GetAttr(new PyString("argv")) as PyList; } else { pyargv = new PyList(); foreach (string arg in argv) { pyargv.Append(new PyString(arg)); } } sys.SetAttr("argv", pyargv); string dir = System.IO.Path.GetDirectoryName(argv[0]); if (!String.IsNullOrEmpty(dir)) { PyList path = new PyList(sys.GetAttr("path")); PyObject pydir = new PyString(dir); if(!path.Contains(pydir)) { path.Append(pydir); } } PyObject module = PythonEngine.ImportModule(System.IO.Path.GetFileNameWithoutExtension(argv[0])); if(module!=null) module.Dispose(); PythonEngine.Shutdown(); So far, so good. At this point you may ask why I'm bothering and not just using python.exe. Just trust me, that it's a bit specialized, and I need to do a few other things. Namely, make some changes to the global namespace. So, I added a function to PythonEngine to return new PyDict(Runtime.PyEval_GetGlobals()); So that I can start adding some objects to it that the script can make use of. Unfortunately, PyEval_GetGlobals returns 0. It does so in PythonEngine.RunString as well. This seems kind of strange to me. What am I doing wrong? Thanks, Mark.
https://mail.python.org/pipermail/pythondotnet/2008-May/000807.html
CC-MAIN-2014-15
en
refinedweb
Forums MaxMSP Hi out there, Wonder if Max can do this? Can I create some regions in an audio file in BIAS Peak or another editor and have Max read them for eventual use in [coll], [sflist], [sfplay], etc.? I believe AIFFs need a kludge of some sort on the part of the application to read/write region definitions, but is there an agreed-upon way, or can (B)WAV work? Thanks, Brian I have been hoping for something similar as well (along with writing BWAV/other metadata) but I doubt it’s possible without someone interested enough writing an external. I’ve researched it a bit, reading/writing that stuff seems excruciatingly difficult – though it might be possible with libsoundfile, or using the actual JUCE framework (JUCE does have some built-in functions for reading/writing BWF metadata). Sadly, the region metadata is, I believe, a proprietary digidesign chunk – I don’t know if BIAS uses the same format as Pro Tools but it seems unlikely. Glad there’s at least one other person that wants to do things with metadata in Max, though. Hi, Yeah, glad to see I’m the only one wondering where this is. I assume you mean *AIFF* metadata is proprietary? I would think the BWAV stuff would be more standard & accessible, but I recall a support where even 2 very popular apps (Pro Tools & Peak) were not being able to get along about where to put it. Kind of surprising there isn’t anything out there to do it in Max, since this is the one of the basic building-blocks of editing. But I’m looking (and not capable of writing an external at this point) if someone can point us to it or has some insight in the issues involved to share. I haven’t looked much into AIFF metadata, actually, though the format is actually based on the same standard as .wav files: IFF is the base format (I think it’s “Interchange File Format”). .aif is an ‘AIFF’ – audio or Apple interchange file format, and .wav is ‘RIFF’ – resource interchange file format. RIFF is a microsoft format and is actually the same basic file format used for .avi’s as well. Wikipedia can explain it better than I probably can: Anyway, each one is comprised of ‘chunks’ which contain various bits of information. Each chunk starts with a four character code defining what the chunk contains – for instance RIFF has a ‘fmt ‘ chunk defining encoding and whatnot, while the actual audio data is stored in a ‘data’ chunk. Pro Tools stores its region definitions in a proprietary chunk (IE there is no documentation for it, though I’ve found that people who’ve figured it out are willing to share). This chunk is labelled ‘regn’ and contains things like the region name, where the region starts and stops, etc. But it’s much much more complex than that. Really the only major difference between AIFF and RIFF is endianness. (see )’. One plus to the way IFF files deal with chunking, though – a well written app will ignore chunks it doesn’t understand, and many chunks (but not all) don’t have to be in any particular order — most programs are perfectly happy with having a BWAV (properly known as ‘bext’) chunk come after the audio data – less of a header and more of a footer. On 26 nov. 08, at 07:50, Brian Heller wrote: > > Hi out there, > Wonder if Max can do this? Can I create some regions in an audio > file in BIAS Peak or another editor and have Max read them for > eventual use in [coll], [sflist], [sfplay], etc.? You should have a look at my [sfmarkers~] external. Mac only, and for markers only (although I could add regions in the future). -> _____________________________ Patrick Delges Centre de Recherches et de Formation Musicales de Wallonie asbl. p _____________________________ Patrick Delges Quote: Patrick Delges wrote on Wed, 26 November 2008 04:31 —————————————————- > >’m not particularly surprised that the chunks are all different – I hadn’t looked at AIFF much at all. > >. True but overall I’m still confused about writing/reading chunks in general. > > p > > _____________________________ > Patrick Delges > > Centre de Recherches et de Formation Musicales de Wallonie asbl > > > —————————————————- On 26 nov. 08, at 17:22, mushoo wrote: >> The EBU specs explains very precisely the structure of the bext >> chunk. > > True but overall I’m still confused about writing/reading chunks in > general. That’s another problem. My [sfmarkers~] is not open source (the C code is too disgusting and badly commented), but here is some JavaScript I coded a couple of years ago. It may give you some hints. I stopped using Js as soon as I noticed how slow it was to deal with file i/o, so this code is jut a test and is probably very buggy… ###save as addmarker.js // sfmarkers goes JS // [email protected] // users.skynet.be/crfmw/max // 2005 autowatch = 1; var theSourceFile; var theDestinationFile; var theMarkers; var lastMarkerID = -1 ; // -1 means there is no marker function bang() { display(); } function read (filename) { theMarkers = new Array; // fresh array var chunkTag = new Array (4); var chunkSize = new Array (4); theSourceFile = new File (filename, “read”, “AIFF”); post (“nFile:”,theSourceFile.filename, theSourceFile.isopen, “n”); theSourceFile.position += 12; // skip start of header, should check file type if (searchMarkerChunk (theSourceFile)) { readMarkerChunk (theSourceFile); // displayMarkers (); } else post (“nopen”); theSourceFile.close (); } searchMarkerChunk.local = 1; function searchMarkerChunk (theSourceFile) { var chunkTag = new Array (4); var chunkSize = new Array (4); var chunkTagString = new String ; var chunckSizeInBytes = 0; var theEof = theSourceFile.eof; do { // let’s jump from chunk to chunk // theSourceFile.position += parseInt(chunckSizeInBytes) ; chunkTag = theSourceFile.readbytes (4); chunkSize = theSourceFile.readbytes (4); chunckSizeInBytes = add4Bytes(chunkSize); chunkTagString = String.fromCharCode (chunkTag[0],chunkTag[1],chunkTag[2],chunkTag[3]); post (“Chunk”, chunkTagString, “Size”, chunckSizeInBytes,”pos:”, theSourceFile.position, “n”); } while (chunkTagString != “MARK” && (theSourceFile.position += parseInt(chunckSizeInBytes)) < theEof); return (chunkTagString == “MARK”); } readMarkerChunk.local = 1; function readMarkerChunk (theSourceFile) { var numberOfMarkers; lastMarkerID = 0; numberOfMarkers = theSourceFile.readint16 (1); post (“number of markers:”, numberOfMarkers, “n”); for (var i =0; i < numberOfMarkers; i++) { var aMarker = new Object; aMarker.id = theSourceFile.readint16 (1); lastMarkerID = Math.max (lastMarkerID, aMarker.id); aMarker.position = theSourceFile.readint32 (1); aMarker.nameLength = parseInt(theSourceFile.readbytes (1)); aMarker.name = (theSourceFile.readchars (aMarker.nameLength)).join(“”); if (!(aMarker.nameLength % 2)) // beware useless “space padding” in Peak theSourceFile.position++; theMarkers.push (aMarker); } post (“lastID”, lastMarkerID, “n”); } function display () { for (var i=0; i < theMarkers.length; i++) { var aMarker = theMarkers[i]; for (var j in aMarker) post (j, aMarker[j], “-”); post (“n”); } } function addMarker (name, position) { var newMarker = new Object; newMarker.id = ++lastMarkerID; newMarker.position = position; newMarker.nameLength = name.length; newMarker.name = name; theMarkers.push (newMarker); function saveFile (newFilename) { theSourceFile.open (); // open the last opened file theDestinationFile = new File (newFilename, “write”, “AIFF”); if (!theDestinationFile.isopen) { post (“Cannot create file”, newFilename, “n”); return; } theDestinationFile.writestring (“FORM”); var totalSizePostion = theDestinationFile.position; theDestinationFile.writeint32 (0); // to be updated later theDestinationFile.writestring (“AIFF”); // now, let’s copy the chunks. theSourceFile.position = 12; // go back at start… do { // let’s jump from chunk to chunk chunckSizeInBytes = add4Bytes(chunkSize); chunkTagString = String.fromCharCode (chunkTag[0],chunkTag[1],chunkTag[2],chunkTag[3]); if (chunkTagString == “MARK”) theSourceFile.position += parseInt(chunckSizeInBytes); // jump at the end of the chunk else { if (chunkTagString == “COMM”) { copyChunk (“COMM”, chunckSizeInBytes); // first we copy COMM post (“COMM “, chunckSizeInBytes, ” “, theSourceFile.position, “.”); writeNewMarkerChunk (); // then MARK } else { if (chunkTagString != “MARK”) // MARK is already done { copyChunk (chunkTagString, chunckSizeInBytes); post (chunkTagString, chunckSizeInBytes, ” “, theSourceFile.position, “.”); } } } } while (theSourceFile.position < theEof); // update size of FORM theDestinationFile.position = 4; theDestinationFile.writeint32 (theDestinationFile.eof – 8); theDestinationFile.close(); theSourceFile.close(); } writeNewMarkerChunk.local = 1; function writeNewMarkerChunk () var initialPositionInFile = theDestinationFile.position; theDestinationFile.writestring (“MARK”); theDestinationFile.writeint32 (0); // will be changed later theDestinationFile.writeint16 (theMarkers.length); for (var i=0; i < theMarkers.length; i++) { theDestinationFile.writeint16 (theMarkers[i].id); theDestinationFile.writeint32 (theMarkers[i].position); theDestinationFile.writebytes (theMarkers[i].nameLength); theDestinationFile.writestring (theMarkers[i].name); if (theDestinationFile.position % 2) theDestinationFile.writebytes (0); // padding } // write size of chunk var endPosition = theDestinationFile.position; var chunkSize = endPosition – initialPositionInFile; theDestinationFile.position = initialPositionInFile + 4; // jump back theDestinationFile.writeint32 (chunkSize – 8); theDestinationFile.position = endPosition; // jump to end of chunk } copyChunk.local = 1; function copyChunk(tag,size) { var i; var buffer; theDestinationFile.writestring (tag); theDestinationFile.writeint32 (size); if (size < 32) theDestinationFile.writebytes (theSourceFile.readbytes (size)); else for (i=0;i { buffer = theSourceFile.readbytes (32); if (buffer.length) { i += buffer.length; theDestinationFile.writebytes (buffer); } else { post (“weird!n”); break; } } } add4Bytes.local = 1; function add4Bytes (anArray) { var sum = 0; for (i=0,j=3; i<4; i++,j--) { sum += anArray[i] * Math.pow(256, j); // post (sum, anArray[i], Math.pow(256, j),”n”); // remove later! } return sum.toFixed(0); // huge difference with float !!! #### max patch max v2; #N vpatcher 683 526 1224 836; #P window setfont “Sans Serif” 9.; #P newex 165 273 31 196617 print; #P number 243 269 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0; #P message 180 225 29 196617 open; #P newex 185 246 62 196617 sfmarkers~; #P message 81 68 313 196617 saveFile ti_HD_X:/Users/pdelges/Projects/ sfmarkers+~/tmp.aiff; #P message 53 129 105 196617 addMarker acso 6666; #P message 28 111 79 196617 displayMarkers; #P message 83 162 83 196617 read nothing.aiff; #P newex 19 37 99 196617 bgcolor 255 227 68; #P message 99 204 294 196617 read ti_HD_X:/Users/pdelges/Projects/ sfmarkers+~/file.aiff; #P message 211 161 310 196617 ti_HD_X:/Users/pdelges/Projects/sfmarkers +~/file.aiff; #P message 93 184 65 196617 read file.aiff; #P newex 59 244 69 196617 js addmarker; #P connect 3 0 0 0; #P connect 1 0 0 0; #P connect 5 0 0 0; #P connect 6 0 0 0; #P connect 7 0 0 0; #P connect 8 0 0 0; #P connect 9 0 12 0; #P connect 10 0 9 0; #P connect 9 1 11 0; #P pop; _____________________________ Patrick Delges I’m too tired to read the whole thing right now, and I’m trying to avoid thinking about this project for the weekend, but I can tell you this: That code will be fantastically useful, and thank you. Just having something that describes the broad strokes to the process is awesome. (I’ve never dealt with Js, but I’m sure it will make sense, seems very readable, well commented.) You must be logged in to reply to this topic. C74 RSS Feed | © Copyright Cycling '74
http://cycling74.com/forums/topic/reading-region-data-from-aiffbwav/
CC-MAIN-2014-15
en
refinedweb
Code. Collaborate. Organize. No Limits. Try it Today. This article is being written in response of the need of building Microsoft Word document in an ASP.NET project. This article demonstrates how to create and modify document using Microsoft Word with ASP.NET. Automation is a process that allows applications that are written in languages such as Visual Basic .NET or C# to programmatically control other applications. Automation to Word allows you to perform actions such as creating new documents, adding text to documents, mail merge, and formatting documents. With Word and other Microsoft Office applications, virtually all of the actions that you can perform manually through the user interface can also be performed programmatically by using automation. Word exposes this programmatic functionality through an object model. The object model is a collection of classes and methods that serve as counterparts to the logical components of Word. For example, there is an Application object, a Document object, and a Paragraph object, each of which contain the functionality of those components in Word. Application Document Paragraph The first step in manipulating Word in .NET is that you'll need to add a COM reference to your project by right clicking in the solution explorer on References->Add Reference. Click on the COM tab and look for the Microsoft Word 10.0 Object Library. Click Select and OK. This will automatically place an assembly in your application directory that wraps COM access to Word. Now you can instantiate an instance of a Word application: Word.ApplicationClass oWordApp = new Word.ApplicationClass(); You can call the interesting methods and properties that Microsoft Word provides to you to manipulate documents in Word. The best way to learn how to navigate the object models of Word, Excel, and PowerPoint is to use the Macro Recorder in these Office applications: This takes you to the generated VBA code that accomplishes the task you recorded. Keep in mind that the recorded macro will not be the best possible code in most cases, but it provides a quick and usable example. For example to open an existing file and append some text: object fileName = "c:\\database\\test.doc"; object readOnly = false; object isVisible = true; object missing = System.Reflection.Missing.Value; Word.ApplicationClass oWordApp = new Word.ApplicationClass(); Word.Document oWordDoc = oWordApp.Documents.Open(ref fileName, ref missing,ref readOnly, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref isVisible, ref missing,ref missing,ref missing); oWordDoc.Activate(); oWordApp.Selection.TypeText("This is the text"); oWordApp.Selection.TypeParagraph(); oWordDoc.Save(); oWordApp.Application.Quit(ref missing, ref missing, ref missing); Or to open a new document and save it: Word.ApplicationClass oWordApp = new Word.ApplicationClass(); Word.Document oWordDoc = oWordApp.Documents.Add(ref missing, ref missing,ref missing, ref missing); oWordDoc.Activate(); oWordApp.Selection.TypeText("This is the text"); oWordApp.Selection.TypeParagraph(); oWordDoc.SaveAs("c:\\myfile.doc"); oWordApp.Application.Quit(ref missing, ref missing, ref missing); In C#, the Word Document class's Open method signature is defined as Open(ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object). What this means is that in C# the Open method takes 15 required arguments, and each argument must be preceded with the ref keyword and each argument must be of type object. Since the first argument is a file name, normally a String value in Visual Basic. NET, we must declare a variable of type object that holds the C# string value, hence the code: Open Open(ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object, ref object) ref object String string object fileName = "c:\\database\\test.doc"; Although we only need to use the first argument in the Open method, remember that C# does not allow optional arguments, so we provide the final 14 arguments as variables of type object that hold values of System.Reflection.Missing.Value System.Reflection.Missing.Value If you are using automation to build documents that are all in a common format, you can benefit from starting the process with a new document that is based on a preformatted template. Using a template with your Word automation client has two significant advantages over building a document from nothing: By using a template, you can fine-tune the placement of tables, paragraphs, and other objects within the document, as well as include formatting on those objects. By using automation, you can create a new document based on your template with code such as the following: Word.ApplicationClass oWordApp = new Word.ApplicationClass(); object oTemplate = "c:\\MyTemplate.dot"; oWordDoc = oWordApp.Documents.Add(ref oTemplate, ref Missing,ref Missing, ref Missing); In your template, you can define bookmarks so that your automation client can fill in variable text at a specific location in the document, as follows: object oBookMark = "MyBookmark"; oWordDoc.Bookmarks.Item(ref oBookMark).Range.Text = "Some Text Here"; Another advantage to using a template is that you can create and store formatting styles that you wish to apply at run time, as follows: object oStyleName = "MyStyle"; oWordDoc.Bookmarks.Item(ref oBookMark).Range.set_Style(ref oStyleName); The project contains a file: CCWordApp.cs. I didn't want to write every time all the code necessary to insert text, open a file, etc...So I decided to write a class CCWordApp that wraps the most important function. This is a brief description of the class and its functions. CCWordApp public class CCWordApp { //it's a reference to the COM object of Microsoft Word Application private Word.ApplicationClass oWordApplic; // it's a reference to the document in use private Word.Document oWordDoc; // Activate the interface with the COM object of Microsoft Word public CCWordApp(); // Open an existing file or open a new file based on a template public void Open( string strFileName); // Open a new document public void Open( ); // Deactivate the interface with the COM object of Microsoft Word public void Quit( ); // Save the document public void Save( ); //Save the document with a new name as HTML document public void SaveAs(string strFileName ); // Save the document in HTML format public void SaveAsHtml(string strFileName ); // Insert Text public void InsertText( string strText); // Insert Line Break public void InsertLineBreak( ); // Insert multiple Line Break public void InsertLineBreak( int nline); // Set the paragraph alignment // Possible values of strType :"Centre", "Right", "Left", "Justify" public void SetAlignment(string strType ); // Set the font style // Possible values of strType :"Bold","Italic,"Underlined" public void SetFont( string strType ); // Disable all the style public void SetFont( ); // Set the font name public void SetFontName( string strType ); // Set the font dimension public void SetFontSize( int nSize ); // Insert a page break public void InsertPagebreak(); // Go to a predefined bookmark public void GotoBookMark( string strBookMarkName); // Go to the end of document public void GoToTheEnd( ); // Go to the beginning of document public void GoToTheBeginning( ); So the code to open an existing file will be: CCWordApp test ; test = new CCWordApp(); test.Open ("c:\\database\\test.doc"); test.InsertText("This is the text"); test.InsertLineBreak; test.Save (); test.Quit(); The demo project contains: Keep in mind that the directory ,in which you save the files, must be writeable. Please check the Web.config to change the
http://www.codeproject.com/Articles/3959/Microsoft-Word-Documents-from-ASP-NET?msg=2927715
CC-MAIN-2014-15
en
refinedweb
Typically this pattern is composed by two or more classes, one that is an abstract class providing template methods (non-abstract) that have calls to abstract methods implemented by one or more concrete subclasses. Often template abstract class and concrete implementations reside in the same project, but depending on the scope of the project, these concrete objects will be implemented into another project. In this post we are going to see how to test template method pattern when concrete classes are implemented on external project, or more general how to test abstract classes. Let’s see a simple example of template method pattern. Consider a class which is responsible of receiving a vector of integers and calculate the Euclidean norm. These integers could be received from multiple sources, and is left to each project to provide a way to obtain them. The template class looks like:; } } Developer that has written a concrete implementation will test only read() method, he can “trust” that developer of abstract class has tested non-abstract methods. But how are we going to write unit tests over calculate method if class is abstract and an implementation of read() method is required? The first approach could be creating a fake implementation: public class FakeCalculator extends AbstractCalculator { private int[] data; public FakeCalculator(int[] data) { this.data = data; } public int[] read() { return this.data; } } This is not a bad approach, but has some disadvantages: - Test will be less readable, readers should know the existence of these fake classes and must know exactly what are they doing. - As a test writer you will spend time in implementing fake classes, in this case it is simple, but your project could have more than one abstract class without implementation, or even with more than one abstract method. - Behaviour of fake classes are “hard-coded”. A better way is using Mockito to mock only abstract method meanwhile implementation of non-abstract methods are called. public class WhenCalculatingEuclideanNorm { @Test public void should_calculate_correctly() {)); } @Test public void should_calculate_correctly_with_negative_values() {)); } } Mockito simplifies the testing of abstract classes by calling real methods, and only stubbing abstract methods. See that in this case because we are calling real methods by default, instead of using the typical when() then() structure, doReturn schema must be used. Of course this approach can be only used if your project does not contain a concrete implementation of algorithm or your project will be a part of a 3rd party library on another project. In the other cases the best way of attacking the problem is by testing the implemented class.}
http://www.javacodegeeks.com/2012/06/testing-abstract-classes-and-template.html/comment-page-1/
CC-MAIN-2014-15
en
refinedweb
ad 1.2.2 Fast,. Basic examples Let's start with the main import that all numbers use to track derivatives: >>> from ad import adnumber Creating AD objects (either a scalar or an N-dimensional array is acceptable): >>> x = adnumber(2.0) >>> x ad(2.0) >>> y = adnumber([1, 2, 3]) >>> y [ad(1), ad(2), ad(3)] >>> z = adnumber(3, tag='z') # tags can help track variables >>> z ad(3, z) Now for some math: >>> square = x**2 >>> square ad(4.0) >>> sum_value = sum(y) >>> sum_value ad(6) >>> w = x*z**2 >>> w ad(18.0) Using more advanced math functions like those in the standard math and cmath modules: >>> from ad.admath import * # sin, cos, log, exp, sqrt, etc. >>> sin(1 + x**2) ad(-0.9589242746631385) Calculating derivatives (evaluated at the given input values): >>> square.d(x) # get the first derivative wrt x 4.0 >>> square.d2(x) # get the second derivative wrt x 2.0 >>> z.d(x) # returns zero if the derivative doesn't exist 0.0 >>> w.d2c(x, z) # second cross-derivatives, order doesn't matter 6.0 >>> w.d2c(z, z) # equivalent to "w.d2(z)" 4.0 >>> w.d() # a dict of all relevant derivatives shown if no input {ad(2.0): 9.0, ad(3, z): 12.0} Some convenience functions (useful in optimization): >>> w.gradient([x, z]) # show the gradient in the order given [9.0, 12.0] >>> w.hessian([x, z]) [[0.0, 6.0], [6.0, 4.0]] >>> sum_value.gradient(y) # works well with input arrays [1.0, 1.0, 1.0] # multiple dependents, multiple independents, first derivatives >>> from ad import jacobian >>> jacobian([w, square], [x, z]) [[9.0, 12.0], [4.0, 0.0]] Working with NumPy arrays (many functions should work out-of-the-box): >>> import numpy as np >>> arr = np.array([1, 2, 3]) >>> a = adnumber(arr) >>> a.sum() ad(6) >>> a.max() ad(3) >>> a.mean() ad(2.0) >>> a.var() # array variance ad(0.6666666666666666) >>> print sqrt(a) # vectorized operations supported with ad operators [ad(1.0) ad(1.4142135623730951) ad(1.7320508075688772)] Interfacing with scipy.optimize To make it easier to work with the scipy.optimize module, there's a convenient way to wrap functions that will generate appropriate gradient and hessian functions: >>> from ad import gh # the gradient and hessian function generator >>> def objective(x): ... return (x[0] - 10.0)**2 + (x[1] + 5.0)**2 >>> grad, hess = gh(objective) # now gradient and hessian are automatic! >>> from scipy.optimize import minimize >>> x0 = np.array([24, 17]) >>> bnds = ((0, None), (0, None)) >>>>> res = minimize(objective, x0, method=method, jac=grad, bounds=bnds, ... options={'ftol': 1e-8, 'disp': False}) >>> res.x # optimal parameter values array([ 10., 0.]) >>> res.fun # optimal objective 25.0 >>> res.jac # gradient at optimum array([ 7.10542736e-15, 1.00000000e+01]) Python 3 Download the file below, unzip it to any directory, and run: $ python setup.py install or: $ python3 setup.py install If bugs continue to pop up, please email the author.. - Downloads (All Versions): - 119 downloads in the last day - 724 downloads in the last week - 2321 downloads in the last month - Author: Abraham Lee - Documentation: ad package documentation - Keywords: automatic differentiation,first order,second order,derivative,algorithmic differentiation,computational differentiation,optimization,linear algebra - -: ad-1.2.2.xml
https://pypi.python.org/pypi/ad/
CC-MAIN-2014-15
en
refinedweb
Tomcat 7 Finalized timothy posted more than 3 years ago | from the not-yet-spayed dept. ." fp (-1) Anonymous Coward | more than 3 years ago | (#34895588) suck my HUGE fat juicy NIGGER COCK!!! Re:fp (-1) Anonymous Coward | more than 3 years ago | (#34895604) oh yeah, I'll suck it for you baby Web.xml is the reason I hate Spring (3, Interesting) euroq (1818100) | more than 3 years ago | (#34895608) Re:Web.xml is the reason I hate Spring (1) Anonymous Coward | more than 3 years ago | (#34895652) XML config files suck too. They managed to fail twice here. XML devils & details (5, Insightful) boorack (1345877) | more than 3 years ago | (#34895816):XML devils & details (2) carpecerevisi (890252) | more than 3 years ago | (#34895916) (4, Informative) Temujin_12 (832986) | more than 3 years ago | (#34897546):XML devils & details (1) euroq (1818100) | more than 3 years ago | (#34900720) Re:XML devils & details (1) SplashMyBandit (1543257) | more than 3 years ago | (#34898164) The plumbing you are talking about is mostly singleton stuff and most people use Spring to put singletons in more places they should. Singletons should only be for precious resources that you have one of because they are restricted (eg. hardware access, where two would interfere) or very expensive to create (eg. database pool).)?] ). This promotes re-usability (you can use the POJO in ways the creator never envisaged as you are not bound by the straight-jacket of how they thought you might use it [this straight-jacket affects more development than developers mis-configuring properly documented class]). This also promotes code efficiency since you can create as many threads as you need with the POJOs created in the thread (without the overhead of synchronizing access to a singleton). POJOs are still underused by many developers as they are too simple for minds that love the latest and greatest (those who think they are great developers for building complex systems in multiple programming languages when a much more straightforward design could have been used [at the expense of slightly more lines of code]). Re:XML devils & details (1) xero314 (722674) | more than 3 years ago | (#34901976) final artifact (jar, war, ear, etc.) it does not mean that we should have had different sets of compiled code. I have seen the same happen with spring, where different implementations of the application needed different spring configurations. Technically we could build these without including the spring context in the deployed artifact, but it's just easier to package it all up into a single artifact.)?] ). Again, another classic misuse of Spring. First of all spring promotes exposing getters and setters. The fact that by exposing getters and setters and using spring you have just created a fully configurable bean should be enough to push you toward exposing more properties. And more importantly, if you are considering spring while coding you classes, unless using specific spring helper libraries, then you are completely misunderstanding spring. The idea behind spring is to bind code together, in ways that were not explicitly hard coded. And since you don't seem to get it, a spring bean is a POJO. There are no special API's you have to implement to create a spring bean. There is no special design pattern you have to follow. You don't even need to expose getters and setters if you don't want to, though that would be foolish. And it's up to you how flexible you what your POJOs to be. You can code to interfaces, or just create concrete POJOs, spring doesn't care. And as I have said before, anyone can misuse a framework, but it takes real ignorance to blame the framework for that misuse. Re:XML devils & details (1) SplashMyBandit (1543257) | more than 3 years ago | (#34902222) things absolutely simple. Now, this is no worse than many other frameworks, but Spring holds itself as better than the others - yet it falls short. While you could debate the merits of the implementation of BeanKeeper, you'd be pretty hard pressed to beat its philosophy. Have a think whether Spring is designed with the same philosophy in mind. Re:XML devils & details (1) xero314 (722674) | more than 3 years ago | (#34902368) back it's adoption. If the BeanKeeper philosophy worked, then most of us developers would be out of work. The problem with BeanKeeper is that they assume two things that are almost universally false. First is that there is one right way to handle object persistence, and second that the application developer has full control of the data store. Nevermind that the philosophy, of keeping things simple, is internally inconsistent. There is nothing simple about the domain specific language they use for data querying, as they seem to base it on well known SQL syntax, but make arbitrary changes to nomenclature with no gain in simplicity. The basic philosophy of BeanKeeper is, If it's something simple then it should be simple to do. The problem with this is that what the philosophy should be is, If it's something simple, don't build a framework to handle it. Frameworks are for complex problems, easy problems are...well...easy. Re:XML devils & details (1) SplashMyBandit (1543257) | more than 3 years ago | (#34902882) Re:XML devils & details (0) Anonymous Coward | more than 3 years ago | (#34898390) factual configuration settings (for example: database URL, user and password for application). Mixing these two things is a major sin as plumbing and configuration have different characteristics. Yes, it's a sin and it takes an incompetent developer to do so. There's a reason JNDI exists. The kinds of things you're talking about should be configured in the container and treated by the application as a special kind of dependency. If you use this design pattern, it preserves the separation between application logic and operational concerns. It allows the sysadmins to change passwords, migrate services to or provision new servers and handle other non-development concerns without involving developers or pushing a new version of the application (or, worse yet, editing application xml files directly on the server.) Spring (and probably other IoC containers) make JNDI lookups beyond trivial. And it's relatively simple to simulate the JNDI context in tests, if necessary. So if your application context has anything other than plumbing in it, you're doing it wrong. Re:XML devils & details (1) xero314 (722674) | more than 3 years ago | (#34901904) indoor plumbing for making this possible, or I can accept that this particular design and usage of plumbing is done poorly. I can make any useful framework into a monstrosity (and at times I have), but that doesn't make the framework any less useful, you just have to learn how to use it correctly. I used to feel about annotations the way you feel about Spring, because I felt having run time processing directives hardcoded into compiled code was a bad idea. Annotating my bean as a Service, or as an Entity (for JPA/Hibernate) makes no sense, it's just a class and I should be able to use it as a service in one context and as a standard class in another, and the same goes for entities and any other runtime directives. But then I realized it's not the fault of the annotations framework, it's the fault of a specific implementation or usage of that framework. I hated Annotations so much it even turned me off from using spring config (which is annotation based spring configurations), but then I realized that the way spring config works, it doesn't have you putting runtime directives in you code, but only in dedicated configuration files that happen to be type safe and compiled. If you learn how to use your tools correctly, you will find out that they usually work much better than if you try to use the wrong tool. Re:Web.xml is the reason I hate Spring (0) Anonymous Coward | more than 3 years ago | (#34895920) Then you're obviously not doing it right. You can achieve decentralization by using resource includes. So if that's the single reason you hate spring (which has nothing to do with Tomcat) then you can love it again. Re:Web.xml is the reason I hate Spring (1) KermitTheFragger (776174) | more than 3 years ago | (#34896082):Web.xml is the reason I hate Spring (1) euroq (1818100) | more than 3 years ago | (#34900694) In any case, the web.xml file was thousands of lines long and I remember hating it. I'm sure there have been major improvements since then (about 4 or 5 years). Re:Web.xml is the reason I hate Spring (1) dlgeek (1065796) | more than 3 years ago | (#34896154) Re:Web.xml is the reason I hate Spring (1) Tridus (79566) | more than 3 years ago | (#34896370) XML is like violence. If it doesn't work, use more! - Someone else's /. sig Re:Web.xml is the reason I hate Spring (1) s2jcpete (989386) | more than 3 years ago | (#34896880) Re:Web.xml is the reason I hate Spring (1) euroq (1818100) | more than 3 years ago | (#34900618) Re:Web.xml is the reason I hate Spring (1) xero314 (722674) | more than 3 years ago | (#34901988) (4, Interesting) BeforeCoffee (519489) | more than 3 years ago | (#34895640). (1, Funny) Anonymous Coward | more than 3 years ago | (#34895670) Your post, while well-written and pleasant, did not include the word "suck". Re:Tomcat is as rock solid as it gets (3, Interesting) whizzter (592586) | more than 3 years ago | (#34895766) finally found a blog post about NIO not working with tomcat async). So it means that you need to be compiling/installing the tomcat code for the target platform before deployment instead of just copying over the JAR's or something like that as you'd be expecting with a Java app. - The internals are quite contrived, oh i'm sure there's a good reason and prolly has to do with the APR/NIO/"classic" multi-connector support but it made me drop the option of actually fixing the NIO support. (so the open source advantage goes out the door) - Last i tried (autnum) the API for servlet 3 async was there in tomcat7 but it seems the implementation wasn't? gave me alot of headache and the APR only support for their propeitary async api finally broke most of my faith in tomcat as a future platform. The most ironic part about all of this is that i decided to do a quick project with a "established" base system and not be hacking things up, but i'd say i rediscovered a truth. If you want to do something new with an established package you might just run into experimental features and/or features that would requie the entire system to be rearchitectured (both in the case of tomcat). I've now taken a big leap to an "experimental" platform(Node.JS) and while not perfect at all it was built from day one for the kind of things i wanted to do. Oh and working with Javascript on a small project is an absolute pleasure compared to Java :) Re:Tomcat is as rock solid as it gets (1) BeforeCoffee (519489) | more than 2 years ago | (#34899514) be told, I haven't used plain Tomcat in years - I need JEE, so I run with Geronimo. But I wouldn't touch Geronimo with a 10 foot pole unless it ran atop Tomcat. Perhaps the way Geronimo embedded Tomcat fixed some issues with Tomcat when they embedded it? I've been doing NIO-based non-blocking servlets in my apps for years without failure. I run everything through the Http11NIOConnectorGBean, and all of my servlets got a massive performance increase just by configuring that. As far as the code quality/architecture comments, I think you're probably right. Tomcat isn't the cleanest code. It's been so battle hardened, that I'll take it anyways. Tomcat is up there with JVM and Apache HTTPD in terms of trustiness for me. Node.JS sounds interesting, and I agree JavaScript is a fun and productive language to write code in. I don't think I could deploy it however, the underlying binaries have a lot of new-ish moving parts which scare me away. Dave Re:Tomcat is as rock solid as it gets (1) TheTurtlesMoves (1442727) | more than 3 years ago | (#34903116) -:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34896206) clustering a front end instance of Tomcat with back end instances executing working threads and then rotating between them and excluding the ones that are not responding and probably even killing/restarting them with scripts, has anybody found a simpler solution to this sort of a problem? Maybe I am just overlooking some simple way to use the Executor thread pool to time out and kill off Threads that do not have a live corresponding web request anymore somehow? Basically how can execution of a thread be stopped if a user is no longer expecting this request to return (either by closing the browser, or losing the connection or using the app menu to go to another page, or even if the execution takes too long and should be timed out?) How do you even time out Executor threads after a fixed amount of time? Re:Tomcat is as rock solid as it gets (-1) Anonymous Coward | more than 3 years ago | (#34896410) There is no way to know on the server-side without some javascript/polling mechanism that the browser has closed. Think of how HTTP works, everything is request and response. You are not qualified to write software, and the reason you need to restart the container is that your code is badly written. Re:Tomcat is as rock solid as it gets (0) roman_mir (125474) | more than 3 years ago | (#34897252) yeah, yeah, asshole. You have any idea how to time out an executor thread after an amount of time has passed, or no? If not, shut your yap. Re:Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 3 years ago | (#34897830) you just don't get it, there is no reliable way to know server side if the user closed his browser, disconnected his network adapter or went to another site, sure, you can use ThreadPoolExecutor#awaitTermination to timeout the thread. [springsource.org] When you say "Eventually what may happen is that the entire thing becomes useless until it's restarted." means that your code is badly written and uses too much resources, just fix that code. Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34897874) in HTTP and the server can't easily know that the client is no longer waiting for the response, but there has to be a way to deal with requests that do not need to be executed any longer. So, idiot, again, yap away, or provide a solution based on Tomcat configuration, otherwise there will have to be plenty of code written. Come on, do something USEFUL once. Re:Tomcat is as rock solid as it gets (1) Pieroxy (222434) | more than 3 years ago | (#34898132):Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34898216) waste because this particular issue is only with threads that are still going, sometimes going for 45 minutes, taking up processor and memory. So in reality, if this is not though through by the Apache team, then the question is: have they ever had to write code to produce very large reports on very large data sets and if not, why not? I am sure this is not the first time somebody is doing it in real time. The 45 minute requests are a rarity, all of the parameters are user set, so if a user chooses a very large data set to go through and then decides not to wait for it, or tries to change the parameters of the request and restart it again - that's a normal use case. Yes, some reports take thousands of SQL executions and massive amounts of data to be held in memory. No, this is not an every day occurrence. Yes, there are more than 1 user, around 50 people doing various things. Yes, it would be nice to have a cluster, but most of the time this is not an issue so it's not really a hardware problem. It is possible to have checks in the code itself, and verify between executions of SQL statements whether this request should be terminated, so that's one way of doing it. But if the servlet container is not helping by providing at the very minimum a time out mechanism for the Executor threads then why not? It should be. -- So in short, with your 12 years of experience, you are the one who looks like a dumb ass to me if you can't imagine a situation where Executor thread timeouts could be used for good before ideas like clustering and distributed nodes are considered. Re:Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 2 years ago | (#34898964) Your question was answered in one of the posts you replied to nastily. It can also be found in the API documentation. Apparently you read as well as you code. Re:Tomcat is as rock solid as it gets (0) roman_mir (125474) | more than 2 years ago | (#34898992) Didn't I tell you to go fuck yourself in enough ways? Can't you take a hint? Re:Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 2 years ago | (#34899480) Re:Tomcat is as rock solid as it gets (0) roman_mir (125474) | more than 2 years ago | (#34899664):Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 2 years ago | (#34899856) Re:Tomcat is as rock solid as it gets (0) roman_mir (125474) | more than 2 years ago | (#34900040) You can go ahead, asswipe, use your tongue. Re:Tomcat is as rock solid as it gets (1) Pieroxy (222434) | more than 2 years ago | (#34899840):Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 3 years ago | (#34898136) i won't do your homework, anyway, you will spend all next week or maybe more fixing your crappy code. happy coding. Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34898234) ha ha ha ha, what a dumb ass, still not getting the question yet offering a useless, clueless comment here. Re:Tomcat is as rock solid as it gets (1) JonySuede (1908576) | more than 2 years ago | (#34899482):Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 2 years ago | (#34899626):Tomcat is as rock solid as it gets (1) JonySuede (1908576) | more than 3 years ago | (#34900550):Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34900704). :) working merrily on an AT&T-Canada project to process large files supplied by an old mainframe that had to be shipped to rebillers, I still remember the first time I had to use java, yes. But thank you. 2. Would you like to pay for the new tool? The transitioning to the new tool? Maintenance? Support? Training? 3. Are you sure that your tool would do a better job than what is currently done with that same data, regardless of how the data is structured? Good to be so sure. I wouldn't be so sure, but that's because I know what the data is, which parts of data are cached, what the structure is. Also, I have this tingling sensation in my left foot, can you tell me if a white pill would do more good for that than this green pill? 4. You are sure that you know everything about the requirements here, so your educated guess about my platform must be right, obviously. 5. I certainly know that I am not going to find any good answers on this forum, but that's not going to stop me from posting. I have an hour to waste. 6. I was there since java 1.0 7. I am sure that you have many many more wonderful suggestions and they are about to come. Re:Tomcat is as rock solid as it gets (1) JonySuede (1908576) | more than 3 years ago | (#34900998):Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34901336):Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 3 years ago | (#34901440) thank you for posting so many details about your current assignment, tomorrow morning your boss will get a copy of this thread in his inbox. Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34902654) wow, another genius. So I am waiting for the email. Re:Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 3 years ago | (#34901034) You are going to die miserable, angry, and alone. Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34901228) ha ha ha, well, anything to skip looking at your pathetic faces. Re:Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 3 years ago | (#34901326) Sorry to step into this as an AC, but anyway... If your solution already does what it is supposed to do, but twice a year things go pearshaped (due to heavy loads on your system by user queries) you should focus on that aspect of the probelm. First line of tackling this would to be to apply what you already seem to have in your arsenal - i.e. AT&T-Canada reference to process large files in a batch environment - match that with your problem - and you will see that your "realtime" (5 min, 10min, 15min and 45min references given by you are not "realtime" by any stretch of that definition) solution can be improved. I assume your users fill in some html page with query details, and press a button - staring at the screen until the result pops back into their web page. My suggestion to you would be to add "job control overview" into that very same page (at the top maybe) that would give the user an overview of their outstanding job/query requests (it is vital that the jobs be listed with enough of the query parameters, so that the user can actually see which job does what query). Adding checkboxes for deleting outstanding jobs and a "Remove job(s)" button. This suggestion implies that you restructure your existing solution so that at "convenient" places in your report engine you check the job queue for updates on the user requests. This gives you control over which users might be allowed to have many outstanding queries, depending on the load of the machines, importance of queries, importance of users, type of report for time of year (e.g. certain reports have priority, some users have priority ) etc. etc. etc. Wishing for a solution from Tomcat/java so that you can simply assassinate thread jobs from the outside is not going to be successful - because it has been deemed not a technically sound/clean way. On the other hand building job control support along the idea above: - still make your existing solution as "real time" as it is today - - does make your users "feel in control" as before (or more so to be honest) - gives you control so that you can throttle/prioritise requests based on userid/reportid etc., - does highlight the problem to your "people" (both the technical ones and the report users) It therefore takes you off the hook for "not delivering" as you now have a solution for both making the user understand, and in control of their part of the problem (each user could see the whole process pie, his part of the process pie, and maybe down to each of his processing jobs) It gives you measurable statistics for arguing for bigger hardware for running the whole thing (you will be able to see how big your spikes are, of delivered jobs vs. all job requests and all broken down on user/department/report type etc.) Sorry for posting AC - dont have account here - and no - I have not posted to your article/request before as an AC... dont like those unnecessary unhelpful comments Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34904222) does in seconds and in some cases in minutes, what they are used to be taking tens of minutes and hours, and in some cases not being able even to get that kind of data, because the solution is much bigger than just a silly small part that generates reports? I feel the arrogance in this place, oozing through the pores. The people in my systems, they are working, not playing, they don't care about 'feeling in control', they care about getting stuff done. Many of them are running reports one after the other, after the other just because they adjust some very small filter detail and need to see the result right away. Giving them one more thing to do - some job control queue, yeah, that'll make them happy. 17:44-1805 13:29-3 16:15-1536 10:32-4 17:09-11363 11:52-7703 14:38-1046 9:49-13427 17:43-856 13:28-13005 16:15-2141 10:32-17384 17:09-10 11:46-3127 14:38-11 9:49-13258 17:43-15421 13:28-13 16:15-5342 10:31-11 17:09-10922 11:46-3254 14:38-5 9:48-10 17:43-13283 13:28-2 16:15-1473 10:31-2 17:09-10 11:45-1662 14:38-779 9:48-4 17:43-13183 13:28-638 16:15-1476 10:31-10 17:08-11330 11:45-1716 14:38-775 9:48-16484 17:43-11 13:28-43 16:15-1541 10:31-3 17:08-10 11:44-7231 14:38-777 9:48-11 17:43-859 13:28-4 16:14-1541 10:30-866 17:07-11380 11:44-3501 14:38-1812 9:48-3 17:42-13182 13:27-58 16:14-1541 10:30-11 17:07-11 11:44-6752 14:38-6 9:47-10 17:42-11236 13:26-705 16:14-1541 10:30-3912 17:06-10909 11:41-4823 14:38-815 9:47-1668 17:42-11 13:26-71 16:14-1542 10:30-6 17:06-10 11:41-3155 14:37-12 9:47-4 17:42-795 13:25-3 16:13-1539 10:27-255 17:06-2 11:41-3178 14:37-6 9:47-11 17:42-13225 13:25-3 16:12-1542 10:27-6488 17:06-6 11:39-751 14:37-771 9:47-2813 17:42-11991 13:25-13804 16:12-1541 10:27-1401 17:06-11939 11:39-824 14:37-14232 9:47-3 17:42-13128 13:24-11 16:11-1542 10:27-844 17:06-2 11:37-3 14:37-12 9:46-10 17:42-10 13:24-5650 16:10-1542 10:27-562 17:06-10 11:37-4987 14:36-520 9:46-3 17:41-4581 13:24-4 16:10-1539 10:27-530 17:06-2 11:36-20852 14:36-4 9:46-7011 17:41-11378 13:23-10 16:09-1540 10:26-522 17:05-11367 11:36-11 14:36-1676 9:46-828 17:41-13104 13:23-3 16:09-1437 10:26-18443 17:05-11 11:35-4 14:35-807 9:46-10 17:41-10 13:22-47396 16:09-1438 10:26-1 17:04-10 11:35-2 14:35-11687 9:45-4 17:41-14738 13:21-2073 16:08-1437 10:26-10 17:04-11484 11:35-2 14:35-5 9:45-1554 17:41-12266 13:20-3 16:08-1434 10:26-2 17:04-10 11:35-2 14:35-4 9:45-2909 17:41-12834 13:20-3 16:08-2093 10:26-4 17:04-2 11:35-2 14:35-10 9:45-946 17:40-10 13:20-4 16:08-1435 10:26-4109 17:04-11057 11:34-2 14:34-795 9:45-10 17:40-848 13:20-3 16:07-1439 10:25-12 17:04-11 11:34-2 14:34-10 9:45-3 17:40-1636 13:20-3 16:07-1472 10:25-1395 17:03-5 11:33-48 14:34-4 9:45-5088 17:40-19147 13:20-2 16:06-1467 10:25-516 17:03-11655 11:33-11 14:33-799 9:45-7393 17:40-12187 13:20-3 16:06-1469 10:25-4 17:03-10 11:33-29 14:33-12729 9:44-776 17:40-11 13:20-4 16:06-1469 10:25-4604 17:02-11378 11:33-4 14:32-11 9:44-10 17:40-1464 13:20-3 16:06-1511 10:25-867 17:02-3 11:32-4 14:32-3 9:44-5 17:40-794 13:20-2 16:05-1490 10:25-15929 17:02-11 11:32-11 14:32-797 9:44-10 17:39-19530 13:19-10 16:05-8125 10:25-111 17:02-11066 11:32-4 14:32-12779 9:44-3 17:39-11817 13:19-3 16:05-1478 10:24-21164 17:02-10 11:32-4376 14:31-798 9:44-2 17:39-10 13:18-2651 16:03-1482 10:24-25281 17:01-11332 11:31-15634 14:31-11 9:43-10 17:38-986 13:18-14842 16:03-1483 10:24-15740 17:01-10 11:30-12 14:30-4 9:43-2 17:38-820 13:17-10 16:03-1480 10:24-525 17:00-11024 11:29-4 14:29-10 9:43-4 17:38-1470 13:17-3 16:02-1477 10:24-26473 17:00-10 11:27-1334 14:29-6 9:43-10 17:38-841 13:17-3399 16:02-1478 10:24-34673 17:00-11274 11:27-12705 14:29-850 9:42-4643 17:38-10861 13:15-71523 16:02-1483 10:23-38568 17:00-12 11:27-1518 14:28-11 9:42-5556 17:38-5 13:15-2732 16:01-1496 10:23-10 16:59-1735 11:27-821 14:27-4 9:42-4563 17:37-20344 13:13-293376 16:00-1443 10:23-4 16:59-1732 11:27-1396 14:27-787 9:41-2879 17:37-11894 13:10-17610 16:00-1445 10:23-11 16:59-1736 11:26-6 14:25-862 9:41-4 17:37-10 13:10-2720 15:59-1442 10:23-6 16:58-1812 11:25-11166 14:24-12 9:41-4574 17:37-842 13:10-11074 15:59-1448 10:22-18000 16:58-1673 11:24-5 14:24-790 9:41-10 17:37-12175 13:10-15343 15:58-1444 10:22-11 16:58-1810 11:24-12 14:24-4 9:41-3 17:37-11317 13:09-11 15:58-1455 10:22-4 16:57-1732 11:23-4235 14:24-61 9:41-4708 17:37-12951 13:09-281867 15:57-1556 10:21-12753 16:57-1735 11:23-4148 14:24-3 9:41-11 17:37-11 13:09-4 15:57-1559 10:21-13112 16:56-1729 11:23-4229 14:23-3 9:40-3 17:37-829 13:06-5 15:56-1556 10:21-11 16:56-1346 11:21-3917 14:23-4 9:40-10 17:36-11343 13:05-4 15:56-1549 10:21-557 16:56-1732 11:18-763 14:22-793 9:40-1 17:36-12987 13:05-11 15:53-3948 10:21-5130 16:56-1696 11:17-784 14:21-817 9:40-3 17:36-11 13:05-10 15:53-12949 10:21-3917 16:55-1691 11:17-1112 14:21-5231 9:39-2239 17:36-841 13:05-3 15:45-11 10:20-2000 16:55-1694 11:17-9006 14:20-1663 9:39-10 17:36-5 13:04-12 15:45-5 10:20-1144 16:55-1720 11:17-752 14:20-3 9:39-4 17:36-4 13:04-5 15:44-5 10:20-2039 16:55-1437 11:15-5 14:20-850 9:39-24722 17:36-14688 13:02-12 15:44-11 10:20-5338 16:54-11306 11:15-11 14:20-2881 9:39-11 17:36-11485 13:02-2 15:44-5 10:19-525 16:54-16 11:15-4 14:19-2142 9:39-5 17:36-12 13:01-5 15:42-5 10:19-1841 16:54-12799 11:15-765 14:19-1838 9:38-11 17:36-846 13:01-63457 15:42-5 10:18-3 16:54-10 11:15-12 14:19-2144 9:38-4 17:35-1736 13:01-7064 15:42-12 10:18-10 16:54-10 11:13-763 14:19-2154 9:38-1387 17:35-10941 12:59-2972 15:41-27 10:17-4 16:53-13257 11:13-771 14:18-5 9:38-832 17:35-847 12:58-11 15:40-774 10:17-4942 16:53-12 11:12-4 14:18-13495 9:38-5 17:35-11783 12:58-919 15:40-3144 10:17-486 16:53-8 11:12-3834 14:18-4 9:38-57 17:34-10 12:58-5 15:37-11027 10:17-10 16:52-11254 11:12-3910 14:18-11 9:38-578 17:34-19402 12:57-11 15:37-4 10:17-4 16:52-1550 11:11-1395 14:18-1626 9:37-5 17:34-12608 12:57-2 15:37-10772 10:17-12912 16:52-1655 11:11-1036 14:18-2237 9:37-12 17:34-10 12:56-4 15:36-2 10:17-11 16:52-14427 11:11-1307 14:18-813 9:37-11 17:34-865 12:55-7 15:36-2 10:16-4 16:52-1546 11:11-761 14:18-2209 9:37-8 17:33-11050 12:54-13 15:36-3 10:16-13045 16:52-1546 11:09-9435 14:18-1553 9:36-1306 17:33-10 12:54-5 15:36-2 10:16-11 16:51-1550 11:08-9628 14:17-2151 9:36-5 17:33-899 12:53-3365 15:34-10758 10:16-3 16:51-1601 11:08-11122 14:17-783 9:36-764 17:33-907 12:53-1559 15:32-4 10:15-10 16:51-1782 11:08-767 14:17-2152 9:36-11 17:33-11645 12:52-12769 15:32-4 10:15-3 16:51-1545 11:08-5 14:16-1941 9:36-4 17:32-17229 12:52-12 15:29-1064 10:15-10 16:50-1545 11:08-787 14:16-13735 9:36-11 17:32-10 12:52-4 15:25-1557 10:15-5 16:50-1548 11:07-24 14:16-786 9:36-9 17:32-10929 12:50-28 15:24-795 10:14-11 16:50-1552 11:07-5 14:15-30 9:36-793 17:32-6 12:50-2 15:24-778 10:14-3 16:50-1671 11:06-11845 14:14-798 9:36-379 17:32-10 12:49-5 15:23-1542 10:13-10 16:49-9077 11:06-11 14:14-20048 9:36-1064 17:32-5 12:47-11 15:23-1541 10:13-3 16:49-1547 11:05-4 14:13-16365 9:36-759 17:32-887 12:46-3 15:23-1543 10:11-10 16:49-3690 11:05-11 14:13-8043 9:36-779 17:32-5717 12:46-1640 15:22-783 10:11-3 16:48-1440 11:05-3 14:13-11 9:35-956 17:31-79 12:46-1385 15:22-1543 10:11-11 16:48-1444 11:02-1727 14:13-788 9:35-11900 17:31-11386 12:45-14868 15:22-1549 10:10-3 16:48-3972 11:02-5467 14:13-783 9:35-5 17:31-12196 12:45-10 15:21-1620 10:10-11 16:48-1439 11:02-774 14:13-778 9:35-4 17:31-11 12:45-942 15:21-1724 10:10-4 16:47-1439 11:01-8083 14:13-792 9:35-13 17:31-74 12:45-3 15:20-42284 10:09-10 16:47-1439 11:00-7987 14:12-7 9:35-1309 17:31-6 12:44-11 15:20-6325 10:09-3 16:47-1442 10:59-724 14:11-3 9:35-12 17:31-20088 12:44-4 15:20-1465 10:08-11 16:47-1438 10:59-1315 14:11-785 9:35-642 17:30-12813 12:43-10 15:20-1480 10:08-3 16:46-1480 10:58-762 14:11-3 9:34-5 17:30-10 12:43-3 15:19-656 10:08-10 16:46-1483 10:58-737 14:10-789 9:34-1341 17:30-16329 12:41-11 15:19-1436 10:08-609 16:46-1272 10:58-741 14:09-788 9:34-768 17:30-11 12:41-1331 15:19-664 10:08-12862 16:46-1478 10:57-720 14:08-785 9:34-10 17:30-3 12:41-761 15:19-3815 10:07-10 16:45-1318 10:56-719 14:08-3 9:33-9 17:30-6 12:40-4 15:19-1471 10:07-3 16:45-1476 10:55-766 14:08-3 9:33-1307 17:29-825 12:40-1319 15:18-1468 10:07-1329 16:45-5842 10:55-1308 14:08-795 9:33-766 17:29-829 12:40-772 15:18-1465 10:07-4 16:45-10227 10:55-703 14:06-869 9:33-712 17:29-4 12:40-780 15:17-1466 10:07-11 16:44-3682 10:55-699 14:05-792 9:33-5 17:29-827 12:39-1547 15:17-1466 10:06-4 16:44-3695 10:55-761 14:04-809 9:33-18205 17:29-821 12:39-923 15:16-2 10:06-11 16:44-3716 10:55-11659 14:03-11 9:33-523 17:29-858 12:39-12458 15:16-2 10:05-5 16:44-5063 10:54-11 14:03-4 9:32-4 17:29-1858 12:39-11 15:15-813 10:05-10 16:42-1482 10:54-4 14:02-12 9:32-11 17:28-1770 12:38-1384 15:15-1475 10:05-4 16:41-7054 10:54-1308 14:01-4 9:32-764 17:28-827 12:38-3 15:15-1477 10:05-11 16:40-4986 10:54-762 14:00-16008 9:32-6 17:28-4656 12:38-12268 15:14-1475 10:04-5 16:40-5437 10:54-755 13:59-6 9:32-1555 17:27-923 12:38-783 15:12-1471 10:04-10 16:39-8916 10:54-779 13:59-5 9:32-1039 17:27-11849 12:37-11 15:12-1478 10:04-4 16:38-2473 10:52-691 13:59-11 9:32-11 17:25-10963 12:37-5 15:12-964 10:04-1312 16:37-1471 10:52-11650 13:57-10732 9:32-4 17:25-11 12:36-12356 15:11-1441 10:04-772 16:37-2541 10:52-12 13:56-176 9:32-11647 17:25-10898 12:36-11 15:11-1153 10:04-10 16:37-1469 10:52-2 13:55-11 9:31-6 17:24-10 12:36-1346 15:11-1440 10:03-3 16:37-1469 10:51-5 13:55-5 9:31-2589 17:24-13168 12:36-4 15:10-1442 10:03-750 16:36-1467 10:51-753 13:54-12 9:31-4 17:24-11409 12:35-1171 15:09-1439 10:03-754 16:36-1593 10:50-11800 13:54-4 9:31-12 17:24-11 12:35-787 15:09-1445 10:03-11 16:36-2592 10:50-5 13:53-14229 9:31-13430 17:23-15786 12:35-12346 15:08-1555 10:03-3 16:36-1475 10:50-11 13:52-11 9:30-5 17:23-11230 12:35-12 15:07-1499 10:02-10 16:35-1445 10:50-4 13:52-5 9:30-10509 17:23-10 12:34-4 15:06-1546 10:02-4 16:35-1436 10:49-6 13:51-11 9:30-4 17:23-12918 12:34-786 15:06-1506 10:02-11 16:35-1483 10:49-684 13:51-3 9:30-11 17:22-11503 12:34-784 15:06-1551 10:01-4 16:34-1441 10:49-684 13:50-14730 9:30-6 17:22-10 12:34-1322 15:05-2 10:01-8315 16:34-144 10:49-9653 13:50-5 9:29-764 17:22-16714 12:33-780 15:05-4 10:01-11 16:34-1442 10:49-4 13:49-19 9:29-10 17:21-10929 12:33-783 15:05-2339 10:01-4 16:34-1617 10:49-11 13:49-6 9:29-4 17:21-10 12:33-1317 15:03-10787 10:01-8042 16:34-1439 10:48-11 13:48-11233 9:29-8260 17:21-11885 12:33-765 15:03-11115 10:00-10 16:34-941 10:48-7 13:48-10 9:29-5 17:21-11455 12:32-782 15:03-11098 10:00-3 16:33-985 10:47-712 13:47-3633 9:28-3 17:21-11 12:31-11 15:02-10777 10:00-11 16:33-1576 10:47-713 13:47-4 9:28-9 17:20-12495 12:31-5 15:02-2145 10:00-3 16:33-1043 10:47-672 13:47-793 9:28-829 17:20-11 12:30-39 15:01-1364 9:59-1304 16:33-474 10:47-14696 13:46-14403 9:28-5 17:20-3 12:30-855 15:01-1386 9:59-766 16:33-1068 10:47-11 13:46-10 9:28-198 17:20-17056 12:30-4 15:01-795 9:59-15127 16:33-1561 10:47-3845 13:46-6302 9:27-4 17:19-11318 12:30-826 15:00-11038 9:59-1301 16:33-4 10:46-5 13:46-10 9:27-11 17:19-10 12:29-11 14:59-1392 9:59-11 16:33-3 10:46-490 13:46-12064 9:27-35326 17:19-5 12:29-785 14:59-10841 9:59-765 16:33-1588 10:46-493 13:45-11 9:25-5 17:19-20727 12:29-4 14:59-788 9:59-4046 16:32-2 10:46-523 13:45-4 9:25-205 17:19-12481 12:29-10 14:50-5279619:58-3 16:32-1535 10:46-18615 13:44-791 9:25-1 17:19-14 12:29-3 14:50-782 9:58-15160 16:32-1533 10:45-521 13:44-11 9:25-2 17:18-6 12:28-12 14:49-11 9:58-10 16:32-11610 10:45-509 13:44-3 9:25-1 17:18-13730 12:28-3 14:49-3 9:58-4 16:32-1539 10:45-479 13:44-811 9:25-6 17:18-12 12:27-784 14:49-1509 9:58-15248 16:32-4 10:43-764 13:44-5 9:25-8 17:18-5 12:26-892 14:49-1568 9:58-1304 16:32-4 10:43-484 13:44-2366 9:25-13 17:18-18828 12:26-11819 14:49-6 9:58-11 16:31-1539 10:43-479 13:43-11 9:25-5 17:18-12817 12:26-11 14:48-6016 9:58-797 16:31-12144 10:42-786 13:43-2 9:25-836 17:18-10 12:26-4 14:48-12843 9:58-4 16:31-11764 10:41-5 13:43-4 9:24-11 17:17-14646 12:24-11 14:48-11 9:57-15454 16:30-2 10:41-788 13:43-1188999:24-3 17:17-11034 12:24-786 14:48-4 9:57-10 16:30-1562 10:41-785 13:42-6 9:23-4 17:17-11 12:24-4 14:48-1225 9:57-423 16:30-1556 10:41-11 13:41-6929 9:23-699 17:17-14362 12:23-11 14:47-1557 9:57-1821 16:29-13475 10:41-4 13:41-12 9:21-558 17:17-10 12:23-2 14:47-1557 9:57-3 16:29-5 10:40-780 13:41-5 9:20-6 17:16-4 12:23-785 14:47-102 9:57-15258 16:29-12308 10:40-9214 13:41-94888 9:20-11 17:16-15043 12:22-12 14:46-1559 9:57-11 16:29-15 10:39-478 13:41-3908 9:20-6 17:16-14217 12:21-5 14:46-97 9:56-5 16:29-26 10:39-481 13:41-1915 9:20-514 17:16-13 12:20-5 14:46-809 9:56-4033 16:28-1556 10:39-761 13:40-1913 9:20-3 17:16-5 12:20-13 14:46-6140 9:56-212 16:28-1437 10:39-784 13:40-2512 9:20-13 17:16-13125 12:20-5 14:45-3997 9:56-500 16:28-1435 10:39-781 13:40-1910 9:20-2 17:15-4 12:18-8044 14:45-1558 9:56-3938 16:27-1437 10:38-11861 13:40-585 9:19-8 17:15-12 12:17-7861 14:44-776 9:56-15918 16:27-1434 10:38-4023 13:40-2064 9:19-9 17:15-12 12:14-20953 14:44-1672 9:56-3967 16:27-1436 10:38-2648 13:39-2742 9:19-4 17:15-8 12:08-6191 14:44-210 9:55-3962 16:27-1440 10:37-20919 13:39-2138 9:18-311 17:15-10 12:06-2585 14:44-2349 9:55-1844 16:26-1449 10:37-10 13:38-540 9:18-11 17:15-5 12:05-11 14:43-12293 9:55-14983 16:25-1466 10:37-2 13:38-2075 9:18-5 17:15-12946 12:05-2 14:43-11 9:54-11 16:25-1470 10:37-765 13:38-2337 9:17-10 17:15-11670 12:05-954 14:43-4 9:54-3 16:25-1470 10:37-2 13:37-2289 9:17-7 17:15-11 12:04-2166 14:43-1449 9:54-18769 16:24-1472 10:37-2 13:37-7590 9:16-15 17:14-5 12:04-4 14:42-1454 9:54-13519 16:24-3600 10:37-4 13:37-2130 9:16-10 17:14-12746 12:03-12 14:42-12716 9:54-11 16:24-1468 10:36-2604 13:37-12468 9:07-8376 17:14-10923 12:03-5 14:42-5 9:53-6 16:23-25695 10:36-11 13:37-18047 9:05-2528 17:14-15 12:03-615 14:42-791 9:52-2 16:23-2 10:36-5 13:36-12 9:05-6987 17:13-11 12:00-649 14:41-11 9:52-2 16:21-6760 10:36-2574 13:36-734 8:47-10643 17:13-5 11:58-6190 14:41-4 9:52-1308 16:20-1644 10:36-867 13:36-5 8:46-11549 17:13-8 11:57-7 14:41-12666 9:52-833 16:19-1474 10:36-28690 13:36-736 8:45-3222 17:13-13118 11:57-7 14:41-6 9:51-220 16:19-1453 10:36-9784 13:36-11567 8:44-3220 17:13-11282 11:57-9 14:41-13 9:51-1318 16:19-1483 10:35-7899 13:35-11 8:42-3380 17:13-11 11:57-14854 14:41-4 9:51-973 16:18-1499 10:35-822 13:35-2945 8:41-17156 17:13-5 11:57-2404 14:41-547 9:51-12926 16:18-1472 10:35-20518 13:35-4 8:10-11756 17:11-11 11:57-8 14:40-1486 9:51-11 16:18-1469 10:35-793 13:34-306 8:01-11741 17:11-5 11:57-9 14:39-1945 9:50-5 16:17-1482 10:34-935 13:34-21141 7:04-45638 17:11-11424 11:57-6988 14:39-12640 9:50-15429 16:17-1454 10:34-19946 13:34-11 7:04-458 17:11-11 11:56-8 14:39-792 9:50-15016 16:17-1439 10:34-4018 13:34-3 7:03-331 17:10-11320 11:56-4457 14:39-836 9:50-12 16:17-1454 10:33-14827 13:33-11704 7:03-451 17:10-11 11:56-2 14:39-7 9:49-6 16:17-1446 10:33-783 13:33-11303 7:01-320 17:10-3 11:55-1740 14:39-789 9:49-11 16:16-1459 10:33-774 13:31-179 7:01-443 17:10-4 11:54-3380 14:38-12856 9:49-456 16:16-1443 10:33-3662 13:31-211 7:00-557 16:16-1446 10:33-3831 13:30-11102 3:53-482 16:16-1446 10:32-11 13:29-6401 3:49-197077 16:16-1492 10:32-791 13:29-558 3:49-478 Re:Tomcat is as rock solid as it gets (1) rubycodez (864176) | more than 3 years ago | (#34898300):Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34898370) different products under brands to be all displayed all at once for a very large time period. It's not an unusual thing and they want the result in front of them once it's compiled. Even these things return quickly enough (under 7 minutes normally). But the problem appears sometimes when they start a graph, don't wait for it to finish, change some parameters and restart the graph, etc. The server has 16G of ram, 15G are allocated to Tomcat, so there is no memory left to do these graphs somewhere else, they actually take a lot of space as well. I am constrained on the total memory, on CPU, on total time, basically on everything, those are hard constraints, it's not like I have a bunch of servers standing there, only one. So no middleware, nowhere even to run it, and the graph is not good if it comes too late. Again, the question is much simpler: is there a way to time out Executor threads in Tomcat or not? If not, then that's another reality and I have to work around it. It is possible to work around it - it's possible to stop execution of a task in the middle between SQL requests. It's possible to have do what I did sometime earlier - start an asynchronous thread for each web request, put the current thread to sleep for a small amount of time, once the time passes and first thread wakes up, check if the worker thread is done. This way I have the time out that I can enforce - I can kill the worker thread I started. My question is about Tomcat Executor threads - can they be killed from the app? Can they be configured to time out? I think my question is very simple, it doesn't require anything beyond those very simple specific answers, doesn't look like people understand the questions, everybody is trying to re-architecture everything or to be a smart ass (as the rest of the thread clearly shows.) Re:Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 2 years ago | (#34898768) Maybe instead of being an ass on /. you should just post your question to the tomcat users email list (without the attitude). Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 2 years ago | (#34898912) If you are the original asshole (all of you, ACs, look the same to me,) then go fuck yourself. You are the one who was being the ass. Re:Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 2 years ago | (#34899176) he is not the original AC, i'm the original AC. Re:Tomcat is as rock solid as it gets (0) Anonymous Coward | more than 2 years ago | (#34899200) Yeah, and I am Spartacus. Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 2 years ago | (#34899206) excellent, all of ACs can go fuck themselves. Re:Tomcat is as rock solid as it gets (1) rubycodez (864176) | more than 3 years ago | (#34900762) Large enterprise applications will generally have a multi-tier, not a two-tier, architecture for solving the problems you are having and many others. --- sincerely, Mr. 37 years software engineering experience Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34900824) of the month, 12 hours a day, 50 people use it to generate reports, that's all it's used for. 1-2 times a month (once in January and once in December at this point, from memory), the thing runs out of memory and CPU due to too many abandoned threads. As I explained in that link, near all requests finish under 7 minutes. Very few finish under 10, but that's the maximum, we shouldn't allow requests to run past 15, nobody is waiting past 10 anyway. There is nothing that middleware would provide, except slowing everybody down in 99% of cases, because it would rely on queues, it would synchronize the work, most of which doesn't need to be synchronized. The problem presented? Stopping some abandoned work threads from going past 15 minutes. The solution given on this site? Rewrite everything. The reason? apparently 37 years of experience. Too bad none of it is in business. Re:Tomcat is as rock solid as it gets (1) corsec67 (627446) | more than 3 years ago | (#34902172) application itself.) Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34906836) Yeah, I have one application working that way, it works, my basic question around here was whether Tomcat is doing anything about it or not. Re:Tomcat is as rock solid as it gets (1) BeforeCoffee (519489) | more than 2 years ago | (#34899708) ing tricks should be a regular weapon in your arsenal for this kind of problem, not worth the twizzling and inevitable bugs. Plus threading tricks don't scale up easily, and availability is probably the most important "-ility" we should be concerned with. Not to burst your bubble, try as you might, but IMHO brute force is usually not the answer. Even so, there IS a class of problems on the web though that take a moderate amount of time to compute, so they can still be online activities, but consume a lot of CPU. I have to guard against users holding Refresh down and clobbering my Servlet worker threads. One strategy I employ is to use a distributed in-memory cache to store response payloads. If the user hits refresh, I don't go to the backend twice if I notice that the same request has already been launched and is in-flight to the cache - I just attach that response to the that entrant in the cache and put it to sleep. If you're writing to a ServletOutputStream and you get a ClientAbortException, that means that you're writing to a socket that the user has closed (either they hit cancel or hit refresh). Do you ever see that? Perhaps there is something clever you can do with non-blocking extensions to decouple the servlet's socket handling from the backend processing to force TCP to tell you, while you're processing is still in-flight, that the user has canceled the request? I don't know if that's possible, because I've never tried it, but it might be something fun to experiment with. Dave Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 2 years ago | (#34899838) period of 2-3 years for around ten stores at the same time. So even a graph like that only takes maybe 10 minutes to create, but if they decide they want to change the parameters of the graph and abandon the first one to start generating a second, a third, and we have up to 50 users at the same time, all of a sudden there is a spike that's not normal for the app, and the tasks are still all going at the same time and the system runs out of memory and starts thrashing threads it looks like. Unfortunately cashing responses is not what I can, since every report can be generated with tens of parameters, for arbitrary sets of stores, for arbitrary periods of time, etc. Also they are aggregate in many cases, what I means is that sales are added together in the DB based on filters, so again, I can't pre-generate discrete pieces of it to just join them together for a request. So since this happens only on rare occasions I was thinking about timing out threads that are taking too long, that's an easy cop out, maybe just time out any task that goes on for more than 15 minutes, something like that. I used to do that earlier - the main request thread creates a worker thread and is put to sleep for a short period of time, it wakes up, checks if the worker is done, if it's not and if it's not the timeout yet, it goes to sleep again, if it's past time out, it kills the worker thread and returns with application timeout error. It works, but too bad Tomcat doesn't allow for that itself. Re:Tomcat is as rock solid as it gets (1) BeforeCoffee (519489) | more than 2 years ago | (#34900258) which case, you're in the same boat you're in now. Whether it's your users resubmitting or a refreshing, doesn't matter: you're making your users wait too long and they think their connection has stalled so that's why they're trying again. I just hope you understand my main point, let it sink in: most users have expectations that a response should return in 1-4 seconds any click (you can thank Google for that.) Response time is a dragon you have to slay. I get that users give some apps some slack, and displaying a throbber helps, but at the outside, you should never make a user wait more than 15 or 20 seconds for any response. But, if two or three concurrent requests can clog/kill your server, you're doing something wrong. Also, training your users that your webapp just "takes a long time" isn't a very effective approach - grumbly users will turn on you eventually. It's your job to keep users happy and keep your reputation spot-free.. Dave Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 2 years ago | (#34900462) just normal for them, they've been using the system for a year. In our case 70% of all requests come back in under 10 seconds, 25% last between 10 and 120 seconds, the last 4.99% take up to 7 minutes and then there are odd cases, when users set up a report that normally would return in about half an hour. Our system is used to generate very large reports, in fact before this system the users could wait for their reports for many hours, sometimes days, so they are spoiled now, but not 'Google' spoiled. Saying that "I should never" make a user wait for more than 20 seconds :) Have you ever dealt with retail? But, if two or three concurrent requests can clog/kill your server, you're doing something wrong. - as I said, we have 50 people working in various geographical locations and this problem comes up 1-2 times a month, a very specific set of circumstances. Also, training your users that your webapp just "takes a long time" isn't a very effective approach - grumbly users will turn on you eventually. It's your job to keep users happy and keep your reputation spot-free. - thank you for telling me my job, but you don't know my job :) I love /.. - I don't know why you are talking about shame, I have no reason to get anything new to solve this problem. It would be very silly and ineffective to buy more hardware and to transition all of the reports from one system to another, when in fact the entire system is created just to generate the reports. The only issue that exists in this system, which works fine for the 50 users we have every day of the month, 12 hours a day, is that 1 or 2 times a month too many reports are created at once that are dumped and re-created again by enough people that the VM itself starts running low on memory and CPU, and this can be easily fixed by killing a few threads that go overtime. Imagine you own a piece of equipment that does everything you need, much better than a piece of equipment you used to have for years, it's orders of magnitude faster, it's actually distributed geographically while the old systems weren't, it is really fast compared to previous systems, yet once in a while the system goes off-line simply due to too many requests that are not done yet, but that are really just run abandoned. Imagine that you are not 'Google', you are not going to be 'Google' either. It's a normal situation. To come to a business and say: because of this specific issue the entire system needs to be recreated in some other way, rewritten, new hardware needs to be bought, supported, etc. That's all great, but the question is simple: why? It's a silly suggestion, no offense, but it's obviously made by somebody not running a business. Re:Tomcat is as rock solid as it gets (1) BeforeCoffee (519489) | more than 3 years ago | (#34901402). If I produced a system that had performance characteristics like what you're describing, our clients would terminate our contracts.. And I can't stand it ... it's CACHING! Not cashing. Sorry, but I couldn't let that slide. Re:Tomcat is as rock solid as it gets (1) roman_mir (125474) | more than 3 years ago | (#34902718), again.. - yes, and I have worked in more categories than that as a contractor, but have you ever started your own business, created all of your own software from scratch? For ONE category ever? All business software in about one and a half years, including SCM that ties together stores with office with suppliers with manufacturers? Have you had to sell your solution all by yourself to more than one supplier, more than one manufacturer? Yeah. If I produced a system that had performance characteristics like what you're describing, our clients would terminate our contracts. - another snark remark. You have no idea what you are talking about. You have no idea how many systems are tied together to produce a solution. But you are willing to continue display your supposed 'superiority' regardless of the total ignorance about the counter-party. As I said, I love /.. - I am happy for you, but not surprised. All this experience, but nothing useful to say. Again, I see a lot of people here, proposing the same thing: -you have it all wrong, you need a rewrite, you need this, you need that. They are all bringing up their 'experience', totally oblivious about the counter-party, totally irrelevant to the question, sounding exactly as you just expressed it - cocky, and nothing useful. No surprise so the people in this field have such a poor rap. So much vitriol; so few answers (1) dereference (875531) | more than 2 years ago | (#34900376). Re:So much vitriol; so few answers (1) roman_mir (125474) | more than 3 years ago | (#34900586) by the application on its own. So you believe that Tomcat is not doing anything in particular to address this, I couldn't find anything in Tomcat that did it so far, so I am beginning to believe that's how it is. Thanks again. Have fun upgrading... (3, Funny) MrEricSir (398214) | more than 3 years ago | (#34895714) ...by editing thousands of lines of XML files by hand in various directories! Re:Have fun upgrading... (0) Anonymous Coward | more than 3 years ago | (#34895720) ...by editing thousands of lines of XML files by hand in various directories! The horror... Re:Have fun upgrading... (1) Anonymous Coward | more than 3 years ago | (#348958 web.xml? Even if you're upgrading from Tomcat 6 to Tomcat 7, you don't need to touch your web.xml at all. It works. web-fragment.xml is a feature, don't use it if you don't want to. PS: Read this:. It's a little "happy", and it's all fun and games for him. TC 7 and asynchrony isn't all that fun when you actually start using it for real production software like we've been doing. Re:Have fun upgrading... (2) whiteboy86 (1930018) | more than 3 years ago | (#34895938) Re:Have fun upgrading... (1) dshk (838175) | more than 3 years ago | (#34897262)! (1) Anonymous Coward | more than 3 years ago | (#34895724) Nice to see, that the acronym API was explained, but words like "servlet", "branch", "application server", "javaserver pages" were assumed to be understood by the reader. Person who does understand those, maybe knows what API stands for in this context. Re:API? Servlet! (1) The End Of Days (1243248) | more than 2 years ago | (#34898982) Anyone who doesn't get those terms also doesn't care about Tomcat. You can consider it the knowledge equivalent of the "you must be this tall" signs at amusement parks. Re:API? Servlet! (1) m50d (797211) | more than 3 years ago | (#34906268) OK, I can't keep up (1, Funny) symbolset (646467) | more than 3 years ago | (#34895762):OK, I can't keep up (1) Anonymous Coward | more than 3 years ago | (#34895876) Just swallow your pills and move along. Re:OK, I can't keep up (4, Informative) brunes69 (86786) | more than 3 years ago | (#34896146) Tomcat is managed and run by the ASF, has nothing to do with Oracle.... not sure what you are going on about here. Re:OK, I can't keep up (0) Anonymous Coward | more than 3 years ago | (#34898252) I think he's referring to the the fact that tomcat is written in java, whose specification is governed by Oracle - we can just pray that it will remain royalty free, or bought by Google. At least they wouldn't classify java as "middleware". Re:OK, I can't keep up (1) sproketboy (608031) | more than 2 years ago | (#34899338) The sad part is he gets modded up for saying something so stupid. Re:OK, I can't keep up (1) m50d (797211) | more than 3 years ago | (#34906320) :OK, I can't keep up (0) Anonymous Coward | more than 3 years ago | (#34896726)? Shut up, Stallman. Go play your recorder. Re:OK, I can't keep up (1) larry bagina (561269) | more than 2 years ago | (#34899494) (-1) Anonymous Coward | more than 3 years ago | (#34895914) thnx.. nice article.... My "Real Production" experience with TC 7 (1) Anonymous Coward | more than 3 years ago | (#34895932) hard and I've spent weeks trying to track down strange race conditions occurring because of that. I had dreams where I saw NullPointerExceptions at times. Maybe I'm just a bad programmer, or maybe weird shit happens with asynchrony. So be cautious before you gleefully jump into the puddle. The problems with Asynchrony nearly cost us our planned release date. It did, however, cost countless weekends and nights of mine. That said, the development of TC 7 has been really very rapid. There was a complete rewrite of how Asynchrony works from Tomcat 7.0.2 to 7.0.3. Not something usual for a minor version change. The server itself is fast and really stable (except for a bad memory leak they fixed in 7.0.3. That cost me four days trying to find the leak when I was with 7.0.2, before checking out the TC commit logs). The annotations support in Servlet 3.0 makes life WAY easier, no messing around with web.xml and stuff anymore to configure servlets. So my experience has been a mixed one. It probably will get better for new folks who're using TC 7 stable; a known tradeoff for using unstable beta server is that it's unstable :-) There have, however, been no problems whatsoever with using 7.0.5. So go ahead, try it out! Re:My "Real Production" experience with TC 7 (0) Anonymous Coward | more than 3 years ago | (#34897314) writing asynchronous software is harder than synchronous, harder to debug and most of the time the benefits over a synchronous solution is marginal, stay away from asynchronous code unless it is the only option to solve your problem. Tomcat still exists? (-1, Troll) Anonymous Coward | more than 3 years ago | (#34896084) I remember when 10 years ago the Apache team turned a perfectly good and working mod_jk (for apache) into a external server project (Tomcat) because the lousy performance of Java would taint their httpd's reputation in the not so competent media. There are certainly enough "professionals" in the IT media out there who would have compared junk like IIS (without extensions) with a bloated (all modules including mod_jk) apache install. I am amazed Tomcat still exists. Good luck! JBoss Version Features? (1) Doc Ruby (173196) | more than 3 years ago | (#34897136) like Eclipse to automatically generate and maintain the build.xml ). Key 7.0 feature (1) RegTooLate (1135209) | more than 3 years ago | (#34897828) TOMCAT? (1) Kuukai (865890) | more than 3 years ago | (#34898394) fucking pussies (0) Anonymous Coward | more than 3 years ago | (#34901442) Re:fucking pussies (1) m50d (797211) | more than 3 years ago | (#34906336)
http://beta.slashdot.org/story/146376
CC-MAIN-2014-15
en
refinedweb
This release includes lots of improvements including : - a simpler way for writing macros - support for nested functions - generic methods overloading works - support for CLR 3.5 extension methods (moreover boo extension methods) - compile-time conditionals through ConditionalAttribute and the new -define SYMBOL booc option - AttributeUsageAttribute is now supported and enforced - a better interactive interpreter (previously known as booish2) - warnings about unused private members, unused namespaces, unreachable code - new error messages, including suggestions for misspelled members or types - exception filters, exception fault handlers - for loop IDisposeable.Dispose integration Contributors to this release : Avishay Lavie, Bill Pierce, Cédric Vivier, Daniel Grunwald, Marcus Griep, and last but not least Rodrigo B. De Oliveira.
http://docs.codehaus.org/display/BOO/2008/02/
CC-MAIN-2014-15
en
refinedweb
How about one that understands your budget and is ready to work with you to identify your business’s problems and pain points; and only move forward if it’s a perfect fit. Developers who would help you: We solve your problems, not ours. Prototype & MVPs Development Whether you’re looking to realize an idea, or build an app from scratch. We will help make it a reality. Starting from $3000 - Learn More Update : NetTuts - has a guide on great Tweaks for Sublime Text 3 Flat is in, right? Lately I’ve been spending more time in the /rails and /ruby subreddits. Both are most definitely worth a subscribe if you are into Ruby development. I came across the flatland theme for Sublime Text 3 (SB) (my editor of choice from I came across it on HN probably 2 years back) It looked beautiful, don’t just take my word for it, see below. I had to have it! They were three ways to install all with varying levels of effort. I was in bed reading; so the one with the least effort caught my eye. The “Package Control option" - mhmm didn’t know that SB had that. It reminded me of the Application Center from Ubuntu. You just search for whatever resources/package you want and you can install it from right there in the editor. So I tried the command but I had to install it first, which was a breeze. Step 1 - Open your console in SB - “Control + ` " (under tilda) Step 2 - Enter the following to install to install Package Control on SB 3 import urllib.request,os,hashlib; h = ‘7183a2d3e96f11eeadd761d777e62404) Step 3 - Restart SB 3 Step 4 - Open Package Control using - “Shift + Command + P” Step 5 - Select Package Control:Install Package Step 6 - Search and select “Theme - Flatland” Step 7 - Enable in your User Preferences by adding theme”: “Flatland Dark.sublime-theme” Google defines regret as - feel sad, repentant, or disappointed over (something that has happened or been done, esp. a loss or missed opportunity). We all have regrets, some try to convince themselves that they don’t. Others say it was all a learning experience. I’m in latter camp. I believe that life comes with regrets; things that you should of done: make that move on that girl, put in that extra day of studying and even apply to that job. I won’t go into my teenager regrets, however I will share a few of my regrets over the past 3 years. Not listening to my mom and setting up a monthly automated savings from salary. I was hired at my first job before completing my final year, (I was working from home in Kingston whilst preparing for my final exams). I remember the first time I got paid. It seemed so much. I thought to myself “how can I spend all of this money?” (looking back - that seems so FUNNY and naive of me to say). Mom was quite adamant that I set up an automated deposit from my salary and I just overlooked it. I was always a saver and continued saving my money, but I always had access to it. So if anything came up, I would just use my savings. It wasn’t until 2 years after that I saw the brilliance in my mother’s words and decided to set one up. I could of saved so much money over that time period.. but hey such is life. Getting a Credit Card(CC), and not being my usual responsible self I learnt quite early that CC debt is killer, mostly from horror stories heard from family and friends. So I vowed to never get one. 10 years back when I started my Video Game importing business; I was made a supplement holder on my mother’s credit card. I only could spend what I deposited to the account. Not knowing about actual CC limits, I remember thinking how do people get into trouble with CC’s. Late 2012 when I heading to Orlando for my first half marathon with Citi Runners, I was amassing my vacation funds. I received some advice that I should take out a CC to help cover the vacation.. Wrong move. I decided to do this..I won’t go into details, but I’ve now learnt the ideal way to handle a CC is to pretty much treat it like a Debit Card. Only spend what you have and can pay back at the end of the month. I came across a statement recently, it goes something like this "If you need a Credit Card to buy a PS4; you can’t afford one. Don’t buy it" Other regrets come to mind; some work related, programming related, school related. However that’s life and we all live and we learn. N.B. - My two regrets are finance related. I’m big on personal finance right now. If you are - I recommend subscribing to the /personalfinance subreddit on Reddit. It is exceptional Using Environments variables have always been recommended during (Ruby/Rails) development, primarily so your important credentials aren’t pushed online for everyone else to see. I was embarrassed to say that I wasn’t quite sure how to set them up. In the past when I googled, I remember there being few ways, one involving your bash profile (I wrote a Ruby 2 years back about customizing your bash profile). However I never felt comfortable setting my environment variables that way. Then I discovered the Figaro gem, once installed it creates an application.yml file that easily allows your set your project specific environment variables. How to install 1.) Add to your Gemfile and run “bundle install” gem ‘figaro’ 2.) Run the generator - This creates the application.yml in your config folder rails generate figaro:install 3.) Add your variables to the file 4.) Pushing to Heroku (Optional) rake figaro:heroku That’s it, simple as Pie. For more info checkout the official docs
http://rorywalker.com/
CC-MAIN-2014-15
en
refinedweb
Main class not recognized Kevan Ryan Greenhorn Joined: Feb 08, 2010 Posts: 9 posted Apr 20, 2010 06:49:17 0 Hey Javaranchers, For some reason, the NetBeans IDE does not recognize the main class within my program. I thought that there was a way to set an object class as a 'main', but I'm not entirely sure. Is any requirement the main class needs to have, so that it is recognized? Thanks in advance! || Ulf Dittmer Marshal Joined: Mar 22, 2005 Posts: 39535 27 posted Apr 20, 2010 06:58:13 0 It must have a method with this signature: public static void main ( String [] args) Ping & DNS - updated with new look and Ping home screen widget Kevan Ryan Greenhorn Joined: Feb 08, 2010 Posts: 9 posted Apr 20, 2010 07:03:20 0 Yeah, that's what I figured too, but it still doesn't seem to recognize it. Here is my entire program posted, but I feel that the mistake I'm making is just a small thing that slip out of my mind. Thanks! package deckofcardsround2; import java.util.Random; public class Main{ public static void main (String[] args) { } } class Deck{ Deck deck = new Deck(); Card[] cards; public Deck() { cards = new Card[52]; int index = 0; for (int suit = 0; suit <= 3; suit++){ for (int rank = 1; rank <=13; rank++){ cards[index] = new Card (suit, rank); index++; } } } public Deck (int n){ cards = new Card[n]; } public void main (Deck deck){ deck = new Deck (); shuffleDeck (deck); Deck hand1 = subdeck (deck, 0, 4); Deck hand2 = subdeck (deck, 5, 9); Deck hand3 = subdeck (deck, 10, 14); Deck hand4 = subdeck (deck, 15, 19); System.out.println ("Hand1 has:" + printCard(cards[0]) + "," + printCard(cards[1]) + "," + printCard(cards[2]) + "," + printCard(cards[3]) + "," + printCard(cards[4]) ); System.out.println ("Hand2 has:" + printCard(cards[5]) + "," + printCard(cards[6]) + "," + printCard(cards[7]) + "," + printCard(cards[8]) + "," + printCard(cards[9])); System.out.println ("Hand2 has:" + printCard(cards[10]) + "," + printCard(cards[11]) + "," + printCard(cards[12]) + "," + printCard(cards[13]) + "," + printCard(cards[14])); System.out.println ("Hand2 has:" + printCard(cards[15]) + "," + printCard(cards[16]) + "," + printCard(cards[17]) + "," + printCard(cards[18]) + "," + printCard(cards[19])); } public void loopToOneThousand (Deck deck){ int counter = 0; int threeC = 0; int flushC = 0; while (counter < 1000){ deck = new Deck (); shuffleDeck (deck); deck = subdeck (deck, 0, 4); Boolean flagF = isFlush(deck); Boolean flagT = isThreeKind(deck); if (flagF == true && flagT == true){ threeC++; flushC++; } else if (flagF == true && flagT == false){ flushC++; } else if (flagF == false && flagT == true){ threeC++; } else{ } counter++; } System.out.println (threeC); System.out.println (flushC); //Using these numbers, I can then calculate the frequency of cards by just putting it over 1000 to get my experimental data. } public String printCard (Card a){ String[] suits = { "Clubs", "Diamonds", "Hearts", "Spades" }; String[] ranks = { "narf", "Ace", "2", "3","4","5","6","7","8","9", "10", "Jack", "Queen", "King" }; return (ranks[a.rank] + " of " + suits[a.suit]); } public boolean isFlush (Deck deck){ int suit = deck.cards[0].suit; Boolean flag = true; for (int i = 1; i<5; i++){ if (suit != deck.cards[i].suit){ flag = false; i = 5; } } return flag; } // public boolean isStraight (Deck deck){ // int rank = deck.cards[0].rank; // Boolean flag = true; // } public boolean isFourKind (Deck deck){ Boolean flag = false; int counter = 0; int rank = 0; for (int index = 0; index<5; index++){ rank = deck.cards[index].rank; counter = 0; for (int j = 0; index<5; j++){ if (j != index && deck.cards[j].rank == rank){ counter++; } } if (counter >= 4){ flag = true; index = 5; } } return flag; } public boolean isThreeKind (Deck deck){ Boolean flag = false; int counter = 0; int rank = 0; for (int index = 0; index<5; index++){ rank = deck.cards[index].rank; counter = 0; for (int j = 0; index<5; j++){ if (j != index && deck.cards[j].rank == rank){ counter++; } } if (counter >= 3){ flag = true; index = 5; } } return flag; } public boolean isPair (Deck deck){ Boolean flag = false; int counter = 0; int rank = 0; for (int index = 0; index<5; index++){ rank = deck.cards[index].rank; counter = 0; for (int j = 0; index<5; j++){ if (j != index && deck.cards[j].rank == rank){ counter++; } } if (counter >= 2){ flag = true; index = 5; } } return flag; } public Deck shuffleDeck (Deck deck){ int cardLength = deck.cards.length; for (int i=0; i<deck.cards.length; i++){ Card a = cards[i]; Card random = randomCard(cardLength); swapCards (a , random); } return deck; } public Card randomCard (int cardLength){ Random rand = new Random(); int n = cardLength; int random = rand.nextInt(n+1); return cards[n]; } public void swapCards (Card a, Card random){ int aRank = a.rank; int aSuit = a.suit; a.rank = random.rank; a.suit = random.suit; random.rank = aRank; random.suit = aSuit; } public static Deck subdeck (Deck deck, int low, int high){ Deck sub = new Deck (high-low+1); for (int i = 0; i<sub.cards.length; i++){ sub.cards[i] = deck.cards[low+i]; } return sub; } } class Card{ int suit, rank; public Card (){ //My old friend Mr. Constructor this.suit = 0; this.rank = 0; } public Card (int suit, int rank){ //And the constructor that takes parameters this.suit = suit; this.rank = rank; } } Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24166 30 I like... posted Apr 20, 2010 08:14:35 0 So how do you mean, it's not recognized? Just checking: the "main()" in the "Main" class above doesn't actually do anything at all, so if you ran it, it would immediately exit with no effect. You know that, right? [Jess in Action] [AskingGoodQuestions] Kevan Ryan Greenhorn Joined: Feb 08, 2010 Posts: 9 posted Apr 20, 2010 10:24:59 0 Yeah, I realize that right now the 'main' class does nothing. But I don't know where to go from here. Right now the NetBeans IDE won't run the program because it says that my package has no Main class. And even with the 'main class to nowhere', it still doesn't run. It's pretty frustrating, considering that my program is finished but this error is keeping me from actually running it. EDIT: It turns out I did end up making a silly mistake - I forgot to put the other object classes inside my main class. Thanks for all who helped! Colin Wright Greenhorn Joined: Apr 21, 2010 Posts: 8 posted Apr 21, 2010 10:28:10 0 It's not a main class you need it's a main method in your class, the class should have the same name as the file (but without the .java). public static void main (String[] args) { inside your class is needed. But you should have one public class per java file with the same name as the source file without the extension. you can also have private and inner classes in a file but if you want multiple public classes in a package you can create more source files. I agree. Here's the link: subject: Main class not recognized Similar Threads jarring submission simple main function doubt Java bean in servlet Setting the classpath jar questions All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/492231/vc/Main-class-recognized
CC-MAIN-2014-15
en
refinedweb
Please refer to the errata for this document, which may include some normative corrections. This document is also available in these non-normative formats: Diff from Previous Recommendation, PostScript version, and PDF version The English version of this specification is the only normative version. Non-normative translations may also be available. Copyright © 2007-2013 W3C® (MIT, ERCIM, Keio, Beihang), of the same goals. This document is a detailed syntax specification for RDFa, aimed at: For those looking for an introduction to the use of RDFa and some real-world examples, please consult the [RDFA-PRIMER]. First, if you are not familiar with either RDFa or RDF, and simply want to add RDFa to your documents, then you may find the RDFa Primer [RDFA-PRIMER] to be a better introduction. If you are already familiar with RDFa, and you want to examine the processing rules — perhaps to create an RDFa Processor — markup. This document only contains enough background on RDF to make the goals of RDFa more clear. version reflects changes made as a result of comments received since the Recommendation was first published. These changes are mostly editorial. In particular, there are minor editorial changes to the Processing Sequence section 7.5. This is a revision of RDFa Syntax 1.0 [RDFA-SYNTAX]. This document supersedes the previous Recommendation. There are a number of substantive differences between this version and its predecessor, including: There is a more thorough list of changes in Changes. A sample test harness is available. This set of tests is not intended to be exhaustive. Users may find the tests to be useful examples of RDFa usage. The implementation report used by the director to transition to Recommendation has been made available. There have been no formal objections to the publication of this document. This document was published by the RDF Web Applications Working Group as a Recommendation. If you wish to make comments regarding this document, please send them to public-rdfa/XML [RDF-SYNTAX] provides sufficient flexibility to represent all of the abstract concepts in RDF.-SCHEMA]. Moreover, organizations that generate a lot of content (e.g., news outlets) find it easier to embed the semantic data inline than to maintain it separately. In the past, many attributes were 'hard-wired' directly into the markup language to represent specific concepts. For example, in XHTML 1.1 [XHTML11] and HTML [HTML401] there is @cite; the attribute allows an author to add information to a document which is used to indicate the origin of a quote. However, these 'hard-wired' attributes make it difficult to define a generic process for extracting metadata from any document since an RDFa Processor. In most cases the values of those properties are the information that is already in an author's document. RDFa alleviates the pressure on markup language designers to anticipate all the structural requirements users of their language might have, by outlining a new syntax for RDF that relies only on attributes. By adhering to the concepts and rules in this specification, language designers can import RDFa into their environment with a minimum of hassle and be confident that semantic data will be extractable from their documents by conforming processors. This section is non-normative. The following examples are intended to help readers who are not familiar with RDFa to quickly get a sense of how it works. For a more thorough introduction, please read the RDFa Primer [RDFA-PRIMER]. In RDF, it is common for people to shorten vocabulary terms via abbreviated IRIs that use a 'prefix' and a 'reference'. This mechanism is explained in detail in the section titled Compact URI Expressions. The examples throughout this document assume that the following vocabulary prefixes have been defined: In some of the examples below we have used IRIs with fragment identifiers that are local to the document containing the RDFa fragment identifiers shown (e.g., ' about="#me"'). This idiom, which is also used in RDF/XML [RDF-SYNTAX-GRAMMAR] and other RDF serializations, gives a simple way to 'mint' new IRIs for entities described by RDFa and therefore contributes considerably to the expressive power of RDFa. The precise meaning of IRIs which include fragment identifiers when they appear in RDF graphs is given in Section 7 of [RDF-SYNTAX]. To ensure that such fragment identifiers can be interpreted correctly, media type registrations for markup languages that incorporate RDFa should directly or indirectly reference this specification. This section is non-normative. RDFa makes use of a number of commonly found attributes, as well as providing a few new ones. Attributes that already exist in widely deployed languages (e.g., HTML) have the same meaning they always did, although their syntax has been slightly modified in some cases. For example, in (X)HTML there is no clear way to add new . In (X)HTML, authors can include metadata and relationships concerning the current document by using the meta and link elements (in these examples, XHTML+RDFa [XHTML-RDFA] is used). For example, the author of the page along with the pages preceding and following the current page can be expressed using the link and meta elements: Is, rather than CURIEs. The previous example can also be written as follows: <html xmlns=""> <head> <title>Books by Marco Pierre White</title> </head> <body> I think White's book '<span about="urn:ISBN:0091808189" typeof="" property="" >Canteen Cuisine</span>' is well worth getting since although it's quite advanced stuff, he makes it pretty easy to follow. You might also like <span about="urn:ISBN:1596913614" typeof="" property="" >White's autobiography</span>. </body> </html> A simple way of defining a portion of a document. The previous section gave examples of typical markup in order to illustrate the structure of RDFa markup. RDFa is short for "RDF in Attributes". In order to author RDFa you do not need to understand RDF, although it would certainly help. However, if you are building a system that consumes the RDF output of a language that supports RDFa you will almost certainly need to understand RDF. This section introduces the basic concepts and terminology of RDF. For a more thorough explanation of RDF, please refer to the RDF Concepts document [RDF-SYNTAX] and the RDF Syntax Document [RDF-SYNTAX]. This section is non-normative. the German Empire. the German Empire. statement about. In all of these examples the subject is 'Albert'.. the German Empire, how could we know that the predicate 'was born in' has the same purpose as the predicate 'birthplace' that might exist in some other system? RDF solves this problem by replacing our vague terms with IRI references. IRIs are most commonly used to identify web pages, but RDF makes use of them as a way to provide unique identifiers for concepts. For example, we could identify the subject of all of our statements (the first part of each triple) by using the DBPedia [] IRI for Albert Einstein, instead of the ambiguous string 'Albert': <> has the name Albert Einstein. <> was born on March 14, 1879. <> was born in the German Empire. <> has a picture at. IRI references are also used to uniquely identify the objects in metadata statements (the third part of each triple). The picture of Einstein is already an IRI, but we could also use an IRI to uniquely identify the country 'German Empire'. At the same time we'll indicate that the name and date of birth really are literals (and not IRIs), by putting quotes around them: <>-2]. IRIs, and not any particular syntax. However, there are a number of mechanisms for expressing triples, such as RDF/XML [RDF-SYNTAX-GRAMMAR], IRIs to be abbreviated by using an IRI mapping, which can be used to express a compact IRI expression as follows: @prefix dbp: <> . @prefix foaf: <> . <> foaf:name "Albert Einstein" . <> dbp:birthPlace <> . Here 'dbp:' has been mapped to the IRI for DBPedia and 'foaf:' has been mapped to the IRI for the 'Friend of a Friend' vocabulary. Any IRI in Turtle could be abbreviated in this way. This means that we could also have used the same technique to abbreviate the identifier for Einstein, as well as the datatype indicator: the end there will always be a full IRI based on the document's location, but this abbreviation serves to make examples more compact. Note in particular that the whole technique of abbreviation is merely a way to make examples more compact, and the actual triples generated would always use the full IRIs. A collection of triples is called a graph. All of the triples that are defined by this specification are contained in the output graph by an RDFa Processor. For more information on graphs and other RDF concepts, see [RDF-SYNTAX].. A growing use of embedded metadata is to take fragments of markup are processed. Specifically, the processing of a fragment 'outside' of a complete document is undefined because RDFa processing is largely about context. Future versions of this or related specifications may do more to define this behavior. Developers of tools that process fragments, or authors of fragments for manual inclusion, should also bear in mind what will happen to their fragment once it is included in a complete document. They should: An, or @href, whilst objects that are literals are represented either with @content or the content of the element in question (with an optional datatype expressed using @datatype, and an optional language expressed using a Host Language-defined mechanism such as @xml:lang). graph and the processor graph are separate graphs and MUST NOT be stored in the same graph by the RDFa Processor. A conforming RDFa Processor MAY make available additional triples that have been generated using rules not described here, but these triples MUST NOT be made available in the output graph. (Whether these additional triples are made available in one or more additional RDF graphs is implementation-specific, and therefore not defined11-1] processors are permitted to 'normalize' white space in attribute values - see section 3.1.4). To ensure maximum consistency between processing environments, authors SHOULD remove any unnecessary white space in their plain and XML Literal content. A conforming RDFa Processor MUST examine the media type of a document it is processing to determine the document's Host Language. If the RDFa Processor is unable to determine the media type, or does not support the media type, the RDFa Processor MUST process the document as if it were media type application/xml. See XML+RDFa Document Conformance. A Host Language MAY specify additional announcement mechanisms. A conforming RDFa Processor MAY use additional mechanisms (e.g., the DOCTYPE, a file extension, the root element, an overriding user-defined parameter) to attempt to determine the Host Language if the media type is unavailable. These mechanisms are unspecified. Host Languages that incorporate RDFa must adhere to the following: For the avoidance of doubt, there is no requirement that attributes such as @href and @src are used in a conforming Host Language. Nor is there any requirement that all required attributes are incorporated into the content model of all elements. The working group recommends that Host Language designers ensure that the required attributes are incorporated into the content model of elements that are commonly used throughout the content model of the Host Language. <myml:myElement). When a Host Language does not use the attributes in 'no namespace', they MUST be referenced via the XHTML Namespace (). This specification does not define a stand-alone document type. The attributes herein are intended to be integrated into other host languages (e.g., HTML+RDFa or XHTML+RDFa). However, this specification does define processing rules for generic XML documents - that is, those documents delivered as media types text/xml or application/xml. Such documents must meet all of the following criteria: <myml:myElement). It is possible that an XML grammar will have native attributes that conflict with attributes in this specification. This could result in an RDFa processor generating unexpected triples. When an RDFa Processor processes an XML+RDFa document, it does so via the following initial context: describedby, license, and role), defined in. dc), defined in. This specification defines a number of attributes and the way in which the values of those attributes are to be interpreted when generating RDF triples. This section defines the attributes and the syntax of their values. CDATAstring, for supplying machine-readable content for a literal (a 'literal object', in RDF terminology); relor propertyattribute on the same element is to be added to the list for that predicate. The value of this attribute MUST be ignored. Presence of this attribute causes a list to be created if it does not already exist. NCName ':' ' '+ xsd:anyURI In all cases it is possible for these attributes to be used with no value (e.g., @datatype="") or with a value that evaluates to no value after evaluation using the rules for CURIE and IRI Processing (e.g., @datatype="[noprefix:foobar]"). The RDFa attributes play different roles in a semantically rich document. Briefly, those roles are: Many attributes accept a white space separated list of tokens. This specification defines white space as: whitespace ::= (#x20 | #x9 | #xD | #xA)+ When attributes accept a white space separated list of tokens, an RDFa Processor MUST ignore any leading or trailing white space. The working group is currently examining the productions for CURIE below in light of recent comments received from the RDF Working Group and members of the RDF Web Applications Working Group. It is possible that there will be minor changes to the production rules below in the near future, and that these changes will be backward incompatible. However, any such incompatibility will be limited to edge cases. The key component of RDF is the IRI, but these are usually long and unwieldy. RDFa therefore supports a mechanism by which IRIs can be abbreviated, called 'compact URI expressions' or simply, CURIEs. When expanded, the resulting IRI MUST be a syntactically valid IRI [RFC3987]. For a more detailed explanation see CURIE and IRI Processing. The lexical space of a CURIE is as defined in curie below. The value space is the set of IR. The RDFa 'default prefix' should not be confused with the 'default namespace' as defined in [XML-NAMES]. An RDFa Processor MUST NOT treat an XML-NAMES 'default namespace' declaration as if it were setting the 'default prefix'. The general syntax of a CURIE can be summarized as follows: prefix ::= NCName reference ::= ( ipath-absolute / ipath-rootless / ipath-empty ) [ "?" iquery ] [ "#" ifragment ] (as defined in [[!RFC3987]]) curie ::= [ [ prefix ] ':' ] reference safe_curie ::= '[' [ [ prefix ] ':' ] reference ']' The production safe_curie is not required, even in situations where an attribute value is permitted to be a CURIE or an IRI: An IRI that uses a scheme that is not an in-scope mapping cannot be confused with a CURIE. The concept of a safe_curie is retained for backward compatibility. It is possible to define a CURIE prefix mapping in such a way that it would overshadow a defined IRI scheme. For example, a document could map the prefix 'mailto' to ''. Then. In normal evaluation of CURIEs the following context information would need to be provided: :p); p); _:p). In RDFa these values are defined as follows: A CURIE is a representation of a full IRI. The rules for determining that IRI are: prefixand a reference, the IRI is obtained by taking the current default prefix mapping and concatenating it with the reference. If there is no current default prefix mapping, then this is not a valid CURIE and MUST be ignored. prefixand a reference, and if there is an in-scope mapping for prefix(when compared case-insensitively), then the IRI is created by using that mapping, and concatenating it with the reference. prefix, then the value is not a CURIE. See General Use of Terms in Attributes for the way items with no colon can be interpreted in some datatypes by RDFa Processors. This section is non-normative. In many cases, language designers have attempted to use QNames for an extension mechanism [XMLSCHEMA11-2]. IR map to an IRI that will reveal the meaning of that ISBN. As you can see, the definition of QNames and this (relatively common) use case are in conflict with one another. This specification addresses the problem by defining CURIEs. Syntactically, CURIEs are a superset of QNames. Note that this specification is targeted at language designers, not document authors. Any language designer considering the use of QNames as a way to represent IRIs or unique tokens should consider instead using CURIEs: This section looks at a generic set of processing rules for creating a set of triples that represent the structured data present in an). Evaluating. In some environments there will be little difference between starting at the root element of the document, and starting at the document object itself. It is defined this way because in some environments important information is present at the document object level which is not present on the root element. As processing continues, rules are applied which may generate triples, and may also change the evaluation context information that will then be used when processing descendant elements. This specification does not say anything about what should happen to the triples generated, or whether more triples might be generated during processing than are outlined here. However, to be conforming, an RDFa Processor MUST act as if at a minimum the rules in this section are applied, and a single RDF graph produced. As described in the RDFa Processor Conformance section, any additional triples generated MUST NOT appear in the. Statement chaining is an RDFa feature that allows the author to link RDF statements together while avoiding unnecessary repetitive markup. example we can see that an object resource ('German_Empire'), has become the subject for nested statements. This markup also illustrates the basic chaining pattern of 'A has a B has a C' (i.e., Einstein has a birth place of the German Empire, which has a long name of 'the German Empire')., at some point in his life, a residence both in the German Empire and in Switzerland: <> . Chaining can sometimes involve elements containing relatively minimal markup, for example showing only one resource, or only one predicate. Here the img element is used to carry a picture of Einstein: <div about=""> <div rel="foaf:depiction"> <img src="" /> </div> </div> When such minimal markup is used, any of the resource-related attributes could act as a subject or an object in the chaining: <div about=""> <div rel="dbp. Since RDFa is ultimately a means for transporting RDF, a key concept is the resource and its manifestation as an IRI. RDF deals with complete IRIs (not relative paths); when converting RDFa to triples, any relative IRIs MUST be resolved relative to the base IRI, using the algorithm defined in section 6.5 of RFC 3987 [RFC3987], Reference Resolution. The values of RDFa attributes that refer to IRIs use three different datatypes:. Note that it is possible for all values in an attribute to be ignored. When that happens, the attribute MUST be treated as if it were empty. For example, the full IRI for Albert Einstein on DBPedia is: This can be shortened by authors to make the information easier to manage, using a CURIE. The first step is for the author to create a prefix mapping that links a prefix to some leading segment of the IRI. In RDFa these mappings are expressed using @prefix: <div prefix="db:"> ... </div> Once the prefix has been established, an author can then use it to shorten an IRI as follows: <div prefix="db:"> <div about="db:resource/Albert_Einstein"> ... </div> </div> The author is free to split the IRI at any <div about="dbr:Albert_Einstein"> ... </div> <div about="dbr:Baruch_Spinoza"> ... </div> </div> CURIE prefix mappings are defined on the current element and its descendants. The inner-most mapping for a given prefix takes precedence. For example, the IRIs expressed by the following two CURIEs are different, despite the common prefix, because the prefix mappings are locally scoped: <div prefix="dbr:"> <div about="dbr:Albert_Einstein"> ... </div> </div> <div prefix="dbr:"> <div about="dbr:Albert_Einstein"> ... </div> </div> In general it is a bad practice to redefine prefix mappings within a document. In particular, while it is permitted, mapping a prefix to different values at different places within a document could lead to confusion. The working group recommends that document authors use the same prefix to map to the same vocabulary throughout a document. Many vocabularies have recommended prefix names. The working group recommends that these names are used whenever possible. There are a number of ways that attributes make use of CURIEs, and they need to be dealt with differently. These are: An empty attribute value (e.g., typeof='') is still a CURIE, and is processed as such. The rules for this processing are defined in Sequence. Specifically, however, an empty attribute value is never treated as a relative IRI by this specification. An example of an attribute that can contain a CURIEorIRI is @about. To express an IRI directly, an author might do this: <div about=""> ... </div> whilst to express the IRI above as a CURIE an author would do this: <div about="dbr:Albert_Einstein"> ... </div> The author could also use a safe CURIE, as follows: <div about="[dbr:Albert_Einstein]"> ... </div> Since non-CURIE values MUST be ignored, the following value in @about would not set a new subject, since @about does not permit the use of TERMs, and the CURIE has no prefix separator. <div about="[Albert_Einstein]"> ... </div> However, this markup would set a subject, since it is not a CURIE, but a valid relative IRI: <div about="Albert_Einstein"> ... </div> Note that several RDFa attributes are able to also take TERMS as their value. This is discussed in the next section. Some RDFa attributes have a datatype that permits a term to be referenced. RDFa defines the syntax of a term as: term ::= NCNameStartChar termChar* termChar ::= ( NameChar - ':' ) | '/' For the avoidance of doubt, this production means a 'term' in RDFa is an XML NCName that also permits slash as a non-leading character. When an RDFa attribute permits the use of a term, and the value being evaluated matches the production for term above, it is transformed to an IRI using the following logic: term. termmatches an item in the list of" href="" /> would each generate the following triple: <> <> <> . In RDFa, it is possible to establish relationships using various types of resource references, including bnodes. If a subject or object is defined using a CURIE, and that CURIE explicitly names a bnode, then a Conforming Processor MUST create the bnode when it is encountered during parsing. The RDFa Processor MUST also ensure that no bnode created automatically (e.g., as a result of chaining) has a name that collides with a bnode that . RDFa Processors use, internally, implementation-dependent identifiers for bnodes. When triples are retrieved, new bnode indentifiers are used, which usually bear no relation to the original identifiers. However, implementations do ensure that these generated bnode identifiers are consistent: each bnode will have its own identifier, all references to a particular bnode will use the same identifier, and different bnodes will have different identifiers. As a special case, _: is also a valid reference for one specific bnode. Processing would normally begin after the document to be parsed has been completely loaded. However, there is no requirement for this to be the case, and it is certainly possible to use a stream-based approach, such as SAX [SAX] to extract the RDFa information. However, if some approach other than the DOM traversal technique defined here is used, it is important to ensure that Host Language-specific processing rules are applied (e.g., XHTML+RDFa [XHTML-RDFA] indicates the base element can be used, and base will affect the interpretation of IRIs in meta or link elements even if those elements are before the base element in the stream). In this section the term 'resource' is used to mean 'IRI or bnode'. It is possible that this term will be replaced with some other, more formal term after consulting with other groups. Changing this term will in no way change this processing sequence. At the beginning of processing, an initial evaluation context is created, as follows: baseelement);. This specification defines processing rules for optional attributes that may not be present in all Host Languages (e.g., @href). If these attributes are not supported in the Host Language, then the corresponding processing rules are not relevant for that language. The-SYNTAX], which will have been obtained according to the section on CURIE and IRI Processing. The actual literal is either the value of @content (if present) or a string created by concatenating the value of all descendant text nodes, of the current element in turn. XMLLiteralin the vocabulary. The value of the XML literal is a string created by serializing to text, all nodes that are descendants of the current element, i.e., not including the element itself, and giving it a datatype of XMLLiteral in the vocabulary. The format of the resulting serialized content is as defined in Exclusive XML Canonicalization Version 1.0 [XML-EXC-C14N]. In order to maintain maximum portability of this literal, any children of the current node that are elements MUST have the current XML namespace declarations (if any) declared on the serialized element. Since the child element node could also declare new XML namespaces, the RDFa Processor MUST be careful to merge these together when generating the serialized element definition. For avoidance of doubt, any re-declarations on the child node MUST take precedence over declarations that were active on the current node. Additionally, if there is a value for current language then the value of the plain literal should include this language information, as described in [RDF-SYNTAX]. The actual literal is either the value of @content (if present) or a string created by concatenating the text content of each of the descendant elements of the current element in document order. The current property value is then used with each predicate as follows: The processing rules covered in the previous section are designed to extract as many triples as possible from a document. The RDFa Processor is designed to continue processing, even in the event of errors. For example, failing to resolve a prefix mapping or term would result in the RDFa Processor skipping the generation of a triple and continuing with document processing. There are cases where knowing each RDFa Processor warning or error would be beneficial to authors. The processor graph is designed as the mechanism to capture all informational, warning, and error messages as triples from the RDFa Processor. These status triples may be retrieved and used to aid RDFa authoring or automated error detection. If an RDFa Processor supports the generation of a processor graph, then it MUST generate a set of triples when the following processing issues occur: rdfa:ErrorMUST be generated when the document fails to be fully processed as a result of non-conformant Host Language markup. rdfa:WarningMUST be generated when a CURIE prefix fails to be resolved. rdfa:WarningMUST be generated when a Term fails to be resolved. Other implementation-specific rdfa:Info, rdfa:Warning, or rdfa:Error triples MAY be generated by the RDFa Processor. Accessing the processor graph may be accomplished in a variety of ways and is dependent on the type of RDFa Processor and access method that the developer is utilizing. SAX-based processors or processors that utilize function or method callbacks to report the generation of triples are classified as event-based RDFa Processors. For Event-based RDFa Processors, the software MUST allow the developer to register a function or callback that is called when a triple is generated for the processor graph. The callback MAY be the same as the one that is used for the output graph as long as it can be determined if a generated triple belongs in the processor graph or the output graph. A whole-graph RDFa Processor is defined as any RDFa Processor that processes the entire document and only provides the developer access to the triples after processing has completed. RDFa Processors that typically fall into this category express their output via a single call using RDF/XML, N3, TURTLE, or N-Triples notation. For whole-graph RDFa Processors, the software MUST allow the developer to specify if they would like to retrieve the output graph, the processor graph, or both graphs as a single, combined graph from the RDFa Processor. If the graph preference is not specified, the output graph MUST be returned. A web service RDFa Processor is defined as any RDFa Processor that is capable of processing a document by performing an HTTP GET, POST or similar action on an RDFa Processor IRI. For this class of RDFa Processor, the software MUST allow the caller to specify if they would like to retrieve the output graph, the processor graph, or both graphs as a single, combined graph from the web service. The rdfagraph query parameter MUST be used to specify the value. The allowable values are output, processor or both values, in any order, separated by a comma character. If the graph preference is not specified, the output graph MUST be returned. To ensure interoperability, a core hierarchy of classes is defined for the content of the processor graph. Separate errors or warnings are resources (typically blank nodes) of a specific type, with additional properties giving more details on the error condition or the warning. This specification defines only the top level classes and the ones referring to the error and warning conditions defined explicitly by this document. Other, implementation-specific subclasses may be defined by the RDFa Processor. The top level classes are rdfa:Error, rdfa:Warning, and rdfa:Info, defined as part of the RDFa Vocabulary. Furthermore, a single property is defined on those classes, namely rdfa:context, that provides an extra context for the error, e.g., http response, an XPath information, or simply the IRI to the RDFa resource. Usage of this property is optional, and more than one triple can be used with this predicate on the same subject. Finally, error and warning instances SHOULD use the dc:description and dc:date properties. dc:description should provide a short, human readable but implementation dependent description of the error. dc:date should give the time when the error was found and it is advised to be as precise as possible to allow the detection of, for example, possible network errors. The example below shows the triples that should be minimally present in the processor graph as a result of an error (the content of the literal for the dc:description predicate is implementation dependent): @prefix rdfa: <> . @prefix xsd: <> . @prefix dc: <> . [] a rdfa:DocumentError ; dc:description "The document could not be parsed due to parsing errors." ; dc:date "2010-06-30T13:40:23"^^xsd:dateTime . A slightly more elaborate example makes use of the rdfa:context property to provide further information, using external vocabularies to represent HTTP headers or XPointer information (note that a processor may not have these information in all cases, i.e., these rdfa:context information are not required): present. There are many examples in this section. The examples are all written using XHTML+RDFa. However, the explanations are relevant regardless of the Host Language. This section is non-normative. This section is non-normative. set, so in this section we use the idea of the current subject which may be either new subject or parent subject. This section is non-normative. When parsing begins, the current subject will be theI then it needs to be resolved against the current base value. To illustrate how this affects the statements, note in this markup how the properties inside the (X)HTML body element become part of a new calendar event object, rather than referring to the document as they do in the head of the document: <html xmlns="" prefix="cal:"> > </body> </html> With this markup an RDFa Processor will generate the following triples: <> foaf:primaryTopic <#bbq> . <> dc:creator "Jo" . <#bbq> rdf:type cal:Vevent . <#bbq> cal:summary "one last summer barbecue" . <#bbq> cal:dtstart "2015 triple: <photo1.jpg> dc:creator "Mark Birbeck" . @typeof defines typing triples. @typeof works differently to other ways of setting a predicate since the predicate is always rdf:type, which means that the processor only requires). For example, an author may wish to create markup for a person using the FOAF vocabulary, but without having a clear identifier for the item: <div typeof="foaf:Person"> <span property="foaf:name">Albert Einstein</span> <span property="foaf:givenName">Albert</span> </div> This markup would cause a bnode to be created which has a 'type' of foaf:Person, as well as name and given name properties: _:a rdf:type foaf:Person . _:a foaf:name "Albert Einstein" . _:a foaf:givenName "Albert" ." . _:aas being distinct from _:b. But by not exposing these values to any external software, it is possible to have complete control over the identifier, as well as preventing further statements being made about the item. This section is non-normative. As emphasized in the section on chaining, one of the main differences between . the German_Empire was added, the following markup was used: > In an earlier illustration the subject and object for the German Empire were connected situation, all statements that are 'contained' by the object resource representing the German Empire ">the German Empire</span> <span rel="dbp-owl:capital" resource="" /> </div> </div> Looking at the triples that an RDFa Processor would generate, we can see that we actually have two groups of statements; the first group is set to refer to the @about that contains them: <> foaf:name "Albert Einstein" . <> dbp:dateOfBirth "1879-03-14"^^xsd:date . <> dbp:birthPlace <> . while the second group refers to the @resource that contains them: <> dbp:conventionalLongName "the German Empire" . <> dbp-owl:capital <> . Note also that the same principle described here applies to @src and @href. There will be occasions when the author wants to connect markup could well be used: <div about="" rel="dbp:influenced _:a . _:a foaf:name "Albert Einstein" . _:a dbp:dateOfBirth "1879-03-14"^^xsd:date . Note that the div is superfluous, and an RDFa Processor will create the intermediate object even if the element is removed: <div about="" rel="dbp-owl-owl:influenced"> <span property="foaf:name">Albert Einstein</span> <span property="dbp:dateOfBirth" datatype="xsd:date">1879-03-14</span> </div> </div> From the point of view of the markup, this latter layout is to be preferred, since it draws attention to the 'hanging rel'. But from the point of view of an RDFa Processor, the German Empire: <div about=""> <span property="foaf:name">Albert Einstein</span> <span property="dbp:dateOfBirth" datatype="xsd:date">1879-03-14</span> <div rel="dbp:birthPlace" resource="" /> </div> and then a further statement that the 'long name' for this country is the German Empire: <span about="" property="dbp:conventionalLongName">the German Empire<> But it also allows authors to avoid unnecessary repetition and to 'normalize' out duplicate identifiers, in this case the one for the German Empire: RDFa Processor does have, but without an object: <> dbp:birthPlace ? . Then as processing continues, the RDFa Processor encounters the subject of the statement about the long name for the German Empire, and this is used in two ways. First it is used to complete the 'incomplete triple': <> dbp:birthPlace <> . and second it is used to generate its own triple: <> dbp:conventionalLongName "the German Empire" . Note that each occurrence of @about will complete any incomplete triples. For example, to mark up the fact that Albert Einstein had <> . These examples show how @about completes triples, but there are other situations that can have the same effect. For example, when @typeof creates a new bnode (as described above), that will be used to complete any 'incomplete triples'. To indicate that Spinoza influenced both Einstein and Schopenhauer, the following markup could be used: <div about=""> <div rel="dbpI resources and literals. A literal object can be set by @content or the inline text of element if @property to express a predicate. Note that the use of @content prohibits the inclusion of rich markup in your literal. If the inline content of an element accurately represents the object, then documents should rely upon that rather than duplicating that data using the @content. An IR. Alternatively, the @property can also be used to define an IRI resource; this requires the presence of a @resource, @href, or @src and the absence of @rel, @rev, @datatype, or @content. An object literal will be generated when @property is present and no resource attribute as shown above: <span about="" property="dc:creator" content="Mark Birbeck">John Doe</span> Literals can be given a data type using @datatype. This can be represented in RDFa as follows: <span property="cal:dtstart" content="2015-09-16T16:00:00-05:00" datatype="xsd:dateTime"> September 16th at 4pm </span>. The triple that this markup generates includes the datatype after the literal: <> cal:dtstart "2015-09-16T16:00:00-05:00"^^xsd:dateTime . XML documents cannot contain XML markup in their attributes, which means it is not possible to represent XML within @content (the following would cause an XML parser to generate an error): <head> <meta property="dc:title" content="E = mc<sup>2</sup>: The Most Urgent Problem of Our Time" /> </head> RDFa therefore supports the use of arbitrary markup . This requires that an IRI mapping for the prefix rdf has been defined. In the examples given here the sup element is actually part of the meaning of the literal, but there will be situations where the extra markup means nothing, and can therefore be ignored. In this situation omitting the @datatype attribute or specifying an empty @datatype value can be used to create a plain literal: <p>You searched for <strong>Einstein</strong>:</p> <p about=""> <span property="foaf:name" datatype="">Albert <strong>Einstein</strong></span> (b. March 14, 1879, d. April 18, 1955) was a German-born theoretical physicist. </p> Rendering of this page has highlighted the term the user searched for. Setting @datatype to nothing ensures that the data is interpreted as a plain literal, giving the following triple: <> foaf:name "Albert Einstein" . The value of this XML Literal is the exclusive canonicalization [XML-EXC-C14N] of the RDFa element's value. Most of the rules governing the processing of objects that are resources are to be found in the processing descriptions given above, since they are important for establishing the subject. This section aims to highlight general concepts, and anything that might have been missed. One or more" ; There is no way for an application to rely on the relative order of the two triples when, for example, querying a database containing these triples. For most of the applications and data sets this is not a problem, but, in some cases, the order is important. A typical case is publications: when a book or an article has several co-authors, the order of the authors may be important. RDF has a set of predefined predicates that have an agreed-upon semantic attribute signals is that the object generated on that element should be put on a list; the list is used with the common predicate and subject. Here is how the previous structure could look like in RDFa: <p prefix="bibo: dc: "<span property="dc:title">Semantic Annotation and Retrieval</span>" by <a inlist="" property="dc:creator" href="">Ben Adida</a>, <a inlist="" property="dc:creator" href="">Mark Birbeck</a>, and <a inlist="" property="dc:creator" href="">Ivan Herman</a>. <, XHTML+RDFa [XHTML-RDFA], and HTML+RDFa [HTML rdfa:vocabularypredicate. When an RDFa Initial Context is defined using an RDF serialization, it MUST use the vocabulary terms above to declare the components of the context. Caching of the relevant triples retrieved via this mechanism is RECOMMENDED. Embedding definitions for well known, stable RDFa Initial Contexts in the implementation is RECOMMENDED.icates sharing the same subject, an RDFa Processor MUST NOT create the associated mapping. Since RDFa is based on RDF, the semantics of RDF vocabularies can be used to gain more knowledge about data. Vocabularies, properties and classes are identified by IRIs, which enables them to be discoverable. RDF data published at the location of these IRIs can be retrieved, and descriptions of the properties and classes using specified semantics can be applied. RDFa Vocabulary Expansion is an optional processing step which may be added once the normal processing steps described in Processing Model are complete. Vocabulary expansion relies on a very small sub-set of OWL entailment [OWL2-OVERVIEW] to add triples to the. It can be very useful to make generalized data available for subsequent usage of RDFa-embedded data by expanding inferred statements entailed by these semantics. This provides for existing vocabularies that extend well-known vocabularies to have those properties added to the output graph automatically. For example, the namespace document of the Creative Commons vocabulary, i.e.,, defines cc:license to be a sub-property of dc:license. By using the <> . Other vocabularies, specifically intended to provide relations to multiple vocabularies, could also be defined by publishers, allowing use of terms in a single namespace which result in properties and/or classes from other primary vocabularies being imported. This benefits publishers as data is now more widely searchable and encourages the practice of referencing well-known vocabularies.: Each object IRI in the output graph that has a subject the current document (base) IRI and a predicate of rdfa:usesVocabulary is dereferenced. If the dereferencing yields the serialization of an RDF graph, that serialization is parsed and the resulting graph is merged with the vocabulary graph. (An RDFa processor capable of vocabulary expansion MUST accept an RDF graph serialized in RDFa, and SHOULD accept other standard serialization formats of RDF such as RDF/XML [RDF-SYNTAX-GRAMMAR] and Turtle [TURTLE].) Note that if, in the second step, a particular vocabulary is serialized in RDFa, that particular graph is not expected to undergo any vocabulary expansion on its own. Vocabulary expansion is then performed as follows: This section is non-normative. For the purpose of vocabulary processing, RDFa used a very restricted subset of the OWL vocabulary and is based on the RDF-Based Semantics of OWL [OWL2-RDF-BASED-SEMANTICS]. The RDFa Vocabulary Entailment uses the following terms: rdf:type rdfs:subClassOf rdfs:subPropertyOf owl:equivalentClass owl:equivalentProperty RDFa Vocabulary Entailment considers only the entailment on individuals (i.e., not on the relationships that can be deduced on the properties or the classes themselves.) While the formal definition of the RDFa, prp-eqp2, cax-sco, cax-eqc1, and cax-eqc2. The entailment described in this section is the minimum useful level for RDFa. Processors may, of course, choose to follow more powerful entailment regimes, e.g., include full RDFS [RDF-MT] or OWL [OWL2-OVERVIEW] entailments. Using those entailments applications may perform datatype validation by checking rdfs:range of a property, or use the advanced facilities offered by, e.g., OWL’s property chains to interlink vocabularies further. Conforming RDFa processors are not required to provide vocabulary expansion. If an RDFa processor provides vocabulary expansion, it MUST NOT be performed by default. Instead, the processor MUST provide an option, vocab_expansion, which, when used, instructs the RDFa processor to perform a vocabulary expansion before returning the output graph. Although vocabulary expansion is described in terms of a vocabulary graph and OWL 2 entailment rules, processors are free to use any process which obtains equivalent results. This section is non-normative. For RDFa Processors caching the relevant graphs retrieved via this mechanism is RECOMMENDED. Caching is usually based on HTTP response headers like expiration time, cache control, etc. For publishers of vocabularies, the IRI for the vocabularies SHOULD be dereferenceable, and should return an RDF graph with the vocabulary description. This vocabulary description SHOULD be available encoded in RDFa, and MAY also be available in other RDF serialization syntaxes (using content negotiation to choose among the different formats). If possible, vocabulary descriptions SHOULD include subproperty and subclass statements linking the vocabulary terms to other, well-known vocabularies. Finally, HTTP responses SHOULD include fields usable for cache control, e.g., expiration date. In order to facilitate the use of CURIEs in markup languages, this specification defines some additional datatypes in the XHTML datatype space (). Markup languages that want to import these definitions can find them in the "datatypes" file for their schema grammar: Specifically, the following datatypes are defined:. This specification introduces a number of new features, and extends the behavior of some features from the previous version. The following summary may be helpful to RDFa Processor developers, but is not meant to be comprehensive. While this specification strives to be as backward compatible as possible with [RDFA-SYNTAX], the changes above mean that there are some circumstances where it is possible for different RDF triples to be output for the same document when processed by an RDFa 1.0 processor vs. an RDFa 1.1 processor. In order to minimize these differences, a document author can do the following: XHTML+RDFa 1.0on the htmlelement. datatype='rdf:XMLLiteral'. datatype=''. When producing XHTML+RDFa 1.1 documents, it is possible to reduce the incompatibilities with RDFa 1.0 conforming processors by doing the following: XHTML+RDFa 1.0on the htmlelement. datatype='rdf:XMLLiteral'. datatype=''. This section is non-normative. At the time of publication, the active members of the RDF Web Applications Working Group were:
http://www.w3.org/TR/2013/REC-rdfa-core-20130822/
CC-MAIN-2014-15
en
refinedweb
AJAX and NetBeans - jMaki and ZK By jonasdias on Out 27, 2008 Hi everybody, Last Thursday I did a Tech Demo about developing AJAX web apps using the powerful NetBeans 6.1 at Federal University of Rio de Janeiro. I started talking about AJAX and how it improves usability and creates user friendly interfaces for the Web. Many people thinks that having ajax apps will overload the server and network just because you have a prettier interface. People have the idea that how more sophisticated is the app, heavier it will be. AJAX shows that it's not true. The division of concepts let the server deal only with the data requested from Javascripts at client side. The interface refreshing is done all by the browser. So, instead of sending full and big HTML blocks to the client each time he refresh or navigate to a different page, the server just send the information requested and the browser deal with page drawing dynamically. A great doubt about developing an AJAX app is to choose a framework. I suggested 2! First I talked about jMaki. () I did a demo showing the features of jMaki framework and its cool plugin for NetBeans. jMaki is a client-server framework that encapsulates many AJAX components from other frameworks in the form of a Widget. With the NetBeans plugin, it let you create your pages dragging and dropping widgets to your JSP. You only need to customize your page changing the CSS file as you like and choosing the best widgets to compose your web page. To start learning about jMaki, I suggest these links: NetBeans jMaki Introduction Tutorial jMaki Screencast To get a little more advanced learning, try this Another jMaki NetBeas tutorial with RESTful WS Later I started talking about ZK framework to build AJAX RIA using NetBeans. When a friend showed me ZK, I got really impressed. It's a complete framework and it let you develop your app programming only in Java. Yes! All the server side AND the client side scripts can be written in pure Java language (sweet Java)! Check some videos and examples at ZK Home Page I did a simple demo using ZK to build a Web Telephone List. WARNING: I don't use a database on this demo, but it's just to leave things faster. On a real world implementation , you should really consider that, specially because ZK has great tools to deal with Database and Persistence. If you understand Brazilian Protugese, check this blog post. Now let's start our demo. First, install NetBeans ZK plugin: There are 2 plugin options: 1. ZK Designer plugins for netbeans 6.1 2. REM for NetBeans 6.0 The first let you develop your ZUL web pages dragging and dropping the components into your xml, but the second one is the one with code completion and highlighting. Actually, in a very near future they will became only one plugin. Stay tuned for next release of REM plugin because it will came with the visual ZK designer bundled too! Download the plugin and install it on NetBeans through the Tools->Plugins window. For my Demo, I chose the first plugin that let me create my pages faster using drag n drop. So now, let's create a New Project. 1. Go to File -> New Project 2. Choose a new Web / Web Application 3. Choose a name for your project (like ZKDemo) 4. Choose your server (GlassFish) 5. Choose the framework you would like to use. Here you must choose ZK! The project opens with two index files: the index.jsp and index.zul. Delete the JSP and keep the ZUL. ZK support JSP includes, but its main ajax components are arranged to compose your web page on a XML file called ZUL file. The index.zul file comes with a simple window page like this: Try some drag and drop with the ZK components of the palette and then Run you project to check some ZK features!! Not all the ZK components are already on the palette! To see a full list of ZK Components, check: ZK Explorer! For our Telephone List project you will need this index.zul file. Copy it's content on your index.zul. You will also need the catalog.zul file that is included on our index.zul and the catalog.zs file that is used by the ZSCRIPT tag on the begining of the index.zul file. Just save catalog.zul and catalog.zs on your project page at your NetBeansProjects/YourProjectName/Web folder!. Then go to NetBeans and open the ZS file. Look! It's Java code! No Javascript in here :D! See that it does 2 imports: import ufrj.zkdemo.Catalog; import ufrj.zkdemo.Contact; This classes need to be created! But it's quite simple! Remember that if you create your classes on a diferent package, you will need to update the ZS file with yourpackage.Catalog and yourpackage.Contact imports! Download Catalog.java class. Download Contact.java class. And put them on your project's source code packages! Now Run your application and enjoy your veeeery simple telephone ajax catalog! See that, since we are using Lists and a Map to store the data, every time you Refresh your Web Page, you will have your telephone data ERASED!! Yes, this is not a very useful Tel catalog, but I encourage you to improve this project using a Database! Cheers, Jonas
https://blogs.oracle.com/jonasdias/category/Sun
CC-MAIN-2014-15
en
refinedweb
Client-side Eventing Example By Prashant Dighe on Dec 08, 2009 In the previous blog, Eventing in AJAX portlets, we saw an overview of client side eventing mechanism in OSPC and WebSpace, which works like a simple yet powerful extension of JSR-286 eventing.Lets look at an example to see how this all works. EventGeneratorAjaxPortlet in this example, takes a zip code through a text field and has a button "Get GeoCode". The onClick of the button calls processAction method on a javacript object called PortletObj which is prefixed with portlet's namespace. So first define a processAction javascript function just like one would have done while overriding the GenericPortlet's processAction method in the Portlet's Java class on server side. The processAction method, uses XMLPortletRequest which is initialized using the portlet's namespace. Portlet container automatically makes this available by adding the javascript file to the portlet during deployment. It is only required to include the script in the jsp from under <context-path>/js/ like this, In the example above, function names processAction and render are only a matter of following the JSR-286 pattern. The names of these functions could have been anything. More important is to note how "setEvent" is called on the XMLPortletRequest and an arbitrary event payload can be passed as a JSON object. It is possible to use qName without the uri. The new RFE makes it possible to call setEvent with a string as an event name instead of requiring QName.Show/Hide JSP The serveResource method of the portlet computes latitude and longitude for the zip code, using Yahoo geocode service and returns a JSON object from its serveResource method.Show/Hide serveResource EventConsumerAjaxPortlet in this example, consumes the event generated by EventGeneratorAjaxPortlet, by implementing a PortletObj.processEvent javascript function. This function is called in response to the XMLPortletRequest.setEvent call in the processAction of event generator. The wiring is taken care by Portlet Container at runtime as we will see later, but is done in portlet.xml by the developer as shown below. The portlets could have been packaged separately and may have separate portlet.xml. Also, since this is same as server-side eventing, it is possible to use tools like NetBeans PortalPack, and to use the eventing storyboard to wire the portlets visually instead of handcoding the xml. Typically, this is how a consumer is implemented. The namespaced PortletObj.processEvent function is mandatory and is the only forced naming convention required to be followed. It is not required to implement render function. It was possible to include all render code, instead of a call to render, in the callback function. But again, as a matter of convention and pattern, its much cleaner to implement render.Show/Hide JSP The processEvent function gets zip, latitude and longitude in the event payload. It in turn calls serveResouce of the portlet to get all the pictures recently uploaded in 10 mile radius of the given latitude/longitude, using Flickr's REST service.Show/Hide serveResource How it works Portlet Container reads portlet.xml during deployment and recognizes client-side events from the special marker class com.sun.portlet.ClientEvent. While rendering the portlet, it generates a namespaced javascript event queue, <portlet:namespace/>EventQueue, for the generator and populates it with a list of consumers on the page. When XMLPortletRequest.setEvent is called, it in turn calls setEvent on the generator's event queue. Generator's event queue loops through consumer list and calls <portlet:namespace/>PortletObj.processEvent on each. This is why it is mandatory to implement processEvent function to be able to consume events.
https://blogs.oracle.com/pdblog/tags/portal
CC-MAIN-2014-15
en
refinedweb
I have been putting my thoughts toward the elusive goal of making python as fast as some LISP dialects. As a result, I have a proposal that could drastically speed up the common cases of accessing global variables, while slightly slowing down less common accesses, with some rare pathologies that will eat time and memory. (Such cases would, of course, become even more rare should my proposal ever get implemented :-) Please comment on why this is a terrible/great idea, has already been thought of, etc. etc. The basic idea is to split namespace dictionaries into two parts, a simple C array of PyObject pointers, and a SymbolTable which maps names to indexes into the C array. A SymbolTable can grow, but never shrink. Deleting a name from the namespace is represented by storing a 0 into the C PyObject array. The C array is automatically enlarged as required when adding names to the dictionary (call it a SymbolDictionary). In the general case, access is slightly slower due to the extra index into the C array, and memory could be wasted due to deleted names, which still occupy a (0 value) slot in the array and SymbolTable. However, all python semantics are preserved. What does this gain us? For starters, when compiling a module, references to module globals can be compiled to a prebuilt SymbolTable in the bytecode. Outside of functions, references to module globals can use an indexed bytecode similar to local variables. References to globals in imported modules cannot safely be converted to indexed lookups because the module global might be replaced with some other module or object. Of course, this is not a big win unless functions and methods can use this optimization. The problem is, functions can be called with varying namespaces (because they are assigned to multiple modules, for instance). I propose that functions contain a dictionary or cache of namespaces used to call that function. The first time a function is called for a namespace not in the cache, its bytecodes are optimized for that namespace by converting global references to indexes in that namespaces C array. The optimized code is used and saved. For classes, the SymbolTable for class instances can be stored with the class, and only the C array kept with class instances. It might be helpful to provide an overflow dictionary for class instances to handle the case where instances are used as a container for values with arbitrary identifiers - otherwise the instance C arrays would have lots of 0 entries with that kind of usage. In normal python code, a method is only part of one class, and once it is optimized for that classes namespace, it can access instance variables almost as quickly as local vars. If it is assigned to multiple classes, the caching mechanism takes over. Single inheritance can be optimized by making derived class SymbolTables "extend" the first base class - i.e. keep the same indexes - and make the namespace caching smart enough to recognize this case and not reoptimize base class methods for the derived class. Of course, I can envision programming styles involving using functions with many different namespaces - all needing their own copy of optimized code. This can be mitigated by falling back to the unoptimized code when there are two many namespaces for a given function. If SymbolTables support mapping indexes to names, as well as names to indexes, then a function optimized for one namespace can be easily optimized for another, but looking up the name for each indexed global in the old namespace and looking up the index in the new namespace. (But name errors need to be delayed somehow.) I am late and need to go, but I just realized now to avoid the function namespace caching and still reap the benefits of this scheme! So until next time... --.
https://mail.python.org/pipermail/python-list/2001-November/104979.html
CC-MAIN-2014-15
en
refinedweb
You can subscribe to this list here. Showing 1 results of 1 The branch "master" has been updated in SBCL: via 4b25bb8e20bf3c1419a11b7d4cfefa23e4f3279b (commit) from e0aff99a73d836da0dad4602e5559595fbe5ba5c (commit) - Log ----------------------------------------------------------------- commit 4b25bb8e20bf3c1419a11b7d4cfefa23e4f3279b Author: Stas Boukarev <stassats@...> Date: Mon May 14 05:12:45 2012 +0400 Optimize copy-tree. copy-tree used to always call itself, even on linear lists, which caused stack exhaustion on long lists. Make it copy linear lists linearly, and recur only when necessary. This also makes it somewhat faster. Fixes lp#98926. --- NEWS | 2 ++ src/code/list.lisp | 15 ++++++++++++++- 2 files changed, 16 insertions(+), 1 deletions(-) diff --git a/NEWS b/NEWS index 51b81fa..a4f9d84 100644 --- a/NEWS +++ b/NEWS @@ -64,6 +64,8 @@ changes relative to sbcl-1.0.56: *) * documentation: ** improved docstrings: REPLACE (lp#965592) diff --git a/src/code/list.lisp b/src/code/list.lisp index d74cc1c..da6ca8e 100644 --- a/src/code/list.lisp +++ b/src/code/list.lisp @@ -445,8 +445,21 @@ #!+sb-doc "Recursively copy trees of conses." (if (consp object) - (cons (copy-tree (car object)) (copy-tree (cdr object))) + (let ((result (list (if (consp (car object)) + (copy-tree (car object)) + (car object))))) + (loop for last-cons = result then new-cons + for cdr = (cdr object) then (cdr cdr) + for car = (if (consp cdr) + (car cdr) + (return (setf (cdr last-cons) cdr))) + for new-cons = (list (if (consp car) + (copy-tree car) + car)) + do (setf (cdr last-cons) new-cons)) + result) object)) + ;;;; more commonly-used list functions ----------------------------------------------------------------------- hooks/post-receive -- SBCL
http://sourceforge.net/p/sbcl/mailman/sbcl-commits/?viewmonth=201205&viewday=14
CC-MAIN-2014-15
en
refinedweb
How to Add Hyper-V Hosts in a Disjointed Namespace a disjointed name space as managed Hyper-V hosts in Virtual Machine Manager. A disjointed name space occurs when the computer’s primary Domain Name System (DNS) suffix does not match the domain of which it is a member. For example, a disjointed namespace occurs when a computer that has the DNS name of HyperVHost03.contosocorp.com is in a domain that has the DNS name of contoso.com. For more information about disjointed namespaces, see Naming conventions in Active Directory for computers, domains, sites, and OUs. Prerequisites Before you begin this procedure, make sure that the following prerequisites are met: - The System Center Virtual Machine Manager service must be running as the local system account or a domain account that has permission to register a Service Principal Name (SPN) in Active Directory Domain Services (AD DS). - Before you can add a host cluster that is in a disjointed namespace to a VMM management server that is not in a disjointed namespace, you must add the Domain Name System (DNS) suffix for the host cluster to the TCP/IP connection settings on the VMM management server. - If you use Group Policy to configure Windows Remote Management (WinRM) settings, understand the following before you add a Hyper-V host to VMM management: - VMM supports only the configuration of WinRM Service settings through Group Policy, and only on hosts that are in a trusted Active Directory domain. Specifically, VMM supports the configuration of the Allow automatic configuration of listeners, the Turn On Compatibility HTTP Listener, and the Turn on Compatibility HTTPS Listener Group Policy settings. Configuration of the other WinRM Service policy settings is not supported. - If the Allow automatic configuration of listeners policy setting is enabled, it must be configured to allow messages from any IP address. To verify this, view the policy setting and make sure that the IPv4 filter and IPv6 filter (depending on whether you use IPv6) are set to “*”. - VMM does not support the configuration of WinRM Client settings through Group Policy. If you configure WinRM Client Group Policy settings, these policy settings may override client properties that VMM requires for the VMM agent to work correctly. To add a Hyper-V host in a disjointed namespace Follow the steps in the topic How to Add Trusted Hyper-V Hosts and Host Clusters in VMM. Note the following: - On the Credentials page, enter credentials for a valid domain account. - On the Discovery scope page, enter the fully qualified domain name (FQDN) of the host. Also, select the Skip AD verification check box. On the last page of the wizard, click Finish to add the host. When you use the Add Resource Wizard to add a computer that is in a disjointed namespace, VMM checks AD DS to see if an SPN exists. If it does not, VMM tries to create one. If the System Center Virtual Machine Manager service is running under an account that has permission to add an SPN, VMM adds the missing SPN automatically. Otherwise, host addition fails. If host addition fails, you must add the SPN manually. To add the SPN, at the command prompt, type the following command, where <FQDN> represents the disjointed namespace FQDN, and <NetBIOSName> is the NetBIOS name of the host: setspn -A HOST/<FQDN> <NetBIOSName> For example, setspn –A HOST/hypervhost03.contosocorp.com hypervhost03. See Also ---------- For additional resources, see Information and Support for System Center 2012. Tip: Use this query to find online documentation in the TechNet Library for System Center 2012. For instructions and examples, see Search the System Center 2012 Documentation Library. -----
http://technet.microsoft.com/en-us/library/gg610641.aspx
CC-MAIN-2014-15
en
refinedweb
t:PAR Element | par Object This topic documents a feature of HTML+TIME 2.0, which is obsolete as of Windows Internet Explorer 9. Defines a new timeline container in an HTML document for independently timed elements. Members Table The following table lists the members exposed by the par object.Attributes/PropertiesCollectionsEventsMethodsObjects Remarks All HTML descendants of this element either have independent or parallel timing. Use this element instead of the TIMECONTAINER attribute to create a time container without using an HTML element. All descendant elements, or time children, of this new time container inherit the time properties of their container. Unlike the time children of the t:SEQ element, the par descendants have no implicit timing relationships with each other, and their timelines might overlap. The t:PAR element effectively groups elements together so they can be easily modified as a single unit. The default value of begin for children of a par.:PAR element to apply a timeline to a group of HTML elements.<HTML XMLNS: <HEAD> <TITLE>PAR</TITLE> <STYLE> .time {behavior:url(#default#time2);} </STYLE> <?IMPORT namespace="t" implementation="#default#time2"> </HEAD> <BODY TOPMARGIN=0 LEFTMARGIN=0 <t:PAR <H3>Paragraph 1</H3> <P>This is paragraph number one. It appears for ten seconds immediately after the page is loaded.</P> <SPAN CLASS="time" BEGIN="5"> <H3>Paragraph 2</H3> <P>This is paragraph number two. It appears five seconds after the page is loaded, and remains displayed until its parent element's timeline ends at ten seconds.</P> </SPAN> </t:PAR> </BODY> </HTML> Code example: See Also
http://technet.microsoft.com/en-us/library/ms533599(v=vs.85).aspx
CC-MAIN-2014-15
en
refinedweb
random resultThanks a lot! Always got answers so quick here! random resulthow come without using rand(). My program still generate random result. I was trying to make a matr... checker program errorbut x=0, y=0 will mess up the game [output] 7 - b - b - b - b 6 b - b - b - b - 5 - b - b - b - b... checker program errorIf I choose some ok function like x=0, y=2, It is fine [output] 7 - b - b - b - b 6 b - b - b - b ... checker program errorI think the problem is in here: [code] #include <iostream> using namespace std; #ifndef GO_CXX #de... This user does not accept Private Messages
http://www.cplusplus.com/user/holywingz/
CC-MAIN-2014-15
en
refinedweb
Using NHibernate with ServiceStack A few people have asked me how they can use ServiceStack with other persistence technologies like RavenDB and NHibernate, or if you really must... EntityFramework... Rather than ServiceStack.OrmLite or ServiceStack.Redis like many of the examples in SS show. Note: This isn't about best practice on using NHibernate or ServiceStack, my services are named just to quickly get something up and running. Note 2: This blog post may not be completely inline with the GitHub repository since I will be updating the repository to include some additional samples in the future. I've created a small sample project which can be found here on GitHub, I plan to flesh it out a little bit from the date this is posted, but it's purely there as a sample. No Repositories! Utilizing other persistence frameworks is really easy with ServiceStack, the thing with ServiceStack Services is that they are doing something concise, it's a single service implementation, so there's really no need to use repositories for them, you gain absolutely no benefit from using repositories other than adding an additional layer of abstraction and complexity to your services. That doesn't mean you don't have to use repositories, if you REALLY want to use them. You can, and I'll add a sample of using a repository with the RestService implementation. Setting Up SS Assuming you've setup NHibernate and your mappings, all we need to do is setup ServiceStack. First things first! Install ServiceStack. PM> Install-Package ServiceStack Note: You can use ServiceStack.Host.* (replace the * with MVC or ASPNET) which will automatically configure the web.config. Personally I prefer to do it myself. Using the Package Manager (or GUI) install Service Stack into your project. This should add the required code to the web.config, if not you can double check your web config is setup like shown here. Next in the global.asax we want to create an AppHost and configure it: public class Global : HttpApplication { public class SampleServiceAppHost : AppHostBase { private readonly IContainerAdapter _containerAdapter; public SampleServiceAppHost(ISessionFactory sessionFactory) : base("Service Stack with Fluent NHibernate Sample", typeof(ProductFindService).Assembly) { base.Container.Register<ISessionFactory>(sessionFactory); } public override void Configure(Funq.Container container) { container.Adapter = _containerAdapter; } } void Application_Start(object sender, EventArgs e) { var factory = new SessionFactoryManager().CreateSessionFactory(); (new SampleServiceAppHost(factory)).Init(); } } In our service host we take in our NHibernate Session Factory, and we wire it up to Funq (the default IoC container SS uses), this is so when the Service is resolved, it gets the SessionFactory to create a Session. If you were using RavenDB, this is where you would inject your DocumentStore, and if you were using EntityFramework, you would inject that DataContext thing it uses. So when the application is started, we create the SessionFactory, and create an instance of the AppHost, passing in the SessionFactory. Services Now that SS is setup, we need to implement our services. This part is just as easy. In some of the SS samples such as this one, dependencies are injected via the properties. Personally I don't like this, because the service is absolutely dependent on that dependency. It cannot function without it, so in my opinion this dependency should be done via the constructor. I'm not going to go over EVERY service implementation, I'm only going to show Insert and Select By Id. Insert Besides the model defined for NHibernate, we need our Service Request Model, and we need our implementation. public class ProductInsert { public string Name { get; set; } public string Description { get; set; } } This is our Service Request Model, really plain and simple DTO used for doing a Product Insert. public class ProductInsertService : ServiceBase<ProductInsert> { public ISessionFactory NHSessionFactory { get; set; } public ProductInsertService(ISessionFactory sessionFactory) { NHSessionFactory = sessionFactory; } protected override object Run(ProductInsert request) { using (var session = NHSessionFactory.OpenSession()) using (var tx = session.BeginTransaction()) { var result = request.TranslateTo<Product>(); session.Save(result); tx.Commit(); } return null; } } This is our Service Implementation, as you can see we have a constructor which takes in the ISessionFactory, this is our NHibernate ISessionFactory, you need to be careful here since ServiceStack has it's own ISessionFactory: We need to make sure this is the NHibernate one: You can of course, inject your Unit Of Work, or the NHibernate Session, or what ever you like, if you're using Repositories you may opt to inject an instance of your desired repository such as IProductRepository. For this example I'm using NHibernates SessionFactory so that the service is responsible for opening a Session and Transaction. So that's all there is to it, inject your SessionFactory, or your desired persistence implementation, and do your thing. The cool thing about ServiceStack is it has built in functionality to do mapping. var result = request.TranslateTo<Product>(); TranslateTo<T> is functionality built into ServiceStack for mapping 1 object to another. If you want to update an object, ServiceStack handles that too using PopulateWith. var existing = session.Get<Product>(request.Id) .PopulateWith(request); No need to introduce anything like AutoMapper. Select By Id This service I've called ProductFindService, in the future there will be a ProductSearchService to show selection by criteria. Like the Insert service, I've defined a simple model which only has an Id property for selecting the product out: public class ProductFind { public Guid Id { get; set; } } In addition to the Request Model I have a Response Model: public class ProductFindResponse : IHasResponseStatus { public class Product { public Guid Id { get; set; } public string Name { get; set; } public string Description { get; set; } } public Product Result { get; set; } public ResponseStatus ResponseStatus { get; set; } } This has a nested Product class which defines all the properties of a Product. The outer Response object has a Result and Status. (status is for Exception/Error information) As you can see the Response is the same name as the Request, with Response appended to the end, so that SS can create this object itself. When I setup these Request/Response objects in Visual Studio, I use an extension called NestIn which allows me to select the two classes and nest the Response under the Request like so: The service is similar to the insert, we inject the SessionFactory, open a Session, no transaction (unless you want to use 2nd level caching, but that's beyond this post) and select out the Product: public class ProductFindService : ServiceBase<ProductFind> { public ISessionFactory NHSessionFactory { get; set; } public ProductFindService(ISessionFactory sessionFactory) { NHSessionFactory = sessionFactory; } protected override object Run(ProductFind request) { using (var session = NHSessionFactory.OpenSession()) { var result = session.Load<Models.Product>(request.Id); return new ProductFindResponse { Result = result.TranslateTo<ProductFindResponse.Product>() }; } } } Lastly we return a new Response object, and translate the result from NHibernate to the Response result. Easy peasy :) Swapping out NHibernate for anything else like RavenDB or MongoDB is super easy. I hope this helps those few people who have asked me how to use other persistence frameworks get up and running. I find it amazing how little code you're required to write to get a ServiceStack Service up and running.comments powered by Disqus
http://www.philliphaydon.com/2012/06/using-nhibernate-with-servicestack/
CC-MAIN-2014-15
en
refinedweb
Say that a have: # file test.py a=7 At the prompt: import test dir() I would like to see the variables created in the test namespace. However, variable "a" does not appear in the list, only "test". Since I know that var "a" is reachable from the prompt by means of test.a, how can I list this sort of variables? Vicente Soler
https://mail.python.org/pipermail/python-list/2009-October/555808.html
CC-MAIN-2014-15
en
refinedweb
This tutorial introduces attribute variables. It builds on the tutorial about a minimal shader and the RGB-cube tutorial about varying variables. This tutorial also introduces the main technique to debug shaders in Blender: false-color images, i.e. a value is visualized by setting one of the components of the fragment color to it. Then the intensity of that color component in the resulting image allows you to make conclusions about the value in the shader. This might appear to be a very primitive debugging technique because it is a very primitive debugging technique. Unfortunately, there is no alternative in Blender. Where Does the Vertex Data Come from?Edit In the RGB-cube tutorial you have seen how the fragment shader gets its data from the vertex shader by means of varying variables. The question here is: where does the vertex shader get its data from? Within Blender this data is specified for each selected object by the settings in the Properties window, in particular the settings in the Object Data tab, Material tab, and Textures tab. All the data of the mesh of the object is sentEdit In Blender, // (usually normalized; also in object coordinates) Blender: import bge cont = bge.logic.getCurrentController() VertexShader = """ varying vec4 color; attribute vec4 tangent; // this attribute is specific to Blender // and has to be defined explicitly void main() { color = gl_MultiTexCoord0; // set the varying to this attribute // other possibilities to play with: // color = gl_Vertex; // color = gl_Color; // color = vec4(gl_Normal, 1.0); // color = gl_MultiTexCoord0; // color = gl_MultiTexCoord1; // color = tangent; gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } """ FragmentShader = """ varying vec4 color; void main() { gl_FragColor = color; } """ mesh = cont.owner.meshes[0] for mat in mesh.materials: shader = mat.getShader() if shader != None: if not shader.isValid(): shader.setSource(VertexShader, FragmentShader, 1) shader.setAttrib(bge.logic.SHD_TANGENT) Note the line shader.setAttrib(bge.logic.SHD_TANGENT) in the Python script which tells Blender to provide the shader with tangent attributes. However, Blender will only provide several of the attributes for certain settings in the Properties window, in particular something should be specified as UV Maps in the Object Data tab (just click on the “+” button), a material should be defined in the Material tab, and a texture (e.g. any image) should be defined in the Textures tab. In the RGB-cube tutorial we have already seen, how to visualize the gl_Vertex coordinates by setting the fragment color to those values. In this example, the fragment color is set to gl_MultiTexCoord0 such that we can see what kind of texture coordinates Blender provides for certain settings in the Properties window. How to Interpret False-Color ImagesEdit When trying to understand the information in a false-color image, it is important to focus on one color component only. For example, if the attribute gl_MultiTexCoord0 is written to the fragment color then the red component of the fragment visualizes the x coordinate of gl_MultiTexCoord0, i.e. it doesn't matter whether the output color is maximum pure red or maximum yellow or maximum magenta, in all cases the red component is 1. On the other hand, it also doesn't matter for the red component whether the color is blue or green or cyan of any intensity because the red component is 0 in all cases. If you have never learned to focus solely on one color components, this is probably quite challenging; therefore, you might consider to look only at one color component at a time. For example by using this line to set the varying in the vertex shader: color = vec4(gl_MultiTexCoord0.x, 0.0, 0.0, 1.0); This sets the red component of the varying variable to the x component of gl_MultiTexCoord0 but sets the green and blue components to 0 (and the alpha or opacity component to 1 but that doesn't matter in this shader). The specific texture coordinates that Blender sends to the vertex shader depend on the UV Maps that is specified in the Object Data tab and the Mapping that is specified in the Textures tab._Normal is a three-dimensional vector. Black corresponds then to the coordinate -1 and full intensity of one component to the coordinate +1. If the value that you want to visualize is in another range than 0 to 1 or -1 to +1, you have to map it to the range from 0 to 1, which is the range of color components. If you don't know which values to expect, you just have to experiment. What helps here is that if you specify color components outside of the range 0 to 1, they are automatically clamped to this range. I.e., values less than 0 are set to 0 and values greater than 1 are set to 1. Thus, when the color component is 0 or 1 you know at least that the value is less or greater than what you assumed and then you can adapt the mapping iteratively until the color component is between 0 and 1. Debugging PracticeEdit In order to practice the debugging of shaders, this section includes some lines that produce black colors when the assignment to color in the vertex shader is replaced by each of them. Your task is to figure out for each line, why the result is black. To this end, you should try to visualize any value that you are not absolutely sure about and map the values less than 0 or greater than 1 to other ranges such that the values are visible and you have at least an idea in which range they are. Note that most of the functions and operators are documented in “Vector and Matrix Operations”. color = gl_MultiTexCoord0 - vec4(1.5, 2.3, 1.1, 0.0); color = vec4(1.0 - gl_MultiTexCoord0.w); color = gl_MultiTexCoord0 / tan(0.0); The following lines work only with spheres and, gl_Vertex), 1.0); Does the function radians() always return black? What's that good for? color = radians(gl_MultiTexCoord0); Consult the documentation in the “OpenGL ES Shading Language 1.0.17 Specification” available at the “Khronos OpenGL ES API Registry” to figure out what radians() is good for. Special Variables in the Fragment ShaderEdit the tutorial on cutaways. SummaryEdit Congratulations, you have reached the end of this tutorial! We have seen: - The list of built-in attributes in Blender:Edit If you still want to know more - about the data flow in vertex and fragment shaders, you should read the description of the “OpenGL ES 2.0 Pipeline”. - about operations and functions for vectors, you should read “Vector and Matrix Operations”. < GLSL Programming/Blender
http://en.m.wikibooks.org/wiki/GLSL_Programming/Blender/Debugging_of_Shaders
CC-MAIN-2014-15
en
refinedweb
. Definition at line 79 of file TsqrFactory.hpp. Instantiate and return TSQR implementation. Instantiate and return (through the output arguments) the two TSQR implementation objects. Definition at line 109 of file TsqrFactory.hpp.
http://trilinos.sandia.gov/packages/docs/r10.6/packages/anasazi/doc/html/classTSQR_1_1Trilinos_1_1TsqrFactory.html
CC-MAIN-2014-15
en
refinedweb
Path Objects Path objects represent a sequence of directories that may or may not include a file. There are three ways to construct a Path object: - FileSystems.getDefault().getPath(String first, String… more) - Paths.get(String path, String… more), convenience method that calls FileSystems.getDefault().getPath - Calling the toPath method on a java.io.File object From this point forward in all our examples, we will use the Paths.get method. Here are some examples of creating Path objects: //Path string would be "/foo" Paths.get("/foo"); //Path string "/foo/bar" Paths.get("/foo","bar"); To manipulate Path objects there are the Path.resolve and Path.relativize methods. Here is an example of using Path.resolve: //This is our base path "/foo" Path base = Paths.get("/foo"); //filePath is "/foo/bar/file.txt" while base still "/foo" Path filePath = base.resolve("bar/file.txt"); Using the Path.resolve method will append the given String or Path object to the end of the calling Path, unless the given String or Path represents an absolute path, the the given path is returned, for example: Path path = Paths.get("/foo"); //resolved Path string is "/usr/local" Path resolved = path.resolve("/usr/local"); The Path.relativize works in the opposite fashion, returning a new relative path that if resolved against the calling Path would result in the same Path string. Here’s an example: // base Path string "/usr" Path base = Paths.get("/usr"); // foo Path string "/usr/foo" Path foo = base.resolve("foo"); // bar Path string "/usr/foo/bar" Path bar = foo.resolve("bar"); // relative Path string "foo/bar" Path relative = base.relativize(bar); Another method on the Path class that is helpful is the Path.getFileName, that returns the name of the farthest element represented by this Path object, with the name being an actual file or just a directory. For example: //assume filePath constructed elsewhere as "/home/user/info.txt" //returns Path with path string "info.txt" filePath.getFileName() //now assume dirPath constructed elsewhere as "/home/user/Downloads" //returns Path with path string "Downloads" dirPath.getFileName() In the next section we are going to take a look at how we can use Path.resolve and Path.relativize in conjunction with Files class for copying and moving files. Files Class The Files class consists of static methods that use Path objects to work with files and directories. While there are over 50 methods in the Files class, at this point we are only going to discuss the copy and move methods. Copy A File To copy one file to another you would use the (any guesses on the name?) Files.copy method – copy(Path source, Path target, CopyOption… options) very concise and no anonymous inner classes, are we sure it’s Java?. The options argument are enums that specify how the file should be copied. (There are actually 2 different Enum classes, LinkOption and StandardCopyOption, but both implement the CopyOption interface) Here is the list of available options for Files.copy: - LinkOption.NOFOLLOW_LINKS - StandardCopyOption.COPY_ATTRIBUTES - StandardCopyOption.REPLACE_EXISTING There is also a StandardCopyOption.ATOMIC_MOVE enum, but if this option is specified, an UsupportedOperationException is thrown. If no options are specified, the default is to throw an error if the target file exists or is a symbolic link. If the path object is a directory then an empty directory is created in the target location. (Wait a minute! didn’t it say in the introduction that we could copy the entire contents of a directory? The answer is still yes and that is coming!) Here’s an example of copying a file to another with Path objects using the Path.resolve and Path.relativize methods: Path sourcePath ... Path basePath ... Path targetPath ... Files.copy(sourcePath, targetPath.resolve(basePath.relativize(sourcePath)); Move A File. Files.move can be called on an empty directory or if it does not require moving a directories contents, re-naming for example, the call will succeed, otherwise it will throw an IOException (we’ll see in the following section how to move non-empty directories). The default is to throw an Exception if the target file already exists. If the source is a symbolic link, then the link itself is moved, not the target of the link. Here’s an example of Files.move, again tying in the Path.relativize and Path.resolve methods: Path sourcePath ... Path basePath ... Path targetPath ... Files.move(sourcePath, targetPath.resolve(basePath.relativize(sourcePath)); Copying and Moving Directories One of the more interesting and useful methods found in the Files class is Files.walkFileTree. The walkFileTree method performs a depth first traversal of a file tree. There are two signatures: - walkFileTree(Path start,Set options,int maxDepth,FileVisitor visitor) - walkFileTree(Path start,FileVisitor visitor) The second option for Files.walkFileTree calls the first option with EnumSet.noneOf(FileVisitOption.class) and Integer.MAX_VALUE. As of this writing, there is only one file visit option – FOLLOW_LINKS. The FileVisitor is an interface that has four methods defined: - preVisitDirectory(T dir, BasicFileAttributes attrs) called for a directory before all entires are traversed. - visitFile(T file, BasicFileAttributes attrs) called for a file in the directory. - postVisitDirectory(T dir, IOException exc) only called after all files and sub-directories have been traversed. - visitFileFailed(T file, IOException exc) called for files that could not be visited All of the methods return one of the four possible FileVisitResult enums : - FileVistitResult.CONTINUE - FileVistitResult.SKIP_SIBLINGS (continue without traversing siblings of the directory or file) - FileVistitResult.SKIP_SUBTREE (continue without traversing contents of the directory) - FileVistitResult.TERMINATE To make life easier there is a default implementation of the FileVisitor, SimpleFileVisitor (validates arguments are not null and returns FileVisitResult.CONTINUE), that can be subclassed co you can override just the methods you need to work with. Let’s take a look at a basic example for copying an entire directory structure. Copying A Directory Tree Example Let’s take a look at a class that extends SimpleFileVisitor used for copying a directory tree (some details left out for clarity): public class CopyDirVisitor extends SimpleFileVisitor<Path> { private Path fromPath; private Path toPath; private StandardCopyOption copyOption = StandardCopyOption.REPLACE_EXISTING; .... @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { Path targetPath = toPath.resolve(fromPath.relativize(dir)); if(!Files.exists(targetPath)){ Files.createDirectory(targetPath); } return FileVisitResult.CONTINUE; } @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { Files.copy(file, toPath.resolve(fromPath.relativize(file)), copyOption); return FileVisitResult.CONTINUE; } } On line 9, each directory will be created in the target, ‘toPath’, as each directory from the source, ‘fromPath’,is traversed. Here we can see the power the Path object with respect to working with directories and files. As the code moves deeper into the directory structure, the correct Path objects are constructed simply from calling relativize and resolve on the fromPath and toPath objects, respectively. At no point do we need to be aware of where we are in the directory tree, and as a result no cumbersome StringBuilder manipulations are needed to create the correct paths. On line 17, we see the Files.copy method used to copy the file from the source directory to the target directory. Next is a simple example of deleting an entire directory tree. Deleting A Directory Tree Example In this example SimpleFileVisitor has been subclassed for deleting a directory structure: public class DeleteDirVisitor extends { if(exc == null){ Files.delete(dir); return FileVisitResult.CONTINUE; } throw exc; } } As you can see, deleting is a very simple operation. Simply delete each file as you find them then delete the directory on exit. Combining Files.walkFileTree with Google Guava The previous two examples, although useful, were very ‘vanilla’. Let’s take a look at two more examples that are a little more creative by combining the Google Gauva Function and Predicate interfaces. public class FunctionVisitor extends SimpleFileVisitor<Path> { Function<Path,FileVisitResult> pathFunction; public FunctionVisitor(Function<Path, FileVisitResult> pathFunction) { this.pathFunction = pathFunction; } @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { return pathFunction.apply(file); } } In this very simple example, we subclass SimpleFileVisitor to take a Function object as a constructor parameter and as the directory structure is traversed, apply the function to each file. public class CopyPredicateVisitor extends SimpleFileVisitor<Path> { private Path fromPath; private Path toPath; private Predicate<Path> copyPredicate; public CopyPredicateVisitor(Path fromPath, Path toPath, Predicate<Path> copyPredicate) { this.fromPath = fromPath; this.toPath = toPath; this.copyPredicate = copyPredicate; } @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { if (copyPredicate.apply(dir)) { Path targetPath = toPath.resolve(fromPath.relativize(dir)); if (!Files.exists(targetPath)) { Files.createDirectory(targetPath); } return FileVisitResult.CONTINUE; } return FileVisitResult.SKIP_SUBTREE; } @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { Files.copy(file, toPath.resolve(fromPath.relativize(file))); return FileVisitResult.CONTINUE; } } In this example the CopyPredicateVisitor takes a Predicate object and based on the returned boolean, parts of the directory structure are not copied. I would like to point out that the previous two examples, usefulness aside, do work in the unit tests foe the source code provided with this post. DirUtils Building on everything we’ve covered so far, I could not resist the opportunity to create a utility class, DirUtils, as an abstraction for working with directories that provides the following methods: //deletes all files but leaves the directory tree in place DirUtils.clean(Path sourcePath); //completely removes a directory tree DirUtils.delete(Path sourcePath); //replicates a directory tree DirUtils.copy(Path sourcePath, Path targetPath); //not a true move but performs a copy then a delete of a directory tree DirUtils.move(Path sourcePath, Path targetPath); //apply the function to all files visited DirUtils.apply(Path sourcePath,Path targetPath, Function function); While I wouldn’t go as far to say it’s production ready, it was fun to write. Conclusion That wraps up the new copy and move functionality provided by the java.nio.file package. I personally think it’s very useful and will take much of the pain out of working with files in Java. There’s much more to cover, working with symbolic links, stream copy methods, DirectoryStreams etc, so be sure to stick around. Thanks for your time. As always comments and suggestions are welcomed. Reference: What’s New In Java 7: Copy and Move Files and Directories from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog.
http://www.javacodegeeks.com/2012/02/java-7-copy-and-move-files-and.html
CC-MAIN-2014-15
en
refinedweb
geotrigger-python 0.1.1 A simple client library for interacting with the ArcGIS Geotrigger API. A simple Python client library for interacting with the ArcGIS Geotrigger Service via the GeotriggerClient object. The Geotrigger Service is a cloud-hosted geofencing platform, which sends push notifications or notifies a remote server when a device enters or exits an area. The Geotrigger API manages Application and Device information and permissions, as well as providing access to create, update, and list information about Triggers, Tags, and Device Locations. For more information please refer to the Geotrigger Service Documentation. Features - Handles authentication and refreshing of credentials - Supports making requests as an application, allowing for full management of devices, triggers, tags, and permissions - Also supports making requests as a device, which can be useful for testing purposes Dependencies - Requests (>= 2.1.0) For running tests, you'll also need: - Mock (>= 1.0.1) Installation You can install geotrigger-python from PyPI using the following command: pip install geotrigger-python It's also possible to install from a clone of this repository by running setup.py install or setup.py develop. Examples Using the GeotriggerClient as an application This method of using the GeotriggerClient is for server-side apps, acting as the sole owner of your ArcGIS application. Before continuing, you'll need to find the client_id and client_secret for your ArcGIS application on the ArcGIS for Developers site. You'll find them in the API Access section of your applications details. Please make sure to keep your client_secret secure. If a third party obtains your client secret, the will be able to do anything they want to your Geotrigger application. Your client_secret should only be included in server-side applications and should never be distributed as part of a client-side web or mobile application. You will need to fill in values for the variable names given in all-caps. from geotrigger import GeotriggerClient # Create a GeotriggerClient as an Application gt = GeotriggerClient(CLIENT_ID, CLIENT_SECRET) # Fetch a list of all triggers in this application. triggers = gt.request('trigger/list') # Print all the triggers and any tags applied to them. print "\nFound %d triggers:" % len(triggers['triggers']) for t in triggers['triggers']: print "- %s (%s)" % (t['triggerId'], ",".join(t['tags'])) # Add "testing123" tag to all of the triggers that we fetched above. triggers_updated = gt.request('trigger/update', { 'triggerIds': [t['triggerId'] for t in triggers['triggers']], 'addTags': TAG }) # Print the updated triggers. print "\nUpdated %d triggers:" % len(triggers_updated['triggers']) for t in triggers_updated['triggers']: print "- %s (%s)" % (t['triggerId'], ",".join(t['tags'])) # Delete the "testing123" tag from the application. tags_deleted = gt.request('tag/delete', {'tags': TAG}) print '\nDeleted tags: "%s"' % ", ".join(tags_deleted.keys()) Using the GeotriggerClient as a Device The GeotriggerClient can also be used as if it were a device, which will allow you to send location updates and fire triggers, but you will not be able to receive any Geotrigger notifications, because they are sent as push messages to actual mobile devices. You can use the trigger/history API route or configure your triggers with a callbackUrl in order to observe that triggers are being fired. You'll only need the client_id for your application in order to use the GeotriggerClient as a device. For testing callback triggers, you can use the handy RequestBin service. Create a new bin and provide its URL as the callbackUrl when creating a trigger. You will need to fill in values for the variable names given in all-caps. from geotrigger import GeotriggerClient # Create a GeotriggerClient as a device gt = GeotriggerClient(CLIENT_ID) # Default tags are created for all devices and triggers. Device default tags # can be used when you want to allow devices to create triggers that only they # can fire. Default tags look like: 'device:device_id' or 'trigger:trigger_id' device_tag = 'device:%s' % gt.session.device_id # Build a callback trigger, using your default tag and RequestBin URL. esri_hq = { 'condition': { 'geo': { 'latitude': 34.0562, 'longitude': -117.1956, 'distance': 100 }, 'direction': 'enter' }, 'action': { 'callbackUrl': CALLBACK_URL }, 'setTags': device_tag } # Post the trigger to the Geotrigger API trigger = gt.request('trigger/create', esri_hq) print trigger # Construct a fake location update to send to the Geotrigger API. # Supplying a previous location is not strictly required, but will speed up # trigger processing by avoiding a database lookup. location_update = { 'previous': { 'timestamp': datetime.now().isoformat(), 'latitude': 45.5165, 'longitude': -122.6764, 'accuracy': 5, }, 'locations': [ { 'timestamp': datetime.now().isoformat(), 'latitude': 34.0562, 'longitude': -117.1956, 'accuracy': 5, } ] } # Send the location update. update_response = gt.request('location/update', location_update) print update_response Shortly after running the above code, you will see a POST to your callback url. Advanced GeotriggerClient usage If you already have an ArcGIS Application access_token that you'd like to use to create a GeotriggerClient, pass in a GeotriggerApplication as the session kwarg. You may want to do this if you are integrating Geotrigger functionality into an application that already obtains credentials from ArcGIS Online. Similarly, if you want to impersonate an existing device for which you already have a client_id, device_id, access_token, and refresh_token, you can create your own GeotriggerDevice to pass into the GeotriggerClient. This can be used to debug apps that are being developed with the Geotrigger SDKs for Android and iOS. from geotrigger import GeotriggerClient, GeotriggerApplication, GeotriggerDevice app = GeotriggerApplication(CLIENT_ID, CLIENT_SECRET, ACCESS_TOKEN) app_client = GeotriggerClient(session=app) device = GeotriggerDevice(CLIENT_ID, DEVICE_ID, ACCESS_TOKEN, REFRESH_TOKEN) device_client = GeotriggerClient(session=device) LICENSE file. - Downloads (All Versions): - 20 downloads in the last day - 131 downloads in the last week - 452 downloads in the last month - Author: Josh Yaganeh - Bug Tracker: - Requires requests - Categories - Package Index Owner: jyaganeh - DOAP record: geotrigger-python-0.1.1.xml
https://pypi.python.org/pypi/geotrigger-python
CC-MAIN-2014-15
en
refinedweb
Feel free to talk to her, by typing into the text box above. If you ask her questions, she'll answer you. The cool thing is that we have designed her for extensibility, so that the internet community can make her smarter. As an example, I built a small code snippet to interface with the Best Buy Remix API so that you can ask her questions about where Best Buy stores are. Here's a picture of a dialog that I had with Amy Iris earlier today: As you can see, I have asked Amy Iris a couple of different ways to tell me where various Best Buy stores are. This example could be extended for all of the Best Buy API calls (such as product lookups). Here's the code that's part of the Bot's logic. One little 14-line code snippet, submitted to Amy Iris' brain, and she now is that much smarter. # example of Best Buy Store Locator import amyiris, re from google.appengine.api import urlfetch if ("best buy" in str(textin)) and ((' in ' in str(textin)) or ('near' in str(textin))): fields = ['name','distance','address'] url = "(area(%s,50))?show=" url += ",".join(fields) + "&apiKey=amysremixkeygoeshere" r = "The nearest Best Buy store to %s is the %s store, "+ "which is %s miles away, at %s." vals = [re.search(r'\d'*5,str(textin)).group(0),] #grab the zip code page = urlfetch.fetch(url%vals[0]).content #look up results based on zipcode for tag in fields: #parse the xml vals.append(re.search('<'+tag+'>(.*?)</'+ tag+'>',page, re.DOTALL).group(1)) say(r%tuple(vals),confidence=21 ) A quick code review reveals the tricks (and limitations) of this conversational parser. I scan for the words "best buy", " in ", and "near", and rely on a 5-digit zip code in a regex search (that is, r'\d'*5). And if I find all these, then the snippet will retrieve the information from the Best Buy web site and present it to the user in conversation form. Imagine - it's now available on the web, on twitter, on your cell phone. And this is just one small look-up. Imagine what happens as people begin contributing similar code snippets over the years! Amy Iris will be brilliant!
http://blog.amyiris.com/2009/06/how-amy-iris-knows-where-best-buy-is.html
CC-MAIN-2014-15
en
refinedweb
NAME ng_sppp -- sppp netgraph node type SYNOPSIS #include <netgraph/ng_sppp.h> DESCRIPTION(4), and as an alternative to ng_cisco(4) node. While having less features than net/mpd + ng_ppp(4), it is significantly easier to use in the majority of simple configurations, and allows the administrator to not install the net/mpd port. With sppp you do not need any other nodes, not even an ng_iface(4) node. When an sppp node is created, a new interface appears which is accessible via ifconfig(8). Network interfaces corresponding to sppp nodes are named sppp0, sppp1, etc. When a node is shut down, the corresponding interface is removed, and the interface name becomes available for reuse by future sppp nodes. New nodes always take the first unused interface. The node itself is assigned the same name as its interface, unless the name already exists, in which case the node remains unnamed. The sppp node allows drivers written to the old sppp(4) interface to be rewritten using the newer more powerful netgraph(4) interface, and still behave in a compatible manner without supporting both network modules. An sppp node has a single hook named downstream. Usually it is connected directly to a device driver hook. The sppp nodes support the Berkeley Packet Filter, bpf(4). HOOKS This node type supports the following hooks: downstream The connection to the synchronous line. CONTROL MESSAGES This node type supports the generic control messages, plus the following: NGM_IFACE_GET_IFNAME Returns the name of the associated interface as a NUL-terminated ASCII string. Normally this is the same as the name of the node. SHUTDOWN This node shuts down upon receipt of a NGM_SHUTDOWN control message. The associated interface is removed and becomes available for use by future sppp nodes. Unlike most other node types and like ng_iface(4) does, an sppp node does not go away when all hooks have been disconnected; rather, an explicit NGM_SHUTDOWN control message is required. EXAMPLES For example, if you have the cx(4) device, you could run PPP over it with just one command: ngctl mkpeer cx0: sppp rawdata downstream Now you have the sppp0 interface (if this was the first sppp node) which can be accessed via ifconfig(8) as a normal network interface, or via spppcontrol(8) as an sppp(4) interface. SEE ALSO bpf(4), cx(4), netgraph(4), ng_cisco(4), ng_iface(4), ng_ppp(4), sppp(4), ifconfig(8), ngctl(8), spppcontrol(8) For complex networking topologies you may want to look at net/mpd port. HISTORY The sppp node type was implemented for FreeBSD 5.0. It was included to the system since FreeBSD 5.3. AUTHORS
http://manpages.ubuntu.com/manpages/oneiric/man4/ng_sppp.4freebsd.html
CC-MAIN-2014-15
en
refinedweb
I want to make text strikethrough using CSS. I use tag: button.setStyle("-fx-strikethrough: true;"); But this code is working, please help. You need to use a CSS stylesheet to enable strikethrough on a button. Using a CSS stylesheet is usually preferable to using an inline CSS setStyle command anyway. /** file: strikethrough.css (place in same directory as Strikeout) */ .button .text { -fx-strikethrough: true; } The CSS style sheet uses a CSS selector to select the text inside the button and apply a strikethrough style to it. Currently (as of Java 8), setStyle commands cannot use CSS selectors, so you must use a CSS stylesheet to achieve this functionality (or use inline lookups, which would not be advisable) - the style sheet is the best solution anyway. See @icza's answer to understand why trying to set the -fx-strikethrough style directly on the button does not work. Here is a sample application: import javafx.application.Application; import javafx.geometry.Insets; import javafx.scene.Scene; import javafx.scene.control.Button; import javafx.scene.layout.StackPane; import javafx.stage.Stage; public class Strikeout extends Application { @Override public void start(Stage stage) throws Exception { Button strikethrough = new Button("Strikethrough"); strikethrough.getStylesheets().addAll(getClass().getResource( "strikethrough.css" ).toExternalForm()); StackPane layout = new StackPane( strikethrough ); layout.setPadding(new Insets(10)); stage.setScene(new Scene(layout)); stage.show(); } public static void main(String[] args) { launch(args); } } Button does not support the -fx-strikethrough CSS property, so setting this has no effect. Here is the official JavaFX 8 CSS reference: JavaFX CSS Reference Guide The -fx-strikethrough is defined for the Text node. Button only supports the following: cancel, default, and all inherited from Labeled: -fx-alignment, -fx-text-alignment, -fx-text-overrun, -fx-wrap-text, -fx-font, -fx-underline, -fx-graphic, -fx-content-display, -fx-graphic-text-gap, -fx-label-padding, -fx-text-fill, -fx-ellipsis-string And Labeled inherits these from Control: -fx-skin, -fx-focus-traversable. So looking at the supported properties, the following works (of course it's not strikethrough): button.setStyle("-fx-underline: true");
https://javafxpedia.com/en/knowledge-base/26843951/not-working-ccs-style-javafx--fx-strikethrough
CC-MAIN-2020-40
en
refinedweb
A Web API is an online “application programming interface” that allows developers to interact with external services. These are the commands that the developer of the service has determined will be used to access certain features of their program. It is referred to as an interface because a good API should have commands that make it intuitive to interact with. An example of this might be if we want to get information about a user from their social media account. That social media platform would likely have a web API for developers to use in order to request that data. Other commonly used APIs handle things like advertising (AdMob), machine learning (ML Kit), and cloud storage. It’s easy to see how interacting with these types of services could extend the functionality of an app. In fact, the vast majority of successful apps on the Play Store will use at least one web API! In this post, we’ll explore how to use a web API from within an Android app. How a Web API works Most APIs work using either XML or JSON. These languages allow us to send and retrieve large amounts of useful information in the form of objects. XML is eXtensible Markup Language. If you are an Android developer, then you’re probably already familiar with XML from building your layouts and saving variables. XML is easy to understand and generally places keys inside triangle brackets, followed by their values. It looks a bit like HTML: <client> <name>Jeff</name> <age>32</age> </client> JSON, on the other hand, stands for “Javascript Object Notation.” It is a short-hand for sending data online. Like XML or a CSV file, it can be used to send “value/attribute pairs.” Here the syntax looks a little different, though: [{client: {“name”:”Jeff”, “age”: 32}}] These are “data objects” in that they are conceptual entities (people in this case) that can be described by key/value pairs. We use these in our Android apps by turning them into objects just as we normally would, with the use of classes. See also: How to use classes in Java To see this in action, we need to find a Web API that we can use readily. In this example, we will be using JSON Placeholder. This is a free REST API specifically for testing and prototyping, which is perfect for learning a new skill! REST is a particular architectural “style” that has become standard for communicating across networks. REST-compliant systems are referred to as “RESTful” and share certain characteristics. You don’t need to worry about that right now, however. Setting up our project for Retrofit 2 For this example, we’ll also be using something called Retrofit 2. Retrofit 2 is an extremely useful HTTP client for Android that allows apps to connect to a Web API safely and with a lot less code on our part. This can then be used, for example, to show Tweets from Twitter, or to check the weather. It significantly reduces the amount of work we need to do to get that working. See also: Consuming APIs: Getting started with Retrofit on Android First up, we need to add internet permission to our Android Manifest file to make sure our app is allowed to go online. Here is what you need to include: <uses-permission android: We also need to add a dependency if we are going to get Retrofit 2 to work in our app. So in your module-level build.gradle file add: implementation 'com.squareup.retrofit2:retrofit:2.4.0' We also need something called Gson: implementation 'com.squareup.retrofit2:converter-gson:2.4.0' Gson is what is going to convert the JSON data into a Java object for us (a process called deserialization). We could do this manually, but using tools like this makes life much easier! There are actually later versions of Retrofit that make a few changes. If you want to be up-to-the-moment, check out the official website. Converting JSON to Java object A “Route” is a URL that represents an endpoint for the API. If we take a look at JSON Placeholder, you’ll see we have options such as “/posts” and “/comments?postId=1”. Chances are you will have seen URLs like this yourself while browsing the web! Click on /posts and you’ll see a large amount of data in JSON format. This is a dummy text that mimics the way a page full of posts on social media looks. It is the information we want to get from our app and then display on the screen. [{ "userId": 1, "id": 1, "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit", "body": "quia et suscipitnsuscipit recusandae consequuntur expedita et cumnreprehenderit molestiae ut ut quas totamnnostrum rerum est autem sunt rem eveniet architecto" }, { "userId": 1, "id": 2, "title": "qui est esse", "body": "est rerum tempore vitaensequi sint nihil reprehenderit dolor beatae ea dolores nequenfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendisnqui aperiam non debitis possimus qui neque nisi nulla" }, { "userId": 1, "id": 3, "title": "ea molestias quasi exercitationem repellat qui ipsa sit aut", "body": "et iusto sed quo iurenvoluptatem occaecati omnis eligendi aut adnvoluptatem doloribus vel accusantium quis pariaturnmolestiae porro eius odio et labore et velit aut" } To handle this information, we’re going to need a class that can build objects from the deserialized data. To that end, create a new class in your project and call it “PlaceholderPost”. This will need variables that correspond to the data we’re getting from the /posts page (“body”, “ID” etc.). We’ll be getting that information from the web API, so we need a getter for each of them. The final class should look like this: public class PlaceholderPost { private int userID; private int id; private String title; private String body; public int getUserId() { return userID; } public int getId() { return id; } public String getTitle() { return title; } public String getBody() { return body; } } This could just as easily be users on Twitter, messages on Facebook, or information about the weather! Interface files Next, we need a new interface file. You create this the same way you create a class: by clicking on your package name in the project window and choosing “New > Class” but here you’re selecting “Interface” underneath where you enter the name. An interface file contains methods that are later implemented by a class. I’ve called mine “PlaceholderAPI”. This interface needs just a single method to retrieve all the data from “/Post”. If you take a look at that JSON again, you’ll notice that the curly brackets are inside square brackets. This means that we have an array of objects, which is why we want to build a list for them. The objects are instances of our “PlaceholderPost” that we just made, so that’s what we’re putting in here! For those that are very new to programming, remember that any red lines probably mean you haven’t imported a class. Just click on the highlighted statement and press alt+return to do this automatically. (I can’t imagine anyone using this as an early programming lesson but you never know!) This looks like so: import java.util.List; import retrofit2.Call; import retrofit2.http.GET; public interface PlaceholderAPI { @GET("posts") Call<List> getPosts(); } Displaying the content Now, hop back into your main activity. We could build a fancy layout for displaying all this data, but to keep things nice and simple, I’m just going to stick with the layout as it is. To use Retrofit, we’re going to need to create a new Retrofit object. We do this with the following lines of code: Retrofit retrofit = new Retrofit.Builder() .baseUrl("") .build(); As you can see, we’re passing in the rest of the URL here. We then want to use our interface: Call<List> call = placeholderAPI.getPosts(); Now we just need to call the method! Because things have been too easy so far, Android does throw a little spanner in the works by preventing you from doing this on the main thread. The reason, of course, is that if the process takes too long, it will end up freezing the app! This is true when using any Web API. It makes sense, but it’s not terribly convenient when we just want to make a tutorial. Fortunately, we don’t need to create a second thread ourselves as Retrofit actually does all that for us. We’ll now get an onResponse and onFailure callback. onFailure is, of course, where we need to handle any errors. onResponse does not mean that everything went smoothly, however. It simply means that there was a response; that the website exists. Should we get a 404 message, this would still be considered a “response.” Thus, we need to check again if the process went smoothly with isSuccessful(), which checks to see that the HTTP code is not an error. To keep things really simple, I’m going to display just one piece of data from one of the objects we’ve received. To achieve this, I renamed the textView in the layout file to give it the id “text”. You can experiment with this yourself. The full code looks like this: call.enqueue(new Callback<List>() { @Override public void onResponse(Call<List> call, Response<List> response) { if (response.isSuccessful()) { List posts = response.body(); Log.d("Success", posts.get(3).getBody().toString()); TextView textView = findViewById(R.id.text); textView.setText(posts.get(3).getBody().toString()); } else { Log.d("Yo", "Boo!"); return; } } @Override public void onFailure(Call<List> call, Throwable t) { Log.d("Yo", "Errror!"); } }); Log.d("Yo","Hello!"); } } Wrapping up At this point, you should have a good idea of how a web API works and why you want one. You would have also created your first app that uses a web API to do something potentially useful. Of course, there are countless other web APIs, and each work in their own ways. Some will require additional SDKs to use or different libraries. Likewise, there are many other actions beyond the “GET” request we demonstrated here. For example, you can use “POST” in order to send data to the server, which is useful if you ever want your users to be able to post to social media from your apps. The possibilities are endless once you combine the power and flexibility of Android with the huge resources available online.
http://www.nochedepalabras.com/how-to-use-a-web-api-from-your-android-app.html
CC-MAIN-2020-40
en
refinedweb
I have existing Hive data stored in Avro format. For whatever reason reading these data by executing SELECT is very slow. I didn't figure out yet why. So I decided to read the data directly by navigating to the partition path and using Spark SQLContext. This works much faster. However, the problem I have is reading the DOUBLE values which are stored as logical double type. In the schema file they are defined as: {"name":"ENDING_NET_RECEIVABLES_LOCAL","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":18}],"doc":"Ending Net Receivables Local","default":null} Somewhere I found a recommendation of using the following approach to convert my Avro schema into a Spark SQL schema: def getSparkSchemaForAvro(sqc: SQLContext, avroSchema: Schema): StructType = { val tmpPath = "/tmp/avro_dummy" val dummyFIle = File.createTempFile(tmpPath, "avro") val datumWriter = new GenericDatumWriter[Any]() datumWriter.setSchema(avroSchema) val writer = new DataFileWriter(datumWriter).create(avroSchema, dummyFIle) writer.flush() writer.close() val df = sqc.read.format("com.databricks.spark.avro").load("file://" + dummyFIle.getAbsolutePath) val sparkSchema = df.schema sparkSchema }However, it converts the above type into I this conversion was correct I could have read the file like this: val df = sqlContext.read.schema(CRDataUtils.getSparkSchemaForAvro(sqlContext, avroSchema)).avro(path) I also looked at com.databricks.spark.avro.SchemaConverters but the conversion method deftoSqlType(avroSchema: Schema):SchemaType returns SchemaType instead of StructType required by the above approach. Can anyone help how to read Avro files with logical types in Spark. Answer by subhamoy chowdhury · Mar 03, 2017 at 01:41 AM facing same issue here. Can anyone please help? I am using spark 1.6.0 in CDH 5.7, Same issue here can anyone please help? Answer by subhamoy chowdhury · Mar 03, 2017 at 08:03 PM I am trying to write a dataframe to avro using databricks scala api. The writing is successful. But while reading the data from hive it is throwing exception: Error: java.io.IOException: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Failed to obtain scale value from file schema: "bytes" (state=,code=0) In the avsc file I have column wityh type byte: --> {"name":"rate","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":18}],"default":null} reading ==================== val df = sqlContext.read.format("com.databricks.spark.avro") .option("avroSchema", schema.toString) .option("inferSchema", "true") .avro(sourceFile) .filter(preparePartitionFilterClause); ==================== writing ======================= df.write.mode(SaveMode.Append).format("com.databricks.spark.avro").partitionBy(TrlConstants.PARTITION_COLUMN_COUNTRYCODE).save(path); ======================= I am completely clue less please help!!! subhamoy chowdhury, How do your questions above qualify as answers? Answer by sai krishna Pujari · Apr 10, 2017 at 12:42 PM @subhamoy chowdhuryI am also facing similar problem .. Let me know if you got the solution. Answer by Uthayakumar · Mar 01, 2018 at 09:41 AM HI Subhamoy / Pujari, Similar thing happening for me, if any findings please share with me . thanks in advance. UK Answer by smiksha · Jun 11, 2019 at 12:29 PM I am facing the same issue. Can someone suggest something? @Databricks_Support Databricks Inc. 160 Spear Street, 13th Floor San Francisco, CA 94105 [email protected] 1-866-330-0121
https://forums.databricks.com/questions/9199/how-using-spark-read-logical-double-value-stored-i.html
CC-MAIN-2020-40
en
refinedweb
data¶ - api_attr declarative programming (static graph) paddle.fluid. data(name, shape, dtype='float32', lod_level=0)[source] Data Layer This function creates a variable on the global block. The global variable can be accessed by all the following operators in the graph. The variable is a placeholder that could be fed with input, such as Executor can feed input into the variable. Note paddle.fluid.layers.data is deprecated. It will be removed in a future version. Please use this paddle.fluid.data. The paddle.fluid.layers.data set shape and dtype at compile time but does NOT check the shape or the dtype of fed data, this paddle.fluid.data checks the shape and the dtype of data fed by Executor or ParallelExecutor during run time. To feed variable size inputs, users can set None or -1 on the variable dimension when using paddle.fluid.data, or feed variable size inputs directly to paddle.fluid.layers.dataand PaddlePaddle will fit the size accordingly.) – List|Tuple of integers declaring the shape. You can set “None” or -1 at a dimension to indicate the dimension can be of any size. For example, it is useful to set changeable batch size as “None” or -1. LoD-Tensor User Guide . Default: 0. - Returns The global variable that gives access to the data. - Return type Variable Examples import paddle.fluid as fluid import numpy as np # Creates a variable with fixed size [3, 2, 1] # User can only feed data of the same shape to x x = fluid.data(name='x', shape=[3, 2, 1], dtype='float32') # Creates a variable with changeable batch size -1. # Users can feed data of any batch size into y, # but size of each data sample has to be [2, 1] y = fluid.data(name='y', shape=[-1, 2, 1], dtype='float32') z = x + y # In this example, we will feed x and y with np-ndarry "1" # and fetch z, like implementing "1 + 1 = 2" in PaddlePaddle feed_data = np.ones(shape=[3, 2, 1], dtype=np.float32) exe = fluid.Executor(fluid.CPUPlace()) out = exe.run(fluid.default_main_program(), feed={ 'x': feed_data, 'y': feed_data }, fetch_list=[z.name]) # np-ndarray of shape=[3, 2, 1], dtype=float32, whose elements are 2 print(out)
https://www.paddlepaddle.org.cn/documentation/docs/en/api/fluid/data.html
CC-MAIN-2020-40
en
refinedweb
Mono comes with a really cool CSharp compiler as a service. The only problem is that no one seems to know about it! I think the main reason for this is that anything related to Mono causes a fair bit of confusion to all the people who are not familiar with it. And that certainly includes myself, as I know very little about it besides what I’m discussing in this post! Talking to various people, the general misconceptions are: - Mono only runs on Linux - Even if it runs on Windows, it doesn’t use the CLR, so I can’t use it - Mono is for strange people :) And while that may be true for some aspects of Mono, it certainly isn’t for Mono.CSharp.dll. In fact, it’s a totally ‘normal’ library that you can use in your very ‘normal’ C# projects in Visual Studio. The next hurdle is that it’s not all that easy to just get Mono.CSharp.dll. You have to either install an 80MB setup from here, or get a big .tar.gz file with lots of other things from here. And a lot of people on Windows don’t like dealing with tar.gz files (hint: use 7zip). Now the good news: after chatting with Miguel de Icaza on Twitter, I put Mono.CSharp.dll on NuGet, making it totally trivial to use from VS. There goes that hurdle. (note: I’m the package owner for now, until some Miguel-blessed dev claims it). Try Mono.CSharp in under 5 minutes Just open VS and create a Console app, and add a NuGet package reference to Mono.CSharp. That takes a whole 30 seconds. And I’ll re-emphasize that there is nothing ‘Mono’ about this Console app. It’s just plain vanilla. Now write some basic code to use the compiler. It all revolves around the Evaluator class. Here is the sample code I used (GitHub). It’s quick and dirty with poor error handling, as the focus is to just demonstrate the basic calls that make things work: using System; using System.IO; using Mono.CSharp; namespace MonoCompilerDemo { public interface IFoo { string Bar(string s); } class Program { static void Main(string[] args) { var evaluator = new Evaluator( new CompilerSettings(), new Report(new ConsoleReportPrinter())); // Make it reference our own assembly so it can use IFoo evaluator.ReferenceAssembly(typeof(IFoo).Assembly); // Feed it some code evaluator.Compile( @" public class Foo : MonoCompilerDemo.IFoo { public string Bar(string s) { return s.ToUpper(); } }"); for (; ; ) { string line = Console.ReadLine(); if (line == null) break; object result; bool result_set; evaluator.Evaluate(line, out result, out result_set); if (result_set) Console.WriteLine(result); } } } } It feeds it some starter code and start a REPL look to evaluate expressions. e.g. run it and try this. You type the first two, and the 3rd is output: MonoCompilerDemo.IFoo foo = new Foo(); foo.Bar("Hello Mono.CSharp"); HELLO MONO.CSHARP You get the idea! What about Roslyn? I blogged. But there is one major argument right now in favor of using the Mono compiler: it’s pretty much feature complete today, while Roslyn is not even close. Totally understandable given that it’s a CTP, and is only meant to give an early taste of the feature. So anyway, I still know close to nothing about Mono, but if I need to dynamically compile some pieces of C# in a ‘normal’ non-Mono project, I know that Mono.CSharp is not far away! Funny thing that usually mono lagged behind .NET usually but now it is ahead.. Thanks for the quick intro, it looks promising. However, in the version of Mono that I have (with Ubuntu 11.10, C2011), Compile is a static method of Evaluator. This is the version of dmcs: $ dmcs --version Mono C# compiler version 2.10.5.0 I have changed part of the code to the following, but still don't quite have it working: Mono.CSharp.Evaluator.Init(new string[] {} ); Evaluator.ReferenceAssembly(typeof(IFoo).Assembly); // Feed it some code var compiledMethod = Evaluator.Compile(code); @Kristian: not sure, it's probably an API change. I wish the Mono guys could start maintaining the nuget package so it stays up to date. I think the API I'm using is old. My Mono.CSharp.dll is dated 2011-09-05, even though it claims to be version 4.0.0.0. I think they *all* claim to be 4.0, which apparently just means it targets framework 4.0. It's pretty confusing...
http://oldblog.davidebbo.com/2012/02/quick-fun-with-monos-csharp-compiler-as.html?showComment=1335442249672
CC-MAIN-2020-40
en
refinedweb
. Architecture The Scala’s GRPC-Gateway is made of 3 components: - The code generator: Generates source files from the .proto files definition - The runtime library: Provides the runtime infrastructure (e.g. Netty server) - The user code: Implements business logic and runs infrastructure (e.g. spawns up server on specific port) Code generators Code generators run at compile time to generate application source files. Therefore their code is similar to compiler plugin code and is executed directly by SBT. It means the generator source code must be compatible with the Scala version used by SBT. Currently Scala 2.10 is used by SBT 0.13. As the GRPC-Gateway code generators relies on annotations define in protobuf files, these files need to be compile against Scala 2.10. Note that this code won’t be embedded into the application’s package, it is only used to generate sources that will in turn be compiles and packaged with the application. ScalaPB code generation ScalaPB provides an easy way to define your own code generators. All you have to do is to extend protocbridge.ProtocCodeGenerator and implement the def run(request: CodeGeneratorRequest): CodeGeneratorResponse method. Then the code generation is performed by means of a FunctionalPrinter to which you pass the strings of generated code: val fp = FunctionalPrinter() .add(s"package io.grpc.gateway.MyGatewayHandler") .newline .add("object MyGatewayHandler { /* More code comes here */ }") .newline All you have to do is then create a File.Builder and add the content using the FunctionalPrinter val b = CodeGeneratorResponse.File.newBuilder() b.setName(s"io/grpc/gateway/MyGatewayHandler.scala") b.setContent(fp.result) val file = b.build Finally we need to turn the File into a CodeGeneratorResponse: val b = CodeGeneratorResponse.newBuilder b.addFile(file) b.build As the FunctionalPrinter accepts any String we can use this very same mechanism to generate any type of file. In fact we use it to generate the Swagger YAML specification corresponding to the GRPC definitions. The HTTP annotations In order to define the HTTP endpoint associated to a gRPC call, the gRPC method definitions must be annotated with an HTTP option extension: import "google/api/annotations.proto"; rpc GetFeature(Point) returns (Feature) { // define associated REST endpoint on GRPC Gateway option (google.api.http) = { get: "/api/routeguide/feature" }; } This example specifies that the GetFeature method is available on the path /api/routeguide/feature using the HTTP GET method. This also introduces a dependency to the annotations.proto file. This file is provided by Google and must be available in the user project when scalaPB parses the .proto files. In order to reduce the user burden this file is embedded in the runtime’s jar file so the user only has to add the following dependency to his build.sbt: libraryDependencies += "beyondthelines" %% "grpcgatewayruntime" % "0.0.1" % "compile,protobuf" Note that the protobuf is important as it tells ScalaPB to look into this jar for .proto files. One pitfall worth noting is that in order to be able to use extensions the corresponding .proto files must be compiled in Java. In our case it means that the Google annotations .proto files must be compiled as Java sources. Fortunately it’s just a matter of configuring ScalaPB accordingly: PB.targets in Compile += PB.gens.java -> (sourceManaged in Compile).value JSON translation The JSON translation is provided by the scalapb-json4s library which is able to translate any GeneratedMessage (case classes generated from protobuf definitions) into JSON. unaryCall(req.method(), req.uri(), body) // calls the gRPC service .map(JsonFormat.toJsonString) // transform into JSON format .map(_.getBytes(StandardCharsets.UTF_8)) // transform into bytes Similarly transforming an HTTP request body is straightforward: printer.add( s"val input = Try(JsonFormat.fromJsonString[${method.getInputType.getName}](body))" ) This snippet above is from the GRPC-Gateway generator. It generates the following code: val input = Try(JsonFormat.fromJsonString[T](body)) which turns a JSON string into an instance of type T. The Swagger generator The swagger generator is quite simple. It iterates over the services and methods definitions and generates corresponding Swagger specification for unary methods calls (no streaming supported) having an HTTP endpoint defined using the HTTP annotation. It doesn’t generate a Scala file but a YAML file (.yml). The GRPC-Gateway generator The structure of the gateway generator follows a similar structure except that this time it does generate a Scala source file. As the generated code is going to be compiled and included with the application code it doesn’t have to be limited to Scala 2.10 but can use the latest Scala version. The code generated for the GRPC-Gateway is a simple Netty handler in charge of: - Building gRPC input parameters from requests body or query string - Performing corresponding gRPC call using the instances extracted above - Translating back the responses into JSON - Forwarding the translated responses to the HTTP Client In fact only the extraction of the input parameters and the call to the appropriate gRPC method is generated. The last 2 actions (translating the response to Json and sending the response back) are generic enough so they can be placed into a super class that is extended by the generated code. printer .add(s"class ${service.getName}Handler(channel: ManagedChannel)(implicit ec: ExecutionContext)") .indent .add( "extends GrpcGatewayHandler(channel)(ec) {", s"""override val name: String = "${service.getName}"""", s"private val stub = ${service.getName}Grpc.stub(channel)" ) .newline .call(generateUnaryCall(service)) .outdent .add("}") .newline The runtime library The GRPC Gateway Handler The runtime provides the Netty’s handler common functionality that is extended by the generated code. This is the GrpcGatewayHandler. Its role is to extract the HTTP request body and parameters, invoke the subclass method to trigger the call to the GRPC server, then translate the response into JSON and send it back to the HTTP client. The Swagger Handler The swagger handler is a simple Netty handler that relies Swagger-UI to serve the generated swagger specification. It gets the requested files from the classpath (as both the generated swagger specs and the swagger-ui files are packaged inside jars) and serves them back to the client. The swagger .yml files is available on the /specs path while the swagger files are available on the /docsfiles. Therefore to open the swagger docs one must point its browser to the following url:. (Assuming the GRPC gateway runs on localhost:8981 and that the spec file is named RouteGuide.yml). The server utilities Finally the runtime provides a utility class GrpcGatewayServerBuilder in order to start a gateway server easily. This is what a user must write in order to start its own gateway: // channel pointing to the GRPC server val channel = ManagedChannelBuilder .forAddress("localhost", 8980) .usePlaintext(true) .build() val gateway = GrpcGatewayServerBuilder .forPort(port) // add the generated handler .addService(new RouteGuideHandler(channel)) .build() gateway.start() Conclusion Here ends our tour of the GRPC gateway implementation. It is merely a Proof-of-concept but it should get you through the main pitfalls of working with protobuf code generation. It still suffers a few limitations: no streaming is supported (might be worth looking at websocket) or try a different backend. In fact I’ll be glad to see gRPC running on top of Akka-Http now that it officially supports HTTP/2 it surely sounds possible. And as always the source code is available on github for those who want to have a closer look or contribute. If you only want to give it a try you may have a look at this example. Enjoy!
http://www.beyondthelines.net/computing/grpc-rest-gateway-in-scala/
CC-MAIN-2020-40
en
refinedweb
Blog archive in space I’ve been writing articles on this blog for about 13 years, and for a while now I’ve marked all of the 400ish articles with geo tags. This blog is Jekyll-based. To add Geo tags, all I had to do was add the information to the ‘front-matter’. Here’s the header of a sample post: title: "Browser tabs are probably the wrong metaphor" date: "2019-06-11 21:14:00 UTC" tags: - browsers - ux geo: [43.660773, -79.429926] location: "Bloor St W, Toronto, Canada" I thought it would be neat to grab all these posts and plot them on a map, so next to my ‘time-based’ archive, I can look at a ‘space-based’ one. This is how that looks like: Want to check it out? Browse this interactive map To generate this map, I did two things. First I generated a .kml file. The process for this is basically the same as generating an atom feed for your Jekyll blog. This is how mine looks like: --- layout: null --- <?xml version="1.0" encoding="UTF-8"?> <kml xmlns="" xmlns: <Document> <name>{{ site.title }}</name> <description> This map contains a list of locations where I wrote an article on this blog. </description> <Folder> <name>Posts</name> {% for post in site.posts %}{% if post.geo %} <Placemark> <name>{{ post.title | xml_escape }}</name> <Point> <coordinates> {{ post.geo[1] }},{{ post.geo[0] }},0 </coordinates> </Point> <description>{{post.url}}</description> <atom:link </Placemark> {% endif %}{% endfor %} </Folder> </Document> </kml> Lastly, I needed to generate a map page and use the Google maps API to pull in the .kml: --- layout: default bodyClass: map title: "Blog archive in space" --- <div id="map"></div> <script> var map; var src = ''; function initMap() { map = new google.maps.Map(document.getElementById('map'), { center: new google.maps.LatLng(-19.257753, 146.823688), zoom: 2, mapTypeId: 'terrain' }); var kmlLayer = new google.maps.KmlLayer(src, { suppressInfoWindows: false, preserveViewport: false, map: map }); } </script> <script async defer </script>
https://evertpot.com/blog-archive-in-space/
CC-MAIN-2020-40
en
refinedweb
This example demonstrates the usage of abstraction in Java programming language What is Abstraction Abstraction is a process of hiding the implementation details from the user. Оnly the functionality will be provided to the user. In Java, abstraction is achieved using abstract classes and interfaces. We have a more detailed explanation on Java Interfaces, if you need more info about Interfaces please read this tutorial first. Abstraction is one of the four major concepts behind object-oriented programming (OOP). OOP questions are very common in job interviews, so you may expect questions about abstraction on your next Java job interview. Java Abstraction Example To give an example of abstraction we will create one superclass called Employee and two subclasses – Contractor and FullTimeEmployee. Both subclasses have common properties to share, like the name of the employee and the the amount of money the person will be paid per hour. There is one major difference between contractors and full-time employees – the time they work for the company. Full-time employees work constantly 8 hours per day and the working time of contractors may vary. Lets first create the superclass Employee. Note the usage of abstract keyword in class definition. This marks the class to be abstract, which means it can not be instantiated directly. We define a method called calculateSalary() as an abstract method. This way you leave the implementation of this method to the inheritors of the Employee class. package net.javatutorial;; } } The Contractor class inherits all properties from its parent Employee but have to provide it’s own implementation of calculateSalary() method. In this case we multiply the value of payment per hour with given working hours. package net.javatutorial; public class Contractor extends Employee { private int workingHours; public Contractor(String name, int paymentPerHour, int workingHours) { super(name, paymentPerHour); this.workingHours = workingHours; } @Override public int calculateSalary() { return getPaymentPerHour() * workingHours; } } The FullTimeEmployee also has it’s own implementation of calculateSalary()method. In this case we just multiply by constant 8 hours. package net.javatutorial; public class FullTimeEmployee extends Employee { public FullTimeEmployee(String name, int paymentPerHour) { super(name, paymentPerHour); } @Override public int calculateSalary() { return getPaymentPerHour() * 8; } } if i am not wrong according to you FullTimeEmployee achieve abstraction.but i confused how this guy can achieve abstraction. because he already know what to implement calculateSalary () in FullTimeEmployee . calculateSalary() is an abstract method in the parent class – Employee, so the ancestors like FullTimeEmployee have to do their own implementation of this method. Employee does not know what is the implementation in FullTimeEmployee and that’s the abstraction Thanks functionality of abstract methods and inheritance is similar. whats the difference? Wrong. Abstract methods do not have functionality at all. They just declare the signature of the method and the implementing class should care about the functionality. So abstraction is not inheritance, because when we talk about inheritance, this means you inherit the functionality 🙂 i don’t understand why you need to use abstract class in every abstraction example. When both are totally different?
https://javatutorial.net/java-abstraction-example
CC-MAIN-2020-40
en
refinedweb
winston-errbit-v2winston-errbit-v2 Errbit winston transporter for using the v2 API of Errbit. How to useHow to use Super duper easy. import winston from 'winston'; import { ErrbitLogger } from 'winston-errbit-v2'; import { config } from 'your-config'; export function errbitInit() { const options = { key: config.errbit.key, host: config.errbit.url, env: config.env, }; winston.add(ErrbitLogger, options) } Maintaining and informationMaintaining and information I created this npm package because I had to use it in one of my projects. Since the errbit was kinda legacy we used, I had to make a really simple transporter for it. At the moment I am really busy with my work, however, I might take my time and make this transporter for every API version of errbit. But it might not happen until 2016, December.
https://libraries.io/npm/winston-errbit-v2
CC-MAIN-2020-40
en
refinedweb
Hello!, i need help i have been stuck for about a week with this, i used multiple plugins on a proyect and those plugins uses the AndroidManifest.xml, so when i import a plugin it overwrites the old AndroidManifest, so i have picked all of them individually and im looking a way to merge them all. The plugins im using are: Google Play OBB Downloader Everyplay Mobile Social Plugin GameAnalytics Unity Package Qualcomm Vuforia Yeah i know thats a lot of plugins, i have tried merging the files manually but i dont really know how it works so the new merged AndroidManifest didnt work, i have been investigating and i think it has to do with the < intent-filter >, from what i have read it should be only one < activity > with the < intent-filter > on the entire < application >. Im not even sure that if merging those files the way i did its the right way to do it, i need someone to point me in the right direction. Im uploading here all the manifests and the one i created merging all those, if you need more information or another file just tell me and i will upload it or give the information needed, Thank You! :) Manifests.zip Ever figure this out? I'm trying to merge Unibill and Everyplay plugins but I haven't a clue where to start. I'm trying to merge Vuforia and facebook I managed to connect Facebook and Vuforia i am going to post my answer later today! :) Answer by Lucas_Gaspar · Aug 08, 2014 at 09:00 PM Hello, i managed to merge two plugins who used the actions "android.intent.action.MAIN" and "android.intent.category.LAUNCHER" (that is the main conflict, you can have any number of activities but just only one with those two actions) so here is what i did. The plugins i merged where Mobile Social Plugin and the Vuforia plugin. I mailed the developer of the Mobile Social Plugin (wich i recommend) asking for advice, he told me that i should recompile the .jar file but include all other plugins there. In the same package of the plguin there is a .zip called "ANEclipseProject.zip" where the entire Eclipse proyect was located. I needed Eclipse so i downloaded Eclipse for Android from here AndroidSDK+Eclipse i created a new Eclipse proyect and added the files i got from the zip, then i added the .jar files of the Vuforia plugin ("QCARUnityPlayer.jar" and "Vuforia.jar" to be exact). Now you have to find how its called the main script, to do this i looked in the manifest how it was called the Activity that wanted the actions MAIN and LAUNCHER, in the Mobile Social Plugin its called "AndroidNativeBridge" find that file in the proyect and open it. Now you have to find how its named the other plugin activity that wants the actions MAIN and LAUNCHER, for example for vuforia its named "QCARPlayerNativeActivity", what you should do now is extend the file you opened before (AndroidNativeBridge) to the class "QCARPlayerNativeActivity" (before this the file should be extending "UnityPlayerActivity"), dont forget to add the "import com.qualcomm.QCARUnityPlayer.QCARPlayerNativeActivity;" to the import list to be detected correctly. (the complete name can be seen on the manifest) Then Eclipse automatically recompiles itself, you can find the .jar in the folder "bin" inside the Eclipse proyect with the name "androidnative.jar" and this file its what you need to overwrite in your proyect, find the file inside unity and replace it with your new .jar. Now all you need to do is adjust the manifest to include all the activities and permissions of both of the plugins, and in the secondary plugin activities delete the lines where tries to use the actions MAIN and LAUNCHER. So basically what you need is the Eclipse proyect of one of the plugins and include the jar files of the other plugins, extend the main activity to them and adjust the manifest to include all the atcivities. Here is the links that was given to me where i got the information about merging the two plugins. I hope this helps, you can ask me if you have any question, im not an expert but i will try to help. :) I used Android native plugin package for unity too. Your solution is very nice, but I still have some questions. You mean put the second plugin in Android native project right? Its mean copy the package of second plugin in Android native project, and set the Active turn to extend AndroidNativeBridge? If I built a plugin by my self, I have 3~4 class in one package, So if I fix code at Active class like this: public class myPlugin extends Active -->public class myPlugin extends AndroidNativeBridge. And copy the whole package in the AndroidNative project, Is it correct? By the way, How can I import other .jar in the peoject(eclipse), Because I can't find out the .jar in the file when I imported. Im sure the .jar is in the file. I used AndroidNativePlugin for unity too, Your solution is very nice, but I still have some question. You mean copy the second plugin's .jar in the AndroidNative project and change extend Active to extend AndroidNativeBridge, right? If I built a plugin in eclipse, that have 3~4 class in a package.SO, I just copy the whole package in AndroidNative project, and fix the Active class like this: public class myPlugin extend Active ---> public class myPlugin extend AndroidNativeBridge,right? Is that correct? Because I'm not familiar in eclipse, so I can't sure what I did is correct or not. By the way, How can I import .jar in the eclipse project. I can't find out the .jar in the folder when I imported. I'm sure that the .jar is in the folder exactly.What is the problem. thank you :) To add a .jar to eclipse you have to rigth click on the main proyect then "Build Path"/"Configure Build Path" then a window should appear, go to the tab "Libraries" then on the button "Add External JARs". This will make the other .jar part of your proyect so you can reference them on your code. I hope it helps :) Hi, thank you for your solution. I extend Android native plugin to my other plugin, and I have a new Android native.jar , So I replace it to my Unity Project. And my other plugin.jar also have to put in my Unity Project? Hello, I've followed each and every step of Currently using the "AndroidNativeBridge extends QCARPlayerNativeActivity" pattern, the .jar file seems to recompile correctly, but i cannot get both plugins to work together : the latest builds throws an AndroidJavaException: java.lang.NoSuchMethodError: no method with name='getClass' signature='()Ljava/lang/Object;' in class Lcom/androidnative/AndroidNativeBridge; I understand getClass is a top-level java method and cannot be overridden, and i'm no Android dev... If anyone got this working, I would be very grateful for pointers, or even an example androidmanifest file :) Thanks Answer by liortal · Aug 06, 2014 at 05:17 PM To answer this, i can generally say that there may be scenarios where multiple AndroidManifests.xml will be OK, and there will be times where it won't. The only rule is that there's no general solution for making it work (if any, at all). Your claim regarding multiple activities in a single manifest is wrong, as you can see in some of the attached files, where multiple elements are defined. Unity claims to perform some merging of provided AndroidManifests (e.g: the main one under Plugins/Android will be merged with others that are placed in subfolders of Plugins/Android) - quoted from here: The issue is that each plugin defines its own Activity that should be the LAUNCHER activity (the "entry point" to the app), and so multiple plugins will not be able to co-exist. So, for this scenario, there's no easy solution. I would try to do the following: Double check to see if i really need that many plugins? Verify with the plugin developer if there's a version or option to not use the manifest and their activity ? Following #2, check if there's an option to manually initialize the plugin code, and if so, create my own custom activity that will do the initialization (skipping the need for the plugin to have its own activity). Answer by unity_s_uRTmg4Kx61_A · Oct 03, 2017 at 10:11 AM Hello everyone, This may be coming late, but it can help other people out there to solve the issue of using two or more plugin in your unity android project. Most especially Vuforia and any other plugin. I had spend many days, weeks, and not able to combine the androidmanifest.xml of the two plugins. All of the credits go to this great guy (). He made a very detailed tutorial on how to use multiple plugin in your project. All you have to do is first read carefully, identify the exact files and edit the exact lines. The link to the tutorial is , you can also ask me any question and I can help you as much as I can. Thanks make an Android plugin for unity? 0 Answers Will an Android Manifest file named "AndroidManifest 1.xml" file work? 1 Answer FileNotFoundException: Could not find file MyProject\Temp\StagingArea\AndroidManifest.xml 0 Answers How to check if user has given camera or location permissions (android) 0 Answers Android app may work in BG with Plugin? 1 Answer
https://answers.unity.com/questions/735848/how-to-merge-multiple-androidmanifestxml-files.html
CC-MAIN-2020-40
en
refinedweb
Announcing the PyCharm 4 Release Candidate We are now approaching the final steps towards the PyCharm 4 release. So today we’re happy to announce the availability of the PyCharm 4 Release Candidate. Please take it for a spin and give us your feedback. The PyCharm 4 RC build 139.431 is available for download from the Early Access Preview page. PyCharm 4 Release Candidate mostly includes a consolidation of fixes for bugs comparing to previous EAP builds. For the detailed list of notable changes and improvements, please check the Release Notes. In case you missed what’s new coming in PyCharm 4 – please read the blog posts covering new features in the first, second, and third EAP builds. We hope that there will be no major bugs, however, should you encounter any problems, please report them to YouTrack – there’s still a bit of time to fix stuff before the final release. Stay tuned for a PyCharm 4 release announcement, follow us on twitter and develop with pleasure! -PyCharm team 9 Responses to Announcing the PyCharm 4 Release Candidate Omu cu Lopata says:November 15, 2014 Autoupdate is again not working… I don’t want to install it manually every week, so I’ll wait for the release 🙂 traff says:November 17, 2014 Sorry for inconvenience, the impossibility of patch update was caused by some changes regarding binaries assembling. This will be fixed starting from the next update. Jason Sachs says:November 15, 2014 I can’t seem to run PyCharm 4 RC on my Mac. I’m running OSX Snow Leopard. I downloaded the Community Edition with the bundled JRE 7, and I double-click on the app icons once it’s in the Applications folder, with no response, no dialog box, no nothing. Bastian says:November 19, 2014 Looks nice.. but I get a warning when doing remote debugging.. maybe you forgot to update someting ,) Warning: wrong debugger version. Use pycharm-debugger.egg from PyCharm installation folder. traff says:November 19, 2014 Hi Bastian, that is a known problem, that is already fixed. The fix will be included in final PyCharm 4.0 release. Nicholas Stevenson says:November 19, 2014 Ohh, great to hear, and it looks like 4.0 just dropped, thanks for letting me know! Nicholas Stevenson says:November 19, 2014 Oops, I mis-commented, thinking this dev response was for my issue just below… nevermind me! Nicholas Stevenson says:November 19, 2014 I am running into a strange issue, I searched through the known issues and didn’t find anything, so hopefully I’m not repeating a known bug. Code completion will happily work, showing me all of my functions and variables inside a project module. But, it will immediately stop if I am working inside a simple function. Our project setup works perfectly with other IDEs, as well as PyCharm 3, but seems to act funny with 4. I’d be happy to provide more information if this is indeed a new issue! import common.commonUtility common.commonUtility.WorksHere def testFunc(): common.commonUtility.CodeCompletionHasNoIdea Nicholas Stevenson says:November 19, 2014 This is still the case with the new 4.0 release unfortunately.
https://blog.jetbrains.com/pycharm/2014/11/announcing-the-pycharm-4-release-candidate/?replytocom=114585
CC-MAIN-2020-40
en
refinedweb
This page was generated from examples/xgboost_model_fitting_adult.ipynb. A Gradient Boosted Tree Model for the Adult Dataset¶ Introduction¶ This example introduces the reader to the fitting and evaluation of gradient boosted tree models. We consider a binary classification task into two income brackets (less than or greater than $50, 000), given attributes such as capital gains and losses, marital status, age, occupation, etc.. [3]: import matplotlib.pyplot as plt import numpy as np import xgboost as xgb from alibi.datasets import fetch_adult from copy import deepcopy from functools import partial from itertools import chain, product from scipy.special import expit invlogit=expit from sklearn.metrics import accuracy_score, confusion_matrix from tqdm import tqdm Data preparation¶ Load and split¶ The fetch_adult function returns a Bunch object containing the features, targets, feature names and a mapping of categorical variables to numbers. [36]: adult = fetch_adult() adult.keys() data = adult.data target = adult.target target_names = adult.target_names feature_names = adult.feature_names category_map = adult.category_map Note that for your own datasets you can use the utility function gen_category_map imported from alibi.utils.data to create the category map. [37]: np.random.seed(0) data_perm = np.random.permutation(np.c_[data, target]) data = data_perm[:,:-1] target = data_perm[:,-1] idx = 30000 X_train,y_train = data[:idx,:], target[:idx] X_test, y_test = data[idx+1:,:], target[idx+1:] Create feature transformation pipeline and preprocess data¶ Unlike in a previous example, the categorical variables are not encoded. For linear models such as logistic regression, using an encoding of the variable that assigns a unique integer to a category will affect the coefficient estimates as the model will learn patterns based on the ordering of the input, which is incorrect. In contrast, by encoding the into a sequence of binary variables, the model can learn which encoded dimensions are relevant for predicting a given target but cannot represent non-linear relations between the categories and targets. On the other hand, decision trees can naturally handle both data types simultaneously; a categorical feature can be used for splitting a node multiple times. So, hypothetically speaking, if the categorical variable var has 4 levels, encoded 0-3 and level 2 correlates well with a particular outcome, then a decision path could contain the splits var < 3 and var > 1 if this pattern generalises in the data and thus splitting according to these criteria reduce the splits’ impurity. In general, we note that for a categorical variable with \(q\) levels there are \(2^{q-1}-1\) possible partitions into two groups, and for large \(q\) finding an optimal split is intractable. However, for binary classification problems an optimal split can be found efficiently (see references in [1]). As \(q\) increases, the number of potential splits to choose from increases, so it is more likely that a split that fits the data is found. For large \(q\) this can lead to overfitting, so variables with large number of categories can potentially harm model performance. The interested reader is referred to consult these blog posts (first, second), which demonstrate of the pitfalls of encoding categorical data as one-hot when using tree-based models. sklearn expects that the categorical data is encoded, and this approach should be followed when working with this library. Model optimisation¶) xgboost defines three classes of parameters that need to be configured in order to train and/or optimise a model: general parameters: high level settings such as the type of boosting model learning parameters: these are parameters that control the boosting process (model hyperparameters) learning task parameters: define the optimisation objective and the metric on which the validation performance is measured [8]: learning_params = {} booster_params = {} general_params = {} params = {} This is a binary classification problem, optimised with binary cross-entropy as an objective, defined as: where \(y_i\) is the true label for the \(i\)th observation and \(\hat{y}_i\) is the decision score (logit) (1) of the positive class (whose members’ income exceeds $50, 000). Setting the objective to binary:logitraw means the call to the predict method will output \(\hat{y}_i\). [ ]: learning_params.update({ 'objective':'binary:logitraw', 'seed': 42, 'eval_metric': ['auc', 'logloss'] # metrics computed for specified dataset }) The AUC will be used as a target for early stopping during hyperparameter optimisation . Using this metric as opposed to, e.g., accuracy helps deal with the imbalanced data since this metric balances the true positive rate and the false positive rate. However, it should be noted that AUC is an aggregate performance measure since it is derived by matching predicted labels with ground truths across models with different output thresholds. In practice, however, only one such model is selected. Thus, a higher AUC just reflects that on average, the ensemble performs better. However, whether the classifier selected according to this metric is optimal depends on the threshold chosen for converting the predicted probabilities to class labels. Additionally, the weights of the positive are scaled to reflect the class imbalance. A common setting is to scale the positive class by the ratio of the negative to positive examples (approximately 3 for this dataset). Since this is a heuristic approach, this parameter will be cross-validated. The first parameters optimised are: max_depth: the maximum depth of any tree added to the ensemble. Deeper trees are more accurate (on training data) since they represent more specific rules min_child_weight: child nodes are required to have a total weight above this threshold for a split to occur. For a node \(L\), this weight is computed according to where the summation of Hessians is over all examples \(i\) split at the node, and the subscript \(t-1\) indicates that the derivative is with respect to the output evaluated at the previous round in the boosting process. In this example, the weight \(w_i\) depends on the class and is controlled through the scale_pos_weight argument. The second derivative above is given by whose variation is depicted in Figure 1. /> Figure 1 shows that when the classifier assigns a high positive or a low negative score, the contribution of data point \(i\) to the child weight is very small. Therefore, setting a very small value for min_child_weight parameter can result in overfitting since the splitting process will make splits in order to ensure the instances in a leaf are correctly classified at the expense of finding more parsimonious rules that generalise well. scale_pos_weight: a scaling factor applied to the positive class to deal with class imbalance gamma: is a parameter that controls the minimum gain that has to be attained in order for a split to be made To understand gamma, recall that the gain of making a particular split is defined as function of the structure scores of the left (L) and right (R) child nodes and the structure score of the parent as where \(\lambda\) is a regularisation hyperparameter shrinking the model output, \(H_L\) is defined above and \(G_L\) is given by and \(i\) sums over the points that flow through the node \(L\). Note that these structure scores represent minimisers of the objective (which is simply a quadratic in the leaf value). To make a split, the gain should exceed \(\gamma\). The learning rate ( eta) is fixed. This parameter is the fraction of the output score a member of the ensemble contributes to the decision score. Lower values yield larger ensembles. [1]: def tune_params(dtrain, base_params, param_dict, maximise=True, prev_optimum=None, **kwargs): """ Given a training set `dtrain`, a dictionary of parameters to be optimised `param_dict` and all the other learning and booster parameters (`base_param`), this function runs an exhaustive grid search over the tuning parameters. NB: Specifying `prev_optimum` allows one to tune parameters in stages. maximise should indicate if the evaluation metric should be maximised during CV. """ def _statistic(maximise, argument=False): if maximise: if argument: return np.argmax return np.max if argument: return np.argmin return np.min def _compare(optimum, val, maximise=True): eps=1e-4 if maximise: if val > optimum + eps: return True return False if val < optimum - eps: return True return False statistic = partial(_statistic, maximise) compare = partial(_compare, maximise=maximise) metrics = kwargs.get("metrics") if isinstance(metrics, list): opt_metric = metrics[-1] else: opt_metric = metrics print(f"CV with params: {list(param_dict.keys())}") print(f"Tracked metrics: {metrics}") print(f"Cross-validating on: {opt_metric}") if prev_optimum: optimum = prev_optimum else: optimum = -float("Inf") if maximise else float("Inf") params = deepcopy(base_params) pars, pars_val = list(param_dict.keys()), list(param_dict.values()) combinations = list(product(*pars_val)) best_combination = {} # run grid search for combination in tqdm(combinations): for p_name, p_val in zip(pars, combination): params[p_name] = p_val cv_results = xgb.cv( params, dtrain, **kwargs, ) mean = statistic()(cv_results[f'test-{opt_metric}-mean']) boost_rounds = statistic(argument=True)(cv_results[f'test-{opt_metric}-mean']) improved = compare(optimum, mean) if improved: optimum = mean for name, val in zip(pars, combination): best_combination[name]=val print(f"{opt_metric} mean value: {mean} at {boost_rounds} rounds") msg = 'Best params:' + '\n{}: {}'*len(pars) print(msg.format(*list(chain(*best_combination.items())))) return optimum, best_combination, boost_rounds [18]: booster_params.update({'eta': 0.1}) tuning_params={ 'scale_pos_weight': [2, 3, 4, 5], 'min_child_weight': [0.1, 0.5, 1.0, 2.0, 5.0], 'max_depth': [3, 4, 5], 'gamma': [0.01, 0.05, 0.08, 0.1, 0.2] } All parameters apart from the ones tuned are included in params. The cross-validation process is controlled through cv_opts. [19]: params.update(general_params) params.update(learning_params) params.update(booster_params) cv_opts = { 'num_boost_round': 1000, 'nfold': 5, 'stratified': True, 'metrics': ['logloss', 'aucpr', 'auc'], # can alternatively perform early stopping on log-loss or aucpr 'early_stopping_rounds': 20, 'seed': 42, } Optimise scale_pos_weight, min_child_weight, max_depth and gamma. Note that this section is long running since it conducts an extensive grid search. [ ]: optimum, best_params, boost_rounds = tune_params(dtrain, params, tuning_params, maximise=True, **cv_opts ) if best_params: params.update(best_params) params.update({'boost_rounds': boost_rounds}) [ ]: params Further optimisation is possible by adjusting the following parameters: subsample: this is the ratio of the total training examples that will be used for training during each boosting round colsamplebytree: this is the ratio of the features used to fit an ensemble member during a boosting round Training on uniformly chosen data subsamples with uniformly chosen subsets of features promotes noise robustness. [ ]: tuning_params = { 'subsample': [0.6, 0.7, 0.8, 0.9, 1.0], 'colsamplebytree': [0.6, 0.7, 0.8, 0.9, 1.0] } optimum, best_params, boost_rounds = tune_params(dtrain, params, tuning_params, maximise=True, prev_optimum=optimum, **cv_opts ) None of the stated configuration resulted in an improvement of the AUC, which could be a consequence of the fact that: the parameters selected in the previous round provide strong model regularisation; in particular, the maximum tree depth for any ensemble member is 3, which means only a subset of features are used anyway to perform the splits in any given tree. Further subsampling may thus not be effective since the subsampling is already implicit in the chosen tree structure the AUC is insensitive to small model changes since it measures how the proportion of false positives changes as the number of false negatives changes across a range of models. The confidence of the models does not feature in this measure (since a highly confident classifier and one that predicts probabilities near the decision threshold will have identical AUC) [ ]: if best_params: params.update(best_params) params.update({'boost_rounds': boost_rounds}) [ ]: params To prevent overfitting, a regulariser \(\Omega(f_t)\) with the form is added to the objective function at every boosting round \(t\). Above \(T\) is the total number of leaves and \(s_{j,t}\) is the score of the \(j\)th leaf at round \(t\). For the binary logistic objective, a higher \(\lambda\) penalises confident predictions (shrinks the scores). By default \(\lambda = 1\). Since subsampling data and features did not improve the performance, we explore with relaxing regularisation in order to adjust the model regularisation. [ ]: tuning_params = { 'lambda': [0.01, 0.1, 0.5, 0.9, 0.95, 1, 2, 5, 10] } optimum, best_params, boost_rounds = tune_params(dtrain, params, tuning_params, maximise=True, prev_optimum=optimum, **cv_opts) if best_params: params.update(best_params) params.update({'boost_rounds': boost_rounds}) Model training¶ The model will now be trained with the following parameters (skip the param update if you ran the optimisation section): [27]: learning_params = { 'objective':'binary:logitraw', 'seed': 42, 'eval_metric': ['auc', 'logloss'] # metrics computed for specified dataset } params = { 'scale_pos_weight': 2, 'min_child_weight': 0.1, 'max_depth': 3, 'gamma': 0.01, 'boost_rounds': 541, } params.update(learning_params) [ ]: if 'boost_rounds' in params: boost_rounds = params.pop('boost_rounds') model = xgb.train( params, dtrain, num_boost_round=boost_rounds, evals=[(dtrain, "Train"), (dtest, "Test")], ) [44]: model.save_model('adult_xgb.mdl') Model assessment¶ The confusion matrix is used to quantify the model performance below. [ ]: [40]: y_pred_train = predict(model, dtrain) y_pred_test = predict(model, dtest) plot_conf_matrix(y_test, y_pred_test, target_names) print(f'Train accuracy: {round(100*accuracy_score(y_train, y_pred_train), 4)} %.') print(f'Test accuracy: {round(100*accuracy_score(y_test, y_pred_test), 4)}%.') Train accuracy: 87.75 %. Test accuracy: 86.6797%. Footnotes¶ (1): One can derive the stated formula by noting that the probability of the positive class is \(p_i = 1/( 1 + \exp^{-\hat{y}_i})\) and taking its logarithm.
https://docs.seldon.io/projects/alibi/en/latest/examples/xgboost_model_fitting_adult.html
CC-MAIN-2020-40
en
refinedweb
# __. # Quick Install The following instructions were tested on Ubuntu 14.04. The modules used for parser development are located under the 'nomadcore' package. You can setup this package in any way you like, but a simple installation script 'setup.py' is provided for ease of use. It will install the nomadcore package along with the needed dependencies. You can install the package in development mode by calling the terminal command ```sh python setup.py develop --user ``` in the folder where setup.py is located.'. # Manual install This package depends on other python libraries which are declared in 'requirements.txt'. The requirements can be installed simply by calling the terminal command ```sh pip install -r requirements.txt ``` in the folder where the file is located. In order to use the nomadcore package you have to add the directory to PYTHONPATH so that python knows where to look for it. This can be achieved temporarily by using a script like this ```python import sys dir = os.path.normpath("path/to/python-common/python")) if not commonDir in sys.path: sys.path.insert(0, commonDir) ``` or the addition can be made permanent by adding the line ```sh export PYTHONPATH="${PYTHONPATH}:/path/to/python-common/python" to your ~/.bashrc file.
https://gitlab.mpcdf.mpg.de/nomad-lab/python-common/-/blame/3f243d579e12c5effd402ef25699cc550a14ff59/README.md
CC-MAIN-2020-40
en
refinedweb
Bug #4681 namespace declaration not honored: error XTDE1390 undeclared prefix: saxon 0% Description When I run the file <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet xmlns: <xsl:output <xsl:template <xsl:text xmlns:Run with {system-property('xsl:product-name')} {system-property('xsl:product-version')} {system-property('saxon:platform')}</xsl:text> </xsl:template> </xsl:stylesheet> with Saxon-JS 2 under Node (using xslt3 -it -xsl) I get an error: Transformation failure: Error XTDE1390 at ns-problem2.xsl#10 Undeclared prefix: saxon Transformation failed: Error XTDE1390 at ns-problem2.xsl#10 Undeclared prefix: saxon The namespace is declared on the xsl:text element so I think this is a bug. Running the stylesheet through Saxon HE 10.1 Java does not give any error. Moving the namespace declaration to the stylesheet's root element enables Saxon-JS to run the code but I think it should work with the local namespace declaration as well. History #1 Updated by Debbie Lockett 3 days ago - Assignee set to Michael Kay #2 Updated by Michael Kay 2 days ago - Status changed from New to In Progress Added XSLT3 test case system-property-025, which adapts the repro by binding a different prefix locally to the XSLT namespace. #3 Updated by Michael Kay 2 days ago Confirmed that the new test is failing under latest Saxon-JS build. #4 Updated by Michael Kay 2 days ago In the SEF file, the call on system-property is output with "ns": "= xml=~ fn=~ xsl=~ xs=~ ", indicating that the namespace declaration on the xsl:text element has been lost. #5 Updated by Michael Kay 2 days ago In the static phase (static.xsl#702) an xsl:text element is replaced by its content, which loses the namespace declarations. It also probably loses other attributes that might affect the interpretation of TVTs within the xsl:text element, for example @version or @default-collation. #6 Updated by Michael Kay 2 days ago First attempt to fix this by passing the xsl:text through to the next phase using xsl:next-match, and picking it up in the sef phase along with xsl:value-of at creating-new-nodes#279 was unsuccessful. (Trying to remember how to debug the compiler, haven't done it for a while...) #7 Updated by Michael Kay 1 day ago Now working (though I need to do some clearing up before committing, to get rid of temporary diagnostics, and to do some regression testing). The changes to the XX compiler are: In static.xsl, in the match="xsl:text"template, call xsl:next-matchso the xsl:textelement survives into the mode="sef"phase. In creating-new-nodes.xsl, in the match="xsl:text[normalize-space()]"template, apply-templates with mode attribute-sans-prefixto process attributes appearing on the xsl:textelement. #8 Updated by Michael Kay 1 day ago - Status changed from In Progress to Resolved - Priority changed from Low to Normal - Fix Committed on JS Branch Trunk added #9 Updated by Debbie Lockett 1 day ago - Status changed from Resolved to In Progress Reopening because I'm seeing regression. E.g. xslt30 axes test set. Reverting the change in static.xsl for the match="xsl:text" template, means the axes tests pass again, but t=system-property-025 fails. Please register to edit this issue Also available in: Atom PDF Tracking page
https://saxonica.plan.io/issues/4681
CC-MAIN-2020-40
en
refinedweb
Editor's note: The following post was written by Visual Studio and Development Technologies MVP Houssem Dellai as part of our Technical Tuesday series.. Before we start, here are some considerations: - The backend will be in ASP.NET and Identity, using the “Individual Accounts” template. - The client application will be in Xamarin Forms which will generate iOS, Android and Windows UWP apps. - We’ll use the Http Client nuget library to create Http requests. - To test the web services, we’ll use Postman. You can use other tools such as Fiddler or curl Creating the backend Visual Studio has a project template with the minimum code required for registering and signing users. We’ll use that template to create a new project: File -> New -> Project -> Web -> ASP.NET Web Application (.NET Framework). Give a name to the project, and select Web API. Then click on Change Authentication and choose Individual User Accounts. Then click OK to create the project. Note: We selected .NET Framework instead of .NET Core because right now there’s no template with Individual User Accounts in ASP.NET Core Web API project. That template will be released with ASP.NET Core 2.1. This tutorial should stay applicable as both ASP.NET Core and .NET Framework default templates relies on Identity. At this point, Visual Studio generated an ASP.NET project with all the code required to run authentication. The ApplicationUser inherits properties like UserName, Email, PhoneNumber. You can also add other properties (i.e BirthDate or CreatedAt). Creating the client app To add a Xamarin Forms project to the solution, right click the solution and go through the following steps: Add -> New project -> Cross Platform -> Cross Platform App (Xamarin). Give your app a name, then choose Portable Class Library (PCL) and click OK. This will add 4 projects to the solution: iOS, Android and Windows UWP projects where we can add platform specific code, and a PCL project where we can put the shared code. Here we’ll use the PCL project with the HTTP Client Library NuGet package. Testing and implementing Signup The AccountController exposes a web service for registering users. It takes a parameter which has the user’s email and password. // POST api/Account/Register [AllowAnonymous] [Route("Register")] public async Task<IHttpActionResult> Register(RegisterBindingModel model) { // …code removed for brievety var user = new ApplicationUser() { UserName = model.Email, Email = model.Email }; IdentityResult result = await UserManager.CreateAsync(user, model.Password); // …code removed for brievety } To test this web service using Postman, we can create an HTTP request with the following parameters: Address: Verb : POST Content : json object ({“Email”: …}) Content-Type : application/json To be able to register users from Xamarin app, we’ll invoke the Register web service. For that, we add a new class; we typically call this ApiServices. We then add the following code, which will create the HTTP request using the HttpClient object. public async Task RegisterUserAsync( string email, string password, string confirmPassword) { var model = new RegisterBindingModel { Email = email, Password = password, ConfirmPassword = confirmPassword }; var json = JsonConvert.SerializeObject(model); HttpContent httpContent = new StringContent(json); httpContent.Headers.ContentType = new MediaTypeHeaderValue("application/json"); var client = new HttpClient(); var response = await client.PostAsync( "", httpContent); } Note: To use the HttpClient object, install the HTTP Client Library nuget package. To do this, right click the PCL project -> Manage Nuget Packages -> Browse, search for it and click Install. To use the JsonConvert object, install the Newtonsoft.json nuget package. To use the RegisterBindingModel, copy it from the ASP.NET project to the PCL project. Testing and implementing Signin Unlike Register, Signin web service is not shown in AccountController. Instead, its endpoint and configuration are specified in Startup.Auth.cs class: TokenEndpointPath = new PathString("/Token"), We invoke the Signin web service to get an access token by creating an HTTP request: Address: Verb : POST Content : key values for UserName, Password and grant_type. Content-Type : x-www-form-urlencoded We should have the following request in Postman, which we get as a result a JWT object containing the access token with expiration date and type. Note: The token here expires in 14 days, that value could be changed from within the Startup.Auth.cs file: AccessTokenExpireTimeSpan = TimeSpan.FromDays(14); grant_type value should be “password”, each OAuth implementation could change these key value pairs. The equivalent to this call in Xamarin is the following Login method. Note how we are extracting the access_token value from the JWT object: public async Task LoginAsync(string userName, string password) { var keyValues = new List<KeyValuePair<string, string>> { new KeyValuePair<string, string>("username", userName), new KeyValuePair<string, string>("password", password), new KeyValuePair<string, string>("grant_type", "password") }; var request = new HttpRequestMessage(HttpMethod.Post, ""); request.Content = new FormUrlEncodedContent(keyValues); var client = new HttpClient(); var response = await client.SendAsync(request); var content = await response.Content.ReadAsStringAsync(); JObject jwtDynamic = JsonConvert.DeserializeObject<dynamic>(content); var accessToken = jwtDynamic.Value<string>("access_token"); } Testing and Invoking protected web service The generated web project already exposes a protected web service (Get method). ASP.NET Identity uses the [Authorize] attribute to protect web services. This means any HTTP request to invoke Get should contain the access_token. Otherwise, the server will return 401 unauthorized. Authorize will verify the validity of the token then pass the request to the Get method. [Authorize] public class ValuesController : ApiController { // GET api/values public IEnumerable<string> Get() { return new string[] {"value1", "value2"}; } } Now that we have a valid access_token, we can invoke the protected web services by sending it in the Authorisation Header of the HTTP request. Note: We added the Bearer keyword to the Authorisation value to indicate the type of the access_token. That value was specified in the JWT object as token_type. The invocation from the code is as follows: public async Task GetValuesAsync(string access_token) { var client = new HttpClient(); client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", access_token); var json = await client.GetStringAsync( ""); var values = JsonConvert.DeserializeObject<List<string>>(json); } Conclusion The access_token can be used multiple times, and even if the client app relaunches. This means we need to save the access_token to reuse it during the 14 days default validity period. As it is a simple string, we don’t need a database or file. Instead, we use application settings through Xam.Plugins.Settings nuget plugin. This allow us to save data as key value pairs. Reading and writing is as simple as manipulating an object attribute Settings.AccessToken. For more information, I also have the project available as a GitHub repository here: And check out the following video tutorials: OAuth with ASP.NET Identity Implementing Signup to ASP.NET with Xamarin Forms Login from Xamarin Forms to ASP.NET Identity Houssem Dellai is a Tunisian Microsoft MVP who shares his professional experience as Xamarin Consultant through articles and videos on channel9. Follow him on Twitter @HoussemDellai.
https://blogs.msdn.microsoft.com/mvpawardprogram/2018/02/06/signin-signup-xamarin-forms/
CC-MAIN-2018-09
en
refinedweb
System and method for web services Java API-based invocationDownload PDF Info - Publication number - US20040064503A1US20040064503A1 US10606573 US60657303A US2004064503A1 US 20040064503 A1 US20040064503 A1 US 20040064503A1 US 10606573 US10606573 US 10606573 US 60657303 A US60657303 A US 60657303A US 2004064503 A1 US2004064503 A1 US 2004064503A1 - Authority - US - Grant status - Application - Patent type - - Prior art keywords - web - service - client - services - A runtime architecture for web services utilizes a container driver to accept an invoke request for web services, such as from a protocol adapter. The container driver performs any necessary data binding and unbinding required to process the invoke request and associated message context, utilizing an appropriate plugin component. An interceptor receives the context information from the container driver and modifies the message context for web service compatibility. An invocation handler receives the formatted context information from the container driver and passes parameters from the message context to the target of the request. The invocation handler processes values returned from the target and passes these values to the container driver. The container driver can then formulate a response to the invoke request, and return the response and message context to the client or protocol adapter. Description - [0001]This application is a continuation-in-part of U.S. Utility patent application Ser. No. 10/366,236, filed Feb. 13, 2003 entitled “WEB SERVICES RUNTIME ARCHITECTURE”, which claims the benefit of U.S. Provisional Patent Application Serial No. 60/359,098, filed Feb. 22, 2002 entitled “WEB SERVICES RUNTIME ARCHITECTURE”, both of which are incorporated herein by reference. This application also claims the benefit of U.S. Provisional Application No. 60/392,217, filed Jun. 27, 2002, incorporated herein by reference. - [0002]This application is related to U.S. Utility patent application Ser. No. 10/366,246, filed Feb. 13, 2003 entitled “WEB SERVICES PROGRAMMING AND DEPLOYMENT”, and U.S. Provisional Patent Application Serial No. 60/359,231, filed Feb. 22, 2002 entitled “WEB SERVICES PROGRAMMING AND DEPLOYMENT”, which applications are also incorporated herein by reference. - [0003]1. Field of the Invention - [0004]The present invention relates to implementing web services, and to a system for using an implementation such as JAX-RPC for invoking a web service from a Java client application. - [0005]2. Background - [0006]Web services are becoming an integral part of many application servers, with an importance that can rival HTTP or RMI stacks. Java standards for web services are being developed through the Java Community Process. Businesses are building important applications on top of web services infrastructures, such as is available in WebLogic Server from BEA Systems of San Jose, Calif. - [0007]Web services are a type of service that can be shared by and used as components of distributed Web-based applications. They commonly interface with existing back-end applications, such as customer relationship management systems, order-processing systems, and so on. - [0008]Traditionally, software application architecture tended to fall into two categories: huge monolithic systems running on mainframes or client-server applications running on desktops. Although these architectures work well for the purpose the applications were built to address, they are closed and can not be easily accessed by the diverse users of the Web. - [0009]Thus the software industry is evolving toward loosely coupled service-oriented applications that dynamically interact over the Web. The applications break down the larger software system into smaller modular components, or shared services. These services can reside on different computers and can be implemented by vastly different technologies, but they are packaged and transported using standard Web protocols, such as XML and HTTP, thus making them easily accessible by any user on the Web. - [0010]Presently however, there is no complete implementation of Web services upon which to build. - [0011]The present invention relates to implementing web services, and to a system for using an implementation such as JAX-RPC for invoking a web service from a Java client application. - [0012]The Java API for XML-based RPC (JAX-RPC) is a an increasingly common industry standard specification that defines a client API for invoking a web service and defines standards that allow software developers to create interoperable web services. - [0013]The following list briefly describes the core JAX-RPC interfaces and classes. - [0014]Service—Main client interface. Used for both static and dynamic invocations. - [0015]ServiceFactory—Factory class for creating Service instances. - [0016]Stub—Represents the client proxy for invoking the operations of a web service. Typically used for static invocation of a web service. - [0017]Call—Used to dynamically invoke a web service. - [0018]JAXRPCException—Exception thrown if an error occurs while invoking a web service. - [0019. - [0020][0020]FIG. 1 is a diagram of a system in accordance with one embodiment of the present invention. - [0021][0021]FIG. 2 is a diagram of a web service container that can be used with the system of FIG. 1. - [0022][0022]FIG. 3 is a diagram of a web service client that can be used with the system of FIG. 1. - [0023]A system and method in accordance with one embodiment of the present invention overcome deficiencies in existing web service implementations by providing a more stable, complete implementation that is suitable as an application integration platform. - [0024]A web services architecture can allow for communication over a number of transports/protocols. HTTP can continue to be a primary transport mechanism for existing applications, while transport mechanisms such as SMTP, FTP, JMS, and file system mailboxes can also be supported. - [0025]Message formats such as SOAP 1.1 and 1.2 with attachments can be used as primary message formats. It is also possible to accept web service requests that are XML-encoded and submitted via HTTP posts. A web services architecture can support plugging in other message formats and provide a mechanism for access to the raw message for user code. - [0026]A web services architecture can utilize a stack that supports a standards-based default binding of fundamental XML data types supported in various web service platforms. An API can be used to allow other binding mechanisms to be plugged in. The binding to be used can be specified on a per-operation basis. - [0027]Various security and messaging standards require a context or state to be associated with web service messages. A web services architecture can support the processing of multiple message contexts, such as those related to security or conversation ID, and can offer user code access to these contexts. Many of these contexts can be encoded into a SOAP header, although it is also possible to include the contexts in the underlying protocol layer (e.g., cookies in HTTP) or in the body of the message (e.g., digital signatures). A web services container can explicitly process contexts associated with planned security features for web services. - [0028]A web services stack can support a number of dispatch and synchrony models. Dispatch to both stateless and stateful components can be supported, using both RPC and messaging invocation semantics. Synchronous and asynchronous processing of requests can also be supported. In particular, it can be possible to enqueue an incoming message, to enqueue an outbound message, and to make asynchronous outbound calls. Enqueuing involves downloading files, one at a time, from a queue. - [0029]A component such as a session EJB can be used to implement application-hosted web services, such as for business logic. An API can be provided for sophisticated users to integrate other types of components, even custom components, into a web service mechanism. This may be used rarely, such as by developers who wish to build higher-level facilities and infrastructure on top of application-specific web services. - [0030]A web services architecture should not preclude the embedding of a web service container in a lightweight server running in a more restricted Java platform, such as J2ME/CDC. A very thin web service Java client, such as less than 100 k, can also be supported. Such an architecture allows support for a variety of web service standards, such as JAX-RPC. In addition, application-specific APIs can be defined for the implementation of web services. - [0031]A runtime web services architecture can support both synchronous and asynchronous (“one-way”) RPC style web services, such as may be backended by an EJB. Message-style web services can also be supported, which allow content to be submitted to and received from a JMS destination. Endpoints can be supported in a transport-specific way. These endpoints can associate a transport-specific address with a web service. For instance, in HTTP a particular address can be associated with a web service to be invoked. For SMTP an email address can be associated with a web service. - [0032][0032]FIG. 1 shows the relationship of a web container 108 and SMTP listener 104 and a host server or web service container 108, utilizing an architecture in accordance with one embodiment of the present invention. A protocol adapter for HTTP 102 is shown in a web container 100, which. - [0033][0033. - [0034]A message context is a representation of a web service invocation flowing through a container. A message context can contain a request message, which is the web service request. A message context can be rendered into the canonical form of SOAP plus attachments. A response message is the web services response, or at least a place holder for the response if the response has not been formulated yet. A response message can also be in the canonical form of SOAP plus attachments. Transport information can contain relevant information that is specific to the transport over which the request came, and over which the response must be sent. For example, the transport information can contain the HTTP request and response streams for HTTP transport. An invocation context can also be used, which is described below. - [0035]A protocol adapter can be inserted into the subsystem of a host server. A protocol adapter can be responsible for processing incoming requests for a particular transport/protocol, such as HTTP or SMTP. This allows the web service container to process web service messages in various formats that are sent over multiple protocols. It will also allow the web service container to reside in different kinds of servers. One condition for a protocol adapter is that the host server can support the protocol and that the message format can be converted into SOAP internally. There are no known important message formats that cannot be expressed via SOAP. - [0036]A protocol adapter can be responsible for identifying requests as web service messages, as well as routing the messages to a web services container. If the protocol being used supports synchronous responses, a protocol adapter can also receive the response data and return the data to the originator of the request. The protocol adapter can convert the message to the original message format if it is not SOAP plus attachments. A protocol adapter can deal with any message context that is required by the container, such as a conversation ID, and is transmitted at the protocol level, such as cookies in HTTP. The protocol adapter can propagate the message context to and from a web services container. - [0037]The actual implementation of protocol adapter functionality can depend on the architecture of the host server, as well as the way that the protocol is hosted in the server. For example, the functions of a protocol adapter can be implemented in part by the normal function of a web container for an HTTP protocol. Due to the dependency of protocol processing on server internals, there may not be many public protocol adapter APIs. - [0038]An invocation context can be used, which is an inheritable thread-local object that can store arbitrary context data used in processing a web service request. The context can be available from various components of the architecture involved in the processing of the request and response. Typical data that might be stored in such a context are a conversation ID, a message sequence number, and a security token. A particular invocation handler can choose to make the invocation context available to the target component. This can allow application code to read and write to the invocation context. - [0039]An architecture can utilize interceptors. Interceptors are plugins that can provide access to inbound and outbound web service messages. An interceptor API can be public, and an implementation of an interceptor API can be part of a web service application. An interceptor can modify SOAP messages as required. An interceptor can also read and write information on the invocation context. Interceptors can be associated with either operation, or with the namespace of the message body. - [0040]There are different types of interceptors. Header handlers can be used, for example, which operate only on message headers. Header handlers must declare which message headers they require so that the header information can be exposed, such as in WSDL generated for the web service. Flow handlers, on the other hand, can operate on full message content. Flow handlers do not require a declaration of which message parts are processed, and do not result in the existence of any additional information in the generated WSDL. Application developers may use header handlers primarily, while business units that are building infrastructure on top of an application server may choose to use flow handlers. Both APIs, however, can be public. - [0041]XML serialization and deserialization plugins can be supported, which can handle the conversion of method parameters from XML to Java objects and return values from Java to XML. Built-in mappings for the SOAP encoding data types can be included with an application server. The processing of literal XML data that is sent outside any encoding can also be supported, as well as Apache “Literal XML” encoding. Users can also implement their own custom data type mappings and plug those mappings in to handle custom data types. - [0042]A container driver can be used with a web services architecture in accordance with one embodiment of the present invention. A container driver can be thought of as the conceptual driver of a web service container. A container driver can implement the process flow involved in performing a web service request. - [0043]For RPC web services hosted on an application server, the default target of a web service invocation can be an EJB instance. For message-style web services, the default target can be a JMS destination. In certain cases, it may be desirable to allow other components or subsystems as targets. People can build middleware infrastructure on top of application servers to require this functionality. Therefore, an invocation handler API can be supported to allow the web service requests to be targeted at different components besides EJBs. - [0044]An invocation handler can insulate the web service container from details of the target object lifecycle, transaction management, and security policies. The implementer of an invocation handler can be responsible for a number of tasks. These tasks can include: identifying a target object, performing any security checks, performing the invocation, collecting the response, and returning the response to the container driver. The implementer can also be responsible for propagating any contextual state, such as a conversation ID or security role, as may be needed by a target component. - [0045]A protocol adapter can perform the following steps in one embodiment. The protocol adapter can identify the invocation handler of the target component deployment, such as a stateless EJB adapter. The protocol adapter can identify any additional configuration information needed by the invocation handler to resolve the service, such as the JNDI name of a deployed EJB home. This information can be in the deployment descriptor of the protocol adapter deployment, such as a service JNDI name, and/or the information could be in the headers or body of the request or in the protocol. - [0046]A protocol adapter can identify the style of a web service request, such as one-way RPC, synchronous RPC, or messaging. If necessary, a protocol adapter can convert an incoming request message into the SOAP with attachments canonical form. A protocol adapter can create a message context containing the request, a place holder for a response, information about the transport, and information about the target invocation handler. A protocol adapter can also dispatch message context configuration to the web service container. - [0047]A container driver can manage the flow of processing in the container. The container driver can receive the message context from the protocol adapter and, in one embodiment, sequentially performs the following steps. The container driver can dispatch to registered inbound interceptors, extract operation parameters, and perform data binding. The container driver can submit the operation request to the appropriate invocation handler, which can delegate the invoke to a target object. The container driver can receive a response from the invocation handler, possibly including a return value. If there is a return value, the container driver can perform data unbinding. If the synchrony model is request-response, the container driver can formulate a SOAP response. The response can be dispatched to registered outbound interceptors and returned to the protocol adapter for return to the caller. - [0048]The protocol adapter can return the SOAP response to the caller, converting the response back to the original message format if it was not SOAP. The protocol adapter, interceptors, and invocation handler can each have access to the invocation context object. Any necessary state needed during request processing can be propagated through this context. The invocation handler can also provide access to the context, such as to the component to which the invocation handler delegates. - [0049]An invocation handler that has been targeted to process an invoke can receive the following data from the container: the operation name, an array of Java Object parameters, any invocation handler configuration data, and the invocation context. The invocation handler can perform the invocation and return an array of Java Object return values. - [0050]An invocation handler can perform the following steps for one method in accordance with the present invention. A target object can be identified for the invocation. The invocation can be performed by passing the parameters to the target. The invocation context object can be provided to the target. Also, a transaction, security, or component-specific context can be passed to the target object. Any return value(s) from the target can be processed and returned to the container driver. - [0051]An API can be used for invocation handlers. Invocation handlers can be configured when the protocol adapter is deployed. For example, the HTTP protocol handler can be a web application. - [0052]Many types of built-in invocation handlers can be used. One such invocation handler is an EJB invocation handler. EJB invocation handlers can require a service identity, such as the JNDI name of the EJB home, and possibly a conversation ID, which can be extracted from a cookie, in the case of stateful EJB targets. The body of the request can indicate the operation name that will be mapped to the proper method call on the EJB. - [0053]A stateless EJB invocation handler can be used to dispatch web service invokes to an EJB. This handler can require the JNDI name of the stateless EJB home. The handler can obtain an instance of the EJB and can dispatch the invoke and return the return value, if there is one. - [0054]A stateful session EJB invocation handler can be used to dispatch invokes to a stateful session bean. The handler can require the JNDI name of the stateful EJB home, as well as a conversation ID, which can be extracted from the message. The default approach for HTTP can be to extract the conversation ID from a cookie in the HTTP protocol handler and to put it in the invocation context under a documented name. If this default behavior is not suitable, the developer can provide a header handler that extracts the conversation ID from message headers and places the ID in the invocation context. - [0055]A stateful session bean (SFSB) invocation handler can maintain a table of mappings between a conversation ID and EJB handles. If no conversation ID is found, the stateful EJB invocation handler can create a new conversation ID, a new session bean instance, and can add its handle to the mapping table. The invoke can then be dispatched to the SFSB referenced by the handle. - [0056]A JMS invocation handler can dispatch the body of a SOAP message to a JMS destination. The handler can require the JNDI name of the destination, the JNDI name of the connection factory, and the destination type. - [0057]The configuration of protocol handlers can involve specifying the mapping between web service endpoint URIs, such as URLs in the case of HTTP or email addresses in the case of SMTP, and the name of an invocation handler. A particular invocation handler can require additional configuration information, such as the JNDI-name of a target EJB deployment. - [0058]An HTTP protocol handler can be a special web application that is automatically deployed when a web archive file (WAR) is deployed. The URL mappings to invocation handlers can be extracted from the WSP (“web service provider”) description of the web service. An HTTP protocol handler can map HTTP headers to the invocation context and can attempt to extract a conversation ID from an HTTP cookie, if one is present. An SMTP Protocol Handler can also be utilized. - [0059]An HTTP-based web service can be packaged in and deployed from a J2EE WAR that is contained inside a J2EE Enterprise Archive File (EAR). The WAR can contain a web service WSP document, which defines a web service. The WSP can describe the shape of the web service and how the implementation maps to backend components. A WSP can be referenced in the URL of a web service, like a JSP. It can also allow reference to user-defined tags, like a JSP which can integrate user-developed functions into the definition of the web service. It can also support the scripting of web service functions. Unlike a JSP, however, a WSP may not compile down to a servlet. The WSP can be directly utilized by the web service runtime. - [0060]The remaining contents of the EAR can include EJB-JARs or other classes that are part of the implementation of the web service. - [0061]A web container can manage the deployment of HTTP WSPs in a similar manner to JSPs. There can be a default WSP servlet registered with each web application that intercepts requests for WSPs. The default servlet can then redirect each request to the appropriate WSP handler. - [0062]A user can open a web application in a console, or in a console view, and can view the names of the WSPs that are part of that web application. It can be necessary to modify an MBean, such as WebAppComponentMBean, on order to provide a list of WSPs. - [0063]Java-based web services client distributions can be used with services hosted on many different platforms. A full set of features supported on a client can include: - [0064]HTTP protocol with cookie support - [0065]SOAP 1.2 with attachments - [0066]JAX-RPC client API, including synchronous and “one-way” RPC invokes - [0067]Header Handler and Flow Handler API - [0068]Message-style web service client API - [0069]Support for “dynamic mode” (no Java interfaces or WSDL required) - [0070]Support for “static mode” (Java interfaces and service stubs required) - [0071]The full set of SOAP encodings+Literal XML+support for custom encodings - [0072]Security support; including - [0073]128-bit two-way SSL; - [0074]Digital Signatures; and, - [0075]XML Data Encryption - [0076]There is an inherent tradeoff between the “thinness” of a client and the richness of features that can be supported. To accommodate customers with differing needs regarding features and footprint, several different client runtime distributions can be offered with varying levels of features. - [0077]A J2SE Web Service client, which can have a footprint of around 1 MB, can be full-featured. SSL and JCE security functions can be included in separate jars. This client can run in regular VM environments, such as those hosting application servers. A J2ME/CDC thin client can have a limited set of features, but can be designed to run in a J2ME CDC profile on devices. A JDK 1.1 thin client can have a limited set of features, but can be intended to run in JDK 1.1 virtual machines, including those hosting applets. - [0078]Client distributions can include classes needed to invoke web services in “dynamic” mode. Utilities can be provided to generate static stubs and Java interfaces, if given WSDL service descriptions. - [0079]A Java™ 2 Platform, Standard Edition (J2SE) web service client can be a standard, full-featured client, which can be intended to run inside an application server. The client can be included in a regular server distribution, and can also be available in a separate jar so that it may be included in other J2EE or “fat client” JVMs. There may be no size restriction on this client. The client can utilize JDK 1.3. - [0080][0080. - [0081 102. - [0082. - [0083. - [0084]The Java API for XML based RPC (JAX-RPC) is a standard specification that defines the client API for invoking a web service. The following list briefly describes the core JAX-RPC interfaces and classes. - [0085]Service—Main client interface. Used for both static and dynamic invocations. - [0086]ServiceFactory—Factory class for creating Service instances. - [0087]Stub—Represents the client proxy for invoking the operations of a web service. Typically used for static invocation of a web service. - [0088]Call—Used to dynamically invoke a web service. - [0089]JAXRPCException—Exception thrown if an error occurs while invoking a web service. - [0090]In accordance with an embodiment of the present invention, the server (eg. WebLogic Server) includes an implementation of the JAX-RPC specification. - [0091]To create a Java client application that invokes a web service, the developer should follow these steps: - [0092]1. Download and install the webservices.jar file which contains the implementation of the JAX-RPC specification. - [0093]2. Obtain the Java client JAR files (provided by WebLogic Server) and add them to the developer's CLASSPATH. If the developer's client application is running on WebLogic Server, the developer can omit this step. - [0094]3. Write the Java client application code. - [0095]4. Compile and run the developer's Java client application. - [0096]An embodiment of WebLogic Server provides the following types of client JAR files: - [0097]A JAR file, named webserviceclient.jar, that contains the client runtime implementation of JAX-RPC. This JAR file is typically distributed as part of the WebLogic Server product. - [0098]A JAR file, named webserviceclient+ssl.jar, that contains an implementation of SSL. This JAR file is also typically distributed as part of the WebLogic Server product. - [0099. - [0100]To obtain the client JAR files, the developer should follow these steps: - [0101. - [0102. - [0103. - [0104: - [0105]The operations defined for this web service are listed under the corresponding <binding> element. For example, the following WSDL excerpt shows that the TraderService web service has two operations, “buy” and “sell” (for clarity, only relevant parts of the WSDL are shown): - [0106. - [0107: - [0108]A web service-specific implementation of the Service interface, which acts a stub factory. The stub factory class uses the value of the wsdl attribute of the clientgen Ant task used to generate the client JAR file in its default constructor. - [0109]An interface and implementation of each SOAP port in the WSDL. - [0110]Serialization class for non-built-in data types and their Java representations. - [0111]The following code shows an example of invoking the sample TraderService web service. In this example, TraderService is the stub factory, and TraderServicePort is the stub itself: - [0112]The main points to note about the example shown above are briefly described as follows. This code segment shows how to create a TraderServicePort stub: - [0113. - [0114]The following code shows how to invoke the buy operation of the TraderService web service: - [0115]The trader web service shown in the above example has two operations: buy( ) and sell( ). Both operations return a non-built-in data type called TradeResult. - [0116. - [0117. - [0118]For example, assume the developer wants to create a dynamic client application that uses WSDL to invoke the web service found at the following URL: - [0119][0119] - [0120]Dynamic clients that do not use WSDL are similar to those that use WSDL, except for having to explicitly set information that is found in the WSDL, such as the parameters to the operation, and the target endpoint address. - [0121]The following example shows how to invoke the same web service as in the previous example, but not specify the WSDL in the client application: - [0122]Web services can use out or in-out parameters as a way of returning multiple values. - [0123. - [0124]For example, the web service described by the following WSDL has an operation called echoStructAsSimpleTypes( ) that takes one standard input parameter and three out parameters: - [0125]The following static client application shows one way to invoke this web service. The application assumes that the developer has included the web service-specific client JAR file that contains the Stub classes, generated using the clientgen Ant task, in the developer's CLASSPATH. - [0126. - [0127]In addition, some implementations of the of WebLogic Server may not include an SSL-enabled client runtime JAR file for J2ME clients. - [0128]Creating a J2ME client application that invokes a web service is almost the same as creating a non-J2ME client. The developer should follow the steps described in “Invoking Web Services: Main Steps” above, but with the following changes: - [0129]When the developer runs the clientgen Ant task to generate the web service-specific client JAR file, they must specify the j2me=“True” attribute, as shown in the following example: - [0130]When the developer writes, compiles, and runs the developer Java client application, they must use the J2ME virtual machine and APIs. - [0131]Every web service deployed on WebLogic Server has a Home Page. From the Home page the developer can: - [0132]View the WSDL that describes the service; - [0133]Download the web service-specific client JAR file that contains the interfaces, classes, and stubs needed to invoke the web service from a client application; and, - [0134]Test each operation to ensure that it is working correctly. - [0135]As part of testing a web service, the developer can edit the XML in the SOAP request that describes non-built-in data types to debug interoperability issues, or view the SOAP request and response messages from a successful execution of an operation. - [0136]The following URLs show first how to invoke the web service Home page and then the WSDL in the developer's browser: - [0137]where in the above example the following terms take the meanings shown below: - [0138]. - [0139]“host” refers to the computer on which the server (eg. WebLogic Server) is running. - [0140]“port” refers to the port number on which the server (eg. WebLogic Server) is listening (default value is 7001). - [0141]. - [0142]. - [0143]For example, assume the developer used the following build.xml file to assemble a WebLogic web service using the servicegen Ant task: - [0144]The URL to invoke the web service Home Page, assuming the service is running on a host named “ariel” at the default port number, is: - [0145]The URL to get the automatically generated WSDL of the web service is: - [0146]If the developer encounters an error while trying to invoke a web service (either WebLogic or non-WebLogic), it is useful to view the SOAP request and response messages that are generated because they often point to the problem. - [0147]To view the SOAP request and response messages, one can run the client application with a -Dweblogic.webservice.verbose=true flag, as shown in the following example that runs a client application called runService: - [0148]The full SOAP request and response messages are then printed in the command window from which the developer runs the client application. - [0149. - [01. - [0151 oroptical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. - [0152 embodiments of the invention have been described herein with respect to a WebLogic environment, that embodiments and implementations may also be used with other application servers,) 1. A system for providing access to web services, comprising: a container driver that accepts invoke requests from a client for web services, an interceptor that receives context information for the invoke request from said container driver, and modifies the message context to be used with web services; and, an invocation handler that receives the modified context information from said container driver, passes parameters from the message context to the target of the request, processes values returned from the target, and passes the values to the container driver, such that the container driver can formulate a response to the invoke request. 2. The system of claim 1 wherein the client utilizes JAX-RPC to invoke the web services. 3. The system of claim 1 wherein said container driver is adapted to perform any data binding and unbinding required to process the invoke request. 4. The system of claim 1, further comprising a protocol adapter that intercepts web service invoke requests and passes the web service invoke requests to said container driver. 5. The system of claim 4, wherein said protocol adapter converts the format of an invoke request and create a message context containing the invoke request. 6. The system of claim 1, further comprising a plugin component to be used by said container driver to perform any data binding and unbinding. 7. The system of claim 1, further comprising an invocation context for storing arbitrary context data useful in processing the web request, said invocation context available to at least one of said interceptor and said invocation handler. 8. The system of claim 1, wherein said invocation handler manages security policies, transaction management, and target object life cycle for the request. 9. The system of claim 1, further comprising a web service container for hosting said container driver, said interceptor, and said invocation handler. 10. The system of claim 1, further comprising a target object to which said invocation handler can delegate processing the invoke request. 11. A method for providing access to web services, comprising:. 12. The method of claim 11 wherein the client utilizes JAX-RPC to invoke the web services. 13. The method of claim 11 wherein a container driver is used to perform any data binding and unbinding required to process the invoke request. 14. The method of claim 11, further comprising intercepting an invoke request from a web services client using a protocol adapter and generating message context for the invoke request to be sent to the container manager. 15. The method of claim 11, wherein said step of formatting message context comprises using an interceptor to format the message context. 16. The method of claim 11, wherein said step of binding the message context comprises using a codec selected from the group consisting of Java Binding codecs, SOAP codecs, XML codecs, and custom codecs. 17. The method of claim 11, further comprising storing arbitrary context data for use in processing the invoke request. 18. The method of claim 11, further comprising managing life cycle, transaction, and security information for the processing of the invoke request. 19. The method of claim 11, further comprising delegating the processing of the invoke request to a target object. 20. A computer readable medium, including instructions stored thereon which when executed by the computer cause the computer to perform the steps of:.
https://patents.google.com/patent/US20040064503
CC-MAIN-2018-09
en
refinedweb
In earlier blogs we have discussed the use of UI configuration tool and Easy enhancement workbench. Here we will discuss another way to enhance the UI 2007 screen by the use of Enhancement sets. Scenario 3: You want to show a standard field related to the opportunity but not part of the opportunity structure. e.g. you want to show the Industry Sector of the Prospect in the opportunity screen. Now you know the component and view details for the opportunity view. The following are the general steps for enhancing the view BT111H_OPPT/Details Step 1 Use transaction SM34 and Open the view cluster BSPWDVC_CMP_EXT. Create the enhancement set that you can later use in the BSP component workbench (Enter a name and a description) Step 2 Use transaction SM30. Open table BSPWDV_EHSET_ASG to make the necessary assignment. Step 3 Start transaction BSP_WD_CMPWB. Choose the component (e.g. BT111H_OPPT). Select Display. Now Click Enhance Component and choose the enhancement set ZTEST Select a package and choose save. Step 4 Select the view you want to enhance (e.g. BT111H_OPPT/Details) under Browser Component Structure –> Component –> Views Right Click on the view and select Enhance in the context menu. (The object is created automatically in the customer namespace.) Step 5 Go to Structures –> Context Node –> BTPARTNERPROSPECT –> Attributes Right click and select Create from the menu Step 6 Use the Wizard to add new attribute Step 7 Select Attribute Type Step 8 Enter Attribute Name and BOL Entity Step 9 To select the BOL attribute use the selection help. As you can see in the picture above for the BOL Entity BTPartner there exists a lot of BOL attributes divided under two nodes. Attribute and Relations. Expand the nodes to see the list of attributes available under each node. Finally select the attribute that you want to add to the opportunity screen. Step 10 View the details of the attribute and choose “complete” to add the attribute. Step 11 Now the required attribute is available to be used by UI configuration tool. Use the steps as mentioned in scenario 1 to add the attribute in the view (e.g. BT111H_OPPT/Details) (Refer CRM 2007 – Enhancing views – Part I for more details on UI Config) I have a problem with a view. The BSP is BT116IT_SRVO. The view is ServiceItem. I have made an enhancement of the component, an the view. I have added an attribute, the batch_id. This belongs to the context node BTADMINI-BTItemProductExt/BATCH_ID. The attribute is OK, first it doesn’t appear in the group of attributes of the context node. After I have activated the methods of the class the field batch_id appears in the attributes of the context node. But it doesn’t appear in the tab of configuration UI. Do you know why this happens?. How can i resolve this problem ?Any idea?. Thanks Enrique Estévez Greetings. After adding the attribute to the context node you have to use UI configuration tool as explained in the Part 1 of the blog. regards
https://blogs.sap.com/2009/05/20/crm-2007-enhancing-views-part-iii/
CC-MAIN-2018-09
en
refinedweb
Yet Another Kyoto Cabinet Binding Project Description yakc (Yet Another Kyoto Cabinet Python Binding) is a Python module to manipulate Kyoto Cabinet with Python dictionary manner. This module also can be used as drop-in replacement of Python dict if you want to store many data in dict. yakc focuses to replace Python dict to store huge amount of data. If you want to Kyoto Cabinet database as key/value store, I recommend to use the official Python binding. License Yet Another Kyoto Cabinet Python Binding Copyright (C) 2013 Yasunobu OKAMURA to Install From pypi pip install yakc From source code python setup.py build python setup.py install Basic use import yakc d = yakc.KyotoDB('test.kch') d['a'] = 123 d[98] = [12,3,4] for key, value in d.iteritems(): print key, value Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/yakc/0.1/
CC-MAIN-2018-09
en
refinedweb
UNIT 2: Recursion Table of Contents 1 Recursion Review 1.1 Recursion! Yes, it's back! You all learned about recursion in IC210, and you might have thought that it's pretty useless because you could do all the same things with iteration. And because recursion is so confusing. Well, Data Structures (and most especially, trees) is where recursion becomes really, really useful. When we have more sophisticated data structures and algorithms, we need recursion to write simple code that works and we can understand. The simple fact is this: with many, many data structures, recursion is WAY easier to cope with than iteration is. Embrace it, because it's about to make your life easier. Today, then, is an IC210-level reminder of what recursion is, and how it works Recursion is a function calling itself. So, if we understand how function calls work, we'll understand recursion a lot better. Imagine the following set of functions (in pseudocode): f1() { print "inside f1" } f2() { print "starting f2" f1() print "leaving f2" } main() { print "starting main" f2() print "leaving main" }, recursion. Let's consider the standard first-recursion-example of finding a factorial using recursion: int fac(int n) { if (n == 0) return 1; return n * fac(n-1); }, which is what fac(0),} \\ } \] 1.2. 2 Linked List traversal Let's define a Linked Linked List of int's as follows: public class LinkedList{ private class Node{ private int data; private Node next; //... remaining methods } private Node head,tail; //head and tail of the list //... remaining methods } Using that definition, a traversal of the list iteratively would look something like this: void traverse(){ Node n = head; while (n != null){ //... perform some action on n n = n.next; } } and here is a similar process, this time using recursion: void traverse(){ traverse(head); } void traverse(Node n) { if (n == null) //Empty next is a good base case return; //... perform some action on n traverse(n.next) } If we were to start the traversal at the head, like traverse(head) in the traverse() method, then the two code snippets would perform identical operations. Make sure you understand how this works. Soon, there will be data structures where the iterative version is pages long, and the recursive version is five lines. You want to know and understand this. As an example of a recursive traversal and action that we can take is a length() method. public int length(Node cur) { if (cur == null) return 0; return 1+length(cur.next); } Another example is to add to a sorted linked list. For example, if we have a linked list holding the integers 1, 2, and 4, after running addInOrder(3), we would have a linked list with the Integers 1, 2, 3, and 4. public void addInOrder(int element) { head = addInOrder(element, head); if (tail == null) //first item inserted tail = head; } //This is a helper method, so our user doesn't have to know or care //about nodes. private Node addInOrder(int element, Node n) { if (n==null) return new Node(element); if (n.data > element) { Node t = new Node(element); t.next = n; return t; } n.next = addInOrder( element, n.next); return n; } Make some examples to try the above on, and understand why it works! Your homework will go better. 3 Big-O with Recursion 3.1 Determining the Big-O of a Recursive Function We have some practice identifying the Big-O of iterative algorithms. However, now that we're (again) experts on recursion, it's important that we be able to identify the Big-O of recursive algorithms. We're not going to spend a huge amount of time on this; that's Algorithms' job (note for IT people: consider taking Algorithms. It's an Important Class). However, it is important that you can identify some of the most common patterns. Let's start with an example: linked-list traversal. void traverse(Node node) { if (node == null) //base case return; //... perform some action on n traverse(n.next) //recursive case } Obviously, we have two cases; a base case, and a recursive case. We're going to write a function \(T\) that describes the number of steps taken in each case as a function of the input. The input for this example will be length of the remaining list, if we were to assume that node were the head. The base case can then easily be defined as when \(n=0\), or the length of the list of nodes following the current node is 0, as would occur when node is null. The base case is easily defined then as: \[T(0) = 2\] This is because when \(n=0\), there is a constant amount of work to do (namely, do the comparison in the if-statement, and perform the return statement). Ultimately, this constant 2 won't really matter in the big-O, but when we're dealing with recursion it's safer not to do big-O until the end. Now what about the other case - if \(n > 0\)? \[T(n) = 2 + T(n-1)\] That is, it is two constant steps plus doing the work of performing the work on the next node in the list. This is because when \(n > 0\), we do some constant amount of work (if statement comparison, function call to traverse, etc), then run it all again on the remainder of the list. The two of these functions together are known as a recurrence relation. So the recurrence relation of this algorithm is: \[\begin{align*} T(0) &= 2 \\ T(n > 0) &= 2 + T(n-1) \end{align*} \] The same exact thing can also be written as \[ T(n) = \begin{cases} 2,& n=0 \\ 2 + T(n-1), & n > 0 \end{cases} \] 3.2 Solving Recurrence Relations Our five-step process for solving a recurrence relation is: - Write down the recurrence relation. - Expand the recurrence relation, for several lines. - Figure out what you would write for the \(i\)-th line. - Figure out what value of \(i\) would result in the base case. - In your equation from (3), replace \(i\) with the solution from (4). OK, so we've written it. Let's do step 2. \[\begin{align*} T(n) &= 2 + T(n-1)\\ &= 2 + \underbrace{2 + T(n-2))}_{T(n-1)}\\ &= 2 + 2 + \underbrace{2 + T(n-3)}_{T(n-2)}\\ \end{align*}\] And so on. But this is probably enough to recognize the pattern. Let's do step 3. Hopefully you agree that on the i-th line, we'd have: \[T(n) = 2i + T(n-i)\] So when does this stop? Well, that's where the other part of our recurrence relation comes in: our base case equation tells us that when \(n=i\), it's just \(2\). So, we figure out what value of \(i\) makes the \(T(n-i)\) equal to our base case of \(T(0)\). In this case, that happens when \(i=n\) because when \(i=n\) then \(n-i=0\). Finally, we do step five, and take our equation \(T(n) = 2i + T(n-i)\), and replace \(i\) with our solution of step 4 (\(i=n\)), to get \(T(n) = 2n + T(0)\). Because \(T(n-n) = T(0)=2\), the runtime of that equation is \[2n+2\] which is \(O(n)\). 3.3 A Simpler Example! Let's do one more, and this time, let's do the simple recursive sum function: int sum(int n){ if(n==0) return 0; else return n+sum(n-1); } In the base case, we can say one step occurs, since all we are doing is returning 0 and this occurs when \(n=0\). \[ T(0) = 1 \] The number of steps in the recursive case (when \(n>0\)) is one sum, one subtraction, and the function call — so three steps plus the number of steps in the recursive call. \[ T(n) = 3 + T(n-1) \] We can now write the recursive relation: \[ T(n) = \begin{cases} 1,& n=0 \\ 3 + T(n-1), & n > 0 \end{cases} \] Let's now identify the pattern:\begin{array} TT(n) &= 3 + T(n-1) \\ &= 3 + 3 + T(n-2) \\ &= 3 + 3 + 3 + T(n-3) \\ & \ldots \\ T(n) & =3i + T(n-i) \end{array} Substituting for \(i\) the value \(n\), we get \[ 3n + T(n-n) = 3n+T(0) = 3n+1 \] And clearly, \(3n+1\) is \(O(n)\) 3.4 A Harder Example! Before you saw an iterative version of binary search. Here's a slightly different recursive version, but the principle is the same. Let's now use recurance relations to identify the Big-O. int binarySearch(int[] A, int toFind) { return binarySearch(A, 0, A.length-1, toFind); } boolean binarySearch(int[] A, int left, int right, int toFind) { if(right-left <= 1){ if(toFind == A[right] && right > 0) return right; else if(toFind == A[left]) return left; else return -1; //not found } int mid = (left + right) / 2; if (toFind < A[mid]) return binarySearch(A, left, mid-1, toFind); else if (toFind > A[mid]) return binarySearch(A, mid+1, right); else return mid; //it must be equal, so we found it } Let's say that \(n\) is defined as the distances between left and right, so the base case is when \(n \le 1\). There are two recursive cases, though, but the worst case for both is the same number. So we if we count steps \[T(n) = \begin{cases} 7,& n = 1 \\ 6 + T\left(\tfrac{n}{2}\right),& n > 1 \end{cases}\] (*If you disagree or come up with different values for 6 and 7, remember that those constants do not actually matter at all for the big-O, as long as they're constant! If you were solving this for a HW or exam and put down 3 and 9 instead of 6 and 7, no problem!) So let's figure out the Big-O. Step 2 is to get the pattern for the \(i\)th step: \[\begin{align*} T(n) &= 6 + T\left(\frac{n}{2}\right) \\ &= 6 + \underbrace{6 + T\left(\frac{n}{4}\right)}_{T\left(\frac{n}{2}\right)} \\ &= 6 + 6 + \underbrace{6 + T\left(\frac{n}{6}\right)}_{T\left(\frac{n}{4}\right)} \\ &= 6i + T\left(\frac{n}{2^i}\right)\\ \end{align*}\] The next step is to determine how big \(i\) has to be to hit the base case: \[\begin{align*} \frac{n}{2^i} & \le 1 \\ 2^i &\ge n \\ i &\ge \log_2 n \end{align*}\] Finally, we substitute our discovered value of \(i\) into the formula. \[ T(n) = 6(\log_2 n) + T\left(\frac{n}{2^{\log_2 n}}\right) \\ \] Notice that \(2^{\log_2 n}\) is \(n\) by the definition of logorithms, so we can now solve to get the non-recursive total cost: \[\begin{align*} T(n) &= 6(\log_2 n) + T\left(\frac{n}{2^{\log_2 n}}\right) \\ &=6\lg n + T(n/n) \\ &= 6\lg n + T(1) \\ & = 6\lg n + 7 \end{align*}\] which of course is \(O(\log n)\) as we expected. (* From now on, I'm going to write \(\lg n\) instead of \(\log_2 n\), because I'm lazy. You should feel free to do the same.) Hopefully, before the last line, it was clear to you at a minimum that this was sublinear, that is, faster than linear time as we identified before. Credit: Ric Crabbe, Gavin Taylor and Dan Roche for the original versions of these notes.
https://www.usna.edu/Users/cs/aviv/classes/ic312/f16/units/02/unit.html
CC-MAIN-2018-09
en
refinedweb
Other AliasTcl_SetStdChannel SYNOPSIS #include <tcl.h> Tcl_Channel Tcl_GetStdChannel(type) Tcl_SetStdChannel(channel, type) ARGUMENTS - - int type (in) The identifier for the standard channel to retrieve or modify. Must be one of TCL_STDIN, TCL_STDOUT, or TCL_STDERR. - - Tcl_Channel channel (in) The channel to use as the new value for the specified standard channel. DESCRIPTION a non-NULL value for channel is passed to Tcl_SetStdChannel, then that same value should be passed to Tcl_RegisterChannel, like so: Tcl_RegisterChannel(NULL, channel);. See Tcl_StandardChannels for a general treatise about standard channels and the behaviour of the Tcl library with regard to them. KEYWORDSstandard channel, standard input, standard output, standard error
http://manpages.org/tcl_getstdchannel/3
CC-MAIN-2018-09
en
refinedweb
No unread comment. View All Comments No unread message. View All Messages No unread notification. View All Notifications * * Login using C# Corner In Focus How to Get Most Benefit from a Tech Conference C# Corner Contribute An Article A Blog A News A Video A Link An Interview Question Ask a Question TECHNOLOGIES .NET Career Advice How do I Networking SharePoint .NET Core Chapters HTML 5 Node.js SignalR .NET Standard Cloud Internet & Web Office Development Software Testing Agile Development Coding Best Practices Internet of Things OOP/OOD SQL Language Algorithms in C# Cognitive Services Ionic Operating Systems SQL Server Android Cortana Development iOS Philosophy Swift Angular Current Affairs Java and .NET PHP TypeScript Architecture Cyber Security JavaScript Power BI UWP Artificial Intelligence Databases & DBA JQuery Printing in C# Visual Studio ASP.NET Design Patterns & Practices JSON Progressive Web Apps Web Development ASP.NET Core DevOps kotlin Project Management Windows Controls Azure Dynamics CRM Leadership Python Windows Forms Big Data Enterprise Development LINQ Q# Windows PowerShell Blockchain Entity Framework Machine Learning R Workflow Foundation Bot Framework F# Microsoft 365 React WPF C# Google Development Microsoft Office Ruby on Rails Xamarin C# Corner Graphics Design Mobile Development Servers XML C, C++, MFC Hiring and Recruitment Multithreading Request a new Category | View All ANSWERS BLOGS VIDEOS INTERVIEWS BOOKS NEWS EVENTS CHAPTERS ANNUAL CONFERENCE Mathura Conference Xamarin DevCon Delhi Conference Startup Conference Hyderabad Conference Chandigarh Conference CAREER JOBS TRAINING MORE Consultants IDEAS LINKS STUDENTS STORIES Certifications Efficiently Storing Passwords in .NET Framework Afzaal Ahmad Zeeshan Jul 30 2015 Article 2 0 13.3k twitter google Plus Reddit WhatsApp Bookmark Print Other Artcile Expand Introduction and Background In this article, I will discuss a few things about security in user accounts and other lockers that you might be creating in your own applications. First of all, you need to understand that security is the most essential part of every application where a user is able to share his data. A user trusts an application if he is sure that your application is safe of any external hacks and unauthorized calls to get the data. Users must be provided enough grounds to put their faith in you. I am not just talking about the users with enterprise solutions, instead I am talking about every single application. If you develop an application that provides users with access to authentication services, such as logging in. Then, you must always provide security to users. First of foremost thing to secure is the credentials that they will provide you with. Let me simply one thing for you. An authentication system is not only the one that is created using a username , an email for example , and a , made up using alphanumeric characters . Authentication system, by definition is a system that authenticates the users. A system that tells and ensures that user indeed is the user that he/she is trying to be. For example, on C# Corner I am, "AuthorID: 201fc1". So when server wants to communicate with me. It would perform actions for my interactions using that ID. Which is then used, to show my name, Afzaal Ahmad Zeeshan, and my display picture to users for good readability purposes. On the backend it is the AuthorID that is being used (I am not sure what they call it in their database). Now think of a scenario where C# Corner has no security. Someone logs in to computer, and in the AuthorID enters my ID. C# Corner would look up for a record and work act respectively for what I am authorized to do . It may delete all of my articles, or at worse share my private information on the internet, such as passwords or other private resources stored with my account. Thank God and development team, there are security measures on C# Corner . Now, this was just a common example. There are some worse conditions and scenarios that might be raised by this security leak. That is why, it is always recommended to pay attention to security foremost! Securing users Securing your users is a very simple yet complex task. Many frameworks that are being used nowadays have these features already built in. ASP.NET and most specially, .NET framework have cryptography techniques already built in for you. You can use the namespaces ( discussed later in this article ) to secure your protected data in a way that no one can steal the data, even if they do, I repeat, even if they do, it would take a lot of time of theirs to decrypt the ciphers. Mostly, authentication systems are built using email address and password as the key/value pair for credentials. But, let me tell you one more thing. You can create your own login system using a security password and an answer . A sample example for a server-client communication that sets up authentication based on a question and an answer. Now, you can see that the above logic is not pretty much secure, because everyone knows (or at least ones who know me) that Eminem is my favorite rapper. Anyone can take that and authenticate themselves on the servers. For this, there is another technique known as, Hashing. I will show you what that is in a minute. First, understand the actual authentication logic. Authentication is actually just a function, feature or a program that asks the users a security question before they can continue using the service as who they are trying to say they are. For example, your username and password. Username, everyone knows. Password, only you know. Server asks you to tell them both, accurate, so that it can authenticate you on itself and allow you to work as the one you are trying to say you are. If the credentials are false, server won't authenticate you. Each user has to go through this process, no matter sharing their answers to security questions or providing their passwords. Above image has a demonstration of the authentication system in websites. You do not need to be using only username and password combination anymore, there are many more functions that can be implemented to the application, just to make sure that the user is actually himself whom he is trying to claim to be. :-) Securing the passwords Securing user accounts starts with the process of securing the passwords of users. A question can be shared with anyone, but the answer to that question should not be shared, even with the server. Many techniques have been introduced to store passwords and I would only talk about hashing the password. The technique of hashing is to hide the underlying message, known as password, so that no one can ever get an idea of what user might be using as the secure string for his account. Passwords are similar to keys for your locks at homes. There is no purpose to lock down your house if you are going to leave the key to it unsafe, exposed or unprotected. Similarly, if you create a login system, but do not secure the passwords for users then chances are that your users are already at stake of privacy issues. Cryptography began ever since computers have been introduced, programmers have created algorithms to secure your data in the computer, so that no potential user can ever steal it without your intent or consent. Privacy issues, threats etc are the worst ones that can make your application lose users, so easily as compared to an application with bad UI or bad UX techniques. Hashing the passwords Enough with introduction, you need to understand what hashing process is, how it helps the programmers and how does it maintain the security. Hashing a password is key to securing the passwords in your system. A hashing algorithm is designed in a way that it is almost impossible for anyone to convert the hashed string back to the actual password string. For example, (I am not sure of the actual algorithm used, but as a sample example) I ask you to select one number from 1-100. Selected? Next, I will use 1 number (50) and multiply your number by it. Let's assume, you selected 60. The result is 3000. Similarly, storing 3000 (and the number I chose) would provide me a solution to mask your data, 60. No hacker would be able to convert 3000 back to the actual password, unless they know either one of the inputs. Either 50, or 60 is required to run the algorithm back to get the result of 3000. Above example, was a very stupid and a childish one. In real, the answers span to a hundreds of characters and the process is executed 1000s time to make sure, string is no longer able to be converted back. For real hashing processes, a special key is used as a private number. This key is used to hash the string. The hash created is an alphanumeric string. It is almost impossible to convert it back, however I will talk about a few other side effects that cause the hashed passwords to be cracked. Read the article... Process of hashing a password; with salt. It is just a security measure that makes sure that your content of passwords, is not easily accessible even if your database is exposed to hackers! Adding Salt Um, ever had something to eat without masala ? Possibly, you can eat your food without it. But, the purpose of salting the food is to add extra taste to it, but is not required . Similarly, adding a salt string in password before processing it in hash algorithm is not required. But, it would give very fruitful results if done. Purpose of adding a salt to password. From above image it is clear as to why should one add a salt string to the password string before hashing it. The purpose is not to distinguish anything at all. Purpose of adding salt The main purpose to add a salt to the password before hashing is that there are quite a lot of procedures defined in hacking. Rainbow Table attack Brute Force attack Dictionary attack These attacks, try to attempt as many password strings on the server to guess the actual password as they can. Rainbow Table attack is a very common one. Rainbow attack, for example, try to enter a few most commonly used password strings and get their hashes so that server may let them in . Dictionary attack, brute force attacks are also most commonly used in this hacking technique. Adding a salt just makes it even harder for the hacker to create an algorithm to guess a password string for which the hash would be computed to a result that was stored for the user. For example, I enter password, "KingsNeverDieIsAGreatSongByEminem", and at the same time server generates another salt key, "DidYouKnow". So the password string now looks like, "DidYouKnowKingsNeverDieIsAGreatSongByEminem". The hash computed for this is way too much different for either one of them. Hacker, when attempting to create a hash for these, would not be able to create a same one. Also, it must be noted that salt must be a random string, so hacker or his tool cannot judge it. A few good techniques for salt are: It must be a new salt for every password. For every user, a new salt must be generated. A new salt must be generated every time you need to compute hash. Hash and salt can be stored in the database for later purposes. Salt must be random string of alphanumeric characters. Although appending or prepending salt doesn't matter, but by convention salt is prepended to the password. Following the above conditions, you can create an extra security layer over the password. In next chapter, we will learn how to hash passwords in .NET framework using C#. Remember: .NET framework and its derived frameworks, such as WPF, WinForms, ASP.NET etc all can use native cryptographic libraries of .NET framework. So, the code I would provide is applicable to ASP.NET web applications, WPF desktop apps, WinForms applications and other frameworks that run on .NET framework. Hashing a password in .NET Fun part of the article, in this section I will show you a code of a few lines only, that can be used to generate hashes. I would show you many kinds of procedures that can be used to mask the passwords; hash the passwords. .net framework cryptography passwords security TRENDING UP 01 Top 5 Trending Web Development Frameworks In 2018 02 SQL Coding Best Practices 03 SharePoint Online Automation - Email Notification Using PowerShell Script In SharePoint Online 04 Top Five Developer Trends Of 2018 05 Top Software Job Trends Of 2018 06 Integrating Charts With Angular 5 - Part One 07 Getting Started With Docker For Windows - Containerize a C# Console App 08 CRUD Operation With Angular 5 HTTP Client And ASP.NET Core Web API 09 Email Directly From C# .NET On Azure With No Mail Server 10 Top 10 New Features Of ASP.NET Core 2.0 View All
http://www.c-sharpcorner.com/UploadFile/201fc1/efficiently-storing-passwords-in-database/
CC-MAIN-2018-09
en
refinedweb
Introduction and Background Yesterday, I was talking to myself about writing another post for C# programming; I agree I am addicted to programming now. So, I wanted to write a blog post and I found that I had a topic on my “write list.” So that I wasn't just writing about the topic, I added a bit more programming to it to support another feature. So basically what I had on my writing list was an email client in C# programming language. In this blog post, I will share the basics for developing an email client in C# using Windows Presentation Foundation framework. The source code is much shorter and more compact so that building your own basic client takes no time at all. However, adding a few more features will will be used to build this one SMTP + IMAP application to develop a simple email client. Understanding the protocols Before I dig deeper and start to write the source code and explain the methods to build the application. I will try to explain the protocols that will in .NET framework. For more of such packages and namespaces to manage the networking capabilities of an application, read System.Net namespaces on MSDN. Instead, I will use a library to get myself started in no time. A very long time ago, I came upon a library, “ImapX,” which is a wonderful tool for implementing the IMAP protocol in C# applications. You can get ImapX from NuGet galleries by executing the following NuGet package manager command: Install-Package Imapx This will add the package. Remember, Imapx works in only selected environments or not all. You should read more about it, on the CodePlex website. The IMAP protocol is used to fetch the emails, to read, rather than downloading the entire mailbox and storing it on your own device. It is it is the time to continue the programming part and develop the application itself. First of all, create a new application of WPF framework. It is always a good approach to write the concerns and different sections of your applications. Our application will have the following modules: 1. Authentication module 2. Folder view 3. Message view 4. Create a new message. Separating these concerns will help us build the application in a much more agile way so that when we have to update or create a new feature in the application, doing so won’t take any longer. However, if you're hard-code everything in the same page then things get really very difficult. In this post, I will also give you a few tips to ensure that things are not made more difficult than they can be. Managing the “MainWindow”" Each WPF application will contain a MainWindow window, which is the default window that gets rendered on the screen. But, my recommendation is that you only create a Frame object in that window. Nothing else. That frame object will be used to navigate to multiple pages and different views of the applications depending on the user and application interactions. The event is used to notify when a new message comes. The object is defined as below: Read more articles on WPF: View All
http://www.c-sharpcorner.com/article/building-custom-email-client-in-wpf-using-C-Sharp/
CC-MAIN-2018-09
en
refinedweb
ContentHandler public interface ContentHandler Receive notification of the logical content of a document. This module, both source code and documentation, is in the Public Domain, and comes with NO WARRANTY. See for further information. This is the main interface that most SAX applications implement: if the application needs to be informed of basic parsing events, it implements this interface and registers an instance with the SAX parser using the setContentHandler method. The parser uses the instance to report basic document-related events like the start and end of elements and character data.. This interface is similar to the now-deprecated SAX 1.0 DocumentHandler interface, but it adds support for Namespaces and for reporting skipped entities (in non-validating XML processors). Implementors should note that there is also a ContentHandler class in the java.net package; that means that it's probably a bad idea to do import java.net.*; import org.xml.sax.*; In fact, "import ...*" is usually a sign of sloppy programming anyway, so the user should consider this a feature rather than a bug. See also: Summary Public methods characters. Individual characters may consist of more than one Java char value. There are two important cases where this happens, because characters can't be represented in just sixteen bits. In one case, characters are represented in a Surrogate Pair, using two special Unicode values. Such characters are in the so-called "Astral Planes", with a code point above U+FFFF. A second case involves composite characters, such as a base character combining with one or more accent characters. Your code should not assume that algorithms using char-at-a-time idioms will be working in character units; in some cases they will split characters. This is relevant wherever XML permits arbitrary characters, such as attribute values, processing instruction data, and comments as well as in data reported from this method. It's also generally relevant whenever Java code manipulates internationalized text; the issue isn't unique to XML. Note that some parsers will report whitespace in element content using the ignorableWhitespace method rather than this one (validating parsers must do so). endDocument void endDocument () Receive notification of the end of a document. There is an apparent contradiction between the documentation for this method and the documentation for fatalError(SAXParseException). Until this ambiguity is resolved in a future major release, clients should make no assumptions about whether endDocument() will or will not be invoked when the parser has reported a fatalError() or thrown an exception. The SAX parser will invoke this method only once, and it will be the last method invoked during the parse. The parser shall not invoke this method until it has either abandoned parsing (because of an unrecoverable error) or reached the end of input. See also: endElement void endElement (String uri, String localName, String qName) Receive notification of the end of an element. The SAX parser will invoke this method at the end of every element in the XML document; there will be a corresponding startElement event for every endElement event (even when the element is empty). For information on the names, see startElement. endPrefixMapping void endPrefixMapping (String prefix) End the scope of a prefix-URI mapping. See startPrefixMapping for details. These events will always occur immediately after the corresponding endElement event, but the order of endPrefixMapping events is not otherwise guaranteed. ignorableWhitespace void ignorableWhitespace (char[] ch, int start, int length) Receive notification of ignorable whitespace in element content. Validating Parsers must use this method to report each chunk of whitespace in element content . See also: processingInstruction void processingInstruction (String target, String data) Receive notification of a processing instruction. The Parser will invoke this method once for each processing instruction found: note that processing instructions may occur before or after the main document element. A SAX parser must never report an XML declaration (XML 1.0, section 2.8) or a text declaration (XML 1.0, section 4.3.1) using this method. Like characters(), processing instruction data may have characters that need more than one char value. setDocumentLocator Content SAX event callbacks after startDocument returns and before endDocument is called. The application should not attempt to use it at any other time. skippedEntity void skippedEntity (String name) Receive notification of a skipped entity. This is not called for entity references within markup constructs such as element start tags or markup declarations. (The XML recommendation requires reporting skipped external entities. SAX also reports internal entity expansion/non-expansion, except within markup constructs.) The Parser will invoke this method each time the entity is skipped. Non-validating processors may skip entities if they have not seen the declarations (because, for example, the entity was declared in an external DTD subset). All processors may skip external entities, depending on the values of the and the properties. startDocument void startDocument () Receive notification of the beginning of a document. The SAX parser will invoke this method only once, before any other event callbacks (except for setDocumentLocator). See also: startElement void startElement (String uri, String localName, String qName, Attributes. This event allows up to three name components for each element: - the Namespace URI; - the local name; and - the qualified (prefixed) name. Any or all of these may be provided, depending on the values of the and the properties: - the Namespace URI and local name are required when the namespaces property is true (the default), and are optional when the namespaces property is false (if one is specified, both must be); - the qualified name is required when the namespace-prefixes property is true, and is optional when the namespace-prefixes property is false (the default). Note that the attribute list provided will contain only attributes with explicit values (specified or defaulted): #IMPLIED attributes will be omitted. The attribute list will contain attributes used for Namespace declarations (xmlns* attributes) only if the property is true (it is false by default, and support for a true value is optional). Like characters(), attribute values may have characters that need more than one char value. startPrefixMapping void startPrefixMapping (String prefix, String uri) immediately before the corresponding startElement event, and all endPrefixMapping events will occur immediately after the corresponding endElement event, but their order is not otherwise guaranteed. There should never be start/endPrefixMapping events for the "xml" prefix, since it is predeclared and immutable.
https://developer.android.com/reference/org/xml/sax/ContentHandler.html
CC-MAIN-2018-09
en
refinedweb
given a matcher object has already a pattern to match and a string value to search for. 1) is the .find() method similar to looping a character sequence just to find a matching pattern? for (int x = 0; x < someString.length(); x++) { // do some string processing } 2) does the .find() method starts from beginning? something like index '0' 3) is there a way to reverse the search from last to beginning? .find() but starts from the length of the string value like.. for (int x = someString.length(); x >= 0; x--) { // do some string processing } the above question arises from the code problem below: Code : import java.util.regex.Pattern; import java.util.regex.Matcher; public class Ch9Exercise_20 { public static void main(String[] args) { String word = "I love Java"; Pattern pattern = Pattern.compile("[aeiou]", Pattern.CASE_INSENSITIVE); Matcher matcher = pattern.matcher(word); StringBuffer built = new StringBuffer(word); if (matcher.find()) { System.out.println("as" ); built.insert(matcher.start(), "egg"); } // 2nd Approach: it complies with the problem perfectly // // for (int x = word.length() - 1; x >= 0; x--) { // // if (("" + word.charAt(x)).matches(pattern.pattern())) { // // built.insert(x, "egg"); // } // } System.out.println(built); } } Problem: Add a string "egg" in the start of every vowel that you can find, in this case the word "I love java" should be "eggI leggovegge Jeggavegga", with the second approach (commented) the problem is easy to deal with, but im trying to practice Pattern , Matcher and StringBuilder/StringBuffer classes for a more complex but safer String processing, the problem is, i dont have a way to make the matcher object to do the search(.find()) from the last index to the beginning index of the word "I love java" i notice that the pattern only matches with the character "I" or 'I' that results in an output "eggI love java" , i dont have any idea if the .find() method searches EACH character in the sequence ...
http://www.javaprogrammingforums.com/%20java-se-apis/10746-matcher-object-find-method-question-printingthethread.html
CC-MAIN-2015-18
en
refinedweb
May someone please spare a moment of there time to quickly help me, I am new to java and would like only about 10 minutes of your time to just fix an error that I cannot identify. May someone please spare a moment of there time to quickly help me, I am new to java and would like only about 10 minutes of your time to just fix an error that I cannot identify yet. Please post the code and the full text of the error message. Be sure to wrap your code with code tags: [code=java] YOUR CODE HERE [/code] to get highlighting and preserve formatting. If you don't understand my answer, don't ignore it, ask a question. There are multiple packages, I am trying to make a simple mod but it isn't working when I try to add in tools. Welcome to the forum! Please read this topic to learn how to post code in code or highlight tags and other useful info for newcomers. Are we supposed to guess the problem? You tell us the problem as clearly as possible with specific questions about what you need help with, and we'll do what we can. EvenMoreSwords package Unlimit3d.EvenMoreSwords.common; import net.minecraft.block.Block; import net.minecraft.block.material.Material; import net.minecraft.item.EnumToolMaterial; import net.minecraft.item.Item; import net.minecraftforge.common.EnumHelper; import Unlimit3d.EvenMoreSwords.block.BlockCopperOre; import Unlimit3d.EvenMoreSwords.item.ItemCopperAxe; import Unlimit3d.EvenMoreSwords.item.ItemCopperHoe; import Unlimit3d.EvenMoreSwords.item.ItemCopperIngot; import Unlimit3d.EvenMoreSwords.item.ItemCopperShovel; import Unlimit3d.EvenMoreSwords.item.ItemCopperPickaxe; import Unlimit3d.EvenMoreSwords.item.ItemCopperSword; import cpw.mods.fml.common.Mod; import cpw.mods.fml.common.Mod.Init; import cpw.mods.fml.common.SidedProxy; import cpw.mods.fml.common.event.FMLInitializationEvent; import cpw.mods.fml.common.network.NetworkMod; import cpw.mods.fml.common.registry.GameRegistry; import cpw.mods.fml.common.registry.LanguageRegistry; @Mod(modid = "EvenMoreSwords", name = "Even More Swords Mod", version = "1.0.0 Alpha" ) @NetworkMod(clientSideRequired = true, serverSideRequired = false) public class EvenMoreSwords { private static final EnumToolMaterial toolCopper = null; @SidedProxy(clientSide = "Unlimit3d.EvenMoreSwords.client.ClientProxy", serverSide = "Unlimit3d.EvenMoreSwords.common.CommonProxy") public static CommonProxy proxy; //Tools Materials public static EnumToolMaterial toolCopperIngot = EnumHelper.addToolMaterial("COPPER", 3, 1200, 6.0F, 9.0F, 30); //Copper Toolset public static Item CopperPickaxe = new ItemCopperPickaxe(1202, toolCopper).setUnlocalizedName("Copper Pickaxe"); public static Item CopperShovel = new ItemCopperPickaxe(1203, toolCopper).setUnlocalizedName("Copper Shovel"); public static Item CopperHoe = new ItemCopperHoe(1204, toolCopper).setUnlocalizedName("Copper Hoe"); public static Item CopperAxe = new ItemCopperAxe(1205, toolCopper).setUnlocalizedName("Copper Axe"); public static Item CopperSword = new ItemCopperSword(1206, toolCopper).setUnlocalizedName("Copper Sword"); //Blocks public static Block CopperOre = new BlockCopperOre(1200, Material.rock).setUnlocalizedName("Copper Ore"); //Items public static Item CopperIngot = new ItemCopperIngot(1201).setUnlocalizedName("Copper Ingot"); @Init public void load(FMLInitializationEvent event) { proxy.registerRenderInformation(); } public EvenMoreSwords() { GameRegistry.registerBlock(CopperOre); //Blocks LanguageRegistry.addName(CopperOre, "Copper Ore"); //Items LanguageRegistry.addName(CopperIngot, "Copper Ingot"); //Copper Toolset Language Registries LanguageRegistry.addName(CopperPickaxe, "Copper Pickaxe"); LanguageRegistry.addName(CopperShovel, "Copper Shovel"); LanguageRegistry.addName(CopperHoe, "Copper Hoe"); LanguageRegistry.addName(CopperAxe, "Copper Axe"); LanguageRegistry.addName(CopperSword, "Copper Sword"); } } Sorry, I don't do Minecraft, but I'm sure there are some who do. okay well thanks for trying --- Update --- If you can help just tell me and I will show you the rest of the code Can you isolate the problem into a Short, Self Contained, Correct Example and not post the whole program? If you don't understand my answer, don't ignore it, ask a question. Sorry, we are not able to understand what are you trying to do exactly and what is the problem you are facing. Please can we have a detailed explanation.Sorry, we are not able to understand what are you trying to do exactly and what is the problem you are facing. Please can we have a detailed explanation. Thanks and regards, Sambit Swain
http://www.javaprogrammingforums.com/whats-wrong-my-code/35691-help-needed.html
CC-MAIN-2015-18
en
refinedweb
Post your Comment Graphics 2D . Noise Image in Graphics This Java... be an image, graphics, picture, photograph, video or any illustration. We... Graphics 2D   graphics - Java Beginners graphics hi..all, I have drawn an image using mouse pointer..an img has come.. Now i want to select that image and move.. how should i write the code...help me.. I need it urgently Hi friend, Plz give Noise Image in Graphics Noise Image in Graphics  .... In this tutorial you will learn how to create a noise image in graphics. Now lets find out what we have defined in this code for creating a noise image Noise Image in Graphics Noise Image in Graphics  .... In this tutorial you will learn how to create a noise image in graphics. Now... a noise image that we have created in java using graphics.   What is Web Graphics What is Web Graphics Web graphics.... An excellent designed graphics can give better and creative ideas to customer of what they are looking for. Web graphics helps designers to enhance PHP GD graphics ); imagefilledpolygon($img, $corners, 3, $white); header ("Content-type: image Add RenderingHints to a Graphics Add RenderingHints to a Graphics  ... to a graphics on the frame. The rendering hints uses the Graphics2D and creates the following image. Description of program: This program uses the Graphics2D Java get Graphics Java get Graphics  ... image.getGraphics() returns the Graphics object. The method...()); BufferedImage image=new BufferedImage how to draw lines,circles, rectangles on JSP (using Java Graphics) how to draw lines,circles, rectangles on JSP (using Java Graphics) how to draw lines,circles, rectangles on JSP (using Java Graphics) Hello Anuj Try the Following Code : image.jsp <%@ page contentType="image graphics program graphics program i want a program that implements merge sort algorithm in graphics iPhone Graphics iPhone Graphics Hi, How to create iPhone Graphics? I am learning to crate UI Design for iPhone and iPad. Thanks Graphics MIDlet Example Graphics MIDlet Example  ... a image that look and act like satellite and earth. For creating these types of graphics in J2ME we use MIDlet's. In the example we have created PacerCanvas class FXG graphics FXG graphics Hi.... What is FXG in flex4? can you give me the explanation about it.. Thanks Ans: FXG: FXG is a declarative syntax for defining static graphics. You typically use a graphics tool such as Adobe java graphics Image on frame Image on frame Hi, This is my code. In this I am unable to load... java.awt.event.*; public class AwtImg extends Frame { Image img; public...(); } AwtImg() { super("Image Frame"); MediaTracker mt=new Upload image ); } public void paint(Graphics g) { g.drawImage(image... Code image and send to the server and display value in Mobile Screen i want code in Java ME .java extension.. Regards senthil To capture an image Eclipse Plunging-Graphics simultaneously. Image Export-Graphics An Eclipse plug in to to simplify... Eclipse Plunging-Graphics  .... It contributes two Diagram Image export wizards and allows for other plug Graphics class in flex Graphics class in flex Hi.... What does clear() do in graphics class? please tell me about that.... Thanks Ans: Clears the graphics that were drawn to this Graphics object, and resets fill and line style settings AWT Image { public void paint( Graphics g ) { g.drawOval (50, 10, 220, 220); g.drawOval (70, 30... { public void paint(Graphics g) { g.drawOval (50, 10, 220, 220); g.drawOval (70 graphics - Java Beginners graphics In java-graphics.. I want to draw a rectangle and resize that rectangle(small,big) ..by using mouse cursors image processing - Java3D image processing hii i have to compare 2 images. for this i try to convert image into greyscale. i think the greyscale comparisonn is more..., One way to convert a color image to gray scale, is to change the color Java Graphics Programming Java Graphics Programming Hi<BR> I am newbie to java and I...;BR> public void paint(Graphics g)<BR> {<BR> g.setColor... Graphics class. import javax.swing.*; import java.awt.*; public class image embadding - Java Beginners ) { } } public void paintComponent(Graphics g) { g.drawImage(image, 0...image embadding sir how to put images in JFrame/JPanel Hi... class DisplayImage extends JPanel{ private BufferedImage image image effects - Java Beginners ); } Graphics g = image1.createGraphics(); g.drawImage(image, 0...image effects hey can u help me in loadin an image file... that will show you image crop effect: import java.sql.*; import java.awt. Java 2D Graphics - Applet Getting image pixel values Getting image pixel values how to get image pixels values on mouse... GetPixels extends JPanel { BufferedImage image; JLabel[] labels; public GetPixels(BufferedImage image) { this.image = image java 2d graphics - Java Beginners java 2d graphics Hello All I need to use 2d graphics in java to build up a map from the given geographic coordinates. What i am not getting is how to scale down these geographic coordinates to device coordinates. I would image image how to add the image in servlet code Display Image in Java Display Image in Java This example takes an image from the system and displays it on a frame using ImageIO class. User enters the name of the image using Image Image how to insert image in xsl without using xml. the image was displayed in pdf..Please help me Post your Comment
http://www.roseindia.net/discussion/22977-Noise-Image-in-Graphics.html
CC-MAIN-2015-18
en
refinedweb
CanQuery - jQuery Tutorials and examples piece of code that provides very good support for ajax. jQuery can be used... jQuery - jQuery Tutorials and examples The largest collection of jQuery examples PHP Examples In this section of PHP examples, the new bees will learn the basic examples of most common used functions. These examples will help to the experts too as assesing our codes, the programmers can develop simple to very complex programs Java Tutorial with examples Java Tutorial with examples What is the good urls of java tutorial with examples on your website? Thanks Hi, We have many java tutorial with examples codes. You can view all these at Java Example Codes Javascript Examples JavaScript Examples Clear cookie example Cookies can... In this part of JavaScript examples, we have created a simple example which shows the use... getElementById with select by using a very simple example. JavaScript get PHP Tutorials Guide with Examples PHP Tutorials Guide with Examples Hi, I am a beginners in PHP. I have searching for php tutorial guides with examples. Can any one provide me... fundamentals. This PHP beginner guide will be very essential for beginner?s Struts Layout Examples - Struts be clicked on. Im not able to find simple examples/explanation on it. Any help on this would be very helpful. Thanks, Priya Hi priya, I am JSP Tutorial For Beginners With Examples JSP Tutorial For Beginners With Examples In this section we will learn about... with an example wherever it is required. JSP also called Java Server Pages is used... to add the dynamic content in JSP page. Examples that are given Struts 2 Tags Examples Struts 2 Tags Examples In this section we are discussing the Struts 2 tags with examples. Struts 2 tags provides easy..., Control Tags and Data Tags. We will show you all the tags with good working MySQL Examples, Learn MySQL with examples. the SQL examples using MySQL Database Server. These MySQL example SQL's tutorial... Loader Examples Mysql Loader Example is used to create the backup... MySQL Example   Ajax Examples Ajax Examples Displaying Time: This example is simple one to understand Ajax with JSP. The objective of the example is to display the current date jQuery tutorial for beginners with examples In this section we will create a simple Hello World example using jQuery... based application. Due to simple features and easy development, jQuery is very... the Ajax support in a web application. jQuery library can be used along with JSP Simple Examples JSP Simple Examples Index 1. Creating... page. Html tags in jsp In this example... will be included in the jsp page at the translation time. In this example we have Apache MyFaces Examples ; Examples provided by the myfaces-example... to webapps\myfaces directory of Tomcat server and open JSP file for the examples... through some examples : Data Scroller Example:(dataScroller.j.   JPA 2.1 CRUD examples Learn how to create CRUD operation examples in JPA 2.1 In this section you will learn how to create example program that forms CRUD operations against database. The CRUD application (example) in JPA is very important topic which JSP Simple Examples Struts 2 Date Format Examples Struts 2 Date Format Examples  ... have provided fully tested example code to illustrate the concepts. You can... The nice format is very interesting. The following table shows, how Java ArrayList, Java Array List examples list.add("Java "); list.add("is "); list.add("very "); list.add("good... as shown below: Java is very good programming language Download... the ArrayList in Java with the example code. SampleInterfaceImp.java import very urgent very urgent ** how to integrate struts1.3 ,ejb3,mysql5.0 in jboss server with myeclipse IDE Java tutorials for beginners with examples Java Video tutorials with examples are being provided to help the beginners... programs are accompanied with Java video that displays Java examples... they can learn every aspect of programming that too very quickly. Java Videos programs connection with mysql with jstl - JSP-Servlet dsn- for with Dsn- Hi, We have very good examples Examples of iText gives the file good look and feel. After going through this example you... Examples of iText  ... application. pdf system In this example we Hibernate Tutorial: Learn Hibernate with examples of articles and example programs given here. Hibernate is popular open source ORM framework for developing the data access code easily and fast. It is very popular among developers as it is easy to learn and very easy to Learn Hibernate programming with Examples with the examples detailing the use of Hibernate API to solve the persistence problems In this tutorial I will provide you the examples of Hibernate API which teaches you.... Here is the pre-requisite of learning the Hibernate with these example codes jsp - JSP-Servlet JSP directives with example JSP Directive with examples for overloading and overriding examples for overloading and overriding examples for overloading and overriding simple examples JSP are for generating presentation elements while JavaBeans are good for storing information Web Services Examples in NetBeans and test the webservices very easily in the NetBeans IDE. NetBeans Webservices examples Let's start developing the webservices example... Web Services Examples in NetBeans   jsp { response.sendRedirect("/examples/jsp/login.jsp"); } } catch...JSP entered name and password is valid HII Im developing a login page using jsp and eclipse,there are two fields username and password,I want Nested classes: Examples and tutorials Nested classes: Examples and tutorials Nested classes Here is another advantage of the Java...) of Java. Very rarely local class is defined within a method. The lacal Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/9788
CC-MAIN-2015-18
en
refinedweb
// helloworld.cpp #include <iostream> #include <luabind> void greet() { std::cout << "hello world!\n"; } // int luaopen_[libname] is the entry point extern "C" int luaopen_libhelloworld(luaState* L) { using namespace luabind; open(L); module(L) [ def("greet", &greet), ]; return 0; }The above, when compiled into a .so, works as expected. require 'libhelloworld' greet() -- prints "hello world!"Unfortunately, chaos sets in when I attempt to bind a simple class, e.g.: class Test { public: Test(); void say() { std::cout << "Inside a class!" << std::endl; } }; ... def("greet", &greet), class_<Test>("TestClass") .def(constructor<>()) .def("say", &Test::say) ...at which point the undefined symbols start flying as I try to load the library in Lua. Specifically: - ./libhelloworld.so: undefined symbol: _ZN4TestC1Ev The only other valid example I can find is the Ogre bindings to Lua (using Luabind) which doesn't use an entry point at all as far as I can tell. Unfortunately, I'm not entirely clear how they got away with that since anytime I try Lua - surprise - whines about not having a valid entry point. I realize this should probably be elsewhere (like on the mailing list) but I was kind of hoping my problem was either obvious or a Luabind guru was somewhere nearby. Any help would be greatly appreciated. EDIT: SOLVED. I forgot to define the constructor. *hangs head in shame* My apologies folks, this is precisely why sleep is important. Edited by Kyan, 30 April 2012 - 09:56 AM.
http://www.gamedev.net/topic/624251-luabind-inside-a-shared-library/
CC-MAIN-2015-18
en
refinedweb
#include <math.h> double sqrt(double x); float sqrtf(float x); long double sqrtl(long double x); These functions shall compute the square root of their argument x, sqrt square root of x. For finite values of x < -0, a domain error shall occur, and either a NaN (if supported), or an implementation-defined value shall be returned. If x is NaN, a NaN shall be returned. If x is ±0 or +Inf, x>
http://www.makelinux.net/man/3posix/S/sqrt
CC-MAIN-2015-18
en
refinedweb
Send Message Updated: February 4, 2015 Sends a message to a Service Bus queue or topic. The following table describes required and optional request headers. In addition to the listed properties, the header can contain custom properties. See the example. The request body is the message payload. If the message is to be received via a protocol other than HTTP/HTTPS, the message body must be serialized; for example, with an XML DataContractSerializer. For example: The preceding code produces the following message body: You can receive and process the message with the following code: The response includes an HTTP status code and a set of response headers. For information about status codes, see Status and Error Codes. Content-type as passed in. None. The following HTTP request sends a message to a queue or topic. The message has the following properties: Label: “M1” TimeToLive: 10 seconds State: Active Message body: “This is a message.” In addition to the BrokeredProperties, the message carries the following custom properties: Priority = “High” and Customer = “12345,ABC”. POST HTTP/1.1 Authorization: SharedAccessSignature sr=your-namespace&sig=Fg8yUyR4MOmXfHfj55f5hY4jGb8x2Yc%2b3%2fULKZYxKZk%3d&se=1404256819&skn=RootManageSharedAccessKey BrokerProperties: {"Label":"M1","State":"Active","TimeToLive":10} Priority: High Customer: 12345,ABC Content-Type: application/atom+xml;type=entry;charset=utf-8 Host: your-namespace.servicebus.windows.net Content-Length: 18 Expect: 100-continue This is a message. Service Bus returns the following response: Other ResourcesService Bus HTTP Client sample
https://msdn.microsoft.com/library/azure/hh780786.aspx
CC-MAIN-2015-18
en
refinedweb
MS Dynamics CRM 3.0 I have just started learning C# ; I was working in C++ till now (and would like to if allowed, but learning new things dont hurt). I am used to specifiying "const" to get the assurance that unless someone specifically cast-away the constness, object wont change. So, as part of my first ever program I wrote a following class - public class FirstOne { public int intTry; public FirstOne() { intTry = 0; } //public FirstOne(readonly FirstOne o) //public FirstOne(const FirstOne o) public FirstOne(FirstOne o) { intTry = o.intTry; o.intTry = -100; // This would be possible if I can't make the parameter const.. } } Since class in C# is reference type (still getting hang of the concept) object 'o' is already a reference, I am just trying to make it const so that it doesn't change. But the syntax (commented lines) does not work. Is it even possible to have const ref or it's just a glitch in my C# syntax understanding? Thanks in advance, Neel. You could perhaps make the class immutable, but this prevents anybody from changing the contents (you need to recreate the object from scratch to "change" it). You could create an interface with only the "get" accessors, but that doesn't prevent you from casting back to the real type. Otherwise you could perhaps pass clones around, but that is an overhead and relies on the various parties playing ball. There is no C# syntax to do what you want (treat a mutable object as immutable within a scope). Marc -----------------------------------------------Reply----------------------------------------------- public readonly int intTry; now *only* the constructor can set intTry. Also - can I recommend using properties instead of direct fields? In simple cases this will be inlined by the JIT anyway, but gives you a lot more flexibility to add functionality; as an example (with a use-case)... private void CheckEditable() { // allow some kind of lock/unlock mechanism... if(!IsEditable) throw new InvalidOperationException("The record is not currently editable"); - Hide quoted text - Even if I have just started learning it now, I can't help but wonder why it was left out.. Thanks, Neel.
http://www.megasolutions.net/cSharp/Is-const-allowed-anywhere_-71147.aspx
CC-MAIN-2015-18
en
refinedweb
by Josh Juneau Learn how to use the Java API for Batch Processing, utilize Java Message Service, and create a chat client using WebSockets. This article is the third in a three-part series demonstrating how to use Java EE 7 improvements and newer web standards, such as HTML5, WebSocket, and JavaScript Object Notation (JSON) processing, to build modern enterprise applications. In this third part, we will improve the movieplex7 application, which we started in the first two articles, by adding the ability to view ticket sales, accumulate movie points, and chat with others. Note: The complete source code for the application can be downloaded here. In the first two articles, we learned how to download, install, and configure NetBeans and GlassFish 4, which will also be used for building and deploying the application in this article. We built the application from the ground up, using JavaServer Faces (JSF) 2.2 and making use of new technology, such as the Faces Flow for navigation. We demonstrated how to utilize Java Persistence API (JPA) 2.1, Java API for RESTful Web Services (JAX-RS), and JSON Processing (JSON-P) 1.0 for implementing business logic and data manipulation. For the remainder of this article, please follow along using the Maven-based project that you created using NetBeans IDE in Part 1 and Part 2 of this series. Java API for Batch Processing 1.0 (JSR 352) will be used in this section to import movie ticket sales for each show and populate the database by reading cumulative sales from a comma-separated values (CSV) file. Batch processing is a method for executing a series of tasks or jobs. Most often, these jobs do not require intervention, and they are bulk-oriented and long-running. For details regarding batch processing terminology, please refer to the “Java EE 7 Tutorial.” Generate classes. First, let’s create the classes that will be used to process movie sales and persist them to the database. org .glassfish.movieplex7 .batch. Click Finish. SalesReader. Change the class definition to extend AbstractItemWriter, and resolve imports. implements the AbstractItemReader ItemReader interface, which defines methods for reading a stream of items. @Namedand @Dependentas class-level annotations, and resolve imports. The @Named annotation allows the bean to be injected into the job XML, and @Dependent makes the bean available for injection. private BufferedReader reader; openmethod shown in Listing 1. public void open(Serializable checkpoint) throws Exception { reader = new BufferedReader( new InputStreamReader(Thread.currentThread() .getContextClassLoader().getResourceAsStream ("META-INF/sales.csv"))); } Listing 1 For this example, the CSV file must be placed within the META-INF directory of the application, which resides at the root of the application source packages. readItemmethod and replace with the code shown in Listing 2. Resolve imports. @Override public String readItem() { String string = null; try { string = reader.readLine(); } catch (IOException ex) { ex.printStackTrace(); } return string; } Listing 2 The readItem method is responsible for reading the next item from the stream, with a null indicating the end of the stream. org.glassfish.movieplex7.batchpackage, and name it SalesProcessor. Implement ItemProcessorby adding the following to the class definition: implements ItemProcessor Implementing ItemProcessor allows the class to operate on an input item and produce an output item. @Namedand @Dependentclass-level annotations, and resolve imports. Click the yellow lightbulb, and choose Implement all abstract methods, as shown in Figure 1. processItemto match that shown in Listing 3 and resolve imports. @Override public Sales processItem(Object s) { Sales sales = new Sales(); StringTokenizer tokens = new StringTokenizer ((String) s, ","); sales.setId(Integer.parseInt(tokens.nextToken())); sales.setAmount(Float.parseFloat(tokens.nextToken())); return sales; } Listing 3 The processItem method creates a new Sales object, and then it creates a StringTokenizer that is used to parse through each item within the object passed in. The Sales object is populated with the processed item value and then returned. org.glassfish.movieplex7 .batchpackage, and name it SalesWriter. Extend AbstractItemWriterby adding the following to the class definition: extends AbstractItemWriter @Namedand @Dependentclass-level annotations, and then resolve imports. javax.batch .api.chunk.AbstractItemWriter. Once the import has been added, click the lightbulb and choose Implement all abstract methods to add the writeItemsmethod to the class. EntityManagerinstance into the class by adding the following declaration: @PersistenceContext EntityManager em; writeItemsmethod with a forloop, which will be used for persisting all the Salesobjects that have been aggregated from the batch runtime, as shown in Listing 4. @Override @Transactional public void writeItems(List items) throws Exception { for (Sales sale : (List<Sales>) items) { em.persist(sale); } } Listing 4 @Transactionalannotation to the method to incorporate transaction control into your method. Resolve the imports. Create a batch job. The next task is to implement a procedural task within XML. In this case, the job will consist of one chunk that contains three items. Those items are the <reader>, <processor>, and <writer> elements, and their respective class implementations are SalesReader, SalesProcessor, and SalesWriter. To create the task within XML, use the following procedure: batch-jobs. Click Finish. batch-jobsfolder and selecting New and then XML Document. Name the file eod-sales. Click Next and accept all default values. Click Finish. Note the following about the XML file: item-countattribute of <chunk>indicates that there are three items to be given to the writer. skip-limitattribute of <chunk>indicates the number of exceptions that will be skipped. <skippable-exception-classes/>lists the set of exceptions to be skipped by chunk processing. <?xml version="1.0"?> <job id="endOfDaySales" xmlns= "" version="1.0"> <step id="populateSales" > <chunk item- <reader ref="salesReader"/> <processor ref="salesProcessor"/> <writer ref="salesWriter"/> <skippable-exception-classes> <include class="java.lang.NumberFormatException"/> </skippable-exception-classes> </chunk> </step> </job> Listing 5 Invoke the batch job. Create the implementation for invoking the batch job. org.glassfish .movieplex7.batchpackage, select New and then Session Bean, and then provide the name SalesBean. Click Finish. runJobmethod, as shown in Listing 6, to the new session bean. public void runJob(){ try { JobOperator jobOp = BatchRuntime.getJobOperator(); long jobId = jobOp.start ("eod-sales", new Properties()); System.out.println("Job Start ID: " +jobId); } catch (JobStartException ex){ ex.printStackTrace(); } } Listing 6 In the code, the BatchRuntime is used to obtain a new JobOperator. The new JobOperator is then used to start the batch process outlined within eod-sales, and it could also be used for stopping or restarting the job. @Namedannotation to the class to make it injectable. Resolve imports. EntityManagerinto the class, as seen below: @PersistenceContext EntityManager em; get SalesDatato the class, which will use an @NamedQueryto return all rows within the table, as shown in Listing 7. Resolve imports. public List<Sales> getSalesData() { return em.createNamedQuery("Sales.findAll", Sales.class) .getResultList(); } Listing 7 batchfor the folder name. Click Finish, and right-click the newly created folder and select New and then Faces Template Client. Specify salesfor the File Name, and browse to WEB-INF/template.xhtmlto specify the template. Then click Finish. Replace all the content within the <ui:composition>tags with the code shown in Listing 8. Repair namespace prefix/URI mapping by clicking the yellow lightbulb. <h1>Movie Sales</h1> <h:form> <h:dataTable <h:column> <f:facet <h:outputText </f:facet> #{s.id} </h:column> <h:column> <f:facet <h:outputText </f:facet> #{s.amount} </h:column> </h:dataTable> <h:commandButton <h:commandButton </h:form> Listing 8 This page contains a dataTable, which will be used to display the contents of the sales file once the batch process has populated the database. It also contains two commandButtons: one to run the job and one to refresh the page. template.xhtmlfile to incorporate a link to the sales .xhtmlview by adding the code shown in Listing 9. <h:outputLinkSales</h:outputLink> Listing 9 SALESdatabase table. Click the Refresh button to display the list of sales that have been processed, as shown in Figure 2. To implement a movie point system for the movieplex7 application, we’ll use Java Message Service (JMS) 2.0 and its new API. JMS 2.0 increases developer productivity by decreasing the amount of code and complexity that are necessary for sending and receiving messages compared with prior releases. Note: In order to work with JMS, a topic or queue must be created within the application server container. This can be done with code or via the application server’s administrative utilities. In this example, you will learn how to create a queue using code. Create a JSF managed bean. First, let’s create a JSF managed bean that will be bound to a JSF view for collecting movie points data and sending that data to the queue. org.glassfish.movieplex7 .points, and then click Finish. SendPointsBean. Implement Serializable, and add the following class-level annotations to make the bean Expression Language (EL)–injectable and session-scoped. Resolve imports. @Named @SessionScoped Stringfield shown in Listing 10 to the class, which will be bound to a JSF inputTextfield to capture point data. Generate the getters/setters for the field by right-clicking the editor pane and selecting Insert Code and then Getter and Setter. Select the field, and click Generate. @NotNull @Pattern(regexp = "^\\d{2},\\d{2}", message = "Message format must be 2 digits, comma, 2 digits, e.g. 12,12") private String message; Listing 10 Note: This field uses bean validation annotations to ensure that the text entered into the field adheres to the required format. JMSContextand a Queueinto the class by adding the following code. Then resolve imports, taking care to import javax.jms.Queue. @Inject JMSContext context; @Resource(mappedName = "java:global/jms/pointsQueue") Queue pointsQueue; JMSContext is a new JMS 2.0 interface that combines Connection and Session objects into a single object. Note: Java EE–compliant application servers contain a default JMS connection factory under the name java:comp/DefaultJMSConnectionFactory, which is used when no connection factory is specified. public void sendMessage() { System.out.println("Sending message: " + message); context.createProducer().send(pointsQueue, message); } Listing 11 The JMSContext createProducer method can be used to create a JMSProducer, which provides methods to configure and send messages synchronously and asynchronously. The send method of the JMSProducer is used to send a message to the queue instance, which we will create in the next step. org.glassfish .movieplex7.pointspackage, select New and then Java Class, and specify the name ReceivePointsBean. Implement Serializable, and add the class-level annotations shown in Listing 12. 12 Introduced in JMS 2.0, the @JMSDestinationDefinition annotation is used to reduce the administrative overhead of application configuration by programmatically provisioning required resources. In this case, the annotation is used to create a javax.jms.queue named java:global/jms/pointsQueue. JMSContextand Queueresources that will be used within the class by adding the following code. Then resolve imports, being sure to import the correct classes: javax.jms.Queueand so on. @Inject JMSContext context; @Resource(mappedName= "java:global/jms/pointsQueue") Queue pointsQueue; Note: Although we are creating the queue by using the @JMSDestinationDefinition annotation, we still need to inject the queue into the class via pointsQueue to make it usable. receiveMessage()method shown in Listing 13. public String receiveMessage() { String message = context.createConsumer(pointsQueue). receiveBody(String.class); System.out.println("Received message: " + message); return message; } Listing 13 This method uses the JMSContext to create a consumer for the pointsQueue and then synchronously receive a String message from the queue. getQueueSize, which is shown in Listing 14, for returning the number of messages currently in the queue. Resolve imports. 14 This method creates a QueueBrowser and then uses it to traverse through each message within the queue and add it to the total. Create a Facelet. Next, we need to create a Facelet view that will be used for entering movie points and simulating the send/receive message functionality. points, and then click Finish. points.xhtmlwithin that folder. Add the code shown in Listing 15 to the view, replacing the code within the <ui:composition>elements. Click the yellow lightbulb to resolve the namespace prefix/URI mapping, as needed. <h1>Points</h1> <h:form> Queue size: <h:outputText<p/> <h:inputText <h:commandButton </h:form> <h:form> <h:commandButton </h:form> Listing 15 This view contains a field for entering a message, along with a commandButton that invokes the sendMessage method. When the method is invoked, the message contained within the inputText is added to the queue, and the queueSize is increased by 1. Another commandButton in the view invokes the receiveMessage method, which will read a message from the queue and decrement the queueSize by 1. template.xhtmlalong with the other outputLinkcomponents, exposing the points.xhtmlview to the application. <p/> <h:outputLink Points</h:outputLink> Listing 16 Run the movieplex7 application. 1212to the field and click Send Message. An error will be displayed (see Figure 4). This message was produced by the bean validation that was added to SendPointsBean. Figure 4 12,12and click Send Message. This time the error message disappears and the queue is increased by 1 (see Figure 5). Figure 5 Continue to send and receive messages in any order. You should notice the queue number being incremented each time a message is sent and decremented with every message received. In this section, we will build a chat room for movieplex7 visitors using the WebSocket 1.0 API, which provides a full-duplex, bidirectional communication protocol over a single TCP connection that defines an opening handshake and data transfer. A standard API for WebSocket, under JSR 356, has been added to Java EE 7, enabling developers to create WebSocket solutions using a common API, rather than customized frameworks. Create a class to implement WebSocket communication. To begin, let’s create the Java class for implementation of the WebSocket communication. org.glassfish.movieplex7.chat. ChatServer. javax.websocket.Session, among others. public class ChatServer { private static final Set<Session> peers = Collections.synchronizedSet(new HashSet<Session>()); @OnOpen public void onOpen(Session peer) { peers.add(peer); } @OnClose public void onClose(Session peer) { peers.remove(peer); } @OnMessage public void message(String message, Session client) throws IOException, EncodeException { for (Session peer : peers) { if (!peers.equals(client)) { peer.getBasicRemote().sendObject(message); } } } } Listing 17 The code for the ChatServer class does the following: @ServerEndpointis used to indicate that this class is a WebSocket endpoint. The value attribute defines the URI to be used for accessing the endpoint. @OnOpenand @OnCloseannotate the methods that are invoked when a session is opened or closed, respectively. The Sessionparameter defines the client that is requesting the initiation or termination of a connection. @OnMessageannotates the method that is invoked when a message is received. The first parameter is the payload of the message, and the second is the client session that defines the other end of the connection. This method broadcasts the received message to all the connected clients. Generate the web view. Now, let’s generate the web view for the chat client. chat, and click Finish. chatroomfor the File Name, and browse to WEB-INF/template.xhtmlto specify the template. Keep the other defaults and click Finish. <ui:composition>tags with the code shown in Listing 18. <ui:define <form action=""> <table> <tr> <td> Chat Log<br/> <textarea readonly="true" rows="6" cols="50" id="chatlog"></textarea> </td> <td> Users<br/> <textarea readonly="true" rows="6" cols="20" id="users"></textarea> </td> </tr> <tr> <td colspan="2"> <input id="textField" name="name" value="Duke" type="text"/> <input onclick="join();" value="Join" type="button"/> <input onclick="send_message();" value="Send" type="button"/><p/> <input onclick="disconnect();" value="Disconnect" type="button"/> </td> </tr> </table> </form> <div id="output"></div> <script language="javascript" type="text/javascript" src="${facesContext. externalContext.requestContextPath}/chat/websocket.js"> </script> </ui:define> Listing 18 This code constructs the HTML form that will be used for the chat client, using two textareas that will be used to display the chat log and current user list and an input textField for entering a username and messages. There are three buttons: one to open a WebSocket session, another to send a message, and the last to disconnect the session. All three are bound via the onclick attribute to JavaScript functions, which implement the WebSocket communication. Near the end of the HTML is a <script> element, which includes the websocket .js JavaScript file. websocket, and click Finish. Edit websocket.jsso it contains the code shown in Listings 19a and 19b. var wsUri = 'ws://' + document.location.host + document.location.pathname.substr(0, document.location.pathname.indexOf ("/faces")) + '/websocket'; console.log(wsUri); var websocket = new WebSocket(wsUri); var username; websocket.onopen = function(evt) { onOpen(evt); }; websocket.onmessage = function(evt) { onMessage(evt); }; websocket.onerror = function(evt) { onError(evt); }; websocket.onclose = function(evt) { onClose(evt); }; var output = document.getElementById("output"); function join() { username = textField.value; websocket.send(username + " joined"); } function send_message() { websocket.send(username + ": " + textField.value); } Listing 19a function onOpen() { writeToScreen("CONNECTED"); } function onClose() { writeToScreen("DISCONNECTED"); } function onMessage(evt) { writeToScreen("RECEIVED: " + evt.data); if (evt.data.indexOf("joined") !== -1) { users.innerHTML += evt.data.substring (0, evt.data.indexOf(" joined")) + "\n"; } else { chatlog.innerHTML += evt.data + "\n"; } } function onError(evt) { writeToScreen('<span style="color: red;">ERROR:</span> ' + evt.data); } function disconnect() { websocket.close(); } function writeToScreen(message) { var pre = document.createElement("p"); pre.style.wordWrap = "break-word"; pre.innerHTML = message; output.appendChild(pre); } Listing 19b The websocket.js implementation constructs the endpoint URI by appending the URI specified in the ChatServer class. A new WebSocket object is then created, and each WebSocket event function is assigned to a JavaScript function that is implemented within the file. When a user clicks the Join button on the page, the username is captured, and the initial WebSocket send method is invoked, passing the username. When messages are sent, any relevant data is passed as a parameter to the send_message function, which appends the username to the message and broadcasts to all clients. The onMessage method is invoked each time a message is received, updating the list of logged-in users. The Disconnect button on the page initiates the closing of the WebSocket connection. WEB-INF/template.xhtmland overwrite the outputLinkelement for Item 2with the code shown in Listing 20. <h:outputLink Chat Room </h:outputLink> Listing 20 You will see the CONNECTED message presented at the bottom of the chat room (see Figure 8). Figure 8 localhost:8080/movieplex7, and then open the chat room in that browser. Click Join in one of the browser windows (we will refer to it as “browser 1”), and you will be joined to the room as Duke. You should see the user list updated in the other browser window (browser 2) as well. Join the session in browser 2 under a different name (for example, Duke2), and you should see the user list in each of the windows updated (see Figure 9). Note: Chrome Developer Tools can be used to monitor WebSocket traffic. In this article, we modified the movieplex7 application that was started in Part 1 and Part 2, providing the ability to calculate movie sales, assign movie points, and chat with peers. The article covered the following technologies: This three-part article series has taken you on a whirlwind tour of the new features that are available in Java EE 7. To learn more about Java EE 7, take a look at the online tutorial.
http://www.oracle.com/technetwork/articles/java/ma14-new-to-java-pt3-2177806.html
CC-MAIN-2015-18
en
refinedweb
On Sat, 27 Dec 1997, Tim Waugh wrote:> Here's the oops report (reconstructed by hand) I get when booting 2.1.76> (compiled with gcc-2.7.2.1) on my Alpha.> > Tim.> */> > Unable to handle kernel paging request at virtual address ffffbe1c0160918> swapper(1): Oops 1> pc = [<fffffc0000424678>] ra = [<fffffc000040341c>] ps = 0000> r0 = 0000000000000000 r1 = fffffc01c0000000 r2 = ffffffff0000b000(snip!)That's precisely what I was seeing with the DE45x ethernet driver. I'mcurrently running 2.1.76 with the tulip driver selected in kernel config,and Donald Becker's 0.83 beta code. The following little patch isrequired to build under newer kernels:*** tulip.c~ Fri Dec 5 19:32:40 1997--- tulip.c Fri Dec 5 19:36:17 1997****************** 266,276 **** #define PCI_DEVICE_ID_DEC_TULIP_21142 0x0019 #endif! /* #ifndef PCI_VENDOR_ID_LITEON */ #define PCI_VENDOR_ID_LITEON 0x11AD #define PCI_DEVICE_ID_PNIC 0x0002 #define PCI_DEVICE_ID_PNIC_X 0x0168! /* #endif */ /* The rest of these values should never change. */--- 266,278 ---- #define PCI_DEVICE_ID_DEC_TULIP_21142 0x0019 #endif! #ifndef PCI_VENDOR_ID_LITEON #define PCI_VENDOR_ID_LITEON 0x11AD+ #endif+ #ifndef PCI_DEVICE_ID_PNIC #define PCI_DEVICE_ID_PNIC 0x0002 #define PCI_DEVICE_ID_PNIC_X 0x0168! #endif /* The rest of these values should never change. */The driver is on Donald's ftp site at: it a try.I'm currently doing battle with the knfs subsystem. It took an entireafternoon of hacking to build the utilities under glibc. Everything loadsand runs, but there are plenty of error messages emitted during theprocess. Attempting to mount the alpha from another box fails with:Dec 27 09:44:40 alpha mountd[202]: authenticated mount request from air.steve.net Dec 27 09:44:40 alpha mountd[202]: getfh failed: Operation not permitted I'll be working on this today. (Bill Hawes: If you have any ideas or suggestions as to why knfs wouldnot work on an Alpha, please chime in. I'd appreciate the help.)Steve
https://lkml.org/lkml/1997/12/27/22
CC-MAIN-2015-18
en
refinedweb
django.js 0.8.1 Django JS Tools Dj. Compatibility Django.js requires Python 2.6+ and Django 1.4.2+. Installation')), ... ) Documentation The documentation is hosted on Read the Docs Changelog 0.8.1 (2013-10-19) - Fixed management command with Django < 1.5 (fix issue #23 thanks to Wasil Sergejczyk) - Fixed Django CMS handling (fix issue #25 thanks to Wasil Sergejczyk) - Cache Django.js views and added settings.JS_CACHE_DURATION - Allow customizable Django.js initialization - Allow manual reload of context and URLs - Published Django.js on bower (thanks to Wasil Sergejczyk for the initial bower.json file) - Do not automatically translate languages name in context 0.8.0 (2013-07-14) - Allow features to be disabled with: - settings.JS_URLS_ENABLED - settings.JS_USER_ENABLED - settings.JS_CONTEXT_ENABLED) 0.7.5 (2013-06-01) - Handle Django 1.5+ custom user model - Upgraded to jQuery 2.0.2 and jQuery Migrate 1.2.1 0.7.4 (2013-05-11) 0.7.3 (2013-04-30) - Upgraded to jQuery 2.0.0 - Package both minified and unminified versions. - Load minified versions (Django.js, jQuery and jQuery Migrate) when DEBUG=False 0.7.1 (2013-04-25) - Optionnaly include jQuery with {% django_js_init %}. 0.7.0 (2013-04-25) - Added RequireJS/AMD helpers and documentation - Added Django Pipeline integration helpers and documentation - Support unnamed URLs resolution. - Support custom content types to be passed into the js/javascript script tag (thanks to Travis Jensen) - Added coffee and coffescript template tags - Python 3 compatibility 0.6.5 (2013-03-13) - Make JsonView reusable - Unescape regex characters in URLs - Fix handling of 0 as parameter for Javasript reverse URLs 0.6.4 (2013-03-10) - Support namespaces without app_name set. 0.6.3 (2013-03-08) - Fix CSRF misspelling (thanks to Andy Freeland) - Added some client side CSRF helpers (thanks to Andy Freeland) - Upgrade to jQuery 1.9.1 and jQuery Migrate 1.1.1 - Do not clutter url parameters in js, javascript and js_lib template tags. 0.6.2 (2013-02-18) - Compatible with Django 1.5 0.6.1 (2013-02-11) - Added static method (even if it’s a unused reserved keyword) 0.6 (2013-02-09) - Added basic user attributes access - Added permissions support - Added booleans context processor - Added jQuery 1.9.0 and jQuery Migrate 1.0.0 - Upgraded QUnit to 1.11.0 - Added QUnit theme support - Allow to specify jQuery version (1.8.3 and 1.9.0 are bundled) 0.5 (2012-12-17) Added namespaced URLs support Upgraded to Jasmine 1.3.1 - Refactor testing tools: - Rename test/js into js/test and reorganize test resources - Renamed runner_url* into url* on JsTestCase - Handle url_args and url_kwargs on JsTestCase - Renamed JasmineMixin into JasmineSuite - Renamed QUnitMixin into QUnitSuite - Extracted runners initialization into includable templates Added JsFileTestCase to run tests from a static html file without live server Added JsTemplateTestCase to run tests from a rendered template file without live server - Added some settings to filter scope: - Serialized named URLs whitelist: settings.JS_URLS - Serialized named URLs blacklist: settings.JS_URLS_EXCLUDE - Serialized namespaces whitelist: settings.JS_URLS_NAMESPACES - Serialized namespaces blacklist: settings.JS_URLS_NAMESPACES_EXCLUDE - Serialized translations whitelist: settings.JS_I18N_APPS - Serialized translations blacklist: settings.JS_I18N_APPS_EXCLUDE Expose PhantomJS timeout with PhantomJsRunner.timeout attribute 0.4 (2012-12-04) Upgraded to jQuery 1.8.3 Upgraded to Jasmine 1.3.0 Synchronous URLs and context fetch. Use django.utils.termcolors - Class based javascript testing tools: - Factorize JsTestCase common behaviour - Removed JsTestCase.run_jasmine() and added JasmineMixin - Removed JsTestCase.run_qunit() and added QUnitMixin - Extract TapParser into djangojs.tap Only one Django.js test suite Each framework is tested against its own test suite Make jQuery support optionnal into JsTestCase Improved JsTestCase output Drop Python 2.6 support Added API documentation 0.3.2 (2012-11-10) - Optionnal support for Django Absolute 0.3.1 (2012-11-03) - Added JsTestView.django_js to optionnaly include django.js - Added js_init block to runners to templates. 0.3 (2012-11-02) - Improved ready event handling - Removed runners from urls.py - Added documentation - Added ContextJsonView and Django.context fetched from json. - Improved error handling - Added DjangoJsError custom error type 0.2 (2012-10-23) - Refactor template tag initialization - Provides Jasmine and QUnit test views with test discovery (globbing) - Provides Jasmine and QUnit test cases - Added Django.file() - Added {% javascript %}, {% js %} and {% css %} template tags 0.1.3 (2012-10-02) - First public release - Provides django.js with url() method and constants - Provides {% verbatim %} template tag - Patch jQuery.ajax() to handle CSRF tokens - Loads the django javascript catalog for all apps supporting it - Loads the django javascript i18n/l10n tools in the page - Downloads (All Versions): - 128 downloads in the last day - 544 downloads in the last week - 2702 downloads in the last month - Author: Axel Haustant - Download URL: - Keywords: django javascript test url reverse helpers - License: LGPL - Categories - Development Status :: 4 - Beta - Environment :: Web Environment - Framework :: Django - Intended Audience :: Developers - License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL) - Operating System :: OS Independent - Programming Language :: Python - Programming Language :: Python - Programming Language :: Python :: 3 - Topic :: Software Development :: Libraries :: Python Modules - Topic :: System :: Software Distribution - Package Index Owner: noirbizarre - DOAP record: django.js-0.8.1.xml
https://pypi.python.org/pypi/django.js/
CC-MAIN-2015-18
en
refinedweb
18 June 2010 17:12 [Source: ICIS news] By Nigel Davis LONDON (ICIS news)--Chloralkali producers and, one might expect, their principal customers, take it in their stride - or at least seem to. The demand pull on the twin products from the electrolysis of brine is very different. If the paper business is buoyant then producers have to hope that the demand pull for chlorine into construction via the polyvinyl chloride chain is healthy, otherwise they could have problems. Prices of chlorine and caustic will reflect activity in very different parts of the economy. This is because companies look at the output and economics of electrolysis plants in totality. Production economics are based on the electrochemical unit, or ecu, which is the combined value of one tonne of chlorine and 1.1 tonnes of caustic soda. In a similar position, shouldn’t makers of phenol and acetone work harder at taking a similar approach? For every one tonne of phenol produced you are left with 0.62 tonnes of acetone. Phenol demand is growing strongly; but that for acetone is not. The disparity will add colour to acetone in the way it has done in the past and create problems for producers and consumers alike. The way in which prices for the two products are established, will only help exacerbate potential problems. After a year of difficulty - with end-use demand weak and production issues adding to price volatility in phenol/acetone - the world has turned. Polycarbonate demand is much stronger, even though consumers buy fewer CDs and DVDs made from polycarbonate than they once did. Acetone, by contrast, is a much more ‘stable’ product. Some 40% of global output goes into solvents, 24% to make methyl methacrylate, 20% into bisphenol A and 14% into a range of other uses. Mature and declining end use markets for the chemical will have an impact on the entire phenol chain. Larger consumers are recycling more acetone. There are new routes to methyl methacrylate that avoid the chemical - one uses ethylene, another, butylenes. Solvent use is mature growing only with the industrial economy. The ICIS phenol/acetone conference in ?xml:namespace> Phenol use is expected to expand with increased polycarbonate demand in fast-growing This is a problem for phenol/acetone producers not simply because of the disparities that will exist when they have excess acetone to sell. It is a difficult problem because while they are able to negotiate acetone prices based on the price of propylene and market conditions their price for phenol is firmly fixed to benzene. Excess acetone could prove to be a real headache: industry analysts expect there to be a 100,000-200,000 tonne/year global surplus by 2015. Acetone could be recycled by the producer- back to propylene or to cumeme. It could even find its way into the gasoline pool - although it is said to corrode important engine parts. A more interesting use would be conversion to isopropanol (IPA), which is widely used as a solvent, an intermediate for paints and inks, and in lacquers, thinners and household products. IPA is potentially a faster-growing product if you consider solvent use in the growing semiconductor industry in Of great interest is the price spread between it and acetone of some $300/tonne. The cash cost of making IPA, albeit with a secure source of hydrogen, is between $100 (€81) and $150/tonne. Adding value down the chain appears to be a real option for phenol/acetone players and a way in which they might be able to gain a little more control over their markets. An IPA plant fed by acetone was brought on-stream by Novapex in Delegates in ($1 = €0.81) For more on phenol
http://www.icis.com/Articles/2010/06/18/9369388/insight-moving-down-the-chain-to-capture-growth-and.html
CC-MAIN-2015-18
en
refinedweb